Sample records for likelihood ratio method

  1. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

    In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of these are "true zeros," indicating that the drug-adverse event pair cannot occur; they are distinguished from the remaining, modeled zero counts, which simply indicate that the pair has not occurred, or has not been reported, yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the parameters of the zero-inflated Poisson model are obtained using the expectation-maximization (EM) algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite-sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test performs similarly to the Poisson model based likelihood ratio test when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
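
    A minimal sketch (not the authors' code) of the EM iteration implied above for a zero-inflated Poisson model: the function name, the toy data, and the starting values are illustrative, and only the two basic parameters (the mixing proportion of structural zeros and the Poisson mean) are estimated, without covariates or stratification.

```python
import numpy as np

def zip_em(counts, n_iter=500, tol=1e-8):
    """EM estimates of (pi, lam) for a zero-inflated Poisson model:
    P(Y=0) = pi + (1 - pi) * exp(-lam);  P(Y=k) = (1 - pi) * Poisson(k; lam) for k >= 1."""
    y = np.asarray(counts, dtype=float)
    pi, lam = 0.5, max(y.mean(), 1e-6)                     # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability that an observed zero is a structural ("true") zero
        z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: update the mixing proportion and the Poisson mean
        pi_new, lam_new = z.mean(), y.sum() / (len(y) - z.sum())
        if abs(pi_new - pi) + abs(lam_new - lam) < tol:
            return pi_new, lam_new
        pi, lam = pi_new, lam_new
    return pi, lam

# toy data: 30% structural zeros mixed with Poisson(2) counts; estimates should be near (0.3, 2.0)
rng = np.random.default_rng(0)
y = np.where(rng.random(5000) < 0.3, 0, rng.poisson(2.0, 5000))
print(zip_em(y))
```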

  2. Measuring coherence of computer-assisted likelihood ratio methods.

    PubMed

    Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H

    2015-04-01

    Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
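
    Calibration of a set of computed LRs is commonly summarized by the log-likelihood-ratio cost (Cllr); the abstract does not spell out its formula, so the sketch below states the standard definition and applies it to illustrative LR values rather than the fingerprint LRs of the study.

```python
import numpy as np

def cllr(lr_same_source, lr_different_source):
    """Log-likelihood-ratio cost:
    Cllr = 0.5 * ( mean log2(1 + 1/LR_ss) + mean log2(1 + LR_ds) ).
    Lower is better; a system that always outputs LR = 1 scores Cllr = 1."""
    lr_ss = np.asarray(lr_same_source, dtype=float)
    lr_ds = np.asarray(lr_different_source, dtype=float)
    return 0.5 * (np.mean(np.log2(1.0 + 1.0 / lr_ss)) + np.mean(np.log2(1.0 + lr_ds)))

# illustrative LRs: same-source comparisons should yield large LRs, different-source small ones
print(cllr(lr_same_source=[50, 200, 8, 1000], lr_different_source=[0.02, 0.5, 0.001, 0.1]))
```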

  3. Three methods to construct predictive models using logistic regression and likelihood ratios to facilitate adjustment for pretest probability give similar results.

    PubMed

    Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les

    2008-01-01

    To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency between tests. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
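
    A hedged toy comparison of the independence ("naive") Bayes combination of unadjusted LRs with a Spiegelhalter and Knill-Jones style logistic regression on per-test log-LRs, whose fitted slopes act as shrinkage factors. The simulated data, the two correlated tests, and all names are invented for illustration and are not from the obstructive airways disease case study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000
disease = rng.random(n) < 0.3
# two deliberately correlated binary tests, both driven by the same latent severity
severity = np.where(disease, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))
test1 = (severity + rng.normal(0, 0.8, n)) > 0
test2 = (severity + rng.normal(0, 0.8, n)) > 0

def unadjusted_lr(test, disease):
    """Positive and negative likelihood ratios of a single binary test."""
    sens = test[disease].mean()
    spec = (~test[~disease]).mean()
    return {True: sens / (1 - spec), False: (1 - sens) / spec}

lr1, lr2 = unadjusted_lr(test1, disease), unadjusted_lr(test2, disease)
log_lr1 = np.log([lr1[bool(a)] for a in test1])
log_lr2 = np.log([lr2[bool(b)] for b in test2])

# independence Bayes: posttest odds = pretest odds * LR1 * LR2 (over-counts correlated tests)
pretest_odds = disease.mean() / (1 - disease.mean())
odds = pretest_odds * np.exp(log_lr1 + log_lr2)
naive_posttest = odds / (1 + odds)

# Spiegelhalter/Knill-Jones style: regress the outcome on the per-test log-LRs;
# fitted slopes below 1 act as shrinkage factors that discount the dependent tests
X = np.column_stack([log_lr1, log_lr2])
model = LogisticRegression().fit(X, disease)
print("shrinkage factors:", model.coef_.ravel())
print("mean posttest prob, naive vs shrunk:",
      naive_posttest.mean(), model.predict_proba(X)[:, 1].mean())
```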

  4. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

    PubMed

    Shih, Weichung Joe; Li, Gang; Wang, Yining

    2016-03-01

    Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
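
    A small simulation sketch of the fixed-weight ("weighted") statistic mentioned above: the weights are set from the planned stage sizes, the stage-2 sample size is then re-estimated from interim data, and the weighted Z statistic still rejects at the nominal rate under the null. The rule for enlarging stage 2 and all numbers are invented for illustration and are not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2_planned, alpha = 50, 50, 0.025
w1 = np.sqrt(n1 / (n1 + n2_planned))            # weights fixed in advance from the *planned* sizes
w2 = np.sqrt(n2_planned / (n1 + n2_planned))
z_crit = 1.959964                               # unadjusted one-sided 2.5% critical value

n_sim, rejections = 100_000, 0
for _ in range(n_sim):
    x1 = rng.normal(0.0, 1.0, n1)               # stage 1 under H0: mean 0, known variance 1
    z1 = np.sqrt(n1) * x1.mean()
    # data-driven re-estimation: quadruple stage 2 when the interim result looks "promising"
    n2 = n2_planned * (4 if 0.5 < z1 < 2.0 else 1)
    x2 = rng.normal(0.0, 1.0, n2)
    z2 = np.sqrt(n2) * x2.mean()
    rejections += (w1 * z1 + w2 * z2) > z_crit  # the weights ignore the realized n2
print("empirical type I error:", rejections / n_sim)   # stays near 0.025 despite the adaptation
```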

  5. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
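
    A hedged sketch of method 1 above (the classical Monte Carlo p-value), applied here to a generic sign-flip test of symmetry with a skewness statistic; this is not the authors' density-based empirical likelihood ratio statistic nor the vxdbel implementation, only an illustration of the p-value mechanics.

```python
import numpy as np

def monte_carlo_p_value(statistic, data, simulate_null, n_mc=9999, rng=None):
    """Classical Monte Carlo (exact-test) p-value: p = (1 + #{T_null >= T_obs}) / (n_mc + 1)."""
    rng = rng if rng is not None else np.random.default_rng()
    t_obs = statistic(data)
    t_null = np.array([statistic(simulate_null(rng)) for _ in range(n_mc)])
    return (1 + np.sum(t_null >= t_obs)) / (n_mc + 1)

def abs_skewness(x):
    x = np.asarray(x, dtype=float)
    return abs(np.mean((x - x.mean()) ** 3)) / x.std() ** 3

rng = np.random.default_rng(3)
data = rng.exponential(1.0, 60)                        # clearly asymmetric sample
center = np.median(data)
# under symmetry about the center, randomly flipping deviations leaves the distribution unchanged
flip = lambda r: center + r.choice([-1.0, 1.0], data.size) * (data - center)
print(monte_carlo_p_value(abs_skewness, data, flip, rng=rng))   # small p-value: symmetry rejected
```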

  6. Generalizing Terwilliger's likelihood approach: a new score statistic to test for genetic association.

    PubMed

    el Galta, Rachid; Uitte de Willige, Shirley; de Visser, Marieke C H; Helmer, Quinta; Hsu, Li; Houwing-Duistermaat, Jeanine J

    2007-09-24

    In this paper, we propose a one-degree-of-freedom test for association between a candidate gene and a binary trait. This method is a generalization of Terwilliger's likelihood ratio statistic and is especially powerful for the situation of one associated haplotype. As an alternative to the likelihood ratio statistic, we derive a score statistic, which has a tractable expression. For haplotype analysis, we assume that phase is known. By means of a simulation study, we compare the performance of the score statistic to Pearson's chi-square statistic and the likelihood ratio statistic proposed by Terwilliger. We illustrate the method on three candidate genes studied in the Leiden Thrombophilia Study. We conclude that the statistic follows a chi-square distribution under the null hypothesis and that the score statistic is more powerful than Terwilliger's likelihood ratio statistic when the associated haplotype has a frequency between 0.1 and 0.4 and has a small impact on the studied disorder. Compared with Pearson's chi-square statistic, the score statistic has more power when the associated haplotype has a frequency above 0.2 and the number of variants is above five.

  7. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood-ratio approach, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
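
    A simplified univariate sketch of the modelling choice discussed above: the between-source distribution of a single feature is fitted once with a kernel density and once with a Gaussian mixture (scikit-learn's GaussianMixture), and a two-level likelihood ratio for a trace/control pair is evaluated under each by numerical integration. The within-source standard deviation, the data, and all names are assumptions; the paper's models are multivariate and more elaborate.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# background database: one mean per source, with a clearly non-Gaussian between-source spread
source_means = np.concatenate([rng.normal(-2.0, 0.5, 150), rng.normal(1.5, 0.8, 250)])
sigma_within = 0.3                                     # assumed within-source standard deviation

kde = gaussian_kde(source_means)
gmm = GaussianMixture(n_components=2, random_state=0).fit(source_means[:, None])

grid = np.linspace(source_means.min() - 3, source_means.max() + 3, 2001)
dmu = grid[1] - grid[0]
between = {"KDF": kde(grid), "GMM": np.exp(gmm.score_samples(grid[:, None]))}

def likelihood_ratio(x, y, g):
    """Numerator:   integral N(x|mu) N(y|mu) g(mu) dmu                       (same source)
    Denominator: integral N(x|mu) g(mu) dmu * integral N(y|mu) g(mu) dmu  (different sources)"""
    fx = norm.pdf(x, grid, sigma_within)
    fy = norm.pdf(y, grid, sigma_within)
    num = np.sum(fx * fy * g) * dmu
    den = (np.sum(fx * g) * dmu) * (np.sum(fy * g) * dmu)
    return num / den

for name, g in between.items():
    print(name, likelihood_ratio(x=1.4, y=1.6, g=g))
```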

  8. Optimal Methods for Classification of Digitally Modulated Signals

    DTIC Science & Technology

    2013-03-01

    Instead of using a ratio of likelihood functions, the proposed approach uses the Kullback-Leibler (KL) divergence. ... [List of acronyms: ALRT, average likelihood ratio test; BPSK, binary phase shift keying; BPSK-SS, BPSK spread spectrum or CDMA; DKL, Kullback-Leibler information divergence] ... blind demodulation was used to develop classification algorithms for a wider set of signal types. Two methodologies were used: likelihood ratio test ...

  9. A likelihood ratio test for evolutionary rate shifts and functional divergence among proteins

    PubMed Central

    Knudsen, Bjarne; Miyamoto, Michael M.

    2001-01-01

    Changes in protein function can lead to changes in the selection acting on specific residues. This can often be detected as evolutionary rate changes at the sites in question. A maximum-likelihood method for detecting evolutionary rate shifts at specific protein positions is presented. The method determines significance values of the rate differences to give a sound statistical foundation for the conclusions drawn from the analyses. A statistical test for detecting slowly evolving sites is also described. The methods are applied to a set of Myc proteins for the identification of both conserved sites and those with changing evolutionary rates. Those positions with conserved and changing rates are related to the structures and functions of their proteins. The results are compared with an earlier Bayesian method, thereby highlighting the advantages of the new likelihood ratio tests. PMID:11734650

  10. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    A standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.

  11. Likelihood Ratios for Glaucoma Diagnosis Using Spectral Domain Optical Coherence Tomography

    PubMed Central

    Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M.; Weinreb, Robert N.; Medeiros, Felipe A.

    2014-01-01

    Purpose To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral domain optical coherence tomography (spectral-domain OCT). Design Observational cohort study. Methods 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the Receiver Operating Characteristic (ROC) curve. Results Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive LRs, i.e., LRs greater than 1, whereas RNFL thickness values higher than 86 μm were associated with negative LRs, i.e., LRs smaller than 1. A modified Fagan nomogram was provided to assist calculation of post-test probability of disease from the calculated likelihood ratios and pretest probability of disease. Conclusion The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision-making. PMID:23972303
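
    The LR for a continuous test value equals the ratio of that value's densities in the diseased and healthy groups, which is also the slope (tangent) of the ROC curve at the corresponding operating point. The sketch below estimates such LRs from simulated, hypothetical RNFL-like distributions; the means and spreads are assumptions, not the study data, so the LR = 1 crossover will not match the 86 μm value reported above.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
# hypothetical average RNFL thickness values in micrometers (not the study's distributions)
glaucoma = rng.normal(70, 12, 500)
healthy = rng.normal(97, 10, 500)
f_glaucoma, f_healthy = gaussian_kde(glaucoma), gaussian_kde(healthy)

def likelihood_ratio(thickness):
    """LR(t) = density of t among glaucoma eyes / density among healthy eyes,
    i.e. the tangent to the ROC curve at the threshold t."""
    return float(f_glaucoma(thickness) / f_healthy(thickness))

for t in (60, 75, 86, 95, 110):
    print(t, likelihood_ratio(t))   # LR well above 1 for thin RNFL, well below 1 for thick RNFL
```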

  12. Mapping Quantitative Traits in Unselected Families: Algorithms and Examples

    PubMed Central

    Dupuis, Josée; Shi, Jianxin; Manning, Alisa K.; Benjamin, Emelia J.; Meigs, James B.; Cupples, L. Adrienne; Siegmund, David

    2009-01-01

    Linkage analysis has been widely used to identify, from family data, genetic variants influencing quantitative traits. Common approaches have both strengths and limitations. Likelihood ratio tests typically computed in variance component analysis can accommodate large families but are highly sensitive to departures from normality assumptions. Regression-based approaches are more robust, but their use has primarily been restricted to nuclear families. In this paper, we develop methods for mapping quantitative traits in moderately large pedigrees. Our methods are based on the score statistic, which, in contrast to the likelihood ratio statistic, can use nonparametric estimators of variability to achieve robustness of the false positive rate against departures from the hypothesized phenotypic model. Because the score statistic is easier to calculate than the likelihood ratio statistic, our basic mapping methods utilize relatively simple computer code that performs statistical analysis on output from any program that computes estimates of identity-by-descent. This simplicity also permits development and evaluation of methods to deal with multivariate and ordinal phenotypes, and with gene-gene and gene-environment interaction. We demonstrate our methods on simulated data and on fasting insulin, a quantitative trait measured in the Framingham Heart Study. PMID:19278016

  13. Approximate likelihood approaches for detecting the influence of primordial gravitational waves in cosmic microwave background polarization

    NASA Astrophysics Data System (ADS)

    Pan, Zhen; Anderes, Ethan; Knox, Lloyd

    2018-05-01

    One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel-space, all-order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all-order lensing and pixel-space anomalies. Its tractability relies on a crucial factorization of the pixel-space covariance matrix of the polarization observations, which allows one to compute the full Gaussian approximate likelihood profile, as a function of r, at the same computational cost as a single likelihood evaluation.

  14. A scaling transformation for classifier output based on likelihood ratio: Applications to a CAD workstation for diagnosis of breast cancer

    PubMed Central

    Horsch, Karla; Pesce, Lorenzo L.; Giger, Maryellen L.; Metz, Charles E.; Jiang, Yulei

    2012-01-01

    Purpose: The authors developed scaling methods that monotonically transform the output of one classifier to the “scale” of another. Such transformations affect the distribution of classifier output while leaving the ROC curve unchanged. In particular, they investigated transformations between radiologists and computer classifiers, with the goal of addressing the problem of comparing and interpreting case-specific values of output from two classifiers. Methods: Using both simulated and radiologists’ rating data of breast imaging cases, the authors investigated a likelihood-ratio-scaling transformation, based on “matching” classifier likelihood ratios. For comparison, three other scaling transformations were investigated that were based on matching classifier true positive fraction, false positive fraction, or cumulative distribution function, respectively. The authors explored modifying the computer output to reflect the scale of the radiologist, as well as modifying the radiologist’s ratings to reflect the scale of the computer. They also evaluated how dataset size affects the transformations. Results: When ROC curves of two classifiers differed substantially, the four transformations were found to be quite different. The likelihood-ratio scaling transformation was found to vary widely from radiologist to radiologist. Similar results were found for the other transformations. Our simulations explored the effect of database sizes on the accuracy of the estimation of our scaling transformations. Conclusions: The likelihood-ratio-scaling transformation that the authors have developed and evaluated was shown to be capable of transforming computer and radiologist outputs to a common scale reliably, thereby allowing the comparison of the computer and radiologist outputs on the basis of a clinically relevant statistic. PMID:22559651

  15. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is a ratio between signal amplitude and noise variance. Accurately estimating this ratio has been shown to yield as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. In the Pilot-Guided estimation method, the maximum likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the estimate of the noise variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain control circuitry, but does not have the real-time computation complexity of the Blind estimation method. Each of these methods can be used to provide an accurate estimation of the combining ratio, and the final selection of the estimation method depends on other design constraints.
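
    A sketch, following the verbal description above, of the pilot-guided estimator and of the blind estimator solved by binary search, for BPSK symbols over an AWGN channel. The frame sizes, the normalization details, and the convention that the log-likelihood-ratio scale factor is 2·amplitude/variance are assumptions, not taken from the presentation.

```python
import numpy as np

rng = np.random.default_rng(6)
amp_true, noise_var = 1.0, 0.7
asm = rng.choice([-1.0, 1.0], 64)                          # known attached sync marker symbols
payload = rng.choice([-1.0, 1.0], 4096)                    # unknown data symbols
rx_asm = amp_true * asm + rng.normal(0, np.sqrt(noise_var), asm.size)
rx_data = amp_true * payload + rng.normal(0, np.sqrt(noise_var), payload.size)

# --- pilot-guided estimate (uses the known ASM) ---
amp_hat = np.mean(rx_asm * asm)                            # mean inner product with the known sequence
var_hat = np.mean(rx_asm ** 2) - amp_hat ** 2              # mean squared received minus amplitude^2
print("pilot-guided:", amp_hat, var_hat, 2 * amp_hat / var_hat)

# --- blind estimate (one frame of payload, no pilots) ---
rms = np.sqrt(np.mean(rx_data ** 2))
r = rx_data / rms                                          # normalize so that a^2 + sigma^2 = 1
def gap(a):
    # ML condition for BPSK in AWGN: a = mean( r * tanh(a * r / (1 - a^2)) )
    return np.mean(r * np.tanh(a * r / (1.0 - a ** 2))) - a

lo, hi = 1e-3, 1.0 - 1e-3                                  # binary search between 0 and 1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gap(mid) > 0 else (lo, mid)
amp_blind, var_blind = lo * rms, (1.0 - lo ** 2) * rms ** 2   # undo the normalization
print("blind:", amp_blind, var_blind, 2 * amp_blind / var_blind)
```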

  16. Order-restricted inference for means with missing values.

    PubMed

    Wang, Heng; Zhong, Ping-Shou

    2017-09-01

    Missing values appear very often in many applications, but the problem of missing values has not received much attention in testing order-restricted alternatives. Under the missing at random (MAR) assumption, we impute the missing values nonparametrically using kernel regression. For data with imputation, the classical likelihood ratio test designed for testing the order-restricted means is no longer applicable since the likelihood does not exist. This article proposes a novel method for constructing test statistics for assessing means with an increasing order or a decreasing order based on jackknife empirical likelihood (JEL) ratio. It is shown that the JEL ratio statistic evaluated under the null hypothesis converges to a chi-bar-square distribution, whose weights depend on missing probabilities and nonparametric imputation. Simulation study shows that the proposed test performs well under various missing scenarios and is robust for normally and nonnormally distributed data. The proposed method is applied to an Alzheimer's disease neuroimaging initiative data set for finding a biomarker for the diagnosis of the Alzheimer's disease. © 2017, The International Biometric Society.

  17. Subjective global assessment of nutritional status in children.

    PubMed

    Mahdavi, Aida Malek; Ostadrahimi, Alireza; Safaiyan, Abdolrasool

    2010-10-01

    This study aimed to compare subjective and objective nutritional assessments and to analyse the performance of subjective global assessment (SGA) of nutritional status in diagnosing undernutrition in paediatric patients. One hundred and forty children (aged 2-12 years) hospitalized consecutively in Tabriz Paediatric Hospital from June 2008 to August 2008 underwent subjective assessment using the SGA questionnaire and objective assessment, including anthropometric and biochemical measurements. Agreement between the two assessment methods was analysed by the kappa (κ) statistic. Statistical indicators (sensitivity, specificity, predictive values, error rates, accuracy, powers, likelihood ratios and odds ratio) comparing SGA with the objective assessment method were determined. The overall prevalence of undernutrition according to the SGA (70.7%) was higher than that by objective assessment of nutritional status (48.5%). Agreement between the two evaluation methods was only fair to moderate (κ = 0.336, P < 0.001). The sensitivity, specificity, positive and negative predictive values of the SGA method for screening undernutrition in this population were 88.235%, 45.833%, 60.606% and 80.487%, respectively. The accuracy, positive power and negative power of the SGA method were 66.428%, 56.074% and 41.25%, respectively. The positive likelihood ratio, negative likelihood ratio and odds ratio of the SGA method were 1.628, 0.256 and 6.359, respectively. Our findings indicated that in assessing the nutritional status of children, there is not a good level of agreement between SGA and objective nutritional assessment. In addition, SGA is a highly sensitive tool for assessing nutritional status and could identify children at risk of developing undernutrition. © 2009 Blackwell Publishing Ltd.
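
    All of the indices quoted above follow from a single 2x2 table of SGA against the objective reference. The sketch recomputes them from cell counts reconstructed approximately from the reported percentages (TP=60, FP=39, FN=8, TN=33); these reconstructed counts are an inference for illustration, not the study's published raw table.

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Standard 2x2 diagnostic accuracy indices."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "LR+": sens / (1 - spec),
        "LR-": (1 - sens) / spec,
        "odds ratio": (tp * tn) / (fp * fn),
    }

# counts reconstructed (approximately) from the percentages reported above
print(diagnostic_indices(tp=60, fp=39, fn=8, tn=33))
```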

  18. Change-in-ratio estimators for populations with more than two subclasses

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1991-01-01

    Change-in-ratio methods have been developed to estimate the size of populations with two or three population subclasses. Most of these methods require the often unreasonable assumption of equal sampling probabilities for individuals in all subclasses. This paper presents new models based on the weaker assumption that ratios of sampling probabilities are constant over time for populations with three or more subclasses. Estimation under these models requires that a value be assumed for one of these ratios when there are two samples. Explicit expressions are given for the maximum likelihood estimators under models for two samples with three or more subclasses and for three samples with two subclasses. A numerical method using readily available statistical software is described for obtaining the estimators and their standard errors under all of the models. Likelihood ratio tests that can be used in model selection are discussed. Emphasis is on the two-sample, three-subclass models for which Monte-Carlo simulation results and an illustrative example are presented.

  19. Mean cerebral blood volume is an effective diagnostic index of recurrent and radiation injury in glioma patients: A meta-analysis of diagnostic test.

    PubMed

    Li, Zhanzhan; Zhou, Qin; Li, Yanyan; Yan, Shipeng; Fu, Jun; Huang, Xinqiong; Shen, Liangfang

    2017-02-28

    We conducted a meta-analysis to evaluate the diagnostic value of mean cerebral blood volume for distinguishing recurrence from radiation injury in glioma patients. We performed systematic electronic searches for eligible studies up to August 8, 2016. Bivariate mixed effects models were used to estimate the combined sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio and their 95% confidence intervals (CIs). Fifteen studies with a total of 576 participants were enrolled. The pooled sensitivity and specificity were 0.88 (95% CI: 0.82-0.92) and 0.85 (95% CI: 0.68-0.93). The pooled positive likelihood ratio was 5.73 (95% CI: 2.56-12.81), the negative likelihood ratio was 0.15 (95% CI: 0.10-0.22), and the diagnostic odds ratio was 39.34 (95% CI: 13.96-110.84). The area under the summary receiver operating characteristic curve was 0.91 (95% CI: 0.88-0.93). However, the Deeks funnel plot suggested that publication bias may exist (t=2.30, P=0.039). Mean cerebral blood volume measurement appears to be very sensitive and highly specific for differentiating recurrence from radiation injury in glioma patients. The results should be interpreted with caution because of the potential bias.

  20. Predicting Rotator Cuff Tears Using Data Mining and Bayesian Likelihood Ratios

    PubMed Central

    Lu, Hsueh-Yi; Huang, Chen-Yuan; Su, Chwen-Tzeng; Lin, Chen-Chiang

    2014-01-01

    Objectives Rotator cuff tear is a common cause of shoulder diseases. Correct diagnosis of rotator cuff tears can save patients from further invasive, costly and painful tests. This study used predictive data mining and Bayesian theory to improve the accuracy of diagnosing rotator cuff tears by clinical examination alone. Methods In this retrospective study, 169 patients who had a preliminary diagnosis of rotator cuff tear on the basis of clinical evaluation followed by confirmatory MRI between 2007 and 2011 were identified. MRI was used as a reference standard to classify rotator cuff tears. The predictor variables were the clinical assessment results, which consisted of 16 attributes. This study employed 2 data mining methods (ANN and the decision tree) and a statistical method (logistic regression) to classify the rotator cuff diagnosis into "tear" and "no tear" groups. Likelihood ratio and Bayesian theory were applied to estimate the probability of rotator cuff tears based on the results of the prediction models. Results Our proposed data mining procedures outperformed the classic statistical method. The correction rate, sensitivity, specificity and area under the ROC curve for predicting a rotator cuff tear were statistically better in the ANN and decision tree models compared to logistic regression. Based on likelihood ratios derived from our prediction models, Fagan's nomogram could be constructed to assess the probability of a patient who has a rotator cuff tear using a pretest probability and a prediction result (tear or no tear). Conclusions Our predictive data mining models, combined with likelihood ratios and Bayesian theory, appear to be good tools to classify rotator cuff tears as well as determine the probability of the presence of the disease to enhance diagnostic decision making for rotator cuff tears. PMID:24733553
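
    The Bayesian step described above (combining a pretest probability with a model-derived likelihood ratio, as on Fagan's nomogram) is plain odds arithmetic; the pretest probability and LR in this sketch are hypothetical values, not the study's estimates.

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Fagan-nomogram arithmetic: posttest odds = pretest odds * likelihood ratio."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# hypothetical example: 40% pretest probability of a tear and a "tear" prediction with LR+ = 6
print(posttest_probability(0.40, 6.0))    # 0.80
```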

  1. Using DNA fingerprints to infer familial relationships within NHANES III households

    PubMed Central

    Katki, Hormuzd A.; Sanders, Christopher L.; Graubard, Barry I.; Bergen, Andrew W.

    2009-01-01

    Developing, targeting, and evaluating genomic strategies for population-based disease prevention require population-based data. In response to this urgent need, genotyping has been conducted within the Third National Health and Nutrition Examination Survey (NHANES III), the nationally representative household-interview health survey in the U.S. However, before these genetic analyses can occur, family relationships within households must be accurately ascertained. Unfortunately, reported family relationships within NHANES III households based on questionnaire data are incomplete and inconclusive with regard to the actual biological relatedness of family members. We inferred family relationships within households using DNA fingerprints (Identifiler®) that contain the DNA loci used by law enforcement agencies for forensic identification of individuals. However, the performance of these loci for relationship inference is not well understood. We evaluated two competing statistical methods for relationship inference on pairs of household members: an exact likelihood ratio relying on allele frequencies versus an identical-by-state (IBS) likelihood ratio that only requires matching alleles. We modified these methods to account for genotyping errors and population substructure. The two methods usually agree on the rankings of the most likely relationships. However, the IBS method underestimates the likelihood ratio by not accounting for the informativeness of matching rare alleles. The likelihood ratio is sensitive to estimates of population substructure, and parent-child relationships are sensitive to the specified genotyping error rate. These loci were unable to distinguish second-degree relationships and cousins from being unrelated. The genetic data are also useful for verifying reported relationships and identifying data quality issues. An important by-product is the first set of explicitly nationally representative estimates of allele frequencies at these ubiquitous forensic loci. PMID:20664713

  2. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
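
    A toy static sketch of the likelihood ratio (score-function) sensitivity identity that such estimators build on, d/dθ E[f(X)] = E[f(X) · ∂θ log p(X; θ)], together with a centered variant. A Poisson observable stands in for the stochastic dynamics, so this illustrates only the basic identity, not the authors' covariance formulation or steady-state setting.

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 2.0
x = rng.poisson(lam, 200_000).astype(float)

f = x ** 2                                   # observable f(X) = X^2, with E[f] = lam + lam^2
score = x / lam - 1.0                        # d/dlam log Poisson(x; lam); E[score] = 0
plain = np.mean(f * score)                   # likelihood ratio (score-function) estimate of dE[f]/dlam
centered = np.mean((f - f.mean()) * score)   # centered variant: consistent, typically lower variance
print("estimates:", plain, centered, "exact:", 1.0 + 2.0 * lam)
```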

  3. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-01

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  4. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  5. An Empirical Comparison of DDF Detection Methods for Understanding the Causes of DIF in Multiple-Choice Items

    ERIC Educational Resources Information Center

    Suh, Youngsuk; Talley, Anna E.

    2015-01-01

    This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…

  6. Likelihood ratios for glaucoma diagnosis using spectral-domain optical coherence tomography.

    PubMed

    Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M; Weinreb, Robert N; Medeiros, Felipe A

    2013-11-01

    To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral-domain optical coherence tomography (spectral-domain OCT). Observational cohort study. A total of 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the receiver operating characteristic (ROC) curve. Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive likelihood ratios (ie, likelihood ratios greater than 1), whereas RNFL thickness values higher than 86 μm were associated with negative likelihood ratios (ie, likelihood ratios smaller than 1). A modified Fagan nomogram was provided to assist calculation of posttest probability of disease from the calculated likelihood ratios and pretest probability of disease. The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision making. Copyright © 2013. Published by Elsevier Inc.

  7. The diagnostic performance of perfusion MRI for differentiating glioma recurrence from pseudoprogression: A meta-analysis.

    PubMed

    Wan, Bing; Wang, Siqi; Tu, Mengqi; Wu, Bo; Han, Ping; Xu, Haibo

    2017-03-01

    The purpose of this meta-analysis was to evaluate the diagnostic accuracy of perfusion magnetic resonance imaging (MRI) as a method for differentiating glioma recurrence from pseudoprogression. The PubMed, Embase, Cochrane Library, and Chinese Biomedical databases were searched comprehensively for relevant studies up to August 3, 2016 according to specific inclusion and exclusion criteria. The quality of the included studies was assessed according to the quality assessment of diagnostic accuracy studies (QUADAS-2). After performing heterogeneity and threshold effect tests, pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were calculated. Publication bias was evaluated visually by a funnel plot and quantitatively using Deek funnel plot asymmetry test. The area under the summary receiver operating characteristic curve was calculated to demonstrate the diagnostic performance of perfusion MRI. Eleven studies covering 416 patients and 418 lesions were included in this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.88 (95% confidence interval [CI] 0.84-0.92), 0.77 (95% CI 0.69-0.84), 3.93 (95% CI 2.83-5.46), 0.16 (95% CI 0.11-0.22), and 27.17 (95% CI 14.96-49.35), respectively. The area under the summary receiver operating characteristic curve was 0.8899. There was no notable publication bias. Sensitivity analysis showed that the meta-analysis results were stable and credible. While perfusion MRI is not the ideal diagnostic method for differentiating glioma recurrence from pseudoprogression, it could improve diagnostic accuracy. Therefore, further research on combining perfusion MRI with other imaging modalities is warranted.

  8. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  9. A Note on Three Statistical Tests in the Logistic Regression DIF Procedure

    ERIC Educational Resources Information Center

    Paek, Insu

    2012-01-01

    Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…

  10. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).

  11. Combining evidence using likelihood ratios in writer verification

    NASA Astrophysics Data System (ADS)

    Srihari, Sargur; Kovalenko, Dimitry; Tang, Yi; Ball, Gregory

    2013-01-01

    Forensic identification is the task of determining whether or not observed evidence arose from a known source. It involves determining a likelihood ratio (LR) - the ratio of the joint probability of the evidence and source under the identification hypothesis (that the evidence came from the source) and under the exclusion hypothesis (that the evidence did not arise from the source). In LR-based decision methods, particularly handwriting comparison, a variable number of pieces of evidence is used. A decision based on many pieces of evidence can result in nearly the same LR as one based on few pieces of evidence. We consider methods for distinguishing between such situations. One of these is to provide confidence intervals together with the decisions and another is to combine the inputs using weights. We propose a new method that generalizes the Bayesian approach and uses an explicitly defined discount function. Empirical evaluation with several data sets, including synthetically generated ones and handwriting comparison, shows greater flexibility of the proposed method.

  12. Value of contrast-enhanced ultrasound in differential diagnosis of solid lesions of pancreas (SLP): A systematic review and a meta-analysis.

    PubMed

    Ran, Li; Zhao, Wenli; Zhao, Ye; Bu, Huaien

    2017-07-01

    Contrast-enhanced ultrasound (CEUS) is considered a novel method for diagnosing pancreatic cancer, but currently there is no conclusive evidence of its accuracy. We aimed to evaluate the diagnostic accuracy of CEUS in discriminating pancreatic carcinoma from other pancreatic lesions. Relevant studies were selected from the PubMed, Cochrane Library, Elsevier, CNKI, VIP, and WANFANG databases dating from January 2006 to May 2017. The following terms were used as keywords: "pancreatic cancer" OR "pancreatic carcinoma," "contrast-enhanced ultrasonography" OR "contrast-enhanced ultrasound" OR "CEUS," and "diagnosis." The selection criteria were as follows: pancreatic carcinomas diagnosed by CEUS, with surgical pathology or biopsy as the main reference standard (where a clinical diagnosis was involved, particular criteria were emphasized); SonoVue or Levovist as the contrast agent; true positive, false positive, false negative, and true negative rates obtained or calculable to construct the 2 × 2 contingency table; English or Chinese articles; and at least 20 patients enrolled in each group. The Quality Assessment for Studies of Diagnostic Accuracy was employed to evaluate the quality of articles. Pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, summary receiver-operating characteristic curves, and the area under the curve were evaluated to estimate the overall diagnostic efficiency. Pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with 95% confidence intervals (CIs) were calculated with fixed-effect models. Eight of 184 records were eligible for the meta-analysis after independent scrutiny by 2 reviewers. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.86 (95% CI 0.81-0.90), 0.75 (95% CI 0.68-0.82), 3.56 (95% CI 2.64-4.78), 0.19 (95% CI 0.13-0.27), and 22.260 (95% CI 8.980-55.177), respectively. The area under the SROC curve was 0.9088. CEUS has satisfactory pooled sensitivity and specificity for discriminating pancreatic cancer from other pancreatic lesions.

  13. Handwriting individualization using distance and rarity

    NASA Astrophysics Data System (ADS)

    Tang, Yi; Srihari, Sargur; Srinivasan, Harish

    2012-01-01

    Forensic individualization is the task of associating observed evidence with a specific source. The likelihood ratio (LR) is a quantitative measure that expresses the degree of uncertainty in individualization, where the numerator represents the likelihood that the evidence corresponds to the known and the denominator the likelihood that it does not correspond to the known. Since the number of parameters needed to compute the LR is exponential with the number of feature measurements, a commonly used simplification is the use of likelihoods based on distance (or similarity) given the two alternative hypotheses. This paper proposes an intermediate method which decomposes the LR as the product of two factors, one based on distance and the other on rarity. It was evaluated using a data set of handwriting samples, by determining whether two writing samples were written by the same/different writer(s). The accuracy of the distance and rarity method, as measured by error rates, is significantly better than the distance method.

  14. The Sequential Probability Ratio Test and Binary Item Response Models

    ERIC Educational Resources Information Center

    Nydick, Steven W.

    2014-01-01

    The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
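
    A minimal sketch of the SPRT stopping rule applied to a stream of binary (correct/incorrect) responses, with Wald's thresholds derived from nominal error rates; the classification bound, the error rates, and the simple Bernoulli response model are illustrative assumptions rather than a specific IRT formulation.

```python
import numpy as np

def sprt(responses, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for Bernoulli responses.
    H0: success probability p0 (below the classification bound); H1: probability p1 (above it)."""
    upper = np.log((1 - beta) / alpha)        # cross upward  -> decide "above the bound"
    lower = np.log(beta / (1 - alpha))        # cross downward -> decide "below the bound"
    log_lr = 0.0
    for i, correct in enumerate(responses, start=1):
        log_lr += np.log(p1 / p0) if correct else np.log((1 - p1) / (1 - p0))
        if log_lr >= upper:
            return "above bound", i           # classified after i items
        if log_lr <= lower:
            return "below bound", i
    return "undecided", len(responses)

rng = np.random.default_rng(8)
print(sprt(rng.random(200) < 0.75, p0=0.5, p1=0.7))   # examinee answering ~75% of items correctly
```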

  15. A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grana, Justin; Wolpert, David; Neil, Joshua

    The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviates significantly from the baseline model. But many anomaly detectors do not take into account plausible attacker behavior. As a result, anomaly detectors are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present Receiver Operating Characteristic (ROC) curves for various network parameterizations that show for any rate of true positives, the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.
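
    A toy sketch of the Monte Carlo integration step described above: the "attacker" likelihood averages the data likelihood over sampled compromise times, and the log likelihood ratio compares it with a "never compromised" baseline. The single-host setting, the Poisson event-count model, and the rates are assumptions for illustration, not the paper's network model.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(9)
T = 48                                            # hourly event counts on one host
lam_normal, lam_attack = 5.0, 9.0                 # illustrative rates
counts = np.concatenate([rng.poisson(lam_normal, 30), rng.poisson(lam_attack, T - 30)])

def loglik(counts, tau):
    """Log-likelihood when the host is compromised from hour tau onward (tau = T means never)."""
    rates = np.where(np.arange(counts.size) < tau, lam_normal, lam_attack)
    return poisson.logpmf(counts, rates).sum()

# Monte Carlo integration over the unknown compromise time (uniform prior over hours)
taus = rng.integers(0, T, 2000)
log_numerator = np.logaddexp.reduce([loglik(counts, t) for t in taus]) - np.log(taus.size)
log_denominator = loglik(counts, T)               # baseline: never compromised
print("log likelihood ratio:", log_numerator - log_denominator)   # large when an attack is present
```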

  16. A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks

    DOE PAGES

    Grana, Justin; Wolpert, David; Neil, Joshua; ...

    2016-03-11

    The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviates significantly from the baseline model. But many anomaly detectors do not take into account plausible attacker behavior. As a result, anomaly detectors are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present Receiver Operating Characteristic (ROC) curves for various network parameterizations that show for any rate of true positives, the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.

  17. Identifying common donors in DNA mixtures, with applications to database searches.

    PubMed

    Slooten, K

    2017-01-01

    Several methods exist to compute the likelihood ratio LR(M, g) evaluating the possible contribution of a person of interest with genotype g to a mixed trace M. In this paper we generalize this LR to a likelihood ratio LR(M1, M2) involving two possibly mixed traces M1 and M2, where the question is whether there is a donor in common to both traces. If one of the traces is in fact a single genotype, then this likelihood ratio reduces to the usual LR(M, g). We explain how our method is conceptually a logical consequence of the fact that LR calculations of the form LR(M, g) can be equivalently regarded as a probabilistic deconvolution of the mixture. Based on simulated data, and using a semi-continuous mixture evaluation model, we derive ROC curves of our method applied to various types of mixtures. From these data we conclude that searches for a common donor are often feasible in the sense that a very small false positive rate can be combined with a high probability of detecting a common donor if there is one. We also show how database searches comparing all traces to each other can be carried out efficiently, as illustrated by the application of the method to the mixed traces in the Dutch DNA database. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. [Accuracy of three methods for the rapid diagnosis of oral candidiasis].

    PubMed

    Lyu, X; Zhao, C; Yan, Z M; Hua, H

    2016-10-09

    Objective: To explore a simple, rapid and efficient method for the diagnosis of oral candidiasis in clinical practice. Methods: A total of 124 consecutive patients with suspected oral candidiasis were enrolled from the Department of Oral Medicine, Peking University School and Hospital of Stomatology, Beijing, China. Exfoliated cells of the oral mucosa and saliva (or concentrated oral rinse) obtained from all participants were tested by three rapid smear methods (10% KOH smear, Gram-stained smear, Congo red stained smear). The diagnostic efficacy (sensitivity, specificity, Youden's index, likelihood ratio, consistency, predictive value and area under the curve (AUC)) of each of the above-mentioned three methods was assessed by comparing the results with the gold standard (combination of clinical diagnosis, laboratory diagnosis and expert opinion). Results: The Gram-stained smear of saliva (or concentrated oral rinse) demonstrated the highest sensitivity (82.3%). The 10% KOH smear of exfoliated cells showed the highest specificity (93.5%). The Congo red stained smear of saliva (or concentrated oral rinse) displayed the highest diagnostic efficacy (79.0% sensitivity, 80.6% specificity, 0.60 Youden's index, 4.08 positive likelihood ratio, 0.26 negative likelihood ratio, 80% consistency, 80.3% positive predictive value, 79.4% negative predictive value and 0.80 AUC). Conclusions: The Congo red stained smear of saliva (or concentrated oral rinse) could be used as a point-of-care tool for the rapid diagnosis of oral candidiasis in clinical practice. Trial registration: Chinese Clinical Trial Registry, ChiCTR-DDD-16008118.

  19. Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.

    PubMed

    Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier

    2017-02-01

    The data to which the authors refer throughout this article are likelihood ratios (LR) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. These data constitute a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can also be used as a baseline for comparing fingermark evidence in the same minutiae configuration as presented in Meuwly, Ramos and Haraksim [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports of forensic methods, according to [1]. Alongside the data, a justification and motivation for the use of the methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprint data in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validating the LR methods, which are used to calculate the LR values from the data and the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared. However, these images do not constitute the core data for the validation; that role is played by the LRs, which are shared.

  20. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231

  1. Using latent class analysis to model prescription medications in the measurement of falling among a community elderly population

    PubMed Central

    2013-01-01

    Background Falls among the elderly are a major public health concern. Therefore, the possibility of a modeling technique which could better estimate fall probability is both timely and needed. Using biomedical, pharmacological and demographic variables as predictors, latent class analysis (LCA) is demonstrated as a tool for the prediction of falls among community dwelling elderly. Methods Using a retrospective data set, a two-step LCA modeling approach was employed. First, we looked for the optimal number of latent classes for the seven medical indicators, along with the patients’ prescription medication and three covariates (age, gender, and number of medications). Second, the appropriate latent class structure, with the covariates, was modeled on the distal outcome (fall/no fall). The default estimator was maximum likelihood with robust standard errors. The Pearson chi-square, likelihood ratio chi-square, BIC, Lo-Mendell-Rubin adjusted likelihood ratio test and the bootstrap likelihood ratio test were used for model comparisons. Results A review of the model fit indices with covariates shows that a six-class solution was preferred. The predictive probability for latent classes ranged from 84% to 97%. Entropy, a measure of classification accuracy, was good at 90%. Specific prescription medications were found to strongly influence group membership. Conclusions In conclusion, the LCA method was effective at finding relevant subgroups within a heterogeneous at-risk population for falling. This study demonstrated that LCA offers researchers a valuable tool to model medical data. PMID:23705639

  2. Likelihood Ratio Tests for Special Rasch Models

    ERIC Educational Resources Information Center

    Hessen, David J.

    2010-01-01

    In this article, a general class of special Rasch models for dichotomous item scores is considered. Although Andersen's likelihood ratio test can be used to test whether a Rasch model fits to the data, the test does not differentiate between special Rasch models. Therefore, in this article, new likelihood ratio tests are proposed for testing…

  3. Exclusion probabilities and likelihood ratios with applications to kinship problems.

    PubMed

    Slooten, Klaas-Jan; Egeland, Thore

    2014-05-01

    In forensic genetics, DNA profiles are compared in order to make inferences, paternity cases being a standard example. The statistical evidence can be summarized and reported in several ways. For example, in a paternity case, the likelihood ratio (LR) and the probability of not excluding a random man as father (RMNE) are two common summary statistics. There has been a long debate on the merits of the two statistics, also in the context of DNA mixture interpretation, and no general consensus has been reached. In this paper, we show that the RMNE is a certain weighted average of inverse likelihood ratios. This is true in any forensic context. We show that the likelihood ratio in favor of the correct hypothesis is, in expectation, bigger than the reciprocal of the RMNE probability. However, with the exception of pathological cases, it is also possible to obtain smaller likelihood ratios. We illustrate this result for paternity cases. Moreover, some theoretical properties of the likelihood ratio for a large class of general pairwise kinship cases, including expected value and variance, are derived. The practical implications of the findings are discussed and exemplified.
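
    In the notation assumed here (LR for the likelihood ratio and P(RMNE) for the probability of not excluding a random man), the central inequality described above can be stated compactly, with the expectation taken under the correct hypothesis H:

        E[\,\mathrm{LR} \mid H \text{ true}\,] \;>\; \frac{1}{P(\mathrm{RMNE})}.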

  4. Inferring relationships between pairs of individuals from locus heterozygosities

    PubMed Central

    Presciuttini, Silvano; Toni, Chiara; Tempestini, Elena; Verdiani, Simonetta; Casarino, Lucia; Spinetti, Isabella; Stefano, Francesco De; Domenici, Ranieri; Bailey-Wilson, Joan E

    2002-01-01

    Background The traditional exact method for inferring relationships between individuals from genetic data is not easily applicable in all situations that may be encountered in several fields of applied genetics. This study describes an approach that gives affordable results and is easily applicable; it is based on the probabilities that two individuals share 0, 1 or both alleles at a locus identical by state. Results We show that these probabilities (zi) depend on locus heterozygosity (H), and are scarcely affected by variation of the distribution of allele frequencies. This allows us to obtain empirical curves relating zi's to H for a series of common relationships, so that the likelihood ratio of a pair of relationships between any two individuals, given their genotypes at a locus, is a function of a single parameter, H. Application to large samples of mother-child and full-sib pairs shows that the statistical power of this method to infer the correct relationship is not much lower than the exact method. Analysis of a large database of STR data proves that locus heterozygosity does not vary significantly among Caucasian populations, apart from special cases, so that the likelihood ratio of the more common relationships between pairs of individuals may be obtained by looking at tabulated zi values. Conclusions A simple method is provided, which may be used by any scientist with the help of a calculator or a spreadsheet to compute the likelihood ratios of common alternative relationships between pairs of individuals. PMID:12441003

  5. Likelihood Ratios for the Emergency Physician.

    PubMed

    Peng, Paul; Coyle, Andrew

    2018-04-26

    The concept of likelihood ratios was introduced more than 40 years ago, yet this powerful metric has still not seen wider application or discussion in the medical decision-making process. There is concern that clinicians-in-training are still being taught an over-simplified approach to diagnostic test performance and have limited exposure to likelihood ratios. Even those familiar with likelihood ratios might perceive them as mathematically cumbersome in application, if not difficult to determine for a particular disease process. This article is protected by copyright. All rights reserved.
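
    The standard route by which likelihood ratios enter bedside reasoning is the odds form of Bayes' theorem; a hypothetical worked example (numbers chosen only for illustration) is:

        \text{posttest odds} = \text{pretest odds} \times \mathrm{LR},
        \qquad\text{e.g.}\quad
        \frac{0.20}{0.80} \times 8 = 2.0
        \;\Rightarrow\;
        \text{posttest probability} = \frac{2.0}{1 + 2.0} \approx 0.67 .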

  6. Preoperative Serum Thyrotropin to Thyroglobulin Ratio Is Effective for Thyroid Nodule Evaluation in Euthyroid Patients.

    PubMed

    Wang, Lina; Li, Hao; Yang, Zhongyuan; Guo, Zhuming; Zhang, Quan

    2015-07-01

    This study was designed to assess the efficiency of the serum thyrotropin to thyroglobulin ratio for thyroid nodule evaluation in euthyroid patients. Cross-sectional study. Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China. Retrospective analysis was performed for 400 previously untreated cases presenting with thyroid nodules. Thyroid function was tested with commercially available radioimmunoassays. The receiver operating characteristic curves were constructed to determine cutoff values. The efficacy of the thyrotropin:thyroglobulin ratio and thyroid-stimulating hormone for thyroid nodule evaluation was evaluated in terms of sensitivity, specificity, positive predictive value, positive likelihood ratio, negative likelihood ratio, and odds ratio. In receiver operating characteristic curve analysis, the area under the curve was 0.746 for the thyrotropin:thyroglobulin ratio and 0.659 for thyroid-stimulating hormone. With a cutoff point value of 24.97 IU/g for the thyrotropin:thyroglobulin ratio, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 78.9%, 60.8%, 75.5%, 2.01, and 0.35, respectively. The odds ratio for the thyrotropin:thyroglobulin ratio indicating malignancy was 5.80. With a cutoff point value of 1.525 µIU/mL for thyroid-stimulating hormone, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 74.0%, 53.2%, 70.8%, 1.58, and 0.49, respectively. The odds ratio indicating malignancy for thyroid-stimulating hormone was 3.23. Increasing preoperative serum thyrotropin:thyroglobulin ratio is a risk factor for thyroid carcinoma, and the correlation of the thyrotropin:thyroglobulin ratio to malignancy is higher than that for serum thyroid-stimulating hormone. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
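
    The reported likelihood ratios follow directly from the sensitivity and specificity at the chosen cutoff; for the thyrotropin:thyroglobulin ratio, for example:

        \mathrm{LR}^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}} = \frac{0.789}{1 - 0.608} \approx 2.01,
        \qquad
        \mathrm{LR}^{-} = \frac{1 - \text{sensitivity}}{\text{specificity}} = \frac{1 - 0.789}{0.608} \approx 0.35 .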

  7. Nuclear Power Plant Thermocouple Sensor-Fault Detection and Classification Using Deep Learning and Generalized Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.

    2017-06-01

    In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, fault data are detected by a classification method, which separates the fault data from the normal data. A deep belief network (DBN), a deep learning technique, is applied to classify the fault data. The DBN has a multilayer feature extraction scheme, which is highly sensitive to small variations in the data. Since the classification method alone cannot identify the faulty sensor, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, namely the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated with field data obtained from thermocouple sensors of the fast breeder test reactor.

  8. Average Likelihood Methods for Code Division Multiple Access (CDMA)

    DTIC Science & Technology

    2014-05-01

    lengths in the range of 22 to 213 and possibly higher. Keywords: DS/CDMA signals, classification, balanced CDMA load, synchronous CDMA, decision...likelihood ratio test (ALRT). We begin this classification problem by finding the size of the spreading matrix that generated the DS/CDMA signal. As...Theoretical Background The classification of DS/CDMA signals should not be confused with the problem of multiuser detection. The multiuser detection deals

  9. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) maximum-likelihood estimates of the Weibull distribution parameters; (2) data for contour plots of relative likelihood for the two parameters; (3) data for contour plots of joint confidence regions; (4) data for the profile likelihood of the Weibull-distribution parameters; (5) data for the profile likelihood of any percentile of the distribution; and (6) likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
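
    A minimal Python sketch of the same maximum-likelihood idea (not the Fortran program itself): fitting a two-parameter Weibull distribution to fatigue lives with type-I censoring. The data values are invented for illustration; a likelihood-ratio confidence interval would then compare profile log-likelihoods against a chi-square cutoff.

        # Two-parameter Weibull maximum-likelihood fit with type-I (time) censoring.
        # 'times' are cycles to failure; 'failed' flags tests that actually failed
        # (False = suspended at that time).  All values are illustrative.
        import numpy as np
        from scipy.optimize import minimize

        times  = np.array([120., 190., 250., 300., 300., 340., 410., 500., 500.])
        failed = np.array([True, True, True, False, True, True, True, False, False])

        def neg_log_lik(params):
            shape, scale = np.exp(params)          # optimize on log scale for positivity
            z = times / scale
            # failures contribute the density, suspensions the survival function
            ll_fail = np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape
            ll_susp = -z**shape
            return -(np.sum(ll_fail[failed]) + np.sum(ll_susp[~failed]))

        res = minimize(neg_log_lik, x0=np.log([1.5, 300.0]), method="Nelder-Mead")
        shape_hat, scale_hat = np.exp(res.x)
        print(f"Weibull shape ~ {shape_hat:.2f}, scale ~ {scale_hat:.0f}")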

  10. Two simple clinical tests for predicting onset of medial tibial stress syndrome: shin palpation test and shin oedema test.

    PubMed

    Newman, Phil; Adams, Roger; Waddington, Gordon

    2012-09-01

    To examine the relationship between two clinical test results and future diagnosis of medial tibial stress syndrome (MTSS) in personnel at a military trainee establishment. Data from a preparticipation musculoskeletal screening test performed on 384 Australian Defence Force Academy Officer Cadets were compared against 693 injuries reported by 326 of the Officer Cadets in the following 16 months. Data were held in an Injury Surveillance database and analysed using χ² and Fisher's exact tests, and receiver operating characteristic curve analysis. Diagnosis of MTSS was confirmed by an independent blinded health practitioner. The palpation and oedema clinical tests were each found to be significant predictors of later onset of MTSS. Specifically: shin palpation test OR 4.63, 95% CI 2.5 to 8.5, positive likelihood ratio 3.38, negative likelihood ratio 0.732, Pearson χ² p<0.001; shin oedema test OR 76.1, 95% CI 9.6 to 602.7, positive likelihood ratio 7.26, negative likelihood ratio 0.095, Fisher's exact p<0.001; combined shin palpation test and shin oedema test positive likelihood ratio 7.94, negative likelihood ratio <0.001, Fisher's exact p<0.001. Female gender was found to be an independent risk factor (OR 2.97, 95% CI 1.66 to 5.31, positive likelihood ratio 2.09, negative likelihood ratio 0.703, Pearson χ² p<0.001) for developing MTSS. The tests for MTSS employed here are components of a normal clinical examination used to diagnose MTSS. This paper confirms that these tests and female gender can also be confidently applied in predicting those in an asymptomatic population who are at greater risk of developing MTSS symptoms with activity at some point in the future.

  11. The likelihood ratio as a random variable for linked markers in kinship analysis.

    PubMed

    Egeland, Thore; Slooten, Klaas

    2016-11-01

    The likelihood ratio is the fundamental quantity that summarizes the evidence in forensic cases. Therefore, it is important to understand the theoretical properties of this statistic. This paper is the last in a series of three, and the first to study linked markers. We show that for all non-inbred pairwise kinship comparisons, the expected likelihood ratio in favor of a type of relatedness depends on the allele frequencies only via the number of alleles, also for linked markers, and also if the true relationship is another one than is tested for by the likelihood ratio. Exact expressions for the expectation and variance are derived for all these cases. Furthermore, we show that the expected likelihood ratio is a non-increasing function if the recombination rate increases between 0 and 0.5 when the actual relationship is the one investigated by the LR. Besides being of theoretical interest, exact expressions such as obtained here can be used for software validation as they allow to verify the correctness up to arbitrary precision. The paper also presents results and advice of practical importance. For example, we argue that the logarithm of the likelihood ratio behaves in a fundamentally different way than the likelihood ratio itself in terms of expectation and variance, in agreement with its interpretation as weight of evidence. Equipped with the results presented and freely available software, one may check calculations and software and also do power calculations.

  12. Ab initio solution of macromolecular crystal structures without direct methods.

    PubMed

    McCoy, Airlie J; Oeffner, Robert D; Wrobel, Antoni G; Ojala, Juha R M; Tryggvason, Karl; Lohkamp, Bernhard; Read, Randy J

    2017-04-04

    The majority of macromolecular crystal structures are determined using the method of molecular replacement, in which known related structures are rotated and translated to provide an initial atomic model for the new structure. A theoretical understanding of the signal-to-noise ratio in likelihood-based molecular replacement searches has been developed to account for the influence of model quality and completeness, as well as the resolution of the diffraction data. Here we show that, contrary to current belief, molecular replacement need not be restricted to the use of models comprising a substantial fraction of the unknown structure. Instead, likelihood-based methods allow a continuum of applications depending predictably on the quality of the model and the resolution of the data. Unexpectedly, our understanding of the signal-to-noise ratio in molecular replacement leads to the finding that, with data to sufficiently high resolution, fragments as small as single atoms of elements usually found in proteins can yield ab initio solutions of macromolecular structures, including some that elude traditional direct methods.

  13. A New Monte Carlo Method for Estimating Marginal Likelihoods.

    PubMed

    Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O

    2018-06-01

    Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
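
    For orientation, a sketch of the harmonic mean estimator that the proposed partition-weighted estimator generalizes, shown only to fix ideas (it is known to be unstable in practice). It assumes log-likelihood values evaluated at posterior draws are available; the inputs below are simulated placeholders.

        # Harmonic mean estimator of the log marginal likelihood from a single
        # posterior sample: 1 / mean(1 / L_i), computed in log space for stability.
        import numpy as np

        def log_marginal_harmonic_mean(log_lik_samples):
            log_lik_samples = np.asarray(log_lik_samples)
            n = log_lik_samples.size
            # log m = -( logsumexp(-log L_i) - log n )
            log_mean_inv = np.logaddexp.reduce(-log_lik_samples) - np.log(n)
            return -log_mean_inv

        rng = np.random.default_rng(0)
        fake_log_liks = rng.normal(-120.0, 2.0, size=5000)   # stand-in posterior draws
        print(log_marginal_harmonic_mean(fake_log_liks))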

  14. Diffuse prior monotonic likelihood ratio test for evaluation of fused image quality measures.

    PubMed

    Wei, Chuanming; Kaplan, Lance M; Burks, Stephen D; Blum, Rick S

    2011-02-01

    This paper introduces a novel method to score how well proposed fused image quality measures (FIQMs) indicate the effectiveness of humans to detect targets in fused imagery. The human detection performance is measured via human perception experiments. A good FIQM should relate to perception results in a monotonic fashion. The method computes a new diffuse prior monotonic likelihood ratio (DPMLR) to facilitate the comparison of the H1 hypothesis that the intrinsic human detection performance is related to the FIQM via a monotonic function against the null hypothesis that the detection and image quality relationship is random. The paper discusses many interesting properties of the DPMLR and demonstrates the effectiveness of the DPMLR test via Monte Carlo simulations. Finally, the DPMLR is used to score FIQMs with test cases considering over 35 scenes and various image fusion algorithms.

  15. NGS-based likelihood ratio for identifying contributors in two- and three-person DNA mixtures.

    PubMed

    Chan Mun Wei, Joshua; Zhao, Zicheng; Li, Shuai Cheng; Ng, Yen Kaow

    2018-06-01

    DNA fingerprinting, also known as DNA profiling, serves as a standard procedure in forensics to identify a person by the short tandem repeat (STR) loci in their DNA. By comparing the STR loci between DNA samples, practitioners can calculate a probability of match to identify the contributors of a DNA mixture. Most existing methods are based on 13 core STR loci which were identified by the Federal Bureau of Investigation (FBI). Analyses based on these loci of DNA mixtures for forensic purposes are highly variable in procedures, and suffer from subjectivity as well as bias in complex mixture interpretation. With the emergence of next-generation sequencing (NGS) technologies, the sequencing of billions of DNA molecules can be parallelized, thus greatly increasing throughput and reducing the associated costs. This allows the creation of new techniques that incorporate more loci to enable complex mixture interpretation. In this paper, we propose a likelihood ratio computation that uses NGS data for DNA testing on mixed samples. We have applied the method to 4480 simulated DNA mixtures, which consist of various mixture proportions of 8 unrelated whole-genome sequencing data sets. The results confirm the feasibility of utilizing NGS data in DNA mixture interpretations. We observed an average likelihood ratio as high as 285,978 for two-person mixtures. Using our method, all 224 identity tests for two-person mixtures and three-person mixtures were correctly identified. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Bias correction of risk estimates in vaccine safety studies with rare adverse events using a self-controlled case series design.

    PubMed

    Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley

    2013-12-15

    The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate 2 bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.

  17. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion.

    PubMed

    Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed

    2017-01-01

    Electroencephalogram (EEG)-based decoding human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with current recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular currently used feature extraction and prediction method. This method showed an accuracy of 65.7%. However, the proposed method predicts the novel data with improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method.

  18. Statistical inference methods for sparse biological time series data.

    PubMed

    Ndukum, Juliet; Fonseca, Luís L; Santos, Helena; Voit, Eberhard O; Datta, Susmita

    2011-04-25

    Comparing metabolic profiles under different biological perturbations has become a powerful approach to investigating the functioning of cells. The profiles can be taken as single snapshots of a system, but more information is gained if they are measured longitudinally over time. The results are short time series consisting of relatively sparse data that cannot be analyzed effectively with standard time series techniques, such as autocorrelation and frequency domain methods. In this work, we study longitudinal time series profiles of glucose consumption in the yeast Saccharomyces cerevisiae under different temperatures and preconditioning regimens, which we obtained with methods of in vivo nuclear magnetic resonance (NMR) spectroscopy. For the statistical analysis we first fit several nonlinear mixed effect regression models to the longitudinal profiles and then used an ANOVA likelihood ratio method in order to test for significant differences between the profiles. The proposed methods are capable of distinguishing metabolic time trends resulting from different treatments and associate significance levels to these differences. Among several nonlinear mixed-effects regression models tested, a three-parameter logistic function represents the data with highest accuracy. ANOVA and likelihood ratio tests suggest that there are significant differences between the glucose consumption rate profiles for cells that had, or had not, been preconditioned by heat during growth. Furthermore, pair-wise t-tests reveal significant differences in the longitudinal profiles for glucose consumption rates between optimal conditions and heat stress, optimal and recovery conditions, and heat stress and recovery conditions (p-values <0.0001). We have developed a nonlinear mixed effects model that is appropriate for the analysis of sparse metabolic and physiological time profiles. The model permits sound statistical inference procedures, based on ANOVA likelihood ratio tests, for testing the significance of differences between short time course data under different biological perturbations.

  19. A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation.

    PubMed

    Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf

    2017-07-01

    This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayes' inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of the evidence for a trace specimen, e.g. a fingermark, and a reference specimen, e.g. a fingerprint, to originate from common or different sources. Some theoretical aspects of probabilities necessary for this Guideline were discussed prior to its elaboration, which started after a workshop of forensic researchers and practitioners involved in this topic. In the workshop, the following questions were addressed: "which aspects of a forensic evaluation scenario need to be validated?", "what is the role of the LR as part of a decision process?" and "how to deal with uncertainty in the LR calculation?". The question "what to validate?" focuses on the validation methods and criteria, and "how to validate?" deals with the implementation of the validation protocol. Answers to these questions were deemed necessary with several objectives. First, concepts typical of validation standards [1], such as performance characteristics, performance metrics and validation criteria, will be adapted or applied by analogy to the LR framework. Second, a validation strategy will be defined. Third, validation methods will be described. Finally, a validation protocol and an example of a validation report will be proposed, which can be applied to the forensic fields developing and validating LR methods for the evaluation of the strength of evidence at source level under the following propositions. Copyright © 2016. Published by Elsevier B.V.

  20. Comparison of two different size needles in endoscopic ultrasound-guided fine-needle aspiration for diagnosing solid pancreatic lesions

    PubMed Central

    Xu, Mei-Mei; Jia, Hong-Yu; Yan, Li-Li; Li, Shan-Shan; Zheng, Yue

    2017-01-01

    Abstract Background: This meta-analysis aimed to provide a pooled analysis of prospective controlled trials comparing the diagnostic accuracy of 22-G and 25-G needles in endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) of solid pancreatic masses. Methods: We established a rigorous study protocol according to Cochrane Collaboration recommendations. We systematically searched the PubMed and Embase databases to identify articles to include in the meta-analysis. Sensitivity, specificity, and corresponding 95% confidence intervals were calculated for 22-G and 25-G needles of individual studies from the contingency tables. Results: Eleven prospective controlled trials included a total of 837 patients (412 with 22-G vs 425 with 25-G). Our outcomes revealed that 25-G needles (92% [95% CI, 89%–95%]) have higher sensitivity than 22-G needles (88% [95% CI, 84%–91%]) in solid pancreatic mass EUS-FNA (P = 0.046). However, there were no significant differences between the 2 groups in overall diagnostic specificity (P = 0.842). The pooled positive likelihood ratio was 12.61 (95% CI, 5.65–28.14), and the negative likelihood ratio was 0.16 (95% CI, 0.12–0.21) for the 22-G needle. The pooled positive likelihood ratio was 8.44 (95% CI, 3.87–18.42), and the negative likelihood ratio was 0.13 (95% CI, 0.09–0.18) for the 25-G needle. The area under the summary receiver operating characteristic curve was 0.97 for the 22-G needle and 0.96 for the 25-G needle. Conclusion: Compared with 22-G EUS-FNA needles, 25-G needles showed superior sensitivity in the evaluation of solid pancreatic lesions by EUS-FNA. PMID:28151856

  1. Likelihood ratio-based differentiation of nodular Hashimoto thyroiditis and papillary thyroid carcinoma in patients with sonographically evident diffuse Hashimoto thyroiditis: preliminary study.

    PubMed

    Wang, Liang; Xia, Yu; Jiang, Yu-Xin; Dai, Qing; Li, Xiao-Yi

    2012-11-01

    To assess the efficacy of sonography for discriminating nodular Hashimoto thyroiditis from papillary thyroid carcinoma in patients with sonographically evident diffuse Hashimoto thyroiditis. This study included 20 patients with 24 surgically confirmed Hashimoto thyroiditis nodules and 40 patients with 40 papillary thyroid carcinoma nodules; all had sonographically evident diffuse Hashimoto thyroiditis. A retrospective review of the sonograms was performed, and significant benign and malignant sonographic features were selected by univariate and multivariate analyses. The combined likelihood ratio was calculated as the product of each feature's likelihood ratio for papillary thyroid carcinoma. We compared the abilities of the original sonographic features and combined likelihood ratios in diagnosing nodular Hashimoto thyroiditis and papillary thyroid carcinoma by their sensitivity, specificity, and Youden index. The diagnostic capabilities of the sonographic features varied greatly, with Youden indices ranging from 0.175 to 0.700. Compared with single features, combinations of features were unable to improve the Youden indices effectively because the sensitivity and specificity usually changed in opposite directions. For combined likelihood ratios, however, the sensitivity improved greatly without an obvious reduction in specificity, which resulted in the maximum Youden index (0.825). With a combined likelihood ratio greater than 7.00 as the diagnostic criterion for papillary thyroid carcinoma, sensitivity reached 82.5%, whereas specificity remained at 100.0%. With a combined likelihood ratio less than 1.00 for nodular Hashimoto thyroiditis, sensitivity and specificity were 90.0% and 92.5%, respectively. Several sonographic features of nodular Hashimoto thyroiditis and papillary thyroid carcinoma in a background of diffuse Hashimoto thyroiditis were significantly different. The combined likelihood ratio may be superior to original sonographic features for discrimination of nodular Hashimoto thyroiditis from papillary thyroid carcinoma; therefore, it is a promising risk index for thyroid nodules and warrants further investigation.
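
    A sketch of the combined-likelihood-ratio rule described above: multiply the per-feature likelihood ratios for papillary thyroid carcinoma and compare against the quoted cut-offs. The feature names and LR values below are illustrative, not the study's estimates; only the 7.00 and 1.00 thresholds come from the abstract.

        # Combined likelihood ratio as the product of per-feature LRs.
        def combined_lr(feature_lrs):
            lr = 1.0
            for value in feature_lrs.values():
                lr *= value
            return lr

        nodule = {"microcalcifications": 3.5, "taller_than_wide": 2.4, "hypoechoic": 1.2}
        lr = combined_lr(nodule)
        if lr > 7.0:
            label = "suggests papillary thyroid carcinoma"
        elif lr < 1.0:
            label = "suggests nodular Hashimoto thyroiditis"
        else:
            label = "indeterminate"
        print(f"combined LR = {lr:.2f}: {label}")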

  2. Statistical methods for analysis of radiation effects with tumor and dose location-specific information with application to the WECARE study of asynchronous contralateral breast cancer

    PubMed Central

    Langholz, Bryan; Thomas, Duncan C.; Stovall, Marilyn; Smith, Susan A.; Boice, John D.; Shore, Roy E.; Bernstein, Leslie; Lynch, Charles F.; Zhang, Xinbo; Bernstein, Jonine L.

    2009-01-01

    Summary Methods for the analysis of individually matched case-control studies with location-specific radiation dose and tumor location information are described. These include likelihood methods for analyses that just use cases with precise location of tumor information and methods that also include cases with imprecise tumor location information. The theory establishes that each of these likelihood based methods estimates the same radiation rate ratio parameters, within the context of the appropriate model for location and subject level covariate effects. The underlying assumptions are characterized and the potential strengths and limitations of each method are described. The methods are illustrated and compared using the WECARE study of radiation and asynchronous contralateral breast cancer. PMID:18647297

  3. Parameter estimation in astronomy through application of the likelihood ratio. [satellite data analysis techniques

    NASA Technical Reports Server (NTRS)

    Cash, W.

    1979-01-01

    Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.

  4. A Computer-Aided Diagnosis System for Breast Cancer Combining Mammography and Proteomics

    DTIC Science & Technology

    2007-05-01

    findings in both Data sets C and M. The likelihood ratio is the probability of the features under the malignant case divided by the probability of...likelihood ratio value as a classification decision variable, the probabilities of detection and false alarm are calculated as follows: Pdfusion...lowered the fused classifier's performance to near chance levels. A genetic algorithm searched over the likelihood-ratio threshold values for each

  5. Diagnostic Performance of Narrow Band Imaging for Nasopharyngeal Cancer: A Systematic Review and Meta-analysis.

    PubMed

    Sun, Changling; Zhang, Yayun; Han, Xue; Du, Xiaodong

    2018-03-01

    Objective The purposes of this study were to verify the effectiveness of the narrow band imaging (NBI) system in diagnosing nasopharyngeal cancer (NPC) as compared with white light endoscopy. Data Sources PubMed, Cochrane Library, EMBASE, CNKI, and Wan Fang databases. Review Methods Data analyses were performed with Meta-Disc. The updated Quality Assessment of Diagnostic Accuracy Studies-2 tool was used to assess study quality and potential bias. Publication bias was assessed with a Deeks asymmetry test. The registry number of the protocol published on PROSPERO is CRD42015026244. Results This meta-analysis included 10 studies of 1337 lesions. For NBI diagnosis of NPC, the pooled values were as follows: sensitivity, 0.83 (95% CI, 0.80-0.86); specificity, 0.91 (95% CI, 0.89-0.93); positive likelihood ratio, 8.82 (95% CI, 5.12-15.21); negative likelihood ratio, 0.18 (95% CI, 0.12-0.27); and diagnostic odds ratio, 65.73 (95% CI, 36.74-117.60). The area under the curve was 0.9549. For white light endoscopy in diagnosing NPC, the pooled values were as follows: sensitivity, 0.79 (95% CI, 0.75-0.83); specificity, 0.87 (95% CI, 0.84-0.90); positive likelihood ratio, 5.02 (95% CI, 1.99-12.65); negative likelihood ratio, 0.34 (95% CI, 0.24-0.49); and diagnostic odds ratio, 16.89 (95% CI, 5.98-47.66). The area under the curve was 0.8627. The evaluation of heterogeneity, calculated per the diagnostic odds ratio, gave an I² of 0.326. No marked publication bias (P = .68) existed in this meta-analysis. Conclusion The sensitivity and specificity of NBI for the diagnosis of NPC are similar to those of white light endoscopy, and the potential value of NBI for the diagnosis of NPC needs to be validated further.

  6. Training loads and injury risk in Australian football—differing acute: chronic workload ratios influence match injury risk

    PubMed Central

    Carey, David L; Blanch, Peter; Ong, Kok-Leong; Crossley, Kay M; Crow, Justin; Morris, Meg E

    2017-01-01

    Aims (1) To investigate whether a daily acute:chronic workload ratio informs injury risk in Australian football players; (2) to identify which combination of workload variable, acute and chronic time window best explains injury likelihood. Methods Workload and injury data were collected from 53 athletes over 2 seasons in a professional Australian football club. Acute:chronic workload ratios were calculated daily for each athlete, and modelled against non-contact injury likelihood using a quadratic relationship. 6 workload variables, 8 acute time windows (2–9 days) and 7 chronic time windows (14–35 days) were considered (336 combinations). Each parameter combination was compared for injury likelihood fit (using R2). Results The ratio of moderate speed running workload (18–24 km/h) in the previous 3 days (acute time window) compared with the previous 21 days (chronic time window) best explained the injury likelihood in matches (R2=0.79) and in the immediate 2 or 5 days following matches (R2=0.76–0.82). The 3:21 acute:chronic workload ratio discriminated between high-risk and low-risk athletes (relative risk=1.98–2.43). Using the previous 6 days to calculate the acute workload time window yielded similar results. The choice of acute time window significantly influenced model performance and appeared to reflect the competition and training schedule. Conclusions Daily workload ratios can inform injury risk in Australian football. Clinicians and conditioning coaches should consider the sport-specific schedule of competition and training when choosing acute and chronic time windows. For Australian football, the ratio of moderate speed running in a 3-day or 6-day acute time window and a 21-day chronic time window best explained injury risk. PMID:27789430
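
    A sketch of the daily acute:chronic workload ratio described above, using a 3-day acute window and a 21-day chronic window of a single workload variable; the daily loads are illustrative numbers, not the study's data.

        # Acute:chronic workload ratio: mean of the last 'acute_days' of load divided
        # by the mean of the last 'chronic_days' of load.
        def acute_chronic_ratio(daily_loads, acute_days=3, chronic_days=21):
            if len(daily_loads) < chronic_days:
                raise ValueError("need at least one full chronic window")
            acute = sum(daily_loads[-acute_days:]) / acute_days
            chronic = sum(daily_loads[-chronic_days:]) / chronic_days
            return acute / chronic

        moderate_speed_running_m = [620, 480, 0, 700, 650, 0, 540] * 3  # 21 days of load
        print(round(acute_chronic_ratio(moderate_speed_running_m), 2))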

  7. Less-Complex Method of Classifying MPSK

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2006-01-01

    An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis, M or M', is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, l, of equally spaced values of carrier phase. Used in this way, l is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as l approaches infinity, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method.
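
    A rough sketch of the approximation described above, under simplifying assumptions (unit-amplitude PSK symbols, complex white Gaussian noise of known variance, ideal symbol timing): the carrier-phase integral is replaced by a sum over l equally spaced trial phases. This is an illustrative reconstruction, not the article's exact likelihood expressions.

        # Approximate MPSK likelihood: average over M equally likely symbols per
        # sample, then average over l trial carrier phases instead of integrating.
        import numpy as np

        def log_likelihood_mpsk(r, M, noise_var, l=16):
            phases = 2 * np.pi * np.arange(l) / l
            symbols = np.exp(1j * 2 * np.pi * np.arange(M) / M)
            log_l_per_phase = []
            for phi in phases:
                d = np.abs(r[:, None] - symbols[None, :] * np.exp(1j * phi)) ** 2
                per_sample = np.log(np.mean(np.exp(-d / noise_var), axis=1))
                log_l_per_phase.append(np.sum(per_sample))
            # average over trial phases (the approximation to the phase integral)
            return np.logaddexp.reduce(log_l_per_phase) - np.log(l)

        rng = np.random.default_rng(1)
        true_M, noise_var, n = 4, 0.2, 200
        tx = np.exp(1j * 2 * np.pi * rng.integers(0, true_M, n) / true_M)
        noise = np.sqrt(noise_var / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
        r = tx * np.exp(1j * 0.3) + noise
        for M in (2, 4, 8):
            print(M, round(log_likelihood_mpsk(r, M, noise_var), 1))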

  8. Likelihood ratio meta-analysis: New motivation and approach for an old method.

    PubMed

    Dormuth, Colin R; Filion, Kristian B; Platt, Robert W

    2016-03-01

    A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded in excluding the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate, and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed effect and random effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. Copyright © 2016 Elsevier Inc. All rights reserved.
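
    A sketch of the general recipe described above, assuming (for illustration only) a normal approximation to each study's log-likelihood for the log effect size: per-study log-likelihood functions are summed on a grid, and the combined maximum and a likelihood ("support") interval are read off. The studies, estimates and the support cut-off below are invented, not taken from the article.

        # Sum per-study log-likelihood functions and read off a support interval.
        import numpy as np

        studies = [  # (log effect estimate, standard error) per study, illustrative
            (0.25, 0.12),
            (0.10, 0.20),
            (0.32, 0.15),
        ]

        grid = np.linspace(-0.5, 1.0, 3001)
        total_loglik = np.zeros_like(grid)
        for est, se in studies:
            total_loglik += -0.5 * ((grid - est) / se) ** 2   # normal log-likelihood

        total_loglik -= total_loglik.max()                    # log-LR against the best value
        best = grid[np.argmax(total_loglik)]
        inside = grid[total_loglik >= -np.log(32)]            # 1/32 support interval
        print(f"combined estimate {best:.3f}, interval {inside.min():.3f} to {inside.max():.3f}")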

  9. Interpreting DNA mixtures with the presence of relatives.

    PubMed

    Hu, Yue-Qing; Fung, Wing K

    2003-02-01

    The assessment of DNA mixtures with the presence of relatives is discussed in this paper. The kinship coefficients are incorporated into the evaluation of the likelihood ratio and we first derive a unified expression of joint genotypic probabilities. A general formula and seven types of detailed expressions for calculating likelihood ratios are then developed for the case that a relative of the tested suspect is an unknown contributor to the mixed stain. These results can also be applied to the case of a non-tested suspect with one tested relative. Moreover, the formula for calculating the likelihood ratio when there are two related unknown contributors is given. Data for a real situation are given for illustration, and the effect of kinship on the likelihood ratio is shown therein. Some interesting findings are obtained.

  10. An empirical likelihood ratio test robust to individual heterogeneity for differential expression analysis of RNA-seq.

    PubMed

    Xu, Maoqi; Chen, Liang

    2018-01-01

    The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice the analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomics studies of cancers and other complex disease. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Validation of the diagnostic score for acute lower abdominal pain in women of reproductive age.

    PubMed

    Jearwattanakanok, Kijja; Yamada, Sirikan; Suntornlimsiri, Watcharin; Smuthtai, Waratsuda; Patumanond, Jayanton

    2014-01-01

    Background. The differential diagnosis of acute appendicitis, obstetric and gynecological conditions (OB-GYNc), or nonspecific abdominal pain in young adult females with lower abdominal pain is clinically challenging. The present study aimed to validate the recently developed clinical score for the diagnosis of acute lower abdominal pain in women of reproductive age. Method. Medical records of reproductive-age women (15-50 years) who were admitted for acute lower abdominal pain were collected. Validation data were obtained from patients admitted during a different period from the development data. Result. There were 302 patients in the validation cohort. For appendicitis, the score had a sensitivity of 91.9%, a specificity of 79.0%, and a positive likelihood ratio of 4.39. The sensitivity, specificity, and positive likelihood ratio in the diagnosis of OB-GYNc were 73.0%, 91.6%, and 8.73, respectively. The areas under the receiver operating characteristic (ROC) curves and the positive likelihood ratios for appendicitis and OB-GYNc in the validation data were not significantly different from those in the development data, implying similar performance. Conclusion. The clinical score developed for the diagnosis of acute lower abdominal pain in women of reproductive age may be applied to guide differential diagnoses in these patients.

  12. An Adjusted Likelihood Ratio Approach Analysing Distribution of Food Products to Assist the Investigation of Foodborne Outbreaks

    PubMed Central

    Norström, Madelaine; Kristoffersen, Anja Bråthen; Görlach, Franziska Sophie; Nygård, Karin; Hopp, Petter

    2015-01-01

    In order to facilitate foodborne outbreak investigations there is a need to improve the methods for identifying the food products that should be sampled for laboratory analysis. The aim of this study was to examine the applicability of a likelihood ratio approach, previously developed on simulated data, to real outbreak data. We used human case and food product distribution data from the Norwegian enterohaemorrhagic Escherichia coli outbreak in 2006. The approach was adjusted to include time and space smoothing and to handle missing or misclassified information. The performance of the adjusted likelihood ratio approach on the data originating from the HUS outbreak and on control data indicates that the adjusted approach is promising and could be a useful tool to assist and facilitate the investigation of foodborne outbreaks in the future, provided that good traceability is available and implemented in the distribution chain. However, the approach needs to be further validated on other outbreak data, also including food products other than meat products, in order to reach a more general conclusion about the applicability of the developed approach. PMID:26237468

  13. Performance and sensitivity analysis of the generalized likelihood ratio method for failure detection. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bueno, R. A.

    1977-01-01

    Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft application are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in identifying correctly the mode of a failure may arise. These issues are closely examined as well as the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.

  14. A LANDSAT study of ephemeral and perennial rangeland vegetation and soils

    NASA Technical Reports Server (NTRS)

    Bentley, R. G., Jr. (Principal Investigator); Salmon-Drexler, B. C.; Bonner, W. J.; Vincent, R. K.

    1976-01-01

    The author has identified the following significant results. Several methods of computer processing were applied to LANDSAT data for mapping vegetation characteristics of perennial rangeland in Montana and ephemeral rangeland in Arizona. The choice of optimal processing technique was dependent on prescribed mapping and site condition. Single channel level slicing and ratioing of channels were used for simple enhancement. Predictive models for mapping percent vegetation cover based on data from field spectra and LANDSAT data were generated by multiple linear regression of six unique LANDSAT spectral ratios. Ratio gating logic and maximum likelihood classification were applied successfully to recognize plant communities in Montana. Maximum likelihood classification did little to improve recognition of terrain features when compared to a single channel density slice in sparsely vegetated Arizona. LANDSAT was found to be more sensitive to differences between plant communities based on percentages of vigorous vegetation than to actual physical or spectral differences among plant species.

  15. Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods

    ERIC Educational Resources Information Center

    Merkle, Edgar C.; Zeileis, Achim

    2013-01-01

    The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…

  16. IRT Model Selection Methods for Dichotomous Items

    ERIC Educational Resources Information Center

    Kang, Taehoon; Cohen, Allan S.

    2007-01-01

    Fit of the model to the data is important if the benefits of item response theory (IRT) are to be obtained. In this study, the authors compared model selection results using the likelihood ratio test, two information-based criteria, and two Bayesian methods. An example illustrated the potential for inconsistency in model selection depending on…

  17. PBOOST: a GPU-based tool for parallel permutation tests in genome-wide association studies.

    PubMed

    Yang, Guangyuan; Jiang, Wei; Yang, Qiang; Yu, Weichuan

    2015-05-01

    The importance of testing associations allowing for interactions has been demonstrated by Marchini et al. (2005). A fast method detecting associations allowing for interactions has been proposed by Wan et al. (2010a). The method is based on the likelihood ratio test with the assumption that the statistic follows the χ² distribution. Many single nucleotide polymorphism (SNP) pairs with significant associations allowing for interactions have been detected using their method. However, the assumption of the χ² test requires the expected values in each cell of the contingency table to be at least five. This assumption is violated in some identified SNP pairs. In this case, the likelihood ratio test may not be applicable any more. The permutation test is an ideal approach to checking the P-values calculated in the likelihood ratio test because of its non-parametric nature. The P-values of SNP pairs having significant associations with disease are always extremely small. Thus, we need a huge number of permutations to achieve correspondingly high resolution for the P-values. In order to investigate whether the P-values from likelihood ratio tests are reliable, a fast permutation tool to accomplish a large number of permutations is desirable. We developed a permutation tool named PBOOST. It is GPU-based, with highly reliable P-value estimation. By using simulation data, we found that the P-values from likelihood ratio tests will have relative error of >100% when 50% of the cells in the contingency table have expected count less than five or when there is zero expected count in any of the contingency table cells. In terms of speed, PBOOST completed 10⁷ permutations for a single SNP pair from the Wellcome Trust Case Control Consortium (WTCCC) genome data (Wellcome Trust Case Control Consortium, 2007) within 1 min on a single Nvidia Tesla M2090 device, while it took 60 min on a single CPU Intel Xeon E5-2650 to finish the same task. More importantly, when simultaneously testing 256 SNP pairs for 10⁷ permutations, our tool took only 5 min, while the CPU program took 10 h. By permuting on a GPU cluster consisting of 40 nodes, we completed 10¹² permutations for all 280 SNP pairs reported with P-values smaller than 1.6 × 10⁻¹² in the WTCCC datasets in 1 week. The source code and sample data are available at http://bioinformatics.ust.hk/PBOOST.zip. gyang@ust.hk; eeyu@ust.hk Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
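
    For readers unfamiliar with the mechanics, a CPU-only sketch of a label-permutation test for a single association statistic (far simpler than PBOOST's GPU-parallel SNP-pair interaction tests); it also illustrates why extremely small P-values demand very large permutation counts. The data and the statistic are illustrative.

        # Permutation test: shuffle case/control labels and recompute a likelihood
        # ratio (G^2) statistic on a 2 x 3 genotype table to estimate a p-value.
        import numpy as np

        def lr_statistic(genotypes, labels):
            stat = 0.0
            n = len(labels)
            for y in (0, 1):
                for g in (0, 1, 2):
                    obs = np.sum((labels == y) & (genotypes == g))
                    exp = np.sum(labels == y) * np.sum(genotypes == g) / n
                    if obs > 0:
                        stat += 2.0 * obs * np.log(obs / exp)
            return stat

        rng = np.random.default_rng(2)
        genotypes = rng.integers(0, 3, size=2000)
        labels = rng.integers(0, 2, size=2000)

        observed = lr_statistic(genotypes, labels)
        n_perm = 10_000
        exceed = sum(lr_statistic(genotypes, rng.permutation(labels)) >= observed
                     for _ in range(n_perm))
        print(f"permutation p-value ~ {(exceed + 1) / (n_perm + 1):.4f}")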

  18. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion

    PubMed Central

    2017-01-01

    Decoding human brain activity from the electroencephalogram (EEG) is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input from multichannel EEG time series, an approach also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform–support vector machine method, the most widely used feature extraction and prediction method at present, showed an accuracy of 65.7%, whereas the proposed method predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods. PMID:28558002
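
    As a rough illustration of likelihood ratio-based score fusion (one ingredient of the hybrid algorithm above), the sketch below fits Gaussian class-conditional models to each classifier's scores and sums the per-classifier log-likelihood ratios. The Gaussian score model and the independence assumption are simplifications for illustration only; the paper's CNN feature extraction and t-test selection are not reproduced here, and all data are synthetic.

    ```python
    import numpy as np
    from scipy.stats import norm

    def fit_score_model(scores_pos, scores_neg):
        """Fit simple Gaussian class-conditional models to one classifier's scores."""
        return (norm(scores_pos.mean(), scores_pos.std(ddof=1)),
                norm(scores_neg.mean(), scores_neg.std(ddof=1)))

    def fused_log_lr(score_vectors, models):
        """Sum per-classifier log-likelihood ratios; assumes score independence."""
        total = np.zeros(len(score_vectors))
        for j, (pos, neg) in enumerate(models):
            s = score_vectors[:, j]
            total += pos.logpdf(s) - neg.logpdf(s)
        return total

    # Toy example: two classifiers' scores for the target vs non-target class
    rng = np.random.default_rng(0)
    train_pos = rng.normal([1.0, 0.8], 0.5, size=(200, 2))
    train_neg = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
    models = [fit_score_model(train_pos[:, j], train_neg[:, j]) for j in range(2)]

    test = np.array([[0.9, 0.7], [0.1, -0.2]])
    print(fused_log_lr(test, models))   # positive -> target class favoured
    ```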

  19. Bayesian analysis of time-series data under case-crossover designs: posterior equivalence and inference.

    PubMed

    Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay

    2013-12-01

    Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.

  20. The Diagnostic Accuracy of Cytology for the Diagnosis of Hepatobiliary and Pancreatic Cancers.

    PubMed

    Al-Hajeili, Marwan; Alqassas, Maryam; Alomran, Astabraq; Batarfi, Bashaer; Basunaid, Bashaer; Alshail, Reem; Alaydarous, Shahad; Bokhary, Rana; Mosli, Mahmoud

    2018-06-13

    Although cytology testing is considered a valuable method to diagnose tumors that are difficult to access such as hepato-biliary-pancreatic (HBP) malignancies, its diagnostic accuracy remains unclear. We therefore aimed to investigate the diagnostic accuracy of cytology testing for HBP tumors. We performed a retrospective study of all cytology samples that were used to confirm radiologically detected HBP tumors between 2002 and 2016. The cytology techniques used in our center included fine needle aspiration (FNA), brush cytology, and aspiration of bile. Sensitivity, specificity, positive and negative predictive values, and likelihood ratios were calculated in comparison to histological confirmation. From a total of 133 medical records, we calculated an overall sensitivity of 76%, specificity of 74%, a negative likelihood ratio of 0.30, and a positive likelihood ratio of 2.9. Cytology was more accurate in diagnosing lesions of the liver (sensitivity 79%, specificity 57%) and biliary tree (sensitivity 100%, specificity 50%) compared to pancreatic (sensitivity 60%, specificity 83%) and gallbladder lesions (sensitivity 50%, specificity 85%). Cytology was more accurate in detecting primary cancers (sensitivity 77%, specificity 73%) when compared to metastatic cancers (sensitivity 73%, specificity 100%). FNA was the most frequently used cytological technique to diagnose HBP lesions (sensitivity 78.8%). Cytological testing is efficient in diagnosing HBP cancers, especially for hepatobiliary tumors. Given its relative simplicity, cost-effectiveness, and paucity of alternative diagnostic methods, cytology should still be considered as a first-line tool for diagnosing HBP malignancies. © 2018 S. Karger AG, Basel.
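
    The accuracy measures reported above all derive from a 2×2 table of test results against the histological reference. A minimal sketch of those calculations is shown below; the counts passed in are hypothetical, not the study's data.

    ```python
    def diagnostic_summary(tp, fp, fn, tn):
        """Sensitivity, specificity, predictive values and likelihood ratios from a 2x2 table."""
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        return {
            "sensitivity": sens,
            "specificity": spec,
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn),
            "LR+": sens / (1 - spec),   # positive likelihood ratio
            "LR-": (1 - sens) / spec,   # negative likelihood ratio
        }

    # Hypothetical counts (cytology vs histological reference), not the study's data
    print(diagnostic_summary(tp=76, fp=26, fn=24, tn=74))
    ```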

  1. Prediction of hamstring injury in professional soccer players by isokinetic measurements

    PubMed Central

    Dauty, Marc; Menu, Pierre; Fouasson-Chailloux, Alban; Ferréol, Sophie; Dubois, Charles

    2016-01-01

    Summary. Objectives: Previous studies investigating the ability of isokinetic strength ratios to predict hamstring injuries in soccer players have reported conflicting results. Hypothesis: Isokinetic ratios are able to predict hamstring injury occurring during the season in professional soccer players. Study design: Case-control study; level of evidence, 3. Methods: From 2001 to 2011, 350 isokinetic tests were performed in 136 professional soccer players at the beginning of the soccer season. Fifty-seven players suffered hamstring injury during the season that followed the isokinetic tests. These players were compared with the 79 uninjured players. The bilateral concentric ratio (hamstring-to-hamstring), ipsilateral concentric ratio (hamstring-to-quadriceps), and mixed ratio (eccentric/concentric hamstring-to-quadriceps) were studied. The predictive ability of each ratio was established based on the likelihood ratio and post-test probability. Results: The mixed ratio (30 eccentric/240 concentric hamstring-to-quadriceps) <0.8, ipsilateral ratio (180 concentric hamstring-to-quadriceps) <0.47, and bilateral ratio (60 concentric hamstring-to-hamstring) <0.85 were the most predictive of hamstring injury. An ipsilateral ratio <0.47 allowed prediction of the severity of the hamstring injury, and was also influenced by the length of time since administration of the isokinetic tests. Conclusion: Isokinetic ratios are useful for predicting the likelihood of hamstring injury in professional soccer players during the competitive season. PMID:27331039
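
    The post-test probability used to grade each ratio's predictive ability follows from Bayes' rule applied on the odds scale: post-test odds = pretest odds × likelihood ratio. A minimal sketch, with hypothetical numbers:

    ```python
    def post_test_probability(pretest_prob, likelihood_ratio):
        """Convert a pretest probability to a post-test probability via odds."""
        pretest_odds = pretest_prob / (1 - pretest_prob)
        post_odds = pretest_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # Hypothetical: 20% baseline injury risk and a positive likelihood ratio of 3
    print(post_test_probability(0.20, 3.0))   # -> 0.4286...
    ```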

  2. Comparison between transthoracic lung ultrasound and a clinical method in confirming the position of double-lumen tube in thoracic anaesthesia. A pilot study.

    PubMed

    Álvarez-Díaz, N; Amador-García, I; Fuentes-Hernández, M; Dorta-Guerra, R

    2015-01-01

    To compare the ability of lung ultrasound and a clinical method to confirm selective bronchial intubation with a left double-lumen tube in elective thoracic surgery. A prospective, blind, observational study was conducted in a university hospital operating room assigned to thoracic surgery. A single group of 105 consecutive patients, from a total of 130, was included. After blind intubation, the position of the tube was confirmed by clinical and ultrasound assessment. Fiberoptic bronchoscopy was then used as the reference standard to confirm the position of the tube. Under manual ventilation, by sequentially clamping the tracheal and bronchial limbs of the tube, clinical confirmation was made by auscultation, capnography, visualizing the chest wall expansion, and perceiving the lung compliance in the reservoir bag. Ultrasound confirmation was obtained by visualizing lung sliding, diaphragmatic movements, and the appearance of the lung pulse sign. The sensitivity of the clinical method was 84.5%, with a specificity of 41.1%; the positive and negative likelihood ratios were 1.44 and 0.38, respectively. The sensitivity of the ultrasound method was 98.6% and its specificity 52.9%, with a positive likelihood ratio of 2.10 and a negative likelihood ratio of 0.03. Comparisons between the diagnostic performance of the two methods were made with McNemar's test, with P<.01 considered statistically significant. There was a significant difference in sensitivity between the ultrasound method and the clinical method (P=.002), but no statistically significant difference in specificity between the two methods (P=.34). Lung ultrasound was superior to the clinical method in confirming the adequate position of the left double-lumen tube; however, for confirming misplacement of the tube, a difference between the two methods could not be established. Copyright © 2014 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Publicado por Elsevier España, S.L.U. All rights reserved.
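
    McNemar's test, used above to compare the paired performance of the two confirmation methods, depends only on the discordant pairs. A minimal sketch with hypothetical discordant-pair counts:

    ```python
    from scipy.stats import chi2

    def mcnemar(b, c, correction=True):
        """McNemar chi-square test from the two discordant-pair counts.

        b: method A correct, method B wrong; c: method A wrong, method B correct.
        """
        stat = (abs(b - c) - (1 if correction else 0)) ** 2 / (b + c)
        return stat, chi2.sf(stat, df=1)

    # Hypothetical discordant pairs for ultrasound vs clinical confirmation
    print(mcnemar(b=16, c=3))
    ```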

  3. Maximum likelihood estimation of signal-to-noise ratio and combiner weight

    NASA Technical Reports Server (NTRS)

    Kalson, S.; Dolinar, S. J.

    1986-01-01

    An algorithm for estimating the signal-to-noise ratio and combiner weight parameters of a discrete time series is presented. The algorithm is based on the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase-modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
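
    As a simplified illustration of maximum likelihood SNR estimation, the sketch below assumes the transmitted binary symbols are known (e.g., a pilot sequence), in which case the ML amplitude and noise-variance estimates reduce to sample averages. The paper's joint estimator, which does not assume known symbols, is more involved; SNR is defined here as amplitude squared over noise variance.

    ```python
    import numpy as np

    def ml_snr_known_symbols(r, d):
        """ML amplitude, noise variance, and SNR for r_k = m*d_k + n_k,
        with known symbols d_k in {-1, +1} and white Gaussian noise."""
        m_hat = np.mean(r * d)                    # ML amplitude estimate
        var_hat = np.mean((r - m_hat * d) ** 2)   # ML noise variance estimate
        return m_hat, var_hat, m_hat ** 2 / var_hat

    rng = np.random.default_rng(0)
    d = rng.choice([-1.0, 1.0], size=10_000)
    r = 1.5 * d + rng.normal(0.0, 1.0, size=d.size)   # true SNR = 2.25
    print(ml_snr_known_symbols(r, d))
    ```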

  4. Ultrasound assessment of endometrial cavity in perimenopausal women on oral progesterone for abnormal uterine bleeding: comparison of diagnostic accuracy of imaging with hysteroscopy-guided biopsy.

    PubMed

    Dasgupta, Subhankar; Dasgupta, Shyamal; Sharma, Partha Pratim; Mukherjee, Amitabha; Ghosh, Tarun Kumar

    2011-11-01

    To investigate the effect of oral progesterone on the accuracy of imaging studies performed to detect endometrial pathology, in comparison with hysteroscopy-guided biopsy, in perimenopausal women on progesterone treatment for abnormal uterine bleeding. The study population comprised women aged 40-55 years with complaints of abnormal uterine bleeding who were also undergoing oral progesterone therapy. Women with a uterus ≥ 12 weeks' gestation size, previous abnormal endometrial biopsy, cervical lesion on speculum examination, abnormal Pap smear, active pelvic infection, adnexal mass on clinical examination or during ultrasound scan, or a positive pregnancy test were excluded. A transvaginal ultrasound followed by saline infusion sonography was performed. On the following day, a hysteroscopy followed by a guided biopsy of the endometrium or any endometrial lesion was performed. The results of the imaging studies were compared with those of hysteroscopy and guided biopsy. The final analysis included 83 patients. For detection of overall pathology, polyp and fibroid, transvaginal ultrasound had positive likelihood ratios of 1.65, 5.45 and 5.4, respectively, and negative likelihood ratios of 0.47, 0.6 and 0.43, respectively. For detection of overall pathology, polyp and fibroid, saline infusion sonography had positive likelihood ratios of 4.4, 5.35 and 11.8, respectively, and negative likelihood ratios of 0.3, 0.2 and 0.15, respectively. In perimenopausal women on oral progesterone therapy for abnormal uterine bleeding, imaging studies cannot be considered an accurate method for diagnosing endometrial pathology when compared with hysteroscopy and guided biopsy. © 2011 The Authors. Journal of Obstetrics and Gynaecology Research © 2011 Japan Society of Obstetrics and Gynecology.

  5. Detection of abrupt changes in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1984-01-01

    Some of the basic ideas associated with the detection of abrupt changes in dynamic systems are presented. Multiple filter-based techniques, residual-based methods, and the multiple-model and generalized likelihood ratio methods are considered. Issues such as the effect of unknown onset time on algorithm complexity and structure, and robustness to model uncertainty, are discussed.

  6. Investigating Measurement Invariance in Computer-Based Personality Testing: The Impact of Using Anchor Items on Effect Size Indices

    ERIC Educational Resources Information Center

    Egberink, Iris J. L.; Meijer, Rob R.; Tendeiro, Jorge N.

    2015-01-01

    A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often only reported in terms of statistical significance, and researchers proposed different methods to empirically select anchor items. It is unclear, however, how many…

  7. Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

    PubMed Central

    Islam, Md. Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676

  8. Feature and score fusion based multiple classifier selection for iris recognition.

    PubMed

    Islam, Md Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al.

  9. The performance of blood pressure-to-height ratio as a screening measure for identifying children and adolescents with hypertension: a meta-analysis.

    PubMed

    Ma, Chunming; Liu, Yue; Lu, Qiang; Lu, Na; Liu, Xiaoli; Tian, Yiming; Wang, Rui; Yin, Fuzai

    2016-02-01

    The blood pressure-to-height ratio (BPHR) has been shown to be an accurate index for screening for hypertension in children and adolescents. The aim of the present study was to perform a meta-analysis to assess the performance of the BPHR for the assessment of hypertension. Electronic and manual searches were performed to identify studies of the BPHR. After methodological quality assessment and data extraction, pooled estimates of the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, area under the receiver operating characteristic curve and summary receiver operating characteristics were assessed systematically, and the extent of heterogeneity was assessed. Six studies were identified for analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio and diagnostic odds ratio of the BPHR, for the assessment of hypertension, were 96% [95% confidence interval (CI)=0.95-0.97], 90% (95% CI=0.90-0.91), 10.68 (95% CI=8.03-14.21), 0.04 (95% CI=0.03-0.07) and 247.82 (95% CI=114.50-536.34), respectively. The area under the receiver operating characteristic curve was 0.9472. The BPHR had high diagnostic accuracy for identifying hypertension in children and adolescents.

  10. SEMModComp: An R Package for Calculating Likelihood Ratio Tests for Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Levy, Roy

    2010-01-01

    SEMModComp, a software package for conducting likelihood ratio tests for mean and covariance structure modeling, is described. The package is written in R and is freely available for download or on request.

  11. Validation of software for calculating the likelihood ratio for parentage and kinship.

    PubMed

    Drábek, J

    2009-03-01

    Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to satisfy the general requirements for the competence of testing and calibration laboratories (EN/ISO/IEC 17025:2005 norm). The software in question can be considered critical, as it directly weighs the forensic evidence, allowing judges to decide on guilt or innocence or to identify a person or kin (e.g. in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios, I assessed available vendors, chose two programs (Paternity Index and familias) for testing, and finally validated them using tests derived from the available guidelines in forensics, biomedicine, and software engineering. MS Excel calculations using known likelihood ratio formulas, or peer-reviewed results of difficult paternity cases, were used as a reference. Using seven test cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of the two software programs fulfills the criteria needed for our purpose across the whole spectrum of functions under validation, with the exception of providing algebraic formulas in cases of mutation and/or silent alleles.

  12. Validation of DNA-based identification software by computation of pedigree likelihood ratios.

    PubMed

    Slooten, K

    2011-08-01

    Disaster victim identification (DVI) can be aided by DNA evidence, by comparing the DNA profiles of unidentified individuals with those of surviving relatives. The DNA evidence is used optimally when such a comparison is done by calculating the appropriate likelihood ratios. Though conceptually simple, the calculations can be quite involved, especially with large pedigrees, precise mutation models, etc. In this article we describe a series of test cases designed to check whether software designed to calculate such likelihood ratios computes them correctly. The cases include both simple and more complicated pedigrees, among them inbred ones. We show how to calculate the likelihood ratio numerically and algebraically, including a general mutation model and the possibility of allelic dropout. In Appendix A we show how to derive such algebraic expressions mathematically. We have set up these cases to validate new software, called Bonaparte, which performs pedigree likelihood ratio calculations in a DVI context. Bonaparte has been developed by SNN Nijmegen (The Netherlands) for the Netherlands Forensic Institute (NFI). It is available free of charge for non-commercial purposes (see www.dnadvi.nl for details). Commercial licenses can also be obtained. The software uses Bayesian networks and the junction tree algorithm to perform its calculations. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  13. Diagnostic accuracy of liver fibrosis based on red cell distribution width (RDW) to platelet ratio with fibroscan in chronic hepatitis B

    NASA Astrophysics Data System (ADS)

    Sembiring, J.; Jones, F.

    2018-03-01

    The red cell distribution width (RDW) to platelet ratio (RPR) can predict liver fibrosis and cirrhosis in chronic hepatitis B with relatively high accuracy. The RPR is superior to other non-invasive methods for predicting liver fibrosis, such as the AST-to-ALT ratio, the AST-to-platelet ratio index and FIB-4. The aim of this study was to assess the diagnostic accuracy of the RDW-to-platelet ratio for liver fibrosis in chronic hepatitis B patients, compared with FibroScan. This cross-sectional study was conducted at Adam Malik Hospital from January to June 2015. We examined 34 chronic hepatitis B patients, recording RDW, platelet count, and FibroScan results, and the data were analyzed statistically. In the ROC analysis, the RPR had an accuracy of 72.3% (95% CI: 84.1% - 97%). In this study, the RPR had a moderate ability to predict fibrosis degree (p = 0.029 with AUC > 70%). The cutoff value of the RPR was 0.0591, sensitivity and specificity were 71.4% and 60%, the positive predictive value (PPV) was 55.6% and the negative predictive value (NPV) was 75%, the positive likelihood ratio was 1.79 and the negative likelihood ratio was 0.48. The RPR is able to predict the degree of liver fibrosis in chronic hepatitis B patients with moderate accuracy.

  14. Screening for postnatal depression in Chinese-speaking women using the Hong Kong translated version of the Edinburgh Postnatal Depression Scale.

    PubMed

    Chen, Helen; Bautista, Dianne; Ch'ng, Ying Chia; Li, Wenyun; Chan, Edwin; Rush, A John

    2013-06-01

    The Edinburgh Postnatal Depression Scale (EPDS) may not be a uniformly valid postnatal depression (PND) screen across populations. We evaluated the performance of a Chinese translation of the 10-item (HK-EPDS) and six-item (HK-EPDS-6) versions in post-partum women in Singapore. Chinese-speaking post-partum obstetric clinic patients were recruited for this study. They completed the HK-EPDS, from which we derived the six-item HK-EPDS-6. All women were clinically assessed for PND based on Diagnostic and Statistical Manual, Fourth Edition-Text Revision criteria. Receiver operating characteristic (ROC) analyses and likelihood ratio computations informed scale cutoff choices. Clinical fitness was judged by thresholds for internal consistency (α ≥ 0.70) and for diagnostic performance by true-positive rate (>85%), false-positive rate (≤10%), positive likelihood ratio (>1), negative likelihood ratio (<0.2), area under the ROC curve (AUC, ≥90%) and effect size (≥0.80). Based on clinical interview, the prevalence of PND was 6.2% in 487 post-partum women. HK-EPDS internal consistency was 0.84. At a cutoff of 13 or more, the true-positive rate was 86.7%, false-positive rate 3.3%, positive likelihood ratio 26.4, negative likelihood ratio 0.14, AUC 94.4% and effect size 0.81. For the HK-EPDS-6, internal consistency was 0.76. At a cutoff of 8 or more, we found a true-positive rate of 86.7%, false-positive rate of 6.6%, positive likelihood ratio of 13.2, negative likelihood ratio of 0.14, AUC of 92.9% and effect size of 0.98. The HK-EPDS (cutoff ≥13) and HK-EPDS-6 (cutoff ≥8) are fit for PND screening in general-population post-partum women. The brief six-item version appears clinically suitable for quick screening in Chinese-speaking women. Copyright © 2013 Wiley Publishing Asia Pty Ltd.

  15. Statistical inference for tumor growth inhibition T/C ratio.

    PubMed

    Wu, Jianrong

    2010-09-01

    The tumor growth inhibition T/C ratio is commonly used to quantify treatment effects in drug screening tumor xenograft experiments. The T/C ratio is converted to an antitumor activity rating using an arbitrary cutoff point and often without any formal statistical inference. Here, we applied a nonparametric bootstrap method and a small sample likelihood ratio statistic to make a statistical inference of the T/C ratio, including both hypothesis testing and a confidence interval estimate. Furthermore, sample size and power are also discussed for statistical design of tumor xenograft experiments. Tumor xenograft data from an actual experiment were analyzed to illustrate the application.
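
    A percentile-bootstrap confidence interval for the T/C ratio of mean tumor volumes can be sketched as follows. This is a generic illustration with made-up volumes; it does not reproduce the paper's small-sample likelihood ratio statistic or its design calculations.

    ```python
    import numpy as np

    def bootstrap_tc_ratio(treated, control, n_boot=10_000, alpha=0.05, rng=None):
        """Percentile bootstrap confidence interval for the T/C ratio of means."""
        rng = np.random.default_rng(rng)
        treated, control = np.asarray(treated, float), np.asarray(control, float)
        ratios = np.empty(n_boot)
        for b in range(n_boot):
            t = rng.choice(treated, size=treated.size, replace=True)
            c = rng.choice(control, size=control.size, replace=True)
            ratios[b] = t.mean() / c.mean()
        point = treated.mean() / control.mean()
        lo, hi = np.quantile(ratios, [alpha / 2, 1 - alpha / 2])
        return point, (lo, hi)

    # Hypothetical end-of-study tumor volumes (mm^3)
    treated = [310, 270, 420, 350, 290, 380, 260, 330]
    control = [820, 760, 910, 700, 880, 790, 840, 730]
    print(bootstrap_tc_ratio(treated, control, rng=1))
    ```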

  16. Clinical Effectiveness of Prospectively Reported Sonographic Twinkling Artifact for the Diagnosis of Renal Calculus in Patients Without Known Urolithiasis.

    PubMed

    Masch, William R; Cohan, Richard H; Ellis, James H; Dillman, Jonathan R; Rubin, Jonathan M; Davenport, Matthew S

    2016-02-01

    The purpose of this study was to determine the clinical effectiveness of prospectively reported sonographic twinkling artifact for the diagnosis of renal calculus in patients without known urolithiasis. All ultrasound reports finalized in one health system from June 15, 2011, to June 14, 2014, that contained the words "twinkle" or "twinkling" in reference to suspected renal calculus were identified. Patients with known urolithiasis or lack of a suitable reference standard (unenhanced abdominal CT with ≤ 2.5-mm slice thickness performed ≤ 30 days after ultrasound) were excluded. The sensitivity, specificity, and positive likelihood ratio of sonographic twinkling artifact for the diagnosis of renal calculus were calculated by renal unit and stratified by two additional diagnostic features for calcification (echogenic focus, posterior acoustic shadowing). Eighty-five patients formed the study population. Isolated sonographic twinkling artifact had sensitivity of 0.78 (82/105), specificity of 0.40 (26/65), and a positive likelihood ratio of 1.30 for the diagnosis of renal calculus. Specificity and positive likelihood ratio improved and sensitivity declined when the following additional diagnostic features were present: sonographic twinkling artifact and echogenic focus (sensitivity, 0.61 [64/105]; specificity, 0.65 [42/65]; positive likelihood ratio, 1.72); sonographic twinkling artifact and posterior acoustic shadowing (sensitivity, 0.31 [33/105]; specificity, 0.95 [62/65]; positive likelihood ratio, 6.81); all three features (sensitivity, 0.31 [33/105]; specificity, 0.95 [62/65]; positive likelihood ratio, 6.81). Isolated sonographic twinkling artifact has a high false-positive rate (60%) for the diagnosis of renal calculus in patients without known urolithiasis.

  17. Defining thresholds of specific IgE levels to grass pollen and birch pollen allergens improves clinical interpretation.

    PubMed

    Van Hoeyveld, Erna; Nickmans, Silvie; Ceuppens, Jan L; Bossuyt, Xavier

    2015-10-23

    Cut-off values and predictive values are used for the clinical interpretation of specific IgE antibody results. However, cut-off levels are not well defined, and predictive values depend on the prevalence of disease. The objective of this study was to document clinically relevant diagnostic accuracy of specific IgE for inhalant allergens (grass pollen and birch pollen) based on test-result-interval-specific likelihood ratios. Likelihood ratios are independent of the prevalence and make it possible to provide diagnostic accuracy information for test result intervals. In a prospective study we included consecutive adult patients presenting at an allergy clinic with complaints of rhinitis or rhinoconjunctivitis. The standard for diagnosis was a suggestive clinical history of grass or birch pollen allergy and a positive skin test. Specific IgE was determined with the ImmunoCAP Fluorescence Enzyme Immuno-Assay. We established specific IgE test-result-interval related likelihood ratios for clinical allergy to inhalant allergens (grass pollen, rPhl p 1,5; birch pollen, rBet v 1). The likelihood ratios for allergy increased with increasing specific IgE antibody levels. The likelihood ratio was <0.03 for specific IgE <0.1 kU/L, between 0.1 and 1.4 for specific IgE between 0.1 kU/L and 0.35 kU/L, between 1.4 and 4.2 for specific IgE between 0.35 kU/L and 3.5 kU/L, >6.3 for specific IgE >0.7, and very high (∞) for specific IgE >3.5 kU/L. Test-result-interval specific likelihood ratios provide a useful tool for the interpretation of specific IgE test results for inhalant allergens. Copyright © 2015 Elsevier B.V. All rights reserved.
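
    An interval-specific likelihood ratio is simply the proportion of allergic patients whose result falls in a given interval divided by the corresponding proportion among non-allergic patients. A minimal sketch with hypothetical specific-IgE values and the kU/L interval boundaries mentioned above:

    ```python
    import numpy as np

    def interval_likelihood_ratios(values_pos, values_neg, edges):
        """Interval-specific likelihood ratios from test results in two groups."""
        values_pos, values_neg = np.asarray(values_pos), np.asarray(values_neg)
        pos_counts, _ = np.histogram(values_pos, bins=edges)
        neg_counts, _ = np.histogram(values_neg, bins=edges)
        p_pos = pos_counts / len(values_pos)   # P(result in interval | allergic)
        p_neg = neg_counts / len(values_neg)   # P(result in interval | not allergic)
        with np.errstate(divide="ignore", invalid="ignore"):
            return p_pos / p_neg               # inf where no non-allergic results fall

    # Hypothetical specific-IgE results (kU/L) for allergic and non-allergic patients
    edges = [0, 0.1, 0.35, 3.5, np.inf]
    allergic = [0.2, 0.6, 1.2, 4.0, 7.5, 2.8, 0.4, 9.1]
    non_allergic = [0.02, 0.05, 0.2, 0.3, 0.08, 0.5, 0.04, 0.15]
    print(interval_likelihood_ratios(allergic, non_allergic, edges))
    ```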

  18. Branching-ratio approximation for the self-exciting Hawkes process

    NASA Astrophysics Data System (ADS)

    Hardiman, Stephen J.; Bouchaud, Jean-Philippe

    2014-12-01

    We introduce a model-independent approximation for the branching ratio of Hawkes self-exciting point processes. Our estimator requires knowing only the mean and variance of the event count in a sufficiently large time window, statistics that are readily obtained from empirical data. The method we propose greatly simplifies the estimation of the Hawkes branching ratio, recently proposed as a proxy for market endogeneity and formerly estimated using numerical likelihood maximization. We employ our method to support recent theoretical and experimental results indicating that the best fitting Hawkes model to describe S&P futures price changes is in fact critical (now and in the recent past) in light of the long memory of financial market activity.
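
    A moment-based estimator of the kind described above can be sketched as follows: for a stationary Hawkes process observed in sufficiently long windows, the variance-to-mean ratio of the window counts is approximately 1/(1-n)², suggesting n ≈ 1 - sqrt(mean/variance). This is a hedged sketch of that idea; the estimator actually proposed in the paper may differ in its details.

    ```python
    import numpy as np

    def branching_ratio_estimate(event_times, window, t_max):
        """Moment-based branching-ratio estimate from event counts in windows.

        Uses Var(N)/E(N) ~ 1/(1-n)^2 for a stationary Hawkes process observed
        in long windows, so n ~ 1 - sqrt(E(N)/Var(N))."""
        edges = np.arange(0.0, t_max + window, window)
        counts, _ = np.histogram(event_times, bins=edges)
        mean, var = counts.mean(), counts.var(ddof=1)
        return 1.0 - np.sqrt(mean / var)

    # Toy illustration on a plain Poisson process (true branching ratio = 0)
    rng = np.random.default_rng(0)
    times = np.sort(rng.uniform(0, 10_000, size=20_000))
    print(branching_ratio_estimate(times, window=50.0, t_max=10_000))  # near 0
    ```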

  19. A quantum framework for likelihood ratios

    NASA Astrophysics Data System (ADS)

    Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C.

    The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes’ theorem either defaults to the marginal probability driven “naive Bayes’ classifier”, or requires the use of compensatory expectation-maximization techniques. This paper takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes’ theorem is a special case of a more general quantum mechanical expression.

  20. Likelihood ratio decisions in memory: three implied regularities.

    PubMed

    Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T

    2009-06-01

    We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.

  1. A parimutuel gambling perspective to compare probabilistic seismicity forecasts

    NASA Astrophysics Data System (ADS)

    Zechar, J. Douglas; Zhuang, Jiancang

    2014-10-01

    Using analogies to gaming, we consider the problem of comparing multiple probabilistic seismicity forecasts. To measure relative model performance, we suggest a parimutuel gambling perspective which addresses shortcomings of other methods such as likelihood ratio, information gain and Molchan diagrams. We describe two variants of the parimutuel approach for a set of forecasts: head-to-head, in which forecasts are compared in pairs, and round table, in which all forecasts are compared simultaneously. For illustration, we compare the 5-yr forecasts of the Regional Earthquake Likelihood Models experiment for M4.95+ seismicity in California.

  2. Empirical likelihood method for non-ignorable missing data problems.

    PubMed

    Guan, Zhong; Qin, Jing

    2017-01-01

    The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness, in which the missingness of a response depends on its own value, is the most difficult missing data problem. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available apart from fully parametric model-based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method we obtain constrained maximum empirical likelihood estimators of the parameters in the missing probability and of the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, analysis of data from a real AIDS trial shows that the missingness of CD4 counts at around two years is non-ignorable and that the sample mean based on the observed data only is biased.

  3. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation

    PubMed Central

    Li, Hong; Lu, Mingquan

    2017-01-01

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks. PMID:28665318

  4. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.

    PubMed

    Wang, Fei; Li, Hong; Lu, Mingquan

    2017-06-30

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.

  5. Likelihood-Ratio DIF Testing: Effects of Nonnormality

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    Differential item functioning (DIF) occurs when an item has different measurement properties for members of one group versus another. Likelihood-ratio (LR) tests for DIF based on item response theory (IRT) involve statistically comparing IRT models that vary with respect to their constraints. A simulation study evaluated how violation of the…

  6. Testing the non-unity of rate ratio under inverse sampling.

    PubMed

    Tang, Man-Lai; Liao, Yi Jie; Ng, Hong Keung Tony; Chan, Ping Shing

    2007-08-01

    Inverse sampling is considered a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio and conditional score statistics. Three methods (the asymptotic, conditional exact, and mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, the asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is expected and fluctuations of size around the pre-chosen nominal level are allowed, then the mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study. © 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Examining the types and payments of the disabilities of the insurants in the National Farmers' Health Insurance program in Taiwan.

    PubMed

    Wang, Jiun-Hao; Chang, Hung-Hao

    2010-10-26

    In contrast to the considerable body of literature concerning the disabilities of the general population, little information exists pertaining to the disabilities of the farm population. Focusing on the disability issue among the insurants in the Farmers' Health Insurance (FHI) program in Taiwan, this paper examines the associations among socio-demographic characteristics, insured factors, and the introduction of the national health insurance program, as well as the types and payments of disabilities among the insurants. A unique dataset containing 1,594,439 insurants in 2008 was used in this research. A logistic regression model was estimated for the likelihood of receiving disability payments. Focusing on the recipients, a disability payment equation and a disability type equation were estimated using the ordinary least squares method and a multinomial logistic model, respectively, to investigate the effects of the exogenous factors on the received payments and the likelihood of having different types of disabilities. Age and job category are significantly associated with the likelihood of receiving disability payments. Compared to those under age 45, the likelihood is higher among recipients aged 85 and above (odds ratio 8.04). Compared to hired workers, the odds ratios for the self-employed and for spouses of farm operators who were not members of farmers' associations are 0.97 and 0.85, respectively. In addition, older insurants are more likely to have eye problems; few differences in disability types are related to insured job categories. Results indicate that older farmers are more likely to receive disability payments, but the likelihood is not much different among insurants of various job categories. Among all of the selected types of disability, the highest likelihood is found for eye disability. In addition, the introduction of the national health insurance program decreased the likelihood of receiving disability payments. The experience in Taiwan can be valuable for other countries that are at an initial stage of implementing a universal health insurance program.

  8. Analysis of case-parent trios at a locus with a deletion allele: association of GSTM1 with autism.

    PubMed

    Buyske, Steven; Williams, Tanishia A; Mars, Audrey E; Stenroos, Edward S; Ming, Sue X; Wang, Rong; Sreenath, Madhura; Factura, Marivic F; Reddy, Chitra; Lambert, George H; Johnson, William G

    2006-02-10

    Certain loci on the human genome, such as glutathione S-transferase M1 (GSTM1), do not permit heterozygotes to be reliably determined by commonly used methods. Association of such a locus with a disease is therefore generally tested with a case-control design. When subjects have already been ascertained in a case-parent design however, the question arises as to whether the data can still be used to test disease association at such a locus. A likelihood ratio test was constructed that can be used with a case-parents design but has somewhat less power than a Pearson's chi-squared test that uses a case-control design. The test is illustrated on a novel dataset showing a genotype relative risk near 2 for the homozygous GSTM1 deletion genotype and autism. Although the case-control design will remain the mainstay for a locus with a deletion, the likelihood ratio test will be useful for such a locus analyzed as part of a larger case-parent study design. The likelihood ratio test has the advantage that it can incorporate complete and incomplete case-parent trios as well as independent cases and controls. Both analyses support (p = 0.046 for the proposed test, p = 0.028 for the case-control analysis) an association of the homozygous GSTM1 deletion genotype with autism.

  9. Analysis of case-parent trios at a locus with a deletion allele: association of GSTM1 with autism

    PubMed Central

    Buyske, Steven; Williams, Tanishia A; Mars, Audrey E; Stenroos, Edward S; Ming, Sue X; Wang, Rong; Sreenath, Madhura; Factura, Marivic F; Reddy, Chitra; Lambert, George H; Johnson, William G

    2006-01-01

    Background Certain loci on the human genome, such as glutathione S-transferase M1 (GSTM1), do not permit heterozygotes to be reliably determined by commonly used methods. Association of such a locus with a disease is therefore generally tested with a case-control design. When subjects have already been ascertained in a case-parent design however, the question arises as to whether the data can still be used to test disease association at such a locus. Results A likelihood ratio test was constructed that can be used with a case-parents design but has somewhat less power than a Pearson's chi-squared test that uses a case-control design. The test is illustrated on a novel dataset showing a genotype relative risk near 2 for the homozygous GSTM1 deletion genotype and autism. Conclusion Although the case-control design will remain the mainstay for a locus with a deletion, the likelihood ratio test will be useful for such a locus analyzed as part of a larger case-parent study design. The likelihood ratio test has the advantage that it can incorporate complete and incomplete case-parent trios as well as independent cases and controls. Both analyses support (p = 0.046 for the proposed test, p = 0.028 for the case-control analysis) an association of the homozygous GSTM1 deletion genotype with autism. PMID:16472391

  10. Bayesian Hierarchical Random Effects Models in Forensic Science.

    PubMed

    Aitken, Colin G G

    2018-01-01

    Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history, dating from the Dreyfus case at the end of the nineteenth century, through the work at Bletchley Park in the Second World War, to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley, which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now become sufficiently well-developed and widespread that it is timely to provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package, for use by forensic scientists world-wide, that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. The purpose of this document is to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.

  11. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  12. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  13. New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0.

    PubMed

    Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier

    2010-05-01

    PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.

  14. Exclusion probabilities and likelihood ratios with applications to mixtures.

    PubMed

    Slooten, Klaas-Jan; Egeland, Thore

    2016-01-01

    The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.

  15. Variance change point detection for fractional Brownian motion based on the likelihood ratio test

    NASA Astrophysics Data System (ADS)

    Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz

    2018-01-01

    Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon in self-similar processes. It appears that for many real time series, the characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for critical change point detection in cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. We also propose a statistical test for assessing the significance of the estimated change point. In addition, an extensive simulation study is provided to evaluate the performance of the proposed method.
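
    A simplified version of likelihood-ratio change point detection for a variance change can be sketched for a zero-mean Gaussian sequence with independent increments (ignoring the correlation structure of fractional Brownian motion increments, which the paper handles). The statistic below compares split-sample variance estimates against a pooled estimate and picks the split that maximizes the log-likelihood ratio; the significance test proposed in the paper is not reproduced.

    ```python
    import numpy as np

    def variance_change_point(x, min_seg=10):
        """Likelihood-ratio estimate of a variance change point in a zero-mean
        Gaussian sequence with independent increments (a simplification of the
        fractional-Brownian-motion setting)."""
        x = np.asarray(x, float)
        n = len(x)
        s_all = np.mean(x ** 2)
        best_k, best_lr = None, -np.inf
        for k in range(min_seg, n - min_seg):
            s1 = np.mean(x[:k] ** 2)
            s2 = np.mean(x[k:] ** 2)
            lr = 0.5 * (n * np.log(s_all) - k * np.log(s1) - (n - k) * np.log(s2))
            if lr > best_lr:
                best_k, best_lr = k, lr
        return best_k, best_lr

    # Toy data: noise standard deviation doubles after index 500
    rng = np.random.default_rng(2)
    x = np.r_[rng.normal(0, 1.0, 500), rng.normal(0, 2.0, 500)]
    print(variance_change_point(x))   # change point estimate near 500
    ```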

  16. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    PubMed

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.

  17. Model-Free CUSUM Methods for Person Fit

    ERIC Educational Resources Information Center

    Armstrong, Ronald D.; Shi, Min

    2009-01-01

    This article demonstrates the use of a new class of model-free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model-free person-fit statistics…
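
    The fundamental step — accumulating per-item log-likelihood ratios in a one-sided CUSUM that resets at zero — can be sketched generically as below. The probabilities and the "misfit" alternative here are hypothetical placeholders, not the authors' model-free statistics.

    ```python
    import numpy as np

    def cusum_log_lr(responses, p_fit, p_misfit, threshold=3.0):
        """One-sided CUSUM accumulating per-item log-likelihood ratios
        (misfit vs fit hypotheses) for a 0/1 response vector."""
        responses = np.asarray(responses, float)
        p_fit, p_misfit = np.asarray(p_fit, float), np.asarray(p_misfit, float)
        log_lr = (responses * np.log(p_misfit / p_fit)
                  + (1 - responses) * np.log((1 - p_misfit) / (1 - p_fit)))
        path, s = [], 0.0
        for step in log_lr:
            s = max(0.0, s + step)   # reset at zero: standard CUSUM recursion
            path.append(s)
        flagged = max(path) > threshold
        return np.array(path), flagged

    # Toy example: model-implied success probabilities vs a random-responding alternative
    p_fit = np.full(20, 0.8)        # probabilities implied by the fitted model
    p_misfit = np.full(20, 0.5)     # hypothetical misfit (e.g., random responding)
    responses = np.array([1]*8 + [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0])
    path, flagged = cusum_log_lr(responses, p_fit, p_misfit)
    print(flagged, path.round(2))
    ```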

  18. Validation of the portable Air-Smart Spirometer

    PubMed Central

    Núñez Fernández, Marta; Pallares Sanmartín, Abel; Mouronte Roibas, Cecilia; Cerdeira Domínguez, Luz; Botana Rial, Maria Isabel; Blanco Cid, Nagore; Fernández Villar, Alberto

    2018-01-01

    Background The Air-Smart Spirometer is the first portable device accepted by the European Community (EC) that performs spirometric measurements by a turbine mechanism and displays the results on a smartphone or a tablet. Methods In this multicenter, descriptive and cross-sectional prospective study carried out in 2 hospital centers, we compare FEV1, FVC, and the FEV1/FVC ratio measured with the Air Smart-Spirometer device and a conventional spirometer, and analyze the ability of this new portable device to detect obstructions. Patients were included for 2 consecutive months. We calculate sensitivity, specificity, positive and negative predictive value (PPV and NPV) and likelihood ratios (LR+, LR-) as well as the Kappa Index to evaluate the concordance between the two devices for the detection of obstruction. The agreement and relation between the values of FEV1 and FVC in absolute value and the FEV1/FVC ratio measured by both devices were analyzed by calculating the intraclass correlation coefficient (ICC) and the Pearson correlation coefficient (r) respectively. Results 200 patients (100 from each center) were included with a mean age of 57 (± 14) years, 110 were men (55%). Obstruction was detected by conventional spirometry in 73 patients (40.1%). Using a FEV1/FVC ratio smaller than 0.7 to detect obstruction with the Air Smart-Spirometer, the kappa index was 0.88, sensitivity (90.4%), specificity (97.2%), PPV (95.7%), NPV (93.7%), positive likelihood ratio (32.29), and negative likelihood ratio (0.10). The ICC and r between FEV1, FVC, and FEV1/FVC ratio measured by the Air Smart Spirometer and the conventional spirometer were all higher than 0.94. Conclusion The Air-Smart Spirometer is a simple and very precise instrument for detecting obstructive airway diseases. It is easy to use, which could make it especially useful in non-specialized care and in other areas. PMID:29474502

  19. Neutron Tomography of a Fuel Cell: Statistical Learning Implementation of a Penalized Likelihood Method

    NASA Astrophysics Data System (ADS)

    Coakley, Kevin J.; Vecchia, Dominic F.; Hussey, Daniel S.; Jacobson, David L.

    2013-10-01

    At the NIST Neutron Imaging Facility, we collect neutron projection data for both the dry and wet states of a Proton-Exchange-Membrane (PEM) fuel cell. Transmitted thermal neutrons captured in a scintillator doped with lithium-6 produce scintillation light that is detected by an amorphous silicon detector. Based on joint analysis of the dry and wet state projection data, we reconstruct a residual neutron attenuation image with a Penalized Likelihood method with an edge-preserving Huber penalty function that has two parameters that control how well jumps in the reconstruction are preserved and how well noisy fluctuations are smoothed out. The choice of these parameters greatly influences the resulting reconstruction. We present a data-driven method that objectively selects these parameters, and study its performance for both simulated and experimental data. Before reconstruction, we transform the projection data so that the variance-to-mean ratio is approximately one. For both simulated and measured projection data, the Penalized Likelihood method reconstruction is visually sharper than a reconstruction yielded by a standard Filtered Back Projection method. In an idealized simulation experiment, we demonstrate that the cross validation procedure selects regularization parameters that yield a reconstruction that is nearly optimal according to a root-mean-square prediction error criterion.

  20. Interpretation of diagnostic data: 5. How to do it with simple maths.

    PubMed

    1983-11-01

    The use of simple maths with the likelihood ratio strategy fits in nicely with our clinical views. By making the most out of the entire range of diagnostic test results (i.e., several levels, each with its own likelihood ratio, rather than a single cut-off point and a single ratio) and by permitting us to keep track of the likelihood that a patient has the target disorder at each point along the diagnostic sequence, this strategy allows us to place patients at an extremely high or an extremely low likelihood of disease. Thus, the numbers of patients with ultimately false-positive results (who suffer the slings of labelling and the arrows of needless therapy) and of those with ultimately false-negative results (who therefore miss their chance for diagnosis and, possibly, efficacious therapy) will be dramatically reduced. The following guidelines will be useful in interpreting signs, symptoms and laboratory tests with the likelihood ratio strategy: Seek out, and demand from the clinical or laboratory experts who ought to know, the likelihood ratios for key symptoms and signs, and several levels (rather than just the positive and negative results) of diagnostic test results. Identify, when feasible, the logical sequence of diagnostic tests. Estimate the pretest probability of disease for the patient, and, using either the nomogram or the conversion formulas, apply the likelihood ratio that corresponds to the first diagnostic test result. While remembering that the resulting post-test probability or odds from the first test becomes the pretest probability or odds for the next diagnostic test, repeat the process for all the pertinent symptoms, signs and laboratory studies that pertain to the target disorder. However, these combinations may not be independent, and convergent diagnostic tests, if treated as independent, will combine to overestimate the final post-test probability of disease. You are now far more sophisticated in interpreting diagnostic tests than most of your teachers. In the last part of our series we will show you some rather complex strategies that combine diagnosis and therapy, quantify our as yet nonquantified ideas about use, and require the use of at least a hand calculator.
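
    A minimal numeric sketch of the likelihood ratio strategy described above is given below in Python; the pretest probability and the level-specific likelihood ratios are invented for illustration, and chaining several results assumes conditional independence of the findings, which, as the abstract cautions, will overestimate the final post-test probability for convergent tests.

      def odds(p):
          """Convert a probability to odds."""
          return p / (1.0 - p)

      def prob(o):
          """Convert odds back to a probability."""
          return o / (1.0 + o)

      def posttest_probability(pretest_p, likelihood_ratios):
          """Post-test odds = pretest odds x LR1 x LR2 x ... (independence assumed)."""
          o = odds(pretest_p)
          for lr in likelihood_ratios:
              o *= lr
          return prob(o)

      # Hypothetical example: 25% pretest probability, a sign carrying LR 4.5,
      # then a laboratory result whose level carries LR 0.8.
      print(round(posttest_probability(0.25, [4.5, 0.8]), 3))   # -> 0.545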

  1. Interpretation of diagnostic data: 5. How to do it with simple maths.

    PubMed Central

    1983-01-01

    The use of simple maths with the likelihood ratio strategy fits in nicely with our clinical views. By making the most out of the entire range of diagnostic test results (i.e., several levels, each with its own likelihood ratio, rather than a single cut-off point and a single ratio) and by permitting us to keep track of the likelihood that a patient has the target disorder at each point along the diagnostic sequence, this strategy allows us to place patients at an extremely high or an extremely low likelihood of disease. Thus, the numbers of patients with ultimately false-positive results (who suffer the slings of labelling and the arrows of needless therapy) and of those with ultimately false-negative results (who therefore miss their chance for diagnosis and, possibly, efficacious therapy) will be dramatically reduced. The following guidelines will be useful in interpreting signs, symptoms and laboratory tests with the likelihood ratio strategy: Seek out, and demand from the clinical or laboratory experts who ought to know, the likelihood ratios for key symptoms and signs, and several levels (rather than just the positive and negative results) of diagnostic test results. Identify, when feasible, the logical sequence of diagnostic tests. Estimate the pretest probability of disease for the patient, and, using either the nomogram or the conversion formulas, apply the likelihood ratio that corresponds to the first diagnostic test result. While remembering that the resulting post-test probability or odds from the first test becomes the pretest probability or odds for the next diagnostic test, repeat the process for all the pertinent symptoms, signs and laboratory studies that pertain to the target disorder. However, these combinations may not be independent, and convergent diagnostic tests, if treated as independent, will combine to overestimate the final post-test probability of disease. You are now far more sophisticated in interpreting diagnostic tests than most of your teachers. In the last part of our series we will show you some rather complex strategies that combine diagnosis and therapy, quantify our as yet nonquantified ideas about use, and require the use of at least a hand calculator. PMID:6671182

  2. Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures

    ERIC Educational Resources Information Center

    Atar, Burcu; Kamata, Akihito

    2011-01-01

    The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…

  3. Understanding the properties of diagnostic tests - Part 2: Likelihood ratios.

    PubMed

    Ranganathan, Priya; Aggarwal, Rakesh

    2018-01-01

    Diagnostic tests are used to identify subjects with and without disease. In a previous article in this series, we examined some attributes of diagnostic tests - sensitivity, specificity, and predictive values. In this second article, we look at likelihood ratios, which are useful for the interpretation of diagnostic test results in everyday clinical practice.

  4. Screening for Wilson disease in acute liver failure: a comparison of currently available diagnostic tests.

    PubMed

    Korman, Jessica D; Volenberg, Irene; Balko, Jody; Webster, Joe; Schiodt, Frank V; Squires, Robert H; Fontana, Robert J; Lee, William M; Schilsky, Michael L

    2008-10-01

    Acute liver failure (ALF) due to Wilson disease (WD) is invariably fatal without emergency liver transplantation. Therefore, rapid diagnosis of WD should aid prompt transplant listing. To identify the best method for diagnosis of ALF due to WD (ALF-WD), data and serum were collected from 140 ALF patients (16 with WD), 29 with other chronic liver diseases and 17 with treated chronic WD. Ceruloplasmin (Cp) was measured by both oxidase activity and nephelometry and serum copper levels by atomic absorption spectroscopy. In patients with ALF, a serum Cp <20 mg/dL by the oxidase method provided a diagnostic sensitivity of 21% and specificity of 84% while, by nephelometry, a sensitivity of 56% and specificity of 63%. Serum copper levels exceeded 200 microg/dL in all ALF-WD patients measured (13/16), but were also elevated in non-WD ALF. An alkaline phosphatase (AP) to total bilirubin (TB) ratio <4 yielded a sensitivity of 94%, specificity of 96%, and a likelihood ratio of 23 for diagnosing fulminant WD. In addition, an AST:ALT ratio >2.2 yielded a sensitivity of 94%, a specificity of 86%, and a likelihood ratio of 7 for diagnosing fulminant WD. Combining the tests provided a diagnostic sensitivity and specificity of 100%. Conventional WD testing utilizing serum ceruloplasmin and/or serum copper levels is less sensitive and specific in identifying patients with ALF-WD than other available tests. By contrast, more readily available laboratory tests, including alkaline phosphatase, bilirubin, and serum aminotransferases, provide the most rapid and accurate method for diagnosis of ALF due to WD.
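
    The likelihood ratios reported in this record follow directly from the quoted sensitivities and specificities. A quick check in Python, assuming the usual definitions LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity, reproduces values of roughly 23 for the AP:TB ratio and roughly 7 for the AST:ALT ratio.

      def positive_lr(sensitivity, specificity):
          """LR+ = P(positive result | disease) / P(positive result | no disease)."""
          return sensitivity / (1.0 - specificity)

      def negative_lr(sensitivity, specificity):
          """LR- = P(negative result | disease) / P(negative result | no disease)."""
          return (1.0 - sensitivity) / specificity

      # AP:TB ratio < 4 (sensitivity 94%, specificity 96%)
      print(round(positive_lr(0.94, 0.96), 1))   # -> 23.5, quoted as 23
      print(round(negative_lr(0.94, 0.96), 2))   # -> 0.06 (not quoted in the abstract)
      # AST:ALT ratio > 2.2 (sensitivity 94%, specificity 86%)
      print(round(positive_lr(0.94, 0.86), 1))   # -> 6.7, quoted as 7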

  5. Two models for evaluating landslide hazards

    USGS Publications Warehouse

    Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.

    2006-01-01

    Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.

  6. An ERTS-1 investigation for Lake Ontario and its basin

    NASA Technical Reports Server (NTRS)

    Polcyn, F. C.; Falconer, A. (Principal Investigator); Wagner, T. W.; Rebel, D. L.

    1975-01-01

    The author has identified the following significant results. Methods of manual, semi-automatic, and automatic (computer) data processing were evaluated, as were the requirements for spatial physiographic and limnological information. The coupling of specially processed ERTS data with simulation models of the watershed precipitation/runoff process provides potential for water resources management. Optimal and full use of the data requires a mix of data processing and analysis techniques, including single band editing, two band ratios, and multiband combinations. A combination of maximum likelihood ratio and near-IR/red band ratio processing was found to be particularly useful.

  7. The effect of rare variants on inflation of the test statistics in case-control analyses.

    PubMed

    Pirie, Ailith; Wood, Angela; Lush, Michael; Tyrer, Jonathan; Pharoah, Paul D P

    2015-02-20

    The detection of bias due to cryptic population structure is an important step in the evaluation of findings of genetic association studies. The standard method of measuring this bias in a genetic association study is to compare the observed median association test statistic to the expected median test statistic. This ratio is inflated in the presence of cryptic population structure. However, inflation may also be caused by the properties of the association test itself particularly in the analysis of rare variants. We compared the properties of the three most commonly used association tests: the likelihood ratio test, the Wald test and the score test when testing rare variants for association using simulated data. We found evidence of inflation in the median test statistics of the likelihood ratio and score tests for tests of variants with less than 20 heterozygotes across the sample, regardless of the total sample size. The test statistics for the Wald test were under-inflated at the median for variants below the same minor allele frequency. In a genetic association study, if a substantial proportion of the genetic variants tested have rare minor allele frequencies, the properties of the association test may mask the presence or absence of bias due to population structure. The use of either the likelihood ratio test or the score test is likely to lead to inflation in the median test statistic in the absence of population structure. In contrast, the use of the Wald test is likely to result in under-inflation of the median test statistic which may mask the presence of population structure.
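
    The inflation measure described here is often summarised as the ratio of the observed median association statistic to its expected median under the null. The sketch below (Python with NumPy and SciPy) computes that ratio for simulated 1-degree-of-freedom chi-square statistics rather than the authors' data; under the null and with a well-behaved test it should be close to 1.

      import numpy as np
      from scipy.stats import chi2

      def inflation_factor(test_statistics, df=1):
          """Observed median test statistic divided by the expected median
          of a chi-square distribution with df degrees of freedom."""
          return np.median(test_statistics) / chi2.median(df)

      # Simulated statistics from a null analysis with no population structure:
      # the ratio should be close to 1.
      rng = np.random.default_rng(0)
      null_stats = rng.chisquare(df=1, size=100_000)
      print(round(inflation_factor(null_stats), 3))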

  8. Meta-analysis: accuracy of rapid tests for malaria in travelers returning from endemic areas.

    PubMed

    Marx, Arthur; Pewsner, Daniel; Egger, Matthias; Nüesch, Reto; Bucher, Heiner C; Genton, Blaise; Hatz, Christoph; Jüni, Peter

    2005-05-17

    Microscopic diagnosis of malaria is unreliable outside specialized centers. Rapid tests have become available in recent years, but their accuracy has not been assessed systematically. To determine the accuracy of rapid diagnostic tests for ruling out malaria in nonimmune travelers returning from malaria-endemic areas. The authors searched MEDLINE, EMBASE, CAB Health, and CINAHL (1988 to September 2004); hand-searched conference proceedings; checked reference lists; and contacted experts and manufacturers. Diagnostic accuracy studies in nonimmune individuals with suspected malaria were included if they compared rapid tests with expert microscopic examination or polymerase chain reaction tests. Data on study and patient characteristics and results were extracted in duplicate. The main outcome was the likelihood ratio for a negative test result (negative likelihood ratio) for Plasmodium falciparum malaria. Likelihood ratios were combined by using random-effects meta-analysis, stratified by the antigen targeted (histidine-rich protein-2 [HRP-2] or parasite lactate dehydrogenase [LDH]) and by test generation. Nomograms of post-test probabilities were constructed. The authors included 21 studies and 5747 individuals. For P. falciparum, HRP-2-based tests were more accurate than parasite LDH-based tests: Negative likelihood ratios were 0.08 and 0.13, respectively (P = 0.019 for difference). Three-band HRP-2 tests had similar negative likelihood ratios but higher positive likelihood ratios compared with 2-band tests (34.7 vs. 98.5; P = 0.003). For P. vivax, negative likelihood ratios tended to be closer to 1.0 for HRP-2-based tests than for parasite LDH-based tests (0.24 vs. 0.13; P = 0.22), but analyses were based on a few heterogeneous studies. Negative likelihood ratios for the diagnosis of P. malariae or P. ovale were close to 1.0 for both types of tests. In febrile travelers returning from sub-Saharan Africa, the typical probability of P. falciparum malaria is estimated at 1.1% (95% CI, 0.6% to 1.9%) after a negative 3-band HRP-2 test result and 97% (CI, 92% to 99%) after a positive test result. Few studies evaluated 3-band HRP-2 tests. The evidence is also limited for species other than P. falciparum because of the few available studies and their more heterogeneous results. Further studies are needed to determine whether the use of rapid diagnostic tests improves outcomes in returning travelers with suspected malaria. Rapid malaria tests may be a useful diagnostic adjunct to microscopy in centers without major expertise in tropical medicine. Initial decisions on treatment initiation and choice of antimalarial drugs can be based on travel history and post-test probabilities after rapid testing. Expert microscopy is still required for species identification and confirmation.

  9. Evidence and Clinical Trials.

    NASA Astrophysics Data System (ADS)

    Goodman, Steven N.

    1989-11-01

    This dissertation explores the use of a mathematical measure of statistical evidence, the log likelihood ratio, in clinical trials. The methods and thinking behind the use of an evidential measure are contrasted with traditional methods of analyzing data, which depend primarily on a p-value as an estimate of the statistical strength of an observed data pattern. It is contended that neither the behavioral dictates of Neyman-Pearson hypothesis testing methods, nor the coherency dictates of Bayesian methods are realistic models on which to base inference. The use of the likelihood alone is applied to four aspects of trial design or conduct: the calculation of sample size, the monitoring of data, testing for the equivalence of two treatments, and meta-analysis--the combining of results from different trials. Finally, a more general model of statistical inference, using belief functions, is used to see if it is possible to separate the assessment of evidence from our background knowledge. It is shown that traditional and Bayesian methods can be modeled as two ends of a continuum of structured background knowledge, methods which summarize evidence at the point of maximum likelihood assuming no structure, and Bayesian methods assuming complete knowledge. Both schools are seen to be missing a concept of ignorance- -uncommitted belief. This concept provides the key to understanding the problem of sampling to a foregone conclusion and the role of frequency properties in statistical inference. The conclusion is that statistical evidence cannot be defined independently of background knowledge, and that frequency properties of an estimator are an indirect measure of uncommitted belief. Several likelihood summaries need to be used in clinical trials, with the quantitative disparity between summaries being an indirect measure of our ignorance. This conclusion is linked with parallel ideas in the philosophy of science and cognitive psychology.

  10. On the Likelihood Ratio Test for the Number of Factors in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Hayashi, Kentaro; Bentler, Peter M.; Yuan, Ke-Hai

    2007-01-01

    In the exploratory factor analysis, when the number of factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution due to a problem of rank deficiency and nonidentifiability of model parameters. As a result, decisions regarding the number of factors may be incorrect. Several…

  11. An Evaluation of Statistical Strategies for Making Equating Function Selections. Research Report. ETS RR-08-60

    ERIC Educational Resources Information Center

    Moses, Tim

    2008-01-01

    Nine statistical strategies for selecting equating functions in an equivalent groups design were evaluated. The strategies of interest were likelihood ratio chi-square tests, regression tests, Kolmogorov-Smirnov tests, and significance tests for equated score differences. The most accurate strategies in the study were the likelihood ratio tests…

  12. Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes

    ERIC Educational Resources Information Center

    Leite, Walter L.; Stapleton, Laura M.

    2011-01-01

    In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…

  13. [Waist-to-height ratio is an indicator of metabolic risk in children].

    PubMed

    Valle-Leal, Jaime; Abundis-Castro, Leticia; Hernández-Escareño, Juan; Flores-Rubio, Salvador

    2016-01-01

    Abdominal fat, particularly visceral, is associated with a high risk of metabolic complications. The waist-height ratio (WHtR) is used to assess abdominal fat in individuals of all ages. To determine the ability of the waist-to-height ratio to detect metabolic risk in Mexican schoolchildren. A study was conducted on children between 6 and 12 years. Obesity was diagnosed as a body mass index (BMI) ≥ 85th percentile, and a WHtR ≥ 0.5 was considered abdominal obesity. Blood levels of glucose, cholesterol and triglycerides were measured. The sensitivity, specificity, positive and negative predictive values, area under the curve, positive likelihood ratio and negative likelihood ratio of the WHtR and BMI were calculated in order to identify metabolic alterations. WHtR and BMI were compared to determine which had the best diagnostic efficiency. Of the 223 children included in the study, 51 had hypertriglyceridaemia, 27 had hypercholesterolaemia, and 9 had hyperglycaemia. On comparing the diagnostic efficiency of WHtR with that of BMI, there was a sensitivity of 100% vs. 56% for hyperglycaemia, 93% vs. 70% for hypercholesterolaemia, and 76% vs. 59% for hypertriglyceridaemia. The specificity, negative predictive value, positive predictive value, positive likelihood ratio, negative likelihood ratio, and area under the curve were also higher for WHtR. The WHtR is a more efficient indicator than BMI in identifying metabolic risk in Mexican school-age children. Copyright © 2015 Sociedad Chilena de Pediatría. Publicado por Elsevier España, S.L.U. All rights reserved.
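
    All of the screening metrics listed in this record derive from a single 2 x 2 table. The sketch below (Python, with made-up counts rather than the study data) collects the standard formulas for sensitivity, specificity, predictive values, and the two likelihood ratios.

      def screening_metrics(tp, fp, fn, tn):
          """Standard diagnostic metrics from a 2 x 2 table of counts."""
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          return {
              "sensitivity": sens,
              "specificity": spec,
              "PPV": tp / (tp + fp),
              "NPV": tn / (tn + fn),
              "LR+": sens / (1.0 - spec),
              "LR-": (1.0 - sens) / spec,
          }

      # Hypothetical counts: 40 true positives, 20 false positives,
      # 10 false negatives, 150 true negatives.
      for name, value in screening_metrics(40, 20, 10, 150).items():
          print(f"{name}: {value:.2f}")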

  14. Resolving the false-negative issues of the nonpolar organic amendment in whole-sediment toxicity identification evaluations.

    PubMed

    Mehler, W Tyler; Keough, Michael J; Pettigrove, Vincent

    2018-04-01

    Three common false-negative scenarios have been encountered with amendment addition in whole-sediment toxicity identification evaluations (TIEs): dilution of toxicity by amendment addition (i.e., not toxic enough), not enough amendment present to reduce toxicity (i.e., too toxic), and the amendment itself elicits a toxic response (i.e., secondary amendment effect). One amendment for which all 3 types of false-negatives have been observed is the nonpolar organic amendment (activated carbon or powdered coconut charcoal). The objective of the present study was to reduce the likelihood of encountering false-negatives with this amendment and to increase the value of the whole-sediment TIE bioassay. To do this, the present study evaluated the effects of various activated carbon additions on survival, growth, emergence, and mean development rate of Chironomus tepperi. Using this information, an alternative method for this amendment was developed which utilized a combination of multiple amendment addition ratios based on wet weight (1%, lower likelihood of the secondary amendment effect; 5%, higher reduction of contaminant) and nonconventional endpoints (emergence, mean development rate). This alternative method was then validated in the laboratory (using spiked sediments) and with contaminated field sediments. Using these multiple activated carbon ratios in combination with additional endpoints (namely, emergence) reduced the likelihood of all 3 types of false-negatives and provided a more sensitive evaluation of risk. Environ Toxicol Chem 2018;37:1219-1230. © 2017 SETAC.

  15. Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter

    NASA Astrophysics Data System (ADS)

    Murphy, T.; Holzinger, M.

    2016-09-01

    Both SSA and SDA necessitate uncued, partially informed detection and orbit determination efforts for small space objects, which often produce only low strength electro-optical signatures. General frame to frame detection and tracking of objects includes methods such as moving target indicator, multiple hypothesis testing, direct track-before-detect methods, and random finite set based multiobject tracking. This paper will apply the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary innovation in this paper is a detailed analysis of the existing state-of-the-art likelihood functions and a likelihood function, based on a binary hypothesis, previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5m Raven-class telescope, and a twenty degree field of view high frame rate CMOS sensor. In particular, a data set of an extended pass of the Hitomi Astro-H satellite approximately 3 days after loss of communication and potential break up is examined.

  16. A Study of Dim Object Detection for the Space Surveillance Telescope

    DTIC Science & Technology

    2013-03-21

    Current methods of dim object detection for space surveillance make use of a Gaussian log-likelihood-ratio-test-based... quantitatively comparing the efficacy of two methods for dim object detection, termed in this paper the point detector and the correlator, both of which rely... applications. It is used in national defense for detecting satellites. It is used to detect space debris, which threatens both civilian and

  17. Signal-to-noise ratio estimation in digital computer simulation of lowpass and bandpass systems with applications to analog and digital communications, volume 3

    NASA Technical Reports Server (NTRS)

    Tranter, W. H.; Turner, M. D.

    1977-01-01

    Techniques are developed to estimate power gain, delay, signal-to-noise ratio, and mean square error in digital computer simulations of lowpass and bandpass systems. The techniques are applied to analog and digital communications. The signal-to-noise ratio estimates are shown to be maximum likelihood estimates in additive white Gaussian noise. The methods are seen to be especially useful for digital communication systems where the mapping from the signal-to-noise ratio to the error probability can be obtained. Simulation results show the techniques developed to be accurate and quite versatile in evaluating the performance of many systems through digital computer simulation.
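
    As a rough illustration of this kind of estimate (not the authors' algorithm), the sketch below fits a gain between a known reference waveform and a noisy simulated output by least squares, which coincides with the maximum likelihood fit in additive white Gaussian noise when the delay is already aligned, and reports the resulting signal-to-noise ratio in decibels; all waveform and noise parameters are invented.

      import numpy as np

      def estimate_snr_db(reference, received):
          """Least-squares gain fit (ML in additive white Gaussian noise, delay
          assumed aligned), then SNR = fitted signal power / residual noise power."""
          gain = np.dot(reference, received) / np.dot(reference, reference)
          residual = received - gain * reference
          signal_power = np.mean((gain * reference) ** 2)
          noise_power = np.mean(residual ** 2)
          return 10.0 * np.log10(signal_power / noise_power)

      # Simulated example: a sinusoidal reference observed with gain 0.8 in
      # Gaussian noise at a true SNR of 10 dB (linear factor 10).
      rng = np.random.default_rng(1)
      t = np.arange(4096) / 4096.0
      ref = np.sin(2 * np.pi * 50 * t)
      noise_std = np.sqrt((0.8 ** 2 * 0.5) / 10.0)
      rx = 0.8 * ref + rng.normal(0.0, noise_std, ref.size)
      print(round(estimate_snr_db(ref, rx), 1))   # close to 10 dB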

  18. Accounting for informatively missing data in logistic regression by means of reassessment sampling.

    PubMed

    Lin, Ji; Lyles, Robert H

    2015-05-20

    We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
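
    The model-selection step mentioned here is the standard nested-model likelihood ratio test, in which twice the difference in maximized log-likelihoods is referred to a chi-square distribution with degrees of freedom equal to the number of constrained parameters. A generic sketch (Python with SciPy, not the authors' reassessment code; the log-likelihood values are invented) is shown below.

      from scipy.stats import chi2

      def likelihood_ratio_test(loglik_full, loglik_reduced, df):
          """Nested-model LRT: 2*(ll_full - ll_reduced) ~ chi-square(df) under the
          null hypothesis that the reduced model (e.g. missing at random) is adequate."""
          stat = 2.0 * (loglik_full - loglik_reduced)
          return stat, chi2.sf(stat, df)

      # Hypothetical maximized log-likelihoods from two fitted models that
      # differ by one missingness parameter.
      stat, p = likelihood_ratio_test(loglik_full=-512.3, loglik_reduced=-515.9, df=1)
      print(f"LR statistic = {stat:.2f}, p = {p:.4f}")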

  19. Noncentral Chi-Square versus Normal Distributions in Describing the Likelihood Ratio Statistic: The Univariate Case and Its Multivariate Implication

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai

    2008-01-01

    In the literature of mean and covariance structure analysis, noncentral chi-square distribution is commonly used to describe the behavior of the likelihood ratio (LR) statistic under alternative hypothesis. Due to the inaccessibility of the rather technical literature for the distribution of the LR statistic, it is widely believed that the…

  20. Comparative study of diagnostic accuracy of established PCR assays and in-house developed sdaA PCR method for detection of Mycobacterium tuberculosis in symptomatic patients with pulmonary tuberculosis.

    PubMed

    Nimesh, Manoj; Joon, Deepali; Pathak, Anil Kumar; Saluja, Daman

    2013-11-01

    India's contribution to the global burden of tuberculosis is about 26%. In the present study we have developed an in-house PCR assay using primers for the sdaA gene of Mycobacterium tuberculosis and evaluated it against the already established devR, IS6110, MPB64 and rpoB primers for the diagnosis of pulmonary tuberculosis. Using a universal sample preparation (USP) method, DNA was extracted from sputum specimens of 412 symptomatic patients from Delhi, India. The DNA so extracted was used as a template for PCR amplification using primers targeting the sdaA, devR, IS6110, MPB64 and rpoB genes. Out of 412, 149 specimens were considered positive based on composite reference standard (CRS) criteria. The in-house designed sdaA PCR showed high specificity (96.5%), a high positive likelihood ratio (28), high sensitivity (95.9%), and a very low negative likelihood ratio (0.04) in comparison to the CRS. Based on our results, the sdaA PCR assay can be considered one of the most reliable diagnostic tests in comparison to other PCR-based detection methods. Copyright © 2013 The British Infection Association. Published by Elsevier Ltd. All rights reserved.

  1. Usefulness of Two Aspergillus PCR Assays and Aspergillus Galactomannan and β-d-Glucan Testing of Bronchoalveolar Lavage Fluid for Diagnosis of Chronic Pulmonary Aspergillosis

    PubMed Central

    Urabe, Naohisa; Sano, Go; Suzuki, Junko; Hebisawa, Akira; Nakamura, Yasuhiko; Koyama, Kazuya; Ishii, Yoshikazu; Tateda, Kazuhiro; Homma, Sakae

    2017-01-01

    We evaluated the usefulness of an Aspergillus galactomannan (GM) test, a β-d-glucan (βDG) test, and two different Aspergillus PCR assays of bronchoalveolar lavage fluid (BALF) samples for the diagnosis of chronic pulmonary aspergillosis (CPA). BALF samples from 30 patients with and 120 patients without CPA were collected. We calculated the sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio for each test individually and in combination with other tests. The optical density index values, as determined by receiver operating characteristic analysis, for the diagnosis of CPA were 0.5 and 100 for GM and βDG testing of BALF, respectively. The sensitivity and specificity of the GM test, βDG test, and PCR assays 1 and 2 were 77.8% and 90.0%, 77.8% and 72.5%, 86.7% and 84.2%, and 66.7% and 94.2%, respectively. A comparison of the PCR assays showed that PCR assay 1 had a better sensitivity, a better negative predictive value, and a better negative likelihood ratio and PCR assay 2 had a better specificity, a better positive predictive value, and a better positive likelihood ratio. The combination of the GM and βDG tests had the highest diagnostic odds ratio. The combination of the GM and βDG tests on BALF was more useful than any single test for diagnosing CPA. PMID:28330887

  2. Lead isotope ratios for bullets, forensic evaluation in a Bayesian paradigm.

    PubMed

    Sjåstad, Knut-Endre; Lucy, David; Andersen, Tom

    2016-01-01

    Forensic science is a discipline concerned with collection, examination and evaluation of physical evidence related to criminal cases. The results from the activities of the forensic scientist may ultimately be presented to the court in such a way that the triers of fact understand the implications of the data. Forensic science has been, and still is, driven by development of new technology, and in the last two decades evaluation of evidence based on logical reasoning and Bayesian statistics has reached some level of general acceptance within the forensic community. Tracing of lead fragments of unknown origin to a given source of ammunition is a task that might be of interest for the Court. Use of data from lead isotope ratio analysis, interpreted within a Bayesian framework, has been shown to be a suitable method to guide the Court in drawing its conclusion in such a task. In this work we have used the isotopic composition of lead from small arms projectiles (cal. .22) and developed an approach based on Bayesian statistics and likelihood ratio calculation. The likelihood ratio is a single quantity that provides a measure of the value of evidence that can be used in the deliberation of the court. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Reliable and More Powerful Methods for Power Analysis in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun

    2017-01-01

    The normal-distribution-based likelihood ratio statistic T_ml = nF_ml is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that T_ml follows a central chi-square distribution under H_0 and a noncentral chi-square…

  4. Detection and Estimation of an Optical Image by Photon-Counting Techniques. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, Lily Lee

    1973-01-01

    Statistical description of a photoelectric detector is given. The photosensitive surface of the detector is divided into many small areas, and the moment generating function of the photo-counting statistic is derived for large time-bandwidth product. The detection of a specified optical image in the presence of the background light by using the hypothesis test is discussed. The ideal detector based on the likelihood ratio from a set of numbers of photoelectrons ejected from many small areas of the photosensitive surface is studied and compared with the threshold detector and a simple detector which is based on the likelihood ratio by counting the total number of photoelectrons from a finite area of the surface. The intensity of the image is assumed to be Gaussian distributed spatially against the uniformly distributed background light. The numerical approximation by the method of steepest descent is used, and the calculations of the reliabilities for the detectors are carried out by a digital computer.

  5. Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis; Gold, Dara

    2013-01-01

    We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
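
    A generic Wald sequential probability ratio test has a very compact form. The sketch below (Python) accumulates the log-likelihood ratio over a stream of observations and stops when it crosses the approximate Wald thresholds; the error rates and per-observation likelihood ratios are illustrative and this is not the collision-probability formulation used in the paper.

      import math

      def wald_sprt(likelihood_ratios, alpha=0.05, beta=0.05):
          """Wald SPRT: accumulate log-likelihood ratios and compare with the
          approximate thresholds log((1 - beta) / alpha) and log(beta / (1 - alpha))."""
          upper = math.log((1.0 - beta) / alpha)
          lower = math.log(beta / (1.0 - alpha))
          total = 0.0
          for n, lr in enumerate(likelihood_ratios, start=1):
              total += math.log(lr)
              if total >= upper:
                  return "accept H1", n
              if total <= lower:
                  return "accept H0", n
          return "continue sampling", len(likelihood_ratios)

      # Hypothetical stream of per-observation likelihood ratios favouring H1.
      print(wald_sprt([2.0, 1.5, 3.0, 2.5, 1.8]))   # -> ('accept H1', 4)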

  6. Physician Bayesian updating from personal beliefs about the base rate and likelihood ratio.

    PubMed

    Rottman, Benjamin Margolin

    2017-02-01

    Whether humans can accurately make decisions in line with Bayes' rule has been one of the most important yet contentious topics in cognitive psychology. Though a number of paradigms have been used for studying Bayesian updating, rarely have subjects been allowed to use their own preexisting beliefs about the prior and the likelihood. A study is reported in which physicians judged the posttest probability of a diagnosis for a patient vignette after receiving a test result, and the physicians' posttest judgments were compared to the normative posttest calculated from their own beliefs in the sensitivity and false positive rate of the test (likelihood ratio) and prior probability of the diagnosis. On the one hand, the posttest judgments were strongly related to the physicians' beliefs about both the prior probability as well as the likelihood ratio, and the priors were used considerably more strongly than in previous research. On the other hand, both the prior and the likelihoods were still not used quite as much as they should have been, and there was evidence of other nonnormative aspects to the updating, such as updating independent of the likelihood beliefs. By focusing on how physicians use their own prior beliefs for Bayesian updating, this study provides insight into how well experts perform probabilistic inference in settings in which they rely upon their own prior beliefs rather than experimenter-provided cues. It suggests that there is reason to be optimistic about experts' abilities, but that there is still considerable need for improvement.

  7. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies

    PubMed Central

    Rukhin, Andrew L.

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered to be known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583

  8. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    PubMed

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered to be known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  9. Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence

    NASA Astrophysics Data System (ADS)

    Lewis, Nicholas; Grünwald, Peter

    2018-03-01

    Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
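
    Before the choice of prior, the combination step amounts to a pointwise product of the two likelihood functions. The toy sketch below (Python with NumPy) multiplies two invented Gaussian-shaped likelihoods on a sensitivity grid and applies a flat prior in place of the paper's objective prior, to show how the combined estimate narrows relative to either source alone.

      import numpy as np

      # Grid of climate sensitivity values (K); every number below is illustrative only.
      s = np.linspace(0.5, 8.0, 2000)

      def gaussian_likelihood(grid, centre, width):
          """Unnormalised Gaussian-shaped likelihood over the grid."""
          return np.exp(-0.5 * ((grid - centre) / width) ** 2)

      lik_instrumental = gaussian_likelihood(s, centre=2.0, width=0.8)
      lik_paleo = gaussian_likelihood(s, centre=2.8, width=1.2)

      # Combined evidence: pointwise product of the likelihoods; a flat prior is
      # assumed here, whereas the paper derives a noninformative prior instead.
      posterior = lik_instrumental * lik_paleo
      posterior /= posterior.sum() * (s[1] - s[0])

      cdf = np.cumsum(posterior) * (s[1] - s[0])
      for q in (0.05, 0.5, 0.95):
          print(f"{int(q * 100)}th percentile: {s[np.searchsorted(cdf, q)]:.2f} K")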

  10. Equivalence of binormal likelihood-ratio and bi-chi-squared ROC curve models

    PubMed Central

    Hillis, Stephen L.

    2015-01-01

    A basic assumption for a meaningful diagnostic decision variable is that there is a monotone relationship between it and its likelihood ratio. This relationship, however, generally does not hold for a decision variable that results in a binormal ROC curve. As a result, receiver operating characteristic (ROC) curve estimation based on the assumption of a binormal ROC-curve model produces improper ROC curves that have “hooks,” are not concave over the entire domain, and cross the chance line. Although in practice this “improperness” is usually not noticeable, sometimes it is evident and problematic. To avoid this problem, Metz and Pan proposed basing ROC-curve estimation on the assumption of a binormal likelihood-ratio (binormal-LR) model, which states that the decision variable is an increasing transformation of the likelihood-ratio function of a random variable having normal conditional diseased and nondiseased distributions. However, their development is not easy to follow. I show that the binormal-LR model is equivalent to a bi-chi-squared model in the sense that the families of corresponding ROC curves are the same. The bi-chi-squared formulation provides an easier-to-follow development of the binormal-LR ROC curve and its properties in terms of well-known distributions. PMID:26608405

  11. Evaluation of prostate cancer antigen 3 for detecting prostate cancer: a systematic review and meta-analysis

    NASA Astrophysics Data System (ADS)

    Cui, Yong; Cao, Wenzhou; Li, Quan; Shen, Hua; Liu, Chao; Deng, Junpeng; Xu, Jiangfeng; Shao, Qiang

    2016-05-01

    Previous studies indicate that prostate cancer antigen 3 (PCA3) is highly expressed in prostatic tumors. However, its clinical value has not been characterized. The aim of this study was to investigate the clinical value of the urine PCA3 test in the diagnosis of prostate cancer by pooling the published data. Clinical trials utilizing the urine PCA3 test for diagnosing prostate cancer were retrieved from PubMed and Embase. A total of 46 clinical trials including 12,295 subjects were included in this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio (+LR), negative likelihood ratio (-LR), diagnostic odds ratio (DOR) and area under the curve (AUC) were 0.65 (95% confidence interval [CI]: 0.63-0.66), 0.73 (95% CI: 0.72-0.74), 2.23 (95% CI: 1.91-2.62), 0.48 (95% CI: 0.44-0.52), 5.31 (95% CI: 4.19-6.73) and 0.75 (95% CI: 0.74-0.77), respectively. In conclusion, the urine PCA3 test has acceptable sensitivity and specificity for the diagnosis of prostate cancer and can be used as a non-invasive method for that purpose.

  12. Modeling and E-M estimation of haplotype-specific relative risks from genotype data for a case-control study of unrelated individuals.

    PubMed

    Stram, Daniel O; Leigh Pearce, Celeste; Bretsky, Phillip; Freedman, Matthew; Hirschhorn, Joel N; Altshuler, David; Kolonel, Laurence N; Henderson, Brian E; Thomas, Duncan C

    2003-01-01

    The US National Cancer Institute has recently sponsored the formation of a Cohort Consortium (http://2002.cancer.gov/scpgenes.htm) to facilitate the pooling of data on very large numbers of people, concerning the effects of genes and environment on cancer incidence. One likely goal of these efforts will be to generate a large population-based case-control series for which a number of candidate genes will be investigated using SNP haplotype as well as genotype analysis. The goal of this paper is to outline the issues involved in choosing a method for estimating haplotype-specific risks for such data that is technically appropriate and yet attractive to epidemiologists who are already comfortable with odds ratios and logistic regression. Our interest is to develop and evaluate extensions of methods, based on haplotype imputation, that have been recently described (Schaid et al., Am J Hum Genet, 2002, and Zaykin et al., Hum Hered, 2002) as providing score tests of the null hypothesis of no effect of SNP haplotypes upon risk, which may be used for more complex tasks, such as providing confidence intervals, and tests of equivalence of haplotype-specific risks in two or more separate populations. In order to do so, we (1) develop a cohort approach towards odds ratio analysis by expanding the E-M algorithm to provide maximum likelihood estimates of haplotype-specific odds ratios as well as genotype frequencies; (2) show how to correct the cohort approach, to give essentially unbiased estimates for population-based or nested case-control studies by incorporating the probability of selection as a case or control into the likelihood, based on a simplified model of case and control selection, and (3) finally, in an example data set (CYP17 and breast cancer, from the Multiethnic Cohort Study) we compare likelihood-based confidence interval estimates from the two methods with each other, and with the use of the single-imputation approach of Zaykin et al. applied under both null and alternative hypotheses. We conclude that so long as haplotypes are well predicted by SNP genotypes (we use the Rh2 criteria of Stram et al. [1]), the differences between the three methods are very small and in particular that the single imputation method may be expected to work extremely well. Copyright 2003 S. Karger AG, Basel

  13. Using effort information with change-in-ratio data for population estimation

    USGS Publications Warehouse

    Udevitz, Mark S.; Pollock, Kenneth H.

    1995-01-01

    Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.

  14. Maximum likelihood methods for investigating reporting rates of rings on hunter-shot birds

    USGS Publications Warehouse

    Conroy, M.J.; Morgan, B.J.T.; North, P.M.

    1985-01-01

    It is well known that hunters do not report 100% of the rings that they find on shot birds. Reward studies can be used to estimate this reporting rate by comparing recoveries of rings offering a monetary reward with recoveries of ordinary rings. A reward study of American Black Ducks (Anas rubripes) is used to illustrate the design, and to motivate the development of statistical models for estimation and for testing hypotheses of temporal and geographic variation in reporting rates. The method involves indexing the data (recoveries) and parameters (reporting, harvest, and solicitation rates) by geographic and temporal strata. Estimates are obtained under unconstrained (e.g., allowing temporal variability in reporting rates) and constrained (e.g., constant reporting rates) models, and hypotheses are tested by likelihood ratio tests. A FORTRAN program, available from the author, is used to perform the computations.

  15. Usefulness of Two Aspergillus PCR Assays and Aspergillus Galactomannan and β-d-Glucan Testing of Bronchoalveolar Lavage Fluid for Diagnosis of Chronic Pulmonary Aspergillosis.

    PubMed

    Urabe, Naohisa; Sakamoto, Susumu; Sano, Go; Suzuki, Junko; Hebisawa, Akira; Nakamura, Yasuhiko; Koyama, Kazuya; Ishii, Yoshikazu; Tateda, Kazuhiro; Homma, Sakae

    2017-06-01

    We evaluated the usefulness of an Aspergillus galactomannan (GM) test, a β-d-glucan (βDG) test, and two different Aspergillus PCR assays of bronchoalveolar lavage fluid (BALF) samples for the diagnosis of chronic pulmonary aspergillosis (CPA). BALF samples from 30 patients with and 120 patients without CPA were collected. We calculated the sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio for each test individually and in combination with other tests. The optical density index values, as determined by receiver operating characteristic analysis, for the diagnosis of CPA were 0.5 and 100 for GM and βDG testing of BALF, respectively. The sensitivity and specificity of the GM test, βDG test, and PCR assays 1 and 2 were 77.8% and 90.0%, 77.8% and 72.5%, 86.7% and 84.2%, and 66.7% and 94.2%, respectively. A comparison of the PCR assays showed that PCR assay 1 had a better sensitivity, a better negative predictive value, and a better negative likelihood ratio and PCR assay 2 had a better specificity, a better positive predictive value, and a better positive likelihood ratio. The combination of the GM and βDG tests had the highest diagnostic odds ratio. The combination of the GM and βDG tests on BALF was more useful than any single test for diagnosing CPA. Copyright © 2017 American Society for Microbiology.

  16. Multibaseline gravitational wave radiometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talukder, Dipongkar; Bose, Sukanta; Mitra, Sanjit

    2011-03-15

    We present a statistic for the detection of stochastic gravitational wave backgrounds (SGWBs) using radiometry with a network of multiple baselines. We also quantitatively compare the sensitivities of existing baselines and their network to SGWBs. We assess how the measurement accuracy of signal parameters, e.g., the sky position of a localized source, can improve when using a network of baselines, as compared to any of the single participating baselines. The search statistic itself is derived from the likelihood ratio of the cross correlation of the data across all possible baselines in a detector network and is optimal in Gaussian noise. Specifically, it is the likelihood ratio maximized over the strength of the SGWB and is called the maximized-likelihood ratio (MLR). One of the main advantages of using the MLR over past search strategies for inferring the presence or absence of a signal is that the former does not require the deconvolution of the cross correlation statistic. Therefore, it does not suffer from errors inherent to the deconvolution procedure and is especially useful for detecting weak sources. In the limit of a single baseline, it reduces to the detection statistic studied by Ballmer [Classical Quantum Gravity 23, S179 (2006)] and Mitra et al. [Phys. Rev. D 77, 042002 (2008)]. Unlike past studies, here the MLR statistic enables us to compare quantitatively the performances of a variety of baselines searching for a SGWB signal in (simulated) data. Although we use simulated noise and SGWB signals for making these comparisons, our method can be straightforwardly applied on real data.

  17. Comparison of the diagnostic ability of Moorfield’s regression analysis and glaucoma probability score using Heidelberg retinal tomograph III in eyes with primary open angle glaucoma

    PubMed Central

    Jindal, Shveta; Dada, Tanuj; Sreenivas, V; Gupta, Viney; Sihota, Ramanjit; Panda, Anita

    2010-01-01

    Purpose: To compare the diagnostic performance of the Heidelberg retinal tomograph (HRT) glaucoma probability score (GPS) with that of Moorfield’s regression analysis (MRA). Materials and Methods: The study included 50 eyes of normal subjects and 50 eyes of subjects with early-to-moderate primary open angle glaucoma. Images were obtained by using HRT version 3.0. Results: The agreement coefficient (weighted k) for the overall MRA and GPS classification was 0.216 (95% CI: 0.119 – 0.315). The sensitivity and specificity were evaluated using the most specific (borderline results included as test negatives) and least specific criteria (borderline results included as test positives). The MRA sensitivity and specificity were 30.61 and 98% (most specific) and 57.14 and 98% (least specific). The GPS sensitivity and specificity were 81.63 and 73.47% (most specific) and 95.92 and 34.69% (least specific). The MRA gave a higher positive likelihood ratio (28.57 vs. 3.08) and the GPS gave a lower negative likelihood ratio (0.25 vs. 0.44). The sensitivity increased with increasing disc size for both MRA and GPS. Conclusions: There was a poor agreement between the overall MRA and GPS classifications. GPS tended to have higher sensitivities, lower specificities, and lower likelihood ratios than the MRA. The disc size should be taken into consideration when interpreting the results of HRT, as both the GPS and MRA showed decreased sensitivity for smaller discs and the GPS showed decreased specificity for larger discs. PMID:20952832

  18. Extended maximum likelihood halo-independent analysis of dark matter direct detection data

    DOE PAGES

    Gelmini, Graciela B.; Georgescu, Andreea; Gondolo, Paolo; ...

    2015-11-24

    We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio f_n/f_p = -0.7, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with f_n/f_p = -0.8.

  19. On Restructurable Control System Theory

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1983-01-01

    The state of stochastic system and control theory as it impacts restructurable control issues is addressed. The multivariable characteristics of the control problem are addressed. The failure detection/identification problem is discussed as a multi-hypothesis testing problem. Control strategy reconfiguration, static multivariable controls, static failure hypothesis testing, dynamic multivariable controls, fault-tolerant control theory, dynamic hypothesis testing, generalized likelihood ratio (GLR) methods, and adaptive control are discussed.

  20. Validation of the Confusion Assessment Method for the Intensive Care Unit in Older Emergency Department Patients

    PubMed Central

    Han, Jin H.; Wilson, Amanda; Graves, Amy J.; Shintani, Ayumi; Schnelle, John F.; Dittus, Robert S.; Powers, James S.; Vernon, John; Storrow, Alan B.; Ely, E. Wesley

    2014-01-01

    Objectives In the emergency department (ED), health care providers miss delirium approximately 75% of the time, because they do not routinely screen for this syndrome. The Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) is a brief (<1 minute) delirium assessment that may be feasible for use in the ED. The study objective was to determine its validity and reliability in older ED patients. Methods In this prospective observational cohort study, patients aged 65 years or older were enrolled at an academic, tertiary care ED from July 2009 to February 2012. Research assistants (RAs) and an emergency physician (EP) performed the CAM-ICU. The reference standard for delirium was a comprehensive (~30 minutes) psychiatrist assessment using the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision criteria. All assessments were blinded to each other and were conducted within 3 hours. Sensitivities, specificities, and likelihood ratios were calculated for both the EP and the RAs using the psychiatrist’s assessment as the reference standard. Kappa values between the EP and RAs were also calculated to measure reliability. Results Of 406 patients enrolled, 50 (12.3%) had delirium. The median age was 73.5 years old (interquartile range [IQR] = 69 to 80 years), 202 (49.8%) were female, and 57 (14.0%) were nonwhite. The CAM-ICU’s sensitivities were 72.0% (95% confidence interval [CI] = 58.3% to 82.5%) and 68.0% (95% CI = 54.2% to 79.2%) in the EP and RAs, respectively. The CAM-ICU’s specificity was 98.6% (95% CI = 96.8% to 99.4%) for both raters. The negative likelihood ratios (LR–) were 0.28 (95% CI = 0.18 to 0.44) and 0.32 (95% CI = 0.22 to 0.49) in the EP and RAs, respectively. The positive likelihood ratios (LR+) were 51.3 (95% CI = 21.1 to 124.5) and 48.4 (95% CI = 19.9 to 118.0), respectively. The kappa between the EP and RAs was 0.92 (95% CI = 0.85 to 0.98), indicating excellent interobserver reliability. Conclusions In older ED patients, the CAM-ICU is highly specific, and a positive test is nearly diagnostic for delirium when used by both RAs and EPs. However, the CAM-ICU’s sensitivity was modest, and a negative test decreased the likelihood of delirium by a small amount. The consequences of a false-negative CAM-ICU are unknown and deserve further study. PMID:24673674

  1. A preliminary evaluation of the generalized likelihood ratio for detecting and identifying control element failures in a transport aircraft

    NASA Technical Reports Server (NTRS)

    Bundick, W. T.

    1985-01-01

    The application of the Generalized Likelihood Ratio technique to the detection and identification of aircraft control element failures has been evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that the technique has potential but that the effects of wind turbulence and Kalman filter model errors are problems which must be overcome.

  2. Performance of the likelihood ratio difference (G2 Diff) test for detecting unidimensionality in applications of the multidimensional Rasch model.

    PubMed

    Harrell-Williams, Leigh; Wolfe, Edward W

    2014-01-01

    Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the likelihood ratio difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
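
    For readers unfamiliar with the statistic, a generic sketch of a likelihood ratio (G² difference) test between two nested models follows; the log-likelihood values and degrees of freedom are hypothetical, and the sketch does not reproduce the MRCMLM fits used in the study:

```python
from scipy.stats import chi2

def lr_difference_test(loglik_restricted, loglik_full, df_diff):
    """G^2 = -2 * (logL_restricted - logL_full), referred to a chi-square
    distribution with df equal to the difference in free parameters."""
    g2 = -2.0 * (loglik_restricted - loglik_full)
    return g2, chi2.sf(g2, df_diff)

# hypothetical unidimensional (restricted) vs two-dimensional (full) fit
print(lr_difference_test(loglik_restricted=-10450.3, loglik_full=-10441.8, df_diff=2))
```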

  3. Which Statistic Should Be Used to Detect Item Preknowledge When the Set of Compromised Items Is Known?

    PubMed

    Sinharay, Sandip

    2017-09-01

    Benefiting from item preknowledge is a major type of fraudulent behavior during educational assessments. Belov suggested the posterior shift statistic for detection of item preknowledge and showed its performance to be better on average than that of seven other statistics for detection of item preknowledge for a known set of compromised items. Sinharay suggested a statistic based on the likelihood ratio test for detection of item preknowledge; the advantage of the statistic is that its null distribution is known. Results from simulated and real data and adaptive and nonadaptive tests are used to demonstrate that the Type I error rate and power of the statistic based on the likelihood ratio test are very similar to those of the posterior shift statistic. Thus, the statistic based on the likelihood ratio test appears promising in detecting item preknowledge when the set of compromised items is known.

  4. Diagnostic Accuracy of Natriuretic Peptides for Heart Failure in Patients with Pleural Effusion: A Systematic Review and Updated Meta-Analysis

    PubMed Central

    Cheng, Juan-Juan; Zhao, Shi-Di; Gao, Ming-Zhu; Huang, Hong-Yu; Gu, Bing; Ma, Ping; Chen, Yan; Wang, Jun-Hong; Yang, Cheng-Jian; Yan, Zi-He

    2015-01-01

    Background Previous studies have reported that natriuretic peptides in the blood and pleural fluid (PF) are effective diagnostic markers for heart failure (HF). These natriuretic peptides include N-terminal pro-brain natriuretic peptide (NT-proBNP), brain natriuretic peptide (BNP), and midregion pro-atrial natriuretic peptide (MR-proANP). This systematic review and meta-analysis evaluates the diagnostic accuracy of blood and PF natriuretic peptides for HF in patients with pleural effusion. Methods PubMed and EMBASE databases were searched to identify articles published in English that investigated the diagnostic accuracy of BNP, NT-proBNP, and MR-proANP for HF. The last search was performed on 9 October 2014. The quality of the eligible studies was assessed using the revised Quality Assessment of Diagnostic Accuracy Studies tool. The diagnostic performance characteristics (sensitivity, specificity, and other measures of accuracy) were pooled and examined using a bivariate model. Results In total, 14 studies were included in the meta-analysis, including 12 studies reporting the diagnostic accuracy of PF NT-proBNP and 4 studies evaluating blood NT-proBNP. The summary estimates of PF NT-proBNP for HF had a diagnostic sensitivity of 0.94 (95% confidence interval [CI]: 0.90–0.96), specificity of 0.91 (95% CI: 0.86–0.95), positive likelihood ratio of 10.9 (95% CI: 6.4–18.6), negative likelihood ratio of 0.07 (95% CI: 0.04–0.12), and diagnostic odds ratio of 157 (95% CI: 57–430). The overall sensitivity of blood NT-proBNP for diagnosis of HF was 0.92 (95% CI: 0.86–0.95), with a specificity of 0.88 (95% CI: 0.77–0.94), positive likelihood ratio of 7.8 (95% CI: 3.7–16.3), negative likelihood ratio of 0.10 (95% CI: 0.06–0.16), and diagnostic odds ratio of 81 (95% CI: 27–241). The diagnostic accuracy of PF MR-proANP and blood and PF BNP was not analyzed due to the small number of related studies. Conclusions BNP, NT-proBNP, and MR-proANP, either in blood or PF, are effective tools for diagnosis of HF. Additional studies are needed to rigorously evaluate the diagnostic accuracy of PF and blood MR-proANP and BNP for the diagnosis of HF. PMID:26244664
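
    Up to rounding, the pooled diagnostic odds ratio is simply the ratio of the positive to the negative likelihood ratio, which the PF NT-proBNP figures above illustrate:

```python
lr_pos, lr_neg = 10.9, 0.07    # pooled values reported for PF NT-proBNP
print(round(lr_pos / lr_neg))  # ~156; the reported DOR of 157 reflects rounding of the pooled LRs
```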

  5. Effectiveness of real-time polymerase chain reaction assay for the detection of Mycobacterium tuberculosis in pathological samples: a systematic review and meta-analysis.

    PubMed

    Babafemi, Emmanuel O; Cherian, Benny P; Banting, Lee; Mills, Graham A; Ngianga, Kandala

    2017-10-25

    Rapid and accurate diagnosis of tuberculosis (TB) is key to managing the disease and to controlling and preventing its transmission. Many established diagnostic methods suffer from low sensitivity or delay of timely results and are inadequate for rapid detection of Mycobacterium tuberculosis (MTB) in pulmonary and extra-pulmonary clinical samples. This study examined whether a real-time polymerase chain reaction (RT-PCR) assay, with a turn-around time of 2 h, would prove effective for routine detection of MTB by clinical microbiology laboratories. A systematic literature search was performed for publications in any language on the detection of MTB in pathological samples by RT-PCR assay. The following sources were used: MEDLINE via PubMed, EMBASE, BIOSIS Citation Index, Web of Science, SCOPUS, ISI Web of Knowledge and Cochrane Infectious Diseases Group Specialised Register, grey literature, World Health Organization and Centres for Disease Control and Prevention websites. Forty-six studies met the set inclusion criteria. Pooled summary estimates (95% CIs) were calculated for overall accuracy and a bivariate meta-regression model was used for meta-analysis. Summary estimates for pulmonary TB (31 studies) were as follows: sensitivity 0.82 (95% CI 0.81-0.83), specificity 0.99 (95% CI 0.99-0.99), positive likelihood ratio 43.00 (28.23-64.81), negative likelihood ratio 0.16 (0.12-0.20), diagnostic odds ratio 324.26 (95% CI 189.08-556.09) and area under curve 0.99. Summary estimates for extra-pulmonary TB (25 studies) were as follows: sensitivity 0.70 (95% CI 0.67-0.72), specificity 0.99 (95% CI 0.99-0.99), positive likelihood ratio 29.82 (17.86-49.78), negative likelihood ratio 0.33 (0.26-0.42), diagnostic odds ratio 125.20 (95% CI 65.75-238.36) and area under curve 0.96. RT-PCR assay demonstrated a high degree of sensitivity for pulmonary TB and good sensitivity for extra-pulmonary TB. It indicated a high degree of specificity for ruling in TB infection from sampling regimes; this is acceptable, but the assay may be better as a rule-out add-on diagnostic test. RT-PCR assays demonstrate both a high degree of sensitivity in pulmonary samples and rapidity of detection of TB, which is an important factor in achieving effective global control and for patient management in terms of initiating early and appropriate anti-tubercular therapy. PROSPERO CRD42015027534.

  6. Significance of parametric spectral ratio methods in detection and recognition of whispered speech

    NASA Astrophysics Data System (ADS)

    Mathur, Arpit; Reddy, Shankar M.; Hegde, Rajesh M.

    2012-12-01

    In this article the significance of a new parametric spectral ratio method that can be used to detect whispered speech segments within normally phonated speech is described. Adaptation methods based on maximum likelihood linear regression (MLLR) are then used to realize a mismatched train-test style speech recognition system. The proposed parametric spectral ratio method computes a ratio spectrum of the linear prediction (LP) and the minimum variance distortionless response (MVDR) methods. The smoothed ratio spectrum is then used to detect whispered segments of speech within neutral speech segments effectively. The proposed LP-MVDR ratio method exhibits robustness at different SNRs, as indicated by the whisper diarization experiments conducted on the CHAINS corpus and the cell phone whispered speech corpus. The proposed method also performs reasonably better than conventional methods for whisper detection. In order to integrate the proposed whisper detection method into a conventional speech recognition engine with minimal changes, adaptation methods based on MLLR are used herein. The hidden Markov models corresponding to neutral mode speech are adapted to the whispered mode speech data in the whispered regions detected by the proposed ratio method. The performance of this method is first evaluated on whispered speech data from the CHAINS corpus. The second set of experiments is conducted on the cell phone corpus of whispered speech; this corpus was collected using a setup that is used commercially for handling public transactions. The proposed whisper speech recognition system exhibits reasonably better performance when compared to several conventional methods. The results indicate the possibility of a whispered speech recognition system for cell phone-based transactions.

  7. Urinary bladder segmentation in CT urography using deep-learning convolutional neural network and level sets

    PubMed Central

    Cha, Kenny H.; Hadjiiski, Lubomir; Samala, Ravi K.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.

    2016-01-01

    Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. Conclusions: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder. PMID:27036584
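
    The overlap metrics reported above can be computed from binary masks as in the sketch below; these are common definitions and may differ in minor details from the exact formulas used by the authors, and the toy masks are purely illustrative:

```python
import numpy as np

def overlap_metrics(seg, ref):
    """Volume overlap metrics between a binary segmentation and a reference mask."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    vol_seg, vol_ref = seg.sum(), ref.sum()
    return {
        "volume_intersection_ratio": inter / vol_ref,            # fraction of the reference recovered
        "percent_volume_error": (vol_ref - vol_seg) / vol_ref,   # signed relative volume error
        "absolute_volume_error": abs(vol_ref - vol_seg) / vol_ref,
        "jaccard_index": inter / union,
    }

# toy 3D masks standing in for reference and automated bladder contours
ref = np.zeros((20, 20, 20), dtype=bool); ref[5:15, 5:15, 5:15] = True
seg = np.zeros_like(ref);                 seg[6:15, 5:15, 4:14] = True
print(overlap_metrics(seg, ref))
```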

  8. Pleural Touch Preparations and Direct Visualization of the Pleura during Medical Thoracoscopy for the Diagnosis of Malignancy.

    PubMed

    Grosu, Horiana B; Vial-Rodriguez, Macarena; Vakil, Erik; Casal, Roberto F; Eapen, George A; Morice, Rodolfo; Stewart, John; Sarkiss, Mona G; Ost, David E

    2017-08-01

    During diagnostic thoracoscopy, talc pleurodesis after biopsy is appropriate if the probability of malignancy is sufficiently high. Findings on direct visual assessment of the pleura during thoracoscopy, rapid onsite evaluation (ROSE) of touch preparations (touch preps) of thoracoscopic biopsy specimens, and preoperative imaging may help predict the likelihood of malignancy; however, data on the performance of these methods are limited. To assess the performance of ROSE of touch preps, direct visual assessment of the pleura during thoracoscopy, and preoperative imaging in diagnosing malignancy. Patients who underwent ROSE of touch preps during thoracoscopy for suspected malignancy were retrospectively reviewed. Malignancy was diagnosed on the basis of final pathologic examination of pleural biopsy specimens. ROSE results were categorized as malignant, benign, or atypical cells. Visual assessment results were categorized as tumor studding present or absent. Positron emission tomography (PET) and computed tomography (CT) findings were categorized as abnormal or normal pleura. Likelihood ratios were calculated for each category of test result. The study included 44 patients, 26 (59%) with a final pathologic diagnosis of malignancy. Likelihood ratios were as follows: for ROSE of touch preps: malignant, 1.97 (95% confidence interval [CI], 0.90-4.34); atypical cells, 0.69 (95% CI, 0.21-2.27); benign, 0.11 (95% CI, 0.01-0.93); for direct visual assessment: tumor studding present, 3.63 (95% CI, 1.32-9.99); tumor studding absent, 0.24 (95% CI, 0.09-0.64); for PET: abnormal pleura, 9.39 (95% CI, 1.42-62); normal pleura, 0.24 (95% CI, 0.11-0.52); and for CT: abnormal pleura, 13.15 (95% CI, 1.93-89.63); normal pleura, 0.28 (95% CI, 0.15-0.54). A finding of no malignant cells on ROSE of touch preps during thoracoscopy lowers the likelihood of malignancy significantly, whereas finding of tumor studding on direct visual assessment during thoracoscopy only moderately increases the likelihood of malignancy. A positive finding on PET and/or CT increases the likelihood of malignancy significantly in a moderate-risk patient group and can be used as an adjunct to predict malignancy before pleurodesis.

  9. On the Power Functions of Test Statistics in Order Restricted Inference.

    DTIC Science & Technology

    1984-10-01

    We study the power functions of both the likelihood ratio and contrast statistics for detecting a totally ordered trend in a collection of samples from normal populations. Bartholomew (1959a,b; 1961) studied the likelihood ratio tests (LRTs) for H0 versus H1 - H0, assuming in one case that ...

  10. Three regularities of recognition memory: the role of bias.

    PubMed

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.

  11. TOO MANY MEN? SEX RATIOS AND WOMEN’S PARTNERING BEHAVIOR IN CHINA

    PubMed Central

    Trent, Katherine; South, Scott J.

    2011-01-01

    The relative numbers of women and men are changing dramatically in China, but the consequences of these imbalanced sex ratios have received little attention. We merge data from the Chinese Health and Family Life Survey with community-level data from Chinese censuses to examine the relationship between cohort- and community-specific sex ratios and women’s partnering behavior. Consistent with demographic-opportunity theory and sociocultural theory, we find that high sex ratios (indicating more men relative to women) are associated with an increased likelihood that women marry before age 25. However, high sex ratios are also associated with an increased likelihood that women engage in premarital and extramarital sexual relationships and have had more than one sexual partner, findings consistent with demographic-opportunity theory but inconsistent with sociocultural theory. PMID:22199403

  12. Detection of Obstructive Coronary Artery Disease Using Peak Systolic Global Longitudinal Strain Derived by Two-Dimensional Speckle-Tracking: A Systematic Review and Meta-Analysis.

    PubMed

    Liou, Kevin; Negishi, Kazuaki; Ho, Suyen; Russell, Elizabeth A; Cranney, Greg; Ooi, Sze-Yuan

    2016-08-01

    Global longitudinal strain (GLS) is well validated and has important applications in contemporary clinical practice. The aim of this analysis was to evaluate the accuracy of resting peak GLS in the diagnosis of obstructive coronary artery disease (CAD). A systematic literature search was performed through July 2015 using four databases. Data were extracted independently by two authors and correlated before analyses. Using a random-effect model, the pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, and summary area under the curve for GLS were estimated with their respective 95% CIs. Screening of 1,669 articles yielded 10 studies with 1,385 patients appropriate for inclusion in the analysis. The mean age and left ventricular ejection fraction were 59.9 years and 61.1%. On the whole, 54.9% and 20.9% of the patients had hypertension and diabetes, respectively. Overall, abnormal GLS detected moderate to severe CAD with a pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio of 74.4%, 72.1%, 2.9, and 0.35 respectively. The area under the curve and diagnostic odds ratio were 0.81 and 8.5. The mean values of GLS for those with and without CAD were -16.5% (95% CI, -15.8% to -17.3%) and -19.7% (95% CI, -18.8% to -20.7%), respectively. Subgroup analyses for patients with severe CAD and normal left ventricular ejection fractions yielded similar results. Current evidence supports the use of GLS in the detection of moderate to severe obstructive CAD in symptomatic patients. GLS may complement existing diagnostic algorithms and act as an early adjunctive marker of cardiac ischemia. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  13. Factors Associated with Young Adults’ Pregnancy Likelihood

    PubMed Central

    Kitsantas, Panagiota; Lindley, Lisa L.; Wu, Huichuan

    2014-01-01

    OBJECTIVES While progress has been made to reduce adolescent pregnancies in the United States, rates of unplanned pregnancy among young adults (18–29 years) remain high. In this study, we assessed factors associated with perceived likelihood of pregnancy (likelihood of getting pregnant/getting partner pregnant in the next year) among sexually experienced young adults who were not trying to get pregnant and had ever used contraceptives. METHODS We conducted a secondary analysis of 660 young adults, 18–29 years old in the United States, from the cross-sectional National Survey of Reproductive and Contraceptive Knowledge. Logistic regression and classification tree analyses were conducted to generate profiles of young adults most likely to report anticipating a pregnancy in the next year. RESULTS Nearly one-third (32%) of young adults indicated they believed they had at least some likelihood of becoming pregnant in the next year. Young adults who believed that avoiding pregnancy was not very important were most likely to report pregnancy likelihood (odds ratio [OR], 5.21; 95% CI, 2.80–9.69), as were young adults for whom avoiding a pregnancy was important but not satisfied with their current contraceptive method (OR, 3.93; 95% CI, 1.67–9.24), attended religious services frequently (OR, 3.0; 95% CI, 1.52–5.94), were uninsured (OR, 2.63; 95% CI, 1.31–5.26), and were likely to have unprotected sex in the next three months (OR, 1.77; 95% CI, 1.04–3.01). DISCUSSION These results may help guide future research and the development of pregnancy prevention interventions targeting sexually experienced young adults. PMID:25782849

  14. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    PubMed

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
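
    A stripped-down illustration of the percentile (PC) bootstrap for an indirect effect is sketched below with observed (rather than latent) variables, which is a considerable simplification of the latent-variable models compared in the study; all names and the simulated effect sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
m = 0.39 * x + rng.normal(size=n)   # a path
y = 0.39 * m + rng.normal(size=n)   # b path

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # slope of M on X
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # slope of Y on M, adjusting for X
    return a * b

boot = np.array([indirect_effect(x[i], m[i], y[i])
                 for i in (rng.integers(0, n, n) for _ in range(2000))])
print(indirect_effect(x, m, y), np.percentile(boot, [2.5, 97.5]))  # estimate and PC 95% CI
```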

  15. Model selection and model averaging in phylogenetics: advantages of Akaike information criterion and Bayesian approaches over likelihood ratio tests.

    PubMed

    Posada, David; Buckley, Thomas R

    2004-10-01

    Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
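
    The AIC-based comparison advocated here reduces to penalizing each model's maximized log-likelihood by its parameter count and renormalizing; a minimal sketch with hypothetical log-likelihoods, where the resulting Akaike weights can serve as the model-averaging weights discussed in the abstract:

```python
import numpy as np

def akaike_weights(log_likelihoods, n_params):
    """AIC = -2*logL + 2k for each candidate model, plus Akaike weights."""
    aic = -2.0 * np.asarray(log_likelihoods, float) + 2.0 * np.asarray(n_params, float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return aic, w / w.sum()

# hypothetical maximized log-likelihoods of three candidate substitution models
print(akaike_weights(log_likelihoods=[-5230.1, -5224.8, -5224.2], n_params=[6, 9, 10]))
```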

  16. A close examination of double filtering with fold change and t test in microarray analysis

    PubMed Central

    2009-01-01

    Background Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results This paper, for the first time to our knowledge, provides theoretical insight on the drawback of the double filtering procedure. We show that fold change assumes all genes to have a common variance while t statistic assumes gene-specific variances. The two statistics are based on contradicting assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion We demonstrate through hypothesis testing theory, simulation studies and real data examples, that well constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439
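
    For reference, the double filtering procedure being critiqued is typically implemented as below, flagging genes that pass both a fold-change cutoff and a per-gene t test; the cutoffs and the simulated data are illustrative only:

```python
import numpy as np
from scipy.stats import ttest_ind

def double_filter(group1, group2, fc_cutoff=2.0, alpha=0.05):
    """group1, group2: (n_genes, n_samples) arrays of log2 expression values."""
    log_fc = group1.mean(axis=1) - group2.mean(axis=1)   # log2 fold change
    _, p = ttest_ind(group1, group2, axis=1)             # per-gene two-sample t test
    return (np.abs(log_fc) >= np.log2(fc_cutoff)) & (p < alpha)

rng = np.random.default_rng(1)
g1 = rng.normal(0.0, 1.0, size=(1000, 5))
g2 = rng.normal(0.0, 1.0, size=(1000, 5))
g1[:50] += 1.5                                           # 50 truly shifted genes
print(double_filter(g1, g2).sum(), "genes pass both filters")
```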

  17. Accuracy of diagnostic tests to detect asymptomatic bacteriuria during pregnancy.

    PubMed

    Mignini, Luciano; Carroli, Guillermo; Abalos, Edgardo; Widmer, Mariana; Amigot, Susana; Nardin, Juan Manuel; Giordano, Daniel; Merialdi, Mario; Arciero, Graciela; Del Carmen Hourquescos, Maria

    2009-02-01

    A dipslide is a plastic paddle coated with agar that is attached to a plastic cap that screws onto a sterile plastic vial. Our objective was to estimate the diagnostic accuracy of the dipslide culture technique to detect asymptomatic bacteriuria during pregnancy and to evaluate the accuracy of nitrite and leukocyte esterase dipsticks for screening. This was an ancillary study within a trial comparing single-day with 7-day therapy in treating asymptomatic bacteriuria. Clean-catch midstream samples were collected from pregnant women seeking routine care. Positive and negative likelihood ratios and sensitivity and specificity for the culture-based dipslide to detect, and chemical dipsticks (nitrites, leukocyte esterase, or both) to screen, were estimated using traditional urine culture as the "gold standard." A total of 3,048 eligible pregnant women were screened. The prevalence of asymptomatic bacteriuria was 15%, with Escherichia coli the most prevalent organism. The likelihood ratio for detecting asymptomatic bacteriuria with a positive dipslide test was 225 (95% confidence interval [CI] 113-449), increasing the probability of asymptomatic bacteriuria to 98%; the likelihood ratio for a negative dipslide test was 0.02 (95% CI 0.01-0.05), reducing the probability of bacteriuria to less than 1%. The positive likelihood ratio of leukocyte esterase and nitrite dipsticks (when both or either one was positive) was 6.95 (95% CI 5.80-8.33), increasing the probability of bacteriuria to only 54%; the negative likelihood ratio was 0.50 (95% CI 0.45-0.57), reducing the probability to 8%. A pregnant woman with a positive dipslide test is very likely to have a definitive diagnosis of asymptomatic bacteriuria, whereas a negative result effectively rules out the presence of bacteriuria. Dipsticks that measure nitrites and leukocyte esterase have low sensitivity for use in screening for asymptomatic bacteriuria during gestation. ISRCTN, isrctn.org, 1196608 II.
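
    The post-test probabilities quoted above follow from Bayes' rule in odds form; a short check using the reported prevalence and likelihood ratios:

```python
def post_test_probability(pretest_prob, likelihood_ratio):
    """Post-test odds = pre-test odds * LR, converted back to a probability."""
    pre_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

prevalence = 0.15                                 # reported pre-test probability
print(post_test_probability(prevalence, 225))     # ~0.98 after a positive dipslide
print(post_test_probability(prevalence, 0.02))    # ~0.004, i.e. below 1%, after a negative dipslide
```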

  18. The Fecal Microbiota Profile and Bronchiolitis in Infants

    PubMed Central

    Linnemann, Rachel W.; Mansbach, Jonathan M.; Ajami, Nadim J.; Espinola, Janice A.; Petrosino, Joseph F.; Piedra, Pedro A.; Stevenson, Michelle D.; Sullivan, Ashley F.; Thompson, Amy D.; Camargo, Carlos A.

    2016-01-01

    BACKGROUND: Little is known about the association of gut microbiota, a potentially modifiable factor, with bronchiolitis in infants. We aimed to determine the association of fecal microbiota with bronchiolitis in infants. METHODS: We conducted a case–control study. As a part of multicenter prospective study, we collected stool samples from 40 infants hospitalized with bronchiolitis. We concurrently enrolled 115 age-matched healthy controls. By applying 16S rRNA gene sequencing and an unbiased clustering approach to these 155 fecal samples, we identified microbiota profiles and determined the association of microbiota profiles with likelihood of bronchiolitis. RESULTS: Overall, the median age was 3 months, 55% were male, and 54% were non-Hispanic white. Unbiased clustering of fecal microbiota identified 4 distinct profiles: Escherichia-dominant profile (30%), Bifidobacterium-dominant profile (21%), Enterobacter/Veillonella-dominant profile (22%), and Bacteroides-dominant profile (28%). The proportion of bronchiolitis was lowest in infants with the Enterobacter/Veillonella-dominant profile (15%) and highest in the Bacteroides-dominant profile (44%), corresponding to an odds ratio of 4.59 (95% confidence interval, 1.58–15.5; P = .008). In the multivariable model, the significant association between the Bacteroides-dominant profile and a greater likelihood of bronchiolitis persisted (odds ratio for comparison with the Enterobacter/Veillonella-dominant profile, 4.24; 95% confidence interval, 1.56–12.0; P = .005). In contrast, the likelihood of bronchiolitis in infants with the Escherichia-dominant or Bifidobacterium-dominant profile was not significantly different compared with those with the Enterobacter/Veillonella-dominant profile. CONCLUSIONS: In this case–control study, we identified 4 distinct fecal microbiota profiles in infants. The Bacteroides-dominant profile was associated with a higher likelihood of bronchiolitis. PMID:27354456

  19. Development of an algorithm for phenotypic screening of carbapenemase-producing Enterobacteriaceae in the routine laboratory.

    PubMed

    Robert, Jérôme; Pantel, Alix; Merens, Audrey; Meiller, Elodie; Lavigne, Jean-Philippe; Nicolas-Chanoine, Marie-Hélène

    2017-01-17

    Carbapenemase-producing Enterobacteriaceae (CPE) are difficult to identify among carbapenem non-susceptible Enterobacteriaceae (NSE). We designed phenotypic strategies giving priority to high sensitivity for screening putative CPE before further testing. Presence of carbapenemase-encoding genes in ertapenem NSE (MIC > 0.5 mg/l) consecutively isolated in 80 French laboratories between November 2011 and April 2012 was determined by the Check-MDR-CT103 array method. Using the Mueller-Hinton (MH) disk diffusion method, clinical diameter breakpoints of carbapenems other than ertapenem, piperacillin+tazobactam, ticarcillin+clavulanate and cefepime, as well as diameter cut-offs for these antibiotics and temocillin, were evaluated alone or combined to determine their performances (sensitivity, specificity, positive and negative likelihood ratios) for identifying putative CPE among these ertapenem-NSE isolates. To increase the screening specificity, these antibiotics were also tested on cloxacillin-containing MH when carbapenem NSE isolates belonged to species producing chromosomal cephalosporinase (AmpC), except Escherichia coli. Out of the 349 ertapenem NSE, 52 (14.9%) were CPE, including 39 producing OXA-48 group carbapenemase, eight KPC and five MBL. A screening strategy based on the following diameter cut-offs, ticarcillin+clavulanate <15 mm, temocillin <15 mm, meropenem or imipenem <22 mm, and cefepime <26 mm, showed 100% sensitivity and 68.1% specificity with the best combination of likelihood ratios. The specificity increased when a diameter cut-off <32 mm for imipenem (76.1%) or meropenem (78.8%), further tested on cloxacillin-containing MH, was added to the previous strategy for AmpC-producing isolates. The proposed strategies, which allowed for increasing the likelihood of CPE among ertapenem-NSE isolates, should be considered as a surrogate for carbapenemase production before further CPE confirmatory testing.

  20. A partial differential equation-based general framework adapted to Rayleigh's, Rician's and Gaussian's distributed noise for restoration and enhancement of magnetic resonance image.

    PubMed

    Yadav, Ram Bharos; Srivastava, Subodh; Srivastava, Rajeev

    2016-01-01

    The proposed framework is obtained by casting the noise removal problem into a variational framework. This framework automatically identifies the various types of noise present in the magnetic resonance image and filters them by choosing an appropriate filter. This filter includes two terms: the first term is a data likelihood term and the second term is a prior function. The first term is obtained by minimizing the negative log likelihood of the corresponding probability density function: Gaussian, Rayleigh, or Rician. Further, due to the ill-posedness of the likelihood term, a prior function is needed. This paper examines three partial differential equation-based priors: a total variation-based prior, an anisotropic diffusion-based prior, and a complex diffusion (CD)-based prior. A regularization parameter is used to balance the trade-off between the data fidelity term and the prior. The finite difference scheme is used for discretization of the proposed method. The performance analysis and a comparative study of the proposed method with other standard methods are presented for the BrainWeb dataset at varying noise levels in terms of peak signal-to-noise ratio, mean square error, structure similarity index map, and correlation parameter. From the simulation results, it is observed that the proposed framework with the CD-based prior performs better than the other priors in consideration.

  1. Models and analysis for multivariate failure time data

    NASA Astrophysics Data System (ADS)

    Shih, Joanna Huang

    The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood: at stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood; at stage 2, we estimate the dependency structure with the margins fixed at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood; it is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer-generated data.

  2. The Relation Among the Likelihood Ratio-, Wald-, and Lagrange Multiplier Tests and Their Applicability to Small Samples,

    DTIC Science & Technology

    1982-04-01

    S. (1979), "Conflict Among Criteria for Testing Hypothesis: Extension and Comments," Econometrica, 47, 203-207 Breusch , T. S. and Pagan , A. R. (1980...Savin, N. E. (1977), "Conflict Among Criteria for Testing Hypothesis in the Multivariate Linear Regression Model," Econometrica, 45, 1263-1278 Breusch , T...VNCLASSIFIED RAND//-6756NL U l~ I- THE RELATION AMONG THE LIKELIHOOD RATIO-, WALD-, AND LAGRANGE MULTIPLIER TESTS AND THEIR APPLICABILITY TO SMALL SAMPLES

  3. A Likelihood Ratio Test Regarding Two Nested But Oblique Order Restricted Hypotheses.

    DTIC Science & Technology

    1982-11-01

    American Mathematical Society 1979 subject classification: Primary 62F03; Secondary 62E15. A likelihood ratio test for these two restrictions is studied. The investigation was stimulated partly by a problem encountered in psychiatric research: Winokur et al. (1971) studied data on psychiatric illnesses afflicting ...

  4. A unified partial likelihood approach for X-chromosome association on time-to-event outcomes.

    PubMed

    Xu, Wei; Hao, Meiling

    2018-02-01

    The expression of the X-chromosome undergoes three possible biological processes: X-chromosome inactivation (XCI), escape of the X-chromosome inactivation (XCI-E), and skewed X-chromosome inactivation (XCI-S). Although these expressions are included in various predesigned genetic variation chip platforms, the X-chromosome has generally been excluded from the majority of genome-wide association study analyses; this is most likely due to the lack of a standardized method for handling X-chromosomal genotype data. To analyze the X-linked genetic association for time-to-event outcomes with the actual process unknown, we propose a unified approach of maximizing the partial likelihood over all of the potential biological processes. The proposed method can be used to infer the true biological process and derive unbiased estimates of the genetic association parameters. A partial likelihood ratio test statistic that has been proved asymptotically chi-square distributed can be used to assess the X-chromosome genetic association. Furthermore, if the X-chromosome expression pertains to the XCI-S process, we can infer the correct skewed direction and magnitude of inactivation, which can elucidate significant findings regarding the genetic mechanism. A population-level model and a more general subject-level model have been developed to model the XCI-S process. Finite sample performance of this novel method is examined via extensive simulation studies. An application is illustrated with implementation of the method on a cancer genetic study with survival outcome. © 2017 WILEY PERIODICALS, INC.

  5. The Structured Clinical Interview for DSM-5 Internet Gaming Disorder: Development and Validation for Diagnosing IGD in Adolescents

    PubMed Central

    Koo, Hoon Jung; Han, Doug Hyun; Park, Sung-Yong

    2017-01-01

    Objective This study aimed to develop and validate a Structured Clinical Interview for Internet Gaming Disorder (SCI-IGD) in adolescents. Methods First, we generated preliminary items of the SCI-IGD based on information from the DSM-5, literature reviews, and expert consultations. Next, a total of 236 adolescents, from both community and clinical settings, were recruited to evaluate the psychometric properties of the SCI-IGD. Results First, the SCI-IGD was found to be consistent over a time period of about one month. Second, diagnostic concordances between the SCI-IGD and the clinician's diagnostic impression were good to excellent. The positive and negative likelihood ratio estimates for the SCI-IGD diagnosis were 10.93 and 0.35, respectively, indicating that the SCI-IGD was a ‘very useful test’ for identifying the presence of IGD and a ‘useful test’ for identifying the absence of IGD. Third, the SCI-IGD could distinguish disordered gamers from non-disordered gamers. Conclusion The implications and limitations of the study are also discussed. PMID:28096871

  6. Functional decline in the elderly with MCI: Cultural adaptation of the ADCS-ADL scale.

    PubMed

    Cintra, Fabiana Carla Matos da Cunha; Cintra, Marco Túlio Gualberto; Nicolato, Rodrigo; Bertola, Laiss; Ávila, Rafaela Teixeira; Malloy-Diniz, Leandro Fernandes; Moraes, Edgar Nunes; Bicalho, Maria Aparecida Camargos

    2017-07-01

    To translate, transculturally adapt, and apply to Brazilian Portuguese the Alzheimer's Disease Cooperative Study - Activities of Daily Living (ADCS-ADL) scale as a cognitive screening instrument. We applied the back-translation method, supplemented by pretest and bilingual methods. The sample was composed of 95 elderly individuals and their caregivers. Thirty-two (32) participants were diagnosed as mild cognitive impairment (MCI) patients, 33 as Alzheimer's disease (AD) patients, and 30 were considered cognitively normal individuals. There were only minor changes to the scale. The Cronbach alpha coefficient was 0.89. The scores were 72.9 for the control group, followed by MCI (65.1) and AD (55.9), with a p-value < 0.001. The area under the ROC curve was 0.89. Using a cut-off point of 72, we observed a sensitivity of 86.2%, specificity of 70%, positive predictive value of 86.2%, negative predictive value of 70%, positive likelihood ratio of 2.9, and negative likelihood ratio of 0.2. The ADCS-ADL scale presents satisfactory psychometric properties to discriminate between MCI, AD, and normal cognition.

  7. Hypothesis testing and earthquake prediction.

    PubMed

    Jackson, D D

    1996-04-30

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
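
    As an illustration of the third kind of test, if earthquake counts in space-time-magnitude bins are treated as Poisson with the forecast rates as means (a standard assumption in later forecast-testing work, not necessarily the author's exact formulation), the log likelihood ratio of a forecast against a null model can be evaluated as follows; all numbers are toy values:

```python
import numpy as np

def poisson_log_lr(observed_counts, rate_test, rate_null):
    """Log likelihood ratio of a test forecast against a null forecast for
    Poisson-distributed counts in space-time-magnitude bins."""
    n = np.asarray(observed_counts, float)
    lam1 = np.asarray(rate_test, float)
    lam0 = np.asarray(rate_null, float)
    return np.sum(n * np.log(lam1 / lam0) - (lam1 - lam0))

# toy example: the test forecast concentrates rate where the events occurred
print(poisson_log_lr(observed_counts=[2, 0, 1],
                     rate_test=[1.5, 0.2, 0.8],
                     rate_null=[0.8, 0.8, 0.8]))   # positive value favours the test forecast
```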

  8. Hypothesis testing and earthquake prediction.

    PubMed Central

    Jackson, D D

    1996-01-01

    Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663

  9. PEPA test: fast and powerful differential analysis from relative quantitative proteomics data using shared peptides.

    PubMed

    Jacob, Laurent; Combes, Florence; Burger, Thomas

    2018-06-18

    We propose a new hypothesis test for the differential abundance of proteins in mass-spectrometry-based relative quantification. An important feature of this type of high-throughput analysis is that it involves an enzymatic digestion of the sample proteins into peptides prior to identification and quantification. Due to numerous homology sequences, different proteins can lead to peptides with identical amino acid chains, so that their parent protein is ambiguous. These so-called shared peptides make the protein-level statistical analysis a challenge and are often not accounted for. In this article, we use a linear model describing peptide-protein relationships to build a likelihood ratio test of differential abundance for proteins. We show that the likelihood ratio statistic can be computed in time linear in the number of peptides. We also provide the asymptotic null distribution of a regularized version of our statistic. Experiments on both real and simulated datasets show that our procedure outperforms state-of-the-art methods. The procedures are available via the pepa.test function of the DAPAR Bioconductor R package.

  10. Using artificial intelligence to predict the risk for posterior capsule opacification after phacoemulsification.

    PubMed

    Mohammadi, Seyed-Farzad; Sabbaghi, Mostafa; Z-Mehrjardi, Hadi; Hashemi, Hassan; Alizadeh, Somayeh; Majdi, Mercede; Taee, Farough

    2012-03-01

    To apply artificial intelligence models to predict the occurrence of posterior capsule opacification (PCO) after phacoemulsification. Farabi Eye Hospital, Tehran, Iran. Clinical-based cross-sectional study. The posterior capsule status of eyes operated on for age-related cataract and the need for laser capsulotomy were determined. After a literature review, data polishing, and expert consultation, 10 input variables were selected. The QUEST algorithm was used to develop a decision tree. Three back-propagation artificial neural networks were constructed with 4, 20, and 40 neurons in 2 hidden layers and trained with the same transfer functions (log-sigmoid and linear transfer) and training protocol with randomly selected eyes. They were then tested on the remaining eyes and the networks compared for their performance. Performance indices were used to compare resultant models with the results of logistic regression analysis. The models were trained using 282 randomly selected eyes and then tested using 70 eyes. Laser capsulotomy for clinically significant PCO was indicated or had been performed 2 years postoperatively in 40 eyes. A sample decision tree was produced with accuracy of 50% (likelihood ratio 0.8). The best artificial neural network, which showed 87% accuracy and a positive likelihood ratio of 8, was achieved with 40 neurons. The area under the receiver-operating-characteristic curve was 0.71. In comparison, logistic regression reached accuracy of 80%; however, the likelihood ratio was not measurable because the sensitivity was zero. A prototype artificial neural network was developed that predicted posterior capsule status (requiring capsulotomy) with reasonable accuracy. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  11. Diagnostic value of sTREM-1 in bronchoalveolar lavage fluid in ICU patients with bacterial lung infections: a bivariate meta-analysis.

    PubMed

    Shi, Jia-Xin; Li, Jia-Shu; Hu, Rong; Li, Chun-Hua; Wen, Yan; Zheng, Hong; Zhang, Feng; Li, Qin

    2013-01-01

    The serum soluble triggering receptor expressed on myeloid cells-1 (sTREM-1) is a useful biomarker in differentiating bacterial infections from others. However, the diagnostic value of sTREM-1 in bronchoalveolar lavage fluid (BALF) in lung infections has not been well established. We performed a meta-analysis to assess the accuracy of sTREM-1 in BALF for diagnosis of bacterial lung infections in intensive care unit (ICU) patients. We searched PUBMED, EMBASE and Web of Knowledge (from January 1966 to October 2012) databases for relevant studies that reported diagnostic accuracy data of BALF sTREM-1 in the diagnosis of bacterial lung infections in ICU patients. Pooled sensitivity, specificity, and positive and negative likelihood ratios were calculated by a bivariate regression analysis. Measures of accuracy and Q point value (Q*) were calculated using summary receiver operating characteristic (SROC) curve. The potential between-studies heterogeneity was explored by subgroup analysis. Nine studies were included in the present meta-analysis. Overall, the prevalence was 50.6%; the sensitivity was 0.87 (95% confidence interval (CI), 0.72-0.95); the specificity was 0.79 (95% CI, 0.56-0.92); the positive likelihood ratio (PLR) was 4.18 (95% CI, 1.78-9.86); the negative likelihood ratio (NLR) was 0.16 (95% CI, 0.07-0.36), and the diagnostic odds ratio (DOR) was 25.60 (95% CI, 7.28-89.93). The area under the SROC curve was 0.91 (95% CI, 0.88-0.93), with a Q* of 0.83. Subgroup analysis showed that the assay method and cutoff value influenced the diagnostic accuracy of sTREM-1. BALF sTREM-1 is a useful biomarker of bacterial lung infections in ICU patients. Further studies are needed to confirm the optimized cutoff value.

  12. Impact of Uncertainties in Exposure Assessment on Thyroid Cancer Risk among Persons in Belarus Exposed as Children or Adolescents Due to the Chernobyl Accident.

    PubMed

    Little, Mark P; Kwon, Deukwoo; Zablotska, Lydia B; Brenner, Alina V; Cahoon, Elizabeth K; Rozhko, Alexander V; Polyanskaya, Olga N; Minenko, Victor F; Golovanov, Ivan; Bouville, André; Drozdovitch, Vladimir

    2015-01-01

    The excess incidence of thyroid cancer in Ukraine and Belarus observed a few years after the Chernobyl accident is considered to be largely the result of 131I released from the reactor. Although the Belarus thyroid cancer prevalence data have been previously analyzed, no account was taken of dose measurement error. We examined dose-response patterns in a thyroid screening prevalence cohort of 11,732 persons aged under 18 at the time of the accident, diagnosed during 1996-2004, who had direct thyroid 131I activity measurement, and were resident in the most radioactively contaminated regions of Belarus. Three methods of dose-error correction (regression calibration, Monte Carlo maximum likelihood, Bayesian Markov Chain Monte Carlo) were applied. There was a statistically significant (p<0.001) increasing dose-response for prevalent thyroid cancer, irrespective of the regression-adjustment method used. Without adjustment for dose errors the excess odds ratio was 1.51 Gy⁻¹ (95% CI 0.53, 3.86), which was reduced by 13% when regression-calibration adjustment was used, 1.31 Gy⁻¹ (95% CI 0.47, 3.31). A Monte Carlo maximum likelihood method yielded an excess odds ratio of 1.48 Gy⁻¹ (95% CI 0.53, 3.87), about 2% lower than the unadjusted analysis. The Bayesian method yielded a maximum posterior excess odds ratio of 1.16 Gy⁻¹ (95% BCI 0.20, 4.32), 23% lower than the unadjusted analysis. There were borderline significant (p = 0.053-0.078) indications of downward curvature in the dose response, depending on the adjustment methods used. There were also borderline significant (p = 0.102) modifying effects of gender on the radiation dose trend, but no significant modifying effects of age at the time of the accident or age at screening (p>0.2). In summary, the relatively small contribution of unshared classical dose error in the current study results in comparatively modest effects on the regression parameters.

  13. Survivorship analysis when cure is a possibility: a Monte Carlo study.

    PubMed

    Goldman, A I

    1984-01-01

    Parametric survivorship analysis of clinical trials commonly involves the assumption of a hazard function that is constant over time. When the empirical curve obviously levels off, one can modify the hazard function model by use of a Gompertz or Weibull distribution with hazard decreasing over time. Some cancer treatments are thought to cure some patients within a short time of initiation. Then, instead of all patients having the same hazard, decreasing over time, a biologically more appropriate model assumes that an unknown proportion (1 - pi) has a constant high risk whereas the remaining proportion (pi) has essentially no risk. This paper discusses the maximum likelihood estimation of pi and the power curves of the likelihood ratio test. Monte Carlo studies provide results for a variety of simulated trials; empirical data illustrate the methods.
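
    A minimal sketch of such a mixture ("cure") model is given below, assuming an exponential hazard for the susceptible fraction and uniform administrative censoring; the data, rates, and censoring scheme are invented for illustration, and the fit is by generic numerical maximum likelihood rather than any procedure specific to the paper.

```python
# Minimal sketch of a mixture cure model: a cured fraction pi with no risk and
# a susceptible fraction (1 - pi) with a constant hazard lam.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t, event):
    """params = (logit_pi, log_lam); t = follow-up times; event = 1 if failure observed."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))    # cured fraction, kept in (0, 1)
    lam = np.exp(params[1])                  # hazard for susceptible subjects
    log_f = np.log(1.0 - pi) + np.log(lam) - lam * t          # density of observed failures
    log_s = np.log(pi + (1.0 - pi) * np.exp(-lam * t))        # survival for censored subjects
    return -np.sum(np.where(event == 1, log_f, log_s))

rng = np.random.default_rng(0)
n, true_pi, true_lam = 200, 0.4, 0.5
cured = rng.random(n) < true_pi
t_fail = np.where(cured, np.inf, rng.exponential(1 / true_lam, n))
t_cens = rng.uniform(0, 8, n)                                  # administrative censoring
t = np.minimum(t_fail, t_cens)
event = (t_fail <= t_cens).astype(int)

fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(t, event))
pi_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
print(f"estimated cure fraction pi ~ {pi_hat:.2f}")
```

    Note that testing pi = 0 with a likelihood ratio statistic places the parameter on the boundary of its range, so the usual chi-square reference distribution does not apply directly; the paper's power curves are obtained by Monte Carlo simulation.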

  14. Methodology and method and apparatus for signaling with capacity optimized constellations

    NASA Technical Reports Server (NTRS)

    Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor)

    2011-01-01

    A communication system having a transmitter includes a coder configured to receive user bits and output encoded bits at an expanded output encoded bit rate, a mapper configured to map encoded bits to symbols in a symbol constellation, and a modulator configured to generate a signal for transmission via the communication channel using symbols generated by the mapper. In addition, the receiver includes a demodulator configured to demodulate the signal received via the communication channel, a demapper configured to estimate likelihoods from the demodulated signal, and a decoder configured to estimate decoded bits from the likelihoods generated by the demapper. Furthermore, the symbol constellation is a capacity-optimized, geometrically spaced symbol constellation that provides a given capacity at a reduced signal-to-noise ratio compared to a signal constellation that maximizes d_min.

  15. Speckle attenuation by adaptive singular value shrinking with generalized likelihood matching in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Chen, Huaiguang; Fu, Shujun; Wang, Hong; Lv, Hongli; Zhang, Caiming

    2018-03-01

    As a high-resolution imaging mode for biological tissues and materials, optical coherence tomography (OCT) is widely used in medical diagnosis and analysis. However, OCT images are often degraded by speckle noise inherent in the imaging process. Employing a bilateral sparse representation, an adaptive singular value shrinking method is proposed for its highly sparse approximation of image data. Adopting the generalized likelihood ratio as the similarity criterion for block matching and an adaptive feature-oriented backward projection strategy, the proposed algorithm can better restore the underlying layered structures and details of the OCT image with effective speckle attenuation. The experimental results demonstrate that the proposed algorithm achieves state-of-the-art despeckling performance in terms of both quantitative measurement and visual interpretation.

  16. Role of transvaginal sonography and magnetic resonance imaging in the diagnosis of uterine adenomyosis.

    PubMed

    Bazot, Marc; Daraï, Emile

    2018-03-01

    The aim of the present review, conducted according to PRISMA statement recommendations, was to evaluate the contribution of transvaginal sonography (TVS) and magnetic resonance imaging (MRI) to the diagnosis of adenomyosis. Although there is a lack of consensus on adenomyosis classification, three subtypes are described: internal adenomyosis, external adenomyosis, and adenomyomas. Using TVS, whatever the subtype, pooled sensitivities, pooled specificities, and pooled positive likelihood ratios are 0.72-0.82, 0.85-0.81, and 4.67-3.7, respectively, but with high heterogeneity between the studies. MRI has a pooled sensitivity of 0.77, specificity of 0.89, positive likelihood ratio of 6.5, and negative likelihood ratio of 0.2 for all subtypes. Our results suggest that MRI is more useful than TVS in the diagnosis of adenomyosis. Further studies are required to determine the performance of direct signs (cystic component) and indirect signs (characteristics of the junctional zone) to avoid misdiagnosis of adenomyosis. Copyright © 2018 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
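
    Likelihood ratios such as those quoted above are typically applied through the odds form of Bayes' theorem (posttest odds = pretest odds × LR). The sketch below is a generic illustration of that update, using the pooled MRI likelihood ratios from this review and an assumed pretest probability of 30%.

```python
def posttest_probability(pretest_prob: float, lr: float) -> float:
    """Convert a pretest probability and a likelihood ratio into a posttest probability."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1.0 + posttest_odds)

pretest = 0.30  # assumed pretest probability of adenomyosis
print(posttest_probability(pretest, 6.5))  # positive MRI (LR+ 6.5): ~0.74
print(posttest_probability(pretest, 0.2))  # negative MRI (LR- 0.2): ~0.08
```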

  17. Training loads and injury risk in Australian football - differing acute:chronic workload ratios influence match injury risk.

    PubMed

    Carey, David L; Blanch, Peter; Ong, Kok-Leong; Crossley, Kay M; Crow, Justin; Morris, Meg E

    2017-08-01

    (1) To investigate whether a daily acute:chronic workload ratio informs injury risk in Australian football players; (2) to identify which combination of workload variable, acute and chronic time window best explains injury likelihood. Workload and injury data were collected from 53 athletes over 2 seasons in a professional Australian football club. Acute:chronic workload ratios were calculated daily for each athlete, and modelled against non-contact injury likelihood using a quadratic relationship. 6 workload variables, 8 acute time windows (2-9 days) and 7 chronic time windows (14-35 days) were considered (336 combinations). Each parameter combination was compared for injury likelihood fit (using R²). The ratio of moderate speed running workload (18-24 km/h) in the previous 3 days (acute time window) compared with the previous 21 days (chronic time window) best explained the injury likelihood in matches (R²=0.79) and in the immediate 2 or 5 days following matches (R²=0.76-0.82). The 3:21 acute:chronic workload ratio discriminated between high-risk and low-risk athletes (relative risk=1.98-2.43). Using the previous 6 days to calculate the acute workload time window yielded similar results. The choice of acute time window significantly influenced model performance and appeared to reflect the competition and training schedule. Daily workload ratios can inform injury risk in Australian football. Clinicians and conditioning coaches should consider the sport-specific schedule of competition and training when choosing acute and chronic time windows. For Australian football, the ratio of moderate speed running in a 3-day or 6-day acute time window and a 21-day chronic time window best explained injury risk. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
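
    The sketch below illustrates a daily acute:chronic workload ratio with the 3-day acute and 21-day chronic windows identified above. The workload series is simulated, and the rolling-mean implementation is an assumption for illustration; the paper does not prescribe a particular code path.

```python
# Daily acute:chronic workload ratio from rolling means over assumed windows.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2016-01-01", periods=60, freq="D")
load = pd.Series(rng.gamma(shape=2.0, scale=300.0, size=len(days)), index=days)  # simulated daily workload

acute = load.rolling(window=3, min_periods=3).mean()      # 3-day acute window
chronic = load.rolling(window=21, min_periods=21).mean()  # 21-day chronic window
acwr = acute / chronic                                     # acute:chronic workload ratio

print(acwr.dropna().tail())
```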

  18. LIKELIHOOD RATIO TESTS OF HYPOTHESES ON MULTIVARIATE POPULATIONS, VOLUME II, TEST OF HYPOTHESIS--STATISTICAL MODELS FOR THE EVALUATION AND INTERPRETATION OF EDUCATIONAL CRITERIA. PART 4.

    ERIC Educational Resources Information Center

    SAW, J.G.

    THIS PAPER DEALS WITH SOME TESTS OF HYPOTHESIS FREQUENTLY ENCOUNTERED IN THE ANALYSIS OF MULTIVARIATE DATA. THE TYPE OF HYPOTHESIS CONSIDERED IS THAT WHICH THE STATISTICIAN CAN ANSWER IN THE NEGATIVE OR AFFIRMATIVE. THE DOOLITTLE METHOD MAKES IT POSSIBLE TO EVALUATE THE DETERMINANT OF A MATRIX OF HIGH ORDER, TO SOLVE A MATRIX EQUATION, OR TO…

  19. Maximum likelihood solution for inclination-only data in paleomagnetism

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2010-08-01

    We have developed a new robust maximum likelihood method for estimating the unbiased mean inclination from inclination-only data. In paleomagnetic analysis, the arithmetic mean of inclination-only data is known to introduce a shallowing bias. Several methods have been introduced to estimate the unbiased mean inclination of inclination-only data together with measures of the dispersion. Some inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all the methods require various assumptions and approximations that are often inappropriate. For some steep and dispersed data sets, these methods provide estimates that are significantly displaced from the peak of the likelihood function to systematically shallower inclination. The problem locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest, because some elements of the likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study, we succeeded in analytically cancelling exponential elements from the log-likelihood function, and we are now able to calculate its value anywhere in the parameter space and for any inclination-only data set. Furthermore, we can now calculate the partial derivatives of the log-likelihood function with desired accuracy, and locate the maximum likelihood without the assumptions required by previous methods. To assess the reliability and accuracy of our method, we generated large numbers of random Fisher-distributed data sets, for which we calculated mean inclinations and precision parameters. The comparisons show that our new robust Arason-Levi maximum likelihood method is the most reliable, and the mean inclination estimates are the least biased towards shallow values.

  20. Accuracy of maximum likelihood and least-squares estimates in the lidar slope method with noisy data.

    PubMed

    Eberhard, Wynn L

    2017-04-01

    The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
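
    As a generic illustration of the stated equivalence between the MLE and inverse-variance-weighted least squares for the slope method, the sketch below fits ln(P·r²) against range on simulated data; the signal model, noise level, and constants are assumptions made purely for the example.

```python
# Slope method: S(r) = ln(P * r^2) = ln(C) - 2 * sigma * r, fitted by
# inverse-variance-weighted least squares on simulated noisy data.
import numpy as np

rng = np.random.default_rng(2)
r = np.linspace(1.0, 3.0, 100)                     # range [km]
sigma_ext, c_sys = 0.3, 1.0e6                      # "true" extinction [1/km], system constant
p_clean = c_sys * np.exp(-2.0 * sigma_ext * r) / r**2
noise_sd = 0.05 * p_clean.min() * np.ones_like(r)  # additive Gaussian noise, range-independent
p = p_clean + rng.normal(0.0, noise_sd)

s = np.log(p * r**2)
s_sd = noise_sd / p_clean                          # propagated standard deviation of S(r)

# np.polyfit squares the weights internally, so w = 1/sd gives the
# inverse-variance weighting that the abstract identifies with the MLE.
slope, offset = np.polyfit(r, s, deg=1, w=1.0 / s_sd)
print(f"retrieved extinction ~ {-slope / 2.0:.3f} 1/km, system constant ~ {np.exp(offset):.2e}")
```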

  1. Integral equation methods for computing likelihoods and their derivatives in the stochastic integrate-and-fire model.

    PubMed

    Paninski, Liam; Haith, Adrian; Szirtes, Gabor

    2008-02-01

    We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.

  2. An evaluation of inferential procedures for adaptive clinical trial designs with pre-specified rules for modifying the sample size.

    PubMed

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2014-09-01

    Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.

  3. Value of circulating cell-free DNA analysis as a diagnostic tool for breast cancer: a meta-analysis

    PubMed Central

    Ma, Xuelei; Zhang, Jing; Hu, Xiuying

    2017-01-01

    Objectives The aim of this study was to systematically evaluate the diagnostic value of cell-free DNA (cfDNA) for breast cancer. Results Among 308 candidate articles, 25 with relevant diagnostic screening qualified for final analysis. The mean sensitivity, specificity and area under the curve (AUC) of SROC plots for 24 studies that distinguished breast cancer patients from healthy controls were 0.70, 0.87, and 0.9314, yielding a DOR of 32.31. When analyzed in subgroups, the 14 quantitative studies produced sensitivity, specificity, AUC, and a DOR of 0.78, 0.83, 0.9116, and 24.40. The 10 qualitative studies produced 0.50, 0.98, 0.9919, and 68.45. For 8 studies that distinguished malignant breast cancer from benign diseases, the specificity, sensitivity, AUC and DOR were 0.75, 0.79, 0.8213, and 9.49. No covariate factors had a significant correlation with relative DOR. Deeks' funnel plots indicated an absence of publication bias. Materials and Methods Databases were searched for studies involving the use of cfDNA to diagnose breast cancer. The studies were analyzed to determine sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio (DOR), and the summary receiver operating characteristic (SROC). Covariates were evaluated for effect on relative DOR. Deeks' funnel plots were generated to measure publication bias. Conclusions Our analysis suggests a promising diagnostic potential of using cfDNA for breast cancer screening, but this diagnostic method is not yet independently sufficient. Further work refining qualitative cfDNA assays will improve the correct diagnosis of breast cancers. PMID:28460452

  4. Estimating seismic site response in Christchurch City (New Zealand) from dense low-cost aftershock arrays

    USGS Publications Warehouse

    Kaiser, Anna E.; Benites, Rafael A.; Chung, Angela I.; Haines, A. John; Cochran, Elizabeth S.; Fry, Bill

    2011-01-01

    The Mw 7.1 September 2010 Darfield earthquake, New Zealand, produced widespread damage and liquefaction ~40 km from the epicentre in Christchurch city. It was followed by the even more destructive Mw 6.2 February 2011 Christchurch aftershock directly beneath the city’s southern suburbs. Seismic data recorded during the two large events suggest that site effects contributed to the variations in ground motion observed throughout Christchurch city. We use densely-spaced aftershock recordings of the Darfield earthquake to investigate variations in local seismic site response within the Christchurch urban area. Following the Darfield main shock we deployed a temporary array of ~180 low-cost 14-bit MEMS accelerometers linked to the global Quake-Catcher Network (QCN). These instruments provided dense station coverage (spacing ~2 km) to complement existing New Zealand national network strong motion stations (GeoNet) within Christchurch city. Well-constrained standard spectral ratios were derived for GeoNet stations using a reference station on Miocene basalt rock in the south of the city. For noisier QCN stations, the method was adapted to find a maximum likelihood estimate of spectral ratio amplitude taking into account the variance of noise at the respective stations. Spectral ratios for QCN stations are similar to nearby GeoNet stations when the maximum likelihood method is used. Our study suggests dense low-cost accelerometer aftershock arrays can provide useful information on local-scale ground motion properties for use in microzonation. Preliminary results indicate higher amplifications north of the city centre and strong high-frequency amplification in the small, shallower basin of Heathcote Valley.

  5. IS THE IMMUNOCHROMATOGRAPHIC FECAL ANTIGEN TEST EFFECTIVE FOR PRIMARY DIAGNOSIS OF HELICOBACTER PYLORI INFECTION IN DYSPEPTIC PATIENTS?

    PubMed

    Dalla Nora, Magali; Hörner, Rosmari; De Carli, Diego Michelon; Rocha, Marta Pires da; Araujo, Amanda Faria de; Fagundes, Renato Borges

    2016-01-01

    The diagnosis of H. pylori infection can be performed by non-invasive and invasive methods. Identification through a fecal antigen test is a non-invasive, simple, and relatively inexpensive approach. To determine the diagnostic performance of the fecal antigen test in the identification of H. pylori infection. H. pylori antigens were identified in the stools of dyspeptic patients undergoing upper gastrointestinal endoscopy. For the identification of H. pylori antigen, we used ImmunoCard STAT! HpSA, an immunochromatographic technique. Histopathology plus the urease test were the gold standard. We studied 163 patients, 51% male, with a mean age of 56.7 ± 8.5 years. H. pylori infection was present in 49%. The fecal test showed: sensitivity 67.5% (CI95% 60.6-72.9); specificity 85.5% (CI95% 78.9-90.7); positive predictive value 81.8% (CI95% 73.4-88.4); and negative predictive value 73.2% (CI95% 67.5-77.6). The positive likelihood ratio was 4.7 (CI95% 2.9-7.9) and the negative likelihood ratio 0.4 (CI95% 0.3-0.5). The prevalence odds ratio for a positive test was 12.3 (CI95% 5.7-26.3). The kappa index between the FAT and histology/urease test was 0.53 (CI95% 0.39-0.64). The immunochromatographic FAT is less expensive than the other methods and readily accepted by patients, but its diagnostic performance does not support its use for primary diagnosis, when the patient may have an active infection.

  6. Effectiveness of sampling methods employed for Acanthamoeba keratitis diagnosis by culture.

    PubMed

    Muiño, Laura; Rodrigo, Donoso; Villegas, Rodrigo; Romero, Pablo; Peredo, Daniel E; Vargas, Rafael A; Liempi, Daniela; Osuna, Antonio; Jercic, María Isabel

    2018-06-18

    This retrospective, observational study was designed to evaluate the effectiveness of the sampling methods commonly used for the collection of corneal scrapes for the diagnosis of Acanthamoeba keratitis (AK) by culture, in terms of their ability to provide a positive result. A total of 553 samples from 380 patients with suspected AK received at the Parasitology Section of the Public Health Institute of Chile, between January 2005 and December 2015, were evaluated. A logistic regression model was used to determine the correlation between the culture outcome (positive or negative) and the method for sample collection. The year of sample collection was also included in the analysis as a confounding variable. Three hundred and sixty-five samples (27%) from 122 patients (32.1%) were positive by culture. The distribution of sample types was as follows: 142 corneal scrapes collected using a modified bezel needle (a novel method developed by a team of Chilean corneologists), 176 corneal scrapes obtained using a scalpel, 50 corneal biopsies, 30 corneal swabs, and 155 non-biological materials including contact lens and its paraphernalia. Biopsy provided the highest likelihood ratio for a positive result by culture (1.89), followed by non-biological materials (1.10) and corneal scrapes obtained using a modified needle (1.00). The lowest likelihood ratio was estimated for corneal scrapes obtained using a scalpel (0.88) and cotton swabs (0.78). Apart from biopsy, optimum corneal samples for the improved diagnosis of AK can be obtained using a modified bezel needle instead of a scalpel, while cotton swabs are not recommended.

  7. Implementation and assessment of a likelihood ratio approach for the evaluation of LA-ICP-MS evidence in forensic glass analysis.

    PubMed

    van Es, Andrew; Wiarda, Wim; Hordijk, Maarten; Alberink, Ivo; Vergeer, Peter

    2017-05-01

    For the comparative analysis of glass fragments, a method using Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) is in use at the NFI, giving measurements of the concentration of 18 elements. An important question is how to evaluate the results as evidence that a glass sample originates from a known glass source or from an arbitrary different glass source. One approach is the use of matching criteria e.g. based on a t-test or overlap of confidence intervals. An important drawback of this method is the fact that the rarity of the glass composition is not taken into account. A similar match can have widely different evidential values. In addition the use of fixed matching criteria can give rise to a "fall off the cliff" effect. Small differences may result in a match or a non-match. In this work a likelihood ratio system is presented, largely based on the two-level model as proposed by Aitken and Lucy [1], and Aitken, Zadora and Lucy [2]. Results show that the output from the two-level model gives good discrimination between same and different source hypotheses, but a post-hoc calibration step is necessary to improve the accuracy of the likelihood ratios. Subsequently, the robustness and performance of the LR system are studied. Results indicate that the output of the LR system is robust to the sample properties of the dataset used for calibration. Furthermore, the empirical upper and lower bound method [3], designed to deal with extrapolation errors in the density models, results in minimum and maximum values of the LR outputted by the system of 3.1×10⁻³ and 3.4×10⁴. Calibration of the system, as measured by empirical cross-entropy, shows good behavior over the complete prior range. Rates of misleading evidence are small: for same-source comparisons, 0.3% of LRs support a different-source hypothesis; for different-source comparisons, 0.2% supports a same-source hypothesis. The authors use the LR system in reporting of glass cases to support expert opinion in the interpretation of glass evidence for origin of source questions. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
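
    The sketch below is a heavily simplified, univariate stand-in for the likelihood-ratio idea behind such systems: the numerator measures agreement with the control source and the denominator measures how typical the composition is in the wider population. It is not the NFI two-level model, and all numbers are invented for illustration.

```python
# Simplified univariate source-level likelihood ratio:
# LR = f(recovered | control source) / f(recovered | population of sources).
from scipy.stats import norm

control_mean = 0.52                     # element concentration measured on the control glass
sigma_within = 0.01                     # assumed within-source spread
pop_mean, sigma_between = 0.40, 0.08    # assumed population of glass sources

def simple_lr(y_recovered: float) -> float:
    numerator = norm.pdf(y_recovered, loc=control_mean, scale=sigma_within)
    denominator = norm.pdf(y_recovered, loc=pop_mean, scale=sigma_between)
    return numerator / denominator

print(simple_lr(0.515))   # close to the control, rare in the population: LR well above 1
print(simple_lr(0.40))    # far from the control, typical of the population: LR near 0
```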

  8. A feasibility study on bedside upper airway ultrasonography compared to waveform capnography for verifying endotracheal tube location after intubation

    PubMed Central

    2013-01-01

    Background In emergency settings, verification of endotracheal tube (ETT) location is important for critically ill patients. Ignorance of oesophageal intubation can be disastrous. Many methods are used for verification of the endotracheal tube location; none are ideal. Quantitative waveform capnography is considered the standard of care for this purpose but is not always available and is expensive. Therefore, this feasibility study is conducted to compare a cheaper alternative, bedside upper airway ultrasonography to waveform capnography, for verification of endotracheal tube location after intubation. Methods This was a prospective, single-centre, observational study, conducted at the HRPB, Ipoh. It included patients who were intubated in the emergency department from 28 March 2012 to 17 August 2012. A waiver of consent had been obtained from the Medical Research Ethics Committee. Bedside upper airway ultrasonography was performed after intubation and compared to waveform capnography. Specificity, sensitivity, positive and negative predictive value and likelihood ratio are calculated. Results A sample of 107 patients were analysed, and 6 (5.6%) had oesophageal intubations. The overall accuracy of bedside upper airway ultrasonography was 98.1% (95% confidence interval (CI) 93.0% to 100.0%). The kappa value (Κ) was 0.85, indicating a very good agreement between the bedside upper airway ultrasonography and waveform capnography. Thus, bedside upper airway ultrasonography is in concordance with waveform capnography. The sensitivity, specificity, positive predictive value and negative predictive value of bedside upper airway ultrasonography were 98.0% (95% CI 93.0% to 99.8%), 100% (95% CI 54.1% to 100.0%), 100% (95% CI 96.3% to 100.0%) and 75.0% (95% CI 34.9% to 96.8%). The likelihood ratio of a positive test is infinite and the likelihood ratio of a negative test is 0.0198 (95% CI 0.005 to 0.0781). The mean confirmation time by ultrasound is 16.4 s. No adverse effects were recorded. Conclusions Our study shows that ultrasonography can replace waveform capnography in confirming ETT placement in centres without capnography. This can reduce incidence of unrecognised oesophageal intubation and prevent morbidity and mortality. Trial registration National Medical Research Register NMRR11100810230. PMID:23826756

  9. Robustness of fit indices to outliers and leverage observations in structural equation modeling.

    PubMed

    Yuan, Ke-Hai; Zhong, Xiaoling

    2013-06-01

    Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  10. Can 3-dimensional power Doppler indices improve the prenatal diagnosis of a potentially morbidly adherent placenta in patients with placenta previa?

    PubMed

    Haidar, Ziad A; Papanna, Ramesha; Sibai, Baha M; Tatevian, Nina; Viteri, Oscar A; Vowels, Patricia C; Blackwell, Sean C; Moise, Kenneth J

    2017-08-01

    Traditionally, 2-dimensional ultrasound parameters have been used for the diagnosis of a suspected morbidly adherent placenta previa. More objective techniques have not been well studied yet. The objective of the study was to determine the ability of prenatal 3-dimensional power Doppler analysis of flow and vascular indices to predict the morbidly adherent placenta objectively. A prospective cohort study was performed in women between 28 and 32 gestational weeks with known placenta previa. Patients underwent a two-dimensional gray-scale ultrasound that determined management decisions. 3-Dimensional power Doppler volumes were obtained during the same examination and vascular, flow, and vascular flow indices were calculated after manual tracing of the viewed placenta in the sweep; data were blinded to obstetricians. Morbidly adherent placenta was confirmed by histology. Severe morbidly adherent placenta was defined as increta/percreta on histology, blood loss >2000 mL, and >2 units of PRBC transfused. Sensitivities, specificities, predictive values, and likelihood ratios were calculated. Student t and χ 2 tests, logistic regression, receiver-operating characteristic curves, and intra- and interrater agreements using Kappa statistics were performed. The following results were found: (1) 50 women were studied: 23 had morbidly adherent placenta, of which 12 (52.2%) were severe morbidly adherent placenta; (2) 2-dimensional parameters diagnosed morbidly adherent placenta with a sensitivity of 82.6% (95% confidence interval, 60.4-94.2), a specificity of 88.9% (95% confidence interval, 69.7-97.1), a positive predictive value of 86.3% (95% confidence interval, 64.0-96.4), a negative predictive value of 85.7% (95% confidence interval, 66.4-95.3), a positive likelihood ratio of 7.4 (95% confidence interval, 2.5-21.9), and a negative likelihood ratio of 0.2 (95% confidence interval, 0.08-0.48); (3) mean values of the vascular index (32.8 ± 7.4) and the vascular flow index (14.2 ± 3.8) were higher in morbidly adherent placenta (P < .001); (4) area under the receiver-operating characteristic curve for the vascular and vascular flow indices were 0.99 and 0.97, respectively; (5) the vascular index ≥21 predicted morbidly adherent placenta with a sensitivity and a specificity of 95% (95% confidence interval, 88.2-96.9) and 91%, respectively (95% confidence interval, 87.5-92.4), 92% positive predictive value (95% confidence interval, 85.5-94.3), 90% negative predictive value (95% confidence interval, 79.9-95.3), positive likelihood ratio of 10.55 (95% confidence interval, 7.06-12.75), and negative likelihood ratio of 0.05 (95% confidence interval, 0.03-0.13); and (6) for the severe morbidly adherent placenta, 2-dimensional ultrasound had a sensitivity of 33.3% (95% confidence interval, 11.3-64.6), a specificity of 81.8% (95% confidence interval, 47.8-96.8), a positive predictive value of 66.7% (95% confidence interval, 24.1-94.1), a negative predictive value of 52.9% (95% confidence interval, 28.5-76.1), a positive likelihood ratio of 1.83 (95% confidence interval, 0.41-8.11), and a negative likelihood ratio of 0.81 (95% confidence interval, 0.52-1.26). 
    A vascular index ≥31 predicted the diagnosis of a severe morbidly adherent placenta with a 100% sensitivity (95% confidence interval, 72-100), a 90% specificity (95% confidence interval, 81.7-93.8), an 88% positive predictive value (95% confidence interval, 55.0-91.3), a 100% negative predictive value (95% confidence interval, 90.9-100), a positive likelihood ratio of 10.0 (95% confidence interval, 3.93-16.13), and a negative likelihood ratio of 0 (95% confidence interval, 0-0.34). Intrarater and interrater agreements were 94% (P < .001) and 93% (P < .001), respectively. The vascular index accurately predicts the morbidly adherent placenta in patients with placenta previa. In addition, 3-dimensional power Doppler vascular and vascular flow indices were more predictive of severe cases of morbidly adherent placenta compared with 2-dimensional ultrasound. This objective technique may limit the variations in diagnosing morbidly adherent placenta because of the subjectivity of 2-dimensional ultrasound interpretations. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Fecal immunochemical test for predicting mucosal healing in ulcerative colitis patients: A systematic review and meta-analysis.

    PubMed

    Dai, Cong; Jiang, Min; Sun, Ming-Jun; Cao, Qin

    2018-05-01

    The fecal immunochemical test (FIT) is a promising marker for assessment of inflammatory bowel disease activity. However, the utility of FIT for predicting mucosal healing (MH) of ulcerative colitis (UC) patients has yet to be clearly demonstrated. The objective of our study was to perform a diagnostic test accuracy meta-analysis evaluating the accuracy of FIT in predicting MH of UC patients. We systematically searched the databases from inception to November 2017 for studies that evaluated MH in UC. The methodological quality of each study was assessed according to the Quality Assessment of Diagnostic Accuracy Studies checklist. The extracted data were pooled using a summary receiver operating characteristic curve model. A random-effects model was used to summarize the diagnostic odds ratio, sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Six studies comprising 625 UC patients were included in the meta-analysis. The pooled sensitivity and specificity values for predicting MH in UC were 0.77 (95% confidence interval [CI], 0.72-0.81) and 0.81 (95% CI, 0.76-0.85), respectively. The FIT level had a high rule-in value (positive likelihood ratio, 3.79; 95% CI, 2.85-5.03) and a moderate rule-out value (negative likelihood ratio, 0.26; 95% CI, 0.16-0.43) for predicting MH in UC. The results of the receiver operating characteristic curve analysis (area under the curve, 0.88; standard error of the mean, 0.02) and diagnostic odds ratio (18.08; 95% CI, 9.57-34.13) also revealed improved discrimination for identifying MH in UC with FIT concentration. Our meta-analysis has found that FIT is a simple, reliable non-invasive marker for predicting MH in UC patients. © 2018 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.

  12. Center for Intelligent Control Systems

    DTIC Science & Technology

    1992-12-01

    difficult than anyone expected 50 years ago, and it now seems that it will require inputs from such diverse fields as brain and cognitive science ... [The remainder of this record is OCR residue from the report's index of technical papers; recoverable entries include "Two-Player, Zero-Sum Differential Games" (Souganidis, 5/1/87), "Statistical Methods for..." (Geman), "Extremal Properties of Likelihood-Ratio Quantizers" (Tsitsiklis, 11/1/89), and "Online Tracking of Mobile..." (Awerbuch), with further entries attributed to Willsky, Fleming, Mansour, and Shavit.]

  13. Practical Implementation of Multiple Model Adaptive Estimation Using Neyman-Pearson Based Hypothesis Testing and Spectral Estimation Tools

    DTIC Science & Technology

    1996-09-01

    Generalized Likelihood Ratio (GLR) and voting techniques. The third class consisted of multiple hypothesis filter detectors, specifically the MMAE. The ... vector version, versus a tensor if we use the matrix version of the power spectral density estimate. Using this notation, we will derive an ... as MATLAB, have an intrinsic sample covariance computation available, which makes this method quite easy to implement. In practice, the mean for the ...

  14. A diagnostic-ratio approach to measuring beliefs about the leadership abilities of male and female managers.

    PubMed

    Martell, R F; Desmet, A L

    2001-12-01

    This study departed from previous research on gender stereotyping in the leadership domain by adopting a more comprehensive view of leadership and using a diagnostic-ratio measurement strategy. One hundred and fifty-one managers (95 men and 56 women) judged the leadership effectiveness of male and female middle managers by providing likelihood ratings for 14 categories of leader behavior. As expected, the likelihood ratings for some leader behaviors were greater for male managers, whereas for other leader behaviors, the likelihood ratings were greater for female managers or were no different. Leadership ratings revealed some evidence of a same-gender bias. Providing explicit verification of managerial success had only a modest effect on gender stereotyping. The merits of adopting a probabilistic approach in examining the perception and treatment of stigmatized groups are discussed.

  15. Fault detection and isolation in GPS receiver autonomous integrity monitoring based on chaos particle swarm optimization-particle filter algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao

    2018-03-01

    Receiver autonomous integrity monitoring (RAIM) is one of the most important parts of an avionic navigation system. Two problems need to be addressed to improve this system: the degeneracy phenomenon and the lack of samples in the standard particle filter (PF), where the available samples cannot adequately express the real distribution of the probability density function (i.e., sample impoverishment). This study presents a GPS RAIM method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of the optimization variables to improve particle quality. This chaos perturbation overcomes the potential for the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed method is better than that of RAIM based on the PF or PSO-PF algorithm.

  16. Identical twins in forensic genetics - Epidemiology and risk based estimation of weight of evidence.

    PubMed

    Tvedebrink, Torben; Morling, Niels

    2015-12-01

    The increase in the number of forensic genetic loci used for identification purposes results in infinitesimal random match probabilities. These probabilities are computed under assumptions made for rather simple population genetic models. Often, the forensic expert reports likelihood ratios where the alternative hypothesis is assumed not to encompass close relatives. However, this approach discards important factors present in real human populations and may be very unfavourable to the defendant. In this paper, we discuss some important aspects concerning the closest familial relationship, i.e., identical (monozygotic) twins, when reporting the weight of evidence. This can be done even when the suspect has no knowledge of an identical twin or when official records hold no twin information about the suspect. The derived expressions are not original, as several authors have previously published results accounting for close familial relationships. However, we revisit the discussion to increase awareness among forensic genetic practitioners and include new information on medical and societal factors to assess the risk of not considering a monozygotic twin as the true perpetrator. Accounting for a monozygotic twin in the weight of evidence implies that the likelihood ratio is truncated at a maximal value that depends on the prevalence of monozygotic twins and the societal efficiency of recognising a monozygotic twin. If a monozygotic twin is considered as an alternative proposition, then data relevant to Danish society suggest that the threshold of likelihood ratios should be approximately between 150,000 and 2,000,000 in order to take the risk of an unrecognised identical, monozygotic twin into consideration. In other societies, the threshold of the likelihood ratio in crime cases may reach other, often lower, values depending on the recognition of monozygotic twins and the age of the suspect. In general, more strictly kept registries will imply larger thresholds on the likelihood ratio as the monozygotic twin explanation gets less probable. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
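
    A back-of-the-envelope version of the truncation argument is sketched below: if an unrecognised identical twin would match the profile with certainty, the defence likelihood cannot fall below the probability that such a twin exists and goes unrecognised, which caps the likelihood ratio at the reciprocal of that probability. The prevalence and recognition figures in the example are illustrative assumptions, not the paper's Danish estimates.

```python
def lr_ceiling(p_mz_twin: float, p_unrecognised: float) -> float:
    # If an unrecognised identical twin would match the DNA profile with
    # certainty, the denominator of the LR cannot fall below
    # p_mz_twin * p_unrecognised, so the LR is capped at its reciprocal.
    return 1.0 / (p_mz_twin * p_unrecognised)

# Assumed inputs: ~0.4% of individuals have a monozygotic twin, and some
# fraction of those twins would go unrecognised by records and registries.
print(f"{lr_ceiling(0.004, 0.001):,.0f}")    # strictly kept registries: cap = 250,000
print(f"{lr_ceiling(0.004, 0.0001):,.0f}")   # near-perfect recognition: cap = 2,500,000
```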

  17. Experimental study of near-field entrainment of moderately overpressured jets

    USGS Publications Warehouse

    Solovitz, S.A.; Mastin, L.G.; Saffaraval, F.

    2011-01-01

    Particle image velocimetry (PIV) experiments have been conducted to study the velocity flow fields in the developing flow region of high-speed jets. These velocity distributions were examined to determine the entrained mass flow over a range of geometric and flow conditions, including overpressured cases up to an overpressure ratio of 2.83. In the region near the jet exit, all measured flows exhibited the same entrainment up until the location of the first shock when overpressured. Beyond this location, the entrainment was reduced with increasing overpressure ratio, falling to approximately 60% of the magnitudes seen when subsonic. Since entrainment ratios based on lower-speed, subsonic results are typically used in one-dimensional volcanological models of plume development, the current analytical methods will underestimate the likelihood of column collapse. In addition, the concept of the entrainment ratio normalization is examined in detail, as several key assumptions in this methodology do not apply when overpressured.

  18. Hybrid approach combining chemometrics and likelihood ratio framework for reporting the evidential value of spectra.

    PubMed

    Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema

    2016-08-10

    Many chemometric tools are invaluable and have proven effective in data mining and substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data due to rapid development of advanced analytical techniques, delivering much information in a single measurement run. This concerns especially spectra, which are frequently used as the subject of comparative analysis in e.g. forensic sciences. In the presented study the microtraces collected from the scenarios of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry and car paints were analysed using Raman spectroscopy. In the forensic context analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences using the likelihood ratio approach. However, for proper construction of LR models for highly multivariate data, such as spectra, chemometric tools must be employed for substantial data compression. Conversion from classical feature representation to distance representation was proposed for revealing hidden data peculiarities and linear discriminant analysis was further applied for minimising the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem of highly multivariate and correlated data after proper extraction of the most relevant features and variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Using the β-binomial distribution to characterize forest health

    Treesearch

    S.J. Zarnoch; R.L. Anderson; R.M. Sheffield

    1995-01-01

    The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
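
    A minimal sketch of the approach described above, assuming SciPy's beta-binomial distribution and simulated plot-level counts, fits the two beta-binomial parameters by maximum likelihood and forms an asymptotic likelihood ratio test against a plain binomial with a single infection probability.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom, binom, chi2

rng = np.random.default_rng(3)
n_trees = 20                                   # trees examined per plot (assumed)
counts = betabinom.rvs(n_trees, a=2.0, b=6.0, size=50, random_state=rng)  # simulated infected counts

def nll_betabinom(log_ab):
    a, b = np.exp(log_ab)                      # keep both shape parameters positive
    return -np.sum(betabinom.logpmf(counts, n_trees, a, b))

fit = minimize(nll_betabinom, x0=[0.0, 0.0])
a_hat, b_hat = np.exp(fit.x)

# Binomial null: a single proportion p, fitted by its MLE (the pooled mean).
p_hat = counts.mean() / n_trees
ll_binom = np.sum(binom.logpmf(counts, n_trees, p_hat))
ll_bb = -fit.fun

lr_stat = 2.0 * (ll_bb - ll_binom)
# One extra parameter, so chi-square(1) is the usual reference; it is only
# approximate here because the binomial sits on the edge of the beta-binomial family.
print(f"a={a_hat:.2f}, b={b_hat:.2f}, LR stat={lr_stat:.2f}, p~{chi2.sf(lr_stat, df=1):.3g}")
```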

  20. Multi-target Detection, Tracking, and Data Association on Road Networks Using Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Barkley, Brett E.

    A cooperative detection and tracking algorithm for multiple targets constrained to a road network is presented for fixed-wing Unmanned Air Vehicles (UAVs) with a finite field of view. Road networks of interest are formed into graphs with nodes that indicate the target likelihood ratio (before detection) and position probability (after detection). A Bayesian likelihood ratio tracker recursively assimilates target observations until the cumulative observations at a particular location pass a detection criterion. At this point, a target is considered detected and a position probability is generated for the target on the graph. Data association is subsequently used to route future measurements to update the likelihood ratio tracker (for undetected target) or to update a position probability (a previously detected target). Three strategies for motion planning of UAVs are proposed to balance searching for new targets with tracking known targets for a variety of scenarios. Performance was tested in Monte Carlo simulations for a variety of mission parameters, including tracking on road networks with varying complexity and using UAVs at various altitudes.
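
    The sketch below is a toy version of the recursive likelihood-ratio bookkeeping described above: each node in view receives a log-likelihood-ratio increment from the current observation, and a node is declared a detected target once its cumulative value crosses a threshold. Field-of-view size, detection and false-alarm probabilities, and the threshold are all illustrative assumptions.

```python
import numpy as np

PD, PFA = 0.8, 0.1                       # assumed detection / false-alarm probabilities
DETECT_THRESHOLD = np.log(100.0)         # declare a target once the LLR exceeds this

n_nodes = 25
llr = np.zeros(n_nodes)                  # cumulative log-likelihood ratio per road-network node
rng = np.random.default_rng(4)
true_target_node = 7

for step in range(40):
    in_view = rng.choice(n_nodes, size=5, replace=False)   # finite field of view
    for node in in_view:
        hit = rng.random() < (PD if node == true_target_node else PFA)
        # Bayesian recursion: add this observation's log-likelihood ratio.
        llr[node] += np.log(PD / PFA) if hit else np.log((1 - PD) / (1 - PFA))

detected = np.flatnonzero(llr > DETECT_THRESHOLD)
print("declared target nodes:", detected, "| true node:", true_target_node)
```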

  1. Assessment of glenohumeral subluxation in poststroke hemiplegia: comparison between ultrasound and fingerbreadth palpation methods.

    PubMed

    Kumar, Praveen; Mardon, Marianne; Bradley, Michael; Gray, Selena; Swinkels, Annette

    2014-11-01

    Glenohumeral subluxation (GHS) is a common poststroke complication. Treatment of GHS is hampered by the lack of objective, real-time clinical measurements. The aims of this study were: (1) to compare an ultrasound method of GHS measurement with the fingerbreadth palpation method using a receiver operating characteristic curve (ROC) and (2) to report the sensitivity and specificity of this method. A prospective study was conducted. The study was conducted in local hospitals and day centers in the southwest of England. One hundred five patients who had one-sided weakness following a first-time stroke (51 men, 54 women; mean age=71 years, SD=11) and who gave informed consent were enrolled in the study. Ultrasound measurements of acromion-greater tuberosity (AGT) distance were used for the assessment of GHS. Measurements were undertaken on both shoulders by a research physical therapist trained in shoulder ultrasound with the patient seated in a standardized position. Fingerbreadth palpation assessment of GHS was undertaken by a clinical physical therapist based at the hospital, who also visited the day centers. The area under the ROC curve was 0.73 (95% confidence interval [95% CI]=0.63, 0.83), suggesting that the ultrasound method has good agreement compared with the fingerbreadth palpation method. A cutoff point of ≥0.2 cm AGT measurement difference between affected and unaffected shoulders generated a sensitivity of 68% (95% CI=51%, 75%), a specificity of 62% (95% CI=47%, 80%), a positive likelihood ratio of 1.79 (95% CI=1.1, 2.9), and a negative likelihood ratio of 0.55 (95% CI=0.4, 0.8). Clinical therapists involved in the routine care of patients conducted the fingerbreadth palpation method. It is likely that they were aware of the patients' subluxation status. The ultrasound method can detect minor asymmetry (≤0.5 cm) and has the potential advantage over the fingerbreadth palpation method of identifying patients with minor subluxation. © 2014 American Physical Therapy Association.

  2. Optimum detection of tones transmitted by a spacecraft

    NASA Technical Reports Server (NTRS)

    Simon, M. K.; Shihabi, M. M.; Moon, T.

    1995-01-01

    The performance of a scheme proposed for automated routine monitoring of deep-space missions is presented. The scheme uses four different tones (sinusoids) transmitted from the spacecraft (S/C) to a ground station with the positive identification of each of them used to indicate different states of the S/C. Performance is measured in terms of detection probability versus false alarm probability with detection signal-to-noise ratio as a parameter. The cases where the phase of the received tone is unknown and where both the phase and frequency of the received tone are unknown are treated separately. The decision rules proposed for detecting the tones are formulated from average-likelihood ratio and maximum-likelihood ratio tests, the former resulting in optimum receiver structures.

  3. A case study for the integration of predictive mineral potential maps

    NASA Astrophysics Data System (ADS)

    Lee, Saro; Oh, Hyun-Joo; Heo, Chul-Ho; Park, Inhye

    2014-09-01

    This study aims to produce mineral potential maps for epithermal gold (Au)-silver (Ag) deposits using various models in a Geographic Information System (GIS) environment and to verify their accuracy, assuming that all deposits share a common genesis. The maps of potential Au and Ag deposits were produced from geological data in the Taebaeksan mineralized area, Korea. The methodological framework consists of three main steps: 1) identification of spatial relationships, 2) quantification of such relationships, and 3) combination of multiple quantified relationships. A spatial database containing 46 Au-Ag deposits was constructed using GIS. The spatial associations between training deposits and 26 related factors were identified and quantified by probabilistic and statistical modelling. The mineral potential maps were generated by integrating all factors using the overlay method and recombined afterwards using the likelihood ratio model. They were verified by comparison with test mineral deposit locations. The verification revealed that the combined mineral potential map had the greatest accuracy (83.97%), whereas it was 72.24%, 65.85%, 72.23% and 71.02% for the likelihood ratio, weight of evidence, logistic regression and artificial neural network models, respectively. The mineral potential map can provide useful information for mineral resource development.

  4. Evaluation of a new point-of-care test for influenza A and B virus in travellers with influenza-like symptoms.

    PubMed

    Weitzel, T; Schnabel, E; Dieckmann, S; Börner, U; Schweiger, B

    2007-07-01

    Point-of-care (POC) tests for influenza facilitate clinical case management, and might also be helpful in the care of travellers who are at special risk for influenza infection. To evaluate influenza POC testing in travellers, a new assay, the ImmunoCard STAT! Flu A and B, was used to investigate travellers presenting with influenza-like symptoms. Influenza virus infection was diagnosed in 27 (13%) of 203 patients by influenza virus-specific PCR and viral culture. The POC test had sensitivity and specificity values of 64% and 99% for influenza A, and 67% and 100% for influenza B, respectively. Combined sensitivity and specificity were 67% and 99%, respectively, yielding positive and negative predictive values of 95%, and positive and negative likelihood ratios of 117 and 0.34, respectively. The convenient application, excellent specificity and high positive likelihood ratio of the POC test allowed rapid identification of influenza cases. However, negative test results might require confirmation by other methods because of limitations in sensitivity. Overall, influenza POC testing appeared to be a useful tool for the management of travellers with influenza-like symptoms.

  5. Point of Care Ultrasound Accurately Distinguishes Inflammatory from Noninflammatory Disease in Patients Presenting with Abdominal Pain and Diarrhea

    PubMed Central

    Novak, Kerri L.; Jacob, Deepti; Kaplan, Gilaad G.; Boyce, Emma; Ghosh, Subrata; Ma, Irene; Lu, Cathy; Wilson, Stephanie; Panaccione, Remo

    2016-01-01

    Background. Approaches to distinguish inflammatory bowel disease (IBD) from noninflammatory disease that are noninvasive, accurate, and readily available are desirable. Such approaches may decrease time to diagnosis and better utilize limited endoscopic resources. The aim of this study was to evaluate the diagnostic accuracy for gastroenterologist performed point of care ultrasound (POCUS) in the detection of luminal inflammation relative to gold standard ileocolonoscopy. Methods. A prospective, single-center study was conducted on convenience sample of patients presenting with symptoms of diarrhea and/or abdominal pain. Patients were offered POCUS prior to having ileocolonoscopy. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) with 95% confidence intervals (CI), as well as likelihood ratios, were calculated. Results. Fifty-eight patients were included in this study. The overall sensitivity, specificity, PPV, and NPV were 80%, 97.8%, 88.9%, and 95.7%, respectively, with positive and negative likelihood ratios (LR) of 36.8 and 0.20. Conclusion. POCUS can accurately be performed at the bedside to detect transmural inflammation of the intestine. This noninvasive approach may serve to expedite diagnosis, improve allocation of endoscopic resources, and facilitate initiation of appropriate medical therapy. PMID:27446838

  6. Investigating pulmonary embolism in the emergency department with lower limb plethysmography: the Manchester Investigation of Pulmonary Embolism Diagnosis (MIOPED) study

    PubMed Central

    Hogg, K; Dawson, D; Mackway‐Jones, K

    2006-01-01

    Objectives To measure the diagnostic accuracy of computerised strain gauge plethysmography in the diagnosis of pulmonary embolism (PE). Methods Two researchers prospectively recruited 425 patients with pleuritic chest pain presenting to the emergency department (ED). Lower limb computerised strain gauge plethysmography was performed in the ED. All patients underwent an independent reference standard diagnostic algorithm to establish the presence or absence of PE. A low modified Wells' clinical probability combined with a normal D‐dimer excluded PE. All others required diagnostic imaging with PIOPED interpreted ventilation perfusion scanning and/or computerised tomography (CT) pulmonary angiography. Patients with a nondiagnostic CT had digital subtraction pulmonary angiography. All patients were followed up clinically for 3 months. Results The sensitivity of computerised strain gauge plethysmography was 33.3% (95% confidence interval (CI) 16.3 to 56.2%) and specificity 64.1% (95% CI 59.0 to 68.8%). The negative likelihood ratio was 1.04 (95% CI 0.68 to 1.33) and positive likelihood ratio 0.93 (95% CI 0.45 to 1.60). Conclusions Lower limb computerised strain gauge plethysmography does not aid in the diagnosis of PE. PMID:16439734

  7. The comparative accuracy of rapid diagnostic test with microscopy to diagnose malaria in subdistrict lima puluh batubara regency North Sumatera province

    NASA Astrophysics Data System (ADS)

    Rezeki, S.; Pasaribu, A. P.

    2018-03-01

    Indonesia is a country where malaria remains a major public health problem. High rates of mortality and morbidity occur due to delays in diagnosis, which are strongly influenced by the availability of diagnostic tools and of personnel with the required laboratory skills. This diagnostic study aimed to compare the accuracy of a Rapid Diagnostic Test (RDT), which requires no special skills, with the gold standard microscopic method for malaria diagnosis. The study was conducted in Subdistrict Lima Puluh, North Sumatera Province, from December 2015 to January 2016. Subjects were sampled cross-sectionally from a population with characteristics typical of malaria patients in Subdistrict Lima Puluh. The results showed a sensitivity of 100% and a specificity of 72.4%, with a positive predictive value of 89.9% and a negative predictive value of 100%; the negative likelihood ratio was 0 and the positive likelihood ratio was 27.6 for Parascreen. This research indicates that Parascreen had high sensitivity and specificity and may be considered as an alternative for the diagnosis of malaria in Subdistrict Lima Puluh, North Sumatera Province, especially in areas where no skilled microscopist is available.

  8. Measures of accuracy and performance of diagnostic tests.

    PubMed

    Drobatz, Kenneth J

    2009-05-01

    Diagnostic tests are integral to the practice of veterinary cardiology, any other specialty, and general veterinary medicine. Developing and understanding diagnostic tests is one of the cornerstones of clinical research. This manuscript describes diagnostic test properties including sensitivity, specificity, predictive value, likelihood ratio, and the receiver operating characteristic curve. It is based on a review of practical book chapters and standard statistics manuscripts. Measures such as sensitivity, specificity, predictive value, likelihood ratio, and the receiver operating characteristic curve are described and illustrated. A basic understanding of how diagnostic tests are developed and interpreted is essential in reviewing clinical scientific papers and understanding evidence-based medicine.
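
    To make the test properties named above concrete, the following sketch computes them from a hypothetical 2x2 table; the counts are invented for illustration and are not from the manuscript.

        # Sketch: diagnostic test properties from a hypothetical 2x2 table.
        # tp/fp/fn/tn counts are illustrative only.

        tp, fp, fn, tn = 80, 10, 20, 90

        sensitivity = tp / (tp + fn)                 # P(test positive | disease present)
        specificity = tn / (tn + fp)                 # P(test negative | disease absent)
        ppv = tp / (tp + fp)                         # positive predictive value
        npv = tn / (tn + fn)                         # negative predictive value
        lr_pos = sensitivity / (1 - specificity)     # positive likelihood ratio
        lr_neg = (1 - sensitivity) / specificity     # negative likelihood ratio

        print(sensitivity, specificity, ppv, npv, lr_pos, lr_neg)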

  9. Search for Point Sources of Ultra-High-Energy Cosmic Rays above 4.0 × 1019 eV Using a Maximum Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.

    2005-04-01

    We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×1019 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.

  10. Recreating a functional ancestral archosaur visual pigment.

    PubMed

    Chang, Belinda S W; Jönsson, Karolina; Kazmi, Manija A; Donoghue, Michael J; Sakmar, Thomas P

    2002-09-01

    The ancestors of the archosaurs, a major branch of the diapsid reptiles, originated more than 240 MYA near the dawn of the Triassic Period. We used maximum likelihood phylogenetic ancestral reconstruction methods and explored different models of evolution for inferring the amino acid sequence of a putative ancestral archosaur visual pigment. Three different types of maximum likelihood models were used: nucleotide-based, amino acid-based, and codon-based models. Where possible, within each type of model, likelihood ratio tests were used to determine which model best fit the data. Ancestral reconstructions of the ancestral archosaur node using the best-fitting models of each type were found to be in agreement, except for three amino acid residues at which one reconstruction differed from the other two. To determine if these ancestral pigments would be functionally active, the corresponding genes were chemically synthesized and then expressed in a mammalian cell line in tissue culture. The expressed artificial genes were all found to bind to 11-cis-retinal to yield stable photoactive pigments with lambda(max) values of about 508 nm, which is slightly redshifted relative to that of extant vertebrate pigments. The ancestral archosaur pigments also activated the retinal G protein transducin, as measured in a fluorescence assay. Our results show that ancestral genes from ancient organisms can be reconstructed de novo and tested for function using a combination of phylogenetic and biochemical methods.
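
    The model selection step described above relies on likelihood ratio tests between nested models. The following sketch shows the generic calculation, with placeholder log-likelihoods and parameter counts rather than values from the study.

        # Sketch: likelihood ratio test between two nested models of sequence evolution.
        # The fitted log-likelihoods and extra parameter count are hypothetical placeholders.
        from scipy.stats import chi2

        lnL_simple, lnL_complex = -10234.7, -10198.2   # log-likelihoods of nested models
        extra_params = 3                               # additional free parameters in the complex model

        lrt_stat = 2.0 * (lnL_complex - lnL_simple)    # 2 * delta log-likelihood
        p_value = chi2.sf(lrt_stat, extra_params)      # upper-tail chi-square probability
        print(lrt_stat, p_value)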

  11. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.

  12. Top pair production in the dilepton decay channel with a tau lepton

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corbo, Matteo

    2012-09-19

    Top quark pair production and decay into leptons, with at least one being a τ lepton, is studied in the framework of the CDF experiment at the Tevatron proton-antiproton collider at Fermilab (USA). The selection requires an electron or a muon produced either by the τ lepton decay or by a W decay. The analysis uses the complete Run II data set, i.e. 9.0 fb⁻¹, selected by one trigger based on a low transverse momentum electron or muon plus one isolated charged track. The top quark pair production cross section at 1.96 TeV is measured at 8.2 ± 1.7 +1.2 -1.1 ± 0.5 pb, and the top quark branching ratio into a τ lepton is measured at 0.120 ± 0.027 +0.022 -0.019 ± 0.007, with statistical, systematic and luminosity uncertainties. These are to date the most accurate results in this top decay channel and are in good agreement with the results obtained using other decay channels of the top quark at the Tevatron. The branching ratio is also measured by separating the single-lepton from the two-lepton events with a log-likelihood method. This is the first time these two signatures have been separately identified. With a fit to data along the log-likelihood variable, an alternative measurement of the branching ratio is made: 0.098 ± 0.022 (stat.) ± 0.014 (syst.); it is in good agreement with the expectations of the Standard Model (with lepton universality) within the experimental uncertainties. The branching ratio is constrained to be less than 0.159 at 95% confidence level. This limit translates into a limit on the top quark branching ratio into a potential charged Higgs boson.

  13. 12 CFR 700.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... that the facts that caused the deficient share-asset ratio no longer exist; and (ii) The likelihood of further depreciation of the share-asset ratio is not probable; and (iii) The return of the share-asset ratio to its normal limits within a reasonable time for the credit union concerned is probable; and (iv...

  14. Demodulation of messages received with low signal to noise ratio

    NASA Astrophysics Data System (ADS)

    Marguinaud, A.; Quignon, T.; Romann, B.

    The implementation of this all-digital demodulator is derived from maximum likelihood considerations applied to an analytical representation of the received signal. Traditional matched filters and phase-locked loops are replaced by minimum variance estimators and hypothesis tests. These statistical tests become very simple when working on the phase signal. These methods, combined with rigorous control of the data representation, allow significant computation savings compared with conventional realizations. Nominal operation has been verified down to a signal energy-to-noise ratio of -3 dB on a QPSK demodulator.

  15. Diagnostic performance of coronary computed tomography angiography versus exercise electrocardiography for coronary artery disease: a systematic review and meta-analysis.

    PubMed

    Yin, Xinxin; Wang, Jiali; Zheng, Wen; Ma, Jingjing; Hao, Panpan; Chen, Yuguo

    2016-07-01

    Both coronary computed tomography angiography (CCTA) and exercise electrocardiography (ExECG) are non-invasive testing methods for the evaluation of coronary artery disease (CAD). However, there has been controversy over the diagnostic performance of these methods due to the limited data in each individual study. Therefore, we performed a meta-analysis to address these issues. We searched PubMed and Embase databases up to May 22, 2015. Two authors identified eligible studies, extracted data, and assessed study quality. Pooled estimation of sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR), summary receiver-operating characteristic curve (SROC) and the area under curve (AUC) of CCTA and ExECG for the diagnosis of CAD were calculated using Stata, Meta-Disc and Review Manager statistical software. Seven articles were included. Pooled sensitivity of CCTA and ExECG were 0.98 [95% confidence intervals (CIs): 0.95-0.99] and 0.66 (95% CIs: 0.59-0.72); pooled specificity of CCTA and ExECG were 0.84 (95% CIs: 0.81-0.87) and 0.75 (95% CIs: 0.71-0.79); pooled DOR of CCTA and ExECG were 110.24 (95% CIs: 35.07-346.55) and 6.28 (95% CIs: 2.06-19.13); and AUC of CCTA and ExECG were 0.9950±0.0046 and 0.7727±0.0638, respectively. There was no heterogeneity caused by a threshold effect in the CCTA or ExECG analysis. The Deeks' test showed no potential publication bias (P=0.17). CCTA has better diagnostic performance than ExECG in the evaluation of CAD, and can provide a better solution for the clinical problem of diagnosing CAD.

  16. Novel CPR system that predicts return of spontaneous circulation from amplitude spectral area before electric shock in ventricular fibrillation.

    PubMed

    Nakagawa, Yoshihide; Amino, Mari; Inokuchi, Sadaki; Hayashi, Satoshi; Wakabayashi, Tsutomu; Noda, Tatsuya

    2017-04-01

    Amplitude spectral area (AMSA), an index for analysing ventricular fibrillation (VF) waveforms, is thought to predict the return of spontaneous circulation (ROSC) after electric shocks, but its validity is unconfirmed. We developed an equation to predict ROSC, where the change in AMSA (ΔAMSA) is added to AMSA measured immediately before the first shock (AMSA1). We examine the validity of this equation by comparing it with the conventional AMSA1-only equation. We retrospectively investigated 285 VF patients given prehospital electric shocks by emergency medical services. ΔAMSA was calculated by subtracting AMSA1 from last AMSA immediately before the last prehospital electric shock. Multivariate logistic regression analysis was performed using post-shock ROSC as a dependent variable. Analysis data were subjected to receiver operating characteristic curve analysis, goodness-of-fit testing using a likelihood ratio test, and the bootstrap method. AMSA1 (odds ratio (OR) 1.151, 95% confidence interval (CI) 1.086-1.220) and ΔAMSA (OR 1.289, 95% CI 1.156-1.438) were independent factors influencing ROSC induction by electric shock. Area under the curve (AUC) for predicting ROSC was 0.851 for AMSA1-only and 0.891 for AMSA1+ΔAMSA. Compared with the AMSA1-only equation, the AMSA1+ΔAMSA equation had significantly better goodness-of-fit (likelihood ratio test P<0.001) and showed good fit in the bootstrap method. Post-shock ROSC was accurately predicted by adding ΔAMSA to AMSA1. AMSA-based ROSC prediction enables application of electric shock to only those patients with high probability of ROSC, instead of interrupting chest compressions and delivering unnecessary shocks to patients with low probability of ROSC. Copyright © 2017 Elsevier B.V. All rights reserved.
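
    As a side illustration of the discrimination metric used above, an AUC such as those reported (0.851 and 0.891) can be computed directly from predicted scores and binary outcomes via the rank (Mann-Whitney) formulation. The scores and outcomes below are invented and are not the study's data.

        # Sketch: AUC from predicted scores and binary outcomes using the
        # Mann-Whitney (rank) formulation; the data below are illustrative only.
        import numpy as np

        def auc_from_scores(scores, labels):
            scores = np.asarray(scores, dtype=float)
            labels = np.asarray(labels, dtype=bool)
            pos, neg = scores[labels], scores[~labels]
            # Count case-control pairs where the positive case scores higher; ties count 0.5.
            greater = (pos[:, None] > neg[None, :]).sum()
            ties = (pos[:, None] == neg[None, :]).sum()
            return (greater + 0.5 * ties) / (len(pos) * len(neg))

        scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]   # hypothetical predicted ROSC probabilities
        rosc   = [1,   1,   0,   1,   0,    0,   1,   0]     # hypothetical observed outcomes
        print(auc_from_scores(scores, rosc))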

  17. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood

    ERIC Educational Resources Information Center

    Karabatsos, George

    2017-01-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon…

  18. Prolonged Operative Duration Increases Risk of Surgical Site Infections: A Systematic Review

    PubMed Central

    Chen, Brian Po-Han; Soleas, Ireena M.; Ferko, Nicole C.; Cameron, Chris G.; Hinoul, Piet

    2017-01-01

    Abstract Background: The incidence of surgical site infection (SSI) across surgical procedures, specialties, and conditions is reported to vary from 0.1% to 50%. Operative duration is often cited as an independent and potentially modifiable risk factor for SSI. The objective of this systematic review was to provide an in-depth understanding of the relation between operating time and SSI. Patients and Methods: This review included 81 prospective and retrospective studies. Along with study design, likelihood of SSI, mean operative times, time thresholds, effect measures, confidence intervals, and p values were extracted. Three meta-analyses were conducted, whereby odds ratios were pooled by hourly operative time thresholds, increments of increasing operative time, and surgical specialty. Results: Pooled analyses demonstrated that the association between extended operative time and SSI typically remained statistically significant, with close to twice the likelihood of SSI observed across various time thresholds. The likelihood of SSI increased with increasing time increments; for example, a 13%, 17%, and 37% increased likelihood for every 15 min, 30 min, and 60 min of surgery, respectively. On average, across various procedures, the mean operative time was approximately 30 min longer in patients with SSIs compared with those patients without. Conclusions: Prolonged operative time can increase the risk of SSI. Given the importance of SSIs on patient outcomes and health care economics, hospitals should focus efforts to reduce operative time. PMID:28832271

  19. Adult Age Differences in Frequency Estimations of Happy and Angry Faces

    ERIC Educational Resources Information Center

    Nikitin, Jana; Freund, Alexandra M.

    2015-01-01

    With increasing age, the ratio of gains to losses becomes more negative, which is reflected in expectations that positive events occur with a high likelihood in young adulthood, whereas negative events occur with a high likelihood in old age. Little is known about expectations of social events. Given that younger adults are motivated to establish…

  20. Reanalysis of cancer mortality in Japanese A-bomb survivors exposed to low doses of radiation: bootstrap and simulation methods

    PubMed Central

    2009-01-01

    Background The International Commission on Radiological Protection (ICRP) recommended annual occupational dose limit is 20 mSv. Cancer mortality in Japanese A-bomb survivors exposed to less than 20 mSv external radiation in 1945 was analysed previously, using a latency model with non-linear dose response. Questions were raised regarding statistical inference with this model. Methods Cancers with over 100 deaths in the 0 - 20 mSv subcohort of the 1950-1990 Life Span Study are analysed with Poisson regression models incorporating latency, allowing linear and non-linear dose response. Bootstrap percentile and Bias-corrected accelerated (BCa) methods and simulation of the Likelihood Ratio Test lead to Confidence Intervals for Excess Relative Risk (ERR) and tests against the linear model. Results The linear model shows significant large, positive values of ERR for liver and urinary cancers at latencies from 37 - 43 years. Dose response below 20 mSv is strongly non-linear at the optimal latencies for the stomach (11.89 years), liver (36.9), lung (13.6), leukaemia (23.66), and pancreas (11.86) and across broad latency ranges. Confidence Intervals for ERR are comparable using Bootstrap and Likelihood Ratio Test methods and BCa 95% Confidence Intervals are strictly positive across latency ranges for all 5 cancers. Similar risk estimates for 10 mSv (lagged dose) are obtained from the 0 - 20 mSv and 5 - 500 mSv data for the stomach, liver, lung and leukaemia. Dose response for the latter 3 cancers is significantly non-linear in the 5 - 500 mSv range. Conclusion Liver and urinary cancer mortality risk is significantly raised using a latency model with linear dose response. A non-linear model is strongly superior for the stomach, liver, lung, pancreas and leukaemia. Bootstrap and Likelihood-based confidence intervals are broadly comparable and ERR is strictly positive by bootstrap methods for all 5 cancers. Except for the pancreas, similar estimates of latency and risk from 10 mSv are obtained from the 0 - 20 mSv and 5 - 500 mSv subcohorts. Large and significant cancer risks for Japanese survivors exposed to less than 20 mSv external radiation from the atomic bombs in 1945 cast doubt on the ICRP recommended annual occupational dose limit. PMID:20003238
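
    For readers unfamiliar with the resampling step mentioned above, the sketch below shows a percentile bootstrap confidence interval in its simplest form. The sample of excess relative risk values is synthetic; it is not the Life Span Study data, and the BCa correction and likelihood-ratio simulation used in the paper are not reproduced here.

        # Sketch: percentile bootstrap confidence interval for a mean risk estimate.
        # The "ERR" sample below is synthetic, for illustration only.
        import numpy as np

        rng = np.random.default_rng(0)
        err_estimates = rng.normal(loc=0.5, scale=0.3, size=200)   # stand-in sample of ERR values

        boot_means = np.array([
            rng.choice(err_estimates, size=err_estimates.size, replace=True).mean()
            for _ in range(5000)
        ])
        lower, upper = np.percentile(boot_means, [2.5, 97.5])
        print(lower, upper)   # 95% percentile bootstrap interval for the mean ERR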

  1. VarBin, a novel method for classifying true and false positive variants in NGS data

    PubMed Central

    2013-01-01

    Background Variant discovery for rare genetic diseases using Illumina genome or exome sequencing involves screening of up to millions of variants to find only the one or few causative variant(s). Sequencing or alignment errors create "false positive" variants, which are often retained in the variant screening process. Methods to remove false positive variants often retain many false positive variants. This report presents VarBin, a method to prioritize variants based on a false positive variant likelihood prediction. Methods VarBin uses the Genome Analysis Toolkit variant calling software to calculate the variant-to-wild type genotype likelihood ratio at each variant change and position divided by read depth. The resulting Phred-scaled, likelihood-ratio by depth (PLRD) was used to segregate variants into 4 Bins with Bin 1 variants most likely true and Bin 4 most likely false positive. PLRD values were calculated for a proband of interest and 41 additional Illumina HiSeq, exome and whole genome samples (proband's family or unrelated samples). At variant sites without apparent sequencing or alignment error, wild type/non-variant calls cluster near -3 PLRD and variant calls typically cluster above 10 PLRD. Sites with systematic variant calling problems (evident by variant quality scores and biases as well as displayed on the iGV viewer) tend to have higher and more variable wild type/non-variant PLRD values. Depending on the separation of a proband's variant PLRD value from the cluster of wild type/non-variant PLRD values for background samples at the same variant change and position, the VarBin method's classification is assigned to each proband variant (Bin 1 to Bin 4). Results To assess VarBin performance, Sanger sequencing was performed on 98 variants in the proband and background samples. True variants were confirmed in 97% of Bin 1 variants, 30% of Bin 2, and 0% of Bin 3/Bin 4. Conclusions These data indicate that VarBin correctly classifies the majority of true variants as Bin 1 and Bin 3/4 contained only false positive variants. The "uncertain" Bin 2 contained both true and false positive variants. Future work will further differentiate the variants in Bin 2. PMID:24266885
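
    A schematic of the PLRD quantity described above is sketched below, under the assumption that the variant and wild-type genotype likelihoods are available as raw probabilities. The function names, the bin thresholds, and the example numbers are hypothetical and are not VarBin's actual code or cut-offs; they only echo the clusters mentioned in the abstract.

        # Sketch of a Phred-scaled, depth-normalised likelihood ratio (PLRD-like quantity).
        # Field names, thresholds, and example values are hypothetical.
        import math

        def plrd(variant_likelihood, wildtype_likelihood, read_depth):
            """Phred-scaled variant-to-wild-type likelihood ratio per unit read depth."""
            ratio = variant_likelihood / wildtype_likelihood
            return 10.0 * math.log10(ratio) / read_depth

        def assign_bin(plrd_value, background_cluster_max=-3.0, confident_min=10.0):
            # Illustrative thresholds echoing the clusters described in the abstract.
            if plrd_value >= confident_min:
                return 1          # most likely a true variant
            if plrd_value <= background_cluster_max:
                return 4          # most likely wild type / false positive
            return 2              # uncertain

        value = plrd(1e-2, 1e-9, 40)
        print(value, assign_bin(value))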

  2. Accuracy of 99mTc (V)-Dimercaptosuccinic Acid Scintigraphy and Fecal Calprotectin Compared with Colonoscopy in Localizing Active Lesions in Inflammatory Bowel Disease

    PubMed Central

    Basirat, Vahid; Azizi, Zahra; Javid Anbardan, Sanam; Taghizadeh Asl, Mina; Farbod, Yasaman; Teimouri, Azam; Ebrahimi Daryani, Nasser

    2016-01-01

    INTRODUCTION Due to the limitations of colonoscopy in assessing the entire bowel and patients’ intolerance in inflammatory bowel disease (IBD), in the current study we aimed to prospectively compare the accuracy of 99mTc(V)-dimercaptosuccinic acid (DMSA) and fecal calprotectin with ileocolonoscopy as new methods for localizing inflammation. METHODS This prospective study was conducted between 2012 and 2014 on 30 patients with IBD attending the Gastroenterology Clinic of Tehran University of Medical Sciences. Fecal calprotectin and disease activity were measured for all participants, and all of them underwent 99mTc (V)-DMSA scintigraphy and colonoscopy. The accuracy of 99mTc (V)-DMSA scintigraphy and calprotectin in localizing bowel lesions was calculated. RESULTS A total of 22 patients with ulcerative colitis (UC) and 8 patients with Crohn’s disease (CD) were evaluated in our study. Sensitivity, positive likelihood ratio (PLR), and positive predictive value (PPV) of scintigraphy and calprotectin over colonoscopy in localization of UC lesions were 86.36%, 0.86, and 100.00% and 90.91%, 0.91, and 100.00%, respectively. Meanwhile, scintigraphy showed 66.67% sensitivity and 81.25% specificity with PLR=3.56, negative likelihood ratio (NLR)=0.41, PPV=84.21%, and negative predictive value (NPV)= 61.90% in localizing lesions in patients with CD. The calprotectin level had sensitivity, PLR, and PPV of 90.00%, 0.90, and 100.00% in detecting active disease over colonoscopy, respectively. CONCLUSION The 99mTc (V)-DMSA scintigraphy would be an accurate method for detecting active inflammation in follow-up of patients with IBD and assessing response to treatment as a non-invasive and complementary method beside colonoscopy for more accurate diagnosis of CD or UC. PMID:27698971

  3. Bias correction in the hierarchical likelihood approach to the analysis of multivariate survival data.

    PubMed

    Jeon, Jihyoun; Hsu, Li; Gorfine, Malka

    2012-07-01

    Frailty models are useful for measuring unobserved heterogeneity in risk of failures across clusters, providing cluster-specific risk prediction. In a frailty model, the latent frailties shared by members within a cluster are assumed to act multiplicatively on the hazard function. In order to obtain parameter and frailty variate estimates, we consider the hierarchical likelihood (H-likelihood) approach (Ha, Lee and Song, 2001. Hierarchical-likelihood approach for frailty models. Biometrika 88, 233-243) in which the latent frailties are treated as "parameters" and estimated jointly with other parameters of interest. We find that the H-likelihood estimators perform well when the censoring rate is low, however, they are substantially biased when the censoring rate is moderate to high. In this paper, we propose a simple and easy-to-implement bias correction method for the H-likelihood estimators under a shared frailty model. We also extend the method to a multivariate frailty model, which incorporates complex dependence structure within clusters. We conduct an extensive simulation study and show that the proposed approach performs very well for censoring rates as high as 80%. We also illustrate the method with a breast cancer data set. Since the H-likelihood is the same as the penalized likelihood function, the proposed bias correction method is also applicable to the penalized likelihood estimators.

  4. Spatial scan statistics for detection of multiple clusters with arbitrary shapes.

    PubMed

    Lin, Pei-Sheng; Kung, Yi-Hung; Clayton, Murray

    2016-12-01

    In applying scan statistics for public health research, it would be valuable to develop a detection method for multiple clusters that accommodates spatial correlation and covariate effects in an integrated model. In this article, we connect the concepts of the likelihood ratio (LR) scan statistic and the quasi-likelihood (QL) scan statistic to provide a series of detection procedures sufficiently flexible to apply to clusters of arbitrary shape. First, we use an independent scan model for detection of clusters and then a variogram tool to examine the existence of spatial correlation and regional variation based on residuals of the independent scan model. When the estimate of regional variation is significantly different from zero, a mixed QL estimating equation is developed to estimate coefficients of geographic clusters and covariates. We use the Benjamini-Hochberg procedure (1995) to find a threshold for p-values to address the multiple testing problem. A quasi-deviance criterion is used to regroup the estimated clusters to find geographic clusters with arbitrary shapes. We conduct simulations to compare the performance of the proposed method with other scan statistics. For illustration, the method is applied to enterovirus data from Taiwan. © 2016, The International Biometric Society.
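
    The Benjamini-Hochberg step-up procedure referenced above for choosing a p-value threshold is sketched below from scratch; the p-values are invented and this is not the authors' implementation.

        # Sketch: Benjamini-Hochberg step-up procedure for a false discovery rate
        # controlled p-value threshold; the p-values are illustrative only.
        import numpy as np

        def bh_threshold(pvalues, q=0.05):
            """Return the largest p-value declared significant under BH, or None."""
            p = np.sort(np.asarray(pvalues, dtype=float))
            m = p.size
            critical = q * np.arange(1, m + 1) / m          # BH critical values i*q/m
            passing = np.nonzero(p <= critical)[0]
            return p[passing.max()] if passing.size else None

        pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.6]
        print(bh_threshold(pvals, q=0.05))   # p-values at or below this value are declared significant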

  5. A method for interactive satellite failure diagnosis: Towards a connectionist solution

    NASA Technical Reports Server (NTRS)

    Bourret, P.; Reggia, James A.

    1989-01-01

    Various kinds of processes which allow one to make a diagnosis are analyzed. The analysis then focuses on one of these processes used for satellite failure diagnosis. This process consists of sending the satellite instructions about system status alterations: to mask the effects of one possible component failure or to look for additional abnormal measurements. A formal model of this process is given. This model is an extension of a previously defined connectionist model which allows computation of ratios between the likelihoods of observed manifestations according to various diagnostic hypotheses. The expected mean value of these likelihood measures for each possible status of the satellite can be computed in a similar way. Therefore, it is possible to select the most appropriate status according to three different purposes: to confirm a hypothesis, to eliminate a hypothesis, or to choose between two hypotheses. Finally, a first connectionist schema of computation of these expected mean values is given.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petiteau, Antoine; Shang Yu; Babak, Stanislav

    Coalescing massive black hole binaries are the strongest and probably the most important gravitational wave sources in the LISA band. The spin and orbital precessions add complexity to the waveform and make the likelihood surface richer in structure as compared to the nonspinning case. We introduce an extended multimodal genetic algorithm which utilizes the properties of the signal and the detector response function to analyze the data from the third round of the mock LISA data challenge (MLDC3.2). The performance of this method is comparable to, if not better than, already existing algorithms. We have found all five sources present in MLDC3.2 and recovered the coalescence time, chirp mass, mass ratio, and sky location with reasonable accuracy. As for the orbital angular momentum and two spins of the black holes, we have found a large number of widely separated modes in the parameter space with similar maximum likelihood values.

  7. Accuracy of Assessment of Eligibility for Early Medical Abortion by Community Health Workers in Ethiopia, India and South Africa.

    PubMed

    Johnston, Heidi Bart; Ganatra, Bela; Nguyen, My Huong; Habib, Ndema; Afework, Mesganaw Fantahun; Harries, Jane; Iyengar, Kirti; Moodley, Jennifer; Lema, Hailu Yeneneh; Constant, Deborah; Sen, Swapnaleen

    2016-01-01

    To assess the accuracy of assessment of eligibility for early medical abortion by community health workers using a simple checklist toolkit. Diagnostic accuracy study. Ethiopia, India and South Africa. Two hundred seventeen women in Ethiopia, 258 in India and 236 in South Africa were enrolled into the study. A checklist toolkit to determine eligibility for early medical abortion was validated by comparing results of clinician and community health worker assessment of eligibility using the checklist toolkit with the reference standard exam. Accuracy was over 90% and the negative likelihood ratio <0.1 at all three sites when used by clinician assessors. Positive likelihood ratios were 4.3 in Ethiopia, 5.8 in India and 6.3 in South Africa. When used by community health workers, the overall accuracy of the toolkit was 92% in Ethiopia, 80% in India and 77% in South Africa; negative likelihood ratios were 0.08 in Ethiopia, 0.25 in India and 0.22 in South Africa; and positive likelihood ratios were 5.9 in Ethiopia and 2.0 in India and South Africa. The checklist toolkit, as used by clinicians, was excellent at ruling out participants who were not eligible, and moderately effective at ruling in participants who were eligible for medical abortion. Results were promising when used by community health workers, particularly in Ethiopia, where they had more prior experience with use of diagnostic aids and longer professional training. The checklist toolkit assessments resulted in some participants being wrongly assessed as eligible for medical abortion, which is an area of concern. Further research is needed to streamline the components of the tool, explore optimal duration and content of training for community health workers, and test feasibility and acceptability.

  8. Fatty liver index and hepatic steatosis index for prediction of non-alcoholic fatty liver disease in type 1 diabetes.

    PubMed

    Sviklāne, Laura; Olmane, Evija; Dzērve, Zane; Kupčs, Kārlis; Pīrāgs, Valdis; Sokolovska, Jeļizaveta

    2018-01-01

    Little is known about the diagnostic value of hepatic steatosis index (HSI) and fatty liver index (FLI), as well as their link to metabolic syndrome in type 1 diabetes mellitus. We have screened the effectiveness of FLI and HSI in an observational pilot study of 40 patients with type 1 diabetes. FLI and HSI were calculated for 201 patients with type 1 diabetes. Forty patients with FLI/HSI values corresponding to different risk of liver steatosis were invited for liver magnetic resonance study. In-phase/opposed-phase technique of magnetic resonance was used. Accuracy of indices was assessed from the area under the receiver operating characteristic curve. Twelve (30.0%) patients had liver steatosis. For FLI, sensitivity was 90%; specificity, 74%; positive likelihood ratio, 3.46; negative likelihood ratio, 0.14; positive predictive value, 0.64; and negative predictive value, 0.93. For HSI, sensitivity was 86%; specificity, 66%; positive likelihood ratio, 1.95; negative likelihood ratio, 0.21; positive predictive value, 0.50; and negative predictive value, 0.92. Area under the receiver operating characteristic curve for FLI was 0.86 (95% confidence interval [0.72; 0.99]); for HSI 0.75 [0.58; 0.91]. Liver fat correlated with liver enzymes, waist circumference, triglycerides, and C-reactive protein. FLI correlated with C-reactive protein, liver enzymes, and blood pressure. HSI correlated with waist circumference and C-reactive protein. FLI ≥ 60 and HSI ≥ 36 were significantly associated with metabolic syndrome and nephropathy. The tested indices, especially FLI, can serve as surrogate markers for liver fat content and metabolic syndrome in type 1 diabetes. © 2017 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.

  9. Sentiment analysis of feature ranking methods for classification accuracy

    NASA Astrophysics Data System (ADS)

    Joseph, Shashank; Mugauri, Calvin; Sumathy, S.

    2017-11-01

    Text pre-processing and feature selection are important and critical steps in text mining. Text pre-processing of large volumes of data is a difficult task, as unstructured raw data are converted into a structured format. Traditional methods of processing and weighting took much time and were less accurate. To overcome this challenge, feature ranking techniques have been devised. A feature set from text pre-processing is fed as input to feature selection. Feature selection helps improve text classification accuracy. Of the three feature selection categories available, the filter category is the focus here. Five feature ranking methods, namely document frequency, standard deviation, information gain, chi-square, and weighted log-likelihood ratio, are analyzed.

  10. Reliability and Validity of the New Tanaka B Intelligence Scale Scores: A Group Intelligence Test

    PubMed Central

    Uno, Yota; Mizukami, Hitomi; Ando, Masahiko; Yukihiro, Ryoji; Iwasaki, Yoko; Ozaki, Norio

    2014-01-01

    Objective The present study evaluated the reliability and concurrent validity of the new Tanaka B Intelligence Scale, which is an intelligence test that can be administered to groups within a short period of time. Methods The new Tanaka B Intelligence Scale and Wechsler Intelligence Scale for Children-Third Edition were administered to 81 subjects (mean age ± SD 15.2±0.7 years) residing in a juvenile detention home; reliability was assessed using Cronbach’s alpha coefficient, and concurrent validity was assessed using the one-way analysis of variance intraclass correlation coefficient. Moreover, receiver operating characteristic analysis for screening for individuals who have a deficit in intellectual function (an FIQ<70) was performed. In addition, stratum-specific likelihood ratios for detection of intellectual disability were calculated. Results The Cronbach’s alpha for the new Tanaka B Intelligence Scale IQ (BIQ) was 0.86, and the intraclass correlation coefficient with FIQ was 0.83. Receiver operating characteristic analysis demonstrated an area under the curve of 0.89 (95% CI: 0.85–0.96). In addition, the stratum-specific likelihood ratio for the BIQ≤65 stratum was 13.8 (95% CI: 3.9–48.9), and the stratum-specific likelihood ratio for the BIQ≥76 stratum was 0.1 (95% CI: 0.03–0.4). Thus, intellectual disability could be ruled out or determined. Conclusion The present results demonstrated that the new Tanaka B Intelligence Scale score had high reliability and concurrent validity with the Wechsler Intelligence Scale for Children-Third Edition score. Moreover, the post-test probability for the BIQ could be calculated when screening for individuals who have a deficit in intellectual function. The new Tanaka B Intelligence Test is convenient and can be administered within a variety of settings. This enables evaluation of intellectual development even in settings where performing intelligence tests has previously been difficult. PMID:24940880

  11. Early pregnancy angiogenic markers and spontaneous abortion: an Odense Child Cohort study.

    PubMed

    Andersen, Louise B; Dechend, Ralf; Karumanchi, S Ananth; Nielsen, Jan; Joergensen, Jan S; Jensen, Tina K; Christesen, Henrik T

    2016-11-01

    Spontaneous abortion is the most commonly observed adverse pregnancy outcome. The angiogenic factors soluble Fms-like kinase 1 and placental growth factor are critical for normal pregnancy and may be associated to spontaneous abortion. We investigated the association between maternal serum concentrations of soluble Fms-like kinase 1 and placental growth factor, and subsequent spontaneous abortion. In the prospective observational Odense Child Cohort, 1676 pregnant women donated serum in early pregnancy, gestational week <22 (median 83 days of gestation, interquartile range 71-103). Concentrations of soluble Fms-like kinase 1 and placental growth factor were determined with novel automated assays. Spontaneous abortion was defined as complete or incomplete spontaneous abortion, missed abortion, or blighted ovum <22+0 gestational weeks, and the prevalence was 3.52% (59 cases). The time-dependent effect of maternal serum concentrations of soluble Fms-like kinase 1 and placental growth factor on subsequent late first-trimester or second-trimester spontaneous abortion (n = 59) was evaluated using a Cox proportional hazards regression model, adjusting for body mass index, parity, season of blood sampling, and age. Furthermore, receiver operating characteristics were employed to identify predictive values and optimal cut-off values. In the adjusted Cox regression analysis, increasing continuous concentrations of both soluble Fms-like kinase 1 and placental growth factor were significantly associated with a decreased hazard ratio for spontaneous abortion: soluble Fms-like kinase 1, 0.996 (95% confidence interval, 0.995-0.997), and placental growth factor, 0.89 (95% confidence interval, 0.86-0.93). When analyzed by receiver operating characteristic cut-offs, women with soluble Fms-like kinase 1 <742 pg/mL had an odds ratio for spontaneous abortion of 12.1 (95% confidence interval, 6.64-22.2), positive predictive value of 11.70%, negative predictive value of 98.90%, positive likelihood ratio of 3.64 (3.07-4.32), and negative likelihood ratio of 0.30 (0.19-0.48). For placental growth factor <19.7 pg/mL, odds ratio was 13.2 (7.09-24.4), positive predictive value was 11.80%, negative predictive value was 99.0%, positive likelihood ratio was 3.68 (3.12-4.34), and negative likelihood ratio was 0.28 (0.17-0.45). In the sensitivity analysis of 54 spontaneous abortions matched 1:4 to controls on gestational age at blood sampling, the highest area under the curve was seen for soluble Fms-like kinase 1 in prediction of first-trimester spontaneous abortion, 0.898 (0.834-0.962), and at the optimum cut-off of 725 pg/mL, negative predictive value was 51.4%, positive predictive value was 94.6%, positive likelihood ratio was 4.04 (2.57-6.35), and negative likelihood ratio was 0.22 (0.09-0.54). A strong, novel prospective association was identified between lower concentrations of soluble Fms-like kinase 1 and placental growth factor measured in early pregnancy and spontaneous abortion. A soluble Fms-like kinase 1 cut-off <742 pg/mL in maternal serum was optimal to stratify women at high vs low risk of spontaneous abortion. The cause and effect of angiogenic factor alterations in spontaneous abortions remain to be elucidated. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Head-to-head comparison of the diagnostic performance of coronary computed tomography angiography and dobutamine-stress echocardiography in the evaluation of acute chest pain with normal ECG findings and negative troponin tests: A prospective multicenter study.

    PubMed

    Durand, Eric; Bauer, Fabrice; Mansencal, Nicolas; Azarine, Arshid; Diebold, Benoit; Hagege, Albert; Perdrix, Ludivine; Gilard, Martine; Jobic, Yannick; Eltchaninoff, Hélène; Bensalah, Mourad; Dubourg, Benjamin; Caudron, Jérôme; Niarra, Ralph; Chatellier, Gilles; Dacher, Jean-Nicolas; Mousseaux, Elie

    2017-08-15

    To perform a head-to-head comparison of coronary CT angiography (CCTA) and dobutamine-stress echocardiography (DSE) in patients presenting with recent chest pain when troponin and ECG are negative. Two hundred seventeen patients with recent chest pain, normal ECG findings, and negative troponin were prospectively included in this multicenter study and were scheduled for CCTA and DSE. Invasive coronary angiography (ICA) was performed in patients when either DSE or CCTA was considered positive, when both were non-contributive, or in case of recurrent chest pain during 6-month follow-up. The presence of coronary artery stenosis was defined as a luminal obstruction >50% diameter in any coronary segment at ICA. ICA was performed in 75 (34.6%) patients. Coronary artery stenosis was identified in 37 (17%) patients. For CCTA, the sensitivity was 96.9% (95% CI 83.4-99.9), specificity 48.3% (29.4-67.5), positive likelihood ratio 2.06 (95% CI 1.36-3.11), and negative likelihood ratio 0.07 (95% CI 0.01-0.52). The sensitivity of DSE was 51.6% (95% CI 33.1-69.9), specificity 46.7% (28.3-65.7), positive likelihood ratio 1.03 (95% CI 0.62-1.72), and negative likelihood ratio 1.10 (95% CI 0.63-1.93). The CCTA:DSE ratio of true-positive and false-positive rates was 1.70 (95% CI 1.65-1.75) and 1.00 (95% CI 0.91-1.09), respectively, when non-contributive CCTA and DSE were both considered positive. Only one missed acute coronary syndrome was observed at six months. CCTA has higher diagnostic performance than DSE in the evaluation of patients with recent chest pain, normal ECG findings, and negative troponin to exclude coronary artery disease. Copyright © 2017. Published by Elsevier B.V.

  13. Diagnostic value of 18F-FDG-PET/CT for the evaluation of solitary pulmonary nodules: a systematic review and meta-analysis.

    PubMed

    Ruilong, Zong; Daohai, Xie; Li, Geng; Xiaohong, Wang; Chunjie, Wang; Lei, Tian

    2017-01-01

    To carry out a meta-analysis on the performance of fluorine-18-fluorodeoxyglucose (18F-FDG) PET/computed tomography (PET/CT) for the evaluation of solitary pulmonary nodules. In the meta-analysis, we performed searches of several electronic databases for relevant studies, including Google Scholar, PubMed, Cochrane Library, and several Chinese databases. The quality of all included studies was assessed by Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). Two observers independently extracted data from eligible articles. For the meta-analysis, the total sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratios were pooled. A summary receiver operating characteristic curve was constructed. The I² test was performed to assess the impact of study heterogeneity on the results of the meta-analysis. Meta-regression and subgroup analysis were carried out to investigate the potential covariates that might have considerable impacts on heterogeneity. Overall, 12 studies were included in this meta-analysis, including a total of 1297 patients and 1301 pulmonary nodules. The pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with corresponding 95% confidence intervals (CIs) were 0.82 (95% CI, 0.76-0.87), 0.81 (95% CI, 0.66-0.90), 4.3 (95% CI, 2.3-7.9), and 0.22 (95% CI, 0.16-0.30), respectively. Significant heterogeneity was observed in sensitivity (I²=81.1%) and specificity (I²=89.6%). Subgroup analysis showed that the best results for sensitivity (0.90; 95% CI, 0.68-0.86) and accuracy (0.93; 95% CI, 0.90-0.95) were present in a prospective study. The results of our analysis suggest that PET/CT is a useful tool for detecting malignant pulmonary nodules qualitatively. Although current evidence showed moderate accuracy for PET/CT in differentiating malignant from benign solitary pulmonary nodules, further work needs to be carried out to improve its reliability.

  14. Diagnostic performance of coronary computed tomography angiography versus exercise electrocardiography for coronary artery disease: a systematic review and meta-analysis

    PubMed Central

    Yin, Xinxin; Zheng, Wen; Ma, Jingjing; Hao, Panpan

    2016-01-01

    Background Both coronary computed tomography angiography (CCTA) and exercise electrocardiography (ExECG) are non-invasive testing methods for the evaluation of coronary artery disease (CAD). However, there has been controversy over the diagnostic performance of these methods due to the limited data in each individual study. Therefore, we performed a meta-analysis to address these issues. Methods We searched PubMed and Embase databases up to May 22, 2015. Two authors identified eligible studies, extracted data, and assessed study quality. Pooled estimation of sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR), summary receiver-operating characteristic curve (SROC) and the area under curve (AUC) of CCTA and ExECG for the diagnosis of CAD were calculated using Stata, Meta-Disc and Review Manager statistical software. Results Seven articles were included. Pooled sensitivity of CCTA and ExECG were 0.98 [95% confidence intervals (CIs): 0.95–0.99] and 0.66 (95% CIs: 0.59–0.72); pooled specificity of CCTA and ExECG were 0.84 (95% CIs: 0.81–0.87) and 0.75 (95% CIs: 0.71–0.79); pooled DOR of CCTA and ExECG were 110.24 (95% CIs: 35.07–346.55) and 6.28 (95% CIs: 2.06–19.13); and AUC of CCTA and ExECG were 0.9950±0.0046 and 0.7727±0.0638, respectively. There was no heterogeneity caused by a threshold effect in the CCTA or ExECG analysis. The Deeks’ test showed no potential publication bias (P=0.17). Conclusions CCTA has better diagnostic performance than ExECG in the evaluation of CAD, and can provide a better solution for the clinical problem of diagnosing CAD. PMID:27499958

  15. Current-State Constrained Filter Bank for Wald Testing of Spacecraft Conjunctions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2012-01-01

    We propose a filter bank consisting of an ordinary current-state extended Kalman filter, and two similar but constrained filters: one is constrained by a null hypothesis that the miss distance between two conjuncting spacecraft is inside their combined hard body radius at the predicted time of closest approach, and one is constrained by an alternative complementary hypothesis. The unconstrained filter is the basis of an initial screening for close approaches of interest. Once the initial screening detects a possibly risky conjunction, the unconstrained filter also governs measurement editing for all three filters, and predicts the time of closest approach. The constrained filters operate only when conjunctions of interest occur. The computed likelihoods of the innovations of the two constrained filters form a ratio for a Wald sequential probability ratio test. The Wald test guides risk mitigation maneuver decisions based on explicit false alarm and missed detection criteria. Since only current-state Kalman filtering is required to compute the innovations for the likelihood ratio, the present approach does not require the mapping of probability density forward to the time of closest approach. Instead, the hard-body constraint manifold is mapped to the filter update time by applying a sigma-point transformation to a projection function. Although many projectors are available, we choose one based on Lambert-style differential correction of the current-state velocity. We have tested our method using a scenario based on the Magnetospheric Multi-Scale mission, scheduled for launch in late 2014. This mission involves formation flight in highly elliptical orbits of four spinning spacecraft equipped with antennas extending 120 meters tip-to-tip. Eccentricities range from 0.82 to 0.91, and close approaches generally occur in the vicinity of perigee, where rapid changes in geometry may occur. Testing the method using two 12,000-case Monte Carlo simulations, we found the method achieved a missed detection rate of 0.1%, and a false alarm rate of 2%.
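
    The decision logic of the Wald sequential probability ratio test used above can be sketched in isolation: cumulative log-likelihood-ratio increments from the two constrained filters are compared against thresholds derived from explicit false alarm and missed detection rates. The rates and per-update increments below are synthetic placeholders, and the filter bank itself is not reproduced.

        # Sketch: Wald SPRT decision rule with explicit false-alarm (alpha) and
        # missed-detection (beta) criteria; increments are synthetic placeholders.
        import math

        alpha, beta = 0.02, 0.001                 # example criteria, not mission values
        upper = math.log((1.0 - beta) / alpha)    # accept the "inside hard-body radius" hypothesis above this
        lower = math.log(beta / (1.0 - alpha))    # accept the "safe miss" hypothesis below this

        log_lr = 0.0
        for increment in [0.4, 0.9, 1.2, 0.8, 1.1]:   # per-update log L1(innovation) - log L0(innovation)
            log_lr += increment
            if log_lr >= upper:
                print("declare risky conjunction -> consider mitigation maneuver")
                break
            if log_lr <= lower:
                print("declare safe miss -> no maneuver")
                break
        else:
            print("keep collecting tracking data")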

  16. Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic.

    PubMed

    Yokoyama, Jun'ichi

    2014-01-01

    After reviewing the standard hypothesis test and the matched filter technique for identifying gravitational waves under Gaussian noise, we introduce two methods to deal with non-Gaussian stationary noise. We formulate the likelihood ratio function under weakly non-Gaussian noise through the Edgeworth expansion, and under strongly non-Gaussian noise in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student's t-distribution, which has larger tails than a Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well for the highly non-Gaussian case.
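
    A toy comparison of per-sample log-likelihood ratios under Gaussian and Student-t noise is sketched below to illustrate why heavy-tailed noise changes the statistic; it is not the Edgeworth-expansion or Gaussian-mapping construction of the paper, and the amplitude and degrees of freedom are arbitrary.

        # Sketch: per-sample log-likelihood ratios (signal present vs absent) under
        # Gaussian noise and under heavy-tailed Student-t noise; toy values only.
        from scipy.stats import norm, t

        template = 0.8        # assumed known signal amplitude in this sample
        nu = 3.0              # Student-t degrees of freedom (heavy tails)

        for y in [0.5, 3.0, 8.0]:   # observed sample values, including an outlier
            llr_gauss = norm.logpdf(y - template) - norm.logpdf(y)
            llr_t = t.logpdf(y - template, nu) - t.logpdf(y, nu)
            print(y, round(llr_gauss, 3), round(llr_t, 3))

    The Gaussian ratio grows without bound for the outlier, while the Student-t ratio saturates, which is the qualitative behaviour that motivates noise-adapted statistics.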

  17. Relationship Formation and Stability in Emerging Adulthood: Do Sex Ratios Matter?

    ERIC Educational Resources Information Center

    Warner, Tara D.; Manning, Wendy D.; Giordano, Peggy C.; Longmore, Monica A.

    2011-01-01

    Research links sex ratios with the likelihood of marriage and divorce. However, whether sex ratios similarly influence precursors to marriage (transitions in and out of dating or cohabiting relationships) is unknown. Utilizing data from the Toledo Adolescent Relationships Study and the 2000 U.S. Census, this study assesses whether sex ratios…

  18. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the difference in variance between the maximum likelihood and expected a posteriori estimation methods as a function of the number of test items in an aptitude test. The variance reflects the accuracy achieved by the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  19. The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.

    PubMed

    Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R

    2013-01-01

    In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.

  20. Clinical evaluation of cobas core anti-dsDNA EIA quant.

    PubMed

    González, Concepción; Guevara, Paloma; García-Berrocal, Belén; Alejandro Navajo, José; Manuel González-Buitrago, José

    2004-01-01

    The measurement of antibodies to double-stranded DNA (anti-dsDNA) is a useful tool for the diagnosis and monitoring of patients with connective tissue diseases, particularly systemic lupus erythematosus (SLE). The aim of the present study was to compare a new enzyme-linked immunosorbent assay (ELISA) for the measurement of anti-dsDNA antibodies, which uses purified double-stranded plasmid DNA as the antigen (anti-dsDNA EIA Quant; Roche Diagnostics, Mannheim, Germany), with an established ELISA. The clinical usefulness of this new ELISA was also assessed. We measured anti-dsDNA antibodies in 398 serum samples that were divided into four groups: 1). routine samples sent to our laboratory for an antinuclear antibody (ANA) test (n=229), 2). samples from blood donors (n=74), 3). samples from patients with SLE (n=48), and 4) samples from patients with other autoimmune diseases (n=47). The methods used were the Cobas Core Anti-dsDNA EIA Quant (Roche Diagnostics, Mannheim, Germany) and the Anti-dsDNA test (Gull Diagnostics, Bois d'Arcy, France). We obtained a kappa index and Spearman correlation coefficient in the comparative study, and sensitivity, specificity, predictive values, and likelihood ratios in the clinical study. The results obtained show a good agreement between the two methods in both the qualitative results (kappa=0.91) and the quantitative data (r=0.854). The best accuracy, predictive values, likelihood ratios, and correlation with active disease were obtained with the Roche anti-dsDNA assay. Copyright 2004 Wiley-Liss, Inc.

  1. Evaluation of a diagnostic flow chart applying medical thoracoscopy, adenosine deaminase and T-SPOT.TB in diagnosis of tuberculous pleural effusion.

    PubMed

    He, Y; Zhang, W; Huang, T; Wang, X; Wang, M

    2015-10-01

    To evaluate a diagnostic flow chart applying medical thoracoscopy (MT), adenosine deaminase (ADA) and T-SPOT.TB in the diagnosis of tuberculous pleural effusion (TPE) in a high TB burden country. 136 patients with pleural effusion (PE) were enrolled and divided into TPE and non-TPE groups. MT (histology), PE ADA and T-SPOT.TB were performed on all patients. ROC analysis was performed to find the best cut-off value of PE ADA for detection of TPE. The diagnostic flow chart applying MT, ADA and T-SPOT.TB was evaluated for improving the limitations of each diagnostic method. ROC analysis showed that the best cut-off value of PE ADA was 30 U/L. The sensitivity and specificity of these tests were calculated respectively to be: 71.4% (58.5%-81.6%) and 100% (95.4-100.0%) for MT, 92.9% (83.0-97.2%) and 68.8% (57.9-77.9%) for T-SPOT.TB, and 80.0% (69.6-88.1%) and 92.9% (82.7-98.0%) for PE ADA. The sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, positive predictive value and negative predictive value of the diagnostic flow chart were 96.4% (87.9-99.0%), 96.3% (89.6-98.7%), 25.714, 0.037, 97.4 and 94.9, respectively. The diagnostic flow chart applying MT, ADA and T-SPOT.TB is an accurate and rapid diagnostic method for detection of TPE.

  2. Muon identification with Muon Telescope Detector at the STAR experiment

    NASA Astrophysics Data System (ADS)

    Huang, T. C.; Ma, R.; Huang, B.; Huang, X.; Ruan, L.; Todoroki, T.; Xu, Z.; Yang, C.; Yang, S.; Yang, Q.; Yang, Y.; Zha, W.

    2016-10-01

    The Muon Telescope Detector (MTD) is a newly installed detector in the STAR experiment. It provides an excellent opportunity to study heavy quarkonium physics using the dimuon channel in heavy ion collisions. In this paper, we report the muon identification performance for the MTD using proton-proton collisions at √s = 500 GeV with various methods. The result using the Likelihood Ratio method shows that the muon identification efficiency can reach up to ∼90% for muons with transverse momenta greater than 3 GeV/c, and the significance of the J/ψ signal is improved by a factor of 2 compared to using the basic selection.

  3. Adaptive channel estimation for soft decision decoding over non-Gaussian optical channel

    NASA Astrophysics Data System (ADS)

    Xiang, Jing-song; Miao, Tao-tao; Huang, Sheng; Liu, Huan-lin

    2016-10-01

    An adaptive a priori log-likelihood ratio (LLR) estimation method is proposed for non-Gaussian channels in intensity modulation/direct detection (IM/DD) optical communication systems. Using a nonparametric histogram together with weighted least-squares linear fitting in the tail regions, the LLR is estimated and used for soft-decision decoding of low-density parity-check (LDPC) codes. The method adapts well to the three main kinds of IM/DD optical channel, i.e., the chi-square channel, the Webb-Gaussian channel and the additive white Gaussian noise (AWGN) channel, and the performance penalty of the channel estimation is negligible.
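
    The following is a minimal sketch of the general idea described above: estimate bit-conditional densities with a nonparametric histogram, form the LLR bin by bin, and replace the noisy estimate in the sparsely populated tail by a weighted least-squares straight line. The gamma-distributed samples, bin grid and tail threshold are illustrative assumptions, not the paper's channel models or parameters.

```python
# A minimal sketch, under simplifying assumptions, of histogram-based LLR
# estimation with a weighted least-squares straight line in the sparse tail.
# The "bit 0"/"bit 1" samples, bin grid and tail threshold are illustrative.
import numpy as np

rng = np.random.default_rng(0)
y0 = rng.gamma(shape=4.0, scale=1.0, size=100_000)   # received intensity given bit 0
y1 = rng.gamma(shape=8.0, scale=1.0, size=100_000)   # received intensity given bit 1

edges = np.linspace(0.0, 25.0, 101)
centers = 0.5 * (edges[:-1] + edges[1:])
p0, _ = np.histogram(y0, bins=edges, density=True)
p1, _ = np.histogram(y1, bins=edges, density=True)

# Raw LLR = log p(y|0) - log p(y|1) on bins where both histograms are populated
ok = (p0 > 0) & (p1 > 0)
llr = np.full_like(centers, np.nan)
llr[ok] = np.log(p0[ok]) - np.log(p1[ok])

# Weighted least-squares line through the upper-tail bins (weights = bin mass)
tail = ok & (centers > 10.0)
slope, intercept = np.polyfit(centers[tail], llr[tail], deg=1, w=p0[tail] + p1[tail])

def llr_of(y):
    """Histogram LLR in the well-populated region, linear extrapolation beyond it."""
    idx = np.clip(np.searchsorted(edges, y) - 1, 0, len(centers) - 1)
    val = llr[idx]
    return np.where(np.isnan(val) | (y > 14.0), slope * y + intercept, val)

print(llr_of(np.array([3.0, 8.0, 20.0])))
```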

  4. Lake bed classification using acoustic data

    USGS Publications Warehouse

    Yin, Karen K.; Li, Xing; Bonde, John; Richards, Carl; Cholwek, Gary

    1998-01-01

    As part of our effort to identify the lake bed surficial substrates using remote sensing data, this work designs pattern classifiers by multivariate statistical methods. Probability distribution of the preprocessed acoustic signal is analyzed first. A confidence region approach is then adopted to improve the design of the existing classifier. A technique for further isolation is proposed which minimizes the expected loss from misclassification. The devices constructed are applicable to real-time lake bed categorization. A minimax approach is suggested to treat more general cases where the a priori probability distribution of the substrate types is unknown. Comparison of the suggested methods with the traditional likelihood ratio tests is discussed.

  5. New applications of maximum likelihood and Bayesian statistics in macromolecular crystallography.

    PubMed

    McCoy, Airlie J

    2002-10-01

    Maximum likelihood methods are well known to macromolecular crystallographers as the methods of choice for isomorphous phasing and structure refinement. Recently, the use of maximum likelihood and Bayesian statistics has extended to the areas of molecular replacement and density modification, placing these methods on a stronger statistical foundation and making them more accurate and effective.

  6. Characterization of Chronic Aortic and Mitral Regurgitation Undergoing Valve Surgery Using Cardiovascular Magnetic Resonance.

    PubMed

    Polte, Christian L; Gao, Sinsia A; Johnsson, Åse A; Lagerstrand, Kerstin M; Bech-Hanssen, Odd

    2017-06-15

    Grading of chronic aortic regurgitation (AR) and mitral regurgitation (MR) by cardiovascular magnetic resonance (CMR) is currently based on thresholds, which are neither modality nor quantification method specific. Accordingly, this study sought to identify CMR-specific and quantification method-specific thresholds for regurgitant volumes (RVols), RVol indexes, and regurgitant fractions (RFs), which denote severe chronic AR or MR with an indication for surgery. The study comprised patients with moderate and severe chronic AR (n = 38) and MR (n = 40). Echocardiography and CMR were performed at baseline and in all operated AR/MR patients (n = 23/25) 10 ± 1 months after surgery. CMR quantification of AR: direct (aortic flow) and indirect method (left ventricular stroke volume [LVSV] - pulmonary stroke volume [PuSV]); MR: 2 indirect methods (LVSV - aortic forward flow [AoFF]; mitral inflow [MiIF] - AoFF). All operated patients had severe regurgitation and benefited from surgery, indicated by a significant postsurgical reduction in end-diastolic volume index and improvement or relief of symptoms. The discriminatory ability between moderate and severe AR was strong for RVol >40 ml, RVol index >20 ml/m², and RF >30% (direct method) and RVol >62 ml, RVol index >31 ml/m², and RF >36% (LVSV-PuSV) with a negative likelihood ratio ≤ 0.2. In MR, the discriminatory ability was very strong for RVol >64 ml, RVol index >32 ml/m², and RF >41% (LVSV-AoFF) and RVol >40 ml, RVol index >20 ml/m², and RF >30% (MiIF-AoFF) with a negative likelihood ratio < 0.1. In conclusion, CMR grading of chronic AR and MR should be based on modality-specific and quantification method-specific thresholds, as they differ largely from recognized guideline criteria, to assure appropriate clinical decision-making and timing of surgery. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Screening tests for aphasia in patients with stroke: a systematic review.

    PubMed

    El Hachioui, Hanane; Visch-Brink, Evy G; de Lau, Lonneke M L; van de Sandt-Koenderman, Mieke W M E; Nouwens, Femke; Koudstaal, Peter J; Dippel, Diederik W J

    2017-02-01

    Aphasia has a large impact on the quality of life and adds significantly to the costs of stroke care. Early recognition of aphasia in stroke patients is important for prognostication and well-timed treatment planning. We aimed to identify available screening tests for differentiating between aphasic and non-aphasic stroke patients, and to evaluate test accuracy, reliability, and feasibility. We searched PubMed, EMbase, Web of Science, and PsycINFO for published studies on screening tests aimed at assessing aphasia in stroke patients. The reference lists of the selected articles were scanned, and several experts were contacted to detect additional references. Of each screening test, we estimated the sensitivity, specificity, likelihood ratio of a positive test, likelihood ratio of a negative test, and diagnostic odds ratio (DOR), and rated the degree of bias of the validation method. We included ten studies evaluating eight screening tests. There was a large variation across studies regarding sample size, patient characteristics, and reference tests used for validation. Many papers failed to report on the consecutiveness of patient inclusion, time between aphasia onset and administration of the screening test, and blinding. Of the three studies that were rated as having an intermediate or low risk of bias, the DOR was highest for the Language Screening Test and ScreeLing. Several screening tools for aphasia in stroke are available, but many tests have not been verified properly. Methodologically sound validation studies of aphasia screening tests are needed to determine their usefulness in clinical practice.

  8. Impact of a diagnosis-related group payment system on cesarean section in Korea.

    PubMed

    Kim, Seung Ju; Han, Kyu-Tae; Kim, Sun Jung; Park, Eun-Cheol; Park, Hye Ki

    2016-06-01

    Cesarean sections (CSs) are the most expensive method of delivery, which may affect the physician's choice of treatment when providing health services to patients. We investigated the effects of the diagnosis-related group (DRG)-based payment system on CSs in Korea. We used National Health Insurance claim data from 2011 to 2014, which included 1,289,989 delivery cases at 674 hospitals. We used a generalized estimating equation model to evaluate the association between the likelihood of cesarean delivery and the length of the DRG adoption period. A total of 477,309 (37.0%) delivery cases were performed by CSs. We found that a longer DRG adoption period was associated with a lower odds ratio of CSs (odds ratio [OR]: 0.997, 95% CI: 0.996-0.998). In addition, a longer DRG adoption period was associated with a lower odds ratio for CSs in hospitals that had voluntarily adopted the DRG system. Similar results were also observed for urban hospitals, primiparas, and those under 28 years old and over 33 years old. Our results suggest that the change in the reimbursement system was associated with a low likelihood of CSs. The impact of DRG adoption on cesarean delivery can also be expected to increase with time, as our finding provides evidence that the reimbursement system is associated with the health provider's decision to provide health services for patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. flowVS: channel-specific variance stabilization in flow cytometry.

    PubMed

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-07-28

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances. We present a variance-stabilization algorithm, called flowVS, that removes the mean-variance correlations from cell populations identified in each fluorescence channel. flowVS transforms each channel from all samples of a data set by the inverse hyperbolic sine (asinh) transformation. For each channel, the parameters of the transformation are optimally selected by Bartlett's likelihood-ratio test so that the populations attain homogeneous variances. The optimum parameters are then used to transform the corresponding channels in every sample. flowVS is therefore an explicit variance-stabilization method that stabilizes within-population variances in each channel by evaluating the homoskedasticity of clusters with a likelihood-ratio test. With two publicly available datasets, we show that flowVS removes the mean-variance dependence from raw FC data and makes the within-population variance relatively homogeneous. We demonstrate that alternative transformation techniques such as flowTrans, flowScape, logicle, and FCSTrans might not stabilize variance. Besides flow cytometry, flowVS can also be applied to stabilize variance in microarray data. With a publicly available data set we demonstrate that flowVS performs as well as the VSN software, a state-of-the-art approach developed for microarrays. The homogeneity of variance in cell populations across FC samples is desirable when extracting features uniformly and comparing cell populations with different levels of marker expressions. The newly developed flowVS algorithm solves the variance-stabilization problem in FC and microarrays by optimally transforming data with the help of Bartlett's likelihood-ratio test. On two publicly available FC datasets, flowVS stabilizes within-population variances more evenly than the available transformation and normalization techniques. flowVS-based variance stabilization can help in performing comparison and alignment of phenotypically identical cell populations across different samples. flowVS and the datasets used in this paper are publicly available in Bioconductor.
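
    As a rough illustration of the parameter-selection step described above, the sketch below picks the asinh cofactor that minimizes Bartlett's test statistic across a few hypothetical cell populations; it is not the flowVS package itself, and the populations, cofactor grid and omission of a clustering step are simplifying assumptions.

```python
# A minimal sketch, assuming hypothetical data, of the idea behind flowVS:
# choose the asinh cofactor that makes within-population variances most
# homogeneous, as judged by Bartlett's test statistic.
import numpy as np
from scipy.stats import bartlett

rng = np.random.default_rng(1)
# Three hypothetical cell populations on one channel; variance grows with mean
pops = [rng.normal(loc=m, scale=0.15 * m, size=2_000) for m in (200.0, 1_000.0, 5_000.0)]

def bartlett_stat(cofactor):
    transformed = [np.arcsinh(p / cofactor) for p in pops]
    stat, _pvalue = bartlett(*transformed)
    return stat

cofactors = np.logspace(0, 4, 50)          # candidate cofactors, 1 to 10,000
best = min(cofactors, key=bartlett_stat)   # cofactor giving the most even variances
print(f"selected cofactor ~ {best:.1f}")
```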

  10. Mixture Analysis and Mammalian Sex Ratio Among Middle Pleistocene Mouflon of Arago Cave, France

    NASA Astrophysics Data System (ADS)

    Monchot, Hervé

    1999-09-01

    In archaeological studies, it is often important to be able to assess sexual dimorphism and sex ratios in populations. Obtaining the sex ratio is easy if each individual in the population can be accurately sexed through the use of one or more objective variables. But this is often impossible, due to incompleteness of the osteological record. A modern statistical approach to handle this problem is Mixture Analysis using the method of maximum likelihood. It consists of determining how many groups are present in the sample (two in this case), in what proportions they occur, and estimating the parameters of each group accordingly. This paper shows the use of this method on vertebrate fossil populations in a prehistoric context, with implications for prey acquisition by early humans. For instance, the analysis of mouflon bones from Arago cave (Tautavel, France) indicates that there are more females than males in the F layer. Given the ethology of the animal, this indicates that the hunting strategy could be the result of selective choice of the prey. Moreover, we may deduce the presence of Anteneandertalians on the site during spring and summer periods.
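
    A minimal sketch of the underlying computation is given below: fit a two-component normal mixture by maximum likelihood (EM) to a dimorphic measurement and read the sex ratio off the mixing proportion. The simulated measurements and starting values are illustrative, not the Arago cave data.

```python
# A minimal sketch, on simulated measurements, of two-component normal mixture
# fitting by maximum likelihood (EM). Values are illustrative, not the study data.
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(36.0, 1.2, 60),   # hypothetical "female" measurements (mm)
                    rng.normal(41.0, 1.4, 30)])  # hypothetical "male" measurements (mm)

def norm_pdf(v, m, s):
    return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Initial guesses for mixing proportion, means and standard deviations
prop, mu, sd = 0.5, np.array([x.min(), x.max()]), np.array([x.std(), x.std()])
for _ in range(200):
    # E-step: responsibility of component 1 for each observation
    w1 = prop * norm_pdf(x, mu[0], sd[0])
    w2 = (1.0 - prop) * norm_pdf(x, mu[1], sd[1])
    r = w1 / (w1 + w2)
    # M-step: update proportion, means and standard deviations
    prop = r.mean()
    mu = np.array([np.average(x, weights=r), np.average(x, weights=1.0 - r)])
    sd = np.array([np.sqrt(np.average((x - mu[0]) ** 2, weights=r)),
                   np.sqrt(np.average((x - mu[1]) ** 2, weights=1.0 - r))])

print(f"estimated mixing proportions: {prop:.2f} vs {1.0 - prop:.2f}")
```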

  11. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

    In this paper, we use maximum likelihood estimation and the Bayes method under several loss functions to estimate the parameter of the Rayleigh distribution and to determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods in terms of bias and MSE, computed in R, and the results are displayed in tables to facilitate the comparison.
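
    As a sketch of the kind of comparison described above, the snippet below contrasts the maximum likelihood estimator of the Rayleigh scale, σ̂² = Σx²/(2n), with a simple Bayes estimator (the posterior mean under Jeffreys' prior and squared-error loss, used here only as a stand-in for the loss functions studied in the paper), reporting bias and MSE over repeated simulations.

```python
# A minimal sketch, not the paper's code, comparing the Rayleigh MLE of sigma^2
# with a simple Bayes estimator (posterior mean under Jeffreys' prior and
# squared-error loss, an assumed stand-in for the paper's loss functions).
import numpy as np

rng = np.random.default_rng(3)
sigma_true, n, reps = 2.0, 20, 10_000
bias_mle = mse_mle = bias_bayes = mse_bayes = 0.0

for _ in range(reps):
    x = rng.rayleigh(scale=sigma_true, size=n)
    s2_mle = np.sum(x**2) / (2 * n)            # MLE of sigma^2
    s2_bayes = np.sum(x**2) / (2 * (n - 1))    # posterior mean of sigma^2
    bias_mle += (s2_mle - sigma_true**2) / reps
    mse_mle += (s2_mle - sigma_true**2) ** 2 / reps
    bias_bayes += (s2_bayes - sigma_true**2) / reps
    mse_bayes += (s2_bayes - sigma_true**2) ** 2 / reps

print(f"MLE   : bias={bias_mle:+.4f}  MSE={mse_mle:.4f}")
print(f"Bayes : bias={bias_bayes:+.4f}  MSE={mse_bayes:.4f}")
```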

  12. Addressing Data Analysis Challenges in Gravitational Wave Searches Using the Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Weerathunga, Thilina Shihan

    2017-08-01

    Gravitational waves are a fundamental prediction of Einstein's General Theory of Relativity. The first experimental proof of their existence was provided by the Nobel Prize winning discovery by Taylor and Hulse of orbital decay in a binary pulsar system. The first detection of gravitational waves incident on earth from an astrophysical source was announced in 2016 by the LIGO Scientific Collaboration, launching the new era of gravitational wave (GW) astronomy. The signal detected was from the merger of two black holes, which is an example of sources called Compact Binary Coalescences (CBCs). Data analysis strategies used in the search for CBC signals are derivatives of the Maximum-Likelihood (ML) method. The ML method applied to data from a network of geographically distributed GW detectors--called fully coherent network analysis--is currently the best approach for estimating source location and GW polarization waveforms. However, in the case of CBCs, especially for lower mass systems (O(1M solar masses)) such as double neutron star binaries, fully coherent network analysis is computationally expensive. The ML method requires locating the global maximum of the likelihood function over a nine dimensional parameter space, where the computation of the likelihood at each point requires correlations involving O(10^4) to O(10^6) samples between the data and the corresponding candidate signal waveform template. Approximations, such as semi-coherent coincidence searches, are currently used to circumvent the computational barrier but incur a concomitant loss in sensitivity. We explored the effectiveness of Particle Swarm Optimization (PSO), a well-known algorithm in the field of swarm intelligence, in addressing the fully coherent network analysis problem. As an example, we used a four-detector network consisting of the two LIGO detectors at Hanford and Livingston, Virgo and Kagra, all having initial LIGO noise power spectral densities, and show that PSO can locate the global maximum with less than 240,000 likelihood evaluations for a component mass range of 1.0 to 10.0 solar masses at a realistic coherent network signal to noise ratio of 9.0. Our results show that PSO can successfully deliver a fully-coherent all-sky search with less than 1/10 the number of likelihood evaluations needed for a grid-based search. Used as a follow-up step, the savings in the number of likelihood evaluations may also reduce latency in obtaining ML estimates of source parameters in semi-coherent searches.
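
    To make the optimization step concrete, here is a minimal particle swarm optimization loop applied to a toy two-parameter surface standing in for the (far more expensive, nine-dimensional) coherent network likelihood; the swarm size, inertia and acceleration constants are generic textbook choices, not the values tuned in this work.

```python
# A minimal sketch of particle swarm optimization on a toy two-parameter
# surface; illustrative only, not the coherent-network statistic used above.
import numpy as np

rng = np.random.default_rng(4)

def neg_log_likelihood(theta):
    # Toy surface with a global minimum near (1.2, -0.7); stands in for the
    # expensive per-template network statistic.
    a, b = theta
    return (a - 1.2) ** 2 + 2.0 * (b + 0.7) ** 2 + 0.3 * np.sin(5 * a) ** 2

n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([neg_log_likelihood(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([neg_log_likelihood(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("PSO estimate:", gbest, "value:", neg_log_likelihood(gbest))
```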

  13. Adaptively Reevaluated Bayesian Localization (ARBL). A Novel Technique for Radiological Source Localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.

    2015-01-19

    Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement from earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.

  14. Maximum Likelihood Analysis in the PEN Experiment

    NASA Astrophysics Data System (ADS)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.

  15. On the occurrence of false positives in tests of migration under an isolation with migration model

    PubMed Central

    Hey, Jody; Chung, Yujin; Sethuraman, Arun

    2015-01-01

    The population genetic study of divergence is often done using a Bayesian genealogy sampler, like those implemented in IMa2 and related programs, and these analyses frequently include a likelihood-ratio test of the null hypothesis of no migration between populations. Cruickshank and Hahn (2014, Molecular Ecology, 23, 3133–3157) recently reported a high rate of false positive test results with IMa2 for data simulated with small numbers of loci under models with no migration and recent splitting times. We confirm these findings and discover that they are caused by a failure of the assumptions underlying likelihood ratio tests that arises when using marginal likelihoods for a subset of model parameters. We also show that for small data sets, with little divergence between samples from two populations, an excellent fit can often be found by a model with a low migration rate and recent splitting time and a model with a high migration rate and a deep splitting time. PMID:26456794

  16. Clinical Evaluation and Physical Exam Findings in Patients with Anterior Shoulder Instability.

    PubMed

    Lizzio, Vincent A; Meta, Fabien; Fidai, Mohsin; Makhni, Eric C

    2017-12-01

    The goal of this paper is to provide an overview in evaluating the patient with suspected or known anteroinferior glenohumeral instability. There is a high rate of recurrent subluxations or dislocations in young patients with history of anterior shoulder dislocation, and recurrent instability will increase likelihood of further damage to the glenohumeral joint. Proper identification and treatment of anterior shoulder instability can dramatically reduce the rate of recurrent dislocation and prevent subsequent complications. Overall, the anterior release or surprise test demonstrates the best sensitivity and specificity for clinically diagnosing anterior shoulder instability, although other tests also have favorable sensitivities, specificities, positive likelihood ratios, negative likelihood ratios, and inter-rater reliabilities. Anterior shoulder instability is a relatively common injury in the young and athletic population. The combination of history and performing apprehension, relocation, release or surprise, anterior load, and anterior drawer exam maneuvers will optimize sensitivity and specificity for accurately diagnosing anterior shoulder instability in clinical practice.

  17. Mental Health Recovery in the Patient-Centered Medical Home

    PubMed Central

    Aarons, Gregory A.; O’Connell, Maria; Davidson, Larry; Groessl, Erik J.

    2015-01-01

    Objectives. We examined the impact of transitioning clients from a mental health clinic to a patient-centered medical home (PCMH) on mental health recovery. Methods. We drew data from a large US County Behavioral Health Services administrative data set. We used propensity score analysis and multilevel modeling to assess the impact of the PCMH on mental health recovery by comparing PCMH participants (n = 215) to clients receiving service as usual (SAU; n = 22 394) from 2011 to 2013 in San Diego County, California. We repeatedly assessed mental health recovery over time (days since baseline assessment range = 0–1639; mean = 186) with the Illness Management and Recovery (IMR) scale and Recovery Markers Questionnaire. Results. For total IMR (log-likelihood ratio χ2[1] = 4696.97; P < .001) and IMR Factor 2 Management scores (log-likelihood ratio χ2[1] = 7.9; P = .005), increases in mental health recovery over time were greater for PCMH than SAU participants. Increases on all other measures over time were similar for PCMH and SAU participants. Conclusions. Greater increases in mental health recovery over time can be expected when patients with severe mental illness are provided treatment through the PCMH. Evaluative efforts should be taken to inform more widespread adoption of the PCMH. PMID:26180945

  18. A flexible spatial scan statistic with a restricted likelihood ratio for detecting disease clusters.

    PubMed

    Tango, Toshiro; Takahashi, Kunihiko

    2012-12-30

    Spatial scan statistics are widely used tools for detection of disease clusters. In particular, the circular spatial scan statistic proposed by Kulldorff (1997) has been utilized in a wide variety of epidemiological studies and disease surveillance. However, as it cannot detect noncircular, irregularly shaped clusters, many authors have proposed different spatial scan statistics, including the elliptic version of Kulldorff's scan statistic. The flexible spatial scan statistic proposed by Tango and Takahashi (2005) has also been used for detecting irregularly shaped clusters. However, this method sets a feasible limitation of a maximum of 30 nearest neighbors for searching candidate clusters because of heavy computational load. In this paper, we show a flexible spatial scan statistic implemented with a restricted likelihood ratio proposed by Tango (2008) to (1) eliminate the limitation of 30 nearest neighbors and (2) require much less computational time than the original flexible spatial scan statistic. Monte Carlo simulation also shows that it can detect clusters of any shape reasonably well as the relative risk of the cluster becomes large. We illustrate the proposed spatial scan statistic with data on mortality from cerebrovascular disease in the Tokyo Metropolitan area, Japan. Copyright © 2012 John Wiley & Sons, Ltd.
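
    For orientation, the sketch below evaluates the standard Poisson-based log likelihood ratio with which scan statistics of the Kulldorff type score a candidate cluster (observed count inside the zone versus its expectation); the restricted statistic of Tango (2008) places an additional constraint on the risk inside the cluster, which is not reproduced here, and the counts in the example are hypothetical.

```python
# A minimal sketch of the Poisson log likelihood ratio used to score one
# candidate cluster in Kulldorff-type spatial scan statistics. Hypothetical
# counts; not the restricted statistic of Tango (2008).
import numpy as np

def poisson_cluster_llr(c, e, C):
    """c: observed cases inside the zone, e: expected cases inside, C: total cases."""
    if c <= e:
        return 0.0  # only score zones with more cases than expected
    inside = c * np.log(c / e)
    outside = (C - c) * np.log((C - c) / (C - e))
    return inside + outside

# Hypothetical example: 30 cases observed where 12 were expected, 500 cases in total
print(poisson_cluster_llr(c=30, e=12, C=500))
```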

  19. Osteoporosis, vitamin C intake, and physical activity in Korean adults aged 50 years and over

    PubMed Central

    Kim, Min Hee; Lee, Hae-Jeung

    2016-01-01

    [Purpose] To investigate associations between vitamin C intake, physical activity, and osteoporosis among Korean adults aged 50 and over. [Subjects and Methods] This study was based on bone mineral density measurement data from the 2008 to 2011 Korean National Health and Nutritional Examination Survey. The study sample comprised 3,047 subjects. The normal group was defined as T-score ≥ −1.0, and the osteoporosis group as T-score ≤ −2.5. The odds ratios for osteoporosis were assessed by logistic regression of each vitamin C intake quartile. [Results] Compared to the lowest quartile of vitamin C intake, the other quartiles showed a lower likelihood of osteoporosis after adjusting for age and gender. In the multi-variate model, the odds ratio for the likelihood of developing osteoporosis in the non-physical activity group significantly decreased to 0.66, 0.57, and 0.46 (p for trend = 0.0046). However, there was no significant decrease (0.98, 1.00, and 0.97) in the physical activity group. [Conclusion] Higher vitamin C intake levels were associated with a lower risk of osteoporosis in Korean adults aged over 50 with low levels of physical activity. However, no association was seen between vitamin C intake and osteoporosis risk in those with high physical activity levels. PMID:27134348

  20. Assessing the Performance of Medical Personnel Involved in the Diagnostic Imaging Processes in Mulago Hospital, Kampala, Uganda

    PubMed Central

    Kawooya, Michael G.; Pariyo, George; Malwadde, Elsie Kiguli; Byanyima, Rosemary; Kisembo, Harrient

    2012-01-01

    Objectives: Uganda has limited health resources, and improving the performance of personnel involved in imaging is necessary for efficiency. The objectives of the study were to develop and pilot imaging user performance indices, document non-tangible aspects of performance, and propose ways of improving performance. Materials and Methods: This was a cross-sectional survey employing triangulation methodology, conducted in Mulago National Referral Hospital over a period of 3 years from 2005 to 2008. The qualitative study used in-depth interviews, focus group discussions, and self-administered questionnaires to explore clinicians’ and radiologists’ performance-related views. Results: The study came up with the following indices: appropriate service utilization (ASU), appropriateness of clinician's nonimaging decisions (ANID), and clinical utilization of imaging results (CUI). The ASU, ANID, and CUI were: 94%, 80%, and 97%, respectively. The clinician's requisitioning validity was high (positive likelihood ratio of 10.6), contrasting with a poor validity for detecting those patients not needing imaging (negative likelihood ratio of 0.16). Some requisitions were inappropriate, and some requisitions and reports lacked detail, clarity, and precision. Conclusion: Clinicians perform well at imaging requisition decisions, but there are issues in imaging requisitioning and reporting that need to be addressed to improve performance. PMID:23230543

  1. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    A general iterative procedure is given for determining consistent maximum likelihood estimates of the parameters of a mixture of normal distributions. In addition, local maximization of the log-likelihood function, Newton's method, a method of scoring, and modifications of these procedures are discussed.

  2. The Blue Arc Entoptic Phenomenon in Glaucoma (An American Ophthalmological Thesis)

    PubMed Central

    Pasquale, Louis R.; Brusie, Steven

    2013-01-01

    Purpose: To determine whether the blue arc entoptic phenomenon, a positive visual response originating from the retina with a shape that conforms to the topology of the nerve fiber layer, is depressed in glaucoma. Methods: We recruited a cross-sectional, nonconsecutive sample of 202 patients from a single institution in a prospective manner. Subjects underwent full ophthalmic examination, including standard automated perimetry (Humphrey Visual Field 24–2) or frequency doubling technology (Screening C 20–5) perimetry. Eligible patients viewed computer-generated stimuli under conditions chosen to optimize perception of the blue arcs. Unmasked testers instructed patients to report whether they were able to perceive blue arcs but did not reveal what response was expected. We created multivariable logistic regression models to ascertain the demographic and clinical parameters associated with perceiving the blue arcs. Results: In multivariable analyses, each 0.1 unit increase in cup-disc ratio was associated with 36% reduced likelihood of perceiving the blue arcs (odds ratio [OR] = 0.66 [95% confidence interval (CI): 0.53–0.83], P<.001). A smaller mean defect was associated with an increased likelihood of perceiving the blue arcs (OR=1.79 [95% CI: 1.40–2.28]); P<.001), while larger pattern standard deviation (OR=0.72 [95% CI: 0.57–0.91]; P=.005) and abnormal glaucoma hemifield test (OR=0.25 [0.10–0.65]; P=.006) were associated with a reduced likelihood of perceiving them. Older age and media opacity were also associated with an inability to perceive the blue arcs. Conclusion: In this study, the inability to perceive the blue arcs correlated with structural and functional features associated with glaucoma, although older age and media opacity were also predictors of this entoptic response. PMID:24167324

  3. Screening for Depression in Medical Settings with the Patient Health Questionnaire (PHQ): A Diagnostic Meta-Analysis

    PubMed Central

    Richards, David; Brealey, Stephen; Hewitt, Catherine

    2007-01-01

    Objective To summarize the psychometric properties of the PHQ2 and PHQ9 as screening instruments for depression. Interventions We identified 17 validation studies conducted in primary care; medical outpatients; and specialist medical services (cardiology, gynecology, stroke, dermatology, head injury, and otolaryngology). Electronic databases from 1994 to February 2007 (MEDLINE, PsycLIT, EMBASE, CINAHL, Cochrane registers) plus study reference lists were used for this study. Translations included US English, Dutch, Italian, Spanish, German, and Arabic. Summary sensitivity, specificity, likelihood and diagnostic odds ratios (OR) against a gold standard (DSM-IV) Major Depressive Disorder (MDD) were calculated for each study. We used random effects bivariate meta-analysis at recommended cut points to produce summary receiver operating characteristic (sROC) curves. We explored heterogeneity with metaregression. Measurements and Main Results Fourteen studies (5,026 participants) validated the PHQ9 against MDD: sensitivity = 0.80 (95% CI 0.71–0.87); specificity = 0.92 (95% CI 0.88–0.95); positive likelihood ratio = 10.12 (95% CI 6.52–15.67); negative likelihood ratio = 0.22 (0.15 to 0.32). There was substantial heterogeneity (Diagnostic Odds Ratio heterogeneity I² = 82%), which was not explained by study setting (primary care versus general hospital); method of scoring (cutoff ≥ 10 versus “diagnostic algorithm”); or study quality (blinded versus unblinded). The diagnostic validity of the PHQ2 was only validated in 3 studies and showed wide variability in sensitivity. Conclusions The PHQ9 is acceptable, and as good as longer clinician-administered instruments in a range of settings, countries, and populations. More research is needed to validate the PHQ2 to see if its diagnostic properties approach those of the PHQ9. PMID:17874169

  4. Neonatal Intensive Care Unit Census Influences Discharge of Moderately Preterm Infants

    PubMed Central

    Profit, Jochen; McCormick, Marie C.; Escobar, Gabriel J.; Richardson, Douglas K.; Zheng, Zheng; Coleman-Phox, Kim; Roberts, Rebecca; Zupancic, John A. F.

    2011-01-01

    Objective The timely discharge of moderately premature infants has important economic implications. The decision to discharge should occur independent of unit census. We evaluated the impact of unit census on the decision to discharge moderately preterm infants. Design/Methods In a prospective multicenter cohort study, we enrolled 850 infants born between 30 and 34 weeks' gestation at 10 NICUs in Massachusetts and California. We divided the daily census from each hospital into quintiles and tested whether discharges were evenly distributed among them. Using logistic regression, we analyzed predictors of discharge within census quintiles associated with a greater- or less-than-expected likelihood of discharge. We then explored parental satisfaction and postdischarge resource consumption in relation to discharge during census periods that were associated with high proportions of discharge. Results There was a significant correlation between unit census and likelihood of discharge. When unit census was in the lowest quintile, patients were 20% less likely to be discharged when compared with all of the other quintiles of unit census. In the lowest quintile of unit census, patient/nurse ratio was the only variable associated with discharge. When census was in the highest quintile, patients were 32% more likely to be discharged when compared with all of the other quintiles of unit census. For patients in this quintile, a higher patient/nurse ratio increased the likelihood of discharge. Conversely, infants with prolonged lengths of stay, an increasing Score for Neonatal Acute Physiology II, and minor congenital anomalies were less likely to be discharged. Infants discharged at high unit census did not differ from their peers in terms of parental satisfaction, emergency department visits, home nurse visits, or rehospitalization rates. Conclusions Discharges are closely correlated with unit census. Providers incorporate demand and case mix into their discharge decisions. PMID:17272621

  5. Experimental congruence of interval scale production from paired comparisons and ranking for image evaluation

    NASA Astrophysics Data System (ADS)

    Handley, John C.; Babcock, Jason S.; Pelz, Jeff B.

    2003-12-01

    Image evaluation tasks are often conducted using paired comparisons or ranking. To elicit interval scales, both methods rely on Thurstone's Law of Comparative Judgment in which objects closer in psychological space are more often confused in preference comparisons by a putative discriminal random process. It is often debated whether paired comparisons and ranking yield the same interval scales. An experiment was conducted to assess scale production using paired comparisons and ranking. For this experiment a Pioneer Plasma Display and Apple Cinema Display were used for stimulus presentation. Observers performed rank order and paired comparisons tasks on both displays. For each of five scenes, six images were created by manipulating attributes such as lightness, chroma, and hue using six different settings. The intention was to simulate the variability from a set of digital cameras or scanners. Nineteen subjects (5 females, 14 males), ranging from 19 to 51 years of age, participated in this experiment. Using a paired comparison model and a ranking model, scales were estimated for each display and image combination yielding ten scale pairs, ostensibly measuring the same psychological scale. The Bradley-Terry model was used for the paired comparisons data and the Bradley-Terry-Mallows model was used for the ranking data. Each model was fit using maximum likelihood estimation and assessed using likelihood ratio tests. Approximate 95% confidence intervals were also constructed using likelihood ratios. Model fits for paired comparisons were satisfactory for all scales except those from two image/display pairs; the ranking model fit uniformly well on all data sets. Arguing from overlapping confidence intervals, we conclude that paired comparisons and ranking produce no conflicting decisions regarding ultimate ordering of treatment preferences, but paired comparisons yield greater precision at the expense of lack-of-fit.
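
    As a sketch of the paired-comparison modelling referred to above, the snippet below fits a Bradley-Terry model to a hypothetical 4x4 win-count matrix by maximum likelihood; the fitted log-likelihoods of nested models could then feed the likelihood ratio tests mentioned in the abstract. The data and the choice of optimizer are assumptions for illustration, not the experiment's.

```python
# A minimal sketch of Bradley-Terry maximum likelihood fitting on a
# hypothetical paired-comparison win matrix; not the study's data or code.
import numpy as np
from scipy.optimize import minimize

wins = np.array([[0, 7, 9, 11],
                 [5, 0, 8, 10],
                 [3, 4, 0, 9],
                 [1, 2, 3, 0]], dtype=float)  # wins[i, j]: times item i preferred over j

def neg_log_lik(beta_free):
    beta = np.concatenate([[0.0], beta_free])   # fix item 0's strength at 0
    diff = beta[:, None] - beta[None, :]        # beta_i - beta_j
    # log P(i beats j) = diff - log(1 + exp(diff)); diagonal terms have zero weight
    return -np.sum(wins * (diff - np.log1p(np.exp(diff))))

res = minimize(neg_log_lik, x0=np.zeros(wins.shape[0] - 1), method="BFGS")
print("estimated strengths (item 0 fixed at 0):", np.round(res.x, 3))
```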

  6. An Informative Interpretation of Decision Theory: The Information Theoretic Basis for Signal-to-Noise Ratio and Log Likelihood Ratio

    DOE PAGES

    Polcari, J.

    2013-08-16

    The signal processing concept of signal-to-noise ratio (SNR), in its role as a performance measure, is recast within the more general context of information theory, leading to a series of useful insights. Establishing generalized SNR (GSNR) as a rigorous information theoretic measure inherent in any set of observations significantly strengthens its quantitative performance pedigree while simultaneously providing a specific definition under general conditions. This directly leads to consideration of the log likelihood ratio (LLR): first, as the simplest possible information-preserving transformation (i.e., signal processing algorithm) and subsequently, as an absolute, comparable measure of information for any specific observation exemplar. Furthermore, the information accounting methodology that results permits practical use of both GSNR and LLR as diagnostic scalar performance measurements, directly comparable across alternative system/algorithm designs, applicable at any tap point within any processing string, in a form that is also comparable with the inherent performance bounds due to information conservation.

  7. A Bayesian estimation of the helioseismic solar age

    NASA Astrophysics Data System (ADS)

    Bonanno, A.; Fröhlich, H.-E.

    2015-08-01

    Context. The helioseismic determination of the solar age has been a subject of several studies because it provides us with an independent estimation of the age of the solar system. Aims: We present the Bayesian estimates of the helioseismic age of the Sun, which are determined by means of calibrated solar models that employ different equations of state and nuclear reaction rates. Methods: We use 17 frequency separation ratios r02(n) = (ν_{n,l=0} − ν_{n−1,l=2})/(ν_{n,l=1} − ν_{n−1,l=1}) from 8640 days of low-ℓ BiSON frequencies and consider three likelihood functions that depend on the handling of the errors of these r02(n) ratios. Moreover, we employ the 2010 CODATA recommended values for Newton's constant, solar mass, and radius to calibrate a large grid of solar models spanning a conceivable range of solar ages. Results: It is shown that the most constrained posterior distribution of the solar age for models employing Irwin EOS with NACRE reaction rates leads to t⊙ = 4.587 ± 0.007 Gyr, while models employing the Irwin EOS and Adelberger et al. (2011, Rev. Mod. Phys., 83, 195) reaction rates have t⊙ = 4.569 ± 0.006 Gyr. Implementing OPAL EOS in the solar models results in reduced evidence ratios (Bayes factors) and leads to an age that is not consistent with the meteoritic dating of the solar system. Conclusions: An estimate of the solar age that relies on a helioseismic age indicator such as r02(n) turns out to be essentially independent of the type of likelihood function. However, with respect to model selection, abandoning any information concerning the errors of the r02(n) ratios leads to inconclusive results, and this stresses the importance of evaluating the trustworthiness of error estimates.

  8. Nasal Airway Microbiota Profile and Severe Bronchiolitis in Infants: A Case-control Study.

    PubMed

    Hasegawa, Kohei; Linnemann, Rachel W; Mansbach, Jonathan M; Ajami, Nadim J; Espinola, Janice A; Petrosino, Joseph F; Piedra, Pedro A; Stevenson, Michelle D; Sullivan, Ashley F; Thompson, Amy D; Camargo, Carlos A

    2017-11-01

    Little is known about the relationship of airway microbiota with bronchiolitis in infants. We aimed to identify nasal airway microbiota profiles and to determine their association with the likelihood of bronchiolitis in infants. A case-control study was conducted. As a part of a multicenter prospective study, we collected nasal airway samples from 40 infants hospitalized with bronchiolitis. We concurrently enrolled 110 age-matched healthy controls. By applying 16S ribosomal RNA gene sequencing and an unbiased clustering approach to these 150 nasal samples, we identified microbiota profiles and determined the association of microbiota profiles with likelihood of bronchiolitis. Overall, the median age was 3 months and 56% were male. Unbiased clustering of airway microbiota identified 4 distinct profiles: Moraxella-dominant profile (37%), Corynebacterium/Dolosigranulum-dominant profile (27%), Staphylococcus-dominant profile (15%) and mixed profile (20%). Proportion of bronchiolitis was lowest in infants with Moraxella-dominant profile (14%) and highest in those with Staphylococcus-dominant profile (57%), corresponding to an odds ratio of 7.80 (95% confidence interval, 2.64-24.9; P < 0.001). In the multivariable model, the association between Staphylococcus-dominant profile and greater likelihood of bronchiolitis persisted (odds ratio for comparison with Moraxella-dominant profile, 5.16; 95% confidence interval, 1.26-22.9; P = 0.03). By contrast, Corynebacterium/Dolosigranulum-dominant profile group had low proportion of infants with bronchiolitis (17%); the likelihood of bronchiolitis in this group did not significantly differ from those with Moraxella-dominant profile in both unadjusted and adjusted analyses. In this case-control study, we identified 4 distinct nasal airway microbiota profiles in infants. Moraxella-dominant and Corynebacterium/Dolosigranulum-dominant profiles were associated with low likelihood of bronchiolitis, while Staphylococcus-dominant profile was associated with high likelihood of bronchiolitis.

  9. The Maximum Likelihood Solution for Inclination-only Data

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2006-12-01

    The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of ±90 degrees, but with relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and the mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
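
    The numerical difficulty described above, terms of the likelihood that grow exponentially with the precision parameter, is commonly handled by factoring the exponential part out analytically and working in log space. The sketch below shows this trick for two generic Fisher-distribution ingredients (sinh κ and the modified Bessel function I0); it illustrates the numerical idea only and is not the authors' exact marginal likelihood.

```python
# A minimal sketch of the numerical idea: factor the exponentially growing
# part out of the likelihood analytically and work in log space, so terms
# can be evaluated for any concentration kappa. Generic Fisher-distribution
# ingredients, not the authors' exact expressions.
import numpy as np
from scipy.special import i0e  # exponentially scaled modified Bessel I0

def log_sinh(kappa):
    # log(sinh(kappa)) = kappa + log(1 - exp(-2*kappa)) - log(2), overflow-free
    return kappa + np.log1p(-np.exp(-2.0 * kappa)) - np.log(2.0)

def log_i0(x):
    # log(I0(x)) = x + log(i0e(x)); i0e(x) = exp(-x) * I0(x) stays finite
    return x + np.log(i0e(x))

kappa = 1500.0   # naive exp(kappa) would overflow double precision here
print(log_sinh(kappa), log_i0(kappa))
```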

  10. Diagnostic Accuracy of Coronary Computed Tomography Before Aortic Valve Replacement: Systematic Review and Meta-Analysis.

    PubMed

    Chaikriangkrai, Kongkiat; Jhun, Hye Yeon; Shantha, Ghanshyam Palamaner Subash; Abdulhak, Aref Bin; Tandon, Rudhir; Alqasrawi, Musab; Klappa, Anthony; Pancholy, Samir; Deshmukh, Abhishek; Bhama, Jay; Sigurdsson, Gardar

    2018-07-01

    In aortic stenosis patients referred for surgical and transcatheter aortic valve replacement (AVR), the evidence of diagnostic accuracy of coronary computed tomography angiography (CCTA) has been limited. The objective of this study was to investigate the diagnostic accuracy of CCTA for significant coronary artery disease (CAD) in patients referred for AVR using invasive coronary angiography (ICA) as the gold standard. We searched databases for all diagnostic studies of CCTA in patients referred for AVR, which reported diagnostic testing characteristics on patient-based analysis required to pool summary sensitivity, specificity, positive-likelihood ratio, and negative-likelihood ratio. Significant CAD in both CCTA and ICA was defined by >50% stenosis in any coronary artery, coronary stent, or bypass graft. Thirteen studies evaluated 1498 patients (mean age, 74 y; 47% men; 76% transcatheter AVR). The pooled prevalence of significant stenosis determined by ICA was 43%. Hierarchical summary receiver-operating characteristic analysis demonstrated a summary area under curve of 0.96. The pooled sensitivity, specificity, and positive-likelihood and negative-likelihood ratios of CCTA in identifying significant stenosis determined by ICA were 95%, 79%, 4.48, and 0.06, respectively. In subgroup analysis, the diagnostic profiles of CCTA were comparable between surgical and transcatheter AVR. Despite the higher prevalence of significant CAD in patients with aortic stenosis than with other valvular heart diseases, our meta-analysis has shown that CCTA has a suitable diagnostic accuracy profile as a gatekeeper test for ICA. Our study illustrates a need for further study of the potential role of CCTA in preoperative planning for AVR.

  11. Toward the detection of gravitational waves under non-Gaussian noises I. Locally optimal statistic

    PubMed Central

    YOKOYAMA, Jun’ichi

    2014-01-01

    After reviewing the standard hypothesis test and the matched filter technique to identify gravitational waves under Gaussian noises, we introduce two methods to deal with non-Gaussian stationary noises. We formulate the likelihood ratio function under weakly non-Gaussian noises through the Edgeworth expansion, and under strongly non-Gaussian noises in terms of a new method we call Gaussian mapping, where the observed marginal distribution and the two-body correlation function are fully taken into account. We then apply these two approaches to Student’s t-distribution, which has larger tails than the Gaussian. It is shown that while both methods work well when the non-Gaussianity is small, only the latter method works well in the highly non-Gaussian case. PMID:25504231

  12. Extending the Li&Ma method to include PSF information

    NASA Astrophysics Data System (ADS)

    Nievas-Rosillo, M.; Contreras, J. L.

    2016-02-01

    The so-called Li&Ma formula is still the most frequently used method for estimating the significance of observations carried out by Imaging Atmospheric Cherenkov Telescopes. In this work, a straightforward extension of the method for point sources is proposed that profits from the good imaging capabilities of current instruments. It is based on a likelihood ratio under the assumption of a well-known PSF and a smooth background. Its performance is tested with Monte Carlo simulations based on real observations and its sensitivity is compared to standard methods which do not incorporate PSF information. The gain in significance that can be attributed to the inclusion of the PSF is around 10% and can be boosted if a background model is assumed or a finer binning is used.
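
    For reference, the standard Li&Ma (1983, Eq. 17) significance that the proposed method extends can be computed as below; the on/off counts and exposure ratio are hypothetical, and the PSF-weighted likelihood ratio of the extension is not reproduced here.

```python
# A minimal sketch of the standard Li & Ma (1983, Eq. 17) significance for an
# on/off counting observation. Hypothetical counts and exposure ratio.
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """alpha = t_on / t_off exposure ratio."""
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

# 130 counts in the ON region, 500 in the OFF region, exposure ratio 0.2
print(li_ma_significance(n_on=130, n_off=500, alpha=0.2))  # roughly 2.6 sigma
```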

  13. Effect of Provider Experience on Clinician-Performed Ultrasonography for Hydronephrosis in Patients With Suspected Renal Colic

    PubMed Central

    Herbst, Meghan K.; Rosenberg, Graeme; Daniels, Brock; Gross, Cary P.; Singh, Dinesh; Molinaro, Annette M.; Luty, Seth; Moore, Christopher L.

    2016-01-01

    Study objective Hydronephrosis is readily visible on ultrasonography and is a strong predictor of ureteral stones, but ultrasonography is a user-dependent technology and the test characteristics of clinician-performed ultrasonography for hydronephrosis are incompletely characterized, as is the effect of ultrasound fellowship training on predictive accuracy. We seek to determine the test characteristics of ultrasonography for detecting hydronephrosis when performed by clinicians with a wide range of experience under conditions of direct patient care. Methods This was a prospective study of patients presenting to an academic medical center emergency department with suspected renal colic. Before computed tomography (CT) results, an emergency clinician performed bedside ultrasonography, recording the presence and degree of hydronephrosis. CT data were abstracted from the dictated radiology report by an investigator blinded to the bedside ultrasonographic results. Test characteristics of bedside ultrasonography for hydronephrosis were calculated with the CT scan as the reference standard, with test characteristics compared by clinician experience stratified into 4 levels: attending physicians with emergency ultrasound fellowship training, attending physicians without emergency ultrasound fellowship training, ultrasound experienced non–attending physician clinicians (at least 2 weeks of ultrasound training), and ultrasound inexperienced non–attending physician clinicians (physician assistants, nurse practitioners, off-service rotators, and first-year emergency medicine residents with fewer than 2 weeks of ultrasound training). Results There were 670 interpretable bedside ultrasonographic tests performed by 144 unique clinicians, 80.9% of which were performed by clinicians directly involved in the care of the patient. On CT, 47.5% of all subjects had hydronephrosis and 47.0% had a ureteral stone. Among all clinicians, ultrasonography had a sensitivity of 72.6% (95% confidence interval [CI] 65.4% to 78.9%), specificity of 73.3% (95% CI 66.1% to 79.4%), positive likelihood ratio of 2.72 (95% CI 2.25 to 3.27), and negative likelihood ratio of 0.37 (95% CI 0.31 to 0.44) for hydronephrosis, using hydronephrosis on CT as the criterion standard. Among attending physicians with fellowship training, ultrasonography had sensitivity of 92.7% (95% CI 83.8% to 96.9%), positive likelihood ratio of 4.97 (95% CI 2.90 to 8.51), and negative likelihood ratio of 0.08 (95% CI 0.03 to 0.23). Conclusion Overall, ultrasonography performed by emergency clinicians was moderately sensitive and specific for detection of hydronephrosis as seen on CT in patients with suspected renal colic. However, presence or absence of hydronephrosis as determined by emergency physicians with fellowship training in ultrasonography yielded more definitive test results. For clinicians without fellowship training, there was no significant difference between groups in the predictive accuracy of the application according to experience level. PMID:24630203

  14. Display size effects in visual search: analyses of reaction time distributions as mixtures.

    PubMed

    Reynolds, Ann; Miller, Jeff

    2009-05-01

    In a reanalysis of data from Cousineau and Shiffrin (2004) and two new visual search experiments, we used a likelihood ratio test to examine the full distributions of reaction time (RT) for evidence that the display size effect is a mixture-type effect that occurs on only a proportion of trials, leaving RT in the remaining trials unaffected, as is predicted by serial self-terminating search models. Experiment 1 was a reanalysis of Cousineau and Shiffrin's data, for which a mixture effect had previously been established by a bimodal distribution of RTs, and the results confirmed that the likelihood ratio test could also detect this mixture. Experiment 2 applied the likelihood ratio test within a more standard visual search task with a relatively easy target/distractor discrimination, and Experiment 3 applied it within a target identification search task within the same types of stimuli. Neither of these experiments provided any evidence for the mixture-type display size effect predicted by serial self-terminating search models. Overall, these results suggest that serial self-terminating search models may generally be applicable only with relatively difficult target/distractor discriminations, and then only for some participants. In addition, they further illustrate the utility of analysing full RT distributions in addition to mean RT.

  15. Transperineal ultrasound compared to evacuation proctography for diagnosing enteroceles and intussusceptions.

    PubMed

    Weemhoff, M; Kluivers, K B; Govaert, B; Evers, J L H; Kessels, A G H; Baeten, C G

    2013-03-01

    This study concerns the level of agreement between transperineal ultrasound and evacuation proctography for diagnosing enteroceles and intussusceptions. In a prospective observational study, 50 consecutive women who were planned to have an evacuation proctography underwent transperineal ultrasound too. Sensitivity, specificity, positive (PPV) and negative predictive value, as well as the positive and negative likelihood ratio of transperineal ultrasound were assessed in comparison to evacuation proctography. To determine the interobserver agreement of transperineal ultrasound, the quadratic weighted kappa was calculated. Furthermore, receiver operating characteristic curves were generated to show the diagnostic capability of transperineal ultrasound. For diagnosing intussusceptions (PPV 1.00), a positive finding on transperineal ultrasound was predictive of an abnormal evacuation proctography. Sensitivity of transperineal ultrasound was poor for intussusceptions (0.25). For diagnosing enteroceles, the positive likelihood ratio was 2.10 and the negative likelihood ratio, 0.85. There are many false-positive findings of enteroceles on ultrasonography (PPV 0.29). The interobserver agreement of the two ultrasonographers assessed as the quadratic weighted kappa of diagnosing enteroceles was 0.44 and that of diagnosing intussusceptions was 0.23. An intussusception on ultrasound is predictive of an abnormal evacuation proctography. For diagnosing enteroceles, the diagnostic quality of transperineal ultrasound was limited compared to evacuation proctography.

  16. Is it possible to predict office hysteroscopy failure?

    PubMed

    Cobellis, Luigi; Castaldi, Maria Antonietta; Giordano, Valentino; De Franciscis, Pasquale; Signoriello, Giuseppe; Colacurci, Nicola

    2014-10-01

    The purpose of this study was to develop a clinical tool, the HFI (Hysteroscopy Failure Index), which gives criteria to predict hysteroscopic examination failure. This was a retrospective diagnostic test study, aimed at validating the HFI, set at the Department of Gynaecology, Obstetric and Reproductive Science of the Second University of Naples, Italy. The HFI was applied to our database of 995 consecutive women, who underwent office-based hysteroscopy to assess abnormal uterine bleeding (AUB), infertility, cervical polyps, and abnormal sonographic patterns (postmenopausal endometrial thickness of more than 5 mm, endometrial hyperechogenic spots, irregular endometrial line, suspected uterine septa). Demographic characteristics, previous surgery, recurrent infections, sonographic data, Estro-Progestins, IUD and menopausal status were collected. Receiver operating characteristic (ROC) curve analysis was used to assess the ability of the model to identify failed hysteroscopies, calculated as the number of correctly identified failures (true positives) divided by the total number of failed hysteroscopies (true positives + false negatives). Positive and negative likelihood ratios with 95% CI were calculated. The HFI score is able to predict office hysteroscopy failure in 76% of cases. Moreover, the positive likelihood ratio was 11.37 (95% CI: 8.49-15.21), and the negative likelihood ratio was 0.33 (95% CI: 0.27-0.41). The Hysteroscopy Failure Index was able to retrospectively predict office hysteroscopy failure. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  17. Estimating Function Approaches for Spatial Point Processes

    NASA Astrophysics Data System (ADS)

    Deng, Chong

Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information because they ignore the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives that balance the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach for fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible because of the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also relaxes the constraint on the tuning parameter H. Third, we study the quasi-likelihood type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. Then, by using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to more general setups than the original quasi-likelihood method.

  18. Land use/land cover mapping (1:25000) of Taiwan, Republic of China by automated multispectral interpretation of LANDSAT imagery

    NASA Technical Reports Server (NTRS)

    Sung, Q. C.; Miller, L. D.

    1977-01-01

Three methods were tested for collecting the training sets needed to establish the spectral signatures of the land uses/land covers sought, owing to the difficulty of retrospectively collecting representative ground control data. Computer preprocessing techniques applied to the digital images to improve the final classification results were geometric corrections, spectral band or image ratioing, and statistical cleaning of the representative training sets. A minimal level of statistical verification was made based upon comparisons between the airphoto estimates and the classification results. The verification provided further support for the selection of MSS bands 5 and 7. It also indicated that the maximum likelihood ratioing technique achieved classification results in closer agreement with the airphoto estimates than did stepwise discriminant analysis.
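The per-pixel maximum likelihood classification referred to above typically assumes a multivariate Gaussian spectral signature per class, estimated from training pixels. The sketch below illustrates that idea with synthetic data; the class names, band counts and arrays are hypothetical and do not reproduce the original study's processing chain.

```python
# Minimal sketch of per-pixel Gaussian maximum likelihood classification,
# of the kind commonly used for multispectral land-cover mapping.
# Synthetic, hypothetical data; not the study's LANDSAT processing chain.
import numpy as np

def fit_class_stats(training_pixels):
    """training_pixels: dict class_name -> (n_samples, n_bands) array."""
    stats = {}
    for name, x in training_pixels.items():
        mu = x.mean(axis=0)
        cov = np.cov(x, rowvar=False)
        stats[name] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return stats

def classify(pixels, stats):
    """pixels: (n_pixels, n_bands). Returns the maximum likelihood class per pixel."""
    names = list(stats)
    scores = np.empty((pixels.shape[0], len(names)))
    for j, name in enumerate(names):
        mu, prec, logdet = stats[name]
        d = pixels - mu
        # Gaussian log-likelihood up to a constant: -0.5 * (log|Sigma| + d' Sigma^-1 d)
        scores[:, j] = -0.5 * (logdet + np.einsum("ij,jk,ik->i", d, prec, d))
    return [names[k] for k in scores.argmax(axis=1)]

rng = np.random.default_rng(0)
train = {"water": rng.normal(20, 3, (50, 2)), "forest": rng.normal(60, 5, (50, 2))}
print(classify(rng.normal(58, 5, (3, 2)), fit_class_stats(train)))
```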

  19. Muon identification with Muon Telescope Detector at the STAR experiment

    DOE PAGES

    Huang, T. C.; Ma, R.; Huang, B.; ...

    2016-07-15

The Muon Telescope Detector (MTD) is a newly installed detector in the STAR experiment. It provides an excellent opportunity to study heavy quarkonium physics using the dimuon channel in heavy ion collisions. In this paper, we report the muon identification performance for the MTD using proton-proton collisions at √s = 500 GeV with various methods. The result using the Likelihood Ratio method shows that the muon identification efficiency can reach up to ~90% for muons with transverse momenta greater than 3 GeV/c, and the significance of the J/ψ signal is improved by a factor of 2 compared to using the basic selection.

  20. Identifying Malignant Pleural Effusion by A Cancer Ratio (Serum LDH: Pleural Fluid ADA Ratio).

    PubMed

    Verma, Akash; Abisheganaden, John; Light, R W

    2016-02-01

We studied the diagnostic potential of serum lactate dehydrogenase (LDH) in malignant pleural effusion, in a retrospective analysis of patients hospitalized with exudative pleural effusion in 2013. Serum LDH and the serum LDH: pleural fluid ADA ratio were significantly higher in cancer patients presenting with exudative pleural effusion. In multivariate logistic regression analysis, pleural fluid ADA was negatively correlated with malignancy (0.62; 0.45-0.85, p = 0.003), whereas serum LDH (1.02; 1.0-1.03, p = 0.004) and the serum LDH: pleural fluid ADA ratio (0.94; 0.99-1.0, p = 0.04) were positively correlated with malignant pleural effusion. For the serum LDH: pleural fluid ADA ratio, a cut-off level of >20 showed a sensitivity of 0.98 (95% CI 0.92-0.99) and a specificity of 0.94 (95% CI 0.83-0.98). The positive likelihood ratio was 32.6 (95% CI 10.7-99.6), while the negative likelihood ratio at this cut-off was 0.03 (95% CI 0.01-0.15). Higher serum LDH and serum LDH: pleural fluid ADA ratio in patients presenting with exudative pleural effusion can distinguish between malignant and non-malignant effusion on the first day of hospitalization. A serum LDH: pleural fluid ADA ratio cut-off of >20 is highly predictive of malignancy in patients with exudative pleural effusion (whether lymphocytic or neutrophilic), with high sensitivity and specificity.
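Likelihood ratios such as those reported above are usually applied clinically by converting a pretest probability into a posttest probability through the odds form of Bayes' theorem. A minimal sketch follows; the 30% pretest probability is hypothetical and used only for illustration.

```python
# Minimal sketch: converting a pretest probability and a likelihood ratio
# into a posttest probability via odds (the pretest value is hypothetical).

def posttest_probability(pretest_prob, likelihood_ratio):
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# With a hypothetical 30% pretest probability of malignancy and the reported
# LR+ of 32.6, the posttest probability is roughly 93%.
print(posttest_probability(0.30, 32.6))
```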

  1. New prior sampling methods for nested sampling - Development and testing

    NASA Astrophysics Data System (ADS)

    Stokes, Barrie; Tuyl, Frank; Hudson, Irene

    2017-06-01

    Nested Sampling is a powerful algorithm for fitting models to data in the Bayesian setting, introduced by Skilling [1]. The nested sampling algorithm proceeds by carrying out a series of compressive steps, involving successively nested iso-likelihood boundaries, starting with the full prior distribution of the problem parameters. The "central problem" of nested sampling is to draw at each step a sample from the prior distribution whose likelihood is greater than the current likelihood threshold, i.e., a sample falling inside the current likelihood-restricted region. For both flat and informative priors this ultimately requires uniform sampling restricted to the likelihood-restricted region. We present two new methods of carrying out this sampling step, and illustrate their use with the lighthouse problem [2], a bivariate likelihood used by Gregory [3] and a trivariate Gaussian mixture likelihood. All the algorithm development and testing reported here has been done with Mathematica® [4].
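The "central problem" described above can be illustrated with the simplest possible approach: rejection sampling from the prior until the likelihood constraint is met. The sketch below is that naive baseline (which scales poorly as the constrained region shrinks), not one of the new methods proposed in the paper; the likelihood, prior and live-point count are illustrative, and the evidence bookkeeping is omitted.

```python
# Minimal sketch of nested sampling's central step: drawing from the prior
# subject to L(theta) > L*. Plain rejection sampling is shown for clarity;
# it is not the paper's proposed method and the evidence accumulation is
# omitted. Likelihood, prior and live-point count are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(theta):
    return -0.5 * np.sum((theta - 0.5) ** 2) / 0.01   # toy Gaussian likelihood

def sample_prior():
    return rng.uniform(0.0, 1.0, size=2)              # flat prior on [0, 1]^2

def constrained_prior_draw(loglike_threshold, max_tries=100_000):
    for _ in range(max_tries):
        theta = sample_prior()
        if log_likelihood(theta) > loglike_threshold:
            return theta
    raise RuntimeError("rejection sampling failed; constrained region too small")

live = [sample_prior() for _ in range(100)]           # live points
for step in range(200):                               # compressive steps
    logls = [log_likelihood(t) for t in live]
    worst = int(np.argmin(logls))                     # lowest-likelihood live point
    live[worst] = constrained_prior_draw(logls[worst])  # replace it from the constrained prior
```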

  2. Application of random match probability calculations to mixed STR profiles.

    PubMed

    Bille, Todd; Bright, Jo-Anne; Buckleton, John

    2013-03-01

Mixed DNA profiles are being encountered more frequently as laboratories analyze increasing amounts of touch evidence. If it is determined that an individual could be a possible contributor to the mixture, it is necessary to perform a statistical analysis to allow an assignment of weight to the evidence. Currently, the combined probability of inclusion (CPI) and the likelihood ratio (LR) are the most commonly used methods to perform the statistical analysis. A third method, random match probability (RMP), is available. This article compares the advantages and disadvantages of the CPI and LR methods with those of the RMP method. We demonstrate that although the LR method is still considered the most powerful of the binary methods, the RMP and LR methods make similar use of the observed data, such as peak height, assumed number of contributors, and known contributors, whereas the CPI calculation tends to waste information and be less informative. © 2013 American Academy of Forensic Sciences.

  3. Can We Rule Out Meningitis from Negative Jolt Accentuation? A Retrospective Cohort Study.

    PubMed

    Sato, Ryota; Kuriyama, Akira; Luthe, Sarah Kyuragi

    2017-04-01

Jolt accentuation has been considered to be the most sensitive physical finding to predict meningitis. However, there are only a few studies assessing the diagnostic accuracy of jolt accentuation. Therefore, we aimed to evaluate the diagnostic accuracy of jolt accentuation and investigate whether it can be extended to patients with mild altered mental status. We performed a single-center, retrospective observational study on patients who presented to the emergency department in a Japanese tertiary care center from January 1, 2010 to March 31, 2016. Jolt accentuation evaluated in patients with fever, headache, and mild altered mental status with Glasgow Coma Scale no lower than E2 or M4 was defined as "jolt accentuation in the broad sense." Jolt accentuation evaluated in patients with fever, headache, and no altered mental status was defined as "jolt accentuation in the narrow sense." We evaluated the sensitivity and specificity in both groups. Among 118 patients, the sensitivity and specificity of jolt accentuation in the broad sense were 70.7% (95% confidence interval (CI): 58.0%-80.8%) and 36.7% (95% CI: 25.6%-49.3%). The positive likelihood ratio and negative likelihood ratio were 1.12 (95% CI: 0.87-1.44) and 0.80 (95% CI: 0.48-1.34), respectively. Among 108 patients, the sensitivity and specificity of jolt accentuation in the narrow sense were 75.0% (95% CI: 61.8%-84.8%) and 35.1% (95% CI: 24.0%-48.0%). The positive likelihood ratio and negative likelihood ratio were 1.16 (95% CI: 0.90-1.48) and 0.71 (95% CI: 0.40-1.28), respectively. Jolt accentuation itself has a limited value in the diagnosis of meningitis regardless of altered mental status. Therefore, meningitis should not be ruled out by negative jolt accentuation. © 2017 American Headache Society.
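Confidence intervals for likelihood ratios, like those quoted above, are commonly obtained from the 2x2 counts via a log-transform (delta-method) approximation. The sketch below shows that standard approximation with illustrative counts; it is not necessarily the exact procedure used in the study.

```python
# Minimal sketch: 95% CI for a positive likelihood ratio using the usual
# log-transform (delta-method) approximation. Counts are illustrative only.
import math

def lr_pos_with_ci(tp, fp, fn, tn, z=1.96):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr = sens / (1 - spec)
    # approximate standard error of log(LR+)
    se_log = math.sqrt(1/tp - 1/(tp + fn) + 1/fp - 1/(fp + tn))
    lo = math.exp(math.log(lr) - z * se_log)
    hi = math.exp(math.log(lr) + z * se_log)
    return lr, (lo, hi)

print(lr_pos_with_ci(tp=41, fp=38, fn=17, tn=22))
```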

  4. Synthesizing Regression Results: A Factored Likelihood Method

    ERIC Educational Resources Information Center

    Wu, Meng-Jia; Becker, Betsy Jane

    2013-01-01

    Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…

  5. Gaussianization for fast and accurate inference from cosmological data

    NASA Astrophysics Data System (ADS)

    Schuhmann, Robert L.; Joachimi, Benjamin; Peiris, Hiranya V.

    2016-06-01

We present a method to transform multivariate unimodal non-Gaussian posterior probability densities into approximately Gaussian ones via non-linear mappings, such as Box-Cox transformations and generalizations thereof. This permits an analytical reconstruction of the posterior from a point sample, like a Markov chain, and simplifies the subsequent joint analysis with other experiments. This way, a multivariate posterior density can be reported efficiently, by compressing the information contained in Markov Chain Monte Carlo samples. Further, the model evidence integral (i.e. the marginal likelihood) can be computed analytically. This method is analogous to the search for normal parameters in the cosmic microwave background, but is more general. The search for the optimally Gaussianizing transformation is performed computationally through a maximum-likelihood formalism; its quality can be judged by how well the credible regions of the posterior are reproduced. We demonstrate that our method outperforms kernel density estimates in this objective. Further, we select marginal posterior samples from Planck data with several distinct strongly non-Gaussian features, and verify the reproduction of the marginal contours. To demonstrate evidence computation, we Gaussianize the joint distribution of data from weak lensing and baryon acoustic oscillations, for different cosmological models, and find a preference for flat Λ cold dark matter. Comparing to values computed with the Savage-Dickey density ratio, and Population Monte Carlo, we find good agreement of our method within the spread of the other two.
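The simplest building block of such Gaussianizing maps is the one-dimensional Box-Cox transform with its parameter chosen by maximum likelihood. The sketch below illustrates that building block on synthetic skewed data using scipy; the paper itself works with multivariate generalizations, so this is only an illustrative analogue.

```python
# Minimal sketch: Gaussianizing a 1-D skewed sample with a Box-Cox transform
# whose parameter lambda is chosen by maximum likelihood (synthetic data;
# the paper uses multivariate generalizations of this idea).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
samples = rng.lognormal(mean=0.0, sigma=0.7, size=5000)   # skewed "posterior" sample

transformed, lam = stats.boxcox(samples)   # ML estimate of the Box-Cox parameter

print("lambda =", lam)
print("skewness before:", stats.skew(samples))
print("skewness after :", stats.skew(transformed))
```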

  6. Detection of scabies: A systematic review of diagnostic methods

    PubMed Central

    Leung, Victor; Miller, Mark

    2011-01-01

    BACKGROUND: Accurate diagnosis of scabies infection is important for patient treatment and for public health control of scabies epidemics. OBJECTIVE: To systematically review the accuracy and precision of history, physical examination and tests for diagnosing scabies. METHODS: Using a structured search strategy, Medline and Embase databases were searched for English and French language articles that included a diagnosis of scabies. Studies comparing history, physical examination and/or any diagnostic tests with the reference standard of microscopic visualization of mites, eggs or fecal elements obtained from skin scrapings or biopsies were included for analysis. Data were extracted using standard criteria. RESULTS: History and examination of pruritic dermatoses failed to accurately diagnose scabies infection. Dermatoscopy by a trained practitioner has a positive likelihood ratio of 6.5 (95% CI 4.1 to 10.3) and a negative likelihood ratio of 0.1 (95% CI 0.06 to 0.2) for diagnosing scabies. The accuracy of other diagnostic tests could not be calculated from the data in the literature. CONCLUSIONS: In the face of such diagnostic inaccuracy, clinical judgment is still practical in diagnosing scabies. Two tests are used – the burrow ink test and handheld dermatoscopy. The burrow ink test is a simple, rapid, noninvasive test that can be used to screen a large number of patients. Handheld dermatoscopy is an accurate test, but requires special equipment and trained practitioners. Given the morbidity and costs of scabies infection, and that studies to date lack adequate internal and external validity, research to identify or develop accurate diagnostic tests for scabies infection is needed and justifiable. PMID:23205026

  7. A discriminant function model as an alternative method to spirometry for COPD screening in primary care settings in China.

    PubMed

    Cui, Jiangyu; Zhou, Yumin; Tian, Jia; Wang, Xinwang; Zheng, Jingping; Zhong, Nanshan; Ran, Pixin

    2012-12-01

COPD is often underdiagnosed in primary care settings where spirometry is unavailable. This study aimed to develop a simple, economical and applicable model for COPD screening in those settings. First we established a discriminant function model based on Bayes' rule by stepwise discriminant analysis, using data from 243 COPD patients and 112 non-COPD subjects from our COPD survey in urban and rural communities and local primary care settings in Guangdong Province, China. We then used this model to discriminate COPD in an additional 150 subjects (50 non-COPD and 100 COPD) who had been recruited by the same methods as those used to establish the model. All participants completed pre- and post-bronchodilator spirometry and questionnaires. COPD was diagnosed according to the Global Initiative for Chronic Obstructive Lung Disease criteria. The sensitivity and specificity of the discriminant function model were assessed. The established discriminant function model included nine variables: age, gender, smoking index, body mass index, occupational exposure, living environment, wheezing, cough and dyspnoea. The sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, accuracy and error rate of the function model for discriminating COPD were 89.00%, 82.00%, 4.94, 0.13, 86.66% and 13.34%, respectively. The accuracy and kappa value of the function model for predicting COPD stage were 70% and 0.61 (95% CI, 0.50 to 0.71). This discriminant function model may be used for COPD screening in primary care settings in China as an alternative to spirometry.

  8. Diagnostic accuracy of droplet digital PCR for detection of EGFR T790M mutation in circulating tumor DNA

    PubMed Central

    Tong, Xiang; Wang, Ye; Wang, Chengdi; Jin, Jing; Tian, Panwen; Li, Weimin

    2018-01-01

Objectives: Although different methods have been established to detect the epidermal growth factor receptor (EGFR) T790M mutation in circulating tumor DNA (ctDNA), a wide range of diagnostic accuracy values has been reported in previous studies. The aim of this meta-analysis was to provide pooled diagnostic accuracy measures for droplet digital PCR (ddPCR) in the diagnosis of the EGFR T790M mutation based on ctDNA. Materials and methods: A systematic review and meta-analysis were carried out based on resources from PubMed, Web of Science, Embase and the Cochrane Library up to October 11, 2017. Data were extracted to assess the pooled sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic OR (DOR), and area under the summary receiver-operating characteristic (SROC) curve. Results: Eleven of the 311 studies identified met the inclusion criteria. The sensitivity and specificity of ddPCR for the detection of the T790M mutation in ctDNA ranged from 0.0% to 100.0% and 63.2% to 100.0%, respectively. In the pooled analysis, ddPCR had a sensitivity of 70.1% (95% CI, 62.7%–76.7%), a specificity of 86.9% (95% CI, 80.6%–91.7%), a PLR of 3.67 (95% CI, 2.33–5.79), an NLR of 0.41 (95% CI, 0.32–0.55), and a DOR of 10.83 (95% CI, 5.86–20.03), with the area under the SROC curve being 0.82. Conclusion: ddPCR showed good performance for the detection of the EGFR T790M mutation in ctDNA. PMID:29844700

  9. Lifetime assessment by intermittent inspection under the mixture Weibull power law model with application to XLPE cables.

    PubMed

    Hirose, H

    1997-01-01

This paper proposes a new treatment for electrical insulation degradation. Some types of insulation that have been used under various circumstances are considered to degrade at various rates in accordance with their stress circumstances. The cross-linked polyethylene (XLPE) insulated cables inspected by major Japanese electric companies clearly indicate such phenomena. By assuming that the inspected specimen is sampled from one of the clustered groups, a mixed degradation model can be constructed. Since the degradation of insulation under common circumstances is considered to follow a Weibull distribution, a mixture model and a Weibull power law can be combined; this is called the mixture Weibull power law model. Applying maximum likelihood estimation of the newly proposed model to Japanese 22 and 33 kV insulation class cables, the cables are clustered into a certain number of groups using the AIC and the generalized likelihood ratio test method. The reliability of the cables at specified years is assessed.
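The core computational step in fitting such a mixed degradation model is maximizing the likelihood of a Weibull mixture. The sketch below fits a two-component Weibull mixture by direct minimization of the negative log-likelihood on synthetic data; it does not include the power-law dependence on stress or the AIC/likelihood-ratio clustering described in the paper.

```python
# Minimal sketch: maximum likelihood fit of a two-component Weibull mixture
# to synthetic failure data. The paper's model additionally couples the
# Weibull scale to stress via a power law; that part is not reproduced here.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

data = np.concatenate([
    weibull_min.rvs(1.5, scale=10.0, size=300, random_state=3),
    weibull_min.rvs(3.0, scale=30.0, size=200, random_state=4),
])

def neg_log_likelihood(params):
    w, k1, s1, k2, s2 = params
    pdf = (w * weibull_min.pdf(data, k1, scale=s1)
           + (1 - w) * weibull_min.pdf(data, k2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))          # guard against log(0)

res = minimize(neg_log_likelihood,
               x0=[0.5, 1.0, 5.0, 2.0, 20.0],
               bounds=[(0.01, 0.99), (0.1, 10), (0.1, 100), (0.1, 10), (0.1, 100)],
               method="L-BFGS-B")
print(res.x)   # mixing weight and the two (shape, scale) pairs
```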

  10. A simple, remote, video based breathing monitor.

    PubMed

    Regev, Nir; Wulich, Dov

    2017-07-01

Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist on the market, some with vital sign monitoring capabilities, but none are remote. This paper presents a simple, yet efficient, real-time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked using the well-known Lucas-Kanade algorithm on a frame-by-frame basis. A generalized likelihood ratio test is then utilized on each of the many interest points to detect which are moving in a harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum likelihood algorithm optimally estimates the breathing rate using all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments on two babies and three adults.
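One piece of that pipeline, single-sinusoid Pisarenko harmonic decomposition, can be illustrated on a synthetic motion trace: the frequency of the breathing sinusoid is recovered from the smallest-eigenvalue eigenvector of a 3x3 autocorrelation matrix. This sketch covers only that spectral step, not the optical flow tracking, GLRT detection or maximum likelihood fusion; the signal and frame rate are made up.

```python
# Minimal sketch: Pisarenko harmonic decomposition for a single real sinusoid,
# used here to estimate a breathing rate from a synthetic 1-D motion trace.
import numpy as np

fs = 30.0                                   # frames per second (illustrative)
t = np.arange(0, 60, 1 / fs)
breath_hz = 0.25                            # 15 breaths per minute
x = np.sin(2 * np.pi * breath_hz * t) + 0.3 * np.random.default_rng(4).normal(size=t.size)

def pisarenko_frequency(x, fs):
    x = x - x.mean()
    # biased-free sample autocorrelations r0, r1, r2 and the 3x3 Toeplitz matrix
    r = [np.dot(x[:x.size - k], x[k:]) / (x.size - k) for k in range(3)]
    R = np.array([[r[0], r[1], r[2]],
                  [r[1], r[0], r[1]],
                  [r[2], r[1], r[0]]])
    eigvals, eigvecs = np.linalg.eigh(R)
    a = eigvecs[:, 0]                       # eigenvector of the smallest eigenvalue
    roots = np.roots(a)                     # roots of a0*z^2 + a1*z + a2
    root = roots[np.argmax(roots.imag)]     # root in the upper half plane -> e^{j*omega}
    return abs(np.angle(root)) * fs / (2 * np.pi)

print("estimated rate:", 60 * pisarenko_frequency(x, fs), "breaths per minute")
```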

  11. Safety from Crime and Physical Activity among Older Adults: A Population-Based Study in Brazil

    PubMed Central

    Weber Corseuil, Maruí; Hallal, Pedro Curi; Xavier Corseuil, Herton; Jayce Ceola Schneider, Ione; d'Orsi, Eleonora

    2012-01-01

    Objective. To evaluate the association between safety from crime and physical activity among older adults. Methods. A population-based survey including 1,656 older adults (60+ years) took place in Florianopolis, Brazil, in 2009-2010. Commuting and leisure time physical activity were assessed through the long version of the International Physical Activity Questionnaire. Perception of safety from crime was assessed using the Neighbourhood Environment Walkability Scale. Results. Perceiving the neighbourhood as safe during the day was related to a 25% increased likelihood of being active in leisure time (95% CI 1.02–1.53); general perception of safety was also associated with a 25% increase in the likelihood of being active in leisure time (95% CI 1.01–1.54). Street lighting was related to higher levels of commuting physical activity (prevalence ratio: 1.89; 95% CI 1.28–2.80). Conclusions. Safety investments are essential for promoting physical activity among older adults in Brazil. PMID:22291723

  12. Using variable rate models to identify genes under selection in sequence pairs: their validity and limitations for EST sequences.

    PubMed

    Church, Sheri A; Livingstone, Kevin; Lai, Zhao; Kozik, Alexander; Knapp, Steven J; Michelmore, Richard W; Rieseberg, Loren H

    2007-02-01

Using likelihood-based variable selection models, we determined whether positive selection was acting on 523 EST sequence pairs from two lineages of sunflower and lettuce. Variable rate models are generally not used for comparisons of sequence pairs because of the limited information and the inaccuracy of estimates of specific substitution rates. However, previous studies have shown that the likelihood ratio test (LRT) is reliable for detecting positive selection, even with low numbers of sequences. These analyses identified 56 genes that show a signature of selection, of which 75% were not identified by simpler models that average selection across codons. Subsequent mapping studies in sunflower show that four of the five positively selected genes identified by these methods mapped to domestication QTLs. We discuss the validity and limitations of using variable rate models for comparisons of sequence pairs, as well as the limitations of using ESTs for the identification of positively selected genes.
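The LRT underlying such model comparisons reduces to a simple calculation once the two models have been fitted: twice the difference in maximized log-likelihoods is referred to a chi-square distribution. The sketch below shows only that final step; the log-likelihoods and degrees of freedom are placeholders, not values from the paper.

```python
# Minimal sketch of the likelihood ratio test used to compare a null codon
# model with one allowing positive selection. The log-likelihoods and degrees
# of freedom are placeholders; in practice they come from fitting both models
# to the aligned sequence pair.
from scipy.stats import chi2

loglik_null = -2310.4   # model averaging selection across codons (placeholder)
loglik_alt = -2304.1    # variable-rate model allowing positive selection (placeholder)
df = 2                  # difference in number of free parameters (placeholder)

lrt_stat = 2 * (loglik_alt - loglik_null)
p_value = chi2.sf(lrt_stat, df)
print(f"LRT statistic = {lrt_stat:.2f}, p = {p_value:.4f}")
```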

  13. Expansion of health insurance in Moldova and associated improvements in access and reductions in direct payments

    PubMed Central

    Hone, Thomas; Habicht, Jarno; Domente, Silviu; Atun, Rifat

    2016-01-01

Background: Moldova is the poorest country in Europe. Economic constraints mean that Moldova faces challenges in protecting individuals from excessive costs, improving population health and securing health system sustainability. The Moldovan government has introduced a state benefit package and expanded health insurance coverage to reduce the burden of health care costs for citizens. This study examines the effects of expanded health insurance by examining factors associated with health insurance coverage, the likelihood of incurring out-of-pocket (OOP) payments for medicines or services, and the likelihood of forgoing health care when unwell. Methods: Using publicly available databases and the annual Moldova Household Budgetary Survey, we examine trends in health system financing, health care utilization, health insurance coverage, and costs incurred by individuals for the years 2006–2012. We perform logistic regression to assess the likelihood of having health insurance, incurring a cost for health care, and forgoing health care when ill, controlling for socio-economic and demographic covariates. Findings: Private expenditure accounted for 55.5% of total health expenditure in 2012, and 83.2% of private health expenditure was OOP payments, especially for medicines. Health care utilization was in line with EU averages, at 6.93 outpatient visits per person. Being uninsured was associated with being aged 25–49 years, self-employed, an unpaid family worker, or unemployed, although we find a lower likelihood of being uninsured for some of these groups over time. Over time, the likelihood of OOP payments for medicines increased (odds ratio OR = 1.422 in 2012 compared with 2006) but fell for health care services (OR = 0.873 in 2012 compared with 2006). Lack of insurance, older age and male sex were associated with an increased likelihood of forgoing health care when sick, and we found the likelihood of forgoing health care to be increasing over time (OR = 1.295 in 2012 compared with 2009). Conclusions: Moldova has achieved improvements in health insurance coverage with reductions in OOP payments for services, which are modest but are eroded by the increasing likelihood of OOP payments for medicines. Insurance coverage was an important determinant of the health care costs incurred by patients and of patients forgoing health care. Improvements notwithstanding, there is an unfinished agenda of attaining universal health coverage in Moldova to protect individuals from health care costs. PMID:27909581

  14. Usefulness of the 6-minute walk test as a screening test for pulmonary arterial enlargement in COPD

    PubMed Central

    Oki, Yutaro; Kaneko, Masahiro; Fujimoto, Yukari; Sakai, Hideki; Misu, Shogo; Mitani, Yuji; Yamaguchi, Takumi; Yasuda, Hisafumi; Ishikawa, Akira

    2016-01-01

    Purpose Pulmonary hypertension and exercise-induced oxygen desaturation (EID) influence acute exacerbation of COPD. Computed tomography (CT)-detected pulmonary artery (PA) enlargement is independently associated with acute COPD exacerbations. Associations between PA to aorta (PA:A) ratio and EID in patients with COPD have not been reported. We hypothesized that the PA:A ratio correlated with EID and that results of the 6-minute walk test (6MWT) would be useful for predicting the risk associated with PA:A >1. Patients and methods We retrospectively measured lung function, 6MWT, emphysema area, and PA enlargement on CT in 64 patients with COPD. The patients were classified into groups with PA:A ≤1 and >1. Receiver-operating characteristic curves were used to determine the threshold values with the best cutoff points to predict patients with PA:A >1. Results The PA:A >1 group had lower forced expiratory volume in 1 second (FEV1), forced vital capacity (FVC), FEV1:FVC ratio, diffusion capacity of lung carbon monoxide, 6MW distance, and baseline peripheral oxygen saturation (SpO2), lowest SpO2, highest modified Borg scale results, percentage low-attenuation area, and history of acute COPD exacerbations ≤1 year, and worse BODE (Body mass index, airflow Obstruction, Dyspnea, and Exercise) index results (P<0.05). Predicted PA:A >1 was determined for SpO2 during 6MWT (best cutoff point 89%, area under the curve 0.94, 95% confidence interval 0.88–1). SpO2 <90% during 6MWT showed a sensitivity of 93.1, specificity of 94.3, positive predictive value of 93.1, negative predictive value of 94.3, positive likelihood ratio of 16.2, and negative likelihood ratio of 0.07. Conclusion Lowest SpO2 during 6MWT may predict CT-measured PA:A, and lowest SpO2 <89% during 6MWT is excellent for detecting pulmonary hypertension in COPD. PMID:27920514

  15. The diagnostic value of polymerase chain reaction for Mycobacterium tuberculosis to distinguish intestinal tuberculosis from crohn's disease: A meta-analysis.

    PubMed

    Jin, Ting; Fei, Baoying; Zhang, Yu; He, Xujun

    2017-01-01

    Intestinal tuberculosis (ITB) and Crohn's disease (CD) are important differential diagnoses that can be difficult to distinguish. Polymerase chain reaction (PCR) for Mycobacterium tuberculosis (MTB) is an efficient and promising tool. This meta-analysis was performed to systematically and objectively assess the potential diagnostic accuracy and clinical value of PCR for MTB in distinguishing ITB from CD. We searched PubMed, Embase, Web of Science, Science Direct, and the Cochrane Library for eligible studies, and nine articles with 12 groups of data were identified. The included studies were subjected to quality assessment using the revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. The summary estimates were as follows: sensitivity 0.47 (95% CI: 0.42-0.51); specificity 0.95 (95% CI: 0.93-0.97); the positive likelihood ratio (PLR) 10.68 (95% CI: 6.98-16.35); the negative likelihood ratio (NLR) 0.49 (95% CI: 0.33-0.71); and diagnostic odds ratio (DOR) 21.92 (95% CI: 13.17-36.48). The area under the curve (AUC) was 0.9311, with a Q* value of 0.8664. Heterogeneity was found in the NLR. The heterogeneity of the studies was evaluated by meta-regression analysis and subgroup analysis. The current evidence suggests that PCR for MTB is a promising and highly specific diagnostic method to distinguish ITB from CD. However, physicians should also keep in mind that negative results cannot exclude ITB for its low sensitivity. Additional prospective studies are needed to further evaluate the diagnostic accuracy of PCR.

  16. Quantitative Shear Wave Velocity Measurement on Acoustic Radiation Force Impulse Elastography for Differential Diagnosis between Benign and Malignant Thyroid Nodules: A Meta-analysis.

    PubMed

    Liu, Bo-Ji; Li, Dan-Dan; Xu, Hui-Xiong; Guo, Le-Hang; Zhang, Yi-Feng; Xu, Jun-Mei; Liu, Chang; Liu, Lin-Na; Li, Xiao-Long; Xu, Xiao-Hong; Qu, Shen; Xing, Mingzhao

    2015-12-01

    The aim of this study was to evaluate the diagnostic performance of quantitative shear wave velocity (SWV) measurement on acoustic radiation force impulse (ARFI) elastography for differentiation between benign and malignant thyroid nodules using meta-analysis. The databases of PubMed and the Web of Science were searched. Studies published in English on assessment of the sensitivity and specificity of ARFI elastography for the differentiation of thyroid nodules were collected. The quantitative measurement of ARFI elastography was evaluated by SWV (m/s). Meta-Disc Version 1.4 software was used to describe and calculate the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio and summary receiver operating characteristic curves. We analyzed a total of 13 studies, which included 1,854 thyroid nodules (including 1,339 benign nodules and 515 malignant nodules) from 1,641 patients. The summary sensitivity and specificity for differential diagnosis between benign and malignant thyroid nodules by SWV were 0.81 (95% confidence interval [CI]: 0.77-0.84) and 0.84 (95% CI: 0.81-0.86), respectively. The pooled positive and negative likelihood ratios were 5.21 (95% CI: 3.56-7.62) and 0.23 (95% CI: 0.17-0.32), respectively. The pooled diagnostic odds ratio was 27.53 (95% CI: 14.58-52.01), and the area under the summary receiver operating characteristic curve was 0.91 (Q* = 0.84). In conclusion, SWV measurement on ARFI elastography has high sensitivity and specificity for differential diagnosis between benign and malignant thyroid nodules and can be used in combination with conventional ultrasound. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  17. Prediction of posttraumatic stress disorder symptomatology after childbirth - A Croatian longitudinal study.

    PubMed

    Srkalović Imširagić, Azijada; Begić, Dražen; Šimičević, Livija; Bajić, Žarko

    2017-02-01

    Following childbirth, a vast number of women experience some degree of mood swings, while some experience symptoms of postpartum posttraumatic stress disorder. Using a biopsychosocial model, the primary aim of this study was to identify predictors of posttraumatic stress disorder and its symptomatology following childbirth. This observational, longitudinal study included 372 postpartum women. In order to explore biopsychosocial predictors, participants completed several questionnaires 3-5 days after childbirth: the Impact of Events Scale Revised, the Big Five Inventory, The Edinburgh Postnatal Depression Scale, breastfeeding practice and social and demographic factors. Six to nine weeks after childbirth, participants re-completed the questionnaires regarding psychiatric symptomatology and breastfeeding practice. Using a multivariate level of analysis, the predictors that increased the likelihood of postpartum posttraumatic stress disorder symptomatology at the first study phase were: emergency caesarean section (odds ratio 2.48; confidence interval 1.13-5.43) and neuroticism personality trait (odds ratio 1.12; confidence interval 1.05-1.20). The predictor that increased the likelihood of posttraumatic stress disorder symptomatology at the second study phase was the baseline Impact of Events Scale Revised score (odds ratio 12.55; confidence interval 4.06-38.81). Predictors that decreased the likelihood of symptomatology at the second study phase were life in a nuclear family (odds ratio 0.27; confidence interval 0.09-0.77) and life in a city (odds ratio 0.29; confidence interval 0.09-0.94). Biopsychosocial theory is applicable to postpartum psychiatric disorders. In addition to screening for depression amongst postpartum women, there is a need to include other postpartum psychiatric symptomatology screenings in routine practice. Copyright © 2016 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.

  18. Utility of shear wave elastography to detect papillary thyroid carcinoma in thyroid nodules: efficacy of the standard deviation elasticity.

    PubMed

    Kim, Hye Jeong; Kwak, Mi Kyung; Choi, In Ho; Jin, So-Young; Park, Hyeong Kyu; Byun, Dong Won; Suh, Kyoil; Yoo, Myung Hi

    2018-02-23

    The aim of this study was to address the role of the elasticity index as a possible predictive marker for detecting papillary thyroid carcinoma (PTC) and quantitatively assess shear wave elastography (SWE) as a tool for differentiating PTC from benign thyroid nodules. One hundred and nineteen patients with thyroid nodules undergoing SWE before ultrasound-guided fine needle aspiration and core needle biopsy were analyzed. The mean (EMean), minimum (EMin), maximum (EMax), and standard deviation (ESD) of SWE elasticity indices were measured. Among 105 nodules, 14 were PTC and 91 were benign. The EMean, EMin, and EMax values were significantly higher in PTCs than benign nodules (EMean 37.4 in PTC vs. 23.7 in benign nodules, p = 0.005; EMin 27.9 vs. 17.8, p = 0.034; EMax 46.7 vs. 31.5, p < 0.001). The EMean, EMin, and EMax were significantly associated with PTC with diagnostic odds ratios varying from 6.74 to 9.91, high specificities (86.4%, 86.4%, and 88.1%, respectively), and positive likelihood ratios (4.21, 3.69, and 4.82, respectively). The ESD values were significantly higher in PTC than in benign nodules (6.3 vs. 2.6, p < 0.001). ESD had the highest specificity (96.6%) when applied with a cut-off value of 6.5 kPa. It had a positive likelihood ratio of 14.75 and a diagnostic odds ratio of 28.50. The shear elasticity index of ESD, with higher likelihood ratios for PTC, will probably identify nodules that have a high potential for malignancy. It may help to identify and select malignant nodules, while reducing unnecessary fine needle aspiration and core needle biopsies of benign nodules.

  19. A comparison of cosegregation analysis methods for the clinical setting.

    PubMed

    Rañola, John Michael O; Liu, Quanhui; Rosenthal, Elisabeth A; Shirts, Brian H

    2018-04-01

Quantitative cosegregation analysis can help evaluate the pathogenicity of genetic variants. However, genetics professionals without statistical training often use simple methods, reporting only qualitative findings. We evaluate the potential utility of quantitative cosegregation in the clinical setting by comparing three methods. One thousand pedigrees each were simulated for benign and pathogenic variants in BRCA1 and MLH1 using United States historical demographic data to produce pedigrees similar to those seen in the clinic. These pedigrees were analyzed using two robust methods, full likelihood Bayes factors (FLB) and cosegregation likelihood ratios (CSLR), and a simpler method, counting meioses. Both FLB and CSLR outperform counting meioses when dealing with pathogenic variants, though counting meioses is not far behind. For benign variants, FLB and CSLR greatly outperform counting meioses, which is unable to generate evidence for benign variants. Comparing FLB and CSLR, we find that the two methods perform similarly, indicating that quantitative results from either of these methods could be combined in multifactorial calculations. Combining quantitative information will be important, as isolated use of cosegregation in single families will yield classification for less than 1% of variants. To encourage wider use of robust cosegregation analysis, we present a website (http://www.analyze.myvariant.org) which implements the CSLR, FLB, and counting meioses methods for ATM, BRCA1, BRCA2, CHEK2, MEN1, MLH1, MSH2, MSH6, and PMS2. We also present an R package, CoSeg, which performs the CSLR analysis on any gene with user-supplied parameters. Future variant classification guidelines should allow nuanced inclusion of cosegregation evidence against pathogenicity.
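For intuition only, the crudest version of the "counting meioses" idea can be written as a one-line likelihood ratio: if each informative meiosis has probability 1/2 of showing cosegregation by chance, then m cosegregating meioses give an LR of roughly 2^m. This is an illustrative simplification under strong assumptions (full penetrance, no phenocopies) and is not the FLB or CSLR calculation evaluated in the paper.

```python
# Illustrative simplification only: under the crude counting-meioses view,
# each informative meiosis in which the variant cosegregates with disease has
# probability 1/2 under the chance (benign) hypothesis, so m such meioses give
# a likelihood ratio of roughly 2**m. The FLB and CSLR methods in the paper
# are full-likelihood calculations and do not reduce to this formula.

def counting_meioses_lr(informative_meioses):
    return 2 ** informative_meioses

for m in range(1, 6):
    print(m, "informative meioses -> LR ≈", counting_meioses_lr(m))
```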

  20. Orthogonal series generalized likelihood ratio test for failure detection and isolation. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Hall, Steven R.; Walker, Bruce K.

    1990-01-01

    A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.

  1. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  2. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  3. New method to incorporate Type B uncertainty into least-squares procedures in radionuclide metrology.

    PubMed

    Han, Jubong; Lee, K B; Lee, Jong-Man; Park, Tae Soon; Oh, J S; Oh, Pil-Jei

    2016-03-01

We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (probability density functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, which are treated as nuisance parameters. We use the extended likelihood function to make point and interval estimates of parameters in basically the same way as in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study of a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained using our procedure with those from conventional methods. Copyright © 2015. Published by Elsevier Ltd.
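The general idea can be sketched for a linear regression with a common additive correction factor: the Gaussian (least-squares) likelihood is multiplied by a PDF encoding the Type B knowledge of that factor, and the factor is then profiled out before estimating the slope and intercept. This is an illustrative toy formulation on synthetic data, not the authors' exact procedure; all names and values are made up.

```python
# Minimal sketch: extend the least-squares (Gaussian) likelihood with a PDF
# for a common additive correction factor c (a Type B uncertainty), then
# profile c out before estimating slope and intercept. Illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 20)
sigma_a = 0.2                       # Type A (statistical) standard uncertainty
sigma_b = 0.5                       # Type B standard uncertainty on a common offset c
y = 1.5 * x + 3.0 + rng.normal(0, sigma_a, x.size) + rng.normal(0, sigma_b)

def neg_log_extended_likelihood(slope, intercept, c):
    resid = y - (slope * x + intercept + c)
    # data likelihood (Type A) times the PDF describing the Type B knowledge of c
    return 0.5 * np.sum(resid**2) / sigma_a**2 + 0.5 * c**2 / sigma_b**2

def profile_nll(slope_intercept):
    # profile out the nuisance parameter c for fixed slope and intercept
    res = minimize(lambda c: neg_log_extended_likelihood(*slope_intercept, c[0]), x0=[0.0])
    return res.fun

fit = minimize(profile_nll, x0=[1.0, 0.0])
print("slope, intercept:", fit.x)
```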

  4. Bayes and the Law

    PubMed Central

    Fenton, Norman; Neil, Martin; Berger, Daniel

    2016-01-01

    Although the last forty years has seen considerable growth in the use of statistics in legal proceedings, it is primarily classical statistical methods rather than Bayesian methods that have been used. Yet the Bayesian approach avoids many of the problems of classical statistics and is also well suited to a broader range of problems. This paper reviews the potential and actual use of Bayes in the law and explains the main reasons for its lack of impact on legal practice. These include misconceptions by the legal community about Bayes’ theorem, over-reliance on the use of the likelihood ratio and the lack of adoption of modern computational methods. We argue that Bayesian Networks (BNs), which automatically produce the necessary Bayesian calculations, provide an opportunity to address most concerns about using Bayes in the law. PMID:27398389

  5. Bayes and the Law.

    PubMed

    Fenton, Norman; Neil, Martin; Berger, Daniel

    2016-06-01

    Although the last forty years has seen considerable growth in the use of statistics in legal proceedings, it is primarily classical statistical methods rather than Bayesian methods that have been used. Yet the Bayesian approach avoids many of the problems of classical statistics and is also well suited to a broader range of problems. This paper reviews the potential and actual use of Bayes in the law and explains the main reasons for its lack of impact on legal practice. These include misconceptions by the legal community about Bayes' theorem, over-reliance on the use of the likelihood ratio and the lack of adoption of modern computational methods. We argue that Bayesian Networks (BNs), which automatically produce the necessary Bayesian calculations, provide an opportunity to address most concerns about using Bayes in the law.

  6. Prevalence of Abuse Among Young Children with Rib Fractures: A Systematic Review

    PubMed Central

    Paine, Christine Weirich; Fakeye, Oludolapo; Christian, Cindy W.; Wood, Joanne N.

    2016-01-01

    Objectives We aimed to estimate the prevalence of abuse in young children presenting with rib fractures and to identify demographic, injury, and presentation-related characteristics that affect the probability that rib fractures are secondary to abuse. Methods We searched PubMed/MEDLINE and CINAHL databases for articles published in English between January 1, 1990 and June 30, 2014 on rib fracture etiology in children ≤ 5 years old. Two reviewers independently extracted predefined data elements and assigned quality ratings to included studies. Study-specific abuse prevalences and the sensitivities, specificities, and positive and negative likelihood ratios of patients’ demographic and clinical characteristics for abuse were calculated with 95% confidence intervals. Results Data for 1,396 children ≤ 48 months old with rib fractures were abstracted from 10 articles. Among infants < 12 months old, abuse prevalence ranged from 67% to 84%, whereas children 12-23 months old and 24-35 months old had study-specific abuse prevalences of 29% and 28% respectively. Age < 12 months was the only characteristic significantly associated with increased likelihood of abuse across multiple studies. Rib fracture location was not associated with likelihood of abuse. The retrospective design of the included studies and variations in ascertainment of cases, inclusion/exclusion criteria, and child abuse assessments prevented further meta-analysis. Conclusions Abuse is the most common cause of rib fractures in infants < 12 months old. Prospective studies with standardized methods are needed to improve accuracy in determining abuse prevalence among children with rib fractures and characteristics associated with abusive rib fractures. PMID:27749806

  7. Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.

    PubMed

    Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping

    2015-06-07

    Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in the oncology trials to reduce the number of patients placed on ineffective experimental therapies. Recently Koyama and Chen (2008) discussed how to conduct proper inference for such studies because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies when the actual second stage sample sizes differ from planned ones. We consider an alternative inference method based on likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the resulting estimate based on our method appears to have certain advantage in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverages. We also illustrated the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan. Reported P-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.

  8. 76 FR 18221 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-01

    ... Ratio Standard for a State's Individual Market; Use: Under section 2718 of the Public Health Service Act... data allows for the calculation of an issuer's medical loss ratio (MLR) by market (individual, small... whether market destabilization has a high likelihood of occurring. Form Number: CMS-10361 (OMB Control No...

  9. Bivariate categorical data analysis using normal linear conditional multinomial probability model.

    PubMed

    Sun, Bingrui; Sutradhar, Brajendra

    2015-02-10

Bivariate multinomial data such as the left and right eyes retinopathy status data are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities, which are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretation of the correlations between two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.

  10. A new LDPC decoding scheme for PDM-8QAM BICM coherent optical communication system

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Zhang, Wen-bo; Xi, Li-xia; Tang, Xian-feng; Zhang, Xiao-guang

    2015-11-01

A new log-likelihood ratio (LLR) message estimation method is proposed for a polarization-division multiplexing eight quadrature amplitude modulation (PDM-8QAM) bit-interleaved coded modulation (BICM) optical communication system. The formulation of the posterior probability is theoretically analyzed, and a way to reduce the pre-decoding bit error rate (BER) of the low density parity check (LDPC) decoder for PDM-8QAM constellations is presented. Simulation results show that it outperforms the traditional scheme: the new post-decoding BER is reduced to 50% of that of the traditional post-decoding algorithm.
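For context, the conventional per-bit LLRs fed to an LDPC decoder are computed from the symbol likelihoods of the constellation under AWGN, either exactly (log-sum-exp over symbols) or with the max-log approximation. The sketch below shows that conventional computation on an illustrative circular 8QAM constellation with an arbitrary 3-bit labelling; it is not the improved estimation method proposed in the paper.

```python
# Minimal sketch: per-bit log-likelihood ratios for a small QAM constellation
# under AWGN, in exact and max-log form. The constellation and bit mapping are
# illustrative; the paper proposes a different, improved LLR estimation.
import numpy as np

# Illustrative circular 8QAM constellation with a 3-bit labelling
points = np.array([np.exp(1j * 2 * np.pi * k / 8) for k in range(8)])
labels = np.array([[(k >> b) & 1 for b in range(3)] for k in range(8)])

def llr(received, noise_var, max_log=True):
    """LLR(bit b) = log P(b=0 | y) - log P(b=1 | y), equiprobable symbols."""
    metrics = -np.abs(received - points) ** 2 / noise_var   # per-symbol log-likelihood
    out = np.empty(3)
    for b in range(3):
        m0 = metrics[labels[:, b] == 0]
        m1 = metrics[labels[:, b] == 1]
        if max_log:
            out[b] = m0.max() - m1.max()                    # max-log approximation
        else:
            out[b] = np.log(np.exp(m0).sum()) - np.log(np.exp(m1).sum())
    return out

y = points[3] + (0.1 + 0.05j)        # noisy received sample near symbol 3
print(llr(y, noise_var=0.05))
```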

  11. Evaluating forensic DNA mixtures with contributors of different structured ethnic origins: a computer software.

    PubMed

    Hu, Yue-Qing; Fung, Wing K

    2003-08-01

    The effect of a structured population on the likelihood ratio of a DNA mixture has been studied by the current authors and others. In practice, contributors of a DNA mixture may belong to different ethnic/racial origins, a situation especially common in multi-racial countries such as the USA and Singapore. We have developed a computer software which is available on the web for evaluating DNA mixtures in multi-structured populations. The software can deal with various DNA mixture problems that cannot be handled by the methods given in a recent article of Fung and Hu.

  12. International interlaboratory study comparing single organism 16S rRNA gene sequencing data: Beyond consensus sequence comparisons

    PubMed Central

    Olson, Nathan D.; Lund, Steven P.; Zook, Justin M.; Rojas-Cornejo, Fabiola; Beck, Brian; Foy, Carole; Huggett, Jim; Whale, Alexandra S.; Sui, Zhiwei; Baoutina, Anna; Dobeson, Michael; Partis, Lina; Morrow, Jayne B.

    2015-01-01

    This study presents the results from an interlaboratory sequencing study for which we developed a novel high-resolution method for comparing data from different sequencing platforms for a multi-copy, paralogous gene. The combination of PCR amplification and 16S ribosomal RNA gene (16S rRNA) sequencing has revolutionized bacteriology by enabling rapid identification, frequently without the need for culture. To assess variability between laboratories in sequencing 16S rRNA, six laboratories sequenced the gene encoding the 16S rRNA from Escherichia coli O157:H7 strain EDL933 and Listeria monocytogenes serovar 4b strain NCTC11994. Participants performed sequencing methods and protocols available in their laboratories: Sanger sequencing, Roche 454 pyrosequencing®, or Ion Torrent PGM®. The sequencing data were evaluated on three levels: (1) identity of biologically conserved position, (2) ratio of 16S rRNA gene copies featuring identified variants, and (3) the collection of variant combinations in a set of 16S rRNA gene copies. The same set of biologically conserved positions was identified for each sequencing method. Analytical methods using Bayesian and maximum likelihood statistics were developed to estimate variant copy ratios, which describe the ratio of nucleotides at each identified biologically variable position, as well as the likely set of variant combinations present in 16S rRNA gene copies. Our results indicate that estimated variant copy ratios at biologically variable positions were only reproducible for high throughput sequencing methods. Furthermore, the likely variant combination set was only reproducible with increased sequencing depth and longer read lengths. We also demonstrate novel methods for evaluating variable positions when comparing multi-copy gene sequence data from multiple laboratories generated using multiple sequencing technologies. PMID:27077030

  13. Likelihood of home death associated with local rates of home birth: influence of local area healthcare preferences on site of death.

    PubMed

    Silveira, Maria J; Copeland, Laurel A; Feudtner, Chris

    2006-07-01

We tested whether local cultural and social values regarding the use of health care are associated with the likelihood of home death, using variation in local rates of home births as a proxy for geographic variation in these values. For each of 351,110 adult decedents in Washington state who died from 1989 through 1998, we calculated the home birth rate in each zip code during the year of death and then used multivariate regression modeling to estimate the relation between the likelihood of home death and the local rate of home births. Individuals residing in local areas with higher home birth rates had greater adjusted likelihood of dying at home (odds ratio [OR]=1.04 for each percentage point increase in home birth rate; 95% confidence interval [CI] = 1.03, 1.05). Moreover, the likelihood of dying at home increased with local wealth (OR=1.04 per $10000; 95% CI=1.02, 1.06) but decreased with local hospital bed availability (OR=0.96 per 1000 beds; 95% CI=0.95, 0.97). The likelihood of home death is associated with local rates of home births, suggesting the influence of health care use preferences.

  14. Endoscopic ultrasound-guided fine needle core biopsy for the diagnosis of pancreatic malignant lesions: a systematic review and Meta-Analysis

    PubMed Central

    Yang, Yongtao; Li, Lianyong; Qu, Changmin; Liang, Shuwen; Zeng, Bolun; Luo, Zhiwen

    2016-01-01

    Endoscopic ultrasound-guided fine needle core biopsy (EUS-FNB) has been used as an effective method of diagnosing pancreatic malignant lesions. It has the advantage of providing well preserved tissue for histologic grading and subsequent molecular biological analysis. In order to estimate the diagnostic accuracy of EUS-FNB for pancreatic malignant lesions, studies assessing EUS-FNB to diagnose solid pancreatic masses were selected via Medline. Sixteen articles published between 2005 and 2015, covering 828 patients, met the inclusion criteria. The summary estimates for EUS-FNB differentiating malignant from benign solid pancreatic masses were: sensitivity 0.84 (95% confidence interval (CI), 0.82–0.87); specificity 0.98 (95% CI, 0.93–1.00); positive likelihood ratio 8.0 (95% CI 4.5–14.4); negative likelihood ratio 0.17 (95% CI 0.10–0.26); and DOR 64 (95% CI 30.4–134.8). The area under the sROC curve was 0.96. Subgroup analysis did not identify other factors that could substantially affect the diagnostic accuracy, such as the study design, location of study, number of centers, location of lesion, whether or not a cytopathologist was present, and so on. EUS-FNB is a reliable diagnostic tool for solid pancreatic masses and should be especially considered for pathology where histologic morphology is preferred for diagnosis. PMID:26960914

  15. Comparison between presepsin and procalcitonin in early diagnosis of neonatal sepsis.

    PubMed

    Iskandar, Agustin; Arthamin, Maimun Z; Indriana, Kristin; Anshory, Muhammad; Hur, Mina; Di Somma, Salvatore

    2018-05-09

    Neonatal sepsis remains worldwide one of the leading causes of morbidity and mortality in both term and preterm infants. Lower mortality rates are related to timely diagnostic evaluation and prompt initiation of empiric antibiotic therapy. Blood culture, as gold standard examination for sepsis, has several limitations for early diagnosis, so that sepsis biomarkers could play an important role in this regard. This study aimed to compare the value of the two biomarkers presepsin and procalcitonin in early diagnosis of neonatal sepsis. This was a prospective cross-sectional study performed in Saiful Anwar General Hospital, Malang, Indonesia, in 51 neonates that fulfilled the criteria of systemic inflammatory response syndrome (SIRS), with blood culture as diagnostic gold standard for sepsis. At receiver operating characteristic (ROC) curve analyses, using a presepsin cutoff of 706.5 pg/mL, the obtained diagnostic indices were: sensitivity = 85.7%, specificity = 68.8%, positive predictive value = 85.7%, negative predictive value = 68.8%, positive likelihood ratio = 2.75, negative likelihood ratio = 0.21, and accuracy = 80.4%. On the other hand, with a procalcitonin cutoff value of 161.33 pg/mL, the obtained diagnostic indices were: sensitivity = 68.6%, specificity = 62.5%, positive predictive value = 80%, negative predictive value = 47.6%, positive likelihood ratio = 1.83, negative likelihood ratio = 0.5, and accuracy = 66.7%. In early diagnosis of neonatal sepsis, compared with procalcitonin, presepsin seems to provide better early diagnostic value, with consequent possible faster therapeutic decision making and possible positive impact on outcome of neonates.

  16. Sensitivity, predictive values, pretest-posttest probabilities, and likelihood ratios of presurgery clinical diagnosis of nonmelanoma skin cancers.

    PubMed

    Ermertcan, Aylin Türel; Oztürk, Ferdi; Gençoğlan, Gülsüm; Eskiizmir, Görkem; Temiz, Peyker; Horasan, Gönül Dinç

    2011-03-01

    The precision of clinical diagnosis of skin tumors is not commonly measured and, therefore, very little is known about the diagnostic ability of clinicians. This study aimed to compare clinical and histopathologic diagnoses of nonmelanoma skin cancers with regard to sensitivity, predictive values, pretest-posttest probabilities, and likelihood ratios. Two hundred nineteen patients with 241 nonmelanoma skin cancers were enrolled in this study. Of these patients, 49.4% were female and 50.6% were male. The mean age ± standard deviation (SD) was 63.66 ± 16.44 years for the female patients and 64.77 ± 14.88 years for the male patients. The mean duration of the lesions was 20.90 ± 32.95 months. One hundred forty-eight (61.5%) of the lesions were diagnosed as basal cell carcinoma (BCC) and 93 (38.5%) were diagnosed as squamous cell carcinoma (SCC) histopathologically. Sensitivity, positive predictive value, and posttest probability were calculated as 75.96%, 87.77%, and 87.78% for BCC and 70.37%, 37.25%, and 37.20% for SCC, respectively. The correlation between clinical and histopathologic diagnoses was found to be higher in BCC. Knowledge of sensitivity, predictive values, likelihood ratios, and posttest probabilities may have implications for the management of skin cancers. To prevent unnecessary surgeries and achieve high diagnostic accuracies, multidisciplinary approaches are recommended.
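
    Posttest probabilities such as those reported above follow from Bayes' theorem in odds form: the pretest odds are multiplied by the likelihood ratio and converted back to a probability. A minimal sketch with illustrative numbers (not the study's data):

    ```python
    def posttest_probability(pretest_prob, likelihood_ratio):
        """Update a pretest probability with a likelihood ratio (Bayes' theorem, odds form)."""
        pretest_odds = pretest_prob / (1 - pretest_prob)
        posttest_odds = pretest_odds * likelihood_ratio
        return posttest_odds / (1 + posttest_odds)

    # Illustrative values only: 60% pretest probability and a positive likelihood ratio of 4.8
    print(round(posttest_probability(0.60, 4.8), 3))   # ~0.878
    ```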

  17. Likelihood analysis of the chalcone synthase genes suggests the role of positive selection in morning glories (Ipomoea).

    PubMed

    Yang, Ji; Gu, Hongya; Yang, Ziheng

    2004-01-01

    Chalcone synthase (CHS) is a key enzyme in the biosynthesis of flavonoids, which are important for the pigmentation of flowers and act as attractants to pollinators. Genes encoding CHS constitute a multigene family in which the copy number varies among plant species and functional divergence appears to have occurred repeatedly. In morning glories (Ipomoea), five functional CHS genes (A-E) have been described. Phylogenetic analysis of the Ipomoea CHS gene family revealed that CHS A, B, and C experienced accelerated rates of amino acid substitution relative to CHS D and E. To examine whether the CHS genes of the morning glories underwent adaptive evolution, maximum-likelihood models of codon substitution were used to analyze the functional sequences in the Ipomoea CHS gene family. These models used the nonsynonymous/synonymous rate ratio (omega = dN/dS) as an indicator of selective pressure and allowed the ratio to vary among lineages or sites. Likelihood ratio tests suggested significant variation in selection pressure among amino acid sites, with a small proportion of them detected to be under positive selection along the branches ancestral to CHS A, B, and C. Positive Darwinian selection appears to have promoted the divergence of subfamily ABC and subfamily DE and is at least partially responsible for a rate increase following gene duplication.
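
    The model comparisons described rely on the standard likelihood ratio test for nested models: twice the difference in maximized log-likelihoods is referred to a chi-square distribution with degrees of freedom equal to the difference in free parameters. A generic sketch follows; the log-likelihood values are hypothetical, not those of the CHS analysis.

    ```python
    from scipy.stats import chi2

    def likelihood_ratio_test(loglik_null, loglik_alt, df):
        """LRT for nested models: 2 * (lnL_alt - lnL_null) ~ chi-square(df) under the null."""
        stat = 2.0 * (loglik_alt - loglik_null)
        return stat, chi2.sf(stat, df)

    # Hypothetical fits: null model with one omega for all sites vs. an alternative
    # adding a class of positively selected sites (two extra free parameters)
    print(likelihood_ratio_test(loglik_null=-4125.7, loglik_alt=-4112.3, df=2))
    ```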

  18. The Diagnostic Accuracy of Special Tests for Rotator Cuff Tear: The ROW Cohort Study

    PubMed Central

    Jain, Nitin B.; Luz, Jennifer; Higgins, Laurence D.; Dong, Yan; Warner, Jon J.P.; Matzkin, Elizabeth; Katz, Jeffrey N.

    2016-01-01

    Objective The aim was to assess diagnostic accuracy of 15 shoulder special tests for rotator cuff tears. Design From 02/2011 to 12/2012, 208 participants with shoulder pain were recruited in a cohort study. Results Among tests for supraspinatus tears, Jobe’s test had a sensitivity of 88% (95% CI=80% to 96%), specificity of 62% (95% CI=53% to 71%), and likelihood ratio of 2.30 (95% CI=1.79 to 2.95). The full can test had a sensitivity of 70% (95% CI=59% to 82%) and a specificity of 81% (95% CI=74% to 88%). Among tests for infraspinatus tears, external rotation lag signs at 0° had a specificity of 98% (95% CI=96% to 100%) and a likelihood ratio of 6.06 (95% CI=1.30 to 28.33), and the Hornblower’s sign had a specificity of 96% (95% CI=93% to 100%) and likelihood ratio of 4.81 (95% CI=1.60 to 14.49). Conclusions Jobe’s test and full can test had high sensitivity and specificity for supraspinatus tears and Hornblower’s sign performed well for infraspinatus tears. In general, special tests described for subscapularis tears have high specificity but low sensitivity. These data can be used in clinical practice to diagnose rotator cuff tears and may reduce the reliance on expensive imaging. PMID:27386812

  19. Statistical inference for template aging

    NASA Astrophysics Data System (ADS)

    Schuckers, Michael E.

    2006-04-01

    A change in classification error rates for a biometric device is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first of these is the use of a generalized linear model to determine if these error rates change linearly over time. This approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood ratio test methodology. The focus here is on statistical methods for estimation, not on the underlying cause of the change in error rates over time. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1. The results of these applications are discussed.
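
    The two approaches described can be combined in a few lines: fit a binomial generalized linear model of error counts against time and compare it to an intercept-only model with a likelihood ratio test. The sketch below uses synthetic error counts and assumes statsmodels is available; it is a schematic illustration, not the analysis of the NIST score set.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(0)

    # Synthetic example: false non-match counts per month with a slowly drifting error rate
    months = np.arange(24)
    trials = np.full(24, 500)
    true_rate = 0.02 + 0.0004 * months                  # assumed linear drift, for illustration
    errors = rng.binomial(trials, true_rate)

    endog = np.column_stack([errors, trials - errors])  # successes / failures per month
    X_time = sm.add_constant(months.astype(float))

    full = sm.GLM(endog, X_time, family=sm.families.Binomial()).fit()
    null = sm.GLM(endog, np.ones((24, 1)), family=sm.families.Binomial()).fit()

    # Likelihood ratio test of "error rate changes with time" vs. "constant error rate"
    lr_stat = 2 * (full.llf - null.llf)
    print("LR statistic:", round(lr_stat, 2), "p =", chi2.sf(lr_stat, df=1))
    ```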

  20. Prospective evaluation of the ability of clinical scoring systems and physician-determined likelihood of appendicitis to obviate the need for CT.

    PubMed

    Golden, Sean K; Harringa, John B; Pickhardt, Perry J; Ebinger, Alexander; Svenson, James E; Zhao, Ying-Qi; Li, Zhanhai; Westergaard, Ryan P; Ehlenbach, William J; Repplinger, Michael D

    2016-07-01

    To determine whether clinical scoring systems or physician gestalt can obviate the need for computed tomography (CT) in patients with possible appendicitis. Prospective, observational study of patients with abdominal pain at an academic emergency department (ED) from February 2012 to February 2014. Patients over 11 years old who had a CT ordered for possible appendicitis were eligible. All parameters needed to calculate the scores were recorded on standardised forms prior to CT. Physicians also estimated the likelihood of appendicitis. Test characteristics were calculated using clinical follow-up as the reference standard. Receiver operating characteristic curves were drawn. Of the 287 patients (mean age (range), 31 (12-88) years; 60% women), the prevalence of appendicitis was 33%. The Alvarado score had a positive likelihood ratio (LR(+)) (95% CI) of 2.2 (1.7 to 3) and a negative likelihood ratio (LR(-)) of 0.6 (0.4 to 0.7). The modified Alvarado score (MAS) had LR(+) 2.4 (1.6 to 3.4) and LR(-) 0.7 (0.6 to 0.8). The Raja Isteri Pengiran Anak Saleha Appendicitis (RIPASA) score had LR(+) 1.3 (1.1 to 1.5) and LR(-) 0.5 (0.4 to 0.8). Physician-determined likelihood of appendicitis had LR(+) 1.3 (1.2 to 1.5) and LR(-) 0.3 (0.2 to 0.6). When combined with physician likelihoods, LR(+) and LR(-) were 3.67 and 0.48 (Alvarado), 2.33 and 0.45 (RIPASA), and 3.87 and 0.47 (MAS). The area under the curve was highest for physician-determined likelihood (0.72), but was not statistically significantly different from the clinical scores (RIPASA 0.67, Alvarado 0.72, MAS 0.7). Clinical scoring systems performed as well as physician gestalt in predicting appendicitis. These scores do not obviate the need for imaging for possible appendicitis when a physician deems it necessary. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  1. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
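
    A minimal sketch of the thermodynamic integration (path sampling) idea is shown below for a toy conjugate normal model in which the marginal likelihood is available in closed form, so the estimate can be checked. Because the power posteriors are conjugate here, they can be sampled directly rather than by Markov chain Monte Carlo; all model settings are assumptions made only for the illustration.

    ```python
    import numpy as np
    from scipy.stats import norm, multivariate_normal

    rng = np.random.default_rng(1)

    # Toy model (assumed): y_i ~ N(theta, sigma^2), prior theta ~ N(mu0, tau0^2)
    sigma, mu0, tau0 = 1.0, 0.0, 2.0
    y = rng.normal(0.5, sigma, size=20)
    n, s = len(y), y.sum()

    def sample_power_posterior(beta, size):
        """Power posterior p(theta) ∝ p(y|theta)^beta * p(theta); conjugate, so sample directly."""
        prec = 1.0 / tau0**2 + beta * n / sigma**2
        mean = (mu0 / tau0**2 + beta * s / sigma**2) / prec
        return rng.normal(mean, np.sqrt(1.0 / prec), size=size)

    def loglik(theta):
        # log p(y | theta) evaluated for an array of theta draws
        return norm.logpdf(y[:, None], loc=theta, scale=sigma).sum(axis=0)

    # Thermodynamic integration: log p(y) = integral over beta in [0, 1] of E_beta[log p(y|theta)]
    betas = np.linspace(0.0, 1.0, 21)
    expectations = [loglik(sample_power_posterior(b, 20000)).mean() for b in betas]
    ti_estimate = np.trapz(expectations, betas)

    # Exact log marginal likelihood for this conjugate model, for comparison
    cov = sigma**2 * np.eye(n) + tau0**2 * np.ones((n, n))
    exact = multivariate_normal.logpdf(y, mean=np.full(n, mu0), cov=cov)

    print("TI estimate:", round(ti_estimate, 3), " exact:", round(exact, 3))
    ```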

  2. WE-AB-BRA-05: Fully Automatic Segmentation of Male Pelvic Organs On CT Without Manual Intervention

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Y; Lian, J; Chen, R

    Purpose: We aim to develop a fully automatic tool for accurate contouring of major male pelvic organs in CT images for radiotherapy without any manual initialization, yet still achieving superior performance to existing tools. Methods: A learning-based 3D deformable shape model was developed for automatic contouring. Specifically, we utilized a recent machine learning method, random forest, to jointly learn both an image regressor and a classifier for each organ. In particular, the image regressor is trained to predict the 3D displacement from each vertex of the 3D shape model towards the organ boundary based on the local image appearance around the location of this vertex. The predicted 3D displacements are then used to drive the 3D shape model towards the target organ. Once the shape model is deformed close to the target organ, it is further refined by an organ likelihood map estimated by the learned classifier. As the organ likelihood map provides a good guideline for the organ boundary, a precise contouring result could be achieved by deforming the 3D shape model locally to fit boundaries in the organ likelihood map. Results: We applied our method to 29 previously-treated prostate cancer patients, each with one planning CT scan. Compared with manually delineated pelvic organs, our method obtains overlap ratios of 85.2%±3.74% for the prostate, 94.9%±1.62% for the bladder, and 84.7%±1.97% for the rectum, respectively. Conclusion: This work demonstrated the feasibility of a novel machine-learning based approach for accurate and automatic contouring of major male pelvic organs. It shows the potential to replace the time-consuming and inconsistent manual contouring in the clinic. Also, compared with the existing works, our method is more accurate and also efficient since it does not require any manual intervention, such as manual landmark placement. Moreover, our method obtained very similar contouring results to those of the clinical experts. This project is partially supported by a grant from NCI 1R01CA140413.
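
    As a rough illustration of the appearance-to-displacement regression described above, the toy sketch below trains a random forest on synthetic one-dimensional intensity patches to predict the displacement from a shape-model vertex to an organ boundary. It is a schematic stand-in for the paper's 3D method, with all data simulated and scikit-learn assumed available.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Toy stand-in: each training sample is a small intensity patch around a vertex,
    # and the target is the (here 1-D) displacement from the vertex to the boundary.
    n_samples, patch_size = 2000, 15
    true_boundary = 0.0
    vertex_positions = rng.uniform(-5, 5, n_samples)
    displacements = true_boundary - vertex_positions           # what the regressor must predict

    # Synthetic "appearance": a smoothed step edge sampled around each vertex, plus noise
    offsets = np.linspace(-3, 3, patch_size)
    patches = 1 / (1 + np.exp(-(vertex_positions[:, None] + offsets - true_boundary)))
    patches += rng.normal(0, 0.05, patches.shape)

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(patches, displacements)

    test_vertex = 2.3
    test_patch = 1 / (1 + np.exp(-(test_vertex + offsets - true_boundary)))
    print("predicted displacement:", model.predict(test_patch[None, :])[0])   # close to -2.3
    ```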

  3. The effect of lossy image compression on image classification

    NASA Technical Reports Server (NTRS)

    Paola, Justin D.; Schowengerdt, Robert A.

    1995-01-01

    We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
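
    For reference, the maximum-likelihood classifier mentioned above assigns each pixel to the class whose estimated Gaussian class-conditional density gives the highest likelihood. A minimal two-band, two-class sketch with synthetic training sites (not the study's imagery) is shown below.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def train_ml_classifier(X, labels):
        """Per-class mean and covariance for a Gaussian maximum-likelihood classifier."""
        return {c: (X[labels == c].mean(axis=0), np.cov(X[labels == c], rowvar=False))
                for c in np.unique(labels)}

    def classify_ml(X, params):
        """Assign each pixel (row of X) to the class with the highest log-likelihood."""
        classes = sorted(params)
        loglik = np.column_stack([multivariate_normal.logpdf(X, mean=m, cov=c)
                                  for m, c in (params[k] for k in classes)])
        return np.array(classes)[np.argmax(loglik, axis=1)]

    # Tiny synthetic example with two spectral bands and two classes
    rng = np.random.default_rng(0)
    X_train = np.vstack([rng.normal([0.2, 0.4], 0.05, (100, 2)),
                         rng.normal([0.6, 0.3], 0.05, (100, 2))])
    y_train = np.repeat([0, 1], 100)
    params = train_ml_classifier(X_train, y_train)
    print(classify_ml(np.array([[0.22, 0.41], [0.58, 0.29]]), params))   # -> [0 1]
    ```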

  4. Approximated maximum likelihood estimation in multifractal random walks

    NASA Astrophysics Data System (ADS)

    Løvsletten, O.; Rypdal, M.

    2012-04-01

    We present an approximated maximum likelihood method for the multifractal random walk processes of [E. Bacry et al., Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices.

  5. What influences the choice of assessment methods in health technology assessments? Statistical analysis of international health technology assessments from 1989 to 2002.

    PubMed

    Draborg, Eva; Andersen, Christian Kronborg

    2006-01-01

    Health technology assessment (HTA) has been used as input in decision making worldwide for more than 25 years. However, no uniform definition of HTA or agreement on assessment methods exists, leaving open the question of what influences the choice of assessment methods in HTAs. The objective of this study is to analyze statistically a possible relationship between methods of assessment used in practical HTAs, type of assessed technology, type of assessors, and year of publication. A sample of 433 HTAs published by eleven leading institutions or agencies in nine countries was reviewed and analyzed by multiple logistic regression. The study shows that outsourcing of HTA reports to external partners is associated with a higher likelihood of using assessment methods, such as meta-analysis, surveys, economic evaluations, and randomized controlled trials; and with a lower likelihood of using assessment methods, such as literature reviews and "other methods". The year of publication was statistically related to the inclusion of economic evaluations, with a decreasing likelihood over the period studied. The type of assessed technology was related to the use of economic evaluations, surveys, and "other methods", with a lower likelihood of economic evaluations and "other methods" when pharmaceuticals were the assessed technology. During the period from 1989 to 2002, no major developments in assessment methods used in practical HTAs were shown statistically in a sample of 433 HTAs worldwide. Outsourcing to external assessors has a statistically significant influence on choice of assessment methods.

  6. Clinical accuracy of tympanic thermometer and noncontact infrared skin thermometer in pediatric practice: an alternative for axillary digital thermometer.

    PubMed

    Apa, Hurşit; Gözmen, Salih; Bayram, Nuri; Çatkoğlu, Asl; Devrim, Fatma; Karaarslan, Utku; Günay, İlker; Ünal, Nurettin; Devrim, İlker

    2013-09-01

    The aim of this study was to compare the body temperature measurements of infrared tympanic and forehead noncontact thermometers with the axillary digital thermometer. Fifty randomly selected pediatric patients who were hospitalized in the Pediatric Infectious Disease Unit of Dr Behcet Uz Children's Training and Research Hospital between March 2012 and September 2012 were included in the study. Body temperature measurements were performed using an axillary thermometer (Microlife MT 3001), a tympanic thermometer (Microlife Ear Thermometer IR 100), and a noncontact thermometer (ThermoFlash LX-26). Fifty patients participated in this study. We performed 1639 temperature readings with each method. The average difference between the mean (SD) of both axillary and tympanic temperatures was -0.20°C (0.61°C) (95% confidence interval, -1.41°C to 1.00°C). The average difference between the mean (SD) of both axillary and forehead temperatures was -0.38°C (0.55°C) (95% confidence interval, -1.47°C to 0.70°C). The Bland-Altman plot showed that most of the data points were tightly clustered around the zero line of the difference between the 2 temperature readings. With the use of the axillary method as the criterion standard, positive likelihood ratios were 17.9 and 16.5 and negative likelihood ratios were 0.2 and 0.4 for tympanic and forehead measurements, respectively. The results demonstrated that the infrared tympanic thermometer could be a good option in the measurement of fever in the pediatric population. The noncontact infrared thermometer is very useful for the screening of fever in the pediatric population, but it must be used with caution because it has a high value of bias.

  7. A Comparison of the Cheater Detection and the Unrelated Question Models: A Randomized Response Survey on Physical and Cognitive Doping in Recreational Triathletes

    PubMed Central

    Schröter, Hannes; Studzinski, Beatrix; Dietz, Pavel; Ulrich, Rolf; Striegel, Heiko; Simon, Perikles

    2016-01-01

    Purpose This study assessed the prevalence of physical and cognitive doping in recreational triathletes with two different randomized response models, that is, the Cheater Detection Model (CDM) and the Unrelated Question Model (UQM). Since both models have been employed in assessing doping, the major objective of this study was to investigate whether the estimates of these two models converge. Material and Methods An anonymous questionnaire was distributed to 2,967 athletes at two triathlon events (Frankfurt and Wiesbaden, Germany). Doping behavior was assessed either with the CDM (Frankfurt sample, one Wiesbaden subsample) or the UQM (one Wiesbaden subsample). A generalized likelihood-ratio test was employed to check whether the prevalence estimates differed significantly between models. In addition, we compared the prevalence rates of the present survey with those of a previous study on a comparable sample. Results After exclusion of incomplete questionnaires and outliers, the data of 2,017 athletes entered the final data analysis. Twelve-month prevalence for physical doping ranged from 4% (Wiesbaden, CDM and UQM) to 12% (Frankfurt CDM), and for cognitive doping from 1% (Wiesbaden, CDM) to 9% (Frankfurt CDM). The generalized likelihood-ratio test indicated no differences in prevalence rates between the two methods. Furthermore, there were no significant differences in prevalences between the present (undertaken in 2014) and the previous survey (undertaken in 2011), although the estimates tended to be smaller in the present survey. Discussion The results suggest that the two models can provide converging prevalence estimates. The high rate of cheaters estimated by the CDM, however, suggests that the present results must be seen as a lower bound and that the true prevalence of doping might be considerably higher. PMID:27218830
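
    In the Unrelated Question Model, each respondent answers the sensitive question with a known probability and an innocuous question of known prevalence otherwise, so the doping prevalence can be backed out of the overall proportion of "yes" answers. The sketch below uses hypothetical design parameters and counts, not the survey's actual randomization device or data.

    ```python
    import math

    def uqm_estimate(n_yes, n_total, p_sensitive, pi_unrelated):
        """Unrelated Question Model estimator of a sensitive-behaviour prevalence.

        P(yes) = p_sensitive * pi_s + (1 - p_sensitive) * pi_unrelated,
        so pi_s is recovered from the observed 'yes' proportion.
        """
        lam = n_yes / n_total
        pi_s = (lam - (1 - p_sensitive) * pi_unrelated) / p_sensitive
        se = math.sqrt(lam * (1 - lam) / n_total) / p_sensitive
        return pi_s, se

    # Hypothetical design: sensitive question asked with probability 2/3, unrelated
    # question with a known 50% 'yes' prevalence, and 210 'yes' answers out of 1000
    print(uqm_estimate(210, 1000, p_sensitive=2/3, pi_unrelated=0.5))
    ```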

  8. MTN-017: A Rectal Phase 2 Extended Safety and Acceptability Study of Tenofovir Reduced-Glycerin 1% Gel

    PubMed Central

    Lama, Javier R.; Richardson, Barbra A.; Carballo-Diéguez, Alex; Kunjara Na Ayudhya, Ratiya Pamela; Liu, Karen; Patterson, Karen B.; Leu, Cheng-Shiun; Galaska, Beth; Jacobson, Cindy E.; Parikh, Urvi M.; Marzinke, Mark A.; Hendrix, Craig W.; Johnson, Sherri; Piper, Jeanna M.; Grossman, Cynthia; Ho, Ken S.; Lucas, Jonathan; Pickett, Jim; Bekker, Linda-Gail; Chariyalertsak, Suwat; Chitwarakorn, Anupong; Gonzales, Pedro; Holtz, Timothy H.; Liu, Albert Y.; Mayer, Kenneth H.; Zorrilla, Carmen; Schwartz, Jill L.; Rooney, James; McGowan, Ian

    2017-01-01

    Abstract Background. Human immunodeficiency virus (HIV) disproportionately affects men who have sex with men (MSM) and transgender women (TGW). Safe and acceptable topical HIV prevention methods that target the rectum are needed. Methods. MTN-017 was a phase 2, 3-period, randomized sequence, open-label, expanded safety and acceptability crossover study comparing rectally applied reduced-glycerin (RG) 1% tenofovir (TFV) and oral emtricitabine/TFV disoproxil fumarate (FTC/TDF). In each 8-week study period participants were randomized to RG-TFV rectal gel daily, or RG-TFV rectal gel before and after receptive anal intercourse (RAI; or at least twice weekly in the event of no RAI), or daily oral FTC/TDF. Results. MSM and TGW (n = 195) were enrolled from 8 sites in the United States, Thailand, Peru, and South Africa with mean age of 31.1 years (range 18-64). There were no differences in ≥grade 2 adverse event rates between daily gel (incidence rate ratio [IRR], 1.09; P = .59) or RAI gel (IRR, 0.90; P = .51) compared to FTC/TDF. High adherence (≥80% of prescribed doses assessed by unused product return and Short Message System reports) was less likely in the daily gel regimen (odds ratio [OR], 0.35; P < .001), and participants reported less likelihood of future daily gel use for HIV protection compared to FTC/TDF (OR, 0.38; P < .001). Conclusions. Rectal application of RG TFV gel was safe in MSM and TGW. Adherence and product use likelihood were similar for the intermittent gel and daily oral FTC/TDF regimens, but lower for the daily gel regimen. Clinical Trials Registration: NCT01687218. PMID:27986684

  9. Stochastic multicomponent reactive transport analysis of low quality drainage release from waste rock piles: Controls of the spatial distribution of acid generating and neutralizing minerals

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Mayer, K. Ulrich; Beckie, Roger D.

    2017-06-01

    In mining environmental applications, it is important to assess water quality from waste rock piles (WRPs) and estimate the likelihood of acid rock drainage (ARD) over time. The mineralogical heterogeneity of WRPs is a source of uncertainty in this assessment, undermining the reliability of traditional bulk indicators used in the industry. We focused in this work on the bulk neutralizing potential ratio (NPR), which is defined as the ratio of the content of non-acid-generating minerals (typically reactive carbonates such as calcite) to the content of potentially acid-generating minerals (typically sulfides such as pyrite). We used a streamtube-based Monte-Carlo method to show why and to what extent bulk NPR can be a poor indicator of ARD occurrence. We simulated ensembles of WRPs identical in their geometry and bulk NPR, which only differed in their initial distribution of the acid generating and acid neutralizing minerals that control NPR. All models simulated the same principal acid-producing, acid-neutralizing and secondary mineral forming processes. We show that small differences in the distribution of local NPR values or the number of flow paths that generate acidity strongly influence drainage pH. The results indicate that the likelihood of ARD (epitomized by the probability of occurrence of pH < 4 in a mixing boundary) within the first 100 years can be as high as 75% for an NPR = 2 and 40% for NPR = 4. The latter is traditionally considered a "universally safe" threshold to ensure non-acidic waters in practical applications. Our results suggest that new methods that explicitly account for mineralogical heterogeneity must be sought when computing effective (upscaled) NPR values at the scale of the piles.

  10. A likelihood ratio model for the determination of the geographical origin of olive oil.

    PubMed

    Własiuk, Patryk; Martyna, Agnieszka; Zadora, Grzegorz

    2015-01-01

    Food fraud or food adulteration may be of forensic interest for instance in the case of suspected deliberate mislabeling. On account of its potential health benefits and nutritional qualities, geographical origin determination of olive oil might be of special interest. The use of a likelihood ratio (LR) model has certain advantages in contrast to typical chemometric methods because the LR model takes into account the information about the sample rarity in a relevant population. Such properties are of particular interest to forensic scientists and therefore it has been the aim of this study to examine the issue of olive oil classification with the use of different LR models and their pertinence under selected data pre-processing methods (logarithm based data transformations) and feature selection technique. This was carried out on data describing 572 Italian olive oil samples characterised by the content of 8 fatty acids in the lipid fraction. Three classification problems related to three regions of Italy (South, North and Sardinia) have been considered with the use of LR models. The correct classification rate and empirical cross entropy were taken into account as a measure of performance of each model. The application of LR models in determining the geographical origin of olive oil has proven to be satisfactorily useful for the considered issues analysed in terms of many variants of data pre-processing since the rates of correct classifications were close to 100% and considerable reduction of information loss was observed. The work also presents a comparative study of the performance of the linear discriminant analysis in considered classification problems. An approach to the choice of the value of the smoothing parameter is highlighted for the kernel density estimation based LR models as well. Copyright © 2014 Elsevier B.V. All rights reserved.
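
    At its core, such an LR model weighs the probability density of the measured fatty-acid profile under one origin hypothesis against the alternative. A one-variable sketch using kernel density estimates is shown below; the data are simulated stand-ins, not the 572 Italian samples, and a real model would also handle the multivariate case, the data pre-processing, and calibration discussed above.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)

    # Hypothetical oleic-acid measurements (one of the eight fatty-acid variables)
    south = rng.normal(71.0, 2.0, 300)      # samples labelled "South"
    other = rng.normal(75.5, 2.5, 270)      # samples from the other regions

    kde_south, kde_other = gaussian_kde(south), gaussian_kde(other)

    def likelihood_ratio(x):
        """LR = p(measurement | South) / p(measurement | not South)."""
        return kde_south(x)[0] / kde_other(x)[0]

    for x in (70.0, 73.5, 77.0):
        print(x, round(likelihood_ratio(x), 2))
    ```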

  11. Molecular imaging with (99m)Tc-MIBI and molecular testing for mutations in differentiating benign from malignant follicular neoplasm: a prospective comparison.

    PubMed

    Giovanella, L; Campenni, A; Treglia, G; Verburg, F A; Trimboli, P; Ceriani, L; Bongiovanni, M

    2016-06-01

    To compare mutation analysis of cytology specimens and (99m)Tc-MIBI thyroid scintigraphy for differentiating benign from malignant thyroid nodules in patients with a cytological reading of follicular neoplasm. Patients ≥18 years of age with a solitary hypofunctioning thyroid nodule (≥10 mm), normal thyrotropin and calcitonin levels, and a cytological diagnosis of follicular neoplasm were prospectively enrolled. Mutation analysis and (99m)Tc-MIBI scintigraphy were performed and patients were subsequently operated on to confirm or exclude a malignant lesion. Mutations for KRAS, HRAS and NRAS and for BRAF and translocations of PAX8/PPARγ, RET/PTC1 and RET/PTC3 were investigated. Static thyroid scintigraphic images were acquired 10 and 60 min after intravenous injection of 200 MBq of (99m)Tc-MIBI and visually assessed. Additionally, the MIBI washout index was calculated using a semiquantitative method. In our series, 26 % of nodules with a follicular pattern on cytology were malignant with a prevalence of follicular carcinomas. (99m)Tc-MIBI scintigraphy was found to be significantly more accurate (positive likelihood ratio 4.56 for visual assessment and 12.35 for semiquantitative assessment) than mutation analysis (positive likelihood ratio 1.74). A negative (99m)Tc-MIBI scan reliably excluded malignancy. In patients with a thyroid nodule cytologically diagnosed as a follicular proliferation, semiquantitative analysis of (99m)Tc-MIBI scintigraphy should be the preferred method for differentiating benign from malignant nodules. It is superior to molecular testing for the presence of differentiated thyroid cancer-associated mutations in fine-needle aspiration cytology sample material.

  12. The second Herschel-ATLAS Data Release - III. Optical and near-infrared counterparts in the North Galactic Plane field

    NASA Astrophysics Data System (ADS)

    Furlanetto, C.; Dye, S.; Bourne, N.; Maddox, S.; Dunne, L.; Eales, S.; Valiante, E.; Smith, M. W.; Smith, D. J. B.; Ivison, R. J.; Ibar, E.

    2018-05-01

    This paper forms part of the second major public data release of the Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS). In this work, we describe the identification of optical and near-infrared counterparts to the submillimetre detected sources in the 177 deg2 North Galactic Plane (NGP) field. We used the likelihood ratio method to identify counterparts in the Sloan Digital Sky Survey and in the United Kingdom InfraRed Telescope Imaging Deep Sky Survey within a search radius of 10 arcsec of the H-ATLAS sources with a 4σ detection at 250 μm. We obtained reliable (R ≥ 0.8) optical counterparts with r < 22.4 for 42 429 H-ATLAS sources (37.8 per cent), with an estimated completeness of 71.7 per cent and a false identification rate of 4.7 per cent. We also identified counterparts in the near-infrared using deeper K-band data which covers a smaller ~25 deg2. We found reliable near-infrared counterparts to 61.8 per cent of the 250-μm-selected sources within that area. We assessed the performance of the likelihood ratio method to identify optical and near-infrared counterparts taking into account the depth and area of both input catalogues. Using catalogues with the same surface density of objects in the overlapping ~25 deg2 area, we found that the reliable fraction in the near-infrared (54.8 per cent) is significantly higher than in the optical (36.4 per cent). Finally, using deep radio data which covers a small region of the NGP field, we found that 80-90 per cent of our reliable identifications are correct.
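
    Schematically, the likelihood ratio used for counterpart identification compares the probability that a candidate with magnitude m at offset r is the true counterpart against the probability that it is an unrelated background object, LR = q(m) f(r) / n(m). The sketch below uses purely illustrative magnitude distributions and a Gaussian positional error model; it is not the H-ATLAS implementation.

    ```python
    import numpy as np

    def counterpart_lr(r_arcsec, m, sigma_pos, q_of_m, n_of_m):
        """Likelihood ratio for a candidate counterpart: LR = q(m) * f(r) / n(m).

        f(r): probability density of the true counterpart's positional offset
              (2-D Gaussian with width sigma_pos, in arcsec).
        q(m): magnitude distribution of true counterparts (per magnitude).
        n(m): surface density of background objects (per magnitude per arcsec^2).
        """
        f_r = np.exp(-r_arcsec**2 / (2 * sigma_pos**2)) / (2 * np.pi * sigma_pos**2)
        return q_of_m(m) * f_r / n_of_m(m)

    # Purely illustrative magnitude distributions (not the H-ATLAS ones)
    q = lambda m: 0.05 * np.exp(-0.5 * ((m - 20.5) / 1.5) ** 2)
    n = lambda m: 1e-3 * 10 ** (0.3 * (m - 20.0))

    print(counterpart_lr(r_arcsec=1.2, m=20.8, sigma_pos=2.4, q_of_m=q, n_of_m=n))
    ```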

  13. Likelihood inference for COM-Poisson cure rate model with interval-censored data and Weibull lifetimes.

    PubMed

    Pal, Suvra; Balakrishnan, N

    2017-10-01

    In this paper, we consider a competing cause scenario and assume the number of competing causes to follow a Conway-Maxwell Poisson distribution which can capture both over and under dispersion that is usually encountered in discrete data. Assuming the population of interest having a component cure and the form of the data to be interval censored, as opposed to the usually considered right-censored data, the main contribution is in developing the steps of the expectation maximization algorithm for the determination of the maximum likelihood estimates of the model parameters of the flexible Conway-Maxwell Poisson cure rate model with Weibull lifetimes. An extensive Monte Carlo simulation study is carried out to demonstrate the performance of the proposed estimation method. Model discrimination within the Conway-Maxwell Poisson distribution is addressed using the likelihood ratio test and information-based criteria to select a suitable competing cause distribution that provides the best fit to the data. A simulation study is also carried out to demonstrate the loss in efficiency when selecting an improper competing cause distribution which justifies the use of a flexible family of distributions for the number of competing causes. Finally, the proposed methodology and the flexibility of the Conway-Maxwell Poisson distribution are illustrated with two known data sets from the literature: smoking cessation data and breast cosmesis data.

  14. Effects of mass media campaign exposure intensity and durability on quit attempts in a population-based cohort study

    PubMed Central

    Wakefield, M. A.; Spittal, M. J.; Yong, H-H.; Durkin, S. J.; Borland, R.

    2011-01-01

    Objective: To assess the extent to which intensity and timing of televised anti-smoking advertising emphasizing the serious harms of smoking influences quit attempts. Methods: Using advertising gross rating points (GRPs), we estimated exposure to tobacco control and nicotine replacement therapy (NRT) advertising in the 3, 4–6, 7–9 and 10–12 months prior to follow-up of a replenished cohort of 3037 Australian smokers during 2002–08. Using generalized estimating equations, we related the intensity and timing of advertising exposure from each source to the likelihood of making a quit attempt in the 3 months prior to follow-up. Results: Tobacco control advertising in the 3-month period prior to follow-up, but not in more distant past periods, was related to a higher likelihood of making a quit attempt. Each 1000 GRP increase per quarter was associated with an 11% increase in making a quit attempt [odds ratio (OR) = 1.11, 95% confidence interval (CI) 1.03–1.19, P = 0.009)]. NRT advertising was unrelated to quit attempts. Conclusions: Tobacco control advertising emphasizing the serious harms of smoking is associated with short-term increases in the likelihood of smokers making a quit attempt. Repeated cycles of higher intensity tobacco control media campaigns are needed to sustain high levels of quit attempts. PMID:21730252

  15. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
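
    The "normal approximation" models referred to above work on the logit scale, with a per-study variance of roughly 1/tp + 1/fn for sensitivity. A DerSimonian–Laird style sketch of that approach, with hypothetical study counts, is shown below; the binomial-likelihood and bivariate models the authors recommend would instead be fit as generalized linear mixed models.

    ```python
    import numpy as np

    def pool_logit_sensitivity(tp, fn):
        """Random-effects (DerSimonian-Laird) pooling of logit sensitivity.

        Within-study variance uses the normal approximation 1/tp + 1/fn,
        which is the approach contrasted with binomial-likelihood models above.
        """
        tp, fn = np.asarray(tp, float), np.asarray(fn, float)
        y = np.log(tp / fn)                       # logit of sensitivity
        v = 1.0 / tp + 1.0 / fn                   # approximate within-study variance
        w = 1.0 / v
        q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
        tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_star = 1.0 / (v + tau2)
        pooled_logit = np.sum(w_star * y) / np.sum(w_star)
        return 1.0 / (1.0 + np.exp(-pooled_logit)), tau2

    # Hypothetical per-study true-positive / false-negative counts
    print(pool_logit_sensitivity(tp=[45, 80, 30, 61], fn=[5, 12, 9, 6]))
    ```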

  16. Composite Partial Likelihood Estimation Under Length-Biased Sampling, With Application to a Prevalent Cohort Study of Dementia

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing

    2013-01-01

    The Canadian Study of Health and Aging (CSHA) employed a prevalent cohort design to study survival after onset of dementia, where patients with dementia were sampled and the onset time of dementia was determined retrospectively. The prevalent cohort sampling scheme favors individuals who survive longer. Thus, the observed survival times are subject to length bias. In recent years, there has been a rising interest in developing estimation procedures for prevalent cohort survival data that not only account for length bias but also actually exploit the incidence distribution of the disease to improve efficiency. This article considers semiparametric estimation of the Cox model for the time from dementia onset to death under a stationarity assumption with respect to the disease incidence. Under the stationarity condition, the semiparametric maximum likelihood estimation is expected to be fully efficient yet difficult to perform for statistical practitioners, as the likelihood depends on the baseline hazard function in a complicated way. Moreover, the asymptotic properties of the semiparametric maximum likelihood estimator are not well-studied. Motivated by the composite likelihood method (Besag 1974), we develop a composite partial likelihood method that retains the simplicity of the popular partial likelihood estimator and can be easily performed using standard statistical software. When applied to the CSHA data, the proposed method estimates a significant difference in survival between the vascular dementia group and the possible Alzheimer’s disease group, while the partial likelihood method for left-truncated and right-censored data yields a greater standard error and a 95% confidence interval covering 0, thus highlighting the practical value of employing a more efficient methodology. To check the assumption of stable disease for the CSHA data, we also present new graphical and numerical tests in the article. The R code used to obtain the maximum composite partial likelihood estimator for the CSHA data is available in the online Supplementary Material, posted on the journal web site. PMID:24000265

  17. On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.

    PubMed

    Karabatsos, George

    2018-06-01

    This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation. It does so while accounting for the posterior uncertainty that is inherent in the empirical orderings that are implied by these axioms, together. The new method is illustrated through a test of the cancellation axioms on a classic survey data set, and through the analysis of simulated data.
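
    The synthetic likelihood idea is general: simulate data under a candidate parameter, reduce both simulated and observed data to summary statistics, fit a multivariate normal to the simulated summaries, and score the observed summaries under it. The toy sketch below (a normal location model, not the conjoint-measurement test itself) shows the mechanics.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)

    def summaries(x):
        """Summary statistics used in place of the full data."""
        return np.array([x.mean(), x.std(ddof=1)])

    def synthetic_loglik(theta, observed_summaries, n_sim=200, n_obs=100):
        """Synthetic log-likelihood: simulate under theta, fit a multivariate normal
        to the simulated summaries, and evaluate the observed summaries under it."""
        sims = np.array([summaries(rng.normal(theta, 1.0, n_obs)) for _ in range(n_sim)])
        mu, cov = sims.mean(axis=0), np.cov(sims, rowvar=False)
        return multivariate_normal.logpdf(observed_summaries, mean=mu, cov=cov)

    # Toy example: data generated with theta = 2; the synthetic likelihood peaks near 2
    observed = summaries(rng.normal(2.0, 1.0, 100))
    for theta in (1.5, 2.0, 2.5):
        print(theta, round(synthetic_loglik(theta, observed), 2))
    ```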

  18. Maximum-likelihood estimation of parameterized wavefronts from multifocal data

    PubMed Central

    Sakamoto, Julia A.; Barrett, Harrison H.

    2012-01-01

    A method for determining the pupil phase distribution of an optical system is demonstrated. Coefficients in a wavefront expansion were estimated using likelihood methods, where the data consisted of multiple irradiance patterns near focus. Proof-of-principle results were obtained in both simulation and experiment. Large-aberration wavefronts were handled in the numerical study. Experimentally, we discuss the handling of nuisance parameters. Fisher information matrices, Cramér-Rao bounds, and likelihood surfaces are examined. ML estimates were obtained by simulated annealing to deal with numerous local extrema in the likelihood function. Rapid processing techniques were employed to reduce the computational time. PMID:22772282

  19. External Validation of Fatty Liver Index for Identifying Ultrasonographic Fatty Liver in a Large-Scale Cross-Sectional Study in Taiwan

    PubMed Central

    Fang, Kuan-Chieh; Wang, Yuan-Chen; Huo, Teh-Ia; Huang, Yi-Hsiang; Yang, Hwai-I; Su, Chien-Wei; Lin, Han-Chieh; Lee, Fa-Yauh; Wu, Jaw-Ching; Lee, Shou-Dong

    2015-01-01

    Background and Aims The fatty liver index (FLI) is an algorithm involving the waist circumference, body mass index, and serum levels of triglyceride and gamma-glutamyl transferase to identify fatty liver. Although some studies have attempted to validate the FLI, few studies have been conducted for external validation among Asians. We attempted to validate FLI to predict ultrasonographic fatty liver in Taiwanese subjects. Methods We enrolled consecutive subjects who received health check-up services at the Taipei Veterans General Hospital from 2002 to 2009. Ultrasonography was applied to diagnose fatty liver. The ability of the FLI to detect ultrasonographic fatty liver was assessed by analyzing the area under the receiver operating characteristic (AUROC) curve. Results Among the 29,797 subjects enrolled in this study, fatty liver was diagnosed in 44.5% of the population. Subjects with ultrasonographic fatty liver had a significantly higher FLI than those without fatty liver by multivariate analysis (odds ratio 1.045; 95% confidence interval, CI 1.044–1.047, p< 0.001). Moreover, FLI had the best discriminative ability to identify patients with ultrasonographic fatty liver (AUROC: 0.827, 95% confidence interval, 0.822–0.831). An FLI < 25 (negative likelihood ratio (LR−) 0.32) for males and <10 (LR− 0.26) for females rule out ultrasonographic fatty liver. Moreover, an FLI ≥ 35 (positive likelihood ratio (LR+) 3.12) for males and ≥ 20 (LR+ 4.43) for females rule in ultrasonographic fatty liver. Conclusions FLI could accurately identify ultrasonographic fatty liver in a large-scale population in Taiwan but with lower cut-off value than the Western population. Meanwhile the cut-off value was lower in females than in males. PMID:25781622

  20. Utility and Safety of Endoscopic Ultrasound With Bronchoscope-Guided Fine-Needle Aspiration in Mediastinal Lymph Node Sampling: Systematic Review and Meta-Analysis.

    PubMed

    Dhooria, Sahajal; Aggarwal, Ashutosh N; Gupta, Dheeraj; Behera, Digambar; Agarwal, Ritesh

    2015-07-01

    The use of endoscopic ultrasound with bronchoscope-guided fine-needle aspiration (EUS-B-FNA) has been described in the evaluation of mediastinal lymphadenopathy. Herein, we conduct a meta-analysis to estimate the overall diagnostic yield and safety of EUS-B-FNA combined with endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA), in the diagnosis of mediastinal lymphadenopathy. The PubMed and EmBase databases were searched for studies reporting the outcomes of EUS-B-FNA in diagnosis of mediastinal lymphadenopathy. The study quality was assessed using the QualSyst tool. The yield of EBUS-TBNA alone and the combined procedure (EBUS-TBNA and EUS-B-FNA) were analyzed by calculating the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio for each study, and pooling the study results using a random effects model. Heterogeneity and publication bias were assessed for individual outcomes. The additional diagnostic gain of EUS-B-FNA over EBUS-TBNA was calculated using proportion meta-analysis. Our search yielded 10 studies (1,080 subjects with mediastinal lymphadenopathy). The sensitivity of the combined procedure was significantly higher than EBUS-TBNA alone (91% vs 80%, P = .004), in staging of lung cancer (4 studies, 465 subjects). The additional diagnostic gain of EUS-B-FNA over EBUS-TBNA was 7.6% in the diagnosis of mediastinal adenopathy. No serious complication of EUS-B-FNA procedure was reported. Clinical and statistical heterogeneity was present without any evidence of publication bias. Combining EBUS-TBNA and EUS-B-FNA is an effective and safe method, superior to EBUS-TBNA alone, in the diagnosis of mediastinal lymphadenopathy. Good quality randomized controlled trials are required to confirm the results of this systematic review. Copyright © 2015 by Daedalus Enterprises.

  1. Diagnostic accuracy of PCR for detecting ALK gene rearrangement in NSCLC patients: A systematic review and meta-analysis

    PubMed Central

    Zhang, Xia; Zhou, Jian-Guo; Wu, Hua-Lian; Ma, Hu; Jiang, Zhi-Xia

    2017-01-01

    Background Anaplastic lymphoma kinase (ALK) gene fusion has been reported in 3–5% of non-small cell lung carcinoma (NSCLC) patients, and polymerase chain reaction (PCR) is commonly used to detect the gene status, but its diagnostic capacity is still controversial. A systematic review and meta-analysis was conducted to clarify the diagnostic accuracy of PCR for detecting ALK gene rearrangement in NSCLC patients. Results Eighteen articles, comprising 21 studies and involving 2800 samples from NSCLC patients, were included. The overall pooled parameters were calculated: sensitivity was 92.4% [95% confidence interval (CI): 82.2%–97.0%], specificity was 97.8% [95% CI: 95.1%–99.0%], PLR was 41.51 [95% CI: 18.10–95.22], NLR was 0.08 [95% CI: 0.03–0.19], DOR was 535.72 [95% CI: 128.48–2233.79], AUROC was 0.99 [95% CI: 0.98–1.00]. Materials and Methods Relevant articles were searched from PubMed, EMBASE, Web of Science, Cochrane library, American Society of Clinical Oncology (ASCO), European Society for Medical Oncology (ESMO), China National Knowledge Infrastructure (CNKI), China Wan Fang databases and Chinese biomedical literature database (CBM). Diagnostic capacity of the PCR test was assessed by the pooled sensitivity and specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR), and area under the summary receiver operating characteristic curve (AUROC). Conclusions Based on the results from this review, PCR has good diagnostic performance for detecting the ALK gene fusion in NSCLC patients. Moreover, due to the poor methodological quality of the enrolled trials, more well-designed multi-center trials should be performed. PMID:29088875

  2. Diagnostic accuracy of clinical examination features for identifying large rotator cuff tears in primary health care

    PubMed Central

    Cadogan, Angela; McNair, Peter; Laslett, Mark; Hing, Wayne; Taylor, Stephen

    2013-01-01

    Objectives: Rotator cuff tears are a common and disabling complaint. The early diagnosis of medium and large size rotator cuff tears can enhance the prognosis of the patient. The aim of this study was to identify clinical features with the strongest ability to accurately predict the presence of a medium, large or multitendon (MLM) rotator cuff tear in a primary care cohort. Methods: Participants were consecutively recruited from primary health care practices (n = 203). All participants underwent a standardized history and physical examination, followed by a standardized X-ray series and diagnostic ultrasound scan. Clinical features associated with the presence of a MLM rotator cuff tear were identified (P<0.200), a logistic multiple regression model was derived for identifying a MLM rotator cuff tear and thereafter diagnostic accuracy was calculated. Results: A MLM rotator cuff tear was identified in 24 participants (11.8%). Constant pain and a painful arc in abduction were the strongest predictors of a MLM tear (adjusted odds ratio 3.04 and 13.97 respectively). Combinations of ten history and physical examination variables demonstrated highest levels of sensitivity when five or fewer were positive [100%, 95% confidence interval (CI): 0.86–1.00; negative likelihood ratio: 0.00, 95% CI: 0.00–0.28], and highest specificity when eight or more were positive (0.91, 95% CI: 0.86–0.95; positive likelihood ratio 4.66, 95% CI: 2.34–8.74). Discussion: Combinations of patient history and physical examination findings were able to accurately detect the presence of a MLM rotator cuff tear. These findings may aid the primary care clinician in more efficient and accurate identification of rotator cuff tears that may require further investigation or orthopedic consultation. PMID:24421626

  3. Diagnostic Accuracy of Computer Tomography Angiography and Magnetic Resonance Angiography in the Stenosis Detection of Autologuous Hemodialysis Access: A Meta-Analysis

    PubMed Central

    Liu, Shiyuan

    2013-01-01

    Purpose To compare the diagnostic performances of computed tomography angiography (CTA) and magnetic resonance angiography (MRA) for detection and assessment of stenosis in patients with autologous hemodialysis access. Materials and Methods Search of PubMed, MEDLINE, EMBASE and Cochrane Library database from January 1984 to May 2013 for studies comparing CTA or MRA with DSA or surgery for autologous hemodialysis access. Eligible studies were in English language, aimed to detect more than 50% stenosis or occlusion of autologous vascular access in hemodialysis patients with CTA and MRA technology and provided sufficient data about diagnosis performance. Methodological quality was assessed by the Quality Assessment of Diagnostic Studies (QUADAS) instrument. Sensitivities (SEN), specificities (SPE), positive likelihood ratios (PLR), negative likelihood ratios (NLR), diagnostic odds ratios (DOR) and areas under the receiver operator characteristic curve (AUC) were pooled statistically. Potential threshold effect, heterogeneity and publication bias were evaluated. The clinical utility of CTA and MRA in detection of stenosis was also investigated. Result Sixteen eligible studies were included, with a total of 500 patients. Both CTA and MRA were accurate modalities (sensitivity, 96.2% and 95.4%, respectively; specificity, 97.1% and 96.1%, respectively; DOR [diagnostic odds ratio], 393.69 and 211.47, respectively) for hemodialysis vascular access. No significant difference was detected between the diagnostic performance of CTA (AUC, 0.988) and MRA (AUC, 0.982). Meta-regression analyses and subgroup analyses revealed no statistical difference. The Deeks' funnel plots suggested a publication bias. Conclusion Diagnostic performance of CTA and MRA for detecting stenosis of hemodialysis vascular access had no statistical difference. Both techniques may function as an alternative or an important complement to conventional digital subtraction angiography (DSA) and may be able to help guide medical management. PMID:24194928

  4. Estimation method for serial dilution experiments.

    PubMed

    Ben-David, Avishai; Davidson, Charles E

    2014-12-01

    Titration of microorganisms in infectious or environmental samples is a cornerstone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without a need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area that both contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1 log10 from data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10^4 and 10^12 colony-forming units, dilution ratios from 2 to 100, and plate size to colony size ratios between 6.25 and 200. Published by Elsevier B.V.
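
    The basic count-back that the method builds on is concentration = colonies / (plated volume × total dilution), with a Poisson counting error of roughly 1/sqrt(colonies) in relative terms. The sketch below shows only this count-back with hypothetical numbers; the published method's weighting of colony size and plate area when selecting which plate to count is not reproduced here.

    ```python
    import math

    def estimate_concentration(colonies, dilution_factor, dilution_step, plated_volume_ml):
        """Back-calculate microbial concentration (CFU/mL) from one serial-dilution plate.

        concentration = colonies * total dilution / plated volume; the Poisson
        counting error gives a rough relative uncertainty of 1/sqrt(colonies).
        """
        total_dilution = dilution_factor ** dilution_step
        concentration = colonies * total_dilution / plated_volume_ml
        rel_uncertainty = 1.0 / math.sqrt(colonies)
        return concentration, concentration * rel_uncertainty

    # Hypothetical example: 87 colonies on the 10^-6 plate after plating 0.1 mL
    print(estimate_concentration(colonies=87, dilution_factor=10, dilution_step=6,
                                 plated_volume_ml=0.1))
    ```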

  5. Epidemiologic programs for computers and calculators. A microcomputer program for multiple logistic regression by unconditional and conditional maximum likelihood methods.

    PubMed

    Campos-Filho, N; Franco, E L

    1989-02-01

    A frequent procedure in matched case-control studies is to report results from the multivariate unmatched analyses if they do not differ substantially from the ones obtained after conditioning on the matching variables. Although conceptually simple, this rule requires that an extensive series of logistic regression models be evaluated by both the conditional and unconditional maximum likelihood methods. Most computer programs for logistic regression employ only one maximum likelihood method, which requires that the analyses be performed in separate steps. This paper describes a Pascal microcomputer (IBM PC) program that performs multiple logistic regression by both maximum likelihood estimation methods, which obviates the need for switching between programs to obtain relative risk estimates from both matched and unmatched analyses. The program calculates most standard statistics and allows factoring of categorical or continuous variables by two distinct methods of contrast. A built-in, descriptive statistics option allows the user to inspect the distribution of cases and controls across categories of any given variable.

  6. CT Pulmonary Angiography: Increasingly Diagnosing Less Severe Pulmonary Emboli

    PubMed Central

    Schissler, Andrew J.; Rozenshtein, Anna; Kulon, Michal E.; Pearson, Gregory D. N.; Green, Robert A.; Stetson, Peter D.; Brenner, David J.; D'Souza, Belinda; Tsai, Wei-Yann; Schluger, Neil W.; Einstein, Andrew J.

    2013-01-01

    Background It is unknown whether the observed increase in computed tomography pulmonary angiography (CTPA) utilization has resulted in increased detection of pulmonary emboli (PEs) with a less severe disease spectrum. Methods Trends in utilization, diagnostic yield, and disease severity were evaluated for 4,048 consecutive initial CTPAs performed in adult patients in the emergency department of a large urban academic medical center between 1/1/2004 and 10/31/2009. Transthoracic echocardiography (TTE) findings and peak serum troponin levels were evaluated to assess for the presence of PE-associated right ventricular (RV) abnormalities (dysfunction or dilatation) and myocardial injury, respectively. Statistical analyses were performed using multivariate logistic regression. Results 268 CTPAs (6.6%) were positive for acute PE, and 3,780 (93.4%) demonstrated either no PE or chronic PE. There was a significant increase in the likelihood of undergoing CTPA per year during the study period (odds ratio [OR] 1.05, 95% confidence interval [CI] 1.04–1.07, P<0.01). There was no significant change in the likelihood of having a CTPA diagnostic of an acute PE per year (OR 1.03, 95% CI 0.95–1.11, P = 0.49). The likelihood of diagnosing a less severe PE on CTPA with no associated RV abnormalities or myocardial injury increased per year during the study period (OR 1.39, 95% CI 1.10–1.75, P = 0.01). Conclusions CTPA utilization has risen with no corresponding change in diagnostic yield, resulting in an increase in PE detection. There is a concurrent rise in the likelihood of diagnosing a less clinically severe spectrum of PEs. PMID:23776522

  7. [Injury to the Scapholunate Ligament in Distal Radius Fractures: Peri-Operative Diagnosis and Treatment Results].

    PubMed

    Gajdoš, R; Pilný, J; Pokorná, A

    2016-01-01

    PURPOSE OF THE STUDY Injury to the scapholunate ligament is frequently associated with a fracture of the distal radius. At present, neither a unified concept of treatment nor a standard method of diagnosis for these concomitant injuries is available. The aim of the study was to evaluate a group of surgically treated patients with distal radius fractures in order to assess the contribution of combined conventional X-ray and intra-operative fluoroscopic examinations to the diagnosis of associated lesions and to compare short-term functional outcomes of surgically treated patients with those of patients treated conservatively. MATERIAL AND METHODS A group of patients undergoing surgery for distal radius fractures using plate osteosynthesis was evaluated retrospectively. The peri-operative diagnosis of associated injury to the scapholunate ligament was based on pre-operative standard X-ray views and intra-operative fluoroscopy. The latter consisted of images of maximum radial and ulnar deviation as well as an image of the forearm in traction exerted manually along the long axis. All views were in postero-anterior projection. Results were read directly on the monitor of a fluoroscopic device after its calibration or were obtained by comparing the thickness of an attached Kirschner wire with the distance to be measured. Subsequently, pixels were converted to millimetres. When a scapholunate ligament injury was found and confirmed by examination of the contralateral wrist, the finding was verified by open reduction or arthroscopy. Both static and dynamic instabilities were treated together with the distal radius fracture at one-stage surgery. After surgery, the patients without ligament injury had the wrist immobilised for 4 weeks, then rehabilitation followed. In the patients with a damaged ligament, immobilisation in a short brace lasted until transarticular wires were removed. All patients were followed up for a year at least. At follow-up, the injured wrist was examined for signs of clinical instability of the scapholunate joint, functional outcome was assessed using the Mayo Wrist Score (MWS) and pain intensity was evaluated on the Visual Analogue Scale (VAS). Restriction in daily activities was rated by the Quick Disabilities of the Arm, Shoulder and Hand (QDASH) score and plain X-ray was done. If any of the results was not satisfactory, MRI examination was indicated. RESULTS Of a total of 265 patients, 35 had injury to the scapholunate joint, 16 had static instability diagnosed by a standard fluoroscopic examination and nine patients with an acute phase of injury remained undiagnosed. For detection of associated scapholunate injuries, a standard X-ray examination had sensitivity of 46%, specificity of 99%, accuracy of 92%, positive predictive value of 84%, negative predictive value of 92%, positive likelihood ratio = 35.05 and negative likelihood ratio = 0.55. Dynamic fluoroscopic examination showed sensitivity of 53%, specificity of 99%, accuracy of 95%, positive predictive value of 77%, negative predictive value of 96%, positive likelihood ratio = 36.49 and negative likelihood ratio = 0.48. Using the MWS system, no differences in the outcome of scapholunate instability treatment were found between the patients undergoing surgery and those treated conservatively (p=0.35). Statistically significant differences were detected in the evaluation of subjective parameters: both VAS and QDASH scores were better in the treated than non-treated patients (p=0.02 and p=0.04, respectively).
    DISCUSSION The high negative predictive values of both standard X-ray and intra-operative fluoroscopy showed that combined use of the two methods is more relevant for excluding than for confirming an injury to the scapholunate ligament concomitant with distal radius fracture. Similarly, the low negative likelihood ratio showed that a negative result decreases the pre-test probability of concomitant injury. CONCLUSIONS Negative findings of scapholunate ligament injury on standard X-ray views and intra-operative fluoroscopic images make it unnecessary to perform any further intra-operative examination to detect injury to the scapholunate ligament. Positive findings require verification of the degree of injury by another intra-operative modality, ideally by arthroscopy. Patients with untreated instability associated with distal radius fracture have, at short-term follow-up, no statistically significant differences in functioning of the injured extremity in comparison with treated patients. Subjectively, however, they feel more pain and more restriction in performing daily activities. Therefore, the treatment of an injured scapholunate ligament together with distal radius fracture at one-stage surgery seems to be a good alternative for the patient. Key words: distal radius fracture, scapholunate ligament, radiographic diagnosis, outcome.
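
    Many of the records in this collection report positive and negative likelihood ratios next to sensitivity and specificity and, as here, use them to update a pre-test probability. A small generic illustration of those relationships follows; the numbers are made up and are not taken from this study.

      # Likelihood ratios from sensitivity and specificity, and the post-test
      # probability they imply. Example numbers are illustrative only.
      def likelihood_ratios(sensitivity, specificity):
          lr_pos = sensitivity / (1.0 - specificity)
          lr_neg = (1.0 - sensitivity) / specificity
          return lr_pos, lr_neg

      def post_test_probability(pre_test_probability, lr):
          pre_odds = pre_test_probability / (1.0 - pre_test_probability)
          post_odds = pre_odds * lr
          return post_odds / (1.0 + post_odds)

      lr_pos, lr_neg = likelihood_ratios(0.80, 0.90)     # LR+ = 8.0, LR- ~ 0.22
      print(post_test_probability(0.20, lr_pos))         # positive test: 0.20 -> about 0.67
      print(post_test_probability(0.20, lr_neg))         # negative test: 0.20 -> about 0.05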

  8. Pooled diagnostic accuracy of resting distal to aortic coronary pressure referenced to fractional flow reserve: The importance of resting coronary physiology.

    PubMed

    Maini, Rohit; Moscona, John; Sidhu, Gursukhman; Katigbak, Paul; Fernandez, Camilo; Irimpen, Anand; Mogabgab, Owen; Ward, Charisse; Samson, Rohan; LeJemtel, Thierry

    2018-04-29

    Both resting and hyperemic physiologic methods to guide coronary revascularization improve cardiovascular outcomes compared with angiographic guidance alone. Fractional flow reserve (FFR) remains underutilized due to concerns regarding hyperemia, prompting study of resting distal to aortic coronary pressure (Pd/Pa). Pd/Pa is a vasodilator-free resting index unlike FFR. While Pd/Pa is similar to another resting index, instantaneous wave-free ratio (iFR), it is a whole-cycle measurement not limited to the wave-free diastolic period. Pd/Pa is not validated clinically although multiple accuracy studies have been performed. Our meta-analysis examines the overall diagnostic accuracy of Pd/Pa referenced to FFR, the accepted invasive standard of ischemia. We searched PubMed, EMBASE, Central, ProQuest, and Web of Science databases for full text articles published through August 9, 2017 addressing the diagnostic accuracy of Pd/Pa referenced to FFR < 0.80. The following keywords were used: "distal coronary artery pressure" OR "Pd/Pa" AND "fractional flow reserve" OR "FFR." In total, 14 studies comprising 7004 lesions were identified. Pooled diagnostic accuracy estimates of Pd/Pa versus FFR < 0.80 were: sensitivity, 0.77 (95% CI, 0.75-0.78); specificity, 0.82 (0.81-0.83); positive likelihood ratio, 4.7 (3.3-6.6); negative likelihood ratio, 0.29 (0.24-0.34); diagnostic odds ratio, 18.1 (14.4-22.6); area under the summary receiver-operating characteristic curve of 0.88; and diagnostic accuracy of 0.80 (0.76-0.83). Pd/Pa shows adequate agreement with FFR as a resting index of coronary stenosis severity without the undesired effects and cost of hyperemic agents. Pd/Pa has the potential to guide coronary revascularization with easier application and availability compared with iFR and FFR. © 2018, Wiley Periodicals, Inc.

  9. Detecting spatial structures in throughfall data: the effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-04-01

    In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
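
    A compact sketch of the classical method-of-moments (Matheron) variogram estimator discussed here is given below. The bin width, number of lags, and the rule of thumb for the maximum distance are assumptions, and fitting a variogram model by residual maximum likelihood would be a separate step.

      # Minimal method-of-moments (Matheron) empirical variogram for scattered data.
      import numpy as np

      def empirical_variogram(coords, values, n_lags=10, max_dist=None):
          coords = np.asarray(coords, float)
          values = np.asarray(values, float)
          # all pairwise distances and squared value differences
          d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
          sq = (values[:, None] - values[None, :]) ** 2
          iu = np.triu_indices(len(values), k=1)          # count each pair once
          d, sq = d[iu], sq[iu]
          if max_dist is None:
              max_dist = d.max() / 2.0                    # common rule of thumb
          edges = np.linspace(0.0, max_dist, n_lags + 1)
          centers, gamma = [], []
          for lo, hi in zip(edges[:-1], edges[1:]):
              m = (d > lo) & (d <= hi)
              if m.any():
                  centers.append(0.5 * (lo + hi))
                  gamma.append(0.5 * sq[m].mean())        # semivariance in this lag bin
          return np.array(centers), np.array(gamma)

      # Usage: lags, sv = empirical_variogram(xy_locations, throughfall_values)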

  10. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    NASA Astrophysics Data System (ADS)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.

  11. Effects of body fat and dominant somatotype on explosive strength and aerobic capacity trainability in prepubescent children.

    PubMed

    Marta, Carlos C; Marinho, Daniel A; Barbosa, Tiago M; Carneiro, André L; Izquierdo, Mikel; Marques, Mário C

    2013-12-01

    The purpose of this study was to analyze the influence of body fat and somatotype on explosive strength and aerobic capacity trainability in the prepubertal growth spurt, marked by rapid changes in body size, shape, and composition, all of which are sexually dimorphic. One hundred twenty-five healthy children (58 boys, 67 girls), aged 10-11 years (10.8 ± 0.4 years), who were self-assessed in Tanner stages 1-2, were randomly assigned either to one of two experimental groups that trained twice a week for 8 weeks, a strength training group (19 boys, 22 girls) or an endurance training group (21 boys, 24 girls), or to a control group (18 boys, 21 girls). Evaluation of body fat was carried out using the method described by Slaughter. Somatotype was computed according to the Heath-Carter method. Increased endomorphy reduced the likelihood of vertical jump height improvement (odds ratio [OR], 0.10; 95% confidence interval [CI], 0.01-0.85), increased mesomorphy (OR, 6.15; 95% CI, 1.52-24.88) and ectomorphy (OR, 6.52; 95% CI, 1.71-24.91) increased the likelihood of sprint performance improvement, and increased ectomorphy (OR, 3.84; 95% CI, 1.20-12.27) increased the likelihood of aerobic fitness gains. Sex did not affect the training-induced changes in strength or aerobic fitness. These data suggest that somatotype has an effect on explosive strength and aerobic capacity trainability, which should not be disregarded. The effects of adiposity on explosive strength, of musculoskeletal magnitude on running speed, and of relative linearity on running speed and aerobic capacity seem to be crucial factors related to training-induced gains in prepubescent boys and girls.

  12. SEPARABLE FACTOR ANALYSIS WITH APPLICATIONS TO MORTALITY DATA

    PubMed Central

    Fosdick, Bailey K.; Hoff, Peter D.

    2014-01-01

    Human mortality data sets can be expressed as multiway data arrays, the dimensions of which correspond to categories by which mortality rates are reported, such as age, sex, country and year. Regression models for such data typically assume an independent error distribution or an error model that allows for dependence along at most one or two dimensions of the data array. However, failing to account for other dependencies can lead to inefficient estimates of regression parameters, inaccurate standard errors and poor predictions. An alternative to assuming independent errors is to allow for dependence along each dimension of the array using a separable covariance model. However, the number of parameters in this model increases rapidly with the dimensions of the array and, for many arrays, maximum likelihood estimates of the covariance parameters do not exist. In this paper, we propose a submodel of the separable covariance model that estimates the covariance matrix for each dimension as having factor analytic structure. This model can be viewed as an extension of factor analysis to array-valued data, as it uses a factor model to estimate the covariance along each dimension of the array. We discuss properties of this model as they relate to ordinary factor analysis, describe maximum likelihood and Bayesian estimation methods, and provide a likelihood ratio testing procedure for selecting the factor model ranks. We apply this methodology to the analysis of data from the Human Mortality Database, and show in a cross-validation experiment how it outperforms simpler methods. Additionally, we use this model to impute mortality rates for countries that have no mortality data for several years. Unlike other approaches, our methodology is able to estimate similarities between the mortality rates of countries, time periods and sexes, and use this information to assist with the imputations. PMID:25489353
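
    To make the covariance structure concrete, the short sketch below builds a separable (Kronecker) covariance in which the factor for each array dimension has factor-analytic form L L' + diag(d). The dimensions and ranks are arbitrary illustrations, and this shows only the model form, not the estimation procedure described in the paper.

      # Separable covariance with factor-analytic structure along each array dimension.
      import numpy as np

      def factor_cov(n, rank, rng):
          L = rng.normal(size=(n, rank))          # loadings for this dimension
          d = rng.uniform(0.5, 1.5, size=n)       # idiosyncratic variances
          return L @ L.T + np.diag(d)

      rng = np.random.default_rng(1)
      dims, ranks = (4, 3, 5), (2, 1, 2)          # e.g. category counts along three dimensions
      covs = [factor_cov(n, r, rng) for n, r in zip(dims, ranks)]

      # Full separable covariance of the vectorized array (feasible only for tiny arrays):
      sigma = covs[0]
      for c in covs[1:]:
          sigma = np.kron(sigma, c)
      print(sigma.shape)                          # (60, 60) = (4*3*5, 4*3*5)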

  13. Clinical Diagnosis of Bordetella Pertussis Infection: A Systematic Review.

    PubMed

    Ebell, Mark H; Marchello, Christian; Callahan, Maria

    2017-01-01

    Bordetella pertussis (BP) is a common cause of prolonged cough. Our objective was to perform an updated systematic review of the clinical diagnosis of BP without restriction by patient age. We identified prospective cohort studies of patients with cough or suspected pertussis and assessed study quality using QUADAS-2. We performed bivariate meta-analysis to calculate summary estimates of accuracy and created summary receiver operating characteristic curves to explore heterogeneity by vaccination status and age. Of 381 studies initially identified, 22 met our inclusion criteria, of which 14 had a low risk of bias. The overall clinical impression was the most accurate predictor of BP (positive likelihood ratio [LR+], 3.3; negative likelihood ratio [LR-], 0.63). The presence of whooping cough (LR+, 2.1) and posttussive vomiting (LR+, 1.7) somewhat increased the likelihood of BP, whereas the absence of paroxysmal cough (LR-, 0.58) and the absence of sputum (LR-, 0.63) decreased it. Whooping cough and posttussive vomiting have lower sensitivity in adults. Clinical criteria defined by the Centers for Disease Control and Prevention were sensitive (0.90) but nonspecific. Typical signs and symptoms of BP may be more sensitive but less specific in vaccinated patients. The clinician's overall impression was the most accurate way to determine the likelihood of BP infection when a patient initially presented. Clinical decision rules that combine signs, symptoms, and point-of-care tests have not yet been developed or validated. © Copyright 2017 by the American Board of Family Medicine.

  14. [Clinical examination and the Valsalva maneuver in heart failure].

    PubMed

    Liniado, Guillermo E; Beck, Martín A; Gimeno, Graciela M; González, Ana L; Cianciulli, Tomás F; Castiello, Gustavo G; Gagliardi, Juan A

    2018-01-01

    Congestion in heart failure patients with reduced ejection fraction (HFrEF) is relevant and closely linked to the clinical course. Bedside blood pressure measurement during the Valsalva maneuver (Val) added to clinical examination may improve the assessment of congestion when compared to NT-proBNP levels and left atrial pressure (LAP) estimation by Doppler echocardiography, as surrogate markers of congestion in HFrEF. A clinical examination, LAP and blood tests were performed in 69 HFrEF ambulatory patients with left ventricular ejection fraction ≤ 40% and sinus rhythm. Framingham Heart Failure Score (HFS) was used to evaluate clinical congestion; Val was classified as normal or abnormal, NT-proBNP was classified as low (< 1000 pg/ml) or high (≥ 1000 pg/ml) and the ratio between Doppler early mitral inflow and tissue diastolic velocity was used to estimate LAP and was classified as low (E/e' < 15) or high (E/e' ≥ 15). A total of 69 patients with HFrEF were included; 27 had a HFS ≥ 2 and 13 of them had high NT-proBNP. HFS ≥ 2 had a 62% sensitivity, 70% specificity and a positive likelihood ratio of 2.08 (p=0.01) to detect congestion. When Val was added to clinical examination, the presence of a HFS ≥ 2 and abnormal Val showed a 100% sensitivity, 64% specificity and a positive likelihood ratio of 2.8 (p = 0.0004). Compared with LAP, the presence of HFS ≥ 2 and abnormal Val had 86% sensitivity, 54% specificity and a positive likelihood ratio of 1.86 (p = 0.03). In conclusion, an integrated clinical examination with the addition of the Valsalva maneuver may improve the assessment of congestion in patients with HFrEF.

  15. Rayleigh-maximum-likelihood bilateral filter for ultrasound image enhancement.

    PubMed

    Li, Haiyan; Wu, Jun; Miao, Aimin; Yu, Pengfei; Chen, Jianhua; Zhang, Yufeng

    2017-04-17

    Ultrasound imaging plays an important role in computer diagnosis since it is non-invasive and cost-effective. However, ultrasound images are inevitably contaminated by noise and speckle during acquisition. Noise and speckle hamper the physician's interpretation of the images and decrease the accuracy of clinical diagnosis. Denoising is therefore an important step in enhancing the quality of ultrasound images; however, current denoising methods are limited in that they can remove noise while ignoring the statistical characteristics of speckle, undermining the effectiveness of despeckling, or vice versa. In addition, most existing algorithms do not identify noise, speckle or edge before removing noise or speckle, and thus they reduce noise and speckle while blurring edge details. Therefore, it is a challenging issue for the traditional methods to effectively remove noise and speckle in ultrasound images while preserving edge details. To overcome the above-mentioned limitations, a novel method, called the Rayleigh-maximum-likelihood switching bilateral filter (RSBF), is proposed to enhance ultrasound images in two steps: noise, speckle and edge detection followed by filtering. Firstly, a sorted quadrant median vector scheme is utilized to calculate the reference median in a filtering window in comparison with the central pixel to classify the target pixel as noise, speckle or noise-free. Subsequently, the noise is removed by a bilateral filter and the speckle is suppressed by a Rayleigh-maximum-likelihood filter while the noise-free pixels are kept unchanged. To quantitatively evaluate the performance of the proposed method, synthetic ultrasound images contaminated by speckle are simulated using a speckle model that follows a Rayleigh distribution. Thereafter, corrupted synthetic images are generated by multiplying the original image with Rayleigh-distributed speckle at various signal-to-noise ratio (SNR) levels and adding Gaussian-distributed noise. Meanwhile, clinical breast ultrasound images are used to visually evaluate the effectiveness of the method. To examine the performance, comparison tests between the proposed RSBF and six state-of-the-art methods for ultrasound speckle removal are performed on simulated ultrasound images with various noise and speckle levels. The results of the proposed RSBF are satisfying since the Gaussian noise and the Rayleigh speckle are greatly suppressed. The proposed method improves the SNR of the enhanced images to nearly 15 dB for images corrupted by speckle and to nearly 13 dB for images contaminated by both speckle and noise, across various SNR levels. The RSBF is effective in enhancing edges while smoothing the speckle and noise in clinical ultrasound images. In the comparison experiments, the proposed method demonstrates its superiority in accuracy and robustness for denoising and edge preserving under various levels of noise and speckle in terms of visual quality as well as numeric metrics, such as peak signal-to-noise ratio, SNR and root mean squared error. The experimental results show that the proposed method is effective for removing the speckle and the background noise in ultrasound images. The main reason is that it performs a "detect and replace" two-step mechanism. The advantages of the proposed RSBF lie in two aspects.
    Firstly, each central pixel is classified as noise, speckle or noise-free texture according to the absolute difference between the target pixel and the reference median. Subsequently, the Rayleigh-maximum-likelihood filter and the bilateral filter are switched to eliminate speckle and noise, respectively, while the noise-free pixels are unaltered. Therefore, it achieves better accuracy and robustness than the traditional methods. Overall, these traits suggest that the proposed RSBF would have significant clinical application.
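
    A much-simplified sketch of this "detect then replace" idea is given below: classify each pixel against a local reference median, then either keep it, smooth it with a bilateral weight (noise), or replace it with a Rayleigh maximum-likelihood estimate (speckle). The thresholds, window size, use of a plain window median instead of the sorted quadrant median vector, and the choice of the Rayleigh mean as the restored value are simplifying assumptions, not the published algorithm.

      import numpy as np

      def rayleigh_ml_mean(window):
          sigma = np.sqrt(np.mean(window ** 2) / 2.0)   # ML scale of a Rayleigh sample
          return sigma * np.sqrt(np.pi / 2.0)           # mean of that Rayleigh as the restored value

      def bilateral_pixel(window, center, sigma_s=1.5, sigma_r=0.1):
          k = window.shape[0] // 2
          yy, xx = np.mgrid[-k:k + 1, -k:k + 1]
          w = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_s ** 2)) \
            * np.exp(-((window - center) ** 2) / (2 * sigma_r ** 2))
          return float((w * window).sum() / w.sum())

      def switching_filter(img, k=2, t_noise=0.05, t_speckle=0.15):
          out = img.copy()
          padded = np.pad(img, k, mode="reflect")
          for i in range(img.shape[0]):
              for j in range(img.shape[1]):
                  win = padded[i:i + 2 * k + 1, j:j + 2 * k + 1]
                  diff = abs(img[i, j] - np.median(win))
                  if diff <= t_noise:
                      continue                                   # treated as noise-free
                  elif diff <= t_speckle:
                      out[i, j] = bilateral_pixel(win, img[i, j])  # Gaussian-like noise
                  else:
                      out[i, j] = rayleigh_ml_mean(win)            # speckle
          return out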

  16. Source localization of rhythmic ictal EEG activity: a study of diagnostic accuracy following STARD criteria.

    PubMed

    Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders

    2013-10-01

    Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model, the local autoregressive average (LAURA), was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard, the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. The reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results, as compared with the discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method is feasible in clinical practice and has a good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.

  17. Pepsin in saliva for the diagnosis of gastro-oesophageal reflux disease.

    PubMed

    Hayat, Jamal O; Gabieta-Somnez, Shirley; Yazaki, Etsuro; Kang, Jin-Yong; Woodcock, Andrew; Dettmar, Peter; Mabary, Jerry; Knowles, Charles H; Sifrim, Daniel

    2015-03-01

    Current diagnostic methods for gastro-oesophageal reflux disease (GORD) have moderate sensitivity/specificity and can be invasive and expensive. Pepsin detection in saliva has been proposed as an 'office-based' method for GORD diagnosis. The aims of this study were to establish normal values of salivary pepsin in healthy asymptomatic subjects and to determine its value to discriminate patients with reflux-related symptoms (GORD, hypersensitive oesophagus (HO)) from functional heartburn (FH). 100 asymptomatic controls and 111 patients with heartburn underwent MII-pH monitoring and simultaneous salivary pepsin determination on waking, after lunch and dinner. The cut-off value for pepsin positivity was 16 ng/mL. Patients were divided into GORD (increased acid exposure time (AET), n=58), HO (normal AET and positive Symptom Association Probability (SAP), n=26) and FH (normal AET and negative SAP, n=27). One third of asymptomatic subjects had pepsin in saliva at a low concentration (0 (0-59) ng/mL). Patients with GORD and HO had higher prevalence and pepsin concentration than controls (HO: 237 (52-311) ng/mL; GORD: 121 (29-252) ng/mL) (p<0.05). Patients with FH had low prevalence and concentration of pepsin in saliva (0 (0-40) ng/mL). A positive test had 78.6% sensitivity and 64.9% specificity for diagnosis of GORD+HO (likelihood ratio: 2.23). However, one positive sample with >210 ng/mL pepsin suggested presence of GORD+HO with 98.2% specificity (likelihood ratio: 25.1). Only 18/84 (21.4%) of GORD+HO patients had 3 negative samples. In patients with symptoms suggestive of GORD, salivary pepsin testing may complement questionnaires to assist office-based diagnosis. This may lessen the use of unnecessary antireflux therapy and the need for further invasive and expensive diagnostic methods. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  18. Numerical Demons in Monte Carlo Estimation of Bayesian Model Evidence with Application to Soil Respiration Models

    NASA Astrophysics Data System (ADS)

    Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.

    2016-12-01

    Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between goodness-of-fit and model complexity. Yet estimating BME is challenging, especially for high dimensional problems with complex sampling space. Estimating BME using Monte Carlo numerical methods is preferred, as these methods yield higher accuracy than semi-analytical solutions (e.g. Laplace approximations, BIC, KIC, etc.). However, numerical methods are prone to numerical demons arising from arithmetic underflow and round-off errors. Although a few studies have alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that finite-precision arithmetic can impose a threshold on likelihood values and on the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function is smaller than the smallest floating-point number a computer can represent) and in corrupting the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME: the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, the prior sampling arithmetic mean (AM) and the posterior sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, namely the HM, turned out to be the least vulnerable. While it is generally assumed that AM is a bias-free estimator that will approximate the true BME given sufficient computational effort, we show that arithmetic underflow can hamper AM, resulting in severe underestimation of BME. TI turned out to be the most vulnerable, resulting in BME overestimation. Finally, we show how SS can be largely invariant to rounding errors, yielding the most accurate and computationally efficient results. These results are useful for Monte Carlo simulations that estimate Bayesian model evidence.
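
    The underflow failure mode described here is easy to reproduce and to avoid: working with log-likelihoods and a log-sum-exp shift keeps the arithmetic-mean estimator finite even when every individual likelihood underflows. The sketch below uses synthetic log-likelihood values and illustrates only the numerical point, not the hydrologic models in the study.

      import numpy as np

      rng = np.random.default_rng(0)
      loglik = -750.0 + rng.normal(scale=5.0, size=10_000)   # prior-sample log-likelihoods

      naive = np.mean(np.exp(loglik))                 # exp(-750) underflows to exactly 0.0
      shift = loglik.max()                            # log-sum-exp: factor out the largest term
      log_bme = shift + np.log(np.mean(np.exp(loglik - shift)))

      print(naive)       # 0.0 -> the "severe underestimation" failure mode described above
      print(log_bme)     # a finite, usable log-evidence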

  19. Comparison of robustness to outliers between robust poisson models and log-binomial models when estimating relative risks for common binary outcomes: a simulation study.

    PubMed

    Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P

    2014-06-26

    To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
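
    A short sketch of the two models being compared, fit to synthetic data with a true risk ratio of 2, is given below. The "robust Poisson" is a Poisson GLM with a sandwich (HC) covariance and the log-binomial is a binomial GLM with a log link, so exp(coefficient) is the risk ratio in both; the statsmodels link spelling and the toy data are assumptions.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 5000
      x = rng.binomial(1, 0.5, n)                      # binary exposure
      p = 0.10 * np.where(x == 1, 2.0, 1.0)            # true risk ratio = 2
      y = rng.binomial(1, p)
      X = sm.add_constant(x)

      robust_poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
      log_binomial = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log())).fit()

      print("RR (robust Poisson):", np.exp(robust_poisson.params[1]))
      print("RR (log-binomial):  ", np.exp(log_binomial.params[1]))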

  20. Bayesian model comparison and parameter inference in systems biology using nested sampling.

    PubMed

    Pullen, Nick; Morris, Richard J

    2014-01-01

    Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling's nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multi-dimensional integral to a 1D integration over likelihood space. This approach focuses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system's behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design.
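
    To make the mechanics concrete, the sketch below runs a deliberately tiny nested-sampling loop (rejection sampling for the constrained prior draws, linear-space accumulation) on a one-dimensional problem whose evidence is known. Real implementations are far more sophisticated, and the prior, likelihood, and tuning constants are arbitrary choices for the demonstration; the Bayes factor between two models is then just the ratio of their evidences.

      import numpy as np

      rng = np.random.default_rng(0)
      lik = lambda x: np.exp(-0.5 * (x / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))  # Gaussian likelihood
      prior_draw = lambda: rng.uniform(-5, 5)                                      # uniform prior on [-5, 5]

      n_live, n_iter = 200, 1200
      live = np.array([prior_draw() for _ in range(n_live)])
      live_l = lik(live)
      Z, x_prev = 0.0, 1.0
      for i in range(1, n_iter + 1):
          worst = np.argmin(live_l)
          x_i = np.exp(-i / n_live)               # expected prior-volume shrinkage
          Z += live_l[worst] * (x_prev - x_i)     # evidence increment from the dead point
          x_prev = x_i
          while True:                             # replace the dead point: prior draw with higher likelihood
              theta = prior_draw()
              if lik(theta) > live_l[worst]:
                  live[worst], live_l[worst] = theta, lik(theta)
                  break
      Z += live_l.mean() * x_prev                 # contribution of the remaining live points

      print(Z)   # close to the analytic evidence of 0.1 (likelihood integrates to ~1, prior density 1/10)
      # A Bayes factor is simply Z_model1 / Z_model2 once both evidences are estimated.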

  1. Transfer Entropy as a Log-Likelihood Ratio

    NASA Astrophysics Data System (ADS)

    Barnett, Lionel; Bossomaier, Terry

    2012-09-01

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.

  2. Transfer entropy as a log-likelihood ratio.

    PubMed

    Barnett, Lionel; Bossomaier, Terry

    2012-09-28

    Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
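
    For the Gaussian case mentioned in the abstract, the equivalence can be demonstrated in a few lines: transfer entropy from X to Y equals half the log ratio of residual variances of an autoregression of Y without versus with the past of X, and scaling by the sample size gives the corresponding log-likelihood ratio statistic. The order-1 models and coupling strength below are arbitrary choices for the illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 20_000
      x = np.zeros(n); y = np.zeros(n)
      for t in range(1, n):
          x[t] = 0.8 * x[t - 1] + rng.normal()
          y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()   # X drives Y

      def residual_var(target, regressors):
          beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
          return (target - regressors @ beta).var()

      yt, ylag, xlag = y[1:], y[:-1], x[:-1]
      ones = np.ones_like(yt)
      var_restricted = residual_var(yt, np.column_stack([ones, ylag]))        # past of Y only
      var_full = residual_var(yt, np.column_stack([ones, ylag, xlag]))        # plus past of X

      te = 0.5 * np.log(var_restricted / var_full)   # transfer entropy estimate (nats)
      lr_stat = 2 * len(yt) * te                     # the corresponding log-likelihood ratio statistic
      print(te, lr_stat)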

  3. Screening for depression with a brief questionnaire in a primary care setting: validation of the two questions with help question (Malay version).

    PubMed

    Mohd-Sidik, Sherina; Arroll, Bruce; Goodyear-Smith, Felicity; Zain, Azhar M D

    2011-01-01

    To determine the diagnostic accuracy of the two questions with help question (TQWHQ) in the Malay language. The two questions are case-finding questions on depression, and a question on whether help is needed was added to increase the specificity of the two questions. This cross sectional validation study was conducted in a government funded primary care clinic in Malaysia. The participants included 146 consecutive women patients who were receiving no psychotropic drugs and were Malay speakers. The main outcome measures were sensitivity, specificity, and likelihood ratios of the two questions and help question. The two questions showed a sensitivity of 99% (95% confidence interval 88% to 99.9%) and a specificity of 70% (62% to 78%). The likelihood ratio for a positive test was 3.3 (2.5 to 4.5) and the likelihood ratio for a negative test was 0.01 (0.00 to 0.57). The addition of the help question to the two questions increased the specificity to 95% (89% to 98%). The two questions on depression detected most cases of depression in this study. The questions have the advantage of brevity. The addition of the help question increased the specificity of the two questions. Based on these findings, the TQWHQ can be strongly recommended for detection of depression in government primary care clinics in Malaysia. Translation did not appear to affect the validity of the TQWHQ.

  4. Recognition of depressive symptoms by physicians.

    PubMed

    Henriques, Sergio Gonçalves; Fráguas, Renério; Iosifescu, Dan V; Menezes, Paulo Rossi; Lucia, Mara Cristina Souza de; Gattaz, Wagner Farid; Martins, Milton Arruda

    2009-01-01

    To investigate the recognition of depressive symptoms of major depressive disorder (MDD) by general practitioners. MDD is underdiagnosed in medical settings, possibly because of difficulties in the recognition of specific depressive symptoms. A cross-sectional study of 316 outpatients at their first visit to a teaching general hospital. We evaluated the performance of 19 general practitioners using Primary Care Evaluation of Mental Disorders (PRIME-MD) to detect depressive symptoms and compared them to 11 psychiatrists using Structured Clinical Interview Axis I Disorders, Patient Version (SCID I/P). We measured likelihood ratios, sensitivity, specificity, and false positive and false negative frequencies. The lowest positive likelihood ratios were for psychomotor agitation/retardation (1.6) and fatigue (1.7), mostly because of a high rate of false positive results. The highest positive likelihood ratio was found for thoughts of suicide (8.5). The lowest sensitivity, 61.8%, was found for impaired concentration. The sensitivity for worthlessness or guilt in patients with medical illness was 67.2% (95% CI, 57.4-76.9%), which is significantly lower than that found in patients without medical illness, 91.3% (95% CI, 83.2-99.4%). Less adequately identified depressive symptoms were both psychological and somatic in nature. The presence of a medical illness may decrease the sensitivity of recognizing specific depressive symptoms. Programs for training physicians in the use of diagnostic tools should consider their performance in recognizing specific depressive symptoms. Such procedures could allow for the development of specific training to aid in the detection of the most misrecognized depressive symptoms.

  5. Excellent AUC for joint fluid cytology in the detection/exclusion of hip and knee prosthetic joint infection.

    PubMed

    Gallo, Jiri; Juranova, Jarmila; Svoboda, Michal; Zapletalova, Jana

    2017-09-01

    The aim of this study was to evaluate the characteristics of synovial fluid (SF) white cell count (SWCC) and neutrophil/lymphocyte percentage in the diagnosis of prosthetic joint infection (PJI) for particular threshold values. This was a prospective study of 391 patients in whom SF specimens were collected before total joint replacement revisions. SF was aspirated before joint capsule incision. The PJI diagnosis was based only on non-SF data. Receiver operating characteristic plots were constructed for the SWCC and differential counts of leukocytes in aspirated fluid. Logistic binomial regression was used to distinguish infected and non-infected cases in the combined data. PJI was diagnosed in 78 patients, and aseptic revision in 313 patients. The areas under the curve (AUC) for the SWCC, the neutrophil percentage, and the lymphocyte percentage were 0.974, 0.962, and 0.951, respectively. The optimal cut-off for PJI was 3,450 cells/μL, 74.6% neutrophils, and 14.6% lymphocytes. Positive likelihood ratios for the SWCC, neutrophil and lymphocyte percentages were 19.0, 10.4, and 9.5, respectively. Negative likelihood ratios for the SWCC, neutrophil and lymphocyte percentages were 0.06, 0.076, and 0.092, respectively. Based on AUC, the present study identified cut-off values for the SWCC and differential leukocyte count for the diagnosis of PJI. The likelihood ratio for positive/negative SWCCs can significantly change the pre-test probability of PJI.

  6. Diagnostic accuracy of history and physical examination in bacterial acute rhinosinusitis.

    PubMed

    Autio, Timo J; Koskenkorva, Timo; Närkiö, Mervi; Leino, Tuomo K; Koivunen, Petri; Alho, Olli-Pekka

    2015-07-01

    To evaluate the diagnostic accuracy of symptoms, the symptom progression pattern, and clinical signs in identifying bacterial acute rhinosinusitis (ARS). We conducted an inception cohort study among 50 military recruits with ARS. We collected symptoms daily from the onset of symptoms to approximately 10 days. At 9 to 10 days, standardized data on symptoms and physical findings were gathered. A positive culture of maxillary sinus aspirate was considered to be the reference standard for bacterial ARS. At 9 to 10 days, the presence or deterioration after 5 days of any of the symptoms could not be used to diagnose bacterial ARS. Toothache had an adequate positive likelihood ratio (positive likelihood ratio [LR+] 4.4) but was too rare to be used for screening. In contrast, several physical findings at 9 to 10 days were of more diagnostic use and frequent enough for screening. Moderate or profuse (vs. none/minimal) amount of secretion in nasal passage seen in anterior rhinoscopy satisfactorily either ruled in, if present (LR+ 3.2), or ruled out, if absent (negative likelihood ratio 0.2), bacterial ARS. If any secretion was seen in the posterior pharynx or middle meatus, the probability of bacterial ARS increased markedly (LR+ 5.3 and LR+ 11.0, respectively). We found symptoms or their change to be of little use in identifying bacterial ARS. In contrast, we observed several clinical findings after 9 to 10 days of symptoms to predict bacterial ARS quite accurately. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  7. Causation or selection - examining the relation between education and overweight/obesity in prospective observational studies: a meta-analysis.

    PubMed

    Kim, T J; Roesler, N M; von dem Knesebeck, O

    2017-06-01

    Numerous studies have investigated the association between education and overweight/obesity. Yet less is known about the relative importance of causation (i.e. the influence of education on risks of overweight/obesity) and selection (i.e. the influence of overweight/obesity on the likelihood to attain education) hypotheses. A systematic review was performed to assess the linkage between education and overweight/obesity in prospective studies in general populations. Studies were searched within five databases, and study quality was appraised with the Newcastle-Ottawa scale. In total, 31 studies were considered for meta-analysis. Regarding causation (24 studies), the lower educated had a higher likelihood (odds ratio: 1.33, 1.21-1.47) and greater risk (risk ratio: 1.34, 1.08-1.66) for overweight/obesity, when compared with the higher educated. However, these associations were no longer statistically significant when accounting for publication bias. Concerning selection (seven studies), overweight/obese individuals had a greater likelihood of lower education (odds ratio: 1.57, 1.10-2.25), when contrasted with the non-overweight or non-obese. Subgroup analyses were performed by stratifying meta-analyses upon different factors. Relationships between education and overweight/obesity were affected by study region, age groups, gender and observation period. In conclusion, it is necessary to consider both causation and selection processes in order to tackle educational inequalities in obesity appropriately. © 2017 World Obesity Federation.

  8. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  9. Meta-Analysis and Systematic Review to Assess the Role of Soluble FMS-Like Tyrosine Kinase-1 and Placenta Growth Factor Ratio in Prediction of Preeclampsia: The SaPPPhirE Study.

    PubMed

    Agrawal, Swati; Cerdeira, Ana Sofia; Redman, Christopher; Vatish, Manu

    2018-02-01

    Preeclampsia is a major cause of morbidity and mortality worldwide. Numerous candidate biomarkers have been proposed for diagnosis and prediction of preeclampsia. Measurement of maternal circulating angiogenesis biomarkers as the ratio of sFlt-1 (soluble FMS-like tyrosine kinase-1; an antiangiogenic factor) to PlGF (placental growth factor; an angiogenic factor) reflects the antiangiogenic balance that characterizes incipient or overt preeclampsia. The ratio increases before the onset of the disease and thus may help in predicting preeclampsia. We conducted a meta-analysis to explore the predictive accuracy of the sFlt-1/PlGF ratio in preeclampsia. We included 15 studies with 534 cases with preeclampsia and 19 587 controls. The ratio has a pooled sensitivity of 80% (95% confidence interval, 0.68-0.88), specificity of 92% (95% confidence interval, 0.87-0.96), positive likelihood ratio of 10.5 (95% confidence interval, 6.2-18.0), and a negative likelihood ratio of 0.22 (95% confidence interval, 0.13-0.35) in predicting preeclampsia in both high- and low-risk patients. Most of the studies did not distinguish between early- and late-onset disease, so a separate analysis by disease onset could not be performed. The ratio may prove to be a valuable screening tool for preeclampsia and may also help in decision-making, treatment stratification, and better resource allocation. © 2017 American Heart Association, Inc.

  10. Determining an anthropometric surrogate measure for identifying low birth weight babies in Uganda: a hospital-based cross sectional study

    PubMed Central

    2013-01-01

    Background Achieving Millennium Development Goal 4 is dependent on significantly reducing neonatal mortality. Low birth weight is an underlying factor in most neonatal deaths. In developing countries the missed opportunity for providing life saving care is mainly a result of failure to identify low birth weight newborns. This study aimed at identifying a reliable anthropometric measurement for screening low birth weight and determining an operational cut-off point in the Uganda setting. This simple measurement is required because of lack of weighing scales in the community, and sometimes in the health facilities. Methods This was a hospital-based cross-sectional study. Two midwives weighed 706 newborns and measured their foot length, head, chest, thigh and mid-upper arm circumferences within 24 hours after birth. Data was analysed using STATA version 10.0. Correlation with birth weight using Pearson’s correlation coefficient and Receiver Operating Characteristics curve analysis were done to determine the measure that best predicts birth weight. Sensitivity and specificity were calculated for a range of measures to obtain operational cut-off points; and Likelihood Ratios and Diagnostic Odds Ratio were determined for each cut-off point. Results Birth weights ranged from 1370–5350 grams with a mean of 3050 grams (SD 0.53) and 85 (12%) babies weighed less than 2500 grams. All anthropometric measurements had a positive correlation with birth weight, with foot length showing the strongest (r = 0.76) and thigh circumference the weakest (r = 0.62) correlations. Foot length had the highest predictive value for low birth weight (AUC = 0.97) followed by mid-upper arm circumference (AUC = 0.94). Foot length and chest circumference had the highest sensitivity (94%) and specificity (90%) respectively for screening low birth weight babies at the selected cut-off points. Chest circumference had a significantly higher positive likelihood ratio (8.7) than any other measure, and foot length had the lowest negative likelihood ratio. Chest circumference and foot length had diagnostic odds ratios of 97% and 77% respectively. Foot length was easier to measure and it involved minimal exposure of the baby to cold. A cut-off of foot length 7.9 cm had sensitivity of 94% and specificity of 83% for predicting low birth weight. Conclusions This study suggests foot length as the most appropriate predictor for low birth weight in comparison to chest, head, mid-upper arm and thigh circumference in the Uganda setting. Use of low cost and easy to use tools to identify low birth weight babies by village health teams could support community efforts to save newborns. PMID:23587297

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumway, R.H.; McQuarrie, A.D.

    Robust statistical approaches to the problem of discriminating between regional earthquakes and explosions are developed. We compare linear discriminant analysis using descriptive features like amplitude and spectral ratios with signal discrimination techniques using the original signal waveforms and spectral approximations to the log likelihood function. Robust information theoretic techniques are proposed and all methods are applied to 8 earthquakes and 8 mining explosions in Scandinavia and to an event from Novaya Zemlya of unknown origin. It is noted that signal discrimination approaches based on discrimination information and Renyi entropy perform better in the test sample than conventional methods based on spectral ratios involving the P and S phases. Two techniques for identifying the ripple-firing pattern for typical mining explosions are proposed and shown to work well on simulated data and on several Scandinavian earthquakes and explosions. We use both cepstral analysis in the frequency domain and a time domain method based on the autocorrelation and partial autocorrelation functions. The proposed approach strips off underlying smooth spectral and seasonal spectral components corresponding to the echo pattern induced by two simple ripple-fired models. For two mining explosions, a pattern is identified whereas for two earthquakes, no pattern is evident.
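
    The cepstral idea mentioned here, that an echo-like ripple-fired source shows up as a peak in the real cepstrum at the inter-shot delay, can be illustrated with a synthetic signal. The wavelet, sampling rate, and 0.4 s delay below are invented for the demonstration and have nothing to do with the report's data.

      import numpy as np

      rng = np.random.default_rng(0)
      fs = 100.0                                   # samples per second
      n = 4096
      w = rng.normal(size=n) * np.exp(-np.arange(n) / 300.0)   # toy source wavelet
      delay = int(0.4 * fs)                        # 0.4 s ripple-fire delay
      signal = w.copy()
      signal[delay:] += 0.6 * w[:-delay]           # add the delayed echo

      spectrum = np.abs(np.fft.rfft(signal))
      cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))        # real cepstrum
      quefrency = np.arange(cepstrum.size) / fs

      peak = quefrency[np.argmax(cepstrum[10:cepstrum.size // 2]) + 10]
      print(peak)                                  # close to the 0.4 s delay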

  12. Intervals for posttest probabilities: a comparison of 5 methods.

    PubMed

    Mossman, D; Berger, J O

    2001-01-01

    Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
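
    A sketch in the spirit of the simulation approach described, though not the authors' spreadsheet implementation: draw sensitivity, specificity, and pretest probability from Beta posteriors (Jeffreys priors assumed here), convert each draw to a posttest probability with Bayes' theorem, and read the interval off the percentiles. The study counts are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)
      n_draws = 100_000

      # hypothetical study counts: Beta(successes + 0.5, failures + 0.5) posteriors
      sens = rng.beta(45 + 0.5, 5 + 0.5, n_draws)     # 45/50 diseased test positive
      spec = rng.beta(90 + 0.5, 10 + 0.5, n_draws)    # 90/100 non-diseased test negative
      prev = rng.beta(30 + 0.5, 70 + 0.5, n_draws)    # 30/100 pretest prevalence

      post = sens * prev / (sens * prev + (1 - spec) * (1 - prev))   # P(disease | positive test)
      print(np.percentile(post, [2.5, 50, 97.5]))     # 95% interval and median for the posttest probability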

  13. SCI Identification (SCIDNT) program user's guide. [maximum likelihood method for linear rotorcraft models

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program Linear SCIDNT which evaluates rotorcraft stability and control coefficients from flight or wind tunnel test data is described. It implements the maximum likelihood method to maximize the likelihood function of the parameters based on measured input/output time histories. Linear SCIDNT may be applied to systems modeled by linear constant-coefficient differential equations. This restriction in scope allows the application of several analytical results which simplify the computation and improve its efficiency over the general nonlinear case.

  14. Risk Factors for Invasive Candidiasis in Infants >1500 g Birth Weight

    PubMed Central

    Lee, Jan Hau; Hornik, Christoph P.; Benjamin, Daniel K.; Herring, Amy H.; Clark, Reese H.; Cohen-Wolkowiez, Michael; Smith, P. Brian

    2012-01-01

    Background We describe the incidence, risk factors, and outcomes of invasive candidiasis in infants >1500 g birth weight. Methods We conducted a retrospective cohort study of infants >1500 g birth weight discharged from 305 NICUs in the Pediatrix Medical Group from 2001–2010. Using multivariable logistic regression, we identified risk factors for invasive candidiasis. Results Invasive candidiasis occurred in 330/530,162 (0.06%) infants. These were documented from positive cultures from ≥1 of these sources: blood (n=323), cerebrospinal fluid (n=6), or urine from catheterization (n=19). Risk factors included day of life >7 (OR 25.2; 95% CI 14.6–43.3), vaginal birth (OR 1.6 [1.2–2.3]), exposure to broad-spectrum antibiotics (OR 1.6 [1.1–2.4]), central venous line (OR 1.8 [1.3–2.6]), and platelet count <50,000/mm3 (OR 3.7 [2.1–6.7]). All risk factors had poor sensitivities, low positive likelihood ratios, and low positive predictive values. The combination of broad-spectrum antibiotics and low platelet count had the highest positive likelihood ratio (46.2), but the sensitivity of this combination was only 4%. Infants with invasive candidiasis had increased mortality (OR 2.2 [1.3–3.6]). Conclusions Invasive candidiasis is uncommon in infants >1500 g birth weight. Infants at greatest risk are those exposed to broad-spectrum antibiotics and with platelet counts of <50,000/mm3. PMID:23042050

  15. Ultrasonography guidance reduces complications and costs associated with thoracentesis procedures.

    PubMed

    Patel, Pankaj A; Ernst, Frank R; Gunnarsson, Candace L

    2012-01-01

    PURPOSE: We performed an analysis of hospitalizations involving thoracentesis procedures to determine whether the use of ultrasonographic (US) guidance is associated with differences in complications or hospital costs as compared with not using US guidance. METHODS: We used the Premier hospital database to identify patients with ICD-9 coded thoracentesis in 2008. Use of US guidance was identified using CPT-4 codes. We performed univariate and multivariable analyses of cost data and adjusted for patient demographics, hospital characteristics, patient morbidity severity, and mortality. Logistic regression models were developed for pneumothorax and hemorrhage adverse events, controlling for patient demographics, morbidity severity, mortality, and hospital size. RESULTS: Of 19,339 thoracentesis procedures, 46% were performed with US guidance. Mean total hospitalization costs were $11,786 (±$10,535) and $12,408 (±$13,157) for patients with and without US guidance, respectively (p < 0.001). Unadjusted risk of pneumothorax or hemorrhage was lower with US guidance (p = 0.019 and 0.078, respectively). Logistic regression analyses demonstrate that US guidance is associated with a 16.3% reduction in the likelihood of pneumothorax (adjusted odds ratio 0.837, 95% CI: 0.73-0.96; p = 0.014) and a 38.7% reduction in the likelihood of hemorrhage (adjusted odds ratio 0.613, 95% CI: 0.36-1.04; p = 0.071). CONCLUSIONS: US-guided thoracentesis is associated with lower total hospital stay costs and lower incidence of pneumothorax and hemorrhage. Copyright © 2011 Wiley Periodicals, Inc. J Clin Ultrasound, 2011.

  16. The evaluation of the OSGLR algorithm for restructurable controls

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.

    1986-01-01

    The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined; incorporating age-weighting into the algorithm was the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
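
    As a rough, self-contained illustration of the likelihood-ratio idea behind such failure-detection tests, the sketch below implements the plain GLR test for a jump of unknown size in the mean of Gaussian residuals (not the orthogonal-series OSGLR variant evaluated above); the noise level, threshold, and simulated failure are arbitrary assumptions.

      import numpy as np

      def glr_jump_test(residuals, sigma, threshold):
          # Generalized likelihood ratio statistic for a jump of unknown size in the
          # mean of white Gaussian residuals; returns the maximizing onset index.
          r = np.asarray(residuals, dtype=float)
          n = len(r)
          best_stat, best_k = 0.0, None
          for k in range(n):
              s = r[k:].sum()
              stat = s * s / (sigma ** 2 * (n - k))  # 2 * log likelihood ratio for onset k
              if stat > best_stat:
                  best_stat, best_k = stat, k
          return best_stat, best_k, best_stat > threshold

      # Toy usage: a constant bias appears in the residuals at sample 60.
      rng = np.random.default_rng(0)
      res = rng.normal(0.0, 1.0, 100)
      res[60:] += 1.5
      print(glr_jump_test(res, sigma=1.0, threshold=20.0))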

  17. Team climate, intention to leave and turnover among hospital employees: prospective cohort study.

    PubMed

    Kivimäki, Mika; Vanhala, Anna; Pentti, Jaana; Länsisalmi, Hannakaisa; Virtanen, Marianna; Elovainio, Marko; Vahtera, Jussi

    2007-10-23

    In hospitals, the costs of employee turnover are substantial and intentions to leave among staff may manifest as lowered performance. We examined whether team climate, as indicated by clear and shared goals, participation, task orientation and support for innovation, predicts intention to leave the job and actual turnover among hospital employees. Prospective study with baseline and follow-up surveys (2-4 years apart). The participants were 6,441 (785 men, 5,656 women) hospital employees under the age of 55 at the time of follow-up survey. Logistic regression with generalized estimating equations was used as an analysis method to include both individual and work unit level predictors in the models. Among stayers with no intention to leave at baseline, lower self-reported team climate predicted higher likelihood of having intentions to leave at follow-up (odds ratio per 1 standard deviation decrease in team climate was 1.6, 95% confidence interval 1.4-1.8). Lower co-worker-assessed team climate at follow-up was also associated with such intentions (odds ratio 1.8, 95% confidence interval 1.4-2.4). Among all participants, the likelihood of actually quitting the job was higher for those with poor self-reported team climate at baseline. This association disappeared after adjustment for intention to leave at baseline, suggesting that such intentions may explain the greater turnover rate among employees with low team climate. Improving team climate may reduce intentions to leave and turnover among hospital employees.

  18. Diagnostic accuracy: sensitivity and specificity of the ScreenAssist Lumbar Questionnaire in comparison with primary care provider tests and measures of low back pain: a pilot study

    PubMed Central

    Cunningham, Shala

    2013-01-01

    Objective: The purpose of this study was to estimate the diagnostic accuracy of the ScreenAssist Lumbar Questionnaire (SALQ) to determine the presence of non-musculoskeletal pain or emergent musculoskeletal pain, in terms of its sensitivity and specificity, when compared with the assessment and diagnosis made by primary care providers. Methods: Subjects were patients presenting to a primary care physician’s office with the main complaint of low back pain. SALQ data were collected within 24 hours of the appointment. A 2-month post-visit chart review was performed in order to compare scores and recommendations made by the questionnaire with the assessment and diagnosis made by the physician. Results: The SALQ demonstrated a sensitivity of 100% (95% CI = 0.445–1.0) and specificity of 92% (95% CI = 0.831–0.920). The negative likelihood ratio was 0.11 (95% CI = 0.01–1.54) and the positive likelihood ratio was 9.36 (95% CI = 2.78–32). If the SALQ was positive, the post-test probability was 0.60. If the SALQ was negative, the post-test probability was 0.017. Discussion: Results from this study suggest that the SALQ can be used as an adjunct to the subjective history taking in a physical therapy evaluation to assist in the recognition of non-musculoskeletal or emergent musculoskeletal conditions requiring referral. PMID:24421613
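
    The post-test probabilities quoted above follow from combining a pre-test probability with a likelihood ratio on the odds scale. A minimal sketch of that conversion is shown below; the pre-test probability of about 0.14 is an assumption back-calculated from the reported figures, not a value stated in the study.

      def post_test_probability(pre_test_prob, likelihood_ratio):
          # Convert a pre-test probability to a post-test probability via odds.
          pre_odds = pre_test_prob / (1.0 - pre_test_prob)
          post_odds = pre_odds * likelihood_ratio
          return post_odds / (1.0 + post_odds)

      # Illustrative values loosely matching the abstract above.
      print(post_test_probability(0.14, 9.36))   # ~0.60 after a positive SALQ
      print(post_test_probability(0.14, 0.11))   # ~0.018 after a negative SALQ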

  19. A maximum likelihood convolutional decoder model vs experimental data comparison

    NASA Technical Reports Server (NTRS)

    Chen, R. Y.

    1979-01-01

    This article describes the comparison of a maximum likelihood convolutional decoder (MCD) prediction model and the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model is used to develop a subroutine that has been utilized by the Telemetry Analysis Program (TAP) to compute the MCD bit error rate for a given signal-to-noise ratio. The results indicate that the TAP predictions agree well with the experimental measurements. An optimal modulation index can also be found with the TAP.

  20. Diagnostic Performance of Bronchoalveolar Lavage Fluid CD4/CD8 Ratio for Sarcoidosis: A Meta-analysis.

    PubMed

    Shen, Yongchun; Pang, Caishuang; Wu, Yanqiu; Li, Diandian; Wan, Chun; Liao, Zenglin; Yang, Ting; Chen, Lei; Wen, Fuqiang

    2016-06-01

    The usefulness of bronchoalveolar lavage fluid (BALF) CD4/CD8 ratio for diagnosing sarcoidosis has been reported in many studies with variable results. Therefore, we performed a meta-analysis to estimate the overall diagnostic accuracy of BALF CD4/CD8 ratio based on the bulk of published evidence. Studies published prior to June 2015 and indexed in PubMed, OVID, Web of Science, Scopus and other databases were evaluated for inclusion. Data on sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR) were pooled from included studies. Summary receiver operating characteristic (SROC) curves were used to summarize overall test performance. Deeks's funnel plot was used to detect publication bias. Sixteen publications with 1885 subjects met our inclusion criteria and were included in this meta-analysis. Summary estimates of the diagnostic performance of the BALF CD4/CD8 ratio were as follows: sensitivity, 0.70 (95%CI 0.64-0.75); specificity, 0.83 (95%CI 0.78-0.86); PLR, 4.04 (95%CI 3.13-5.20); NLR, 0.36 (95%CI 0.30-0.44); and DOR, 11.17 (95%CI 7.31-17.07). The area under the SROC curve was 0.84 (95%CI 0.81-0.87). There was no evidence of publication bias. Measuring the BALF CD4/CD8 ratio may assist in the diagnosis of sarcoidosis when interpreted in parallel with other diagnostic factors. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  1. Unified framework to evaluate panmixia and migration direction among multiple sampling locations.

    PubMed

    Beerli, Peter; Palczewski, Michal

    2010-05-01

    For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.
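
    The gap between the two estimators can be reproduced on a toy conjugate-normal model where the exact marginal likelihood is available in closed form. The sketch below is only an illustration of power-posterior (thermodynamic integration) and harmonic mean estimation, not the MIGRATE implementation; it draws exactly from each power posterior instead of running MCMC, and all numerical settings are arbitrary.

      import numpy as np
      from scipy.stats import multivariate_normal, norm

      rng = np.random.default_rng(1)
      n, sigma, mu0, tau0 = 50, 1.0, 0.0, 2.0
      y = rng.normal(0.7, sigma, n)                      # data from a normal model

      def loglik(theta):
          # total log-likelihood of the data for each value in the vector theta
          return norm.logpdf(y[:, None], loc=theta, scale=sigma).sum(axis=0)

      # Exact log marginal likelihood (y is jointly Gaussian under this prior).
      cov = sigma ** 2 * np.eye(n) + tau0 ** 2 * np.ones((n, n))
      exact = multivariate_normal(mean=np.full(n, mu0), cov=cov).logpdf(y)

      # Thermodynamic integration over power posteriors, proportional to L(theta)^t * prior.
      temps = np.linspace(0.0, 1.0, 21)
      path = []
      for t in temps:
          prec = 1.0 / tau0 ** 2 + t * n / sigma ** 2    # power posterior is normal here
          mean = (mu0 / tau0 ** 2 + t * y.sum() / sigma ** 2) / prec
          draws = rng.normal(mean, prec ** -0.5, 5000)
          path.append(loglik(draws).mean())
      path = np.array(path)
      ti = float(np.sum((path[1:] + path[:-1]) / 2 * np.diff(temps)))  # trapezoid rule

      # Harmonic mean estimator from posterior (t = 1) draws, computed stably in logs.
      prec = 1.0 / tau0 ** 2 + n / sigma ** 2
      mean = (mu0 / tau0 ** 2 + y.sum() / sigma ** 2) / prec
      ll = loglik(rng.normal(mean, prec ** -0.5, 5000))
      c = ll.min()
      hm = c - np.log(np.mean(np.exp(-(ll - c))))

      print(f"exact {exact:.2f}   thermodynamic {ti:.2f}   harmonic mean {hm:.2f}")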

  2. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
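
    The regularized EM algorithm above operates on a much richer multiscale mixture model; purely to illustrate the EM mechanics on count data, the sketch below fits a plain two-component Poisson mixture by maximum likelihood (the data, initialization, and number of iterations are invented for the example).

      import numpy as np
      from scipy.stats import poisson

      def poisson_mixture_em(counts, n_iter=200):
          # EM for a two-component Poisson mixture: alternate posterior
          # responsibilities (E-step) and weighted-mean rate updates (M-step).
          x = np.asarray(counts)
          pi, lam = 0.5, np.array([x.mean() * 0.5, x.mean() * 1.5])  # crude init
          for _ in range(n_iter):
              p0 = pi * poisson.pmf(x, lam[0])
              p1 = (1 - pi) * poisson.pmf(x, lam[1])
              r = p0 / (p0 + p1)                       # responsibility of component 0
              pi = r.mean()
              lam = np.array([np.sum(r * x) / np.sum(r),
                              np.sum((1 - r) * x) / np.sum(1 - r)])
          return pi, lam

      rng = np.random.default_rng(3)
      data = np.concatenate([rng.poisson(2.0, 300), rng.poisson(9.0, 700)])
      print(poisson_mixture_em(data))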

  3. Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.

    PubMed

    Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga

    2015-11-01

    Positive signal-to-noise ratios (SNRs) characterize listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this, a speech-in-noise test was developed and evaluated, in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing, as well as 24 older hearing-impaired listeners took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners conducted one of two adaptive methods which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and estimation strategy for thresholds resulted in a practical method measuring the time compression for 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability compared to younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection.

  4. Likelihood-based methods for evaluating principal surrogacy in augmented vaccine trials.

    PubMed

    Liu, Wei; Zhang, Bo; Zhang, Hui; Zhang, Zhiwei

    2017-04-01

    There is growing interest in assessing immune biomarkers, which are quick to measure and potentially predictive of long-term efficacy, as surrogate endpoints in randomized, placebo-controlled vaccine trials. This can be done under a principal stratification approach, with principal strata defined using a subject's potential immune responses to vaccine and placebo (the latter may be assumed to be zero). In this context, principal surrogacy refers to the extent to which vaccine efficacy varies across principal strata. Because a placebo recipient's potential immune response to vaccine is unobserved in a standard vaccine trial, augmented vaccine trials have been proposed to produce the information needed to evaluate principal surrogacy. This article reviews existing methods based on an estimated likelihood and a pseudo-score (PS) and proposes two new methods based on a semiparametric likelihood (SL) and a pseudo-likelihood (PL), for analyzing augmented vaccine trials. Unlike the PS method, the SL method does not require a model for missingness, which can be advantageous when immune response data are missing by happenstance. The SL method is shown to be asymptotically efficient, and it performs similarly to the PS and PL methods in simulation experiments. The PL method appears to have a computational advantage over the PS and SL methods.

  5. Centralization as a predictor of provocation discography results in chronic low back pain, and the influence of disability and distress on diagnostic power.

    PubMed

    Laslett, Mark; Oberg, Birgitta; Aprill, Charles N; McDonald, Barry

    2005-01-01

    The "centralization phenomenon" (CP) is the progressive retreat of referred pain towards the spinal midline in response to repeated movement testing (a McKenzie evaluation). A previous study suggested that it may have utility in the clinical diagnosis of discogenic pain and may assist patient selection for discography and specific treatments for disc pain. Estimation of the diagnostic predictive power of centralization and the influence of disability and patient distress on diagnostic performance, using provocation discography as a criterion standard for diagnosis, in chronic low back pain patients. This study was a prospective, blinded, concurrent, reference standard-related validity design carried out in a private radiology clinic specializing in diagnosis of chronic spinal pain. Consecutive patients with persistent low back pain were referred to the study clinic by orthopedists and other medical specialists for interventional radiological diagnostic procedures. Patients were typically disabled and displayed high levels of psychosocial distress. The sample included patients with previous lumbar surgery, and most had unsuccessful conservative therapies previously. results of provocation discography. The CP. Psychometric evaluation: Roland-Morris, Zung, Modified Somatic Perception questionnaires, Distress Risk Assessment Method, and 100-mm visual analog scales for pain intensity. Patients received a single physical therapy examination, followed by lumbar provocation discography. Sensitivity, specificity, and likelihood ratios of the CP were estimated in the group as a whole and in subgroups defined by psychometric measures. A total of 107 patients received the clinical examination and discography at two or more levels and post-discography computed tomography. Thirty-eight could not tolerate a full physical examination and were excluded from the main analysis. Disability and pain intensity ratings were high, and distress was common. Sensitivity, specificity, and positive likelihood ratios for centralization observed during repeated movement testing for pain distribution and intensity changes were 40%, 94%, and 6.9 respectively. In the presence of severe disability, sensitivity, specificity, and positive likelihood ratios were 46%, 80%, 3.2 and for distress, 45%, 89%, 4.1. In the subgroups with moderate, minimal, or no disability, sensitivity and specificity were 37%, 100% and for no or minimal distress 35%, 100%. Centralization is highly specific to positive discography but specificity is reduced in the presence of severe disability or psychosocial distress.

  6. Theory and Measurement of Partially Correlated Persistent Scatterers

    NASA Astrophysics Data System (ADS)

    Lien, J.; Zebker, H. A.

    2011-12-01

    Interferometric synthetic aperture radar (InSAR) time-series methods can effectively estimate temporal surface changes induced by geophysical phenomena. However, such methods are susceptible to decorrelation due to spatial and temporal baselines (radar pass separation), changes in orbital geometries, atmosphere, and noise. These effects limit the number of interferograms that can be used for differential analysis and obscure the deformation signal. InSAR decorrelation effects may be ameliorated by exploiting pixels that exhibit phase stability across the stack of interferograms. These so-called persistent scatterer (PS) pixels are dominated by a single point-like scatterer that remains phase-stable over the spatial and temporal baseline. By identifying a network of PS pixels for use in phase unwrapping, reliable deformation measurements may be obtained even in areas of low correlation, where traditional InSAR techniques fail to produce useful observations. PS identification is challenging in natural terrain, due to low reflectivity and few corner reflectors. Shanker and Zebker [1] proposed a PS pixel selection technique based on maximum-likelihood estimation of the associated signal-to-clutter ratio (SCR). In this study, we further develop the underlying theory for their technique, starting from statistical backscatter characteristics of PS pixels. We derive closed-form expressions for the spatial, rotational, and temporal decorrelation of PS pixels as a function of baseline and signal-to-clutter ratio. We show that previous decorrelation and critical baseline expressions [2] are limiting cases of our result. We then describe a series of radar scattering simulations and show that the simulated decorrelation matches well with our analytic results. Finally, we use our decorrelation expressions with maximum-likelihood SCR estimation to analyze an area of the Hayward Fault Zone in the San Francisco Bay Area. A series of 38 images of the area were obtained from C-band ERS radar satellite passes between May 1995 and December 2000. We show that the interferogram stack exhibits PS decorrelation trends in agreement with our analytic results. References 1. P. Shanker and H. Zebker, "Persistent scatterer selection using maximum likelihood estimation," Geophysical Research Letters, Vol. 34, L22301, 2007. 2. H. Zebker and J. Villasenor, "Decorrelation in Interferometric Radar Echos," IEEE Transactions on Geoscience and Remote Sensing, Vol. 30, No. 5, Sept. 1992.
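
    A quick Monte Carlo sketch (not the closed-form expressions derived in the work above) suggests how the coherence of a persistent-scatterer pixel depends on its signal-to-clutter ratio when only the clutter changes between acquisitions; the unit-amplitude scatterer, circular Gaussian clutter model, and sample sizes are assumptions made for the illustration.

      import numpy as np

      rng = np.random.default_rng(4)

      def simulated_coherence(scr, n_pixels=20000):
          # Pixel = stable unit-amplitude point scatterer + circular Gaussian
          # clutter whose total power is set by the signal-to-clutter ratio (SCR).
          clutter_power = 1.0 / scr
          s = clutter_power ** 0.5 / 2 ** 0.5            # per-quadrature std. dev.
          c1 = rng.normal(0, s, n_pixels) + 1j * rng.normal(0, s, n_pixels)
          c2 = rng.normal(0, s, n_pixels) + 1j * rng.normal(0, s, n_pixels)
          y1, y2 = 1.0 + c1, 1.0 + c2                    # two acquisitions, new clutter
          num = np.mean(y1 * np.conj(y2))
          den = np.sqrt(np.mean(np.abs(y1) ** 2) * np.mean(np.abs(y2) ** 2))
          return abs(num / den)

      for scr in (1, 3, 10, 30):
          print(scr, round(simulated_coherence(scr), 3))  # approaches SCR / (SCR + 1)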

  7. info-gibbs: a motif discovery algorithm that directly optimizes information content during sampling.

    PubMed

    Defrance, Matthieu; van Helden, Jacques

    2009-10-15

    Discovering cis-regulatory elements in genome sequence remains a challenging issue. Several methods rely on the optimization of some target scoring function. The information content (IC) or relative entropy of the motif has proven to be a good estimator of transcription factor DNA binding affinity. However, these information-based metrics are usually used as a posteriori statistics rather than during the motif search process itself. We introduce here info-gibbs, a Gibbs sampling algorithm that efficiently optimizes the IC or the log-likelihood ratio (LLR) of the motif while keeping computation time low. The method compares well with existing methods like MEME, BioProspector, Gibbs or GAME on both synthetic and biological datasets. Our study shows that motif discovery techniques can be enhanced by directly focusing the search on the motif IC or the motif LLR. http://rsat.ulb.ac.be/rsat/info-gibbs
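
    To make the two target scores concrete, the sketch below computes the information content and an approximate log-likelihood ratio for a toy DNA motif from a few aligned sites; it is a scoring illustration only, not the info-gibbs sampler, and the example sites, pseudocount, and uniform background are assumptions.

      import numpy as np

      def motif_scores(sites, background=None, pseudocount=0.5):
          # Information content (bits) and approximate log-likelihood ratio (nats)
          # of an ungapped DNA motif given its aligned binding sites.
          alphabet = "ACGT"
          sites = [s.upper() for s in sites]
          n, w = len(sites), len(sites[0])
          counts = np.full((4, w), pseudocount)
          for s in sites:
              for j, base in enumerate(s):
                  counts[alphabet.index(base), j] += 1
          freqs = counts / counts.sum(axis=0)            # position weight matrix
          bg = np.full(4, 0.25) if background is None else np.asarray(background)
          ic = np.sum(freqs * np.log2(freqs / bg[:, None]))       # bits
          llr = n * np.sum(freqs * np.log(freqs / bg[:, None]))   # approx. LLR, nats
          return ic, llr

      print(motif_scores(["TGACTCA", "TGAGTCA", "TGACTCA", "TTAGTCA"]))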

  8. Factors Associated With Breastfeeding Duration Among Connecticut Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) Participants

    PubMed Central

    Haughton, Jannett; Gregorio, David; Pérez-Escamilla, Rafael

    2011-01-01

    This retrospective study aimed to identify factors associated with breastfeeding duration among women enrolled in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) of Hartford, Connecticut. The authors included mothers whose children were younger than 5 years and had stopped breastfeeding (N = 155). Women who had planned their pregnancies were twice as likely as those who did not plan them to breastfeed for more than 6 months (odds ratio, 2.15; 95% confidence interval, 1.00–4.64). One additional year of maternal age was associated with a 9% increase in the likelihood of breastfeeding for more than 6 months (odds ratio, 1.09; 95% confidence interval, 1.02–1.17). Time in the United States was inversely associated with the likelihood of breastfeeding for more than 6 months (odds ratio, 0.96; 95% confidence interval, 0.92–0.99). Return to work, sore nipples, lack of access to breast pumps, and free formula provided by WIC were identified as breastfeeding barriers. Findings can help WIC improve its breastfeeding promotion efforts. PMID:20689103

  9. The effect of mis-specification on mean and selection between the Weibull and lognormal models

    NASA Astrophysics Data System (ADS)

    Jia, Xiang; Nadarajah, Saralees; Guo, Bo

    2018-02-01

    The lognormal and Weibull models are commonly used to analyse data. Although selection procedures have been extensively studied, it is possible that the lognormal model could be selected when the true model is Weibull, or vice versa. As the mean is important in applications, we focus on the effect of mis-specification on the mean. The effect on the lognormal mean is first considered when a lognormal sample is wrongly fitted by a Weibull model. The maximum likelihood estimate (MLE) and quasi-MLE (QMLE) of the lognormal mean are obtained based on the lognormal and Weibull models. The impact is then evaluated by computing the ratio of biases and the ratio of mean squared errors (MSEs) between the MLE and the QMLE. For completeness, the theoretical results are demonstrated by simulation studies. Next, the effect of the reverse mis-specification on the Weibull mean is discussed. It is found that the ratio of biases and the ratio of MSEs are independent of the location and scale parameters of the lognormal and Weibull models. The influence could be ignored if some special conditions hold. Finally, a model selection method is proposed by comparing the ratios of biases and MSEs. We also present a published dataset to illustrate the study.
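
    A minimal numerical sketch of this mis-specification setting (not the bias- and MSE-ratio selection rule proposed in the paper): data are drawn from a lognormal model, both candidate models are fitted by maximum likelihood, and the resulting estimates of the mean are compared, with a plain maximized log-likelihood comparison shown as a baseline selection rule. All parameter values are arbitrary.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      data = rng.lognormal(mean=1.0, sigma=0.6, size=200)   # true model: lognormal

      # Fit both candidate models by maximum likelihood (location fixed at 0).
      ln_shape, _, ln_scale = stats.lognorm.fit(data, floc=0)
      wb_shape, _, wb_scale = stats.weibull_min.fit(data, floc=0)

      # Mean under each fitted model: the MLE of the mean vs the quasi-MLE
      # obtained from the (mis-specified) Weibull fit.
      mean_mle = stats.lognorm(ln_shape, scale=ln_scale).mean()
      mean_qmle = stats.weibull_min(wb_shape, scale=wb_scale).mean()

      # Baseline selection rule: keep the model with the larger maximized log-likelihood.
      ll_ln = stats.lognorm.logpdf(data, ln_shape, scale=ln_scale).sum()
      ll_wb = stats.weibull_min.logpdf(data, wb_shape, scale=wb_scale).sum()
      print(mean_mle, mean_qmle, "lognormal" if ll_ln > ll_wb else "Weibull")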

  10. The sensitivity, specificity, predictive values, and likelihood ratios of fecal occult blood test for the detection of colorectal cancer in hospital settings.

    PubMed

    Elsafi, Salah H; Alqahtani, Norah I; Zakary, Nawaf Y; Al Zahrani, Eidan M

    2015-01-01

    To study, for the first time in Saudi Arabia, the performance of a single application of two fecal occult blood tests, compared with colonoscopy, for the detection of colorectal cancer (CRC), and to determine possible implications for the anticipated colorectal screening program. We compared the performance of guaiac and immunochemical fecal occult blood tests for the detection of CRC among patients of 50-74 years old attending two hospitals in the Eastern Region of Saudi Arabia. Samples of feces were collected from 257 asymptomatic patients and 20 cases of confirmed CRC, and they were tested simultaneously by the guaiac-based occult blood test and a monoclonal antibody-based fecal immunochemical test (FIT) kit. Colonoscopy was performed on all participants and the results were statistically analyzed against both positive and negative occult blood tests of both methods. Of the 277 subjects, 79 tested positive for occult blood with at least one method. Overall, the number of those with an occult blood-positive result by both tests was 39 (14.1%), while for 198 (71.5%), both tests were negative (P<0.0001); 40 (14.4%) samples showed a discrepant result. Colonoscopy data were obtained for all 277 patients. A total of three invasive cancers were detected among the screening group. Of the three, the guaiac test detected two cases, while the immunochemical test detected three of them. Of the 20 control cases, the guaiac test detected 13 CRC cases (P=0.03), while the immunochemical test detected 16 of them (P<0.0001). The sensitivity of the guaiac and immunochemical tests for the detection of CRC in the screening group was 50.00% (95% confidence interval [CI] =6.76-93.24) and 75.00% (95% CI =19.41-99.37), respectively. For comparison, the sensitivity of the guaiac fecal occult blood test for detecting CRC among the control group was 65.00% (95% CI =40.78-84.61) while that of FIT was 80.00% (95% CI =56.34-94.27). The specificity of the guaiac and immunoassay tests was 77.87% (95% CI =72.24-82.83) and 90.12% (95% CI =85.76-93.50), respectively. The positive likelihood ratio of the guaiac and immunochemical tests for the detection of CRC was 2.26 (95% CI =0.83-6.18) and 7.59 (95% CI =3.86-14.94), whereas the negative likelihood ratio was 0.64 (95% CI =0.24-1.71) and 0.28 (95% CI =0.05-1.52), respectively. The positive predictive values of the guaiac and immunochemical tests were 3.45% (95% CI =0.426-11.91) and 10.71% (95% CI =2.27-28.23), respectively. There was no marked difference in the negative predictive values for both methods. The sensitivity of the fecal occult blood test by FIT was significantly higher for stages III and IV colorectal cancer than for stages I and II (P=0.01), whereas the difference was not significant for the guaiac fecal occult blood test (P=0.07). In areas where other advanced screening methods for CRC are not feasible, the use of FIT can be considered.

  11. Identifying disease polymorphisms from case-control genetic association data.

    PubMed

    Park, L

    2010-12-01

    In case-control association studies, it is typical to observe several associated polymorphisms in a gene region. Often the most significantly associated polymorphism is considered to be the disease polymorphism; however, it is not clear whether it is the disease polymorphism or there is more than one disease polymorphism in the gene region. Currently, there is no method that can handle these problems based on the linkage disequilibrium (LD) relationship between polymorphisms. To distinguish real disease polymorphisms from markers in LD, a method that can detect disease polymorphisms in a gene region has been developed. Relying on the LD between polymorphisms in controls, the proposed method utilizes model-based likelihood ratio tests to find disease polymorphisms. This method shows reliable Type I and Type II error rates when sample sizes are large enough, and works better with re-sequenced data. Applying this method to fine mapping using re-sequencing or dense genotyping data would provide important information regarding the genetic architecture of complex traits.

  12. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  13. Choosing relatives for DNA identification of missing persons.

    PubMed

    Ge, Jianye; Budowle, Bruce; Chakraborty, Ranajit

    2011-01-01

    DNA-based analysis is integral to missing person identification cases. When direct references are not available, indirect relative references can be used to identify missing persons by kinship analysis. Generally, more reference relatives render greater accuracy of identification. However, it is costly to type multiple references. Thus, at times, decisions may need to be made on which relatives to type. In this study, pedigrees for 37 common reference scenarios with 13 CODIS STRs were simulated to rank the information content of different combinations of relatives. The results confirm that first-order relatives (such as parents) are the most informative references for identifying missing persons; full siblings are also informative. Less genetic dependence between references provides a higher likelihood ratio on average. Distant relatives may not be helpful when only autosomal markers are used, but lineage-based Y-chromosome and mitochondrial DNA markers can increase the likelihood ratio or serve as filters to exclude putative relationships. © 2010 American Academy of Forensic Sciences.
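
    As a deliberately simplified illustration of kinship likelihood ratios, the sketch below computes the single-locus parent-child index for a duo against the hypothesis that the two are unrelated; the genotypes and allele frequencies are hypothetical, and real casework combines many loci and additional relationship hypotheses.

      def transmit_prob(allele, genotype):
          # Probability that a person with the given genotype transmits the allele.
          return genotype.count(allele) / 2.0

      def parent_child_lr(child, alleged_parent, freq):
          # Single-locus likelihood ratio for "alleged parent is the true parent"
          # versus "unrelated", for a parent-child duo.  Genotypes are 2-tuples of
          # alleles; freq maps allele -> population frequency.
          a, b = child
          if a == b:
              num = transmit_prob(a, alleged_parent) * freq[a]
              den = freq[a] ** 2
          else:
              num = (transmit_prob(a, alleged_parent) * freq[b]
                     + transmit_prob(b, alleged_parent) * freq[a])
              den = 2 * freq[a] * freq[b]
          return num / den

      # Hypothetical allele frequencies at one STR locus.
      freq = {"11": 0.21, "12": 0.33, "13": 0.12}
      print(parent_child_lr(("11", "12"), ("11", "13"), freq))   # shares allele 11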

  14. Bayesian framework for the evaluation of fiber evidence in a double murder--a case report.

    PubMed

    Causin, Valerio; Schiavone, Sergio; Marigo, Antonio; Carresi, Pietro

    2004-05-10

    Fiber evidence found on a suspect vehicle was the only useful trace to reconstruct the dynamics of the transportation of two corpses. Optical microscopy, UV-Vis microspectrophotometry and infrared analysis were employed to compare fibers recovered in the trunk of a car to those of the blankets composing the wrapping in which the victims had been hidden. A "pseudo-1:1" taping made it possible to reconstruct the spatial distribution of the traces and further strengthened the support for one of the hypotheses. The likelihood ratio (LR) was calculated in order to quantify the support given by the forensic evidence to the proposed explanations. A generalization of the LR equation to analogous cases has been derived. Fibers were the only traces that helped corroborate the crime scenario, in the absence of any DNA, fingerprint, or ballistic evidence.

  15. Use and interpretation of logistic regression in habitat-selection studies

    USGS Publications Warehouse

    Keating, Kim A.; Cherry, Steve

    2004-01-01

     Logistic regression is an important tool for wildlife habitat-selection studies, but the method frequently has been misapplied due to an inadequate understanding of the logistic model, its interpretation, and the influence of sampling design. To promote better use of this method, we review its application and interpretation under 3 sampling designs: random, case-control, and use-availability. Logistic regression is appropriate for habitat use-nonuse studies employing random sampling and can be used to directly model the conditional probability of use in such cases. Logistic regression also is appropriate for studies employing case-control sampling designs, but careful attention is required to interpret results correctly. Unless bias can be estimated or probability of use is small for all habitats, results of case-control studies should be interpreted as odds ratios, rather than probability of use or relative probability of use. When data are gathered under a use-availability design, logistic regression can be used to estimate approximate odds ratios if probability of use is small, at least on average. More generally, however, logistic regression is inappropriate for modeling habitat selection in use-availability studies. In particular, using logistic regression to fit the exponential model of Manly et al. (2002:100) does not guarantee maximum-likelihood estimates, valid probabilities, or valid likelihoods. We show that the resource selection function (RSF) commonly used for the exponential model is proportional to a logistic discriminant function. Thus, it may be used to rank habitats with respect to probability of use and to identify important habitat characteristics or their surrogates, but it is not guaranteed to be proportional to probability of use. Other problems associated with the exponential model also are discussed. We describe an alternative model based on Lancaster and Imbens (1996) that offers a method for estimating conditional probability of use in use-availability studies. Although promising, this model fails to converge to a unique solution in some important situations. Further work is needed to obtain a robust method that is broadly applicable to use-availability studies.

  16. Assessing compatibility of direct detection data: halo-independent global likelihood analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.

    2016-10-18

    We present two different halo-independent methods to assess the compatibility of several direct dark matter detection data sets for a given dark matter model using a global likelihood consisting of at least one extended likelihood and an arbitrary number of Gaussian or Poisson likelihoods. In the first method we find the global best fit halo function (we prove that it is a unique piecewise constant function with a number of down steps smaller than or equal to a maximum number that we compute) and construct a two-sided pointwise confidence band at any desired confidence level, which can then be compared with those derived from the extended likelihood alone to assess the joint compatibility of the data. In the second method we define a “constrained parameter goodness-of-fit” test statistic, whose p-value we then use to define a “plausibility region” (e.g. where p≥10%). For any halo function not entirely contained within the plausibility region, the level of compatibility of the data is very low (e.g. p<10%). We illustrate these methods by applying them to CDMS-II-Si and SuperCDMS data, assuming dark matter particles with elastic spin-independent isospin-conserving interactions or exothermic spin-independent isospin-violating interactions.

  17. Race and weight change in US women: the roles of socioeconomic and marital status.

    PubMed Central

    Kahn, H S; Williamson, D F; Stevens, J A

    1991-01-01

    BACKGROUND. The prevalence of overweight among Black women in the US is higher than among White women, but the causes are unknown. METHODS. We examined the weight change for 514 Black and 2,770 White women who entered the first Health and Nutrition Examination Survey (1971-75) at ages 25-44 years and were weighed again a decade later. We used multivariate analyses to estimate the weight-change effects associated with race, family income, education, and marital change. RESULTS. After multiple adjustments, Black race, education below college level, and becoming married during the follow-up interval were each independently associated with an increased mean weight change. Using multivariate logistic analyses, Black race was not independently associated with an increased risk of major weight gain (change greater than or equal to +13 kg), but it was associated with a reduced likelihood of major weight loss (change less than or equal to -7 kg) (odds ratio = 0.64 [95% CI 0.41, 0.97]). Very low family income was independently associated with the likelihood of both major weight gain (OR = 1.71 [95% CI 1.15, 2.55]) and major weight loss (OR = 1.86 [95% CI 1.18, 2.95]). CONCLUSIONS. Among US women, Black race is independently associated with a reduced likelihood of major weight loss, but not with major weight gain. Women at greatest risk of weight gain are those with education below college level, those entering marriage, and those with very low family income. PMID:2036117

  18. Support for Relatives Bereaved by Psychiatric Patient Suicide: National Confidential Inquiry Into Suicide and Homicide Findings.

    PubMed

    Pitman, Alexandra L; Hunt, Isabelle M; McDonnell, Sharon J; Appleby, Louis; Kapur, Navneet

    2017-04-01

    International suicide prevention strategies recommend providing support to families bereaved by suicide. The study objectives were to measure the proportion of cases in which psychiatric professionals contact next of kin after a patient's suicide and to investigate whether specific, potentially stigmatizing patient characteristics influence whether the family is contacted. Annual survey data from England and Wales (2003-2012) were used to identify 11,572 suicide cases among psychiatric patients. Multivariate regression analysis was used to describe the association between specific covariates (chosen on the basis of clinical judgment and the published literature) and the probability that psychiatric staff would contact bereaved relatives of the deceased. Relatives were not contacted after the death in 33% of cases. Contrary to the hypothesis, a violent method of suicide was independently associated with greater likelihood of contact with relatives (adjusted odds ratio=1.67). Four patient factors (forensic history, unemployment, and primary diagnosis of alcohol or drug dependence or misuse) were independently associated with less likelihood of contact with relatives. Patients' race-ethnicity and recent alcohol or drug misuse were not associated with contact with relatives. Four stigmatizing patient-related factors reduced the likelihood of contacting next of kin after patient suicide, suggesting inequitable access to support after a potentially traumatic bereavement. Given the association of suicide bereavement with suicide attempt, and the possibility of relatives' shared risk factors for suicide, British psychiatric services should provide more support to relatives after patient suicide.

  19. Relationship between two indicators of coronary risk estimated by the Framingham Risk Score and the number of metabolic syndrome components in Japanese male manufacturing workers.

    PubMed

    Kawada, Tomoyuki; Otsuka, Toshiaki; Inagaki, Hirofumi; Wakayama, Yoko; Li, Qing; Katsumata, Masao

    2009-10-01

    The Framingham Risk Score (FRS) has frequently been used in the United States to predict the 10-year risk of coronary heart disease (CHD). Components of the metabolic syndrome and several lifestyle factors have also been evaluated to estimate the risk of CHD. To determine the relationship between the FRS and components of metabolic syndrome as coronary risk indicators, the authors conducted a cross-sectional study of 2,619 Japanese male workers, ranging in age from 40 to 64 years, at a single workplace. Although the estimation by the FRS and metabolic syndrome involved some different factors, a significant association between the risks estimated by the 2 methods was observed. When logistic regression analysis was conducted with adjustment for several lifestyle factors, the FRS and serum insulin were found to be significantly associated with the likelihood of metabolic syndrome. The odds ratios and 95% confidence intervals for the prediction of metabolic syndrome were 2.50 (2.17-2.88) per standard deviation increment of the FRS and 1.24 (1.20-1.27) per 1 microIU/mL increase in serum insulin, respectively. A preventive effect against the likelihood of metabolic syndrome was also observed for abstaining from drinking every day and for eating breakfast almost daily. In conclusion, the FRS and insulin were found to be significantly associated with the likelihood of metabolic syndrome, even after controlling for weight change.

  20. Physical and cognitive capability in mid-adulthood as determinants of retirement and extended working life in a British cohort study.

    PubMed

    Stafford, Mai; Cooper, Rachel; Cadar, Dorina; Carr, Ewan; Murray, Emily; Richards, Marcus; Stansfeld, Stephen; Zaninotto, Paola; Head, Jenny; Kuh, Diana

    2017-01-01

    Objective: Policy in many industrialized countries increasingly emphasizes extended working life. We examined associations between physical and cognitive capability in mid-adulthood and work in late adulthood. Methods: Using self-reported physical limitations and performance-based physical and cognitive capability at age 53, assessed by trained nurses from the Medical Research Council (MRC) National Survey of Health and Development, we examined prospective associations with extended working (captured by age at and reason for retirement from the main occupation, bridge employment in paid work after retirement from the main occupation, and voluntary work participation) up to age 68 among >2000 men and women. Results: The number of reported physical limitations at age 53 was associated with a higher likelihood of retiring for negative reasons and a lower likelihood of participating in bridge employment, adjusted for occupational class, education, partner's employment, work disability at age 53, and gender. Better performance on physical and cognitive tests was associated with a greater likelihood of participating in bridge or voluntary work. Cognitive capability in the top 10% compared with the middle 80% of the distribution was associated with an odds ratio of bridge employment of 1.71 [95% confidence interval (95% CI) 1.21-2.42]. Conclusions: The possibility of an extended working life is less likely to be realized by those with poorer midlife physical or cognitive capability, independently of education and social class. Interventions to promote capability, starting in mid-adulthood or earlier, could have long-term consequences for extending working lives.

  1. Diagnostic performance of instantaneous wave-free ratio for the evaluation of coronary stenosis severity confirmed by fractional flow reserve: A PRISMA-compliant meta-analysis of randomized studies.

    PubMed

    Man, Wanrong; Hu, Jianqiang; Zhao, Zhijing; Zhang, Mingming; Wang, Tingting; Lin, Jie; Duan, Yu; Wang, Ling; Wang, Haichang; Sun, Dongdong; Li, Yan

    2016-09-01

    The instantaneous wave-free ratio (iFR) is a new vasodilator-free index of coronary stenosis severity. The aim of this meta-analysis is to assess the diagnostic performance of iFR for the evaluation of coronary stenosis severity, with fractional flow reserve as the reference standard. We searched PubMed, EMBASE, CENTRAL, ProQuest, Web of Science, and the International Clinical Trials Registry Platform (ICTRP) for publications concerning the diagnostic value of iFR. We used a random-effects model to synthesize the available data on sensitivity, specificity, positive likelihood ratio (LR+), negative likelihood ratio (LR-), and diagnostic odds ratio (DOR). Overall test performance was summarized by the summary receiver operating characteristic (sROC) curve and the area under the curve (AUC). Eight studies with 1611 subjects were included in the meta-analysis. The pooled sensitivity, specificity, LR+, LR-, and DOR for iFR were, respectively, 73.3% (70.1-76.2%), 86.4% (84.3-88.3%), 5.71 (4.43-7.37), 0.29 (0.22-0.38), and 20.54 (16.11-26.20). The area under the summary receiver operating characteristic curve for iFR was 0.8786. No publication bias was identified. The available evidence suggests that iFR may be a new, simple, and promising technology for coronary stenosis physiological assessment.

  2. Consistency of Rasch Model Parameter Estimation: A Simulation Study.

    ERIC Educational Resources Information Center

    van den Wollenberg, Arnold L.; And Others

    1988-01-01

    The unconditional (simultaneous) maximum likelihood (UML) estimation procedure for the one-parameter logistic model produces biased estimators. The UML method is inconsistent and is not a good alternative to the conditional maximum likelihood method, at least with small numbers of items. The minimum chi-square estimation procedure produces unbiased…

  3. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    PubMed

    Clavel, Julien; Aristide, Leandro; Morlon, Hélène

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from low statistical performance as the number of traits p approaches the number of species n, and because some computational complications occur when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p, small-n scenario, but their use and performance are limited. Here we develop a penalized likelihood framework to deal with high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and to model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows for their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the proposed framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.

  4. Accuracy of Urine Color to Detect Equal to or Greater Than 2% Body Mass Loss in Men.

    PubMed

    McKenzie, Amy L; Muñoz, Colleen X; Armstrong, Lawrence E

    2015-12-01

    Clinicians and athletes can benefit from field-expedient measurement tools, such as urine color, to assess hydration state; however, the diagnostic efficacy of this tool has not been established. To determine the diagnostic accuracy of urine color assessment to distinguish a hypohydrated state (≥2% body mass loss [BML]) from a euhydrated state (<2% BML) after exercise in a hot environment. Controlled laboratory study. Environmental chamber in a laboratory. Twenty-two healthy men (age = 22 ± 3 years, height = 180.4 ± 8.7 cm, mass = 77.9 ± 12.8 kg, body fat = 10.6% ± 4.6%). Participants cycled at 68% ± 6% of their maximal heart rates in a hot environment (36°C ± 1°C) for 5 hours or until 5% BML was achieved. At the point of each 1% BML, we assessed urine color. Diagnostic efficacy of urine color was assessed using receiver operating characteristic curve analysis, sensitivity, specificity, and likelihood ratios. Urine color was useful as a diagnostic tool to identify hypohydration after exercise in the heat (area under the curve = 0.951, standard error = 0.022; P < .001). A urine color of 5 or greater identified BML ≥2% with 88.9% sensitivity and 84.8% specificity (positive likelihood ratio = 5.87, negative likelihood ratio = 0.13). Under the conditions of acute dehydration due to exercise in a hot environment, urine color assessment can be a valid, practical, inexpensive tool for assessing hydration status. Researchers should examine the utility of urine color to identify a hypohydrated state under different BML conditions.

  5. Accuracy of clinical diagnosis versus the World Health Organization case definition in the Amoy Garden SARS cohort.

    PubMed

    Wong, W N; Sek, Antonio C H; Lau, Rick F L; Li, K M; Leung, Joe K S; Tse, M L; Ng, Andy H W; Stenstrom, Robert

    2003-11-01

    To compare the diagnostic accuracy of emergency department (ED) physicians with the World Health Organization (WHO) case definition in a large community-based SARS (severe acute respiratory syndrome) cohort. This was a cohort study of all patients from Hong Kong's Amoy Garden complex who presented to an ED SARS screening clinic during a 2-month outbreak. Clinical findings and WHO case definition criteria were recorded, along with ED diagnoses. Final diagnoses were established independently based on relevant diagnostic tests performed after the ED visit. Emergency physician diagnostic accuracy was compared with that of the WHO SARS case definition. Sensitivity, specificity, predictive values and likelihood ratios were calculated using standard formulae. During the study period, 818 patients presented with SARS-like symptoms, including 205 confirmed SARS, 35 undetermined SARS and 578 non-SARS. Sensitivity, specificity and accuracy were 91%, 96% and 94% for ED clinical diagnosis, versus 42%, 86% and 75% for the WHO case definition. Positive likelihood ratios (LR+) were 21.1 for physician judgement and 3.1 for the WHO criteria. Negative likelihood ratios (LR-) were 0.10 for physician judgement and 0.67 for the WHO criteria, indicating that clinician judgement was a much more powerful predictor than the WHO criteria. Physician clinical judgement was more accurate than the WHO case definition. Reliance on the WHO case definition as a SARS screening tool may lead to an unacceptable rate of misdiagnosis. The SARS case definition must be revised if it is to be used as a screening tool in emergency departments and primary care settings.
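
    The "standard formulae" referred to above can be written compactly; the sketch below evaluates them from a 2x2 table of hypothetical counts (not the study's actual data).

      def diagnostic_metrics(tp, fp, fn, tn):
          # Standard diagnostic accuracy formulae from a 2x2 table.
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          ppv = tp / (tp + fp)
          npv = tn / (tn + fn)
          lr_pos = sens / (1 - spec)
          lr_neg = (1 - sens) / spec
          dor = lr_pos / lr_neg            # diagnostic odds ratio
          return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                      lr_pos=lr_pos, lr_neg=lr_neg, dor=dor)

      # Hypothetical counts for illustration only.
      print(diagnostic_metrics(tp=90, fp=20, fn=10, tn=380))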

  6. Prevalence of Visual Impairment among 4- to 6-years-old Children in Khon Kaen City Municipality, Thailand.

    PubMed

    Wongwai, Phanthipha; Anupongongarch, Pacharapan; Suwannaraj, Sirinya; Asawaphureekorn, Somkiat

    2016-08-01

    To evaluate the prevalence of visual impairment in children aged four to six years in Khon Kaen City Municipality, Thailand. The visual acuity test was performed on 1,286 children in kindergarten schools located in Khon Kaen Municipality. The first test of visual acuity was done by trained teachers and the second test by a pediatric ophthalmologist. The prevalence of visual impairment from both tests was recorded, along with the sensitivity, specificity, likelihood ratios, and predictive values of the test by teachers. The causes of visual impairment were also recorded. There were 39 children with visual impairment from the test by the teachers and 12 children from the test by the ophthalmologist. Myopia was the only cause of visual impairment. The mean spherical equivalent was 1.375 diopters (SD = 0.53) and the median spherical equivalent was 1.375 diopters (minimum = 0.5, maximum = 4). The detection of visual impairment by trained teachers had a sensitivity of 1.00 (95% CI 0.76-1.00), specificity of 0.98 (95% CI 0.97-0.99), likelihood ratio for a positive test of 44.58 (95% CI 30.32-65.54), likelihood ratio for a negative test of 0.04 (95% CI 0.003-0.60), positive predictive value of 0.31 (95% CI 0.19-0.47), and negative predictive value of 1.00 (95% CI 0.99-1.00). The prevalence of visual impairment among children aged four to six years was 0.9%. Trained teachers can serve as examiners for screening purposes.

  7. A data fusion approach to indications and warnings of terrorist attacks

    NASA Astrophysics Data System (ADS)

    McDaniel, David; Schaefer, Gregory

    2014-05-01

    Indications and Warning (I&W) of terrorist attacks, particularly IED attacks, require detection of networks of agents and patterns of behavior. Social Network Analysis tries to detect a network; activity analysis tries to detect anomalous activities. This work builds on both to detect elements of an activity model of terrorist attack activity - the agents, resources, networks, and behaviors. The activity model is expressed as RDF triple statements in which the tuple positions are elements or subsets of a formal ontology for activity models. The advantage of a model is that elements are interdependent and evidence for or against one will influence others, so that there is a multiplier effect. The advantage of the formality is that detection could occur hierarchically, that is, at different levels of abstraction. The model matching is expressed as a likelihood ratio between input text and the model triples. The likelihood ratio is designed to be analogous to the track correlation likelihood ratios common in JDL fusion level 1. This required development of a semantic distance metric for positive and null hypotheses as well as for complex objects. The metric uses the Web 1-terabyte database of one- to five-gram frequencies for priors. This size requires the use of big data technologies, so a Hadoop cluster is used in conjunction with OpenNLP natural language processing and Mahout clustering software. Distributed data fusion MapReduce jobs distribute parts of the data fusion problem to the Hadoop nodes. For the purposes of this initial testing, open source models and text inputs of similar complexity to terrorist events were used as surrogates for the intended counter-terrorist application.

  8. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  9. Return to military weight standards after pregnancy in active duty working women: comparison of marine corps vs. navy.

    PubMed

    Greer, Joy A; Zelig, Craig M; Choi, Kenny K; Rankins, Nicole Calloway; Chauhan, Suneet P; Magann, Everett F

    2012-08-01

    To compare the likelihood of being within weight standards before and after pregnancy between United States Marine Corps (USMC) and Navy (USN) active duty women (ADW). ADW with singleton gestations who delivered at a USMC base were followed for 6 months to determine likelihood of returning to military weight standards. Odds ratio (OR), adjusted odds ratio (AOR) and 95% confidence intervals were calculated; p < 0.05 was considered significant. Similar proportions of USN and USMC ADW were within body weight standards one year prior to pregnancy (79%, 97%) and at first prenatal visit (69%, 96%), respectively. However, USMC ADW were significantly more likely to be within body weight standards at 3 months (AOR 4.30,1.28-14.43) and 6 months after delivery (AOR 9.94, 1.53-64.52) than USN ADW. Weight gained during pregnancy did not differ significantly for the two groups (40.4 lbs vs 44.2 lbs, p = 0.163). The likelihood of spontaneous vaginal delivery was significantly higher (OR 2.52, 1.20-5.27) and the mean birth weight was significantly lower (p = 0.0036) among USMC ADW as compared to USN ADW. Being within weight standards differs significantly for USMC and USN ADW after pregnancy.

  10. Female breast symptoms in patients attended in the family medicine practice.

    PubMed

    González-Pérez, Brian; Salas-Flores, Ricardo; Sosa-López, María Lucero; Barrientos-Guerrero, Carlos Eduardo; Hernández-Aguilar, Claudia Magdalena; Gómez-Contreras, Diana Edith; Sánchez-Garza, Jorge Arturo

    2013-01-01

    There are few studies on breast symptoms (BS) in patients attended at primary care units in Mexico. The aim was to determine the frequency and types of BS overall and by age group and to establish which BS were related to a diagnosis of breast cancer. Data from all female patients with a breast-disease-related diagnosis, attended from 2006 to 2010 at Family Medicine Unit 38, were collected. The frequencies of BS were determined for four age groups (< 19, 20-49, 50-69, > 70 years), and likelihood ratios for breast cancer were calculated for each breast-related symptom, with 95% confidence intervals (CI). The most frequent BS in the study population were lump/mass (71.7%) and breast pain (67.7%) of all breast complaints, and they were noted most often in the 20-49-year age group. Overall, 120 women had breast cancer diagnosed, with a median age of 53.51 ± 12.7 years. Breast lump/mass had a positive likelihood ratio for breast cancer of 4.53 (95% CI = 2.51-8.17), and breast pain had an increased negative LR of 1.08 (95% CI = 1.05-1.11). Breast lump/mass was the predominant presenting complaint among females with breast symptoms in our primary care unit, and it was associated with an elevated positive likelihood of breast cancer.
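
    The reported figures use the standard definitions of the positive and negative likelihood ratio of a presenting symptom, which connect them to sensitivity and specificity and to the post-test odds of breast cancer:

      \[
        LR^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}}, \qquad
        LR^{-} = \frac{1 - \text{sensitivity}}{\text{specificity}}, \qquad
        \text{post-test odds} = \text{pre-test odds} \times LR .
      \]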

  11. Knowledge and risk perception of late effects among childhood cancer survivors and parents before and after visiting a childhood cancer survivor clinic.

    PubMed

    Cherven, Brooke; Mertens, Ann; Meacham, Lillian R; Williamson, Rebecca; Boring, Cathy; Wasilewski-Masker, Karen

    2014-01-01

    Survivors of childhood cancer are at risk for a variety of treatment-related late effects and require lifelong individualized surveillance for early detection of late effects. This study assessed knowledge and perceptions of late effects risk before and after a survivor clinic visit. Young adult survivors (≥ 16 years) and parents of child survivors (< 16 years) were recruited prior to initial visit to a cancer survivor program. Sixty-five participants completed a baseline survey and 50 completed both a baseline and follow-up survey. Participants were found to have a low perceived likelihood of developing a late effect of cancer therapy and many incorrect perceptions of risk for individual late effects. Low knowledge before clinic (odds ratio = 9.6; 95% confidence interval, 1.7-92.8; P = .02) and low perceived likelihood of developing a late effect (odds ratio = 18.7; 95% confidence interval, 2.7-242.3; P = .01) were found to predict low knowledge of late effect risk at follow-up. This suggests that perceived likelihood of developing a late effect is an important factor in the individuals' ability to learn about their risk and should be addressed before initiation of education. © 2014 by Association of Pediatric Hematology/Oncology Nurses.

  12. On the existence of maximum likelihood estimates for presence-only data

    USGS Publications Warehouse

    Hefley, Trevor J.; Hooten, Mevin B.

    2015-01-01

    It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.

  13. Likelihood-based modification of experimental crystal structure electron density maps

    DOEpatents

    Terwilliger, Thomas C [Sante Fe, NM

    2005-04-16

    A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F.sub.h } is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F.sub.h.sup.OBS } if structure factor set {F.sub.h } was correct, and (2) the likelihood that an electron density map resulting from {F.sub.h } is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F.sub.h } is then adjusted to maximize the likelihood of {F.sub.h } for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.
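
    Schematically, and with illustrative notation rather than the patent's own, the combined likelihood being maximized has the two-factor form described above:

      \[
        L(\{F_h\}) \;=\;
        L_{\mathrm{obs}}\bigl(\{F_h^{\mathrm{OBS}}\} \mid \{F_h\}\bigr)
        \times
        L_{\mathrm{map}}\bigl(\rho(\{F_h\}) \mid \text{prior knowledge}\bigr),
      \]

      and the map-improvement step adjusts \{F_h\} to maximize L(\{F_h\}).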

  14. Population Synthesis of Radio and Gamma-ray Pulsars using the Maximum Likelihood Approach

    NASA Astrophysics Data System (ADS)

    Billman, Caleb; Gonthier, P. L.; Harding, A. K.

    2012-01-01

    We present the results of a pulsar population synthesis of normal pulsars from the Galactic disk using a maximum likelihood method. We seek to maximize the likelihood of a set of parameters in a Monte Carlo population statistics code to better understand their uncertainties and the confidence region of the model's parameter space. The maximum likelihood method allows for the use of more applicable Poisson statistics in the comparison of distributions of small numbers of detected gamma-ray and radio pulsars. Our code simulates pulsars at birth using Monte Carlo techniques and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and gamma-ray emission characteristics. We select measured distributions of radio pulsars from the Parkes Multibeam survey and Fermi gamma-ray pulsars to perform a likelihood analysis of the assumed model parameters such as initial period and magnetic field, and radio luminosity. We present the results of a grid search of the parameter space as well as a search for the maximum likelihood using a Markov Chain Monte Carlo method. We express our gratitude for the generous support of the Michigan Space Grant Consortium, of the National Science Foundation (REU and RUI), the NASA Astrophysics Theory and Fundamental Program and the NASA Fermi Guest Investigator Program.
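
    A binned Poisson log-likelihood of the kind typically used when comparing small numbers of detected pulsars with a simulated population is sketched below; this is a generic form, not necessarily the authors' exact implementation:

      \[
        \ln L(\boldsymbol{\theta}) \;=\; \sum_i
        \Bigl[ n_i \ln \lambda_i(\boldsymbol{\theta}) \;-\; \lambda_i(\boldsymbol{\theta}) \;-\; \ln n_i! \Bigr],
      \]

      where n_i are the observed counts per bin and \lambda_i(\boldsymbol{\theta}) the model-predicted counts.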

  15. Coalescent-based species tree inference from gene tree topologies under incomplete lineage sorting by maximum likelihood.

    PubMed

    Wu, Yufeng

    2012-03-01

    Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution© 2011 The Society for the Study of Evolution.

  16. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, the spatial gradients caused by diffusion have become assessable in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images in combination with mechanistic models enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile-likelihood-based method provides more rigorous uncertainty bounds than local approximation methods.
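
    A sketch of the two ingredients named in the abstract, with illustrative notation: a log-normally distributed measurement model for the image data, and the profile likelihood used to assess practical identifiability of a single parameter.

      \[
        \log y_{ij} \sim N\bigl(\log u(x_i, t_j; \boldsymbol{\theta}),\, \sigma^2\bigr), \qquad
        \ell(\boldsymbol{\theta}) = \sum_{i,j} \Bigl[ \log \phi\!\Bigl(\tfrac{\log y_{ij} - \log u(x_i, t_j; \boldsymbol{\theta})}{\sigma}\Bigr) - \log(\sigma\, y_{ij}) \Bigr],
      \]
      \[
        \mathrm{PL}(\theta_k = c) \;=\; \max_{\theta_j,\, j \neq k} \ell(\boldsymbol{\theta}) \quad \text{subject to } \theta_k = c .
      \]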

  17. Profile-Likelihood Approach for Estimating Generalized Linear Mixed Models with Factor Structures

    ERIC Educational Resources Information Center

    Jeon, Minjeong; Rabe-Hesketh, Sophia

    2012-01-01

    In this article, the authors suggest a profile-likelihood approach for estimating complex models by maximum likelihood (ML) using standard software and minimal programming. The method works whenever setting some of the parameters of the model to known constants turns the model into a standard model. An important class of models that can be…

  18. Effect of radiance-to-reflectance transformation and atmosphere removal on maximum likelihood classification accuracy of high-dimensional remote sensing data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1994-01-01

    Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
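
    The invariance argument summarized above can be sketched as follows: for a non-singular affine transformation y = Ax + b, the class sample means and covariances transform so that the Gaussian discriminant differences between classes, and hence the assigned class, are unchanged.

      \[
        \hat{\mu}_{y,c} = A\hat{\mu}_{x,c} + b, \qquad
        \hat{\Sigma}_{y,c} = A\,\hat{\Sigma}_{x,c}\,A^{\top},
      \]
      \[
        (y - \hat{\mu}_{y,c})^{\top}\hat{\Sigma}_{y,c}^{-1}(y - \hat{\mu}_{y,c})
        = (x - \hat{\mu}_{x,c})^{\top}\hat{\Sigma}_{x,c}^{-1}(x - \hat{\mu}_{x,c}), \qquad
        \ln\lvert\hat{\Sigma}_{y,c}\rvert = \ln\lvert\hat{\Sigma}_{x,c}\rvert + 2\ln\lvert\det A\rvert,
      \]

      so the additive term 2 ln|det A| is common to all classes and cancels when discriminant scores are compared.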

  19. Staff gender ratio and aggression in a forensic psychiatric hospital.

    PubMed

    Daffern, Michael; Mayer, Maggie; Martin, Trish

    2006-06-01

    Gender balance in acute psychiatric inpatient units remains a contentious issue. In terms of maintaining staff and patient safety, 'balance' is often considered by ensuring there are 'sufficient' male nurses present on each shift. In an ongoing programme of research into aggression, the authors investigated reported incidents of patient aggression and examined the gender ratio on each shift over a 6-month period. Contrary to the popular notion that a particular gender ratio might have some relationship with the likelihood of aggressive incidents, there was no statistically significant difference in the proportion of male staff working on the shifts when there was an aggressive incident compared with the shifts when there was no aggressive incident. Further, when an incident did occur, the severity of the incident bore no relationship with the proportion of male staff working on the shift. Nor did the gender of the shift leader have an impact on the decision to seclude the patient or the likelihood of completing an incident form following an aggressive incident. Staff confidence in managing aggression may be influenced by the presence of male staff. Further, aspects of prevention and management may be influenced by staff gender. However, results suggest there is no evidence that the frequency or severity of aggression is influenced by staff gender ratio.

  20. An original approach was used to better evaluate the capacity of a prognostic marker using published survival curves.

    PubMed

    Dantan, Etienne; Combescure, Christophe; Lorent, Marine; Ashton-Chess, Joanna; Daguin, Pascal; Classe, Jean-Marc; Giral, Magali; Foucher, Yohann

    2014-04-01

    Predicting chronic disease evolution from a prognostic marker is a key field of research in clinical epidemiology. However, the prognostic capacity of a marker is not systematically evaluated using the appropriate methodology. We proposed the use of simple equations to calculate, from published survival curves, time-dependent sensitivity and specificity as well as other time-dependent indicators such as predictive values, likelihood ratios, and posttest probability ratios, in order to reappraise prognostic marker accuracy. The methodology is illustrated by back-calculating time-dependent indicators from published articles presenting a marker as highly correlated with the time to event, concluding on the high prognostic capacity of the marker, and presenting the Kaplan-Meier survival curves. The tools necessary to run these direct and simple computations are available online at http://www.divat.fr/en/online-calculators/evalbiom. Our examples illustrate that published conclusions about prognostic marker accuracy may be overoptimistic, thus giving potential for major mistakes in therapeutic decisions. Our approach should help readers better evaluate clinical articles reporting on prognostic markers. Time-dependent sensitivity and specificity inform on the inherent prognostic capacity of a marker for a defined prognostic time. Time-dependent predictive values, likelihood ratios, and posttest probability ratios may additionally contribute to interpreting the marker's prognostic capacity. Copyright © 2014 Elsevier Inc. All rights reserved.
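
    One way to back-calculate time-dependent accuracy from published Kaplan-Meier curves, given here as a sketch of the kind of equations the authors propose: with p the proportion of marker-positive patients and S_+ and S_- the published survival curves in the two marker groups,

      \[
        S(t) = p\,S_{+}(t) + (1-p)\,S_{-}(t), \qquad
        \mathrm{Se}(t) = \frac{p\,[1 - S_{+}(t)]}{1 - S(t)}, \qquad
        \mathrm{Sp}(t) = \frac{(1-p)\,S_{-}(t)}{S(t)},
      \]

      from which time-dependent likelihood ratios follow as LR^{+}(t) = Se(t) / [1 - Sp(t)] and LR^{-}(t) = [1 - Se(t)] / Sp(t).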

  1. [Evidence-based cardiology: practical applications from epidemiology. III. Diagnostic capacity of a clinical test].

    PubMed

    Rodríguez-Escudero, Juan Pablo; López-Jiménez, Francisco; Trejo-Gutiérrez, Jorge F

    2011-01-01

    This article reviews different characteristics of validity in a clinical diagnostic test. In particular, we emphasize the likelihood ratio as an instrument that facilitates the use of epidemiologic concepts in clinical diagnosis.

  2. [Evaluation of T-SPOT.TB assay in the diagnosis of pulmonary tuberculosis within different age groups].

    PubMed

    Pan, Liping; Jia, Hongyan; Liu, Fei; Gao, Mengqiu; Sun, Huishan; Du, Boping; Sun, Qi; Xing, Aiying; Wei, Rongrong; Zhang, Zongde

    2015-12-01

    To evaluate the value of T-SPOT.TB assay in the diagnosis of pulmonary tuberculosis within different age groups. We analyzed 1,518 suspected pulmonary tuberculosis (PTB) patients who were admitted to the Beijing Chest Hospital from November 2012 to February 2014 and had valid T-SPOT.TB tests before anti-tuberculosis therapy. The 599 microbiologically and/or histopathologically-confirmed PTB patients (16-89 years old, 388 males and 211 females) and 235 non-TB patients (14-85 years old, 144 males and 91 females) were enrolled for the analysis of diagnostic performance of T-SPOT.TB, while patients with uncertain diagnosis or diagnosis based on clinical impression (n=684) were excluded from the analysis. The sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio of the T-SPOT.TB were analyzed according to the final diagnosis. Furthermore, the diagnostic performance of the T-SPOT.TB assay in younger patients (14-59 years old) and elderly patients (60-89 years old) was also analyzed separately. Categorical variables were compared by Pearson's Chi-square test, while continuous variables were compared by the Mann-Whitney U-test. The sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio of the T-SPOT.TB in diagnosis of PTB were 90.1% (540/599), 65.5% (154/235), 86.9% (540/621), 72.3% (154/213), 2.61, and 0.15, respectively. The sensitivity and specificity of the T-SPOT.TB assay were 92.6% (375/405) and 75.6% (99/131), respectively, in the younger patients, and 85.0% (165/194) and 52.9% (55/104), respectively, in the elderly patients. The sensitivity and specificity of the T-SPOT.TB assay in the younger patients were significantly higher than those in the elderly patients (P<0.01), and the spot-forming cells in the younger PTB patients were significantly higher than in the elderly PTB patients [300 (126, 666)/10⁶ PBMCs vs. 258 (79, 621)/10⁶ PBMCs, P=0.037]. T-SPOT.TB is a promising test in the diagnosis of younger patients (14-59 years old) with suspected PTB, but the diagnostic performance in elderly patients (60-89 years old) is relatively reduced.
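
    The reported likelihood ratios follow directly from the reported sensitivity and specificity:

      \[
        LR^{+} = \frac{\mathrm{Se}}{1 - \mathrm{Sp}} = \frac{540/599}{1 - 154/235} \approx \frac{0.901}{0.345} \approx 2.61, \qquad
        LR^{-} = \frac{1 - \mathrm{Se}}{\mathrm{Sp}} = \frac{1 - 0.901}{0.655} \approx 0.15 .
      \]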

  3. An imbalance fault detection method based on data normalization and EMD for marine current turbines.

    PubMed

    Zhang, Milu; Wang, Tianzhen; Tang, Tianhao; Benbouzid, Mohamed; Diallo, Demba

    2017-05-01

    This paper proposes an imbalance fault detection method based on data normalization and Empirical Mode Decomposition (EMD) for a variable-speed direct-drive Marine Current Turbine (MCT) system. The method is based on the MCT stator current under conditions of waves and turbulence. The goal of this method is to extract the blade imbalance fault feature, which is concealed by the supply frequency and environmental noise. First, a Generalized Likelihood Ratio Test (GLRT) detector is developed and the monitoring variable is selected by analyzing the relationships between the variables. Then, the selected monitoring variable is converted into a time series through data normalization, which turns the imbalance fault characteristic frequency into a constant. Finally, the monitoring variable is filtered by the EMD method to eliminate the effect of turbulence. The experiments show that the proposed method is robust against turbulence, as demonstrated by comparing different fault severities and turbulence intensities. In comparison with other methods, the experimental results indicate the feasibility and efficacy of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
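
    The detector named in the abstract has the generic generalized likelihood ratio test form sketched below; the specific signal model for the stator current is in the paper.

      \[
        \Lambda(\mathbf{x}) =
        \frac{\sup_{\theta \in \Theta_1} L(\theta;\, \mathbf{x})}
             {\sup_{\theta \in \Theta_0} L(\theta;\, \mathbf{x})}, \qquad
        \text{declare a fault if } 2\ln\Lambda(\mathbf{x}) > \gamma,
      \]

      with the threshold \gamma chosen for a desired false-alarm probability.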

  4. Simulation-Based Evaluation of Hybridization Network Reconstruction Methods in the Presence of Incomplete Lineage Sorting

    PubMed Central

    Kamneva, Olga K; Rosenberg, Noah A

    2017-01-01

    Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony. PMID:28469378

  5. Approximate likelihood calculation on a phylogeny for Bayesian estimation of divergence times.

    PubMed

    dos Reis, Mario; Yang, Ziheng

    2011-07-01

    The molecular clock provides a powerful way to estimate species divergence times. If information on some species divergence times is available from the fossil or geological record, it can be used to calibrate a phylogeny and estimate divergence times for all nodes in the tree. The Bayesian method provides a natural framework to incorporate different sources of information concerning divergence times, such as information in the fossil and molecular data. Current models of sequence evolution are intractable in a Bayesian setting, and Markov chain Monte Carlo (MCMC) is used to generate the posterior distribution of divergence times and evolutionary rates. This method is computationally expensive, as it involves the repeated calculation of the likelihood function. Here, we explore the use of Taylor expansion to approximate the likelihood during MCMC iteration. The approximation is much faster than conventional likelihood calculation. However, the approximation is expected to be poor when the proposed parameters are far from the likelihood peak. We explore the use of parameter transforms (square root, logarithm, and arcsine) to improve the approximation to the likelihood curve. We found that the new methods, particularly the arcsine-based transform, provided very good approximations under relaxed clock models and also under the global clock model when the global clock is not seriously violated. The approximation is poorer for analysis under the global clock when the global clock is seriously wrong and should thus not be used. The results suggest that the approximate method may be useful for Bayesian dating analysis using large data sets.
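
    The approximation in question is a second-order Taylor expansion of the log-likelihood around the maximum likelihood estimate of the branch lengths, sketched here with generic notation; the gradient term vanishes at the MLE, and transforming the branch lengths (square root, logarithm, arcsine) changes how well this quadratic tracks the true curve.

      \[
        \ln L(\mathbf{t}) \;\approx\; \ln L(\hat{\mathbf{t}})
        \;+\; \tfrac{1}{2}\,(\mathbf{t} - \hat{\mathbf{t}})^{\top} H\, (\mathbf{t} - \hat{\mathbf{t}}), \qquad
        H = \left.\frac{\partial^2 \ln L}{\partial \mathbf{t}\,\partial \mathbf{t}^{\top}}\right|_{\mathbf{t} = \hat{\mathbf{t}}} .
      \]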

  6. Computation of nonparametric convex hazard estimators via profile methods.

    PubMed

    Jankowski, Hanna K; Wellner, Jon A

    2009-05-01

    This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.

  7. Convex Optimization over Classes of Multiparticle Entanglement

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Gühne, Otfried

    2018-02-01

    A well-known strategy to characterize multiparticle entanglement utilizes the notion of stochastic local operations and classical communication (SLOCC), but characterizing the resulting entanglement classes is difficult. Given a multiparticle quantum state, we first show that Gilbert's algorithm can be adapted to prove separability or membership in a certain entanglement class. We then present two algorithms for convex optimization over SLOCC classes. The first algorithm uses a simple gradient approach, while the other one employs the accelerated projected-gradient method. For demonstration, the algorithms are applied to the likelihood-ratio test using experimental data on bound entanglement of a noisy four-photon Smolin state [Phys. Rev. Lett. 105, 130501 (2010), 10.1103/PhysRevLett.105.130501].

  8. Planning applications in East Central Florida

    NASA Technical Reports Server (NTRS)

    Hannah, J. W. (Principal Investigator); Thomas, G. L.; Esparza, F.; Millard, J. J.

    1974-01-01

    The author has identified the following significant results. This is a study of applications of ERTS data to planning problems, especially as applicable to East Central Florida. The primary method has been computer analysis of digital data, with visual analysis of images serving to supplement the digital analysis. The principal method of analysis was supervised maximum likelihood classification, supplemented by density slicing and mapping of ratios of band intensities. Land-use maps have been prepared for several urban and non-urban sectors. Thematic maps have been found to be a useful form of the land-use maps. Change-monitoring has been found to be an appropriate and useful application. Mapping of marsh regions has been found effective and useful in this region. Local planners have participated in selecting training samples and in the checking and interpretation of results.

  9. Sieve estimation in a Markov illness-death process under dual censoring.

    PubMed

    Boruvka, Audrey; Cook, Richard J

    2016-04-01

    Semiparametric methods are well established for the analysis of a progressive Markov illness-death process observed up to a noninformative right censoring time. However, often the intermediate and terminal events are censored in different ways, leading to a dual censoring scheme. In such settings, unbiased estimation of the cumulative transition intensity functions cannot be achieved without some degree of smoothing. To overcome this problem, we develop a sieve maximum likelihood approach for inference on the hazard ratio. A simulation study shows that the sieve estimator offers improved finite-sample performance over common imputation-based alternatives and is robust to some forms of dependent censoring. The proposed method is illustrated using data from cancer trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. ModelTest Server: a web-based tool for the statistical selection of models of nucleotide substitution online

    PubMed Central

    Posada, David

    2006-01-01

    ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
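
    The three selection criteria the server reports take their usual forms, with \ell the maximized log-likelihood of a candidate model, k its number of free parameters, and n the number of sites:

      \[
        \mathrm{LRT} = 2\,(\ell_1 - \ell_0) \;\sim\; \chi^2_{d} \;\; \text{(nested models, } d \text{ extra parameters)}, \qquad
        \mathrm{AIC} = -2\ell + 2k, \qquad
        \mathrm{BIC} = -2\ell + k\ln n .
      \]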

  11. Quasi-Maximum Likelihood Estimation of Structural Equation Models with Multiple Interaction and Quadratic Effects

    ERIC Educational Resources Information Center

    Klein, Andreas G.; Muthen, Bengt O.

    2007-01-01

    In this article, a nonlinear structural equation model is introduced and a quasi-maximum likelihood method for simultaneous estimation and testing of multiple nonlinear effects is developed. The focus of the new methodology lies on efficiency, robustness, and computational practicability. Monte-Carlo studies indicate that the method is highly…

  12. Bias and Efficiency in Structural Equation Modeling: Maximum Likelihood versus Robust Methods

    ERIC Educational Resources Information Center

    Zhong, Xiaoling; Yuan, Ke-Hai

    2011-01-01

    In the structural equation modeling literature, the normal-distribution-based maximum likelihood (ML) method is most widely used, partly because the resulting estimator is claimed to be asymptotically unbiased and most efficient. However, this may not hold when data deviate from normal distribution. Outlying cases or nonnormally distributed data,…

  13. Five Methods for Estimating Angoff Cut Scores with IRT

    ERIC Educational Resources Information Center

    Wyse, Adam E.

    2017-01-01

    This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…

  14. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
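
    Fisher's method of scoring applied to the incomplete-data log-likelihood \ell(\theta) takes the generic update below; the Jacobi and Gauss-Seidel variants discussed in the paper differ in whether blocks of \theta are updated simultaneously or sequentially.

      \[
        \theta^{(t+1)} \;=\; \theta^{(t)} \;+\; \mathcal{I}\bigl(\theta^{(t)}\bigr)^{-1}\,\nabla\ell\bigl(\theta^{(t)}\bigr), \qquad
        \mathcal{I}(\theta) = \mathbb{E}\bigl[-\nabla^{2}\ell(\theta)\bigr] .
      \]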

  15. The Novaya Zemlya Event of 31 December 1992 and Seismic Identification Issues: Annual Seismic Research Symposium (15th) Held in Vail, Colorado on 8-10 September 1993

    DTIC Science & Technology

    1993-09-10

    (1993). A bootstrap generalized likelihood ratio test in discriminant analysis, Proc. 15th Annual Seismic Research Symposium, in press. Hedlin, M., J. ... ratio indicate that the event does not belong to the first class. The bootstrap technique is used here as well to set the critical value of the test ... Methodist University. Baek, J., H. L. Gray, W. A. Woodward and M. D. Fisk (1993). A Bootstrap Generalized Likelihood Ratio Test in Discriminant Analysis

  16. USE OF DIAGNODENT® FOR DIAGNOSIS OF NON-CAVITATED OCCLUSAL DENTIN CARIES

    PubMed Central

    Costa, Ana Maria; de Paula, Lilian Marly; Bezerra, Ana Cristina Barreto

    2008-01-01

    The purpose of this study was to evaluate the use of a laser fluorescence device for detection of occlusal caries in permanent teeth. One hundred and ninety-nine non-cavitated teeth from 26 patients aged 10 to 13 years were selected. After dental prophylaxis, two previously calibrated dentists examined the teeth. Visual inspection, radiographic examination and laser measurements were performed under standardized conditions. The validation method was cavity preparation with a small cone-shaped diamond bur, when the two examiners agreed about the presence of dentin caries. It was found that the laser detection method produced high values of sensitivity (0.93) and specificity (0.75) and a moderate positive predictive value (0.63). The laser device showed the lowest value of likelihood ratio (3.68). Kappa coefficient showed good repeatability for all methods. Although the laser device had an acceptable performance, this equipment should be used as an adjunct method to visual inspection to avoid false positive results. PMID:19089284

  17. Radar modulation classification using time-frequency representation and nonlinear regression

    NASA Astrophysics Data System (ADS)

    De Luigi, Christophe; Arques, Pierre-Yves; Lopez, Jean-Marc; Moreau, Eric

    1999-09-01

    In a naval electronic environment, pulses emitted by radars are collected by ESM receivers. For most of them, the intrapulse signal is modulated by a particular law. To support the classical identification process, classification and estimation of this modulation law are applied to the intrapulse signal measurements. To estimate the time-varying frequency of a signal corrupted by additive noise with good accuracy, one method has been chosen: the Wigner distribution is computed, and the instantaneous frequency is then estimated from the peak location of the distribution. Bias and variance of the estimator are assessed by computer simulations. In an estimated sequence of frequencies, we assume the presence of both falsely and correctly estimated values, and the errors are assumed to be Gaussian. A robust nonlinear regression method, based on the Levenberg-Marquardt algorithm, is then applied to these estimated frequencies using a maximum likelihood estimator. The performance of the method is tested using various modulation laws and different signal-to-noise ratios.

  18. Estimating relative risks for common outcome using PROC NLP.

    PubMed

    Yu, Binbing; Wang, Zhuoqiao

    2008-05-01

    In cross-sectional or cohort studies with binary outcomes, it is biologically interpretable and of interest to estimate the relative risk or prevalence ratio, especially when the response rates are not rare. Several methods have been used to estimate the relative risk, among which the log-binomial models yield the maximum likelihood estimate (MLE) of the parameters. Because of restrictions on the parameter space, the log-binomial models often run into convergence problems. Some remedies, e.g., the Poisson and Cox regressions, have been proposed. However, these methods may give out-of-bound predicted response probabilities. In this paper, a new computation method using the SAS Nonlinear Programming (NLP) procedure is proposed to find the MLEs. The proposed NLP method was compared to the COPY method, a modified method to fit the log-binomial model. Issues in the implementation are discussed. For illustration, both methods were applied to data on the prevalence of microalbuminuria (micro-protein leakage into urine) for kidney disease patients from the Diabetes Control and Complications Trial. The sample SAS macro for calculating relative risk is provided in the appendix.
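
    A minimal sketch of the same idea in Python/SciPy rather than SAS, assuming a single covariate and the log link: maximize the log-binomial likelihood subject to constraints that keep every fitted probability at or below one. The data, starting values, and names are illustrative.

      # Hedged sketch: constrained MLE for a log-binomial model (relative risk regression).
      import numpy as np
      from scipy.optimize import minimize

      x = np.array([0.0, 1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 3.0])
      y = np.array([0, 0, 1, 0, 1, 1, 1, 1])

      def neg_loglik(beta):
          eta = beta[0] + beta[1] * x                      # log-probability scale
          p = np.clip(np.exp(eta), 1e-12, 1 - 1e-12)       # numerical guard
          return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

      # Constraints eta_i <= 0 keep every predicted probability <= 1 (SLSQP needs fun >= 0).
      cons = [{"type": "ineq", "fun": lambda beta, xi=xi: -(beta[0] + beta[1] * xi)} for xi in x]

      fit = minimize(neg_loglik, x0=np.array([-1.0, 0.1]), constraints=cons, method="SLSQP")
      b0, b1 = fit.x
      print("relative risk per unit of x:", np.exp(b1))    # exp(slope) is the RR under the log link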

  19. Peer-driven contraceptive choices and preferences for contraceptive methods among students of tertiary educational institutions in Enugu, Nigeria.

    PubMed

    Iyoke, Ca; Ezugwu, Fo; Lawani, Ol; Ugwu, Go; Ajah, Lo; Mba, Sg

    2014-01-01

    To describe the methods preferred for contraception, evaluate preferences and adherence to modern contraceptive methods, and determine the factors associated with contraceptive choices among tertiary students in South East Nigeria. A questionnaire-based cross-sectional study of sexual habits, knowledge of contraceptive methods, and patterns of contraceptive choices among a pooled sample of unmarried students from the three largest tertiary educational institutions in Enugu city, Nigeria was done. Statistical analysis involved descriptive and inferential statistics at the 95% level of confidence. A total of 313 unmarried students were studied (194 males; 119 females). Their mean age was 22.5±5.1 years. Over 98% of males and 85% of females made their contraceptive choices based on information from peers. Preferences for contraceptive methods among female students were 49.2% for traditional methods of contraception, 28% for modern methods, 10% for nonpharmacological agents, and 8% for off-label drugs. Adherence to modern contraceptives among female students was 35%. Among male students, the preference for the male condom was 45.2% and the adherence to condom use was 21.7%. Multivariate analysis showed that receiving information from health personnel/media/workshops (odds ratio 9.54, 95% confidence interval 3.5-26.3), health science-related course of study (odds ratio 3.5, 95% confidence interval 1.3-9.6), and previous sexual exposure prior to university admission (odds ratio 3.48, 95% confidence interval 1.5-8.0) all increased the likelihood of adherence to modern contraceptive methods. An overwhelming reliance on peers for contraceptive information in the context of poor knowledge of modern methods of contraception among young people could have contributed to the low preferences and adherence to modern contraceptive methods among students in tertiary educational institutions. Programs to reduce risky sexual behavior among these students may need to focus on increasing the content and adequacy of contraceptive information held by people through regular health worker-led, on-campus workshops.

  20. Dual Method Use Among a Sample of First-Year College Women

    PubMed Central

    Walsh, Jennifer L.; Fielder, Robyn L.; Carey, Kate B.; Carey, Michael P.

    2014-01-01

    CONTEXT Dual method use—using one type of contraceptive to reduce the risk of STDs and another to prevent pregnancy—is effective but understudied. No prior studies have employed an event-level approach to examining characteristics associated with dual method use among college women. METHODS In 12 consecutive monthly surveys conducted in 2009–2010, data on 1,843 vaginal intercourse events were collected from 296 first-year college women. Women reported on their use of condoms and hormonal contraceptives during all events. Multilevel regression analysis was used to assess associations between event-, month- and person-level characteristics and hormonal use and dual method use. RESULTS Women used hormonal contraceptives during 53% of events and condoms during 63%. Dual method use was reported 28% of the time, and only 14% of participants were consistent users of both methods. The likelihood of dual method use was elevated when sex partners were friends as opposed to romantic partners or ex-boyfriends, and among women who had received an STD diagnosis prior to college (odds ratios, 2.5–2.9); it also increased with level of religiosity (coefficient, 0.8). Dual use was less likely when less reliable methods were used (odds ratio, 0.2) and when women reported more months of hormonal use (0.8), were older (coefficient, −4.7) and had had a greater number of partners before college (−0.3). CONCLUSIONS A better understanding of the characteristics associated with dual method use may help in the design of potential intervention efforts. PMID:24684480

  1. Severe hyperkalemia can be detected immediately by quantitative electrocardiography and clinical history in patients with symptomatic or extreme bradycardia: a retrospective cross-sectional study.

    PubMed

    Chon, Sung-Bin; Kwak, Young Ho; Hwang, Seung-Sik; Oh, Won Sup; Bae, Jun-Ho

    2013-12-01

    Detecting severe hyperkalemia is challenging. We explored its prevalence in symptomatic or extreme bradycardia and devised a diagnostic rule. This retrospective cross-sectional study included patients with symptomatic (heart rate [HR] ≤ 50/min with dyspnea, chest pain, altered mentality, dizziness/syncope/presyncope, general weakness, oliguria, or shock) or extreme (HR ≤ 40/min) bradycardia at an emergency department over a 46-month period. Risk factors for severe hyperkalemia were chosen by multiple logistic regression analysis from history (sex, age, comorbidities, and medications), vital signs, and electrocardiography (ECG; maximum precordial T-wave amplitude, PR, and QRS intervals). The derived diagnostic index was validated using the bootstrapping method. Among the 169 participants enrolled, 87 (51.5%) were female. The mean (SD) age was 71.2 (12.5) years. Thirty-six (21.3%) had severe hyperkalemia. The diagnostic index summed "maximum precordial T ≥ 8.5 mV (2)," "atrial fibrillation/junctional bradycardia (1)," "HR ≤ 42/min (1)," "diltiazem medication (2)," and "diabetes mellitus (1)." The C-statistic was 0.86 (0.80-0.93) and was validated. For scores of 4 or higher, sensitivity was 0.50, specificity was 0.92, and positive likelihood ratio was 6.02. The "ECG-only index," which sums the 3 ECG findings, had a sensitivity of 0.50, specificity of 0.90, and likelihood ratio (+) of 5.10 for scores of 3 or higher. Severe hyperkalemia is prevalent in symptomatic or extreme bradycardia and detectable by quantitative electrocardiographic parameters and history. © 2013.

  2. Minia, Egypt: Principal Component Analysis

    PubMed

    Abdelrehim, Marwa G; Mahfouz, Eman M; Ewis, Ashraf A; Seedhom, Amany E; Afifi, Hassan M; Shebl, Fatma M

    2018-02-26

    Background: Pancreatic cancer (PC) is a serious and rapidly progressing malignancy. Identifying risk factors including dietary elements is important to develop preventive strategies. This study focused on possible links between diet and PC. Methods: We conducted a case-control study including all PC patients diagnosed at Minia Cancer Center and controls from the general population from June 2014 to December 2015. Dietary data were collected directly through personal interviews. Principal component analysis (PCA) was performed to identify dietary groups. The data were analyzed using crude odds ratios (ORs) and multivariable logistic regression with adjusted ORs and 95% confidence intervals (CIs). Results: A total of 75 cases and 149 controls were included in the study. PCA identified six dietary groups, labeled as cereals and grains, vegetables, proteins, dairy products, fruits, and sugars. Bivariate analysis showed that consumption of vegetables, fruits, sugars, and total energy intake were associated with change in PC risk. In multivariable-adjusted models comparing highest versus lowest levels of intake, we observed significantly lower odds of PC in association with vegetable intake (OR 0.24; 95% CI, 0.07-0.85, P=0.012) and a higher likelihood with the total energy intake (OR 9.88; 95% CI, 2.56-38.09, P<0.0001). There was also a suggested link between high fruit consumption and reduced odds of PC. Conclusions: The study supports the association between dietary factors and the odds of PC development in Egypt. It was found that higher energy intake is associated with an increase in likelihood of PC, while increased vegetable consumption is associated with a lower odds ratio.

  3. Diagnostic Capability of Spectral Domain Optical Coherence Tomography for Glaucoma

    PubMed Central

    Wu, Huijuan; de Boer, Johannes F.; Chen, Teresa C.

    2012-01-01

    Purpose To determine the diagnostic capability of spectral domain optical coherence tomography (OCT) in glaucoma patients with visual field (VF) defects. Design Prospective, cross-sectional study. Methods Setting Participants were recruited from a university hospital clinic. Study Population One eye of 85 normal subjects and 61 glaucoma patients [with average VF mean deviation (MD) of -9.61 ± 8.76 dB] were randomly selected for the study. A subgroup of the glaucoma patients with early VF defects was calculated separately. Observation Procedures Spectralis OCT circular scans were performed to obtain peripapillary retinal nerve fiber layer (RNFL) thicknesses. The RNFL diagnostic parameters based on the normative database were used alone or in combination for identifying glaucomatous RNFL thinning. Main Outcome Measures To evaluate diagnostic performance, calculations included areas under the receiver operating characteristic curve (AROC), sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio. Results Overall RNFL thickness had the highest AROC value (0.952 for all patients, 0.895 for the early glaucoma subgroup). For all patients, the highest sensitivity (98.4%, CI 96.3-100%) was achieved by using two criteria: ≥1 RNFL sectors being abnormal at the < 5% level, and overall classification of borderline or outside normal limits, with specificities of 88.9% (CI 84.0-94.0%) and 87.1% (CI 81.6-92.5%) respectively for these two criteria. Conclusions Statistical parameters for evaluating the diagnostic performance of the Spectralis spectral domain OCT were good for early perimetric glaucoma and excellent for moderately-advanced perimetric glaucoma. PMID:22265147

  4. Detection of scabies: A systematic review of diagnostic methods.

    PubMed

    Leung, Victor; Miller, Mark

    2011-01-01

    Accurate diagnosis of scabies infection is important for patient treatment and for public health control of scabies epidemics. To systematically review the accuracy and precision of history, physical examination and tests for diagnosing scabies. Using a structured search strategy, Medline and Embase databases were searched for English and French language articles that included a diagnosis of scabies. Studies comparing history, physical examination and/or any diagnostic tests with the reference standard of microscopic visualization of mites, eggs or fecal elements obtained from skin scrapings or biopsies were included for analysis. Data were extracted using standard criteria. History and examination of pruritic dermatoses failed to accurately diagnose scabies infection. Dermatoscopy by a trained practitioner has a positive likelihood ratio of 6.5 (95% CI 4.1 to 10.3) and a negative likelihood ratio of 0.1 (95% CI 0.06 to 0.2) for diagnosing scabies. The accuracy of other diagnostic tests could not be calculated from the data in the literature. In the face of such diagnostic inaccuracy, clinical judgment is still practical in diagnosing scabies. Two tests are used - the burrow ink test and handheld dermatoscopy. The burrow ink test is a simple, rapid, noninvasive test that can be used to screen a large number of patients. Handheld dermatoscopy is an accurate test, but requires special equipment and trained practitioners. Given the morbidity and costs of scabies infection, and that studies to date lack adequate internal and external validity, research to identify or develop accurate diagnostic tests for scabies infection is needed and justifiable.

  5. Interpretation of FTIR spectra of polymers and Raman spectra of car paints by means of likelihood ratio approach supported by wavelet transform for reducing data dimensionality.

    PubMed

    Martyna, Agnieszka; Michalska, Aleksandra; Zadora, Grzegorz

    2015-05-01

    The problem of interpreting the common provenance of samples within an infrared spectra database of polypropylene samples from car body parts and plastic containers, as well as Raman spectra databases of blue solid and metallic automotive paints, was investigated. The research involved statistical tools such as the likelihood ratio (LR) approach for expressing the evidential value of observed similarities and differences in the recorded spectra. Since LR models can be easily proposed for databases described by a few variables, the research focused on reducing the dimensionality of spectra characterised by more than a thousand variables. The objective of the studies was to combine chemometric tools that deal easily with multidimensionality with an LR approach. The final variables used for constructing the LR models were derived from the discrete wavelet transform (DWT) as a data dimensionality reduction technique, supported by methods for variance analysis, and corresponded with chemical information, i.e. typical absorption bands for polypropylene and peaks associated with pigments present in the car paints. Univariate and multivariate LR models were proposed, aiming at obtaining more information about the chemical structure of the samples. Their performance was controlled by estimating the levels of false positive and false negative answers and using the empirical cross entropy approach. The results for most of the LR models were satisfactory and enabled solving the stated comparison problems. The results prove that the variables generated from DWT preserve signal characteristics, being a sparse representation of the original signal that keeps its shape and relevant chemical information.

  6. Accuracy of gestalt perception of acute chest pain in predicting coronary artery disease

    PubMed Central

    das Virgens, Cláudio Marcelo Bittencourt; Lemos Jr, Laudenor; Noya-Rabelo, Márcia; Carvalhal, Manuela Campelo; Cerqueira Junior, Antônio Maurício dos Santos; Lopes, Fernanda Oliveira de Andrade; de Sá, Nicole Cruz; Suerdieck, Jéssica Gonzalez; de Souza, Thiago Menezes Barbosa; Correia, Vitor Calixto de Almeida; Sodré, Gabriella Sant'Ana; da Silva, André Barcelos; Alexandre, Felipe Kalil Beirão; Ferreira, Felipe Rodrigues Marques; Correia, Luís Cláudio Lemos

    2017-01-01

    AIM To test accuracy and reproducibility of gestalt to predict obstructive coronary artery disease (CAD) in patients with acute chest pain. METHODS We studied individuals who were consecutively admitted to our Chest Pain Unit. At admission, investigators performed a standardized interview and recorded 14 chest pain features. Based on these features, a cardiologist who was blind to other clinical characteristics made unstructured judgment of CAD probability, both numerically and categorically. As the reference standard for testing the accuracy of gestalt, angiography was required to rule-in CAD, while either angiography or non-invasive test could be used to rule-out. In order to assess reproducibility, a second cardiologist did the same procedure. RESULTS In a sample of 330 patients, the prevalence of obstructive CAD was 48%. Gestalt’s numerical probability was associated with CAD, but the area under the curve of 0.61 (95%CI: 0.55-0.67) indicated low level of accuracy. Accordingly, categorical definition of typical chest pain had a sensitivity of 48% (95%CI: 40%-55%) and specificity of 66% (95%CI: 59%-73%), yielding a negligible positive likelihood ratio of 1.4 (95%CI: 0.65-2.0) and negative likelihood ratio of 0.79 (95%CI: 0.62-1.02). Agreement between the two cardiologists was poor in the numerical classification (95% limits of agreement = -71% to 51%) and categorical definition of typical pain (Kappa = 0.29; 95%CI: 0.21-0.37). CONCLUSION Clinical judgment based on a combination of chest pain features is neither accurate nor reproducible in predicting obstructive CAD in the acute setting. PMID:28400920

  7. The Chandra Xbootes Survey - IV: Mid-Infrared and Submillimeter Counterparts

    NASA Astrophysics Data System (ADS)

    Brown, Arianna; Mitchell-Wynne, Ketron; Cooray, Asantha R.; Nayyeri, Hooshang

    2016-06-01

    In this work, we use a Bayesian technique to identify mid-IR and submillimeter counterparts for 3,213 X-ray point sources detected in the Chandra XBoötes Survey so as to characterize the relationship between black hole activity and star formation in the XBoötes region. The Chandra XBoötes Survey is a 5-ks X-ray survey of the 9.3 square degree Boötes Field of the NOAO Deep Wide-Field Survey (NDWFS), a survey imaged from the optical to the near-IR. We use a likelihood ratio analysis on Spitzer-IRAC data taken from The Spitzer Deep, Wide-Field Survey (SDWFS) to determine mid-IR counterparts, and a similar method on Herschel-SPIRE sources detected at 250µm from The Herschel Multi-tiered Extragalactic Survey to determine the submillimeter counterparts. The likelihood ratio analysis (LRA) provides the probability that an IRAC or SPIRE point source is the true counterpart to a Chandra source. The analysis comprises three parts: the normalized magnitude distributions of counterparts and background sources, and the radial probability distribution of the separation distance between the IRAC or SPIRE source and the Chandra source. Many Chandra sources have multiple prospective counterparts in each band, so additional analysis is performed to determine the identification reliability of the candidates. Identification reliability values lie between 0 and 1, and sources with identification reliability values ≥0.8 are chosen to be the true counterparts. With these results, we will consider the statistical implications of the sample's redshifts, mid-IR and submillimeter luminosities, and star formation rates.
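
    The likelihood ratio used for counterpart matching is the standard Sutherland & Saunders form, combining the three ingredients the abstract describes: q(m), the magnitude distribution of true counterparts; n(m), that of background sources; and f(r), the radial probability of the positional offset. Written as a sketch,

      \[
        LR = \frac{q(m)\, f(r)}{n(m)}, \qquad
        R_j = \frac{LR_j}{\sum_i LR_i + (1 - Q)},
      \]

      where R_j is the identification reliability of candidate j among all candidates for the same X-ray source and Q is the overall fraction of X-ray sources expected to have a counterpart above the survey limit.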

  8. Diagnostic accuracy of second-generation dual-source computed tomography coronary angiography with iterative reconstructions: a real-world experience.

    PubMed

    Maffei, E; Martini, C; Rossi, A; Mollet, N; Lario, C; Castiglione Morelli, M; Clemente, A; Gentile, G; Arcadi, T; Seitun, S; Catalano, O; Aldrovandi, A; Cademartiri, F

    2012-08-01

    The authors evaluated the diagnostic accuracy of second-generation dual-source (DSCT) computed tomography coronary angiography (CTCA) with iterative reconstructions for detecting obstructive coronary artery disease (CAD). Between June 2010 and February 2011, we enrolled 160 patients (85 men; mean age 61.2±11.6 years) with suspected CAD. All patients underwent CTCA and conventional coronary angiography (CCA). For the CTCA scan (Definition Flash, Siemens), we used prospective tube current modulation and 70-100 ml of iodinated contrast material (Iomeprol 400 mgI/ml, Bracco). Data sets were reconstructed with an iterative reconstruction algorithm (IRIS, Siemens). CTCA and CCA reports were used to evaluate accuracy using the threshold for significant stenosis at ≥50% and ≥70%, respectively. No patient was excluded from the analysis. Heart rate was 64.3±11.9 bpm and radiation dose was 7.2±2.1 mSv. Disease prevalence was 30% (48/160). Sensitivity, specificity and positive and negative predictive values of CTCA in detecting significant stenosis were 90.1%, 93.3%, 53.2% and 99.1% (per segment), 97.5%, 91.2%, 61.4% and 99.6% (per vessel) and 100%, 83%, 71.6% and 100% (per patient), respectively. Positive and negative likelihood ratios at the per-patient level were 5.89 and 0.0, respectively. CTCA with second-generation DSCT in the real clinical world shows a diagnostic performance comparable with previously reported validation studies. The excellent negative predictive value and likelihood ratio make CTCA a first-line noninvasive method for diagnosing obstructive CAD.

  9. Case finding of lifestyle and mental health disorders in primary care: validation of the ‘CHAT’ tool

    PubMed Central

    Goodyear-Smith, Felicity; Coupe, Nicole M; Arroll, Bruce; Elley, C Raina; Sullivan, Sean; McGill, Anne-Thea

    2008-01-01

    Background Primary care is accessible and ideally placed for case finding of patients with lifestyle and mental health risk factors and subsequent intervention. The short self-administered Case-finding and Help Assessment Tool (CHAT) was developed for lifestyle and mental health assessment of adult patients in primary health care. This tool checks for tobacco use, alcohol and other drug misuse, problem gambling, depression, anxiety and stress, abuse, anger problems, inactivity, and eating disorders. It is well accepted by patients, GPs and nurses. Aim To assess criterion-based validity of CHAT against a composite gold standard. Design of study Conducted according to the Standards for Reporting of Diagnostic Accuracy statement for diagnostic tests. Setting Primary care practices in Auckland, New Zealand. Method One thousand consecutive adult patients completed CHAT and a composite gold standard. Sensitivities, specificities, positive and negative predictive values, and likelihood ratios were calculated. Results Response rates for each item ranged from 79.6 to 99.8%. CHAT was sensitive and specific for almost all issues screened, except exercise and eating disorders. Sensitivity ranged from 96% (95% confidence interval [CI] = 87 to 99%) for major depression to 26% (95% CI = 22 to 30%) for exercise. Specificity ranged from 97% (95% CI = 96 to 98%) for problem gambling and problem drug use to 40% (95% CI = 36 to 45%) for exercise. All had high likelihood ratios (3–30), except exercise and eating disorders. Conclusion CHAT is a valid and acceptable case-finding tool for most common lifestyle and mental health conditions. PMID:18186993

  10. Clinical history for diagnosis of dementia in men: Caerphilly Prospective Study

    PubMed Central

    Creavin, Sam; Fish, Mark; Gallacher, John; Bayer, Antony; Ben-Shlomo, Yoav

    2015-01-01

    Background Diagnosis of dementia often requires specialist referral and detailed, time-consuming assessments. Aim To investigate the utility of simple clinical items that non-specialist clinicians could use, in addition to routine practice, to diagnose all-cause dementia syndrome. Design and setting Cross-sectional diagnostic test accuracy study. Participants were identified from the electoral roll and general practice lists in Caerphilly and adjoining villages in South Wales, UK. Method Participants (1225 men aged 45–59 years) were screened for cognitive impairment using the Cambridge Cognitive Examination, CAMCOG, at phase 5 of the Caerphilly Prospective Study (CaPS). Index tests were a standardised clinical evaluation, neurological examination, and individual items on the Informant Questionnaire for Cognitive Disorders in the Elderly (IQCODE). Results Two-hundred and five men who screened positive (68%) and 45 (4.8%) who screened negative were seen, with 59 diagnosed with dementia. The model comprising problems with personal finance and planning had an area under the curve (AUC) of 0.92 (95% confidence interval [CI] = 0.86 to 0.97), a positive likelihood ratio (LR+) of 23.7 (95% CI = 5.88 to 95.6), and a negative likelihood ratio (LR−) of 0.41 (95% CI = 0.27 to 0.62). The best single item for ruling out was no problems learning to use new gadgets (LR− of 0.22, 95% CI = 0.11 to 0.43). Conclusion This study found that three simple questions have high utility for diagnosing dementia in men who are cognitively screened. If confirmed, this could lead to less burdensome assessment where clinical assessment suggests possible dementia. PMID:26212844

  11. Multiple pathologies are common and related to dementia in the oldest-old

    PubMed Central

    Kim, Ronald C.; Sonnen, Joshua A.; Bullain, Szofia S.; Trieu, Thomas; Corrada, María M.

    2015-01-01

    Objective: The purpose of this study was to examine the role of multiple pathologies in the expression of dementia in the oldest-old. Methods: A total of 183 participants of The 90+ Study with longitudinal follow-up and autopsy were included in this clinical-pathologic investigation. Eight pathologic diagnoses (Alzheimer disease [AD], microinfarcts, hippocampal sclerosis, macroinfarcts, Lewy body disease, cerebral amyloid angiopathy, white matter disease, and others) were dichotomized. We estimated the odds of dementia in relation to each individual pathologic diagnosis and to the total number of diagnoses. We also examined dementia severity in relation to number of pathologic diagnoses. Results: The presence of multiple pathologic diagnoses was common and occurred more frequently in those with dementia compared with those without dementia (45% vs 14%). Higher numbers of pathologic diagnoses were also associated with greater dementia severity. Participants with intermediate/high AD pathology alone were 3 times more likely to have dementia (odds ratio = 3.5), but those with single non-AD pathologies were 12 times more likely to have dementia (odds ratio = 12.4). When a second pathology was present, the likelihood of dementia increased 4-fold in those with intermediate/high AD pathology but did not change in those with non-AD pathologies, suggesting that pathologies may interrelate in different ways. Conclusions: In the oldest-old, the presence of multiple pathologies is associated with increased likelihood and severity of dementia. The effect of the individual pathologies may be additive or perhaps synergistic and requires further research. Multiple pathologies will need to be targeted to reduce the burden of dementia in the population. PMID:26180144

  12. A likelihood ratio-based method to predict exact pedigrees for complex families from next-generation sequencing data.

    PubMed

    Heinrich, Verena; Kamphans, Tom; Mundlos, Stefan; Robinson, Peter N; Krawitz, Peter M

    2017-01-01

    Next generation sequencing technology considerably changed the way we screen for pathogenic mutations in rare Mendelian disorders. However, the identification of the disease-causing mutation amongst thousands of variants of partly unknown relevance is still challenging, and efficient techniques that reduce the genomic search space play a decisive role. Often segregation or linkage analysis is used to prioritize candidates; however, these approaches require correct information about the degree of relationship among the sequenced samples. For quality assurance, an automated control of pedigree structures and sample assignment is therefore highly desirable in order to detect label mix-ups that might otherwise corrupt downstream analysis. We developed an algorithm based on likelihood ratios that discriminates between different classes of relationship for an arbitrary number of genotyped samples. By identifying the most likely class we are able to reconstruct entire pedigrees iteratively, even for highly consanguineous families. We tested our approach on exome data of different sequencing studies and achieved high precision for all pedigree predictions. By analyzing the precision for varying degrees of relatedness or inbreeding we could show that a prediction is robust down to magnitudes of a few hundred loci. A Java standalone application that computes the relationships between multiple samples, as well as an R script that visualizes the pedigree information, is available for download and as a web service at www.gene-talk.de. Contact: heinrich@molgen.mpg.de. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
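
    A minimal sketch of the kind of per-locus likelihood ratio such a method might accumulate, here for a toy biallelic model comparing a parent-offspring hypothesis against unrelatedness under Hardy-Weinberg equilibrium; this is not the authors' algorithm, and the genotype coding, allele frequencies and loci below are invented for illustration.

    ```python
    # Hedged toy sketch (not the authors' algorithm): a log likelihood ratio
    # comparing "parent-offspring" vs "unrelated" for a pair of samples,
    # using biallelic genotypes coded as alt-allele counts (0, 1, 2) and
    # known population allele frequencies, assuming Hardy-Weinberg equilibrium
    # and no genotyping error.
    import math

    def geno_prob(g, p):
        """HWE genotype probability for alt-allele frequency p."""
        q = 1.0 - p
        return {0: q * q, 1: 2 * p * q, 2: p * p}[g]

    def transmit_prob(g_parent, allele):
        """Probability that a parent with genotype g_parent transmits `allele` (0 or 1)."""
        freq_alt = g_parent / 2.0                 # 0, 0.5 or 1
        return freq_alt if allele == 1 else 1.0 - freq_alt

    def child_prob_given_parent(g_child, g_parent, p):
        """P(child genotype | one allele from the parent, one from the population)."""
        q = 1.0 - p
        prob = 0.0
        for a in (0, 1):                          # allele inherited from the parent
            b = g_child - a                       # allele drawn from the population
            if b in (0, 1):
                prob += transmit_prob(g_parent, a) * (p if b == 1 else q)
        return prob

    def log_lr_parent_offspring(genos1, genos2, freqs):
        """Sum of per-locus log10 LRs: parent-offspring vs unrelated."""
        total = 0.0
        for g1, g2, p in zip(genos1, genos2, freqs):
            l_po = geno_prob(g1, p) * child_prob_given_parent(g2, g1, p)
            l_un = geno_prob(g1, p) * geno_prob(g2, p)
            total += math.log10(l_po / l_un)      # impossible pairs (l_po = 0) not handled
        return total

    # Example: three loci; positive values favour parent-offspring over unrelated.
    print(log_lr_parent_offspring([0, 1, 2], [1, 1, 1], [0.3, 0.5, 0.2]))
    ```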

  13. A likelihood ratio-based method to predict exact pedigrees for complex families from next-generation sequencing data

    PubMed Central

    Kamphans, Tom; Mundlos, Stefan; Robinson, Peter N.; Krawitz, Peter M.

    2017-01-01

    Motivation: Next generation sequencing technology considerably changed the way we screen for pathogenic mutations in rare Mendelian disorders. However, the identification of the disease-causing mutation amongst thousands of variants of partly unknown relevance is still challenging, and efficient techniques that reduce the genomic search space play a decisive role. Often segregation or linkage analysis is used to prioritize candidates; however, these approaches require correct information about the degree of relationship among the sequenced samples. For quality assurance, an automated control of pedigree structures and sample assignment is therefore highly desirable in order to detect label mix-ups that might otherwise corrupt downstream analysis. Results: We developed an algorithm based on likelihood ratios that discriminates between different classes of relationship for an arbitrary number of genotyped samples. By identifying the most likely class we are able to reconstruct entire pedigrees iteratively, even for highly consanguineous families. We tested our approach on exome data of different sequencing studies and achieved high precision for all pedigree predictions. By analyzing the precision for varying degrees of relatedness or inbreeding we could show that a prediction is robust down to magnitudes of a few hundred loci. Availability and Implementation: A Java standalone application that computes the relationships between multiple samples, as well as an R script that visualizes the pedigree information, is available for download and as a web service at www.gene-talk.de. Contact: heinrich@molgen.mpg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27565584

  14. Testing the Potential of Vegetation Indices for Land Use/cover Classification Using High Resolution Data

    NASA Astrophysics Data System (ADS)

    Karakacan Kuzucu, A.; Bektas Balcik, F.

    2017-11-01

    Accurate and reliable land use/land cover (LULC) information obtained by remote sensing technology is necessary in many applications such as environmental monitoring, agricultural management, urban planning, hydrological applications, soil management, vegetation condition study and suitability analysis. Obtaining this information remains a challenge, especially in heterogeneous landscapes covering urban and rural areas, due to spectrally similar LULC features. In parallel with technological developments, supplementary data such as satellite-derived spectral indices have begun to be used as additional bands in classification to produce data with high accuracy. The aim of this research is to test the potential of spectral vegetation indices in combination with supervised classification methods to extract reliable LULC information from SPOT 7 multispectral imagery. The Normalized Difference Vegetation Index (NDVI), the Ratio Vegetation Index (RATIO) and the Soil Adjusted Vegetation Index (SAVI) were the three vegetation indices used in this study. The classical maximum likelihood classifier (MLC) and the support vector machine (SVM) algorithm were applied to classify the SPOT 7 image. The selected study region, Catalca, is located northwest of Istanbul in Turkey and has a complex landscape covering artificial surfaces, forest and natural areas, agricultural fields, quarry/mining areas, pasture/scrubland and water bodies. Accuracy assessment of all classified images was performed through overall accuracy and the kappa coefficient. The results indicated that the incorporation of these three vegetation indices decreased the classification accuracy for both the MLC and SVM classifications. In addition, the maximum likelihood classification slightly outperformed the support vector machine classification approach in both overall accuracy and kappa statistics.
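
    The three indices named above have simple closed forms; the sketch below computes them per pixel from red and near-infrared reflectance arrays with NumPy so they can be stacked with the original bands as extra classification features. The SAVI soil-adjustment factor L = 0.5 is a common default assumed here, and the tiny example arrays merely stand in for real SPOT 7 bands.

    ```python
    # Hedged sketch: the three vegetation indices named above, computed per pixel
    # from red and near-infrared reflectance arrays.  L = 0.5 for SAVI is assumed.
    import numpy as np

    def vegetation_indices(red, nir, soil_factor=0.5):
        red = red.astype(float)
        nir = nir.astype(float)
        eps = 1e-10                                   # avoid division by zero
        ndvi = (nir - red) / (nir + red + eps)
        ratio = nir / (red + eps)
        savi = (1.0 + soil_factor) * (nir - red) / (nir + red + soil_factor + eps)
        return ndvi, ratio, savi

    # Example with a tiny 2x2 "image"; real use would read the SPOT 7 bands instead.
    red = np.array([[0.10, 0.20], [0.05, 0.30]])
    nir = np.array([[0.40, 0.25], [0.45, 0.31]])
    ndvi, ratio, savi = vegetation_indices(red, nir)

    # The index layers can then be stacked with the original bands as extra
    # features for a maximum likelihood or SVM classifier.
    stacked = np.dstack([red, nir, ndvi, ratio, savi])
    print(stacked.shape)   # (2, 2, 5)
    ```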

  15. Relationship between lifestyle choices and hyperuricemia in Chinese men and women.

    PubMed

    Liu, Li; Lou, Shanshan; Xu, Ke; Meng, Zhaowei; Zhang, Qing; Song, Kun

    2013-02-01

    We aimed to explore correlations between lifestyle choices and hyperuricemia in a large Chinese population, emphasizing differences between the sexes. Ten thousand four hundred fifty subjects were randomly recruited from Tianjin municipality in China. Hyperuricemia was defined as serum uric acid >420 μmol/L for men and >360 μmol/L for women. Demographic data, highest education degree, work type, commuting means, smoking and drinking status, exercise frequency, and quantitative assessments of dietary factors were collected. Anthropometric measurements and fasting blood tests were performed. Statistical analyses were conducted. Total hyperuricemic prevalence was 12.89%, significantly higher in men than in women. Body mass index, waist circumference, serum indices, and age displayed high correlation coefficients, and most lifestyle factors also showed significant correlations. Binary logistic regression models showed that the odds ratios of developing hyperuricemia associated with eating habits were much greater in males than in females. However, physical activity-related lifestyle choices tended to have a much greater influence on the likelihood of hyperuricemia in females. Lifestyle choices and hyperuricemia are closely related. For males, eating habits have greater influences on the likelihood of developing hyperuricemia. For females, lifestyle factors such as work type, commuting method, and exercise have such effects.

  16. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    PubMed Central

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event related potential (ERP) signal, which represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
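
    For contrast with the joint ML schemes proposed in the paper, the sketch below shows the conventional reference-based estimate they compare against: each trial is aligned to the trial average by maximising the cross-correlation. The toy Gaussian "ERP" bump, delay range and noise level are assumptions for illustration only.

    ```python
    # Hedged sketch of the conventional, reference-based delay estimate (not the
    # proposed joint ML scheme): align each trial to the average of all trials
    # by maximising the cross-correlation.
    import numpy as np

    def estimate_delays(trials):
        """trials: (n_trials, n_samples) array; returns integer sample delays."""
        reference = trials.mean(axis=0)
        delays = []
        for trial in trials:
            xcorr = np.correlate(trial, reference, mode="full")
            lag = np.argmax(xcorr) - (len(reference) - 1)
            delays.append(lag)
        return np.array(delays)

    # Toy data: a Gaussian "ERP" bump shifted by a random delay per trial plus noise.
    rng = np.random.default_rng(1)
    t = np.arange(200)
    erp = np.exp(-0.5 * ((t - 100) / 10.0) ** 2)
    true_delays = rng.integers(-5, 6, size=20)
    trials = np.stack([np.roll(erp, d) + 0.3 * rng.normal(size=t.size) for d in true_delays])
    print(np.c_[true_delays, estimate_delays(trials)][:5])
    ```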

  17. Handling nonresponse in surveys: analytic corrections compared with converting nonresponders.

    PubMed

    Jenkins, Paul; Earle-Richardson, Giulia; Burdick, Patrick; May, John

    2008-02-01

    A large health survey was combined with a simulation study to contrast the reduction in bias achieved by double sampling versus two weighting methods based on propensity scores. The survey used a census of one New York county and double sampling in six others. Propensity scores were modeled as a logistic function of demographic variables and were used in conjunction with a random uniform variate to simulate response in the census. These data were used to estimate the prevalence of chronic disease in a population whose parameters were defined as values from the census. Significant (p < 0.0001) predictors in the logistic function included multiple (vs. single) occupancy (odds ratio (OR) = 1.3), bank card ownership (OR = 2.1), gender (OR = 1.5), home ownership (OR = 1.3), head of household's age (OR = 1.4), and income >$18,000 (OR = 0.8). The model likelihood ratio chi-square was significant (p < 0.0001), with the area under the receiver operating characteristic curve = 0.59. Double-sampling estimates were marginally closer to population values than those from either weighting method. However, the variance was also greater (p < 0.01). The reduction in bias for point estimation from double sampling may be more than offset by the increased variance associated with this method.
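
    A minimal sketch of propensity-score weighting for nonresponse in the spirit described above: model the probability of response from covariates with logistic regression, then weight responders by the inverse of their estimated propensity. The covariates, outcome and response mechanism below are simulated and purely illustrative, not the survey's variables.

    ```python
    # Hedged sketch of propensity-score weighting for nonresponse: model the
    # probability of response from covariates, then weight the responders by the
    # inverse of their estimated propensity.  Data are simulated, not the study's.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    X = rng.normal(size=(n, 3))                       # stand-in demographic covariates
    true_p = 1.0 / (1.0 + np.exp(-(0.2 + 0.5 * X[:, 0] - 0.3 * X[:, 1])))
    responded = rng.random(n) < true_p                # simulated response indicator
    outcome = (rng.random(n) < 0.2 + 0.05 * X[:, 0]).astype(float)  # "chronic disease"

    # Fit the response-propensity model, then weight the responders.
    model = LogisticRegression().fit(X, responded)
    propensity = model.predict_proba(X)[:, 1]
    weights = 1.0 / propensity[responded]

    naive = outcome[responded].mean()
    weighted = np.average(outcome[responded], weights=weights)
    print(f"naive prevalence:    {naive:.3f}")
    print(f"weighted prevalence: {weighted:.3f}")
    print(f"true prevalence:     {outcome.mean():.3f}")
    ```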

  18. A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.

    PubMed

    Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying

    2018-06-13

    The most widely used localization technology is the two-step method that localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that directly localizes transmitters from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimations are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and filter sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to the performance of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.

  19. Validity of height loss as a predictor for prevalent vertebral fractures, low bone mineral density, and vitamin D deficiency.

    PubMed

    Mikula, A L; Hetzel, S J; Binkley, N; Anderson, P A

    2017-05-01

    Many osteoporosis-related vertebral fractures are unappreciated, but their detection is important as their presence increases future fracture risk. We found height loss is a useful tool in detecting patients with vertebral fractures, low bone mineral density, and vitamin D deficiency, which may lead to improvements in patient care. This study aimed to determine if/how height loss can be used to identify patients with vertebral fractures, low bone mineral density, and vitamin D deficiency. A hospital database search was conducted in which four patient groups (those with a diagnosis of osteoporosis-related vertebral fracture, osteoporosis, osteopenia, or vitamin D deficiency) and a control group were evaluated for chart-documented height loss over an average 3.5- to 4-year period. Data were retrieved from 66,021 patients (25,792 men and 40,229 women). A height loss of 1, 2, 3, and 4 cm had a sensitivity of 42, 32, 19, and 14% in detecting vertebral fractures, respectively. Positive likelihood ratios for detecting vertebral fractures were 1.73, 2.35, and 2.89 at 2, 3, and 4 cm of height loss, respectively. Height loss had lower sensitivities and positive likelihood ratios for detecting low bone mineral density and vitamin D deficiency compared to vertebral fractures. Specificity of 1, 2, 3, and 4 cm of height loss was 70, 82, 92, and 95%, respectively. The odds ratios for a patient who loses 1 cm of height being in one of the four diagnostic groups compared to a patient who loses no height were higher for younger and male patients. This study demonstrated that prospective height loss is an effective tool to identify patients with vertebral fractures, low bone mineral density, and vitamin D deficiency, although a lack of height loss does not rule out these diagnoses. If significant height loss is present, the high positive likelihood ratios support a further workup.

  20. The Influence of Tag Presence on the Mortality of Juvenile Chinook Salmon Exposed to Simulated Hydroturbine Passage: Implications for Survival Estimates and Management of Hydroelectric Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, Thomas J.; Brown, Richard S.; Stephenson, John R.

    Each year, millions of fish have telemetry tags (acoustic, radio, inductive) surgically implanted to assess their passage and survival through hydropower facilities. One route of passage of particular concern is through hydro turbines, in which fish may be exposed to a range of potential injuries, including barotraumas from rapid decompression. The change in pressure from acclimation to exposure (nadir) has been found to be an important factor in predicting the likelihood of mortality and injury for juvenile Chinook salmon undergoing rapid decompression associated with simulated turbine passage. The presence of telemetry tags has also been shown to influence the likelihood of injury and mortality for juvenile Chinook salmon. This research investigated the likelihood of mortality and injury for juvenile Chinook salmon carrying telemetry tags and exposed to a range of simulated turbine passage. Several factors were examined as predictors of mortal injury for fish undergoing rapid decompression, and the ratio of pressure change and tag burden were determined to be the most predictive factors. As the ratio of pressure change and tag burden increase, the likelihood of mortal injury also increases. The results of this study suggest that previous survival estimates of juvenile Chinook salmon passing through hydro turbines may have been biased due to the presence of telemetry tags, and this has direct implications to the management of hydroelectric facilities. Realistic examples indicate how the bias in turbine passage survival estimates could be 20% or higher, depending on the mass of the implanted tags and the ratio of acclimation to exposure pressures. Bias would increase as the tag burden and pressure ratio increase, and have direct implications on survival estimates. It is recommended that future survival studies use the smallest telemetry tags possible to minimize the potential bias that may be associated with carrying the tag.

  1. Severity of Carpal Tunnel Syndrome and Diagnostic Accuracy of Hand and Body Anthropometric Measures

    PubMed Central

    Mondelli, Mauro; Farioli, Andrea; Mattioli, Stefano; Aretini, Alessandro; Ginanneschi, Federica; Greco, Giuseppe; Curti, Stefania

    2016-01-01

    Objective To study the diagnostic properties of hand/wrist and body measures according to validated clinical and electrophysiological carpal tunnel syndrome (CTS) severity scales. Methods We performed a prospective case-control study. For each case, two controls were enrolled. Two five-stage clinical and electrophysiological scales were used to evaluate CTS severity. Anthropometric measurements were collected and obesity indicators and hand/wrist ratios were calculated. Area under the receiver operating characteristic curves (AUC), sensitivity, specificity, and likelihood ratios were calculated separately by gender. Results We consecutively enrolled 370 cases and 747 controls. The wrist-palm ratio, waist-hip-height ratio and waist-stature ratio showed the highest proportion of cases with abnormal values in the severe stages of CTS for clinical and electrophysiological severity scales in both genders. Accuracy tended to increase with CTS severity for females and males. In severe stage, most of the indexes presented moderate accuracy in both genders. Among subjects with severe CTS, the wrist-palm ratio presented the highest AUC for hand measures in the clinical and electrophysiological severity scales both in females (AUC 0.83 and 0.76, respectively) and males (AUC 0.91 and 0.82, respectively). Among subjects with severe CTS, the waist-stature ratio showed the highest AUC for body measures in the clinical and electrophysiological severity scales both in females (AUC 0.78 and 0.77, respectively) and males (AUC 0.84 and 0.76, respectively). The results of waist-hip-height ratio AUC were similar. Conclusions Wrist-palm ratio, waist-hip-height ratio and waist-stature ratio could contribute to support the diagnostic hypothesis of severe CTS that however has to be confirmed by nerve conduction study. PMID:27768728

  2. A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits

    PubMed Central

    Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling

    2013-01-01

    Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762

  3. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
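
    A minimal sketch of one predict/update cycle of the underlying dynamic linear (Kalman filtering) model for a scalar state; the adaptive, likelihood-based extensions developed in the report are not shown, and the process/measurement noise values below are illustrative assumptions.

    ```python
    # Hedged minimal sketch of a scalar Kalman filter for the dynamic linear model
    # x_t = a * x_{t-1} + w_t,  y_t = x_t + v_t (basic filter only, not the
    # adaptive likelihood scheme referred to above).
    import numpy as np

    def kalman_filter(y, a=1.0, q=0.1, r=1.0, x0=0.0, p0=1.0):
        """Return filtered state means for the observation sequence y."""
        x, p = x0, p0
        means = []
        for obs in y:
            # Predict
            x_pred = a * x
            p_pred = a * a * p + q
            # Update
            k = p_pred / (p_pred + r)          # Kalman gain
            x = x_pred + k * (obs - x_pred)
            p = (1.0 - k) * p_pred
            means.append(x)
        return np.array(means)

    rng = np.random.default_rng(2)
    truth = np.cumsum(rng.normal(scale=0.3, size=100))    # slowly drifting state
    y = truth + rng.normal(scale=1.0, size=100)           # noisy observations
    print(kalman_filter(y, q=0.09, r=1.0)[:5])
    ```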

  4. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  5. Case complexity scores in congenital heart surgery: a comparative study of the Aristotle Basic Complexity score and the Risk Adjustment in Congenital Heart Surgery (RACHS-1) system.

    PubMed

    Al-Radi, Osman O; Harrell, Frank E; Caldarone, Christopher A; McCrindle, Brian W; Jacobs, Jeffrey P; Williams, M Gail; Van Arsdell, Glen S; Williams, William G

    2007-04-01

    The Aristotle Basic Complexity score and the Risk Adjustment in Congenital Heart Surgery system were developed by consensus to compare outcomes of congenital cardiac surgery. We compared the predictive value of the 2 systems. Of all index congenital cardiac operations at our institution from 1982 to 2004 (n = 13,675), we were able to assign an Aristotle Basic Complexity score, a Risk Adjustment in Congenital Heart Surgery score, and both scores to 13,138 (96%), 11,533 (84%), and 11,438 (84%) operations, respectively. Models of in-hospital mortality and length of stay were generated for Aristotle Basic Complexity and Risk Adjustment in Congenital Heart Surgery using an identical data set in which both Aristotle Basic Complexity and Risk Adjustment in Congenital Heart Surgery scores were assigned. The likelihood ratio test for nested models and paired concordance statistics were used. After adjustment for year of operation, the odds ratios for Aristotle Basic Complexity score 3 versus 6, 9 versus 6, 12 versus 6, and 15 versus 6 were 0.29, 2.22, 7.62, and 26.54 (P < .0001). Similarly, odds ratios for Risk Adjustment in Congenital Heart Surgery categories 1 versus 2, 3 versus 2, 4 versus 2, and 5/6 versus 2 were 0.23, 1.98, 5.80, and 20.71 (P < .0001). Risk Adjustment in Congenital Heart Surgery added significant predictive value over Aristotle Basic Complexity (likelihood ratio chi2 = 162, P < .0001), whereas Aristotle Basic Complexity contributed much less predictive value over Risk Adjustment in Congenital Heart Surgery (likelihood ratio chi2 = 13.4, P = .009). Neither system fully adjusted for the child's age. The Risk Adjustment in Congenital Heart Surgery scores were more concordant with length of stay compared with Aristotle Basic Complexity scores (P < .0001). The predictive value of Risk Adjustment in Congenital Heart Surgery is higher than that of Aristotle Basic Complexity. The use of Aristotle Basic Complexity or Risk Adjustment in Congenital Heart Surgery as risk stratification and trending tools to monitor outcomes over time and to guide risk-adjusted comparisons may be valuable.
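
    The nested-model likelihood ratio test used in this comparison reduces to twice the difference in maximized log-likelihoods, referred to a chi-square distribution. The sketch below illustrates it on simulated data with statsmodels; the scores and outcome are invented for illustration, not the study's registry data.

    ```python
    # Hedged sketch of a likelihood ratio test for nested logistic regression models
    # (does adding a second score improve on the first?).  Data are simulated.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(3)
    n = 2000
    score_a = rng.integers(1, 6, size=n)                       # e.g. one complexity score
    score_b = score_a + rng.integers(-1, 2, size=n)            # a correlated second score
    logit_p = -4.0 + 0.5 * score_b
    died = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

    X_reduced = sm.add_constant(np.column_stack([score_a]))
    X_full = sm.add_constant(np.column_stack([score_a, score_b]))

    reduced = sm.Logit(died, X_reduced).fit(disp=False)
    full = sm.Logit(died, X_full).fit(disp=False)

    lr_stat = 2.0 * (full.llf - reduced.llf)                   # likelihood ratio chi-square
    df = X_full.shape[1] - X_reduced.shape[1]
    print(f"LR chi2 = {lr_stat:.1f}, df = {df}, p = {chi2.sf(lr_stat, df):.4g}")
    ```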

  6. Updated logistic regression equations for the calculation of post-fire debris-flow likelihood in the western United States

    USGS Publications Warehouse

    Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2016-06-30

    Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.

  7. Challenges in Species Tree Estimation Under the Multispecies Coalescent Model

    PubMed Central

    Xu, Bo; Yang, Ziheng

    2016-01-01

    The multispecies coalescent (MSC) model has emerged as a powerful framework for inferring species phylogenies while accounting for ancestral polymorphism and gene tree-species tree conflict. A number of methods have been developed in the past few years to estimate the species tree under the MSC. The full likelihood methods (including maximum likelihood and Bayesian inference) average over the unknown gene trees and accommodate their uncertainties properly but involve intensive computation. The approximate or summary coalescent methods are computationally fast and are applicable to genomic datasets with thousands of loci, but do not make an efficient use of information in the multilocus data. Most of them take the two-step approach of reconstructing the gene trees for multiple loci by phylogenetic methods and then treating the estimated gene trees as observed data, without accounting for their uncertainties appropriately. In this article we review the statistical nature of the species tree estimation problem under the MSC, and explore the conceptual issues and challenges of species tree estimation by focusing mainly on simple cases of three or four closely related species. We use mathematical analysis and computer simulation to demonstrate that large differences in statistical performance may exist between the two classes of methods. We illustrate that several counterintuitive behaviors may occur with the summary methods but they are due to inefficient use of information in the data by summary methods and vanish when the data are analyzed using full-likelihood methods. These include (i) unidentifiability of parameters in the model, (ii) inconsistency in the so-called anomaly zone, (iii) singularity on the likelihood surface, and (iv) deterioration of performance upon addition of more data. We discuss the challenges and strategies of species tree inference for distantly related species when the molecular clock is violated, and highlight the need for improving the computational efficiency and model realism of the likelihood methods as well as the statistical efficiency of the summary methods. PMID:27927902

  8. Parameter estimation of history-dependent leaky integrate-and-fire neurons using maximum-likelihood methods

    PubMed Central

    Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst

    2012-01-01

    When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
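
    The generic maximum likelihood step described above can be illustrated with a far simpler model: the sketch below fits a constant firing rate by minimizing the negative log-likelihood of exponential inter-spike intervals with SciPy. This is a toy Poisson-neuron stand-in, not the history-dependent LIF model or the r-algorithm used in the paper.

    ```python
    # Hedged toy sketch of the generic ML recipe: choose parameters that minimise
    # the negative log-likelihood of the observed spike train.  Here the "model"
    # is an exponential inter-spike interval distribution (a Poisson neuron).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    true_rate = 12.0                                   # spikes per second
    isis = rng.exponential(1.0 / true_rate, size=500)  # observed inter-spike intervals

    def negative_log_likelihood(log_rate, isis):
        rate = np.exp(log_rate)                        # optimise on log scale (rate > 0)
        return -np.sum(np.log(rate) - rate * isis)     # -sum log f(t) for f(t) = rate*exp(-rate*t)

    result = minimize(negative_log_likelihood, x0=[0.0], args=(isis,), method="BFGS")
    print(f"ML rate estimate: {np.exp(result.x[0]):.2f} Hz (true {true_rate} Hz)")
    ```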

  9. Cost-Aware Design of a Discrimination Strategy for Unexploded Ordnance Cleanup

    DTIC Science & Technology

    2011-02-25

    Acronyms: ANN, Artificial Neural Network; AUC, Area Under the Curve; BRAC, Base Realignment And Closure; DLRT, Distance Likelihood Ratio Test; EER, …

  10. Extending the Fellegi-Sunter probabilistic record linkage method for approximate field comparators.

    PubMed

    DuVall, Scott L; Kerber, Richard A; Thomas, Alun

    2010-02-01

    Probabilistic record linkage is a method commonly used to determine whether demographic records refer to the same person. The Fellegi-Sunter method is a probabilistic approach that uses field weights based on log likelihood ratios to determine record similarity. This paper introduces an extension of the Fellegi-Sunter method that incorporates approximate field comparators in the calculation of field weights. The data warehouse of a large academic medical center was used as a case study. The approximate comparator extension was compared with the Fellegi-Sunter method in its ability to find duplicate records previously identified in the data warehouse using different demographic fields and matching cutoffs. The approximate comparator extension misclassified 25% fewer pairs and had a larger Welch's T statistic than the Fellegi-Sunter method for all field sets and matching cutoffs. The accuracy gain provided by the approximate comparator extension grew as less information was provided and as the matching cutoff increased. Given the ubiquity of linkage in both clinical and research settings, the incremental improvement of the extension has the potential to make a considerable impact.
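
    For context, a minimal sketch of the classical Fellegi-Sunter field weights that the extension builds on: agreement on a field contributes log2(m/u) and disagreement contributes log2((1-m)/(1-u)), and the summed weight is compared with a matching cutoff. The m/u probabilities, fields and cutoff below are illustrative, not taken from the data warehouse used in the study.

    ```python
    # Hedged sketch of classical Fellegi-Sunter field weights (the baseline the
    # approximate-comparator extension builds on).  Parameters are illustrative.
    import math

    # m = P(fields agree | records refer to the same person)
    # u = P(fields agree | records refer to different people)
    FIELD_PARAMS = {"last_name": (0.95, 0.01),
                    "first_name": (0.90, 0.05),
                    "birth_date": (0.98, 0.001)}

    def record_pair_weight(agreements):
        """agreements: dict field -> bool; returns the total log2 match weight."""
        total = 0.0
        for field, (m, u) in FIELD_PARAMS.items():
            if agreements[field]:
                total += math.log2(m / u)
            else:
                total += math.log2((1.0 - m) / (1.0 - u))
        return total

    pair = {"last_name": True, "first_name": True, "birth_date": False}
    weight = record_pair_weight(pair)
    print(f"total weight = {weight:.2f}", "-> link" if weight > 5.0 else "-> non-link")
    ```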

  11. Detection methods for non-Gaussian gravitational wave stochastic backgrounds

    NASA Astrophysics Data System (ADS)

    Drasco, Steve; Flanagan, Éanna É.

    2003-04-01

    A gravitational wave stochastic background can be produced by a collection of independent gravitational wave events. There are two classes of such backgrounds, one for which the ratio of the average time between events to the average duration of an event is small (i.e., many events are on at once), and one for which the ratio is large. In the first case the signal is continuous, sounds something like a constant hiss, and has a Gaussian probability distribution. In the second case, the discontinuous or intermittent signal sounds something like popcorn popping, and is described by a non-Gaussian probability distribution. In this paper we address the issue of finding an optimal detection method for such a non-Gaussian background. As a first step, we examine the idealized situation in which the event durations are short compared to the detector sampling time, so that the time structure of the events cannot be resolved, and we assume white, Gaussian noise in two collocated, aligned detectors. For this situation we derive an appropriate version of the maximum likelihood detection statistic. We compare the performance of this statistic to that of the standard cross-correlation statistic both analytically and with Monte Carlo simulations. In general the maximum likelihood statistic performs better than the cross-correlation statistic when the stochastic background is sufficiently non-Gaussian, resulting in a gain factor in the minimum gravitational-wave energy density necessary for detection. This gain factor ranges roughly between 1 and 3, depending on the duty cycle of the background, for realistic observing times and signal strengths for both ground and space based detectors. The computational cost of the statistic, although significantly greater than that of the cross-correlation statistic, is not unreasonable. Before the statistic can be used in practice with real detector data, further work is required to generalize our analysis to accommodate separated, misaligned detectors with realistic, colored, non-Gaussian noise.

  12. Stochastic multicomponent reactive transport analysis of low quality drainage release from waste rock piles: Controls of the spatial distribution of acid generating and neutralizing minerals.

    PubMed

    Pedretti, Daniele; Mayer, K Ulrich; Beckie, Roger D

    2017-06-01

    In mining environmental applications, it is important to assess water quality from waste rock piles (WRPs) and estimate the likelihood of acid rock drainage (ARD) over time. The mineralogical heterogeneity of WRPs is a source of uncertainty in this assessment, undermining the reliability of traditional bulk indicators used in the industry. We focused in this work on the bulk neutralizing potential ratio (NPR), which is defined as the ratio of the content of non-acid-generating minerals (typically reactive carbonates such as calcite) to the content of potentially acid-generating minerals (typically sulfides such as pyrite). We used a streamtube-based Monte-Carlo method to show why and to what extent bulk NPR can be a poor indicator of ARD occurrence. We simulated ensembles of WRPs identical in their geometry and bulk NPR, which only differed in their initial distribution of the acid generating and acid neutralizing minerals that control NPR. All models simulated the same principal acid-producing, acid-neutralizing and secondary mineral forming processes. We show that small differences in the distribution of local NPR values or the number of flow paths that generate acidity strongly influence drainage pH. The results indicate that the likelihood of ARD (epitomized by the probability of occurrence of pH<4 in a mixing boundary) within the first 100 years can be as high as 75% for an NPR of 2 and 40% for an NPR of 4. The latter is traditionally considered a “universally safe” threshold to ensure non-acidic waters in practical applications. Our results suggest that new methods that explicitly account for mineralogical heterogeneity must be sought when computing effective (upscaled) NPR values at the scale of the piles. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Diagnostic accuracy of intraocular pressure measurement for the detection of raised intracranial pressure: meta-analysis: a systematic review.

    PubMed

    Yavin, Daniel; Luu, Judy; James, Matthew T; Roberts, Derek J; Sutherland, Garnette R; Jette, Nathalie; Wiebe, Samuel

    2014-09-01

    Because clinical examination and imaging may be unreliable indicators of intracranial hypertension, intraocular pressure (IOP) measurement has been proposed as a noninvasive method of diagnosis. The authors conducted a systematic review and meta-analysis to determine the correlation between IOP and intracranial pressure (ICP) and the diagnostic accuracy of IOP measurement for detection of intracranial hypertension. The authors searched bibliographic databases (Ovid MEDLINE, Ovid EMBASE, and the Cochrane Central Register of Controlled Trials) from 1950 to March 2013, references of included studies, and conference abstracts for studies comparing IOP and invasive ICP measurement. Two independent reviewers screened abstracts, reviewed full-text articles, and extracted data. Correlation coefficients, sensitivity, specificity, and positive and negative likelihood ratios were calculated using DerSimonian and Laird methods and bivariate random effects models. The I(2) statistic was used as a measure of heterogeneity. Among 355 identified citations, 12 studies that enrolled 546 patients were included in the meta-analysis. The pooled correlation coefficient between IOP and ICP was 0.44 (95% CI 0.26-0.63, I(2) = 97.7%, p < 0.001). The summary sensitivity and specificity for IOP for diagnosing intracranial hypertension were 81% (95% CI 26%-98%, I(2) = 95.2%, p < 0.01) and 95% (95% CI 43%-100%, I(2) = 97.7%, p < 0.01), respectively. The summary positive and negative likelihood ratios were 14.8 (95% CI 0.5-417.7) and 0.2 (95% CI 0.02-1.7), respectively. When ICP and IOP measurements were taken within 1 hour of one another, correlation between the measures improved. Although a modest aggregate correlation was found between IOP and ICP, the pooled diagnostic accuracy suggests that IOP measurement may be of clinical utility in the detection of intracranial hypertension. Given the significant heterogeneity between included studies, further investigation is required prior to the adoption of IOP in the evaluation of intracranial hypertension into routine practice.
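
    A minimal sketch of DerSimonian and Laird random-effects pooling of correlation coefficients via Fisher's z transform, the kind of calculation reported above; the per-study correlations and sample sizes are invented, not the twelve included studies.

    ```python
    # Hedged sketch: DerSimonian-Laird random-effects pooling of correlations
    # via Fisher's z transform.  Study values below are illustrative only.
    import numpy as np

    r = np.array([0.30, 0.55, 0.20, 0.65, 0.45])      # study correlations (illustrative)
    n = np.array([40, 25, 60, 30, 55])                # study sample sizes

    z = np.arctanh(r)                                 # Fisher z transform
    var = 1.0 / (n - 3)                               # approximate within-study variance
    w = 1.0 / var                                     # fixed-effect weights

    z_fixed = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fixed) ** 2)                # heterogeneity statistic
    df = len(r) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

    w_star = 1.0 / (var + tau2)                       # random-effects weights
    z_pooled = np.sum(w_star * z) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    lo, hi = np.tanh(z_pooled - 1.96 * se), np.tanh(z_pooled + 1.96 * se)
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

    print(f"pooled r = {np.tanh(z_pooled):.2f} (95% CI {lo:.2f} to {hi:.2f}), I2 = {i2:.0f}%")
    ```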

  14. Evaluation of direct and indirect ethanol biomarkers using a likelihood ratio approach to identify chronic alcohol abusers for forensic purposes.

    PubMed

    Alladio, Eugenio; Martyna, Agnieszka; Salomone, Alberto; Pirro, Valentina; Vincenti, Marco; Zadora, Grzegorz

    2017-02-01

    The detection of direct ethanol metabolites, such as ethyl glucuronide (EtG) and fatty acid ethyl esters (FAEEs), in scalp hair is considered the optimal strategy to effectively recognize chronic alcohol misuse by means of specific cut-offs suggested by the Society of Hair Testing. However, several factors (e.g. hair treatments) may alter the correlation between alcohol intake and biomarker concentrations, possibly introducing bias in the interpretative process and conclusions. A total of 125 subjects with various drinking habits underwent blood and hair sampling to determine indirect (e.g. CDT) and direct alcohol biomarkers. The overall data were investigated using several multivariate statistical methods. A likelihood ratio (LR) approach was used for the first time to provide predictive models for the diagnosis of alcohol abuse, based on different combinations of direct and indirect alcohol biomarkers. LR strategies provide a more robust outcome than the plain comparison with cut-off values, where tiny changes in the analytical results can lead to dramatic divergence in the way they are interpreted. An LR model combining EtG and FAEEs hair concentrations proved to discriminate non-chronic from chronic consumers with ideal correct classification rates, whereas the contribution of indirect biomarkers proved to be negligible. Optimal results were observed using a novel approach that associates LR methods with multivariate statistics. In particular, the combination of the LR approach with either Principal Component Analysis (PCA) or Linear Discriminant Analysis (LDA) proved successful in discriminating chronic from non-chronic alcohol drinkers. These LR models were subsequently tested on an independent dataset of 43 individuals, which confirmed their high efficiency. These models proved to be less prone to bias than EtG and FAEEs independently considered. In conclusion, LR models may represent an efficient strategy to sustain the diagnosis of chronic alcohol consumption and provide a suitable gradation to support the judgment. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Statistics, Handle with Care: Detecting Multiple Model Components with the Likelihood Ratio Test

    NASA Astrophysics Data System (ADS)

    Protassov, Rostislav; van Dyk, David A.; Connors, Alanna; Kashyap, Vinay L.; Siemiginowska, Aneta

    2002-05-01

    The likelihood ratio test (LRT) and the related F-test, popularized in astrophysics by Eadie and coworkers in 1971, Bevington in 1969, Lampton, Margon, & Bowyer, in 1976, Cash in 1979, and Avni in 1978, do not (even asymptotically) adhere to their nominal χ2 and F-distributions in many statistical tests common in astrophysics, thereby casting many marginal line or source detections and nondetections into doubt. Although the above authors illustrate the many legitimate uses of these statistics, in some important cases it can be impossible to compute the correct false positive rate. For example, it has become common practice to use the LRT or the F-test to detect a line in a spectral model or a source above background despite the lack of certain required regularity conditions. (These applications were not originally suggested by Cash or by Bevington.) In these and other settings that involve testing a hypothesis that is on the boundary of the parameter space, contrary to common practice, the nominal χ2 distribution for the LRT or the F-distribution for the F-test should not be used. In this paper, we characterize an important class of problems in which the LRT and the F-test fail and illustrate this nonstandard behavior. We briefly sketch several possible acceptable alternatives, focusing on Bayesian posterior predictive probability values. We present this method in some detail since it is a simple, robust, and intuitive approach. This alternative method is illustrated using the gamma-ray burst of 1997 May 8 (GRB 970508) to investigate the presence of an Fe K emission line during the initial phase of the observation. There are many legitimate uses of the LRT and the F-test in astrophysics, and even when these tests are inappropriate, there remain several statistical alternatives (e.g., judicious use of error bars and Bayes factors). Nevertheless, there are numerous cases of the inappropriate use of the LRT and similar tests in the literature, bringing substantive scientific results into question.
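
    A small simulation makes the boundary problem concrete: when testing a normal mean constrained to be non-negative against zero, the LRT statistic under the null follows a 50:50 mixture of a point mass at zero and chi-square(1), so the nominal chi-square(1) reference gives the wrong test size. The sample size and simulation settings below are arbitrary choices for illustration.

    ```python
    # Hedged simulation sketch of the boundary problem: H0: mu = 0 vs H1: mu >= 0
    # for a normal mean with known variance.  The nominal chi-square(1) reference
    # is wrong here; the true null distribution is a 50:50 mix of 0 and chi2(1).
    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(5)
    n, n_sim = 50, 20000
    lrt = np.empty(n_sim)
    for i in range(n_sim):
        x = rng.normal(0.0, 1.0, size=n)              # data generated under H0: mu = 0
        mu_hat = max(0.0, x.mean())                   # MLE under the constraint mu >= 0
        lrt[i] = n * mu_hat ** 2                      # 2*(loglik(mu_hat) - loglik(0)), sigma known

    crit = chi2.ppf(0.95, df=1)
    print(f"rejection rate at nominal 5% using chi2(1): {np.mean(lrt > crit):.3f}")
    print("the mixture null's actual 95% point is "
          f"{chi2.ppf(0.90, df=1):.2f}, not {crit:.2f}")
    ```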

  16. A comparison of the prognostic value of preoperative inflammation-based scores and TNM stage in patients with gastric cancer

    PubMed Central

    Pan, Qun-Xiong; Su, Zi-Jian; Zhang, Jian-Hua; Wang, Chong-Ren; Ke, Shao-Ying

    2015-01-01

    Background People’s Republic of China is one of the countries with the highest incidence of gastric cancer, accounting for 45% of all new gastric cancer cases in the world. Therefore, strong prognostic markers are critical for the diagnosis and survival of Chinese patients suffering from gastric cancer. Recent studies have begun to unravel the mechanisms linking the host inflammatory response to tumor growth, invasion and metastasis in gastric cancers. Based on this relationship between inflammation and cancer progression, several inflammation-based scores have been demonstrated to have prognostic value in many types of malignant solid tumors. Objective To compare the prognostic value of inflammation-based prognostic scores and tumor node metastasis (TNM) stage in patients undergoing gastric cancer resection. Methods The inflammation-based prognostic scores were calculated for 207 patients with gastric cancer who underwent surgery. Glasgow prognostic score (GPS), neutrophil lymphocyte ratio (NLR), platelet lymphocyte ratio (PLR), prognostic nutritional index (PNI), and prognostic index (PI) were analyzed. Linear trend chi-square test, likelihood ratio chi-square test, and receiver operating characteristic were performed to compare the prognostic value of the selected scores and TNM stage. Results In univariate analysis, preoperative serum C-reactive protein (P<0.001), serum albumin (P<0.001), GPS (P<0.001), PLR (P=0.002), NLR (P<0.001), PI (P<0.001), PNI (P<0.001), and TNM stage (P<0.001) were significantly associated with both overall survival and disease-free survival of patients with gastric cancer. In multivariate analysis, GPS (P=0.024), NLR (P=0.012), PI (P=0.001), TNM stage (P<0.001), and degree of differentiation (P=0.002) were independent predictors of gastric cancer survival. GPS and TNM stage had a comparable prognostic value and higher linear trend chi-square value, likelihood ratio chi-square value, and larger area under the receiver operating characteristic curve as compared to other inflammation-based prognostic scores. Conclusion The present study indicates that preoperative GPS and TNM stage are robust predictors of gastric cancer survival as compared to NLR, PLR, PI, and PNI in patients undergoing tumor resection. PMID:26124667

  17. Atmospheric neutrino observations in the MINOS far detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapman, John Derek

    2007-09-01

    This thesis presents the results of atmospheric neutrino observations from a 12.23 ktyr exposure of the 5.42 kt MINOS Far Detector between 1 August 2003 and 1 March 2006. The separation of atmospheric neutrino events from the large background of cosmic muon events is discussed. A total of 277 candidate contained-vertex νμ/ν̄μ charged-current data events are observed, with an expectation of 354.4±47.4 events in the absence of neutrino oscillations. A total of 182 events have clearly identified directions: 77 data events are identified as upward going and 105 as downward going. The ratio of the measured to the expected up/down ratio is R(data)/R(MC) = 0.72 +0.13/−0.11 (stat.) ± 0.04 (sys.). This is 2.1σ away from the expectation for no oscillations. A total of 167 data events have clearly identified charge: 112 are identified as νμ events and 55 as ν̄μ events. This is the largest sample of charge-separated contained-vertex atmospheric neutrino interactions so far observed. The ratio of the measured to the expected ν̄μ/νμ ratio is R(data)/R(MC) = 0.93 +0.19/−0.15 (stat.) ± 0.12 (sys.). This is consistent with νμ and ν̄μ having the same oscillation parameters. Bayesian methods were used to generate a log(L/E) value for each event. A maximum likelihood analysis is used to determine the allowed regions for the oscillation parameters Δm²₃₂ and sin²2θ₂₃. The likelihood function uses the uncertainty in log(L/E) to bin events in order to extract as much information from the data as possible. This fit rejects the null oscillations hypothesis at the 98% confidence level. A fit to independent νμ and ν̄μ oscillations assuming maximal mixing for both is also performed. The projected sensitivity after an exposure of 25 ktyr is also discussed.

  18. Association Between Geographic Access to Cancer Care and Receipt of Radiation Therapy for Rectal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Chun Chieh, E-mail: anna.lin@cancer.org; Bruinooge, Suanna S.; Kirkwood, M. Kelsey

    Purpose: Trimodality therapy (chemoradiation and surgery) is the standard of care for stage II/III rectal cancer but nearly one third of patients do not receive radiation therapy (RT). We examined the relationship of radiation oncologist density and travel distance to the receipt of RT. Methods and Materials: A retrospective study based on the National Cancer Data Base identified 26,845 patients aged 18 to 80 years with stage II/III rectal cancer diagnosed from 2007 to 2010. Radiation oncologists were identified through the Physician Compare dataset. Generalized estimating equations clustering by hospital service area were used to examine the association between geographic access and receipt of RT, controlling for patient sociodemographic and clinical characteristics. Results: Of the 26,845 patients, 70% received RT within 180 days of diagnosis or within 90 days of surgery. Compared with a travel distance of <12.5 miles, patients diagnosed at a reporting facility who traveled ≥50 miles had a decreased likelihood of receipt of RT (50-249 miles, adjusted odds ratio 0.75, P<.001; ≥250 miles, adjusted odds ratio 0.46; P=.002), all else being equal. The density level of radiation oncologists was not significantly associated with the receipt of RT. Patients who were female, nonwhite, and aged ≥50 years and had comorbidities were less likely to receive RT (P<.05). Patients who were uninsured but self-paid for their medical services, were initially diagnosed elsewhere but treated at a reporting facility, or resided in the Midwest had an increased likelihood of receipt of RT (P<.05). Conclusions: An increased travel burden was associated with a decreased likelihood of receiving RT for patients with stage II/III rectal cancer, all else being equal; however, radiation oncologist density was not. Further research on geographic access and establishing transportation assistance programs or lodging services for patients with an unmet need might help decrease geographic barriers and improve the quality of rectal cancer care.

  19. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are conducted with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling in which four alternative models are postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
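
    A minimal sketch of the thermodynamic (power posterior) estimate of the marginal likelihood for a toy conjugate model, where the analytic value is available for comparison; the temperature grid, chain length and proposal scale are illustrative choices, not the settings used in the work described above.

    ```python
    # Hedged sketch: thermodynamic (power posterior) estimate of the marginal
    # likelihood for y_i ~ N(theta, sigma^2) with a N(0, tau^2) prior on theta,
    # compared with the analytic value.  All tuning choices are illustrative.
    import numpy as np
    from scipy.stats import norm, multivariate_normal

    rng = np.random.default_rng(6)
    sigma, tau, n = 1.0, 2.0, 20
    y = rng.normal(0.5, sigma, size=n)

    def log_like(theta):
        return np.sum(norm.logpdf(y, loc=theta, scale=sigma))

    def log_prior(theta):
        return norm.logpdf(theta, loc=0.0, scale=tau)

    def power_posterior_mean_loglike(t, n_iter=4000, step=0.5):
        """Metropolis sampling from p_t(theta) proportional to L(theta)^t * prior(theta)."""
        theta, samples = 0.0, []
        lp = t * log_like(theta) + log_prior(theta)
        for _ in range(n_iter):
            prop = theta + step * rng.normal()
            lp_prop = t * log_like(prop) + log_prior(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(log_like(theta))
        return np.mean(samples[n_iter // 2:])          # drop burn-in

    temps = np.linspace(0.0, 1.0, 11) ** 3              # denser near t = 0
    means = [power_posterior_mean_loglike(t) for t in temps]
    log_evidence = np.trapz(means, temps)               # thermodynamic integration

    analytic = multivariate_normal.logpdf(
        y, mean=np.zeros(n), cov=sigma**2 * np.eye(n) + tau**2 * np.ones((n, n)))
    print(f"thermodynamic estimate: {log_evidence:.2f}, analytic: {analytic:.2f}")
    ```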

  20. The Determinants of Place of Death: An Evidence-Based Analysis

    PubMed Central

    Costa, V

    2014-01-01

    Background According to a conceptual model described in this analysis, place of death is determined by an interplay of factors associated with the illness, the individual, and the environment. Objectives Our objective was to evaluate the determinants of place of death for adult patients who have been diagnosed with an advanced, life-limiting condition and are not expected to stabilize or improve. Data Sources A literature search was performed using Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid Embase, EBSCO Cumulative Index to Nursing & Allied Health Literature (CINAHL), and EBM Reviews, for studies published from January 1, 2004, to September 24, 2013. Review Methods Different places of death are considered in this analysis—home, nursing home, inpatient hospice, and inpatient palliative care unit, compared with hospital. We selected factors to evaluate from a list of possible predictors—i.e., determinants—of death. We extracted the adjusted odds ratios and 95% confidence intervals of each determinant, performed a meta-analysis if appropriate, and conducted a stratified analysis if substantial heterogeneity was observed. Results From a literature search yielding 5,899 citations, we included 2 systematic reviews and 29 observational studies. Factors that increased the likelihood of home death included multidisciplinary home palliative care, patient preference, having an informal caregiver, and the caregiver's ability to cope. Factors increasing the likelihood of a nursing home death included the availability of palliative care in the nursing home and the existence of advance directives. A cancer diagnosis and the involvement of home care services increased the likelihood of dying in an inpatient palliative care unit. A cancer diagnosis and a longer time between referral to palliative care and death increased the likelihood of inpatient hospice death. The quality of the evidence was considered low. Limitations Our results are based on those of retrospective observational studies. Conclusions The results obtained were consistent with previously published systematic reviews. The analysis identified several factors that are associated with place of death. PMID:26351550
