Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.
Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram
2017-02-01
In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of these are "true zeros," indicating that the drug-adverse event pair cannot occur; they are distinguished from the remaining, modeled zero counts, which simply indicate that the drug-adverse event pair has not occurred or has not been reported yet. In this paper, a zero-inflated Poisson (ZIP) model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, also called signals. The maximum likelihood estimates of the model parameters are obtained using the expectation-maximization (EM) algorithm. The ZIP model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g., gender and age) in the data. The proposed method is shown to asymptotically control the type I error and false discovery rate, and its finite-sample performance for signal detection is evaluated through a simulation study. The simulation results show that the ZIP model based likelihood ratio test performs similarly to the Poisson model based likelihood ratio test when the estimated percentage of true zeros in the database is small. Both methods are applied to six selected drugs, with varying percentages of observed zero-count cells, from the 2006 to 2011 Adverse Event Reporting System database.
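For illustration, here is a minimal Python sketch of the EM iteration for a two-parameter zero-inflated Poisson model. It is a generic fit under the model described in the abstract, not the authors' implementation; the function name, starting values, and stopping rule are hypothetical.

    import numpy as np

    def zip_em(counts, tol=1e-8, max_iter=500):
        """Fit P(0) = pi + (1 - pi) * exp(-lam) and
        P(k) = (1 - pi) * Poisson(k; lam) for k >= 1, via EM."""
        y = np.asarray(counts, dtype=float)
        pi, lam = 0.5, max(y.mean(), 1e-6)  # crude but safe starting values
        for _ in range(max_iter):
            # E-step: posterior probability that each observed zero is a "true zero"
            z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
            # M-step: update the mixing weight and the Poisson mean
            pi_new, lam_new = z.mean(), y.sum() / (1 - z).sum()
            if abs(pi_new - pi) + abs(lam_new - lam) < tol:
                return pi_new, lam_new
            pi, lam = pi_new, lam_new
        return pi, lam

The E-step responsibilities z are exactly the per-cell estimated probabilities of being a true zero that the abstract refers to.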
Likelihood ratios for glaucoma diagnosis using spectral-domain optical coherence tomography.
Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M; Weinreb, Robert N; Medeiros, Felipe A
2013-11-01
To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral-domain optical coherence tomography (spectral-domain OCT). Observational cohort study. A total of 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the receiver operating characteristic (ROC) curve. Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive likelihood ratios (ie, likelihood ratios greater than 1), whereas RNFL thickness values higher than 86 μm were associated with negative likelihood ratios (ie, likelihood ratios smaller than 1). A modified Fagan nomogram was provided to assist calculation of posttest probability of disease from the calculated likelihood ratios and pretest probability of disease. The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision making.
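The Fagan nomogram referenced here is Bayes' theorem in odds form, which is easy to compute directly; a minimal sketch, where the pretest probability and the thickness-specific likelihood ratio in the example are hypothetical values chosen only for illustration:

    def posttest_probability(pretest_p, likelihood_ratio):
        """Bayes' theorem in odds form: posttest odds = pretest odds * LR."""
        pretest_odds = pretest_p / (1.0 - pretest_p)
        posttest_odds = pretest_odds * likelihood_ratio
        return posttest_odds / (1.0 + posttest_odds)

    # Hypothetical example: a 30% pretest probability of glaucoma combined with
    # an RNFL-thickness-specific LR of 5 gives roughly a 68% posttest probability.
    print(posttest_probability(0.30, 5.0))  # ~0.68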
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
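Method 1, the classical Monte Carlo p-value, can be sketched generically as follows; this is not the vxdbel implementation, and simulate_statistic is a hypothetical callable that draws one realization of the test statistic under the null:

    import numpy as np

    def monte_carlo_p_value(t_obs, simulate_statistic, n_sim=9999, seed=None):
        """Exact-test p-value by simulation; the +1 terms make the estimate
        valid (never exactly zero) and correspond to including the observed
        statistic among the null draws."""
        rng = np.random.default_rng(seed)
        t_null = np.array([simulate_statistic(rng) for _ in range(n_sim)])
        return (1 + np.sum(t_null >= t_obs)) / (n_sim + 1)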
Maximum likelihood estimation of signal-to-noise ratio and combiner weight
NASA Technical Reports Server (NTRS)
Kalson, S.; Dolinar, S. J.
1986-01-01
An algorithm for estimating signal-to-noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete time series are the sufficient statistics obtained after matched filtering of a biphase-modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
Meta-analysis: accuracy of rapid tests for malaria in travelers returning from endemic areas.
Marx, Arthur; Pewsner, Daniel; Egger, Matthias; Nüesch, Reto; Bucher, Heiner C; Genton, Blaise; Hatz, Christoph; Jüni, Peter
2005-05-17
Microscopic diagnosis of malaria is unreliable outside specialized centers. Rapid tests have become available in recent years, but their accuracy has not been assessed systematically. To determine the accuracy of rapid diagnostic tests for ruling out malaria in nonimmune travelers returning from malaria-endemic areas. The authors searched MEDLINE, EMBASE, CAB Health, and CINAHL (1988 to September 2004); hand-searched conference proceedings; checked reference lists; and contacted experts and manufacturers. Diagnostic accuracy studies in nonimmune individuals with suspected malaria were included if they compared rapid tests with expert microscopic examination or polymerase chain reaction tests. Data on study and patient characteristics and results were extracted in duplicate. The main outcome was the likelihood ratio for a negative test result (negative likelihood ratio) for Plasmodium falciparum malaria. Likelihood ratios were combined by using random-effects meta-analysis, stratified by the antigen targeted (histidine-rich protein-2 [HRP-2] or parasite lactate dehydrogenase [LDH]) and by test generation. Nomograms of post-test probabilities were constructed. The authors included 21 studies and 5747 individuals. For P. falciparum, HRP-2-based tests were more accurate than parasite LDH-based tests: Negative likelihood ratios were 0.08 and 0.13, respectively (P = 0.019 for difference). Three-band HRP-2 tests had similar negative likelihood ratios but higher positive likelihood ratios compared with 2-band tests (98.5 vs. 34.7; P = 0.003). For P. vivax, negative likelihood ratios tended to be closer to 1.0 for HRP-2-based tests than for parasite LDH-based tests (0.24 vs. 0.13; P = 0.22), but analyses were based on a few heterogeneous studies. Negative likelihood ratios for the diagnosis of P. malariae or P. ovale were close to 1.0 for both types of tests. In febrile travelers returning from sub-Saharan Africa, the typical probability of P. falciparum malaria is estimated at 1.1% (95% CI, 0.6% to 1.9%) after a negative 3-band HRP-2 test result and 97% (CI, 92% to 99%) after a positive test result. Few studies evaluated 3-band HRP-2 tests. The evidence is also limited for species other than P. falciparum because of the few available studies and their more heterogeneous results. Further studies are needed to determine whether the use of rapid diagnostic tests improves outcomes in returning travelers with suspected malaria. Rapid malaria tests may be a useful diagnostic adjunct to microscopy in centers without major expertise in tropical medicine. Initial decisions on treatment initiation and choice of antimalarial drugs can be based on travel history and post-test probabilities after rapid testing. Expert microscopy is still required for species identification and confirmation.
Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les
2008-01-01
To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
Likelihood-Ratio DIF Testing: Effects of Nonnormality
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
Differential item functioning (DIF) occurs when an item has different measurement properties for members of one group versus another. Likelihood-ratio (LR) tests for DIF based on item response theory (IRT) involve statistically comparing IRT models that vary with respect to their constraints. A simulation study evaluated how violation of the…
Sinharay, Sandip
2017-09-01
Benefiting from item preknowledge is a major type of fraudulent behavior during educational assessments. Belov suggested the posterior shift statistic for detection of item preknowledge and showed its performance to be better on average than that of seven other statistics for detection of item preknowledge for a known set of compromised items. Sinharay suggested a statistic based on the likelihood ratio test for detection of item preknowledge; the advantage of this statistic is that its null distribution is known. Results from simulated and real data, and from both adaptive and nonadaptive tests, are used to demonstrate that the Type I error rate and power of the statistic based on the likelihood ratio test are very similar to those of the posterior shift statistic. Thus, the statistic based on the likelihood ratio test appears promising for detecting item preknowledge when the set of compromised items is known.
Chen, Helen; Bautista, Dianne; Ch'ng, Ying Chia; Li, Wenyun; Chan, Edwin; Rush, A John
2013-06-01
The Edinburgh Postnatal Depression Scale (EPDS) may not be a uniformly valid postnatal depression (PND) screen across populations. We evaluated the performance of a Chinese translation of the 10-item (HK-EPDS) and six-item (HK-EPDS-6) versions in post-partum women in Singapore. Chinese-speaking post-partum obstetric clinic patients were recruited for this study. They completed the HK-EPDS, from which we derived the six-item HK-EPDS-6. All women were clinically assessed for PND based on Diagnostic and Statistical Manual, Fourth Edition-Text Revision criteria. Receiver operating characteristic (ROC) analyses and likelihood ratio computations informed scale cutoff choices. Clinical fitness was judged by thresholds for internal consistency [α ≥ 0.70] and for diagnostic performance by true-positive rate (>85%), false-positive rate (≤10%), positive likelihood ratio (>1), negative likelihood ratio (<0.2), area under the ROC curve (AUC, ≥90%) and effect size (≥0.80). Based on clinical interview, the prevalence of PND was 6.2% in 487 post-partum women. HK-EPDS internal consistency was 0.84. At a cutoff of 13 or more, the true-positive rate was 86.7%, false-positive rate 3.3%, positive likelihood ratio 26.4, negative likelihood ratio 0.14, AUC 94.4% and effect size 0.81. For the HK-EPDS-6, internal consistency was 0.76. At a cutoff of 8 or more, we found a true-positive rate of 86.7%, false-positive rate 6.6%, positive likelihood ratio 13.2, negative likelihood ratio 0.14, AUC 92.9% and effect size 0.98. The HK-EPDS (cutoff ≥13) and HK-EPDS-6 (cutoff ≥8) are fit for PND screening in general-population post-partum women. The brief six-item version appears to be clinically suitable for quick screening in Chinese-speaking women.
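The reported likelihood ratios follow directly from the true- and false-positive rates, since LR+ = TPR/FPR and LR- = (1 - TPR)/(1 - FPR); a quick arithmetic check in Python:

    def lr_from_rates(tpr, fpr):
        """Positive and negative likelihood ratios from the true-positive
        and false-positive rates of a screening test."""
        return tpr / fpr, (1 - tpr) / (1 - fpr)

    # HK-EPDS at cutoff >= 13: TPR 86.7%, FPR 3.3%
    print(lr_from_rates(0.867, 0.033))  # ~(26.3, 0.14); matches the reported 26.4 and 0.14 up to rounding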
A quantum framework for likelihood ratios
NASA Astrophysics Data System (ADS)
Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C.
The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes’ theorem either defaults to the marginal probability driven “naive Bayes’ classifier”, or requires the use of compensatory expectation-maximization techniques. This paper takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes’ theorem is a special case of a more general quantum mechanical expression.
Wang, Liang; Xia, Yu; Jiang, Yu-Xin; Dai, Qing; Li, Xiao-Yi
2012-11-01
To assess the efficacy of sonography for discriminating nodular Hashimoto thyroiditis from papillary thyroid carcinoma in patients with sonographically evident diffuse Hashimoto thyroiditis. This study included 20 patients with 24 surgically confirmed Hashimoto thyroiditis nodules and 40 patients with 40 papillary thyroid carcinoma nodules; all had sonographically evident diffuse Hashimoto thyroiditis. A retrospective review of the sonograms was performed, and significant benign and malignant sonographic features were selected by univariate and multivariate analyses. The combined likelihood ratio was calculated as the product of each feature's likelihood ratio for papillary thyroid carcinoma. We compared the abilities of the original sonographic features and combined likelihood ratios in diagnosing nodular Hashimoto thyroiditis and papillary thyroid carcinoma by their sensitivity, specificity, and Youden index. The diagnostic capabilities of the sonographic features varied greatly, with Youden indices ranging from 0.175 to 0.700. Compared with single features, combinations of features were unable to improve the Youden indices effectively because the sensitivity and specificity usually changed in opposite directions. For combined likelihood ratios, however, the sensitivity improved greatly without an obvious reduction in specificity, which resulted in the maximum Youden index (0.825). With a combined likelihood ratio greater than 7.00 as the diagnostic criterion for papillary thyroid carcinoma, sensitivity reached 82.5%, whereas specificity remained at 100.0%. With a combined likelihood ratio less than 1.00 for nodular Hashimoto thyroiditis, sensitivity and specificity were 90.0% and 92.5%, respectively. Several sonographic features of nodular Hashimoto thyroiditis and papillary thyroid carcinoma in a background of diffuse Hashimoto thyroiditis were significantly different. The combined likelihood ratio may be superior to original sonographic features for discrimination of nodular Hashimoto thyroiditis from papillary thyroid carcinoma; therefore, it is a promising risk index for thyroid nodules and warrants further investigation.
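The combined likelihood ratio used here is a product of per-feature likelihood ratios, which implicitly treats the sonographic features as conditionally independent given the diagnosis; a minimal sketch, with hypothetical per-feature values:

    import math

    def combined_likelihood_ratio(feature_lrs):
        """Product of per-feature likelihood ratios (a naive-Bayes-style
        combination assuming conditional independence of the features)."""
        return math.prod(feature_lrs)

    # Hypothetical per-feature LRs for papillary thyroid carcinoma:
    print(combined_likelihood_ratio([3.2, 1.8, 2.5]))  # 14.4 > 7.00, above the paper's carcinoma criterion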
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards.
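A minimal sketch of the empirical likelihood ratio idea for a single predictor grid, assuming histogram-based estimates of the class-conditional frequency distributions; the binning scheme and names are hypothetical, not the authors' code:

    import numpy as np

    def empirical_lr(values, landslide_mask, bins):
        """Per-cell likelihood ratio f(value | landslide) / f(value | no landslide)
        from empirical (histogram) frequency distributions."""
        f1, _ = np.histogram(values[landslide_mask], bins=bins, density=True)
        f0, _ = np.histogram(values[~landslide_mask], bins=bins, density=True)
        idx = np.clip(np.digitize(values, bins) - 1, 0, len(bins) - 2)
        return f1[idx] / np.maximum(f0[idx], 1e-12)

Under the conditional independence assumption, the per-variable ratios (elevation, slope angle, slope aspect, lithology) multiply to give a relative hazard estimate for each grid cell.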
Horsch, Karla; Pesce, Lorenzo L.; Giger, Maryellen L.; Metz, Charles E.; Jiang, Yulei
2012-01-01
Purpose: The authors developed scaling methods that monotonically transform the output of one classifier to the “scale” of another. Such transformations affect the distribution of classifier output while leaving the ROC curve unchanged. In particular, they investigated transformations between radiologists and computer classifiers, with the goal of addressing the problem of comparing and interpreting case-specific values of output from two classifiers. Methods: Using both simulated and radiologists’ rating data of breast imaging cases, the authors investigated a likelihood-ratio-scaling transformation, based on “matching” classifier likelihood ratios. For comparison, three other scaling transformations were investigated that were based on matching classifier true positive fraction, false positive fraction, or cumulative distribution function, respectively. The authors explored modifying the computer output to reflect the scale of the radiologist, as well as modifying the radiologist’s ratings to reflect the scale of the computer. They also evaluated how dataset size affects the transformations. Results: When ROC curves of two classifiers differed substantially, the four transformations were found to be quite different. The likelihood-ratio scaling transformation was found to vary widely from radiologist to radiologist. Similar results were found for the other transformations. The authors’ simulations explored the effect of dataset size on the accuracy of the estimated scaling transformations. Conclusions: The likelihood-ratio-scaling transformation that the authors have developed and evaluated was shown to be capable of transforming computer and radiologist outputs to a common scale reliably, thereby allowing the comparison of the computer and radiologist outputs on the basis of a clinically relevant statistic. PMID:22559651
Validation of DNA-based identification software by computation of pedigree likelihood ratios.
Slooten, K
2011-08-01
Disaster victim identification (DVI) can be aided by DNA evidence, by comparing the DNA profiles of unidentified individuals with those of surviving relatives. The DNA evidence is used optimally when such a comparison is done by calculating the appropriate likelihood ratios. Though conceptually simple, the calculations can be quite involved, especially with large pedigrees, precise mutation models, etc. In this article we describe a series of test cases that can be used to check whether software designed to calculate such likelihood ratios computes them correctly. The cases include both simple and more complicated pedigrees, including inbred ones. We show how to calculate the likelihood ratio numerically and algebraically, including a general mutation model and the possibility of allelic dropout. In Appendix A we show how to derive such algebraic expressions mathematically. We have set up these cases to validate new software, called Bonaparte, which performs pedigree likelihood ratio calculations in a DVI context. Bonaparte has been developed by SNN Nijmegen (The Netherlands) for the Netherlands Forensic Institute (NFI). It is available free of charge for non-commercial purposes (see www.dnadvi.nl for details). Commercial licenses can also be obtained. The software uses Bayesian networks and the junction tree algorithm to perform its calculations.
Equivalence of binormal likelihood-ratio and bi-chi-squared ROC curve models
Hillis, Stephen L.
2015-01-01
A basic assumption for a meaningful diagnostic decision variable is that there is a monotone relationship between it and its likelihood ratio. This relationship, however, generally does not hold for a decision variable that results in a binormal ROC curve. As a result, receiver operating characteristic (ROC) curve estimation based on the assumption of a binormal ROC-curve model produces improper ROC curves that have “hooks,” are not concave over the entire domain, and cross the chance line. Although in practice this “improperness” is usually not noticeable, sometimes it is evident and problematic. To avoid this problem, Metz and Pan proposed basing ROC-curve estimation on the assumption of a binormal likelihood-ratio (binormal-LR) model, which states that the decision variable is an increasing transformation of the likelihood-ratio function of a random variable having normal conditional diseased and nondiseased distributions. However, their development is not easy to follow. I show that the binormal-LR model is equivalent to a bi-chi-squared model in the sense that the families of corresponding ROC curves are the same. The bi-chi-squared formulation provides an easier-to-follow development of the binormal-LR ROC curve and its properties in terms of well-known distributions. PMID:26608405
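For reference, the binormal ROC curve at issue has the closed form TPF = Phi(a + b * Phi^{-1}(FPF)), and its slope at any operating point is the likelihood ratio; a short sketch showing the "hook" (the parameter values a = 1.5, b = 0.5 are hypothetical):

    import numpy as np
    from scipy.stats import norm

    def binormal_roc(fpf, a, b):
        """Binormal ROC curve: TPF = Phi(a + b * Phi^{-1}(FPF))."""
        return norm.cdf(a + b * norm.ppf(fpf))

    fpf = np.linspace(1e-4, 1 - 1e-4, 1000)
    tpf = binormal_roc(fpf, a=1.5, b=0.5)
    # With b != 1 the curve's slope (the likelihood ratio) is not monotone,
    # so the curve hooks and crosses the chance line near one end:
    print(np.any(tpf < fpf))  # True for these parameter values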
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
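In its generic form, Wald's SPRT accumulates log-likelihood ratios and compares the running sum with thresholds set by the desired error rates; a minimal sketch of the generic test, not the paper's conjunction-specific reduction:

    import math

    def wald_sprt(log_lr_increments, alpha=0.01, beta=0.01):
        """Sequentially compare the cumulative log-likelihood ratio with
        Wald's thresholds log((1-beta)/alpha) and log(beta/(1-alpha))."""
        upper = math.log((1 - beta) / alpha)
        lower = math.log(beta / (1 - alpha))
        total, n = 0.0, 0
        for inc in log_lr_increments:
            n += 1
            total += inc
            if total >= upper:
                return "accept H1", n
            if total <= lower:
                return "accept H0", n
        return "continue sampling", n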
Van Hoeyveld, Erna; Nickmans, Silvie; Ceuppens, Jan L; Bossuyt, Xavier
2015-10-23
Cut-off values and predictive values are used for the clinical interpretation of specific IgE antibody results. However, cut-off levels are not well defined, and predictive values depend on the prevalence of disease. The objective of this study was to document clinically relevant diagnostic accuracy of specific IgE for inhalant allergens (grass pollen and birch pollen) based on test result interval-specific likelihood ratios. Likelihood ratios are independent of the prevalence and make it possible to provide diagnostic accuracy information for test result intervals. In a prospective study we included consecutive adult patients presenting at an allergy clinic with complaints of rhinitis or rhinoconjunctivitis. The standard for diagnosis was a suggestive clinical history of grass or birch pollen allergy and a positive skin test. Specific IgE was determined with the ImmunoCAP Fluorescence Enzyme Immuno-Assay. We established specific IgE test result interval related likelihood ratios for clinical allergy to inhalant allergens (grass pollen, rPhl p 1,5; birch pollen, rBet v 1). The likelihood ratios for allergy increased with increasing specific IgE antibody levels. The likelihood ratio was <0.03 for specific IgE <0.1 kU/L, between 0.1 and 1.4 for specific IgE between 0.1 kU/L and 0.35 kU/L, between 1.4 and 4.2 for specific IgE between 0.35 kU/L and 0.7 kU/L, >6.3 for specific IgE >0.7 kU/L, and very high (∞) for specific IgE >3.5 kU/L. Test result interval specific likelihood ratios provide a useful tool for the interpretation of specific IgE test results for inhalant allergens.
The Sequential Probability Ratio Test and Binary Item Response Models
ERIC Educational Resources Information Center
Nydick, Steven W.
2014-01-01
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
Three regularities of recognition memory: the role of bias.
Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok
2015-12-01
A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, and (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) that alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
ERIC Educational Resources Information Center
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
A New Monte Carlo Method for Estimating Marginal Likelihoods.
Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O
2018-06-01
Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
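For context, the harmonic mean estimator that the proposed class generalizes can be computed stably in log space; a minimal sketch assuming the log-likelihoods of posterior draws are already available:

    import numpy as np
    from scipy.special import logsumexp

    def log_harmonic_mean_estimate(log_likelihoods):
        """Harmonic mean estimate of the log marginal likelihood from posterior
        draws: m(y) ~ n / sum_i 1/L(theta_i). Known to have high variance,
        which motivates the better-behaved estimators discussed above."""
        log_liks = np.asarray(log_likelihoods, dtype=float)
        return np.log(len(log_liks)) - logsumexp(-log_liks)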
Using DNA fingerprints to infer familial relationships within NHANES III households
Katki, Hormuzd A.; Sanders, Christopher L.; Graubard, Barry I.; Bergen, Andrew W.
2009-01-01
Developing, targeting, and evaluating genomic strategies for population-based disease prevention require population-based data. In response to this urgent need, genotyping has been conducted within the Third National Health and Nutrition Examination Survey (NHANES III), the nationally representative household-interview health survey in the U.S. However, before these genetic analyses can occur, family relationships within households must be accurately ascertained. Unfortunately, reported family relationships within NHANES III households based on questionnaire data are incomplete and inconclusive with regard to the actual biological relatedness of family members. We inferred family relationships within households using DNA fingerprints (Identifiler®) that contain the DNA loci used by law enforcement agencies for forensic identification of individuals. However, the performance of these loci for relationship inference is not well understood. We evaluated two competing statistical methods for relationship inference on pairs of household members: an exact likelihood ratio that relies on allele frequencies, and an identity-by-state (IBS) likelihood ratio that requires only matching alleles. We modified these methods to account for genotyping errors and population substructure. The two methods usually agree on the rankings of the most likely relationships. However, the IBS method underestimates the likelihood ratio by not accounting for the informativeness of matching rare alleles. The likelihood ratio is sensitive to estimates of population substructure, and parent-child relationships are sensitive to the specified genotyping error rate. These loci were unable to distinguish second-degree relationships and cousins from unrelated pairs. The genetic data are also useful for verifying reported relationships and identifying data quality issues. An important by-product is the first explicitly nationally representative estimates of allele frequencies at these ubiquitous forensic loci. PMID:20664713
Likelihood Ratio Tests for Special Rasch Models
ERIC Educational Resources Information Center
Hessen, David J.
2010-01-01
In this article, a general class of special Rasch models for dichotomous item scores is considered. Although Andersen's likelihood ratio test can be used to test whether a Rasch model fits to the data, the test does not differentiate between special Rasch models. Therefore, in this article, new likelihood ratio tests are proposed for testing…
Exclusion probabilities and likelihood ratios with applications to kinship problems.
Slooten, Klaas-Jan; Egeland, Thore
2014-05-01
In forensic genetics, DNA profiles are compared in order to make inferences, paternity cases being a standard example. The statistical evidence can be summarized and reported in several ways. For example, in a paternity case, the likelihood ratio (LR) and the probability of not excluding a random man as father (RMNE) are two common summary statistics. There has been a long debate on the merits of the two statistics, also in the context of DNA mixture interpretation, and no general consensus has been reached. In this paper, we show that the RMNE is a certain weighted average of inverse likelihood ratios. This is true in any forensic context. We show that the likelihood ratio in favor of the correct hypothesis is, in expectation, bigger than the reciprocal of the RMNE probability. However, with the exception of pathological cases, it is also possible to obtain smaller likelihood ratios. We illustrate this result for paternity cases. Moreover, some theoretical properties of the likelihood ratio for a large class of general pairwise kinship cases, including expected value and variance, are derived. The practical implications of the findings are discussed and exemplified.
Likelihood Ratios for the Emergency Physician.
Peng, Paul; Coyle, Andrew
2018-04-26
The concept of likelihood ratios was introduced more than 40 years ago, yet this powerful metric has still not seen wider application or discussion in the medical decision-making process. There is concern that clinicians-in-training are still being taught an oversimplified approach to diagnostic test performance and have limited exposure to likelihood ratios. Even those familiar with likelihood ratios might perceive them as mathematically cumbersome in application, if not difficult to determine for a particular disease process.
Wang, Lina; Li, Hao; Yang, Zhongyuan; Guo, Zhuming; Zhang, Quan
2015-07-01
This study was designed to assess the efficiency of the serum thyrotropin to thyroglobulin ratio for thyroid nodule evaluation in euthyroid patients. Cross-sectional study. Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China. Retrospective analysis was performed for 400 previously untreated cases presenting with thyroid nodules. Thyroid function was tested with commercially available radioimmunoassays. Receiver operating characteristic curves were constructed to determine cutoff values. The efficacy of the thyrotropin:thyroglobulin ratio and thyroid-stimulating hormone for thyroid nodule evaluation was assessed in terms of sensitivity, specificity, positive predictive value, positive likelihood ratio, negative likelihood ratio, and odds ratio. In receiver operating characteristic curve analysis, the area under the curve was 0.746 for the thyrotropin:thyroglobulin ratio and 0.659 for thyroid-stimulating hormone. With a cutoff point value of 24.97 IU/g for the thyrotropin:thyroglobulin ratio, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 78.9%, 60.8%, 75.5%, 2.01, and 0.35, respectively. The odds ratio for the thyrotropin:thyroglobulin ratio indicating malignancy was 5.80. With a cutoff point value of 1.525 µIU/mL for thyroid-stimulating hormone, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 74.0%, 53.2%, 70.8%, 1.58, and 0.49, respectively. The odds ratio indicating malignancy for thyroid-stimulating hormone was 3.23. An increasing preoperative serum thyrotropin:thyroglobulin ratio is a risk factor for thyroid carcinoma, and its correlation with malignancy is higher than that of serum thyroid-stimulating hormone.
Physician Bayesian updating from personal beliefs about the base rate and likelihood ratio.
Rottman, Benjamin Margolin
2017-02-01
Whether humans can accurately make decisions in line with Bayes' rule has been one of the most important yet contentious topics in cognitive psychology. Though a number of paradigms have been used for studying Bayesian updating, rarely have subjects been allowed to use their own preexisting beliefs about the prior and the likelihood. A study is reported in which physicians judged the posttest probability of a diagnosis for a patient vignette after receiving a test result, and the physicians' posttest judgments were compared with the normative posttest probability calculated from their own beliefs about the sensitivity and false-positive rate of the test (the likelihood ratio) and the prior probability of the diagnosis. On the one hand, the posttest judgments were strongly related to the physicians' beliefs about both the prior probability and the likelihood ratio, and the priors were used considerably more strongly than in previous research. On the other hand, both the priors and the likelihoods were still not used quite as much as they should have been, and there was evidence of other nonnormative aspects of the updating, such as updating independently of the likelihood beliefs. By focusing on how physicians use their own prior beliefs for Bayesian updating, this study provides insight into how well experts perform probabilistic inference in settings in which they rely upon their own prior beliefs rather than experimenter-provided cues. It suggests that there is reason to be optimistic about experts' abilities, but that there is still considerable need for improvement.
Newman, Phil; Adams, Roger; Waddington, Gordon
2012-09-01
To examine the relationship between two clinical test results and future diagnosis of medial tibial stress syndrome (MTSS) in personnel at a military trainee establishment. Data from a preparticipation musculoskeletal screening test performed on 384 Australian Defence Force Academy Officer Cadets were compared against 693 injuries reported by 326 of the Officer Cadets in the following 16 months. Data were held in an Injury Surveillance database and analysed using χ² and Fisher's Exact tests, and Receiver Operating Characteristic Curve analysis. Diagnosis of MTSS was confirmed by an independent blinded health practitioner. Both the palpation and oedema clinical tests were found to be significant predictors of later onset of MTSS. Specifically: Shin palpation test OR 4.63, 95% CI 2.5 to 8.5, Positive Likelihood Ratio 3.38, Negative Likelihood Ratio 0.732, Pearson χ² p<0.001; Shin oedema test OR 76.1, 95% CI 9.6 to 602.7, Positive Likelihood Ratio 7.26, Negative Likelihood Ratio 0.095, Fisher's Exact p<0.001; Combined Shin Palpation Test and Shin Oedema Test Positive Likelihood Ratio 7.94, Negative Likelihood Ratio <0.001, Fisher's Exact p<0.001. Female gender was found to be an independent risk factor (OR 2.97, 95% CI 1.66 to 5.31, Positive Likelihood Ratio 2.09, Negative Likelihood Ratio 0.703, Pearson χ² p<0.001) for developing MTSS. The tests for MTSS employed here are components of a normal clinical examination used to diagnose MTSS. This paper confirms that these tests and female gender can also be confidently applied in predicting those in an asymptomatic population who are at greater risk of developing MTSS symptoms with activity at some point in the future.
Mapping Quantitative Traits in Unselected Families: Algorithms and Examples
Dupuis, Josée; Shi, Jianxin; Manning, Alisa K.; Benjamin, Emelia J.; Meigs, James B.; Cupples, L. Adrienne; Siegmund, David
2009-01-01
Linkage analysis has been widely used to identify, from family data, genetic variants influencing quantitative traits. Common approaches have both strengths and limitations. Likelihood ratio tests typically computed in variance component analysis can accommodate large families but are highly sensitive to departures from normality assumptions. Regression-based approaches are more robust, but their use has primarily been restricted to nuclear families. In this paper, we develop methods for mapping quantitative traits in moderately large pedigrees. Our methods are based on the score statistic, which, in contrast to the likelihood ratio statistic, can use nonparametric estimators of variability to achieve robustness of the false positive rate against departures from the hypothesized phenotypic model. Because the score statistic is easier to calculate than the likelihood ratio statistic, our basic mapping methods utilize relatively simple computer code that performs statistical analysis on output from any program that computes estimates of identity-by-descent. This simplicity also permits development and evaluation of methods to deal with multivariate and ordinal phenotypes, and with gene-gene and gene-environment interaction. We demonstrate our methods on simulated data and on fasting insulin, a quantitative trait measured in the Framingham Heart Study. PMID:19278016
Bivariate categorical data analysis using normal linear conditional multinomial probability model.
Sun, Bingrui; Sutradhar, Brajendra
2015-02-10
Bivariate multinomial data, such as left- and right-eye retinopathy status data, are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities that are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. This latter odds ratio-based model also does not provide any easy interpretation of the correlations between the two categorical variables. On the basis of pre-specified marginal probabilities, in this paper we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data.
Order-restricted inference for means with missing values.
Wang, Heng; Zhong, Ping-Shou
2017-09-01
Missing values appear very often in many applications, but the problem of missing values has not received much attention in testing order-restricted alternatives. Under the missing at random (MAR) assumption, we impute the missing values nonparametrically using kernel regression. For data with imputation, the classical likelihood ratio test designed for testing order-restricted means is no longer applicable since the likelihood does not exist. This article proposes a novel method for constructing test statistics for assessing means with an increasing order or a decreasing order based on the jackknife empirical likelihood (JEL) ratio. It is shown that the JEL ratio statistic evaluated under the null hypothesis converges to a chi-bar-square distribution, whose weights depend on the missing probabilities and the nonparametric imputation. A simulation study shows that the proposed test performs well under various missing scenarios and is robust for normally and nonnormally distributed data. The proposed method is applied to an Alzheimer's Disease Neuroimaging Initiative data set for finding a biomarker for the diagnosis of Alzheimer's disease.
Measures of accuracy and performance of diagnostic tests.
Drobatz, Kenneth J
2009-05-01
Diagnostic tests are integral to the practice of veterinary cardiology, any other specialty, and general veterinary medicine. Developing and understanding diagnostic tests is one of the cornerstones of clinical research. This manuscript describes diagnostic test properties, including sensitivity, specificity, predictive value, likelihood ratio, and the receiver operating characteristic curve. Review of practical book chapters and standard statistics manuscripts. Measures such as sensitivity, specificity, predictive value, likelihood ratio, and the receiver operating characteristic curve are described and illustrated. A basic understanding of how diagnostic tests are developed and interpreted is essential for reviewing clinical scientific papers and practicing evidence-based medicine.
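All of these measures derive from a 2x2 table of test result versus disease status; a compact sketch with hypothetical counts:

    def diagnostic_measures(tp, fp, fn, tn, prevalence=None):
        """Sensitivity, specificity, predictive values, and likelihood ratios
        from a 2x2 table; predictive values use the supplied prevalence if
        given, otherwise the prevalence observed in the table."""
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        p = prevalence if prevalence is not None else (tp + fn) / (tp + fp + fn + tn)
        ppv = sens * p / (sens * p + (1 - spec) * (1 - p))
        npv = spec * (1 - p) / ((1 - sens) * p + spec * (1 - p))
        return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                    lr_positive=sens / (1 - spec), lr_negative=(1 - sens) / spec)

    print(diagnostic_measures(tp=90, fp=20, fn=10, tn=180))  # hypothetical counts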
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2005-04-01
We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×1019 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
The likelihood ratio as a random variable for linked markers in kinship analysis.
Egeland, Thore; Slooten, Klaas
2016-11-01
The likelihood ratio is the fundamental quantity that summarizes the evidence in forensic cases. Therefore, it is important to understand the theoretical properties of this statistic. This paper is the last in a series of three, and the first to study linked markers. We show that for all non-inbred pairwise kinship comparisons, the expected likelihood ratio in favor of a type of relatedness depends on the allele frequencies only via the number of alleles, also for linked markers, and also if the true relationship is another one than is tested for by the likelihood ratio. Exact expressions for the expectation and variance are derived for all these cases. Furthermore, we show that the expected likelihood ratio is a non-increasing function of the recombination rate as it increases from 0 to 0.5, when the actual relationship is the one investigated by the LR. Besides being of theoretical interest, exact expressions such as those obtained here can be used for software validation, as they allow correctness to be verified up to arbitrary precision. The paper also presents results and advice of practical importance. For example, we argue that the logarithm of the likelihood ratio behaves in a fundamentally different way than the likelihood ratio itself in terms of expectation and variance, in agreement with its interpretation as weight of evidence. Equipped with the results presented and freely available software, one may check calculations and software and also do power calculations.
A LANDSAT study of ephemeral and perennial rangeland vegetation and soils
NASA Technical Reports Server (NTRS)
Bentley, R. G., Jr. (Principal Investigator); Salmon-Drexler, B. C.; Bonner, W. J.; Vincent, R. K.
1976-01-01
The author has identified the following significant results. Several methods of computer processing were applied to LANDSAT data for mapping vegetation characteristics of perennial rangeland in Montana and ephemeral rangeland in Arizona. The choice of optimal processing technique depended on the prescribed mapping task and site conditions. Single-channel level slicing and ratioing of channels were used for simple enhancement. Predictive models for mapping percent vegetation cover based on data from field spectra and LANDSAT data were generated by multiple linear regression of six unique LANDSAT spectral ratios. Ratio gating logic and maximum likelihood classification were applied successfully to recognize plant communities in Montana. Maximum likelihood classification did little to improve recognition of terrain features when compared to a single-channel density slice in sparsely vegetated Arizona. LANDSAT was found to be more sensitive to differences between plant communities based on percentages of vigorous vegetation than to actual physical or spectral differences among plant species.
Combining evidence using likelihood ratios in writer verification
NASA Astrophysics Data System (ADS)
Srihari, Sargur; Kovalenko, Dimitry; Tang, Yi; Ball, Gregory
2013-01-01
Forensic identification is the task of determining whether or not observed evidence arose from a known source. It involves determining a likelihood ratio (LR) - the ratio of the joint probability of the evidence and source under the identification hypothesis (that the evidence came from the source) and under the exclusion hypothesis (that the evidence did not arise from the source). In LR-based decision methods, particularly handwriting comparison, a variable number of pieces of input evidence is used. A decision based on many pieces of evidence can result in nearly the same LR as one based on few pieces of evidence. We consider methods for distinguishing between such situations. One of these is to provide confidence intervals together with the decisions and another is to combine the inputs using weights. We propose a new method that generalizes the Bayesian approach and uses an explicitly defined discount function. Empirical evaluation with several data sets, including synthetically generated ones and handwriting comparisons, shows the greater flexibility of the proposed method.
Predicting Rotator Cuff Tears Using Data Mining and Bayesian Likelihood Ratios
Lu, Hsueh-Yi; Huang, Chen-Yuan; Su, Chwen-Tzeng; Lin, Chen-Chiang
2014-01-01
Objectives Rotator cuff tear is a common cause of shoulder diseases. Correct diagnosis of rotator cuff tears can save patients from further invasive, costly and painful tests. This study used predictive data mining and Bayesian theory to improve the accuracy of diagnosing rotator cuff tears by clinical examination alone. Methods In this retrospective study, 169 patients who had a preliminary diagnosis of rotator cuff tear on the basis of clinical evaluation followed by confirmatory MRI between 2007 and 2011 were identified. MRI was used as a reference standard to classify rotator cuff tears. The predictor variable was the clinical assessment results, which consisted of 16 attributes. This study employed 2 data mining methods (ANN and the decision tree) and a statistical method (logistic regression) to classify the rotator cuff diagnosis into “tear” and “no tear” groups. Likelihood ratio and Bayesian theory were applied to estimate the probability of rotator cuff tears based on the results of the prediction models. Results Our proposed data mining procedures outperformed the classic statistical method. The correct classification rate, sensitivity, specificity and area under the ROC curve for predicting a rotator cuff tear were statistically better in the ANN and decision tree models than in logistic regression. Based on likelihood ratios derived from our prediction models, Fagan's nomogram could be constructed to assess the probability of a patient having a rotator cuff tear using a pretest probability and a prediction result (tear or no tear). Conclusions Our predictive data mining models, combined with likelihood ratios and Bayesian theory, appear to be good tools to classify rotator cuff tears as well as to determine the probability of the presence of the disease to enhance diagnostic decision making for rotator cuff tears. PMID:24733553
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
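The log-likelihood-ratio cost used as the figure of merit here has a standard closed form; a minimal sketch:

    import numpy as np

    def cllr(lr_same_source, lr_diff_source):
        """Log-likelihood-ratio cost: penalizes LRs below 1 for same-source
        comparisons and LRs above 1 for different-source comparisons, so
        lower values indicate better-calibrated likelihood ratios."""
        lr_ss = np.asarray(lr_same_source, dtype=float)
        lr_ds = np.asarray(lr_diff_source, dtype=float)
        return 0.5 * (np.mean(np.log2(1 + 1 / lr_ss)) +
                      np.mean(np.log2(1 + lr_ds)))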
Program for Weibull Analysis of Fatigue Data
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2005-01-01
A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) maximum-likelihood estimates of the Weibull distribution parameters; (2) data for contour plots of relative likelihood for the two parameters; (3) data for contour plots of joint confidence regions; (4) data for the profile likelihood of the Weibull-distribution parameters; (5) data for the profile likelihood of any percentile of the distribution; and (6) likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
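A minimal Python sketch of the censored Weibull log-likelihood such a program maximizes under type-I censoring: failed units contribute the log density and suspended units the log survival function. This is a generic reconstruction, not the Fortran code itself.

    import numpy as np
    from scipy.optimize import minimize

    def weibull_mle(failure_times, censoring_times):
        """Maximum-likelihood fit of a two-parameter Weibull distribution
        with type-I censored (suspended) observations."""
        t_f = np.asarray(failure_times, dtype=float)
        t_c = np.asarray(censoring_times, dtype=float)

        def neg_log_lik(params):
            k, lam = np.exp(params)  # optimize on the log scale to keep both positive
            log_pdf = np.log(k / lam) + (k - 1) * np.log(t_f / lam) - (t_f / lam) ** k
            log_sf = -(t_c / lam) ** k  # log survival function for suspended units
            return -(log_pdf.sum() + log_sf.sum())

        result = minimize(neg_log_lik, x0=[0.0, np.log(t_f.mean())])
        return np.exp(result.x)  # (shape, scale) estimates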
NASA Technical Reports Server (NTRS)
Cash, W.
1979-01-01
Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
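For Poisson-distributed photon counts, the likelihood ratio between two nested models can be computed directly, since the factorial terms cancel in the ratio. A hedged Python sketch with invented counts and model means:

    import numpy as np
    from scipy.stats import chi2

    counts = np.array([3, 5, 2, 8, 6, 4])            # observed photon counts per bin
    mu0 = np.full(counts.size, counts.mean())        # null model: constant source rate
    mu1 = np.array([3.1, 4.8, 2.2, 7.7, 6.1, 4.1])   # alternative, one extra parameter

    def poisson_loglike(n, mu):
        return np.sum(n * np.log(mu) - mu)           # ln n! cancels in the ratio

    stat = 2 * (poisson_loglike(counts, mu1) - poisson_loglike(counts, mu0))
    print(stat, chi2.sf(stat, df=1))                 # asymptotic p-value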
A Computer-Aided Diagnosis System for Breast Cancer Combining Mammography and Proteomics
2007-05-01
findings in both Data sets C and M. The likelihood ratio is the probability of the features under the malignant case divided by the probability of ... likelihood ratio value as a classification decision variable, the probabilities of detection and false alarm are calculated as follows: Pdfusion ... lowered the fused classifier's performance to near chance levels. A genetic algorithm searched over the likelihood-ratio threshold values for each
Measuring coherence of computer-assisted likelihood ratio methods.
Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H
2015-04-01
Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
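The log-likelihood-ratio cost used here (and in other abstracts above) has a standard definition that is easy to compute once same-source and different-source LRs are available. A minimal sketch with invented LR values:

    import numpy as np

    lr_same = np.array([120.0, 35.0, 8.0, 300.0])  # LRs from same-source comparisons
    lr_diff = np.array([0.02, 0.5, 0.1, 1.5])      # LRs from different-source comparisons

    cllr = 0.5 * (np.mean(np.log2(1 + 1 / lr_same)) +
                  np.mean(np.log2(1 + lr_diff)))
    print(cllr)  # 0 is ideal; 1 matches an uninformative method that always reports LR = 1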
Handwriting individualization using distance and rarity
NASA Astrophysics Data System (ADS)
Tang, Yi; Srihari, Sargur; Srinivasan, Harish
2012-01-01
Forensic individualization is the task of associating observed evidence with a specific source. The likelihood ratio (LR) is a quantitative measure that expresses the degree of uncertainty in individualization, where the numerator represents the likelihood that the evidence corresponds to the known and the denominator the likelihood that it does not correspond to the known. Since the number of parameters needed to compute the LR grows exponentially with the number of feature measurements, a commonly used simplification is the use of likelihoods based on distance (or similarity) given the two alternative hypotheses. This paper proposes an intermediate method which decomposes the LR as the product of two factors, one based on distance and the other on rarity. It was evaluated using a data set of handwriting samples, by determining whether two writing samples were written by the same or different writer(s). The accuracy of the distance-and-rarity method, as measured by error rates, is significantly better than that of the distance method.
Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition
Islam, Md. Rabiul
2014-01-01
The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676
NASA Astrophysics Data System (ADS)
Sembiring, J.; Jones, F.
2018-03-01
The red cell distribution width (RDW)-to-platelet ratio (RPR) can predict liver fibrosis and cirrhosis in chronic hepatitis B with relatively high accuracy. The RPR has been reported to be superior to other non-invasive predictors of liver fibrosis, such as the AST-to-ALT ratio, the AST-to-platelet ratio index and FIB-4. The aim of this study was to assess the diagnostic accuracy of the RPR for liver fibrosis in chronic hepatitis B patients, with FibroScan as the reference standard. This cross-sectional study was conducted at Adam Malik Hospital from January to June 2015. We examined 34 chronic hepatitis B patients, recording RDW, platelet count, and FibroScan results. Data were statistically analyzed. In the ROC analysis, the RPR had an accuracy of 72.3% (95% CI: 84.1%-97%). In this study, the RPR had a moderate ability to predict fibrosis degree (p = 0.029, AUC > 70%). The cutoff value of the RPR was 0.0591; sensitivity and specificity were 71.4% and 60%, the positive predictive value (PPV) was 55.6%, the negative predictive value (NPV) was 75%, and the positive and negative likelihood ratios were 1.79 and 0.48, respectively. The RPR has the ability to predict the degree of liver fibrosis in chronic hepatitis B patients with moderate accuracy.
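The reported likelihood ratios follow from the reported sensitivity and specificity by the standard formulas, which can be checked in two lines:

    sens, spec = 0.714, 0.60
    print(sens / (1 - spec))   # LR+ = 0.714 / 0.40 ≈ 1.79
    print((1 - sens) / spec)   # LR- = 0.286 / 0.60 ≈ 0.48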
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
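The likelihood ratio (score function) method referred to here estimates a parametric sensitivity from a single simulation by weighting outputs with the score of the sampling density. A generic illustration, not the paper's KMC code, using an exponential distribution where the answer is known analytically:

    import numpy as np

    rng = np.random.default_rng(0)
    lam = 2.0
    x = rng.exponential(scale=1 / lam, size=200_000)  # X ~ Exp(rate lam)

    score = 1 / lam - x               # d/d(lam) of log p(x; lam) for the exponential
    sensitivity = np.mean(x * score)  # likelihood ratio estimate of d/d(lam) E[X]
    print(sensitivity, -1 / lam**2)   # exact value, d(1/lam)/d(lam), for comparison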
Accuracy of diagnostic tests to detect asymptomatic bacteriuria during pregnancy.
Mignini, Luciano; Carroli, Guillermo; Abalos, Edgardo; Widmer, Mariana; Amigot, Susana; Nardin, Juan Manuel; Giordano, Daniel; Merialdi, Mario; Arciero, Graciela; Del Carmen Hourquescos, Maria
2009-02-01
A dipslide is a plastic paddle coated with agar that is attached to a plastic cap that screws onto a sterile plastic vial. Our objective was to estimate the diagnostic accuracy of the dipslide culture technique to detect asymptomatic bacteriuria during pregnancy and to evaluate the accuracy of nitrite and leukocyte esterase dipsticks for screening. This was an ancillary study within a trial comparing single-day with 7-day therapy in treating asymptomatic bacteriuria. Clean-catch midstream samples were collected from pregnant women seeking routine care. Positive and negative likelihood ratios and sensitivity and specificity for the culture-based dipslide to detect, and chemical dipsticks (nitrites, leukocyte esterase, or both) to screen, were estimated using traditional urine culture as the "gold standard." A total of 3,048 eligible pregnant women were screened. The prevalence of asymptomatic bacteriuria was 15%, with Escherichia coli the most prevalent organism. The likelihood ratio for detecting asymptomatic bacteriuria with a positive dipslide test was 225 (95% confidence interval [CI] 113-449), increasing the probability of asymptomatic bacteriuria to 98%; the likelihood ratio for a negative dipslide test was 0.02 (95% CI 0.01-0.05), reducing the probability of bacteriuria to less than 1%. The positive likelihood ratio of leukocyte esterase and nitrite dipsticks (when both or either one was positive) was 6.95 (95% CI 5.80-8.33), increasing the probability of bacteriuria to only 54%; the negative likelihood ratio was 0.50 (95% CI 0.45-0.57), reducing the probability to 8%. A pregnant woman with a positive dipslide test is very likely to have a definitive diagnosis of asymptomatic bacteriuria, whereas a negative result effectively rules out the presence of bacteriuria. Dipsticks that measure nitrites and leukocyte esterase have low sensitivity for use in screening for asymptomatic bacteriuria during gestation. ISRCTN, isrctn.org, 1196608. Level of evidence: II.
Interpreting DNA mixtures with the presence of relatives.
Hu, Yue-Qing; Fung, Wing K
2003-02-01
The assessment of DNA mixtures with the presence of relatives is discussed in this paper. The kinship coefficients are incorporated into the evaluation of the likelihood ratio and we first derive a unified expression of joint genotypic probabilities. A general formula and seven types of detailed expressions for calculating likelihood ratios are then developed for the case that a relative of the tested suspect is an unknown contributor to the mixed stain. These results can also be applied to the case of a non-tested suspect with one tested relative. Moreover, the formula for calculating the likelihood ratio when there are two related unknown contributors is given. Data for a real situation are given for illustration, and the effect of kinship on the likelihood ratio is shown therein. Some interesting findings are obtained.
Change-in-ratio estimators for populations with more than two subclasses
Udevitz, Mark S.; Pollock, Kenneth H.
1991-01-01
Change-in-ratio methods have been developed to estimate the size of populations with two or three population subclasses. Most of these methods require the often unreasonable assumption of equal sampling probabilities for individuals in all subclasses. This paper presents new models based on the weaker assumption that ratios of sampling probabilities are constant over time for populations with three or more subclasses. Estimation under these models requires that a value be assumed for one of these ratios when there are two samples. Explicit expressions are given for the maximum likelihood estimators under models for two samples with three or more subclasses and for three samples with two subclasses. A numerical method using readily available statistical software is described for obtaining the estimators and their standard errors under all of the models. Likelihood ratio tests that can be used in model selection are discussed. Emphasis is on the two-sample, three-subclass models for which Monte-Carlo simulation results and an illustrative example are presented.
Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay
2013-12-01
Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.
NGS-based likelihood ratio for identifying contributors in two- and three-person DNA mixtures.
Chan Mun Wei, Joshua; Zhao, Zicheng; Li, Shuai Cheng; Ng, Yen Kaow
2018-06-01
DNA fingerprinting, also known as DNA profiling, serves as a standard procedure in forensics to identify a person by the short tandem repeat (STR) loci in their DNA. By comparing the STR loci between DNA samples, practitioners can calculate a probability of a match to identify the contributors of a DNA mixture. Most existing methods are based on the 13 core STR loci identified by the Federal Bureau of Investigation (FBI). Forensic analyses of DNA mixtures based on these loci are highly variable in their procedures and suffer from subjectivity as well as bias in complex mixture interpretation. With the emergence of next-generation sequencing (NGS) technologies, the sequencing of billions of DNA molecules can be parallelized, thus greatly increasing throughput and reducing the associated costs. This allows the creation of new techniques that incorporate more loci to enable complex mixture interpretation. In this paper, we propose a computation of the likelihood ratio that uses NGS data for DNA testing on mixed samples. We have applied the method to 4480 simulated DNA mixtures, which consist of various mixture proportions of 8 unrelated whole-genome sequencing datasets. The results confirm the feasibility of utilizing NGS data in DNA mixture interpretation. We observed an average likelihood ratio as high as 285,978 for two-person mixtures. Using our method, all 224 identity tests for two-person and three-person mixtures were correctly identified. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Suh, Youngsuk; Talley, Anna E.
2015-01-01
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
NASA Astrophysics Data System (ADS)
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
Posada, David
2006-01-01
ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
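Given maximized log-likelihoods for two nested substitution models, the statistics involved are straightforward to reproduce; a hedged sketch with invented values:

    from math import log
    from scipy.stats import chi2

    lnL0, k0 = -2456.3, 1   # simpler model and its number of free parameters
    lnL1, k1 = -2451.9, 3   # richer nested model

    lrt = 2 * (lnL1 - lnL0)
    print("LRT p-value:", chi2.sf(lrt, df=k1 - k0))
    print("AIC:", -2 * lnL0 + 2 * k0, -2 * lnL1 + 2 * k1)
    n_sites = 1000          # BIC needs a sample size; invented here
    print("BIC:", -2 * lnL0 + k0 * log(n_sites), -2 * lnL1 + k1 * log(n_sites))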
Choosing relatives for DNA identification of missing persons.
Ge, Jianye; Budowle, Bruce; Chakraborty, Ranajit
2011-01-01
DNA-based analysis is integral to missing person identification cases. When direct references are not available, indirect relative references can be used to identify missing persons by kinship analysis. Generally, more reference relatives render greater accuracy of identification. However, it is costly to type multiple references. Thus, at times, decisions may need to be made on which relatives to type. In this study, pedigrees for 37 common reference scenarios with 13 CODIS STRs were simulated to rank the information content of different combinations of relatives. The results confirm that first-order relatives (parents and fullsibs) are the most preferred relatives to identify missing persons; fullsibs are also informative. Less genetic dependence between references provides a higher on average likelihood ratio. Distant relatives may not be helpful solely by autosomal markers. But lineage-based Y chromosome and mitochondrial DNA markers can increase the likelihood ratio or serve as filters to exclude putative relationships. © 2010 American Academy of Forensic Sciences.
Variance change point detection for fractional Brownian motion based on the likelihood ratio test
NASA Astrophysics Data System (ADS)
Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz
2018-01-01
Fractional Brownian motion is one of the main stochastic processes used for describing the long-range dependence phenomenon in self-similar processes. It appears that for many real time series, characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for critical change point detection for cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and represents an extension of a similar methodology used for Brownian motion, a process with independent increments. Here, we also propose a statistical test for testing the significance of the estimated critical point. In addition, an extensive simulation study is provided to test the performance of the proposed method.
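A simplified sketch of such a likelihood-ratio scan, for the special case of independent Gaussian increments (ordinary Brownian motion; for fractional Brownian motion the increments are dependent, so this shows only the baseline idea, with invented data):

    import numpy as np

    rng = np.random.default_rng(3)
    x = np.concatenate([rng.normal(0, 1.0, 300), rng.normal(0, 2.0, 200)])
    n = x.size

    def lr_stat(k):
        # Log-likelihood-ratio statistic for a variance change after index k.
        v1, v2, v = x[:k].var(), x[k:].var(), x.var()
        return n * np.log(v) - k * np.log(v1) - (n - k) * np.log(v2)

    ks = np.arange(20, n - 20)  # avoid unstable variance estimates at the edges
    k_hat = ks[np.argmax([lr_stat(k) for k in ks])]
    print(k_hat)                # should land near the true change point at 300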
Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi
2011-06-01
For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
NASA Technical Reports Server (NTRS)
Hall, Steven R.; Walker, Bruce K.
1990-01-01
A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.
Ma, Chunming; Liu, Yue; Lu, Qiang; Lu, Na; Liu, Xiaoli; Tian, Yiming; Wang, Rui; Yin, Fuzai
2016-02-01
The blood pressure-to-height ratio (BPHR) has been shown to be an accurate index for screening hypertension in children and adolescents. The aim of the present study was to perform a meta-analysis to assess the performance of the BPHR for the assessment of hypertension. Electronic and manual searches were performed to identify studies of the BPHR. After methodological quality assessment and data extraction, pooled estimates of the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, area under the receiver operating characteristic curve and summary receiver operating characteristics were assessed systematically, and the extent of heterogeneity was evaluated. Six studies were identified for analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio and diagnostic odds ratio values of the BPHR, for assessment of hypertension, were 96% [95% confidence interval (CI)=0.95-0.97], 90% (95% CI=0.90-0.91), 10.68 (95% CI=8.03-14.21), 0.04 (95% CI=0.03-0.07) and 247.82 (95% CI=114.50-536.34), respectively. The area under the receiver operating characteristic curve was 0.9472. The BPHR had high diagnostic accuracy for identifying hypertension in children and adolescents.
ERIC Educational Resources Information Center
Levy, Roy
2010-01-01
SEMModComp, a software package for conducting likelihood ratio tests for mean and covariance structure modeling is described. The package is written in R and freely available for download or on request.
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
Validation of software for calculating the likelihood ratio for parentage and kinship.
Drábek, J
2009-03-01
Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to suit the general requirements for the competence of testing and calibration laboratories (the EN/ISO/IEC 17025:2005 norm). The software in question can be considered critical, as it directly weighs the forensic evidence allowing judges to decide on guilt or innocence or to identify persons or kin (e.g., in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios, I assessed available vendors, chose two programs (Paternity Index and familias) for testing, and finally validated them using tests derived from elaboration of the available guidelines for the fields of forensics, biomedicine, and software engineering. MS Excel calculations using known likelihood ratio formulas or peer-reviewed results of difficult paternity cases were used as a reference. Using seven testing cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of the two software programs fulfills the criteria needed for our purpose across the whole spectrum of functions under validation, with the exception of providing algebraic formulas in cases of mutation and/or silent alleles.
Identical twins in forensic genetics - Epidemiology and risk based estimation of weight of evidence.
Tvedebrink, Torben; Morling, Niels
2015-12-01
The increase in the number of forensic genetic loci used for identification purposes results in infinitesimal random match probabilities. These probabilities are computed under assumptions made for rather simple population genetic models. Often, the forensic expert reports likelihood ratios, where the alternative hypothesis is assumed not to encompass close relatives. However, this approach implies that important factors present in real human populations are discarded. This approach may be very unfavourable to the defendant. In this paper, we discuss some important aspects concerning the closest familial relationship, i.e., identical (monozygotic) twins, when reporting the weight of evidence. This can be done even when the suspect has no knowledge of an identical twin or when official records hold no twin information about the suspect. The derived expressions are not original as several authors previously have published results accounting for close familial relationships. However, we revisit the discussion to increase the awareness among forensic genetic practitioners and include new information on medical and societal factors to assess the risk of not considering a monozygotic twin as the true perpetrator. If accounting for a monozygotic twin in the weight of evidence, it implies that the likelihood ratio is truncated at a maximal value depending on the prevalence of monozygotic twins and the societal efficiency of recognising a monozygotic twin. If a monozygotic twin is considered as an alternative proposition, then data relevant for the Danish society suggests that the threshold of likelihood ratios should approximately be between 150,000 and 2,000,000 in order to take the risk of an unrecognised identical, monozygotic twin into consideration. In other societies, the threshold of the likelihood ratio in crime cases may reach other, often lower, values depending on the recognition of monozygotic twins and the age of the suspect. In general, more strictly kept registries will imply larger thresholds on the likelihood ratio as the monozygotic twin explanation gets less probable. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
Rodríguez-Escudero, Juan Pablo; López-Jiménez, Francisco; Trejo-Gutiérrez, Jorge F
2011-01-01
This article reviews different characteristics of validity in a clinical diagnostic test. In particular, we emphasize the likelihood ratio as an instrument that facilitates the use of epidemiologic concepts in clinical diagnosis.
PBOOST: a GPU-based tool for parallel permutation tests in genome-wide association studies.
Yang, Guangyuan; Jiang, Wei; Yang, Qiang; Yu, Weichuan
2015-05-01
The importance of testing associations allowing for interactions has been demonstrated by Marchini et al. (2005). A fast method detecting associations allowing for interactions has been proposed by Wan et al. (2010a). The method is based on a likelihood ratio test with the assumption that the statistic follows the χ² distribution. Many single nucleotide polymorphism (SNP) pairs with significant associations allowing for interactions have been detected using their method. However, the χ² test assumes that the expected values in each cell of the contingency table are at least five. This assumption is violated in some identified SNP pairs, in which case the likelihood ratio test may not be applicable any more. The permutation test is an ideal approach for checking the P-values calculated in likelihood ratio tests because of its non-parametric nature. The P-values of SNP pairs having significant associations with disease are always extremely small, so a huge number of permutations is needed to achieve correspondingly high resolution for the P-values. In order to investigate whether the P-values from likelihood ratio tests are reliable, a fast permutation tool that can accomplish a large number of permutations is desirable. We developed a permutation tool named PBOOST. It is based on GPUs with highly reliable P-value estimation. Using simulated data, we found that the P-values from likelihood ratio tests have a relative error of >100% when 50% of the cells in the contingency table have an expected count less than five or when there is a zero expected count in any of the contingency table cells. In terms of speed, PBOOST completed 10⁷ permutations for a single SNP pair from the Wellcome Trust Case Control Consortium (WTCCC) genome data (Wellcome Trust Case Control Consortium, 2007) within 1 min on a single Nvidia Tesla M2090 device, while it took 60 min on a single Intel Xeon E5-2650 CPU to finish the same task. More importantly, when simultaneously testing 256 SNP pairs for 10⁷ permutations, our tool took only 5 min, while the CPU program took 10 h. By permuting on a GPU cluster consisting of 40 nodes, we completed 10¹² permutations for all 280 SNP pairs reported with P-values smaller than 1.6 × 10⁻¹² in the WTCCC datasets in 1 week. The source code and sample data are available at http://bioinformatics.ust.hk/PBOOST.zip. gyang@ust.hk; eeyu@ust.hk Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
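The permutation logic being accelerated is simple; what makes GPUs necessary is the number of permutations required to resolve extremely small P-values. A minimal single-SNP sketch with invented data:

    import numpy as np

    rng = np.random.default_rng(1)
    genotype = rng.integers(0, 3, size=500)    # SNP coded 0/1/2
    phenotype = rng.integers(0, 2, size=500)   # case/control labels

    def statistic(g, p):
        return abs(g[p == 1].mean() - g[p == 0].mean())

    observed = statistic(genotype, phenotype)
    n_perm = 10_000
    null = np.array([statistic(genotype, rng.permutation(phenotype))
                     for _ in range(n_perm)])
    # Resolution is limited to about 1/n_perm, hence the need for 10**7 or more
    # permutations (and parallel hardware) when the true P-value is tiny.
    print((1 + np.sum(null >= observed)) / (1 + n_perm))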
Prediction of hamstring injury in professional soccer players by isokinetic measurements
Dauty, Marc; Menu, Pierre; Fouasson-Chailloux, Alban; Ferréol, Sophie; Dubois, Charles
2016-01-01
Summary. Objectives: Previous studies investigating the ability of isokinetic strength ratios to predict hamstring injuries in soccer players have reported conflicting results. Hypothesis: To determine whether isokinetic ratios are able to predict hamstring injury occurring during the season in professional soccer players. Study Design: Case-control study; level of evidence: 3. Methods: From 2001 to 2011, 350 isokinetic tests were performed in 136 professional soccer players at the beginning of the soccer season. Fifty-seven players suffered hamstring injury during the season that followed the isokinetic tests. These players were compared with the 79 uninjured players. The bilateral concentric ratio (hamstring-to-hamstring), ipsilateral concentric ratio (hamstring-to-quadriceps), and mixed ratio (eccentric/concentric hamstring-to-quadriceps) were studied. The predictive ability of each ratio was established based on the likelihood ratio and post-test probability. Results: The mixed ratio (30 eccentric/240 concentric hamstring-to-quadriceps) <0.8, ipsilateral ratio (180 concentric hamstring-to-quadriceps) <0.47, and bilateral ratio (60 concentric hamstring-to-hamstring) <0.85 were the most predictive of hamstring injury. The ipsilateral ratio <0.47 allowed prediction of the severity of the hamstring injury, and was also influenced by the length of time since administration of the isokinetic tests. Conclusion: Isokinetic ratios are useful for predicting the likelihood of hamstring injury in professional soccer players during the competitive season. PMID:27331039
Masch, William R; Cohan, Richard H; Ellis, James H; Dillman, Jonathan R; Rubin, Jonathan M; Davenport, Matthew S
2016-02-01
The purpose of this study was to determine the clinical effectiveness of prospectively reported sonographic twinkling artifact for the diagnosis of renal calculus in patients without known urolithiasis. All ultrasound reports finalized in one health system from June 15, 2011, to June 14, 2014, that contained the words "twinkle" or "twinkling" in reference to suspected renal calculus were identified. Patients with known urolithiasis or lack of a suitable reference standard (unenhanced abdominal CT with ≤ 2.5-mm slice thickness performed ≤ 30 days after ultrasound) were excluded. The sensitivity, specificity, and positive likelihood ratio of sonographic twinkling artifact for the diagnosis of renal calculus were calculated by renal unit and stratified by two additional diagnostic features for calcification (echogenic focus, posterior acoustic shadowing). Eighty-five patients formed the study population. Isolated sonographic twinkling artifact had sensitivity of 0.78 (82/105), specificity of 0.40 (26/65), and a positive likelihood ratio of 1.30 for the diagnosis of renal calculus. Specificity and positive likelihood ratio improved and sensitivity declined when the following additional diagnostic features were present: sonographic twinkling artifact and echogenic focus (sensitivity, 0.61 [64/105]; specificity, 0.65 [42/65]; positive likelihood ratio, 1.72); sonographic twinkling artifact and posterior acoustic shadowing (sensitivity, 0.31 [33/105]; specificity, 0.95 [62/65]; positive likelihood ratio, 6.81); all three features (sensitivity, 0.31 [33/105]; specificity, 0.95 [62/65]; positive likelihood ratio, 6.81). Isolated sonographic twinkling artifact has a high false-positive rate (60%) for the diagnosis of renal calculus in patients without known urolithiasis.
el Galta, Rachid; Uitte de Willige, Shirley; de Visser, Marieke C H; Helmer, Quinta; Hsu, Li; Houwing-Duistermaat, Jeanine J
2007-09-24
In this paper, we propose a one-degree-of-freedom test for association between a candidate gene and a binary trait. This method is a generalization of Terwilliger's likelihood ratio statistic and is especially powerful in the situation of one associated haplotype. As an alternative to the likelihood ratio statistic, we derive a score statistic, which has a tractable expression. For haplotype analysis, we assume that phase is known. By means of a simulation study, we compare the performance of the score statistic to Pearson's chi-square statistic and the likelihood ratio statistic proposed by Terwilliger. We illustrate the method on three candidate genes studied in the Leiden Thrombophilia Study. We conclude that the statistic follows a chi-square distribution under the null hypothesis and that the score statistic is more powerful than Terwilliger's likelihood ratio statistic when the associated haplotype has a frequency between 0.1 and 0.4 and has a small impact on the studied disorder. With regard to Pearson's chi-square statistic, the score statistic has more power when the associated haplotype has a frequency above 0.2 and the number of variants is above five.
Gallo, Jiri; Juranova, Jarmila; Svoboda, Michal; Zapletalova, Jana
2017-09-01
The aim of this study was to evaluate the characteristics of the synovial fluid (SF) white cell count (SWCC) and neutrophil/lymphocyte percentage in the diagnosis of prosthetic joint infection (PJI) at particular threshold values. This was a prospective study of 391 patients in whom SF specimens were collected before total joint replacement revisions. SF was aspirated before joint capsule incision. The PJI diagnosis was based only on non-SF data. Receiver operating characteristic plots were constructed for the SWCC and differential counts of leukocytes in aspirated fluid. Binomial logistic regression was used to distinguish infected and non-infected cases in the combined data. PJI was diagnosed in 78 patients, and aseptic revision in 313 patients. The areas under the curve (AUC) for the SWCC and the neutrophil and lymphocyte percentages were 0.974, 0.962, and 0.951, respectively. The optimal cut-off for PJI was 3,450 cells/μL, 74.6% neutrophils, and 14.6% lymphocytes. Positive likelihood ratios for the SWCC and the neutrophil and lymphocyte percentages were 19.0, 10.4, and 9.5, respectively. Negative likelihood ratios for the SWCC and the neutrophil and lymphocyte percentages were 0.06, 0.076, and 0.092, respectively. Based on the AUC, the present study identified cut-off values for the SWCC and differential leukocyte count for the diagnosis of PJI. The likelihood ratio for a positive/negative SWCC can significantly change the pre-test probability of PJI.
Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed
2017-01-01
Electroencephalogram (EEG)-based decoding of human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, an approach also known as multivariate pattern analysis. A comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, the most popular feature extraction and prediction method currently in use, showed an accuracy of 65.7%, whereas the proposed method predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods.
Likelihood ratio decisions in memory: three implied regularities.
Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T
2009-06-01
We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
Is it possible to predict office hysteroscopy failure?
Cobellis, Luigi; Castaldi, Maria Antonietta; Giordano, Valentino; De Franciscis, Pasquale; Signoriello, Giuseppe; Colacurci, Nicola
2014-10-01
The purpose of this study was to develop a clinical tool, the Hysteroscopy Failure Index (HFI), which gives criteria to predict hysteroscopic examination failure. This was a retrospective diagnostic test study aimed at validating the HFI, conducted at the Department of Gynaecology, Obstetrics and Reproductive Science of the Second University of Naples, Italy. The HFI was applied to our database of 995 consecutive women who underwent office-based hysteroscopy to assess abnormal uterine bleeding (AUB), infertility, cervical polyps, and abnormal sonographic patterns (postmenopausal endometrial thickness of more than 5 mm, endometrial hyperechogenic spots, irregular endometrial line, suspected uterine septa). Demographic characteristics, previous surgery, recurrent infections, sonographic data, estro-progestin use, IUD use and menopausal status were collected. Receiver operating characteristic (ROC) curve analysis was used to assess the ability of the model to identify failed hysteroscopies (true positives divided by the total of true positives and false negatives). Positive and negative likelihood ratios with 95% CIs were calculated. The HFI score is able to predict office hysteroscopy failure in 76% of cases. Moreover, the positive likelihood ratio was 11.37 (95% CI: 8.49-15.21), and the negative likelihood ratio was 0.33 (95% CI: 0.27-0.41). The HFI was able to retrospectively predict office hysteroscopy failure. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Cost-Aware Design of a Discrimination Strategy for Unexploded Ordnance Cleanup
2011-02-25
Acronyms: ANN, Artificial Neural Network; AUC, Area Under the Curve; BRAC, Base Realignment And Closure; DLRT, Distance Likelihood Ratio Test; EER ... Discriminative Aggregate Nonparametric [25]; Artificial Neural Network (ANN), Discriminative Aggregate Parametric [33] ... Results and Discussion, Task #1
Lead isotope ratios for bullets, forensic evaluation in a Bayesian paradigm.
Sjåstad, Knut-Endre; Lucy, David; Andersen, Tom
2016-01-01
Forensic science is a discipline concerned with the collection, examination and evaluation of physical evidence related to criminal cases. The results from the activities of the forensic scientist may ultimately be presented to the court in such a way that the triers of fact understand the implications of the data. Forensic science has been, and still is, driven by the development of new technology, and in the last two decades evaluation of evidence based on logical reasoning and Bayesian statistics has reached some level of general acceptance within the forensic community. Tracing lead fragments of unknown origin to a given source of ammunition is a task that might be of interest to the court. Use of data from lead isotope ratio analysis interpreted within a Bayesian framework has been shown to be a suitable method to guide the court in drawing its conclusion for such a task. In this work we have used the isotopic composition of lead from small-arms projectiles (cal. .22) and developed an approach based on Bayesian statistics and likelihood ratio calculation. The likelihood ratio is a single quantity that provides a measure of the value of evidence that can be used in the deliberation of the court. Copyright © 2015 Elsevier B.V. All rights reserved.
Likelihood ratio meta-analysis: New motivation and approach for an old method.
Dormuth, Colin R; Filion, Kristian B; Platt, Robert W
2016-03-01
A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded in excluding the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type I error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher-potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed effect and random effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. Copyright © 2016 Elsevier Inc. All rights reserved.
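A minimal sketch of the pooling step, assuming a normal approximation to each study's likelihood (the effect estimates and standard errors are invented); the interval computed is a likelihood-support interval, i.e. the values whose likelihood is within a factor k of the maximum, used here as a stand-in for the intrinsic interval:

    import numpy as np

    theta = np.linspace(-1, 1, 2001)           # grid of candidate effect sizes
    estimates = np.array([0.15, 0.25, 0.10])   # per-study effect estimates
    ses = np.array([0.10, 0.12, 0.08])         # per-study standard errors

    # Sum each study's log-likelihood (up to a constant) across studies.
    loglik = sum(-(theta - est) ** 2 / (2 * se ** 2)
                 for est, se in zip(estimates, ses))
    loglr = loglik - loglik.max()              # log LR against the best-supported value

    k = 8                                      # support level for the interval
    supported = theta[loglr >= -np.log(k)]
    print(theta[np.argmax(loglr)], supported.min(), supported.max())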
Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding
NASA Technical Reports Server (NTRS)
Mahmoud, Saad; Hi, Jianjun
2012-01-01
The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has shown as much as a 0.6 dB decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a pilot-guided estimation method, a blind estimation method, and a simulation-based look-up table. The pilot-guided estimation method has shown that the maximum-likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and that the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs are needed. The blind estimation method's maximum-likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is better for faster-changing channels than the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the pilot-guided method due to the gain-control circuitry, but does not have the real-time computational complexity of the blind estimation method. Each of these methods can be used to provide an accurate estimate of the combining ratio, and the final selection of the estimation method depends on other design constraints.
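A hedged numpy sketch of the pilot-guided estimates as described above (the simulation parameters are invented): the amplitude from the mean inner product with the known ASM symbols, the noise variance from the stated difference, and their ratio as the combining ratio:

    import numpy as np

    rng = np.random.default_rng(2)
    asm = rng.choice([-1.0, 1.0], size=1024)   # known synchronization-marker symbols
    amplitude, noise_var = 0.8, 0.5
    received = amplitude * asm + rng.normal(0.0, np.sqrt(noise_var), size=asm.size)

    amp_hat = np.mean(received * asm)                # ML amplitude estimate
    var_hat = np.mean(received ** 2) - amp_hat ** 2  # noise-variance estimate
    print(amp_hat, var_hat, amp_hat / var_hat)       # combining ratio scales decoder LLRs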
Li, Zhanzhan; Zhou, Qin; Li, Yanyan; Yan, Shipeng; Fu, Jun; Huang, Xinqiong; Shen, Liangfang
2017-02-28
We conducted a meta-analysis to evaluate the diagnostic value of mean cerebral blood volume for differentiating recurrence from radiation injury in glioma patients. We performed systematic electronic searches for eligible studies up to August 8, 2016. Bivariate mixed effects models were used to estimate the combined sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio and their 95% confidence intervals (CIs). Fifteen studies with a total of 576 participants were enrolled. The pooled sensitivity and specificity were 0.88 (95% CI: 0.82-0.92) and 0.85 (95% CI: 0.68-0.93). The pooled positive likelihood ratio was 5.73 (95% CI: 2.56-12.81), the negative likelihood ratio was 0.15 (95% CI: 0.10-0.22), and the diagnostic odds ratio was 39.34 (95% CI: 13.96-110.84). The area under the summary receiver operating characteristic curve was 0.91 (95% CI: 0.88-0.93). However, the Deeks plot suggested that publication bias may exist (t=2.30, P=0.039). Mean cerebral blood volume measurement appears to be very sensitive and highly specific for differentiating recurrence from radiation injury in glioma patients. The results should be interpreted with caution because of the potential bias.
Detection of abrupt changes in dynamic systems
NASA Technical Reports Server (NTRS)
Willsky, A. S.
1984-01-01
Some of the basic ideas associated with the detection of abrupt changes in dynamic systems are presented. Multiple filter-based techniques and residual-based method and the multiple model and generalized likelihood ratio methods are considered. Issues such as the effect of unknown onset time on algorithm complexity and structure and robustness to model uncertainty are discussed.
Chaikriangkrai, Kongkiat; Jhun, Hye Yeon; Shantha, Ghanshyam Palamaner Subash; Abdulhak, Aref Bin; Tandon, Rudhir; Alqasrawi, Musab; Klappa, Anthony; Pancholy, Samir; Deshmukh, Abhishek; Bhama, Jay; Sigurdsson, Gardar
2018-07-01
In aortic stenosis patients referred for surgical and transcatheter aortic valve replacement (AVR), the evidence of diagnostic accuracy of coronary computed tomography angiography (CCTA) has been limited. The objective of this study was to investigate the diagnostic accuracy of CCTA for significant coronary artery disease (CAD) in patients referred for AVR using invasive coronary angiography (ICA) as the gold standard. We searched databases for all diagnostic studies of CCTA in patients referred for AVR, which reported diagnostic testing characteristics on patient-based analysis required to pool summary sensitivity, specificity, positive-likelihood ratio, and negative-likelihood ratio. Significant CAD in both CCTA and ICA was defined by >50% stenosis in any coronary artery, coronary stent, or bypass graft. Thirteen studies evaluated 1498 patients (mean age, 74 y; 47% men; 76% transcatheter AVR). The pooled prevalence of significant stenosis determined by ICA was 43%. Hierarchical summary receiver-operating characteristic analysis demonstrated a summary area under curve of 0.96. The pooled sensitivity, specificity, and positive-likelihood and negative-likelihood ratios of CCTA in identifying significant stenosis determined by ICA were 95%, 79%, 4.48, and 0.06, respectively. In subgroup analysis, the diagnostic profiles of CCTA were comparable between surgical and transcatheter AVR. Despite the higher prevalence of significant CAD in patients with aortic stenosis than with other valvular heart diseases, our meta-analysis has shown that CCTA has a suitable diagnostic accuracy profile as a gatekeeper test for ICA. Our study illustrates a need for further study of the potential role of CCTA in preoperative planning for AVR.
Shih, Weichung Joe; Li, Gang; Wang, Yining
2016-03-01
Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
Ran, Li; Zhao, Wenli; Zhao, Ye; Bu, Huaien
2017-07-01
Contrast-enhanced ultrasound (CEUS) is considered a novel method for diagnosing pancreatic cancer, but currently there is no conclusive evidence of its accuracy. We aimed to evaluate the diagnostic accuracy of CEUS in discriminating pancreatic carcinoma from other pancreatic lesions. Relevant studies were selected from the PubMed, Cochrane Library, Elsevier, CNKI, VIP, and WANFANG databases dating from January 2006 to May 2017. The following terms were used as keywords: "pancreatic cancer" OR "pancreatic carcinoma," "contrast-enhanced ultrasonography" OR "contrast-enhanced ultrasound" OR "CEUS," and "diagnosis." The selection criteria were as follows: pancreatic carcinoma was diagnosed by CEUS, with surgical pathology or biopsy as the main reference standard (where a clinical diagnosis was involved, the particular criteria were stated); SonoVue or Levovist was the contrast agent; true positive, false positive, false negative, and true negative rates were obtained or calculated to construct the 2 × 2 contingency table; articles were in English or Chinese; and at least 20 patients were enrolled in each group. The Quality Assessment for Studies of Diagnostic Accuracy was employed to evaluate the quality of articles. Pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio and diagnostic odds ratio, all with 95% confidence intervals (CIs) and calculated with fixed-effect models, together with summary receiver-operating characteristic (SROC) curves and the area under the curve, were evaluated to estimate the overall diagnostic efficiency. Eight of 184 records were eligible for the meta-analysis after independent scrutiny by two reviewers. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.86 (95% CI 0.81-0.90), 0.75 (95% CI 0.68-0.82), 3.56 (95% CI 2.64-4.78), 0.19 (95% CI 0.13-0.27), and 22.260 (95% CI 8.980-55.177), respectively. The area under the SROC curve was 0.9088. CEUS has satisfying pooled sensitivity and specificity for discriminating pancreatic cancer from other pancreatic lesions.
De March, I; Sironi, E; Taroni, F
2016-09-01
Analysis of marks recovered from different crime scenes can be useful for detecting a linkage between criminal cases, even when a putative source for the recovered traces is not available. This circumstance is often encountered in the early stage of investigations, and thus the evaluation of evidence association may provide useful information for investigators. This association is evaluated here from a probabilistic point of view: a likelihood ratio-based approach is suggested in order to quantify the strength of the evidence of trace association in the light of two mutually exclusive propositions, namely that the n traces come from a common source or from an unspecified number of sources. To deal with this kind of problem, probabilistic graphical models are used, in the form of Bayesian networks and object-oriented Bayesian networks, allowing users to handle the uncertainty related to the inferential problem intuitively. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Detection and Estimation of an Optical Image by Photon-Counting Techniques. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, Lily Lee
1973-01-01
A statistical description of a photoelectric detector is given. The photosensitive surface of the detector is divided into many small areas, and the moment generating function of the photon-counting statistic is derived for a large time-bandwidth product. The detection of a specified optical image in the presence of background light using hypothesis testing is discussed. The ideal detector, based on the likelihood ratio computed from the numbers of photoelectrons ejected from many small areas of the photosensitive surface, is studied and compared with the threshold detector and with a simple detector based on the likelihood ratio of the total number of photoelectrons counted over a finite area of the surface. The intensity of the image is assumed to be Gaussian distributed spatially against the uniformly distributed background light. Numerical approximation by the method of steepest descent is used, and the reliability calculations for the detectors are carried out on a digital computer.
Ab initio solution of macromolecular crystal structures without direct methods.
McCoy, Airlie J; Oeffner, Robert D; Wrobel, Antoni G; Ojala, Juha R M; Tryggvason, Karl; Lohkamp, Bernhard; Read, Randy J
2017-04-04
The majority of macromolecular crystal structures are determined using the method of molecular replacement, in which known related structures are rotated and translated to provide an initial atomic model for the new structure. A theoretical understanding of the signal-to-noise ratio in likelihood-based molecular replacement searches has been developed to account for the influence of model quality and completeness, as well as the resolution of the diffraction data. Here we show that, contrary to current belief, molecular replacement need not be restricted to the use of models comprising a substantial fraction of the unknown structure. Instead, likelihood-based methods allow a continuum of applications depending predictably on the quality of the model and the resolution of the data. Unexpectedly, our understanding of the signal-to-noise ratio in molecular replacement leads to the finding that, with data to sufficiently high resolution, fragments as small as single atoms of elements usually found in proteins can yield ab initio solutions of macromolecular structures, including some that elude traditional direct methods.
Exclusion probabilities and likelihood ratios with applications to mixtures.
Slooten, Klaas-Jan; Egeland, Thore
2016-01-01
The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
Interpretation of diagnostic data: 5. How to do it with simple maths.
1983-11-01
The use of simple maths with the likelihood ratio strategy fits in nicely with our clinical views. By making the most out of the entire range of diagnostic test results (i.e., several levels, each with its own likelihood ratio, rather than a single cut-off point and a single ratio) and by permitting us to keep track of the likelihood that a patient has the target disorder at each point along the diagnostic sequence, this strategy allows us to place patients at an extremely high or an extremely low likelihood of disease. Thus, the numbers of patients with ultimately false-positive results (who suffer the slings of labelling and the arrows of needless therapy) and of those with ultimately false-negative results (who therefore miss their chance for diagnosis and, possibly, efficacious therapy) will be dramatically reduced. The following guidelines will be useful in interpreting signs, symptoms and laboratory tests with the likelihood ratio strategy: Seek out, and demand from the clinical or laboratory experts who ought to know, the likelihood ratios for key symptoms and signs, and several levels (rather than just the positive and negative results) of diagnostic test results. Identify, when feasible, the logical sequence of diagnostic tests. Estimate the pretest probability of disease for the patient, and, using either the nomogram or the conversion formulas, apply the likelihood ratio that corresponds to the first diagnostic test result. While remembering that the resulting post-test probability or odds from the first test becomes the pretest probability or odds for the next diagnostic test, repeat the process for all the pertinent symptoms, signs and laboratory studies that pertain to the target disorder. However, these combinations may not be independent, and convergent diagnostic tests, if treated as independent, will combine to overestimate the final post-test probability of disease. You are now far more sophisticated in interpreting diagnostic tests than most of your teachers. In the last part of our series we will show you some rather complex strategies that combine diagnosis and therapy, quantify our as yet nonquantified ideas about use, and require the use of at least a hand calculator.
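The arithmetic behind the nomogram step described above can be written directly: convert the pretest probability to odds, multiply by the likelihood ratio, and convert back, repeating for each test result (treated as independent, with the caveat about convergent tests above). The probabilities and LRs below are hypothetical.

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """Convert pretest probability to odds, apply the LR, convert back."""
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# Hypothetical sequence: 20% pretest probability, then two test results with
# LRs of 4.0 and 2.5, applied sequentially under an independence assumption.
p = 0.20
for lr in (4.0, 2.5):
    p = posttest_probability(p, lr)
    print(f"after a result with LR {lr}: probability of disease = {p:.2f}")
```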
Wang, Chi-Chuan; Lin, Chia-Hui; Lin, Kuan-Yin; Chuang, Yu-Chung; Sheng, Wang-Huei
2016-01-01
Community-acquired pneumonia (CAP) is a common but potentially life-threatening condition, but limited information exists on the effectiveness of fluoroquinolones compared to β-lactams in outpatient settings. We aimed to compare the effectiveness and outcomes of penicillins versus respiratory fluoroquinolones for CAP at outpatient clinics. This was a claims-based retrospective cohort study. Patients aged 20 years or older with at least 1 new pneumonia treatment episode were included, and the index penicillin or respiratory fluoroquinolone therapies for a pneumonia episode were at least 5 days in duration. The 2 groups were matched by propensity scores. Cox proportional hazard models were used to compare the rates of hospitalizations/emergency service (ES) visits and 30-day mortality. A logistic model was used to compare the likelihood of treatment failure between the 2 groups. After propensity score matching, 2622 matched pairs were included in the final model. The likelihood of treatment failure with fluoroquinolone-based therapy was lower than that with penicillin-based therapy (adjusted odds ratio [AOR], 0.88; 95% confidence interval [95% CI], 0.77–0.99), but no differences were found in hospitalization/ES visits (adjusted hazard ratio [HR], 1.27; 95% CI, 0.92–1.74) or 30-day mortality (adjusted HR, 0.69; 95% CI, 0.30–1.62) between the 2 groups. The likelihood of treatment failure with fluoroquinolone-based therapy was lower than that with penicillin-based therapy for CAP in the outpatient clinic setting. However, this effect may be marginal. Further investigation into the comparative effectiveness of these 2 treatment options is warranted. PMID:26871827
Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures
ERIC Educational Resources Information Center
Atar, Burcu; Kamata, Akihito
2011-01-01
The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…
Understanding the properties of diagnostic tests - Part 2: Likelihood ratios.
Ranganathan, Priya; Aggarwal, Rakesh
2018-01-01
Diagnostic tests are used to identify subjects with and without disease. In a previous article in this series, we examined some attributes of diagnostic tests - sensitivity, specificity, and predictive values. In this second article, we look at likelihood ratios, which are useful for the interpretation of diagnostic test results in everyday clinical practice.
Wong, W N; Sek, Antonio C H; Lau, Rick F L; Li, K M; Leung, Joe K S; Tse, M L; Ng, Andy H W; Stenstrom, Robert
2003-11-01
To compare the diagnostic accuracy of emergency department (ED) physicians with the World Health Organization (WHO) case definition in a large community-based SARS (severe acute respiratory syndrome) cohort. This was a cohort study of all patients from Hong Kong's Amoy Garden complex who presented to an ED SARS screening clinic during a 2-month outbreak. Clinical findings and WHO case definition criteria were recorded, along with ED diagnoses. Final diagnoses were established independently based on relevant diagnostic tests performed after the ED visit. Emergency physician diagnostic accuracy was compared with that of the WHO SARS case definition. Sensitivity, specificity, predictive values and likelihood ratios were calculated using standard formulae. During the study period, 818 patients presented with SARS-like symptoms, including 205 confirmed SARS, 35 undetermined SARS and 578 non-SARS. Sensitivity, specificity and accuracy were 91%, 96% and 94% for ED clinical diagnosis, versus 42%, 86% and 75% for the WHO case definition. Positive likelihood ratios (LR+) were 21.1 for physician judgement and 3.1 for the WHO criteria. Negative likelihood ratios (LR-) were 0.10 for physician judgement and 0.67 for the WHO criteria, indicating that clinician judgement was a much more powerful predictor than the WHO criteria. Physician clinical judgement was more accurate than the WHO case definition. Reliance on the WHO case definition as a SARS screening tool may lead to an unacceptable rate of misdiagnosis. The SARS case definition must be revised if it is to be used as a screening tool in emergency departments and primary care settings.
Transfer Entropy as a Log-Likelihood Ratio
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Bossomaier, Terry
2012-09-01
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
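For Gaussian autoregressive models, the equivalence noted in this abstract is concrete: transfer entropy equals half the log ratio of residual variances from the restricted (target past only) and full (target plus source past) regressions, and 2n times this quantity is the log-likelihood ratio statistic. A sketch on simulated data; the coupling coefficients are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):                      # y is driven by lagged x
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def residual_var(target, regressors):
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return np.var(target - regressors @ beta)

Y, Ylag, Xlag = y[1:], y[:-1, None], x[:-1, None]
var_restricted = residual_var(Y, Ylag)                 # y-past only
var_full = residual_var(Y, np.hstack([Ylag, Xlag]))    # y-past and x-past
te = 0.5 * np.log(var_restricted / var_full)           # nats

# The LR statistic 2*n*TE is asymptotically chi-squared with 1 degree of freedom.
print(f"estimated transfer entropy: {te:.4f} nats, LR statistic: {2 * len(Y) * te:.1f}")
```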
Tree-Based Global Model Tests for Polytomous Rasch Models
ERIC Educational Resources Information Center
Komboz, Basil; Strobl, Carolin; Zeileis, Achim
2018-01-01
Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…
On the Likelihood Ratio Test for the Number of Factors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.; Yuan, Ke-Hai
2007-01-01
In the exploratory factor analysis, when the number of factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution due to a problem of rank deficiency and nonidentifiability of model parameters. As a result, decisions regarding the number of factors may be incorrect. Several…
ERIC Educational Resources Information Center
Moses, Tim
2008-01-01
Nine statistical strategies for selecting equating functions in an equivalent groups design were evaluated. The strategies of interest were likelihood ratio chi-square tests, regression tests, Kolmogorov-Smirnov tests, and significance tests for equated score differences. The most accurate strategies in the study were the likelihood ratio tests…
Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes
ERIC Educational Resources Information Center
Leite, Walter L.; Stapleton, Laura M.
2011-01-01
In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…
Optimal Methods for Classification of Digitally Modulated Signals
2013-03-01
Instead of using a ratio of likelihood functions, the proposed approach uses the Kullback-Leibler (KL) divergence. Two methodologies were used to develop classification algorithms for a wider set of signal types: the likelihood ratio test and blind demodulation.
2017-01-01
Electroencephalogram (EEG)-based decoding of human brain activity is challenging owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for feature extraction, a t-test is used to select significant features, and likelihood ratio-based score fusion is used to predict brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, currently the most popular feature extraction and prediction approach, showed an accuracy of 65.7%, whereas the proposed method predicted novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method. PMID:28558002
[Waist-to-height ratio is an indicator of metabolic risk in children].
Valle-Leal, Jaime; Abundis-Castro, Leticia; Hernández-Escareño, Juan; Flores-Rubio, Salvador
2016-01-01
Abdominal fat, particularly visceral fat, is associated with a high risk of metabolic complications. The waist-to-height ratio (WHtR) is used to assess abdominal fat in individuals of all ages. To determine the ability of the waist-to-height ratio to detect metabolic risk in Mexican schoolchildren. A study was conducted on children between 6 and 12 years. Obesity was diagnosed as a body mass index (BMI) ≥ 85th percentile, and a WHtR ≥ 0.5 was considered abdominal obesity. Blood levels of glucose, cholesterol and triglycerides were measured. The sensitivity, specificity, positive and negative predictive values, area under the curve, positive likelihood ratio and negative likelihood ratio of the WHtR and BMI were calculated in order to identify metabolic alterations. WHtR and BMI were compared to determine which had the best diagnostic efficiency. Of the 223 children included in the study, 51 had hypertriglyceridaemia, 27 hypercholesterolaemia, and 9 hyperglycaemia. On comparing the diagnostic efficiency of WHtR with that of BMI, there was a sensitivity of 100% vs. 56% for hyperglycaemia, 93% vs. 70% for hypercholesterolaemia, and 76% vs. 59% for hypertriglyceridaemia. The specificity, negative predictive value, positive predictive value, positive likelihood ratio, negative likelihood ratio, and area under the curve were also higher for WHtR. The WHtR is a more efficient indicator than BMI for identifying metabolic risk in Mexican school-age children. Copyright © 2015 Sociedad Chilena de Pediatría. Publicado por Elsevier España, S.L.U. All rights reserved.
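A sketch of the underlying screening computation: flag children with WHtR ≥ 0.5 and tabulate the 2 × 2 table against an observed metabolic alteration. The handful of (waist, height, status) triples below are invented for illustration.

```python
# Hypothetical (waist_cm, height_cm, has_metabolic_alteration) triples.
children = [(62, 120, True), (55, 125, False), (70, 130, True),
            (68, 130, False), (59, 124, True), (54, 118, False)]

flagged = [w / h >= 0.5 for w, h, _ in children]   # WHtR cutoff from the study
truth   = [m for _, _, m in children]

tp = sum(f and t for f, t in zip(flagged, truth))
fn = sum(not f and t for f, t in zip(flagged, truth))
fp = sum(f and not t for f, t in zip(flagged, truth))
tn = sum(not f and not t for f, t in zip(flagged, truth))

sens, spec = tp / (tp + fn), tn / (tn + fp)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
print(f"LR+ {sens / (1 - spec):.2f}, LR- {(1 - sens) / spec:.2f}")
```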
Ramsay-Curve Differential Item Functioning
ERIC Educational Resources Information Center
Woods, Carol M.
2011-01-01
Differential item functioning (DIF) occurs when an item on a test, questionnaire, or interview has different measurement properties for one group of people versus another, irrespective of true group-mean differences on the constructs being measured. This article is focused on item response theory based likelihood ratio testing for DIF (IRT-LR or…
Stochastic Ordering Using the Latent Trait and the Sum Score in Polytomous IRT Models.
ERIC Educational Resources Information Center
Hemker, Bas T.; Sijtsma, Klaas; Molenaar, Ivo W.; Junker, Brian W.
1997-01-01
Stochastic ordering properties are investigated for a broad class of item response theory (IRT) models for which the monotone likelihood ratio does not hold. A taxonomy is given for nonparametric and parametric models for polytomous models based on the hierarchical relationship between the models. (SLD)
Logistic Approximation to the Normal: The KL Rationale
ERIC Educational Resources Information Center
Savalei, Victoria
2006-01-01
A rationale is proposed for approximating the normal distribution with a logistic distribution using a scaling constant based on minimizing the Kullback-Leibler (KL) information, that is, the expected amount of information available in a sample to distinguish between two competing distributions using a likelihood ratio (LR) test, assuming one of…
Identifying common donors in DNA mixtures, with applications to database searches.
Slooten, K
2017-01-01
Several methods exist to compute the likelihood ratio LR(M, g) evaluating the possible contribution of a person of interest with genotype g to a mixed trace M. In this paper we generalize this LR to a likelihood ratio LR(M1, M2) involving two possibly mixed traces M1 and M2, where the question is whether there is a donor in common to both traces. In case one of the traces is in fact a single genotype, then this likelihood ratio reduces to the usual LR(M, g). We explain how our method conceptually is a logical consequence of the fact that LR calculations of the form LR(M, g) can be equivalently regarded as a probabilistic deconvolution of the mixture. Based on simulated data, and using a semi-continuous mixture evaluation model, we derive ROC curves of our method applied to various types of mixtures. From these data we conclude that searches for a common donor are often feasible in the sense that a very small false positive rate can be combined with a high probability to detect a common donor if there is one. We also show how database searches comparing all traces to each other can be carried out efficiently, as illustrated by the application of the method to the mixed traces in the Dutch DNA database. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Bayesian Hierarchical Random Effects Models in Forensic Science.
Aitken, Colin G G
2018-01-01
Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century through the work at Bletchley Park in the Second World War to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now been sufficiently well-developed and have become so widespread that it is timely to try and provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package for use by forensic scientists world-wide that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package, and references to SAILR are made as appropriate.
Empirical likelihood method for non-ignorable missing data problems.
Guan, Zhong; Qin, Jing
2017-01-01
The missing response problem is ubiquitous in survey sampling, medical, social science and epidemiology studies. It is well known that non-ignorable missingness, where the missingness of a response depends on its own value, is the most difficult missing data problem. In the statistical literature, unlike for the ignorable missing data problem, few papers on non-ignorable missing data are available apart from fully parametric model based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method, we obtain constrained maximum empirical likelihood estimators of the parameters in the missing probability and of the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, the analysis of a real AIDS trial data set shows that the missingness of CD4 counts at around two years is non-ignorable and that the sample mean based on observed data only is biased.
ERIC Educational Resources Information Center
Yuan, Ke-Hai
2008-01-01
In the literature of mean and covariance structure analysis, noncentral chi-square distribution is commonly used to describe the behavior of the likelihood ratio (LR) statistic under alternative hypothesis. Due to the inaccessibility of the rather technical literature for the distribution of the LR statistic, it is widely believed that the…
ERIC Educational Resources Information Center
Egberink, Iris J. L.; Meijer, Rob R.; Tendeiro, Jorge N.
2015-01-01
A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often only reported in terms of statistical significance, and researchers proposed different methods to empirically select anchor items. It is unclear, however, how many…
GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.
Wang, Fei; Li, Hong; Lu, Mingquan
2017-06-30
Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.
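The GLRT principle mentioned above, stripped to a scalar toy problem (not the paper's MLE cost-function decomposition): under H0 the residuals have zero mean, and a spoofing-induced distortion shifts them. The noise model and shift size are assumptions for illustration.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

def glrt_statistic(x, sigma=1.0):
    """GLRT for H0: zero-mean i.i.d. Gaussian residuals vs H1: unknown mean.

    lambda = 2 * (max_theta log L(theta) - log L(0)) = n * xbar**2 / sigma**2,
    which is chi-squared with 1 degree of freedom under H0.
    """
    return len(x) * np.mean(x) ** 2 / sigma ** 2

threshold = chi2.ppf(0.999, df=1)            # low false-alarm threshold
clean = rng.standard_normal(200)             # residuals of authentic signals
spoofed = rng.standard_normal(200) + 0.5     # a spoofing-induced distortion

for name, x in [("clean", clean), ("spoofed", spoofed)]:
    lam = glrt_statistic(x)
    print(f"{name}: GLRT statistic {lam:.1f}, detect = {lam > threshold}")
```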
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.
1986-01-01
The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.
ERIC Educational Resources Information Center
Moses, Tim; Holland, Paul W.
2010-01-01
In this study, eight statistical strategies were evaluated for selecting the parameterizations of loglinear models for smoothing the bivariate test score distributions used in nonequivalent groups with anchor test (NEAT) equating. Four of the strategies were based on significance tests of chi-square statistics (Likelihood Ratio, Pearson,…
IRT Model Selection Methods for Dichotomous Items
ERIC Educational Resources Information Center
Kang, Taehoon; Cohen, Allan S.
2007-01-01
Fit of the model to the data is important if the benefits of item response theory (IRT) are to be obtained. In this study, the authors compared model selection results using the likelihood ratio test, two information-based criteria, and two Bayesian methods. An example illustrated the potential for inconsistency in model selection depending on…
A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks
Grana, Justin; Wolpert, David; Neil, Joshua; ...
2016-03-11
The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviates significantly from the baseline model. But, many anomaly detectors do not take into account plausible attacker behavior. As a result, anomaly detectors are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present Receiver Operating Characteristic (ROC) curves for various network parameterizations that show for any rate of true positives, the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.
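A toy version of the Monte Carlo integration described above, assuming a single host whose per-minute event counts are Poisson with rate mu0 when benign and mu1 after an unknown compromise time tau; all rates and window lengths are invented for illustration.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)

# Toy model: per-minute event counts are Poisson(mu0) on a benign host; if an
# attacker compromises the host at an unknown minute tau, the rate becomes mu1.
mu0, mu1, T = 2.0, 3.0, 60
counts = rng.poisson(mu0, T)                     # observed window (benign here)

def log_likelihood_ratio(x, n_mc=2000):
    """Monte Carlo estimate of log P(x | attack) - log P(x | benign),
    integrating over the unknown compromise time tau (uniform prior)."""
    taus = rng.integers(0, len(x) + 1, size=n_mc)
    ll_attack = np.array([poisson.logpmf(x[:t], mu0).sum() +
                          poisson.logpmf(x[t:], mu1).sum() for t in taus])
    m = ll_attack.max()
    ll_mix = m + np.log(np.mean(np.exp(ll_attack - m)))   # log-mean-exp
    return ll_mix - poisson.logpmf(x, mu0).sum()

attacked = np.concatenate([counts[:30], rng.poisson(mu1, 30)])
print(f"log LR on benign traffic: {log_likelihood_ratio(counts):+.2f}")
print(f"log LR after an attack:   {log_likelihood_ratio(attacked):+.2f}")
```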
Sanz-Barbero, Belén; Vives-Cases, Carmen; Otero-García, Laura; Muntaner, Carles; Torrubiano-Domínguez, Jordi; O'Campo, Patricia
2015-12-01
Intimate partner violence (IPV) against women is a complex worldwide public health problem. There is scarce research on the independent effect on IPV exerted by structural factors such as labour and economic policies, economic inequalities and gender inequality. To analyse the association, in Spain, between contextual variables of regional unemployment and income inequality and individual women's likelihood of IPV, independently of the women's characteristics. We conducted multilevel logistic regression to analyse cross-sectional data from the 2011 Spanish Macrosurvey of Gender-based Violence, which included 7898 adult women. The first level of analysis was the individual women's characteristics and the second level was the region of residence. Of the survey participants, 12.2% reported lifetime IPV. The region of residence accounted for 3.5% of the total variability in IPV prevalence. We determined a direct association between regional male long-term unemployment and IPV likelihood (P = 0.007) and between the Gini index of regional income inequality and IPV likelihood (P < 0.001). Women residing in a region with higher gender-based income discrimination are at a lower likelihood of IPV than those residing in a region with low gender-based income discrimination (odds ratio = 0.64, 95% confidence intervals: 0.55-0.75). Growing regional unemployment rates and income inequalities increase women's likelihood of IPV. In times of economic downturn, like the current one in Spain, this association may translate into an increase in women's vulnerability to IPV. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.
Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
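The likelihood ratio (score function) idea behind such sensitivity indices, in its simplest static form: d/dtheta E[f(X)] = E[f(X) d log p(X; theta)/dtheta], with the centered variant replacing f by f minus its sample mean. This is a toy illustration for an exponential model where the exact answer is known, not the authors' dynamics-specific covariance estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 2.0                              # rate parameter of an Exp(theta) model
x = rng.exponential(1 / theta, 200_000)  # numpy parameterizes by the scale 1/theta

f = x                                    # observable f(X) = X, so E[f] = 1/theta
score = 1 / theta - x                    # d/dtheta log p(x; theta) for Exp(theta)

plain = np.mean(f * score)                   # likelihood ratio estimator
centered = np.mean((f - f.mean()) * score)   # centered (covariance) form

print(f"plain estimate     : {plain:+.4f}")
print(f"centered estimate  : {centered:+.4f}")
print(f"exact d/dtheta E[X]: {-1 / theta**2:+.4f}")
```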
A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test
NASA Astrophysics Data System (ADS)
Becker, D.; Cain, S.
Space object detection is of great importance in the highly dependent yet competitive and congested space domain. Detection algorithms employed play a crucial role in fulfilling the detection component in the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follows a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long exposure images of small and/or dim space objects from ground based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image only contains background noise. The detection algorithm tests each pixel point of the Fourier transformed images to make the determination if an object is present based on the criteria threshold found in the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the current algorithm used in space situational awareness applications to evaluate its value.
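For contrast with the Fourier-domain test explored above, here is the conventional baseline: a matched filter (spatial correlator) implemented via FFTs and thresholded under the Gaussian noise assumption. The PSF width, object amplitude, and 5-sigma threshold are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 64
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= np.linalg.norm(psf)                  # unit-energy template

scene = rng.standard_normal((n, n))         # Gaussian background, sigma = 1
scene += 8.0 * np.roll(np.roll(psf, -10, axis=0), 7, axis=1)   # faint object

# Matched filtering as multiplication in the Fourier domain (circular correlation).
corr = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(psf))))

# Under noise-only conditions each correlation value is N(0, 1) because the
# template has unit norm, so a 5-sigma threshold gives a low false-alarm rate.
detections = np.argwhere(corr > 5.0)
print("detections at (row, col):", detections.tolist())
```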
Mohd-Sidik, Sherina; Arroll, Bruce; Goodyear-Smith, Felicity; Zain, Azhar M D
2011-01-01
To determine the diagnostic accuracy of the two questions with help question (TQWHQ) in the Malay language. The two questions are case-finding questions on depression, and a question on whether help is needed was added to increase the specificity of the two questions. This cross-sectional validation study was conducted in a government funded primary care clinic in Malaysia. The participants included 146 consecutive women patients receiving no psychotropic drugs and who were Malay speakers. The main outcome measures were sensitivity, specificity, and likelihood ratios of the two questions and help question. The two questions showed a sensitivity of 99% (95% confidence interval 88% to 99.9%) and a specificity of 70% (62% to 78%). The likelihood ratio for a positive test was 3.3 (2.5 to 4.5) and the likelihood ratio for a negative test was 0.01 (0.00 to 0.57). The addition of the help question to the two questions increased the specificity to 95% (89% to 98%). The two questions on depression detected most cases of depression in this study. The questions have the advantage of brevity. The addition of the help question increased the specificity of the two questions. Based on these findings, the TQWHQ can be strongly recommended for detection of depression in government primary care clinics in Malaysia. Translation did not appear to affect the validity of the TQWHQ.
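The reported likelihood ratios follow directly from the quoted sensitivity and specificity; the last line below assumes, for illustration only, that sensitivity is unchanged when the help question is added (the abstract reports only the new specificity).

```python
# Operating characteristics reported in the abstract.
sens, spec = 0.99, 0.70
lr_pos = sens / (1 - spec)            # ~3.3, matching the reported value
lr_neg = (1 - sens) / spec            # ~0.01, matching the reported value

# With the help question, specificity rises to 0.95; assuming (illustration
# only) that sensitivity is unchanged:
lr_pos_help = sens / (1 - 0.95)

print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}, "
      f"LR+ with help question ~ {lr_pos_help:.1f}")
```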
Inferring relationships between pairs of individuals from locus heterozygosities
Presciuttini, Silvano; Toni, Chiara; Tempestini, Elena; Verdiani, Simonetta; Casarino, Lucia; Spinetti, Isabella; Stefano, Francesco De; Domenici, Ranieri; Bailey-Wilson, Joan E
2002-01-01
Background The traditional exact method for inferring relationships between individuals from genetic data is not easily applicable in all situations that may be encountered in several fields of applied genetics. This study describes an approach that gives affordable results and is easily applicable; it is based on the probabilities that two individuals share 0, 1 or both alleles at a locus identical by state. Results We show that these probabilities (z_i) depend on locus heterozygosity (H), and are scarcely affected by variation of the distribution of allele frequencies. This allows us to obtain empirical curves relating z_i to H for a series of common relationships, so that the likelihood ratio of a pair of relationships between any two individuals, given their genotypes at a locus, is a function of a single parameter, H. Application to large samples of mother-child and full-sib pairs shows that the statistical power of this method to infer the correct relationship is not much lower than the exact method. Analysis of a large database of STR data proves that locus heterozygosity does not vary significantly among Caucasian populations, apart from special cases, so that the likelihood ratio of the more common relationships between pairs of individuals may be obtained by looking at tabulated z_i values. Conclusions A simple method is provided, which may be used by any scientist with the help of a calculator or a spreadsheet to compute the likelihood ratios of common alternative relationships between pairs of individuals. PMID:12441003
Accounting for informatively missing data in logistic regression by means of reassessment sampling.
Lin, Ji; Lyles, Robert H
2015-05-20
We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
Predicting In-State Workforce Retention After Graduate Medical Education Training.
Koehler, Tracy J; Goodfellow, Jaclyn; Davis, Alan T; Spybrook, Jessaca; vanSchagen, John E; Schuh, Lori
2017-02-01
There is a paucity of literature when it comes to identifying predictors of in-state retention of graduate medical education (GME) graduates, such as the demographic and educational characteristics of these physicians. The purpose was to use demographic and educational predictors to identify graduates from a single Michigan GME sponsoring institution, who are also likely to practice medicine in Michigan post-GME training. We included all residents and fellows who graduated between 2000 and 2014 from 1 of 18 GME programs at a Michigan-based sponsoring institution. Predictor variables identified by logistic regression with cross-validation were used to create a scoring tool to determine the likelihood of a GME graduate to practice medicine in the same state post-GME training. A 6-variable model, which included 714 observations, was identified. The predictor variables were birth state, program type (primary care versus non-primary care), undergraduate degree location, medical school location, state in which GME training was completed, and marital status. The positive likelihood ratio (+LR) for the scoring tool was 5.31, while the negative likelihood ratio (-LR) was 0.46, with an accuracy of 74%. The +LR indicates that the scoring tool was useful in predicting whether graduates who trained in a Michigan-based GME sponsoring institution were likely to practice medicine in Michigan following training. Other institutions could use these techniques to identify key information that could help pinpoint matriculating residents/fellows likely to practice medicine within the state in which they completed their training.
Urabe, Naohisa; Sakamoto, Susumu; Sano, Go; Suzuki, Junko; Hebisawa, Akira; Nakamura, Yasuhiko; Koyama, Kazuya; Ishii, Yoshikazu; Tateda, Kazuhiro; Homma, Sakae
2017-06-01
We evaluated the usefulness of an Aspergillus galactomannan (GM) test, a β-d-glucan (βDG) test, and two different Aspergillus PCR assays of bronchoalveolar lavage fluid (BALF) samples for the diagnosis of chronic pulmonary aspergillosis (CPA). BALF samples from 30 patients with and 120 patients without CPA were collected. We calculated the sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio for each test individually and in combination with other tests. The optical density index values, as determined by receiver operating characteristic analysis, for the diagnosis of CPA were 0.5 and 100 for GM and βDG testing of BALF, respectively. The sensitivity and specificity of the GM test, βDG test, and PCR assays 1 and 2 were 77.8% and 90.0%, 77.8% and 72.5%, 86.7% and 84.2%, and 66.7% and 94.2%, respectively. A comparison of the PCR assays showed that PCR assay 1 had a better sensitivity, a better negative predictive value, and a better negative likelihood ratio and PCR assay 2 had a better specificity, a better positive predictive value, and a better positive likelihood ratio. The combination of the GM and βDG tests had the highest diagnostic odds ratio. The combination of the GM and βDG tests on BALF was more useful than any single test for diagnosing CPA. Copyright © 2017 American Society for Microbiology.
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
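A sketch of the classic two-sample change-in-ratio estimator that this framework generalizes, assuming equal encounter probabilities across subclasses within each sampling period; the counts below are hypothetical.

```python
# Two-sample change-in-ratio estimator. With p1 and p2 the male proportions
# before and after a known removal, N_hat = (R_male - R_total * p2) / (p1 - p2).
n1_male, n1_total = 120, 300          # pre-removal sample: 40% male
removed_male, removed_total = 80, 100 # known removals between the samples
n2_male, n2_total = 68, 250           # post-removal sample

p1 = n1_male / n1_total               # 0.400
p2 = n2_male / n2_total               # 0.272

N_hat = (removed_male - removed_total * p2) / (p1 - p2)
print(f"estimated pre-removal population size: {N_hat:.0f}")
```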
NASA Astrophysics Data System (ADS)
Pan, Zhen; Anderes, Ethan; Knox, Lloyd
2018-05-01
One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel space, all order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all order lensing and pixel space anomalies. Its tractability relies on a crucial factorization of the pixel space covariance matrix of the polarization observations which allows one to compute the full Gaussian approximate likelihood profile, as a function of r , at the same computational cost of a single likelihood evaluation.
NASA Astrophysics Data System (ADS)
Neuer, Marcus J.
2013-11-01
A technique for the spectral identification of strontium-90 is shown, utilising a Maximum-Likelihood deconvolution. Different deconvolution approaches are discussed and summarised. Based on the intensity distribution of the beta emission and Geant4 simulations, a combined response matrix is derived, tailored to the β⁻ detection process in sodium iodide detectors. It includes scattering effects and attenuation by applying a base material decomposition extracted from Geant4 simulations with a CAD model for a realistic detector system. Inversion results of measurements show the agreement between deconvolution and reconstruction. A detailed investigation with additional masking sources like ⁴⁰K, ²²⁶Ra and ¹³¹I shows that a contamination of strontium can be found in the presence of these nuisance sources. Identification algorithms for strontium are presented based on the derived technique. For the implementation of blind identification, an exemplary masking ratio is calculated.
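A minimal sketch of maximum-likelihood deconvolution with a response matrix, using the EM (Richardson-Lucy) iteration for Poisson counting data. The Gaussian-blur response and two-line spectrum below are stand-ins for the Geant4-derived response and a real measurement.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy detector response: each true energy bin j spreads into measured bins i.
n_bins = 40
R = np.zeros((n_bins, n_bins))
for j in range(n_bins):
    kernel = np.exp(-0.5 * ((np.arange(n_bins) - j) / 2.0) ** 2)
    R[:, j] = kernel / kernel.sum()          # unit column sums

truth = np.zeros(n_bins)
truth[10], truth[25] = 500, 300              # two spectral lines
measured = rng.poisson(R @ truth)            # Poisson counting statistics

# Richardson-Lucy / EM iteration for the Poisson ML deconvolution problem:
#   x_{k+1} = x_k * R^T( y / (R x_k) )   (valid here since R has unit column sums)
x = np.full(n_bins, measured.sum() / n_bins)
for _ in range(200):
    x *= R.T @ (measured / np.maximum(R @ x, 1e-12))

print("peak bins recovered:", sorted(np.argsort(x)[-2:]))
```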
Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter
NASA Astrophysics Data System (ADS)
Murphy, T.; Holzinger, M.
2016-09-01
Both SSA and SDA necessitate uncued, partially informed detection and orbit determination efforts for small space objects, which often produce only low-strength electro-optical signatures. General frame-to-frame detection and tracking of objects includes methods such as moving target indicator, multiple hypothesis testing, direct track-before-detect methods, and random finite set based multiobject tracking. This paper applies the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary innovation in this paper is a detailed analysis of existing state-of-the-art likelihood functions and of a likelihood function, based on a binary hypothesis, previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5m Raven-class telescope and a twenty-degree field-of-view, high-frame-rate CMOS sensor. In particular, a data set from an extended pass of the Hitomi Astro-H satellite approximately 3 days after loss of communication and potential breakup is examined.
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1985-01-01
The application of the Generalized Likelihood Ratio technique to the detection and identification of aircraft control element failures has been evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that the technique has potential but that the effects of wind turbulence and Kalman filter model errors are problems which must be overcome.
Harrell-Williams, Leigh; Wolfe, Edward W
2014-01-01
Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio (LR) difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the LR difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
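For reference, the LR difference statistic itself is simple to compute from two nested fits; a minimal sketch (the chi-square reference distribution is the conventional assumption whose adequacy the study scrutinizes, and the numbers are made up):

```python
from scipy.stats import chi2

def lr_difference_test(loglik_restricted, loglik_general, df_diff):
    """Likelihood ratio difference test for nested models: G2 = 2 * (ll1 - ll0),
    referred to a chi-square with df equal to the difference in free parameters."""
    g2 = 2.0 * (loglik_general - loglik_restricted)
    return g2, chi2.sf(g2, df_diff)

# example: unidimensional (restricted) vs multidimensional (general) model
g2, p = lr_difference_test(-10234.7, -10221.3, df_diff=2)
```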
Krishnamoorthy, K; Oral, Evrim
2017-12-01
A standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT, an available modified likelihood ratio test (MLRT), and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT can be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
Greer, Joy A; Zelig, Craig M; Choi, Kenny K; Rankins, Nicole Calloway; Chauhan, Suneet P; Magann, Everett F
2012-08-01
To compare the likelihood of being within weight standards before and after pregnancy between United States Marine Corps (USMC) and Navy (USN) active duty women (ADW). ADW with singleton gestations who delivered at a USMC base were followed for 6 months to determine the likelihood of returning to military weight standards. Odds ratios (OR), adjusted odds ratios (AOR), and 95% confidence intervals (CI) were calculated; p < 0.05 was considered significant. Similar proportions of USN and USMC ADW were within body weight standards one year prior to pregnancy (79% and 97%) and at the first prenatal visit (69% and 96%), respectively. However, USMC ADW were significantly more likely to be within body weight standards at 3 months (AOR 4.30, 95% CI 1.28-14.43) and 6 months after delivery (AOR 9.94, 95% CI 1.53-64.52) than USN ADW. Weight gained during pregnancy did not differ significantly between the two groups (40.4 lbs vs 44.2 lbs, p = 0.163). The likelihood of spontaneous vaginal delivery was significantly higher (OR 2.52, 95% CI 1.20-5.27) and the mean birth weight significantly lower (p = 0.0036) among USMC ADW as compared with USN ADW. Being within weight standards differs significantly for USMC and USN ADW after pregnancy.
Shen, Yongchun; Pang, Caishuang; Wu, Yanqiu; Li, Diandian; Wan, Chun; Liao, Zenglin; Yang, Ting; Chen, Lei; Wen, Fuqiang
2016-06-01
The usefulness of bronchoalveolar lavage fluid (BALF) CD4/CD8 ratio for diagnosing sarcoidosis has been reported in many studies with variable results. Therefore, we performed a meta-analysis to estimate the overall diagnostic accuracy of BALF CD4/CD8 ratio based on the bulk of published evidence. Studies published prior to June 2015 and indexed in PubMed, OVID, Web of Science, Scopus and other databases were evaluated for inclusion. Data on sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR) were pooled from included studies. Summary receiver operating characteristic (SROC) curves were used to summarize overall test performance. Deeks's funnel plot was used to detect publication bias. Sixteen publications with 1885 subjects met our inclusion criteria and were included in this meta-analysis. Summary estimates of the diagnostic performance of the BALF CD4/CD8 ratio were as follows: sensitivity, 0.70 (95%CI 0.64-0.75); specificity, 0.83 (95%CI 0.78-0.86); PLR, 4.04 (95%CI 3.13-5.20); NLR, 0.36 (95%CI 0.30-0.44); and DOR, 11.17 (95%CI 7.31-17.07). The area under the SROC curve was 0.84 (95%CI 0.81-0.87). There was no evidence of publication bias. Measuring the BALF CD4/CD8 ratio may assist in the diagnosis of sarcoidosis when interpreted in parallel with other diagnostic factors. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
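Once pooled sensitivity and specificity are fixed, the remaining summary indices are simple arithmetic; a quick back-of-envelope sketch (note the study's pooled PLR, NLR, and DOR come from pooling each index across studies, so they need not match this direct calculation exactly):

```python
sens, spec = 0.70, 0.83          # pooled estimates reported above

plr = sens / (1 - spec)          # positive likelihood ratio ≈ 4.1
nlr = (1 - sens) / spec          # negative likelihood ratio ≈ 0.36
dor = plr / nlr                  # diagnostic odds ratio ≈ 11.4

print(f"PLR={plr:.2f}, NLR={nlr:.2f}, DOR={dor:.1f}")
```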
Dantan, Etienne; Combescure, Christophe; Lorent, Marine; Ashton-Chess, Joanna; Daguin, Pascal; Classe, Jean-Marc; Giral, Magali; Foucher, Yohann
2014-04-01
Predicting chronic disease evolution from a prognostic marker is a key field of research in clinical epidemiology. However, the prognostic capacity of a marker is not systematically evaluated using the appropriate methodology. We proposed the use of simple equations to calculate time-dependent sensitivity and specificity based on published survival curves, and other time-dependent indicators such as predictive values, likelihood ratios, and posttest probability ratios, to reappraise prognostic marker accuracy. The methodology is illustrated by back calculating time-dependent indicators from published articles presenting a marker as highly correlated with the time to event, concluding on the high prognostic capacity of the marker, and presenting the Kaplan-Meier survival curves. The tools necessary to run these direct and simple computations are available online at http://www.divat.fr/en/online-calculators/evalbiom. Our examples illustrate that published conclusions about prognostic marker accuracy may be overoptimistic, thus giving potential for major mistakes in therapeutic decisions. Our approach should help readers better evaluate clinical articles reporting on prognostic markers. Time-dependent sensitivity and specificity inform on the inherent prognostic capacity of a marker for a defined prognostic time. Time-dependent predictive values, likelihood ratios, and posttest probability ratios may additionally contribute to interpreting the marker's prognostic capacity. Copyright © 2014 Elsevier Inc. All rights reserved.
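The step from a pretest probability through a likelihood ratio to a posttest probability is a one-liner in odds space; a minimal illustration (the input values are made up):

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Convert a pretest probability to a posttest probability via odds."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# e.g. a 20% pretest probability and a positive likelihood ratio of 5
print(posttest_probability(0.20, 5.0))   # -> 0.555...
```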
NASA Astrophysics Data System (ADS)
Wang, Ershen; Jia, Chaoying; Tong, Gang; Qu, Pingping; Lan, Xiaoyu; Pang, Tao
2018-03-01
The receiver autonomous integrity monitoring (RAIM) is one of the most important parts of an avionic navigation system. Two problems with the standard particle filter (PF) need to be addressed to improve such a system: the degeneracy phenomenon, and sample impoverishment, in which the available samples cannot adequately represent the true probability density function. This study presents a GPS RAIM method based on a chaos particle swarm optimization particle filter (CPSO-PF) algorithm with a log likelihood ratio. The chaos sequence generates a set of chaotic variables, which are mapped to the interval of the optimization variables to improve particle quality. This chaos perturbation overcomes the potential for the search to become trapped in a local optimum in the particle swarm optimization (PSO) algorithm. Test statistics are configured based on a likelihood ratio, and satellite fault detection is then conducted by checking the consistency between the state estimate of the main PF and those of the auxiliary PFs. Based on GPS data, the experimental results demonstrate that the proposed algorithm can effectively detect and isolate satellite faults under conditions of non-Gaussian measurement noise. Moreover, the performance of the proposed method is better than that of RAIM based on the PF or PSO-PF algorithm.
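The chaotic perturbation step can be illustrated with the classic logistic map; this is only a sketch under the assumption of a logistic-type map, since the abstract does not specify which chaotic map or parameters the authors use:

```python
def logistic_chaos(n, x0=0.31, mu=4.0):
    """Generate n chaotic variables in (0, 1) with the logistic map."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        xs.append(x)
    return xs

def map_to_interval(xs, lo, hi):
    """Map chaotic variables onto the optimization-variable interval [lo, hi]."""
    return [lo + (hi - lo) * x for x in xs]

# perturbations used to diversify particles and escape local optima
perturbations = map_to_interval(logistic_chaos(50), -0.5, 0.5)
```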
Yadav, Ram Bharos; Srivastava, Subodh; Srivastava, Rajeev
2016-01-01
The proposed framework is obtained by casting the noise removal problem into a variational framework. This framework automatically identifies the type of noise present in the magnetic resonance image (Gaussian, Rayleigh, or Rician) and filters it by choosing an appropriate filter. This filter includes two terms: the first is a data likelihood term and the second is a prior function. The first term is obtained by minimizing the negative log likelihood of the corresponding probability density function. Further, due to the ill-posedness of the likelihood term, a prior function is needed. This paper examines three partial differential equation based priors: a total variation (TV) based prior, an anisotropic diffusion based prior, and a complex diffusion (CD) based prior. A regularization parameter is used to balance the trade-off between the data fidelity term and the prior. A finite difference scheme is used for discretization of the proposed method. The performance analysis and a comparative study of the proposed method against other standard methods are presented for the BrainWeb dataset at varying noise levels, in terms of peak signal-to-noise ratio, mean square error, structural similarity index map, and correlation parameter. The simulation results show that the proposed framework with the CD based prior performs better than the other priors under consideration.
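As an illustration of the likelihood-plus-prior construction, here is a minimal gradient-descent denoiser with a Gaussian data term and a smoothed TV prior; the step size, regularization weight, and smoothing epsilon are illustrative choices, not the paper's, and the paper's Rayleigh/Rician likelihoods and CD prior would replace these terms:

```python
import numpy as np

def tv_denoise(noisy, lam=0.1, step=0.2, n_iter=100, eps=1e-6):
    """Minimize 0.5*||u - f||^2 + lam * sum |grad u| (smoothed TV) by gradient descent."""
    u = noisy.astype(float).copy()
    for _ in range(n_iter):
        gx = np.gradient(u, axis=0)
        gy = np.gradient(u, axis=1)
        mag = np.sqrt(gx**2 + gy**2 + eps)            # smoothed gradient magnitude
        div = np.gradient(gx / mag, axis=0) + np.gradient(gy / mag, axis=1)
        u -= step * ((u - noisy) - lam * div)         # likelihood + prior gradients
    return u
```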
NASA Astrophysics Data System (ADS)
Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.
2017-06-01
In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, fault data are separated from normal data by a classification method. A deep belief network (DBN), a deep learning technique, is applied to classify the fault data. The DBN has a multilayer feature extraction scheme, which is highly sensitive to small variations in the data. Because the classification method alone cannot identify which sensor is faulty, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, namely the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated with field data obtained from thermocouple sensors of the fast breeder test reactor.
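For a Gaussian-noise sensor residual, the generalized likelihood ratio test for an unknown constant bias has a closed form in which the ML estimate of the fault magnitude is simply the sample mean. A minimal sketch, with an illustrative threshold and noise level rather than values from the paper:

```python
import numpy as np
from scipy.stats import chi2

def glrt_bias(residuals, sigma, alpha=0.01):
    """GLRT for H0: zero-mean residual vs H1: unknown constant bias, known sigma."""
    n = residuals.size
    mu_hat = residuals.mean()            # ML estimate of the fault magnitude
    stat = n * mu_hat**2 / sigma**2      # 2*log GLR, ~ chi2(1) under H0
    return mu_hat, stat, stat > chi2.ppf(1 - alpha, df=1)

mu, g, faulty = glrt_bias(np.random.normal(0.8, 0.5, 200), sigma=0.5)
```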
On the Power Functions of Test Statistics in Order Restricted Inference.
1984-10-01
SUMMARY: We study the power functions of both the likelihood ratio and contrast statistics for detecting a totally ordered trend in a collection of samples from normal populations. Bartholomew (1959a,b; 1961) studied the likelihood ratio tests (LRTs) for H0 versus H1 - H0, assuming in one case that...
TOO MANY MEN? SEX RATIOS AND WOMEN’S PARTNERING BEHAVIOR IN CHINA
Trent, Katherine; South, Scott J.
2011-01-01
The relative numbers of women and men are changing dramatically in China, but the consequences of these imbalanced sex ratios have received little attention. We merge data from the Chinese Health and Family Life Survey with community-level data from Chinese censuses to examine the relationship between cohort- and community-specific sex ratios and women’s partnering behavior. Consistent with demographic-opportunity theory and sociocultural theory, we find that high sex ratios (indicating more men relative to women) are associated with an increased likelihood that women marry before age 25. However, high sex ratios are also associated with an increased likelihood that women engage in premarital and extramarital sexual relationships and have had more than one sexual partner, findings consistent with demographic-opportunity theory but inconsistent with sociocultural theory. PMID:22199403
Nimesh, Manoj; Joon, Deepali; Pathak, Anil Kumar; Saluja, Daman
2013-11-01
India contributes about 26% of the global burden of tuberculosis. In the present study we developed an in-house PCR assay using primers for the sdaA gene of Mycobacterium tuberculosis and evaluated it against already established primers (devR, IS6110, MPB64, and rpoB) for the diagnosis of pulmonary tuberculosis. Using the universal sample preparation (USP) method, DNA was extracted from sputum specimens of 412 symptomatic patients from Delhi, India. The extracted DNA was used as template for PCR amplification using primers targeting the sdaA, devR, IS6110, MPB64, and rpoB genes. Out of 412 specimens, 149 were considered positive based on composite reference standard (CRS) criteria. The in-house designed sdaA PCR showed high specificity (96.5%), a high positive likelihood ratio (28), high sensitivity (95.9%), and a very low negative likelihood ratio (0.04) in comparison with the CRS. Based on our results, the sdaA PCR assay can be considered one of the most reliable diagnostic tests in comparison with other PCR based detection methods. Copyright © 2013 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
Liou, Kevin; Negishi, Kazuaki; Ho, Suyen; Russell, Elizabeth A; Cranney, Greg; Ooi, Sze-Yuan
2016-08-01
Global longitudinal strain (GLS) is well validated and has important applications in contemporary clinical practice. The aim of this analysis was to evaluate the accuracy of resting peak GLS in the diagnosis of obstructive coronary artery disease (CAD). A systematic literature search was performed through July 2015 using four databases. Data were extracted independently by two authors and correlated before analyses. Using a random-effect model, the pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, and summary area under the curve for GLS were estimated with their respective 95% CIs. Screening of 1,669 articles yielded 10 studies with 1,385 patients appropriate for inclusion in the analysis. The mean age and left ventricular ejection fraction were 59.9 years and 61.1%. On the whole, 54.9% and 20.9% of the patients had hypertension and diabetes, respectively. Overall, abnormal GLS detected moderate to severe CAD with a pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio of 74.4%, 72.1%, 2.9, and 0.35 respectively. The area under the curve and diagnostic odds ratio were 0.81 and 8.5. The mean values of GLS for those with and without CAD were -16.5% (95% CI, -15.8% to -17.3%) and -19.7% (95% CI, -18.8% to -20.7%), respectively. Subgroup analyses for patients with severe CAD and normal left ventricular ejection fractions yielded similar results. Current evidence supports the use of GLS in the detection of moderate to severe obstructive CAD in symptomatic patients. GLS may complement existing diagnostic algorithms and act as an early adjunctive marker of cardiac ischemia. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Wan, Bing; Wang, Siqi; Tu, Mengqi; Wu, Bo; Han, Ping; Xu, Haibo
2017-03-01
The purpose of this meta-analysis was to evaluate the diagnostic accuracy of perfusion magnetic resonance imaging (MRI) as a method for differentiating glioma recurrence from pseudoprogression. The PubMed, Embase, Cochrane Library, and Chinese Biomedical databases were searched comprehensively for relevant studies up to August 3, 2016 according to specific inclusion and exclusion criteria. The quality of the included studies was assessed according to the quality assessment of diagnostic accuracy studies (QUADAS-2). After performing heterogeneity and threshold effect tests, pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were calculated. Publication bias was evaluated visually by a funnel plot and quantitatively using Deek funnel plot asymmetry test. The area under the summary receiver operating characteristic curve was calculated to demonstrate the diagnostic performance of perfusion MRI. Eleven studies covering 416 patients and 418 lesions were included in this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.88 (95% confidence interval [CI] 0.84-0.92), 0.77 (95% CI 0.69-0.84), 3.93 (95% CI 2.83-5.46), 0.16 (95% CI 0.11-0.22), and 27.17 (95% CI 14.96-49.35), respectively. The area under the summary receiver operating characteristic curve was 0.8899. There was no notable publication bias. Sensitivity analysis showed that the meta-analysis results were stable and credible. While perfusion MRI is not the ideal diagnostic method for differentiating glioma recurrence from pseudoprogression, it could improve diagnostic accuracy. Therefore, further research on combining perfusion MRI with other imaging modalities is warranted.
Reliable and More Powerful Methods for Power Analysis in Structural Equation Modeling
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun
2017-01-01
The normal-distribution-based likelihood ratio statistic T[subscript ml] = nF[subscript ml] is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that T[subscript ml] follows a central chi-square distribution under H[subscript 0] and a noncentral chi-square…
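The conventional computation the abstract refers to, power from a noncentral chi-square distribution, is straightforward; a minimal sketch of that standard calculation (degrees of freedom and noncentrality values are illustrative, and the paper's point is that this distributional assumption can be questionable):

```python
from scipy.stats import chi2, ncx2

def sem_power(ncp, df, alpha=0.05):
    """Power of the LR test assuming T_ml ~ chi2(df) under H0 and a
    noncentral chi-square with noncentrality ncp under H1."""
    critical = chi2.ppf(1 - alpha, df)
    return ncx2.sf(critical, df, ncp)

print(sem_power(ncp=10.0, df=4))   # power under the assumed distributions
```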
ERIC Educational Resources Information Center
Immekus, Jason C.; Maller, Susan J.
2009-01-01
The Kaufman Adolescent and Adult Intelligence Test (KAIT[TM]) is an individually administered test of intelligence for individuals ranging in age from 11 to 85+ years. The item response theory-likelihood ratio procedure, based on the two-parameter logistic model, was used to detect differential item functioning (DIF) in the KAIT across males and…
O'Bryant, Sid E; Xiao, Guanghua; Barber, Robert; Huebinger, Ryan; Wilhelmsen, Kirk; Edwards, Melissa; Graff-Radford, Neill; Doody, Rachelle; Diaz-Arrastia, Ramon
2011-01-01
There is no rapid and cost-effective tool that can be implemented as a front-line screening tool for Alzheimer's disease (AD) at the population level. The aim was to generate and cross-validate a blood-based screener for AD that yields acceptable accuracy across both serum and plasma. Analyses of serum biomarker proteins were conducted on 197 Alzheimer's disease (AD) participants and 199 control participants from the Texas Alzheimer's Research Consortium (TARC), with further analysis conducted on plasma proteins from 112 AD and 52 control participants from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The full algorithm was derived from a biomarker risk score, clinical lab (glucose, triglycerides, total cholesterol, homocysteine), and demographic (age, gender, education, APOE*E4 status) data. The outcome of interest was Alzheimer's disease. Eleven proteins met our criteria and were utilized for the biomarker risk score. The random forest (RF) biomarker risk score from the TARC serum samples (training set) yielded adequate accuracy in the ADNI plasma sample (test set) (AUC = 0.70, sensitivity (SN) = 0.54 and specificity (SP) = 0.78), which was below that obtained from ADNI cerebral spinal fluid (CSF) analyses (t-tau/Aβ ratio AUC = 0.92). However, the full algorithm yielded excellent accuracy (AUC = 0.88, SN = 0.75, and SP = 0.91). The likelihood ratio of having AD based on a positive test finding (LR+) = 7.03 (SE = 1.17; 95% CI = 4.49-14.47), the likelihood ratio of not having AD based on the algorithm (LR-) = 3.55 (SE = 1.15; 2.22-5.71), and the odds ratio of AD (OR) = 28.70 (1.55; 95% CI = 11.86-69.47) were calculated in the ADNI cohort. It is possible to create a blood-based screening algorithm that works across both serum and plasma and provides screening accuracy comparable to that obtained from CSF analyses.
Xu, Mei-Mei; Jia, Hong-Yu; Yan, Li-Li; Li, Shan-Shan; Zheng, Yue
2017-01-01
Abstract Background: This meta-analysis aimed to provide a pooled analysis of prospective controlled trials comparing the diagnostic accuracy of 22-G and 25-G needles in endoscopic ultrasonography-guided fine-needle aspiration (EUS-FNA) of solid pancreatic masses. Methods: We established a rigorous study protocol according to Cochrane Collaboration recommendations. We systematically searched the PubMed and Embase databases to identify articles to include in the meta-analysis. Sensitivity, specificity, and corresponding 95% confidence intervals were calculated for 22-G and 25-G needles of individual studies from the contingency tables. Results: Eleven prospective controlled trials included a total of 837 patients (412 with 22-G vs 425 with 25-G). Our outcomes revealed that 25-G needles (92% [95% CI, 89%–95%]) have higher sensitivity than 22-G needles (88% [95% CI, 84%–91%]) for solid pancreatic mass EUS-FNA (P = 0.046). However, there were no significant differences between the 2 groups in overall diagnostic specificity (P = 0.842). The pooled positive likelihood ratio was 12.61 (95% CI, 5.65–28.14) and the pooled negative likelihood ratio was 0.16 (95% CI, 0.12–0.21) for the 22-G needle. The pooled positive likelihood ratio was 8.44 (95% CI, 3.87–18.42) and the pooled negative likelihood ratio was 0.13 (95% CI, 0.09–0.18) for the 25-G needle. The area under the summary receiver operating characteristic curve was 0.97 for the 22-G needle and 0.96 for the 25-G needle. Conclusion: Compared with 22-G EUS-FNA needles, 25-G needles showed superior sensitivity in the evaluation of solid pancreatic lesions by EUS-FNA. PMID:28151856
1982-04-01
THE RELATION AMONG THE LIKELIHOOD RATIO, WALD, AND LAGRANGE MULTIPLIER TESTS AND THEIR APPLICABILITY TO SMALL SAMPLES. References cited include: Breusch, T. S. (1979), "Conflict Among Criteria for Testing Hypotheses: Extension and Comments," Econometrica, 47, 203-207; Breusch, T. S. and Pagan, A. R. (1980); Berndt, E. R. and Savin, N. E. (1977), "Conflict Among Criteria for Testing Hypotheses in the Multivariate Linear Regression Model," Econometrica, 45, 1263-1278.
A Likelihood Ratio Test Regarding Two Nested But Oblique Order Restricted Hypotheses.
1982-11-01
Report #90. American Mathematical Society 1979 subject classification: Primary 62F03; Secondary 62E15. Key words and phrases: Order...model. A likelihood ratio test for these two restrictions is studied. The investigation was stimulated partly by a problem encountered in psychiatric research: Winokur et al. (1971) studied data on psychiatric illnesses afflicting...
Grandmothering life histories and human pair bonding.
Coxworth, James E; Kim, Peter S; McQueen, John S; Hawkes, Kristen
2015-09-22
The evolution of distinctively human life history and social organization is generally attributed to paternal provisioning based on pair bonds. Here we develop an alternative argument that connects the evolution of human pair bonds to the male-biased mating sex ratios that accompanied the evolution of human life history. We simulate an agent-based model of the grandmother hypothesis, compare simulated sex ratios to data on great apes and human hunter-gatherers, and note associations between a preponderance of males and mate guarding across taxa. Then we explore a recent model that highlights the importance of mating sex ratios for differences between birds and mammals and conclude that lessons for human evolution cannot ignore mammalian reproductive constraints. In contradiction to our claim that male-biased sex ratios are characteristically human, female-biased ratios are reported in some populations. We consider the likelihood that fertile men are undercounted and conclude that the mate-guarding hypothesis for human pair bonds gains strength from explicit links with our grandmothering life history.
Bazot, Marc; Daraï, Emile
2018-03-01
The aim of the present review, conducted according to PRISMA statement recommendations, was to evaluate the contribution of transvaginal sonography (TVS) and magnetic resonance imaging (MRI) to the diagnosis of adenomyosis. Although there is a lack of consensus on adenomyosis classification, three subtypes are described: internal adenomyosis, external adenomyosis, and adenomyomas. Using TVS, whatever the subtype, pooled sensitivities, pooled specificities, and pooled positive likelihood ratios are 0.72-0.82, 0.85-0.81, and 4.67-3.7, respectively, but with high heterogeneity between studies. MRI has a pooled sensitivity of 0.77, specificity of 0.89, positive likelihood ratio of 6.5, and negative likelihood ratio of 0.2 for all subtypes. Our results suggest that MRI is more useful than TVS in the diagnosis of adenomyosis. Further studies are required to determine the performance of direct signs (cystic component) and indirect signs (characteristics of the junctional zone) to avoid misdiagnosis of adenomyosis. Copyright © 2018 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Sun, Changling; Zhang, Yayun; Han, Xue; Du, Xiaodong
2018-03-01
Objective: The purpose of this study was to verify the effectiveness of the narrow band imaging (NBI) system in diagnosing nasopharyngeal cancer (NPC) as compared with white light endoscopy. Data Sources: PubMed, Cochrane Library, EMBASE, CNKI, and Wan Fang databases. Review Methods: Data analyses were performed with Meta-Disc. The updated Quality Assessment of Diagnostic Accuracy Studies-2 tool was used to assess study quality and potential bias. Publication bias was assessed with a Deeks asymmetry test. The registry number of the protocol published on PROSPERO is CRD42015026244. Results: This meta-analysis included 10 studies of 1337 lesions. For NBI diagnosis of NPC, the pooled values were as follows: sensitivity, 0.83 (95% CI, 0.80-0.86); specificity, 0.91 (95% CI, 0.89-0.93); positive likelihood ratio, 8.82 (95% CI, 5.12-15.21); negative likelihood ratio, 0.18 (95% CI, 0.12-0.27); and diagnostic odds ratio, 65.73 (95% CI, 36.74-117.60). The area under the curve was 0.9549. For white light endoscopy in diagnosing NPC, the pooled values were as follows: sensitivity, 0.79 (95% CI, 0.75-0.83); specificity, 0.87 (95% CI, 0.84-0.90); positive likelihood ratio, 5.02 (95% CI, 1.99-12.65); negative likelihood ratio, 0.34 (95% CI, 0.24-0.49); and diagnostic odds ratio, 16.89 (95% CI, 5.98-47.66). The area under the curve was 0.8627. The evaluation of heterogeneity, calculated per the diagnostic odds ratio, gave an I² of 0.326. No marked publication bias (P = .68) existed in this meta-analysis. Conclusion: The sensitivity and specificity of NBI for the diagnosis of NPC are similar to those of white light endoscopy, and the potential value of NBI for the diagnosis of NPC needs to be validated further.
Carey, David L; Blanch, Peter; Ong, Kok-Leong; Crossley, Kay M; Crow, Justin; Morris, Meg E
2017-08-01
(1) To investigate whether a daily acute:chronic workload ratio informs injury risk in Australian football players; (2) to identify which combination of workload variable, acute and chronic time window best explains injury likelihood. Workload and injury data were collected from 53 athletes over 2 seasons in a professional Australian football club. Acute:chronic workload ratios were calculated daily for each athlete, and modelled against non-contact injury likelihood using a quadratic relationship. 6 workload variables, 8 acute time windows (2-9 days) and 7 chronic time windows (14-35 days) were considered (336 combinations). Each parameter combination was compared for injury likelihood fit (using R²). The ratio of moderate speed running workload (18-24 km/h) in the previous 3 days (acute time window) compared with the previous 21 days (chronic time window) best explained the injury likelihood in matches (R²=0.79) and in the immediate 2 or 5 days following matches (R²=0.76-0.82). The 3:21 acute:chronic workload ratio discriminated between high-risk and low-risk athletes (relative risk=1.98-2.43). Using the previous 6 days to calculate the acute workload time window yielded similar results. The choice of acute time window significantly influenced model performance and appeared to reflect the competition and training schedule. Daily workload ratios can inform injury risk in Australian football. Clinicians and conditioning coaches should consider the sport-specific schedule of competition and training when choosing acute and chronic time windows. For Australian football, the ratio of moderate speed running in a 3-day or 6-day acute time window and a 21-day chronic time window best explained injury risk. Published by the BMJ Publishing Group Limited.
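The daily acute:chronic ratio itself is easy to reproduce with rolling windows; a minimal sketch using rolling means over the 3-day acute and 21-day chronic windows (the study's exact load variable and any exposure handling may differ):

```python
import pandas as pd

def acute_chronic_ratio(daily_load: pd.Series, acute_days=3, chronic_days=21):
    """Daily acute:chronic workload ratio from a daily load series (one row per day)."""
    acute = daily_load.rolling(acute_days, min_periods=acute_days).mean()
    chronic = daily_load.rolling(chronic_days, min_periods=chronic_days).mean()
    return acute / chronic

# usage: ratio = acute_chronic_ratio(moderate_speed_running_metres_per_day)
```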
Carey, David L; Blanch, Peter; Ong, Kok-Leong; Crossley, Kay M; Crow, Justin; Morris, Meg E
2017-01-01
Aims (1) To investigate whether a daily acute:chronic workload ratio informs injury risk in Australian football players; (2) to identify which combination of workload variable, acute and chronic time window best explains injury likelihood. Methods Workload and injury data were collected from 53 athletes over 2 seasons in a professional Australian football club. Acute:chronic workload ratios were calculated daily for each athlete, and modelled against non-contact injury likelihood using a quadratic relationship. 6 workload variables, 8 acute time windows (2–9 days) and 7 chronic time windows (14–35 days) were considered (336 combinations). Each parameter combination was compared for injury likelihood fit (using R2). Results The ratio of moderate speed running workload (18–24 km/h) in the previous 3 days (acute time window) compared with the previous 21 days (chronic time window) best explained the injury likelihood in matches (R2=0.79) and in the immediate 2 or 5 days following matches (R2=0.76–0.82). The 3:21 acute:chronic workload ratio discriminated between high-risk and low-risk athletes (relative risk=1.98–2.43). Using the previous 6 days to calculate the acute workload time window yielded similar results. The choice of acute time window significantly influenced model performance and appeared to reflect the competition and training schedule. Conclusions Daily workload ratios can inform injury risk in Australian football. Clinicians and conditioning coaches should consider the sport-specific schedule of competition and training when choosing acute and chronic time windows. For Australian football, the ratio of moderate speed running in a 3-day or 6-day acute time window and a 21-day chronic time window best explained injury risk. PMID:27789430
Stram, Daniel O; Leigh Pearce, Celeste; Bretsky, Phillip; Freedman, Matthew; Hirschhorn, Joel N; Altshuler, David; Kolonel, Laurence N; Henderson, Brian E; Thomas, Duncan C
2003-01-01
The US National Cancer Institute has recently sponsored the formation of a Cohort Consortium (http://2002.cancer.gov/scpgenes.htm) to facilitate the pooling of data on very large numbers of people, concerning the effects of genes and environment on cancer incidence. One likely goal of these efforts will be to generate a large population-based case-control series for which a number of candidate genes will be investigated using SNP haplotype as well as genotype analysis. The goal of this paper is to outline the issues involved in choosing a method of obtaining haplotype-specific risk estimates for such data that is technically appropriate and yet attractive to epidemiologists who are already comfortable with odds ratios and logistic regression. Our interest is to develop and evaluate extensions of methods, based on haplotype imputation, that have been recently described (Schaid et al., Am J Hum Genet, 2002, and Zaykin et al., Hum Hered, 2002) as providing score tests of the null hypothesis of no effect of SNP haplotypes upon risk, which may be used for more complex tasks, such as providing confidence intervals and tests of equivalence of haplotype-specific risks in two or more separate populations. In order to do so we (1) develop a cohort approach towards odds ratio analysis by expanding the E-M algorithm to provide maximum likelihood estimates of haplotype-specific odds ratios as well as genotype frequencies; (2) show how to correct the cohort approach, to give essentially unbiased estimates for population-based or nested case-control studies, by incorporating the probability of selection as a case or control into the likelihood, based on a simplified model of case and control selection; and (3) finally, in an example data set (CYP17 and breast cancer, from the Multiethnic Cohort Study) we compare likelihood-based confidence interval estimates from the two methods with each other, and with the use of the single-imputation approach of Zaykin et al. applied under both null and alternative hypotheses. We conclude that so long as haplotypes are well predicted by SNP genotypes (we use the Rh2 criteria of Stram et al. [1]) the differences between the three methods are very small, and in particular the single imputation method may be expected to work extremely well. Copyright 2003 S. Karger AG, Basel
Pan, Hui; Ba-Thein, William
2018-01-01
Global Pharma Health Fund (GPHF) Minilab™, a semi-quantitative thin-layer chromatography (TLC)-based commercially available test kit, is widely used in drug quality surveillance globally, but its diagnostic accuracy is unclear. We investigated the diagnostic accuracy of the Minilab system for antimicrobials, using high-performance liquid chromatography (HPLC) as the reference standard. Following the Minilab protocols and the Pharmacopoeia of the People's Republic of China protocols, Minilab-TLC and HPLC were used to test five common antimicrobials (506 batches) for relative concentration of active pharmaceutical ingredients. The prevalence of poor-quality antimicrobials determined, respectively, by Minilab TLC and HPLC was amoxicillin (0% versus 14.9%), azithromycin (0% versus 17.4%), cefuroxime axetil (14.3% versus 0%), levofloxacin (0% versus 3.0%), and metronidazole (0% versus 38.0%). The Minilab TLC had false-positive and false-negative detection rates of 2.6% (13/506) and 15.2% (77/506), respectively, resulting in the following test characteristics: sensitivity 0%, specificity 97.0%, positive predictive value 0, negative predictive value 0.8, positive likelihood ratio 0, negative likelihood ratio 1.0, diagnostic odds ratio 0, and adjusted diagnostic odds ratio 0.2. This study demonstrates unsatisfactory diagnostic accuracy of the Minilab system in screening poor-quality antimicrobials of common use. Using Minilab as a stand-alone system for monitoring drug quality should be reconsidered.
Exact one-sided confidence limits for the difference between two correlated proportions.
Lloyd, Chris J; Moldovan, Max V
2007-08-15
We construct exact and optimal one-sided upper and lower confidence bounds for the difference between two probabilities based on matched binary pairs, using the well-established optimality theory of Buehler. Starting with five different approximate lower and upper limits, we adjust them to have coverage probability exactly equal to the desired nominal level and then compare the resulting exact limits by their mean size. Exact limits based on the signed root likelihood ratio statistic are preferred and recommended for practical use.
A simple, remote, video based breathing monitor.
Regev, Nir; Wulich, Dov
2017-07-01
Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist in the market, some with vital signs monitoring capabilities, but none remote. This paper presents a simple yet efficient real-time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked using the well-known Lucas-Kanade algorithm on a frame-by-frame basis. A generalized likelihood ratio test is then utilized on each of the many interest points to detect which is moving in harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum likelihood algorithm optimally estimates the breathing rate using all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments on two babies and three adults.
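The Pisarenko step can be sketched compactly: estimate three autocorrelation lags of a point's motion trace, take the eigenvector of the smallest eigenvalue, and read the frequency off the roots of its annihilating polynomial. A minimal single-sinusoid version follows (sampling rate and window length are illustrative; the paper's real-time tracker would update this recursively):

```python
import numpy as np

def pisarenko_frequency(x, fs):
    """Estimate the dominant harmonic frequency (Hz) of a noisy sinusoid."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = [np.dot(x[: n - k], x[k:]) / (n - k) for k in range(3)]  # lags 0..2
    R = np.array([[r[0], r[1], r[2]],
                  [r[1], r[0], r[1]],
                  [r[2], r[1], r[0]]])
    w, V = np.linalg.eigh(R)
    v = V[:, 0]                        # eigenvector of the smallest eigenvalue
    roots = np.roots(v)                # v[0] z^2 + v[1] z + v[2] = 0
    omega = abs(np.angle(roots[0]))    # conjugate pair near the unit circle
    return omega * fs / (2 * np.pi)

fs = 30.0                                           # 30 fps camera, 20 s window
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)
print(pisarenko_frequency(x, fs) * 60)              # ~15 breaths per minute
```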
Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.
2015-01-01
We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
Cha, Kenny H.; Hadjiiski, Lubomir; Samala, Ravi K.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.
2016-01-01
Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. Conclusions: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder. PMID:27036584
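The map-to-initial-contour step described above is standard morphology; a minimal sketch with scipy (the 0.5 threshold and the largest-component heuristic are illustrative assumptions, not the authors' tuned pipeline):

```python
import numpy as np
from scipy import ndimage

def initial_bladder_mask(likelihood_map, threshold=0.5):
    """Threshold a DL-CNN likelihood map and fill holes to seed the level sets."""
    mask = likelihood_map > threshold
    mask = ndimage.binary_fill_holes(mask)
    labeled, n = ndimage.label(mask)          # keep only the largest component
    if n > 1:
        sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
        mask = labeled == (1 + int(np.argmax(sizes)))
    return mask
```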
Statistical inference methods for sparse biological time series data.
Ndukum, Juliet; Fonseca, Luís L; Santos, Helena; Voit, Eberhard O; Datta, Susmita
2011-04-25
Comparing metabolic profiles under different biological perturbations has become a powerful approach to investigating the functioning of cells. The profiles can be taken as single snapshots of a system, but more information is gained if they are measured longitudinally over time. The results are short time series consisting of relatively sparse data that cannot be analyzed effectively with standard time series techniques, such as autocorrelation and frequency domain methods. In this work, we study longitudinal time series profiles of glucose consumption in the yeast Saccharomyces cerevisiae under different temperatures and preconditioning regimens, which we obtained with methods of in vivo nuclear magnetic resonance (NMR) spectroscopy. For the statistical analysis we first fit several nonlinear mixed effect regression models to the longitudinal profiles and then used an ANOVA likelihood ratio method in order to test for significant differences between the profiles. The proposed methods are capable of distinguishing metabolic time trends resulting from different treatments and of associating significance levels with these differences. Among several nonlinear mixed-effects regression models tested, a three-parameter logistic function represents the data with highest accuracy. ANOVA and likelihood ratio tests suggest that there are significant differences between the glucose consumption rate profiles for cells that had or had not been preconditioned by heat during growth. Furthermore, pair-wise t-tests reveal significant differences in the longitudinal profiles for glucose consumption rates between optimal conditions and heat stress, optimal and recovery conditions, and heat stress and recovery conditions (p-values <0.0001). We have developed a nonlinear mixed effects model that is appropriate for the analysis of sparse metabolic and physiological time profiles. The model permits sound statistical inference procedures, based on ANOVA likelihood ratio tests, for testing the significance of differences between short time course data under different biological perturbations.
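A fixed-effects caricature of this analysis, fitting a three-parameter logistic to each condition and comparing pooled versus separate fits with a likelihood ratio test, can be sketched as follows. It omits the random effects of the actual mixed-effects model, assumes all groups are sampled at the same time points, and uses illustrative names throughout:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

def logistic3(t, a, b, c):
    """Three-parameter logistic time profile."""
    return a / (1.0 + np.exp(-b * (t - c)))

def gaussian_loglik(residuals):
    """Profiled Gaussian log-likelihood of a residual vector."""
    n = residuals.size
    s2 = np.mean(residuals**2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

def lr_test_profiles(t, groups):
    """LR test: one pooled logistic curve vs a separate curve per group."""
    pooled_t = np.concatenate([t] * len(groups))
    pooled_y = np.concatenate(groups)
    p0 = [pooled_y.max(), 1.0, np.median(t)]
    popt, _ = curve_fit(logistic3, pooled_t, pooled_y, p0=p0, maxfev=10000)
    ll0 = gaussian_loglik(pooled_y - logistic3(pooled_t, *popt))
    ll1 = 0.0
    for y in groups:                       # independent fit per condition
        popt_g, _ = curve_fit(logistic3, t, y, p0=p0, maxfev=10000)
        ll1 += gaussian_loglik(y - logistic3(t, *popt_g))
    stat = 2.0 * (ll1 - ll0)
    df = 3 * (len(groups) - 1)             # extra curve parameters only
    return stat, chi2.sf(stat, df)
```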
Arellano, M; Garcia-Caselles, M P; Pi-Figueras, M; Miralles, R; Torres, R M; Aguilera, A; Cervera, A M
2004-01-01
This study aimed to evaluate the clinical usefulness of the mini nutritional assessment (MNA) to identify malnutrition in elderly patients with cognitive impairment admitted to a geriatric convalescence unit (intermediate care facility). Sixty-three patients with cognitive impairment were studied. Cognitive impairment was considered present when mini mental state examination (MMSE) scores were below 21. The MNA and a nutritional evaluation according to the sequential model of the American Institute of Nutrition (AIN) were performed at admission. According to the AIN criteria, malnutrition was considered present if there were abnormalities in at least one of the following parameters: albumin, cholesterol, body mass index (BMI), and brachial circumference. Based on these criteria, 27 patients (42.8%) proved to be undernourished at admission, whereas by the original MNA scores, 39 patients (61.9%) were undernourished, 23 (36.5%) were at risk of malnutrition, and 1 (1.5%) was normal. The analyzed population was divided into four categories (quartiles) of MNA scores: very low (≤13.5), low (>13.5 and ≤16), intermediate (>16 and ≤18.5), and high (>18.5). Likelihood ratios for each MNA quartile were obtained by dividing the percentage of patients in a given MNA category who were undernourished (according to AIN) by the percentage of patients in the same MNA category who were not undernourished. In the very low MNA quartile, this likelihood ratio was 2.79, and for the low MNA quartile it was 0.49. For the intermediate and high MNA categories, likelihood ratios were 1.0 and 0.07, respectively. In the present study, the MNA identified undernourished patients with a high clinical diagnostic impact only when very low scores (≤13.5) were obtained.
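The quartile likelihood ratios above are simple proportions; a worked sketch of the computation (the counts are illustrative, not the study's raw data):

```python
def category_likelihood_ratio(pos_in_cat, pos_total, neg_in_cat, neg_total):
    """LR = P(category | undernourished) / P(category | not undernourished)."""
    return (pos_in_cat / pos_total) / (neg_in_cat / neg_total)

# e.g. 12 of 27 undernourished vs 6 of 36 well-nourished in the very low quartile
print(category_likelihood_ratio(12, 27, 6, 36))   # ≈ 2.67
```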
Tailly, Thomas; Larish, Yaniv; Nadeau, Brandon; Violette, Philippe; Glickman, Leonard; Olvera-Posada, Daniel; Alenezi, Husain; Amann, Justin; Denstedt, John; Razvi, Hassan
2016-04-01
The mineral composition of a urinary stone may influence its surgical and medical treatment. Previous attempts at identifying stone composition based on mean Hounsfield units (HUm) have had varied success. We aimed to evaluate the additional use of the standard deviation of HU (HUsd) to more accurately predict stone composition. We identified patients from two centers who had undergone urinary stone treatment between 2006 and 2013 and had mineral stone analysis and a computed tomography (CT) scan available. HUm and HUsd of the stones were compared with ANOVA. Receiver operating characteristic analysis with area under the curve (AUC), Youden index, and likelihood ratio calculations were performed. Data were available for 466 patients. The major components were calcium oxalate monohydrate (COM), uric acid, hydroxyapatite, struvite, brushite, cystine, and calcium oxalate dihydrate (COD) in 41.4%, 19.3%, 12.4%, 7.5%, 5.8%, 5.4%, and 4.7% of patients, respectively. The HUm of uric acid stones was significantly lower, and the HUm of brushite stones significantly higher, than that of any other stone type. HUm and HUsd were most accurate in predicting uric acid, with AUCs of 0.969 and 0.851, respectively. The combined use of HUm and HUsd resulted in increased positive predictive values and higher likelihood ratios for identifying a stone's mineral composition for all stone types but COM. To the best of our knowledge, this is the first report of CT data aiding in the prediction of brushite stone composition. Both HUm and HUsd can help predict stone composition, and their combined use results in higher likelihood ratios influencing probability.
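The ROC/Youden analysis used to pick HU cutoffs can be sketched as follows; this is illustrative only, with the labels standing in for the stone types determined by mineral analysis:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def best_cutoff(y_true, scores):
    """ROC analysis with the Youden index: threshold maximizing TPR - FPR."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    j = tpr - fpr                       # Youden index at each threshold
    best = int(np.argmax(j))
    return thresholds[best], tpr[best], 1 - fpr[best], auc(fpr, tpr)

# e.g. discriminate uric acid stones from the rest using mean HU;
# uric acid has lower HU, so negate the score to keep "higher = positive":
# thr, sens, spec, area = best_cutoff(is_uric_acid, -mean_hu)
```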
Mohammadi, Seyed-Farzad; Sabbaghi, Mostafa; Z-Mehrjardi, Hadi; Hashemi, Hassan; Alizadeh, Somayeh; Majdi, Mercede; Taee, Farough
2012-03-01
To apply artificial intelligence models to predict the occurrence of posterior capsule opacification (PCO) after phacoemulsification. Farabi Eye Hospital, Tehran, Iran. Clinic-based cross-sectional study. The posterior capsule status of eyes operated on for age-related cataract and the need for laser capsulotomy were determined. After a literature review, data polishing, and expert consultation, 10 input variables were selected. The QUEST algorithm was used to develop a decision tree. Three back-propagation artificial neural networks were constructed with 4, 20, and 40 neurons in 2 hidden layers and trained with the same transfer functions (log-sigmoid and linear transfer) and training protocol with randomly selected eyes. They were then tested on the remaining eyes and the networks compared for their performance. Performance indices were used to compare the resultant models with the results of logistic regression analysis. The models were trained using 282 randomly selected eyes and then tested using 70 eyes. Laser capsulotomy for clinically significant PCO was indicated or had been performed 2 years postoperatively in 40 eyes. A sample decision tree was produced with an accuracy of 50% (likelihood ratio 0.8). The best artificial neural network, which showed 87% accuracy and a positive likelihood ratio of 8, was achieved with 40 neurons. The area under the receiver-operating-characteristic curve was 0.71. In comparison, logistic regression reached an accuracy of 80%; however, the likelihood ratio was not measurable because the sensitivity was zero. A prototype artificial neural network was developed that predicted posterior capsule status (requiring capsulotomy) with reasonable accuracy. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...
2017-11-08
Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
Haidar, Ziad A; Papanna, Ramesha; Sibai, Baha M; Tatevian, Nina; Viteri, Oscar A; Vowels, Patricia C; Blackwell, Sean C; Moise, Kenneth J
2017-08-01
Traditionally, 2-dimensional ultrasound parameters have been used for the diagnosis of a suspected morbidly adherent placenta previa. More objective techniques have not yet been well studied. The objective of the study was to determine the ability of prenatal 3-dimensional power Doppler analysis of flow and vascular indices to predict the morbidly adherent placenta objectively. A prospective cohort study was performed in women between 28 and 32 weeks of gestation with known placenta previa. Patients underwent a 2-dimensional gray-scale ultrasound that determined management decisions. 3-dimensional power Doppler volumes were obtained during the same examination, and vascular, flow, and vascular flow indices were calculated after manual tracing of the viewed placenta in the sweep; data were blinded to obstetricians. Morbidly adherent placenta was confirmed by histology. Severe morbidly adherent placenta was defined as increta/percreta on histology, blood loss >2000 mL, and >2 units of packed red blood cells (PRBC) transfused. Sensitivities, specificities, predictive values, and likelihood ratios were calculated. Student t and χ² tests, logistic regression, receiver-operating characteristic curves, and intra- and interrater agreements using Kappa statistics were performed. The following results were found: (1) 50 women were studied: 23 had morbidly adherent placenta, of which 12 (52.2%) were severe morbidly adherent placenta; (2) 2-dimensional parameters diagnosed morbidly adherent placenta with a sensitivity of 82.6% (95% confidence interval, 60.4-94.2), a specificity of 88.9% (95% confidence interval, 69.7-97.1), a positive predictive value of 86.3% (95% confidence interval, 64.0-96.4), a negative predictive value of 85.7% (95% confidence interval, 66.4-95.3), a positive likelihood ratio of 7.4 (95% confidence interval, 2.5-21.9), and a negative likelihood ratio of 0.2 (95% confidence interval, 0.08-0.48); (3) mean values of the vascular index (32.8 ± 7.4) and the vascular flow index (14.2 ± 3.8) were higher in morbidly adherent placenta (P < .001); (4) areas under the receiver-operating characteristic curve for the vascular and vascular flow indices were 0.99 and 0.97, respectively; (5) a vascular index ≥21 predicted morbidly adherent placenta with a sensitivity of 95% (95% confidence interval, 88.2-96.9), a specificity of 91% (95% confidence interval, 87.5-92.4), a 92% positive predictive value (95% confidence interval, 85.5-94.3), a 90% negative predictive value (95% confidence interval, 79.9-95.3), a positive likelihood ratio of 10.55 (95% confidence interval, 7.06-12.75), and a negative likelihood ratio of 0.05 (95% confidence interval, 0.03-0.13); and (6) for the severe morbidly adherent placenta, 2-dimensional ultrasound had a sensitivity of 33.3% (95% confidence interval, 11.3-64.6), a specificity of 81.8% (95% confidence interval, 47.8-96.8), a positive predictive value of 66.7% (95% confidence interval, 24.1-94.1), a negative predictive value of 52.9% (95% confidence interval, 28.5-76.1), a positive likelihood ratio of 1.83 (95% confidence interval, 0.41-8.11), and a negative likelihood ratio of 0.81 (95% confidence interval, 0.52-1.26).
A vascular index ≥31 predicted the diagnosis of a severe morbidly adherent placenta with a 100% sensitivity (95% confidence interval, 72-100), a 90% specificity (95% confidence interval, 81.7-93.8), an 88% positive predictive value (95% confidence interval, 55.0-91.3), a 100% negative predictive value (95% confidence interval, 90.9-100), a positive likelihood ratio of 10.0 (95% confidence interval, 3.93-16.13), and a negative likelihood ratio of 0 (95% confidence interval, 0-0.34). Intrarater and interrater agreements were 94% (P < .001) and 93% (P < .001), respectively. The vascular index accurately predicts the morbidly adherent placenta in patients with placenta previa. In addition, 3-dimensional power Doppler vascular and vascular flow indices were more predictive of severe cases of morbidly adherent placenta compared with 2-dimensional ultrasound. This objective technique may limit the variations in diagnosing morbidly adherent placenta because of the subjectivity of 2-dimensional ultrasound interpretations. Copyright © 2017 Elsevier Inc. All rights reserved.
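The likelihood ratios quoted in this abstract follow directly from the 2 x 2 counts. A minimal sketch that reproduces the 2-dimensional ultrasound figures reported above, with counts back-calculated from the stated sensitivity and specificity (19/23 true positives, 24/27 true negatives) and the usual log-method confidence intervals for a ratio of two proportions:

```python
import numpy as np
from scipy.stats import norm

def likelihood_ratios(tp, fp, fn, tn, alpha=0.05):
    """LR+ and LR- with log-method confidence intervals for a 2 x 2 table."""
    z = norm.ppf(1 - alpha / 2)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    # standard errors of log(LR), each LR being a ratio of two proportions
    se_pos = np.sqrt(1/tp - 1/(tp + fn) + 1/fp - 1/(fp + tn))
    se_neg = np.sqrt(1/fn - 1/(tp + fn) + 1/tn - 1/(fp + tn))
    ci = lambda lr, se: (lr * np.exp(-z * se), lr * np.exp(z * se))
    return (lr_pos, ci(lr_pos, se_pos)), (lr_neg, ci(lr_neg, se_neg))

# counts back-calculated from the 2-dimensional ultrasound results above
(pos, pos_ci), (neg, neg_ci) = likelihood_ratios(tp=19, fp=3, fn=4, tn=24)
print(pos, pos_ci)  # ~7.4 (2.5, 21.9), matching the reported values
print(neg, neg_ci)  # ~0.2 (0.08, 0.48)
```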
Pan, Liping; Jia, Hongyan; Liu, Fei; Gao, Mengqiu; Sun, Huishan; Du, Boping; Sun, Qi; Xing, Aiying; Wei, Rongrong; Zhang, Zongde
2015-12-01
To evaluate the value of the T-SPOT.TB assay in the diagnosis of pulmonary tuberculosis within different age groups. We analyzed 1518 suspected pulmonary tuberculosis (PTB) patients who were admitted to the Beijing Chest Hospital from November 2012 to February 2014 and had valid T-SPOT.TB tests before anti-tuberculosis therapy. The 599 microbiologically and/or histopathologically-confirmed PTB patients (16-89 years old, 388 males and 211 females) and 235 non-TB patients (14-85 years old, 144 males and 91 females) were enrolled for the analysis of diagnostic performance of T-SPOT.TB, while patients with an uncertain diagnosis or a diagnosis based on clinical impression (n=684) were excluded from the analysis. The sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio of the T-SPOT.TB were analyzed according to the final diagnosis. Furthermore, the diagnostic performance of the T-SPOT.TB assay in younger patients (14-59 years old) and elderly patients (60-89 years old) was also analyzed separately. Categorical variables were compared by Pearson's Chi-square test, while continuous variables were compared by the Mann-Whitney U-test. The sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio of the T-SPOT.TB in the diagnosis of PTB were 90.1% (540/599), 65.5% (154/235), 86.9% (540/621), 72.3% (154/213), 2.61, and 0.15, respectively. The sensitivity and specificity of the T-SPOT.TB assay were 92.6% (375/405) and 75.6% (99/131), respectively, in the younger patients, and 85.0% (165/194) and 52.9% (55/104), respectively, in the elderly patients. The sensitivity and specificity of the T-SPOT.TB assay in the younger patients were significantly higher than those in the elderly patients (P<0.01), and the spot-forming cells in the younger PTB patients were significantly higher than in the elderly PTB patients [300 (126, 666)/10⁶ PBMCs vs. 258 (79, 621)/10⁶ PBMCs, P=0.037]. T-SPOT.TB is a promising test in the diagnosis of younger patients (14-59 years old) with suspected PTB, but its diagnostic performance in elderly patients (60-89 years old) is relatively reduced.
A readers' guide to the interpretation of diagnostic test properties: clinical example of sepsis.
Fischer, Joachim E; Bachmann, Lucas M; Jaeschke, Roman
2003-07-01
One of the most challenging practical and daily problems in intensive care medicine is the interpretation of the results from diagnostic tests. In neonatology and pediatric intensive care the early diagnosis of potentially life-threatening infections is a particularly important issue. A plethora of tests have been suggested to improve diagnostic decision making in the clinical setting of infection, which is the clinical example used in this article. Several criteria that are critical to evidence-based appraisal of published data are often not adhered to during the study or in reporting. To enhance the critical appraisal of articles on diagnostic tests, we discuss various measures of test accuracy: sensitivity, specificity, receiver operating characteristic curves, positive and negative predictive values, likelihood ratios, pretest probability, posttest probability, and diagnostic odds ratio. We suggest the following minimal requirements for reporting on the diagnostic accuracy of tests: a plot of the raw data, multilevel likelihood ratios, the area under the receiver operating characteristic curve, and the cutoff yielding the highest discriminative ability. For critical appraisal it is mandatory to report confidence intervals for each of these measures. Moreover, to allow comparison with the readers' patient populations, authors should provide data on study population characteristics, in particular on the spectrum of diseases and illness severity.
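The pretest-to-posttest calculation recommended above is Bayes' theorem in odds form, the same computation a Fagan nomogram performs graphically. A minimal sketch with hypothetical numbers (a 10% pretest probability of sepsis and a test with LR+ = 8 and LR- = 0.2):

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Bayes' theorem in odds form, as read off a Fagan nomogram."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

print(posttest_probability(0.10, 8))    # positive result: ~0.47
print(posttest_probability(0.10, 0.2))  # negative result: ~0.02
```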
Koo, Hoon Jung; Han, Doug Hyun; Park, Sung-Yong
2017-01-01
Objective: This study aimed to develop and validate a Structured Clinical Interview for Internet Gaming Disorder (SCI-IGD) in adolescents. Methods: First, we generated preliminary items of the SCI-IGD based on information from DSM-5 literature reviews and expert consultations. Next, a total of 236 adolescents, from both community and clinical settings, were recruited to evaluate the psychometric properties of the SCI-IGD. Results: First, the SCI-IGD was found to be consistent over a time period of about one month. Second, diagnostic concordance between the SCI-IGD and the clinician's diagnostic impression was good to excellent. The positive and negative likelihood ratio estimates for the SCI-IGD diagnosis were 10.93 and 0.35, respectively, indicating that the SCI-IGD was a 'very useful test' for identifying the presence of IGD and a 'useful test' for identifying its absence. Third, the SCI-IGD could distinguish disordered gamers from non-disordered gamers. Conclusion: The implications and limitations of the study are also discussed. PMID:28096871
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earth-quakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
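As an illustration of test (i) above, the observed number of earthquakes can be compared with the number predicted. A minimal sketch, assuming the count under the forecast is Poisson-distributed; the forecast rate and observed count below are hypothetical:

```python
from scipy.stats import poisson

def number_test(n_observed, n_predicted):
    """Tail probabilities of seeing at most / at least the observed count."""
    p_too_few = poisson.cdf(n_observed, n_predicted)       # P(X <= observed)
    p_too_many = poisson.sf(n_observed - 1, n_predicted)   # P(X >= observed)
    return p_too_few, p_too_many

# reject the forecast if either tail probability is very small
print(number_test(n_observed=12, n_predicted=7.5))
```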
Jacob, Laurent; Combes, Florence; Burger, Thomas
2018-06-18
We propose a new hypothesis test for the differential abundance of proteins in mass-spectrometry-based relative quantification. An important feature of this type of high-throughput analysis is that it involves an enzymatic digestion of the sample proteins into peptides prior to identification and quantification. Because of extensive sequence homology, different proteins can lead to peptides with identical amino acid chains, so that their parent protein is ambiguous. These so-called shared peptides make the protein-level statistical analysis a challenge and are often not accounted for. In this article, we use a linear model describing peptide-protein relationships to build a likelihood ratio test of differential abundance for proteins. We show that the likelihood ratio statistic can be computed in linear time with the number of peptides. We also provide the asymptotic null distribution of a regularized version of our statistic. Experiments on both real and simulated datasets show that our procedure outperforms state-of-the-art methods. The procedures are available via the pepa.test function of the DAPAR Bioconductor R package.
Dai, Cong; Jiang, Min; Sun, Ming-Jun; Cao, Qin
2018-05-01
Fecal immunochemical test (FIT) is a promising marker for assessment of inflammatory bowel disease activity. However, the utility of FIT for predicting mucosal healing (MH) of ulcerative colitis (UC) patients has yet to be clearly demonstrated. The objective of our study was to perform a diagnostic test accuracy meta-analysis evaluating the diagnostic accuracy of FIT in predicting MH of UC patients. We systematically searched the databases from inception to November 2017 for studies that evaluated MH in UC. The methodological quality of each study was assessed according to the Quality Assessment of Diagnostic Accuracy Studies checklist. The extracted data were pooled using a summary receiver operating characteristic curve model. Random-effects model was used to summarize the diagnostic odds ratio, sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Six studies comprising 625 UC patients were included in the meta-analysis. The pooled sensitivity and specificity values for predicting MH in UC were 0.77 (95% confidence interval [CI], 0.72-0.81) and 0.81 (95% CI, 0.76-0.85), respectively. The FIT level had a high rule-in value (positive likelihood ratio, 3.79; 95% CI, 2.85-5.03) and a moderate rule-out value (negative likelihood ratio, 0.26; 95% CI, 0.16-0.43) for predicting MH in UC. The results of the receiver operating characteristic curve analysis (area under the curve, 0.88; standard error of the mean, 0.02) and diagnostic odds ratio (18.08; 95% CI, 9.57-34.13) also revealed improved discrimination for identifying MH in UC with FIT concentration. Our meta-analysis has found that FIT is a simple, reliable non-invasive marker for predicting MH in UC patients. © 2018 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
Pan, Qun-Xiong; Su, Zi-Jian; Zhang, Jian-Hua; Wang, Chong-Ren; Ke, Shao-Ying
2015-01-01
People's Republic of China is one of the countries with the highest incidence of gastric cancer, accounting for 45% of all new gastric cancer cases in the world. Therefore, strong prognostic markers are critical for the diagnosis and survival of Chinese patients suffering from gastric cancer. Recent studies have begun to unravel the mechanisms linking the host inflammatory response to tumor growth, invasion and metastasis in gastric cancers. Based on this relationship between inflammation and cancer progression, several inflammation-based scores have been demonstrated to have prognostic value in many types of malignant solid tumors. To compare the prognostic value of inflammation-based prognostic scores and tumor node metastasis (TNM) stage in patients undergoing gastric cancer resection. The inflammation-based prognostic scores were calculated for 207 patients with gastric cancer who underwent surgery. Glasgow prognostic score (GPS), neutrophil lymphocyte ratio (NLR), platelet lymphocyte ratio (PLR), prognostic nutritional index (PNI), and prognostic index (PI) were analyzed. Linear trend chi-square test, likelihood ratio chi-square test, and receiver operating characteristic were performed to compare the prognostic value of the selected scores and TNM stage. In univariate analysis, preoperative serum C-reactive protein (P<0.001), serum albumin (P<0.001), GPS (P<0.001), PLR (P=0.002), NLR (P<0.001), PI (P<0.001), PNI (P<0.001), and TNM stage (P<0.001) were significantly associated with both overall survival and disease-free survival of patients with gastric cancer. In multivariate analysis, GPS (P=0.024), NLR (P=0.012), PI (P=0.001), TNM stage (P<0.001), and degree of differentiation (P=0.002) were independent predictors of gastric cancer survival. GPS and TNM stage had a comparable prognostic value and higher linear trend chi-square value, likelihood ratio chi-square value, and larger area under the receiver operating characteristic curve as compared to other inflammation-based prognostic scores. The present study indicates that preoperative GPS and TNM stage are robust predictors of gastric cancer survival as compared to NLR, PLR, PI, and PNI in patients undergoing tumor resection.
A likelihood ratio test for evolutionary rate shifts and functional divergence among proteins
Knudsen, Bjarne; Miyamoto, Michael M.
2001-01-01
Changes in protein function can lead to changes in the selection acting on specific residues. This can often be detected as evolutionary rate changes at the sites in question. A maximum-likelihood method for detecting evolutionary rate shifts at specific protein positions is presented. The method determines significance values of the rate differences to give a sound statistical foundation for the conclusions drawn from the analyses. A statistical test for detecting slowly evolving sites is also described. The methods are applied to a set of Myc proteins for the identification of both conserved sites and those with changing evolutionary rates. Those positions with conserved and changing rates are related to the structures and functions of their proteins. The results are compared with an earlier Bayesian method, thereby highlighting the advantages of the new likelihood ratio tests. PMID:11734650
Martell, R F; Desmet, A L
2001-12-01
This study departed from previous research on gender stereotyping in the leadership domain by adopting a more comprehensive view of leadership and using a diagnostic-ratio measurement strategy. One hundred and fifty-one managers (95 men and 56 women) judged the leadership effectiveness of male and female middle managers by providing likelihood ratings for 14 categories of leader behavior. As expected, the likelihood ratings for some leader behaviors were greater for male managers, whereas for other leader behaviors, the likelihood ratings were greater for female managers or were no different. Leadership ratings revealed some evidence of a same-gender bias. Providing explicit verification of managerial success had only a modest effect on gender stereotyping. The merits of adopting a probabilistic approach in examining the perception and treatment of stigmatized groups are discussed.
Descatha, A; Dale, A-M; Franzblau, A; Coomes, J; Evanoff, B
2010-02-01
We evaluated the utility of physical examination manoeuvres in the prediction of carpal tunnel syndrome (CTS) in a population-based research study. We studied a cohort of 1108 newly employed workers in several industries. Each worker completed a symptom questionnaire, a structured physical examination and nerve conduction study. For each hand, our CTS case definition required both median nerve conduction abnormality and symptoms classified as "classic" or "probable" on a hand diagram. We calculated the positive predictive values and likelihood ratios for physical examination manoeuvres in subjects with and without symptoms. The prevalence of CTS in our cohort was 1.2% for the right hand and 1.0% for the left hand. The likelihood ratios of a positive test for physical provocative tests ranged from 2.0 to 3.3, and those of a negative test from 0.3 to 0.9. The post-test probability of positive testing was <50% for all strategies tested. Our study found that physical examination, alone or in combination with symptoms, was not predictive of CTS in a working population. We suggest using specific symptoms as a first-level screening tool, and nerve conduction study as a confirmatory test, as a case definition strategy in research settings.
A close examination of double filtering with fold change and t test in microarray analysis
2009-01-01
Background: Many researchers use the double filtering procedure with fold change and t test to identify differentially expressed genes, in the hope that the double filtering will provide extra confidence in the results. Due to its simplicity, the double filtering procedure has been popular with applied researchers despite the development of more sophisticated methods. Results: This paper, for the first time to our knowledge, provides theoretical insight into the drawback of the double filtering procedure. We show that fold change assumes all genes to have a common variance, while the t statistic assumes gene-specific variances. The two statistics are based on contradicting assumptions. Under the assumption that gene variances arise from a mixture of a common variance and gene-specific variances, we develop the theoretically most powerful likelihood ratio test statistic. We further demonstrate that the posterior inference based on a Bayesian mixture model and the widely used significance analysis of microarrays (SAM) statistic are better approximations to the likelihood ratio test than the double filtering procedure. Conclusion: We demonstrate through hypothesis testing theory, simulation studies and real data examples, that well constructed shrinkage testing methods, which can be united under the mixture gene variance assumption, can considerably outperform the double filtering procedure. PMID:19995439
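For concreteness, here is a minimal sketch of the double filtering procedure being critiqued, applied to simulated log2 expression data; all thresholds and dimensions are hypothetical. The paper's point is that the two filters rest on contradictory variance assumptions, not that the procedure is hard to run:

```python
import numpy as np
from scipy.stats import ttest_ind

def double_filter(group1, group2, fc_cutoff=1.0, p_cutoff=0.05):
    """Flag genes passing BOTH a fold-change and a t-test filter.

    group1, group2: genes x samples arrays of log2 expression values;
    fc_cutoff is on the log2 scale (1.0 = two-fold change).
    """
    log_fc = group1.mean(axis=1) - group2.mean(axis=1)
    t_stat, p_val = ttest_ind(group1, group2, axis=1)
    return (np.abs(log_fc) >= fc_cutoff) & (p_val <= p_cutoff)

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 5))   # hypothetical: 1000 genes, 5 arrays per group
y = rng.normal(size=(1000, 5))
y[:50] += 1.5                    # 50 genes truly shifted
print(double_filter(x, y).sum(), "genes pass both filters")
```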
Models and analysis for multivariate failure time data
NASA Astrophysics Data System (ADS)
Shih, Joanna Huang
The goal of this research is to develop and investigate models and analytic methods for multivariate failure time data. We compare models in terms of direct modeling of the margins, flexibility of dependency structure, local vs. global measures of association, and ease of implementation. In particular, we study copula models, and models produced by right neutral cumulative hazard functions and right neutral hazard functions. We examine the changes of association over time for families of bivariate distributions induced from these models by displaying their density contour plots, conditional density plots, correlation curves of Doksum et al., and local cross ratios of Oakes. We know that bivariate distributions with the same margins might exhibit quite different dependency structures. In addition to modeling, we study estimation procedures. For copula models, we investigate three estimation procedures. The first procedure is full maximum likelihood. The second procedure is two-stage maximum likelihood. At stage 1, we estimate the parameters in the margins by maximizing the marginal likelihood. At stage 2, we estimate the dependency structure by fixing the margins at the estimated ones. The third procedure is two-stage partially parametric maximum likelihood. It is similar to the second procedure, but we estimate the margins by the Kaplan-Meier estimate. We derive asymptotic properties for these three estimation procedures and compare their efficiency by Monte-Carlo simulations and direct computations. For models produced by right neutral cumulative hazards and right neutral hazards, we derive the likelihood and investigate the properties of the maximum likelihood estimates. Finally, we develop goodness of fit tests for the dependency structure in the copula models. We derive a test statistic and its asymptotic properties based on the test of homogeneity of Zelterman and Chen (1988), and a graphical diagnostic procedure based on the empirical Bayes approach. We study the performance of these two methods using actual and computer generated data.
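A minimal sketch of the two-stage maximum likelihood procedure described above, using a Clayton copula with exponential margins as an illustrative family (censoring, which the thesis handles, is omitted here): stage 1 fits the margins by maximum likelihood, stage 2 maximizes the copula likelihood with the margins held fixed.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def clayton_loglik(theta, u, v):
    """Log-density of the Clayton copula at pairs (u, v), theta > 0."""
    return np.sum(np.log(1 + theta)
                  - (1 + theta) * (np.log(u) + np.log(v))
                  - (2 + 1/theta) * np.log(u**(-theta) + v**(-theta) - 1))

def two_stage_fit(t1, t2):
    # Stage 1: ML for exponential margins (rate = 1/mean, uncensored case).
    rate1, rate2 = 1/np.mean(t1), 1/np.mean(t2)
    u, v = 1 - np.exp(-rate1 * t1), 1 - np.exp(-rate2 * t2)
    # Stage 2: maximize the copula likelihood with margins held fixed.
    res = minimize_scalar(lambda th: -clayton_loglik(th, u, v),
                          bounds=(1e-4, 50), method="bounded")
    return rate1, rate2, res.x

# simulate from a Clayton copula (conditional inversion), exponential margins
rng = np.random.default_rng(1)
theta, n = 2.0, 500
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = (u**(-theta) * (w**(-theta / (1 + theta)) - 1) + 1)**(-1 / theta)
t1, t2 = -np.log(1 - u) / 0.5, -np.log(1 - v) / 1.5
print(two_stage_fit(t1, t2))  # rates near (0.5, 1.5), theta near 2
```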
Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema
2016-08-10
Many chemometric tools are invaluable and have proven effective in data mining and substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data due to rapid development of advanced analytical techniques, delivering much information in a single measurement run. This concerns especially spectra, which are frequently used as the subject of comparative analysis in e.g. forensic sciences. In the presented study the microtraces collected from the scenarios of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry and car paints were analysed using Raman spectroscopy. In the forensic context analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences using the likelihood ratio approach. However, for proper construction of LR models for highly multivariate data, such as spectra, chemometric tools must be employed for substantial data compression. Conversion from classical feature representation to distance representation was proposed for revealing hidden data peculiarities and linear discriminant analysis was further applied for minimising the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem of highly multivariate and correlated data after proper extraction of the most relevant features and variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.
Exact one-sided confidence bounds for the risk ratio in 2 x 2 tables with structural zero.
Lloyd, Chris J; Moldovan, Max V
2007-12-01
This paper examines exact one-sided confidence limits for the risk ratio in a 2 x 2 table with structural zero. Starting with four approximate lower and upper limits, we adjust each using the algorithm of Buehler (1957) to arrive at lower (upper) limits that have exact coverage properties and are as large (small) as possible subject to coverage, as well as an ordering constraint. Different Buehler limits are compared by their mean size, since all are exact in their coverage. Buehler limits based on the signed root likelihood ratio statistic are found to have the best performance and are recommended for practical use. © 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
Parametric Model Based On Imputations Techniques for Partly Interval Censored Data
NASA Astrophysics Data System (ADS)
Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah
2017-12-01
The term ‘survival analysis’ has been used in a broad sense to describe a collection of statistical procedures for data analysis in which the outcome variable of interest is the time until an event occurs, and the time to failure of a specific experimental unit may be censored: right-, left-, interval-, or partly interval-censored (PIC). In this paper, the analysis was conducted based on a parametric Cox model for PIC data. Moreover, several imputation techniques were used: midpoint, left and right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, namely the Turnbull and Cox models, using clinical trial data (breast cancer data), which showed the validity of the proposed model. Results on this dataset indicated that the parametric Cox model was superior in terms of the estimation of survival functions, likelihood ratio tests, and their P-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median methods showed better results with respect to the estimation of the survival function.
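A minimal sketch of the imputation step, which turns each censoring interval into a single event time before the parametric model is fitted. The function below is hypothetical; it implements the midpoint, left-point, right-point and random rules named above, while the abstract's 'mean' and 'median' rules are not fully specified there and are omitted:

```python
import numpy as np

def impute_interval(left, right, method="midpoint", rng=None):
    """Replace interval-censored times (left, right] by single points."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    if method == "midpoint":
        return (left + right) / 2.0
    if method == "left":
        return left
    if method == "right":
        return right
    if method == "random":
        if rng is None:
            rng = np.random.default_rng()
        return rng.uniform(left, right)
    raise ValueError(f"unknown method: {method}")

# hypothetical visit intervals bracketing each event time
L, R = [0.0, 1.5, 3.0], [1.0, 2.5, 4.0]
print(impute_interval(L, R, "midpoint"))  # [0.5, 2.0, 3.5]
```

After imputation, the completed data can be analyzed with any standard survival routine, which is what makes this approach attractive despite its approximations.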
Using the β-binomial distribution to characterize forest health
S.J. Zarnoch; R.L. Anderson; R.M. Sheffield
1995-01-01
The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
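A minimal sketch of the maximum likelihood fit and an accompanying likelihood ratio test against a plain binomial (no overdispersion), on hypothetical plot-level infection counts. Because the binomial sits on the boundary of the β-binomial family, the 1-df chi-square p-value is only approximate:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom, binom, chi2

def fit_betabinom(k, n):
    """Maximum likelihood fit of the beta-binomial parameters (a, b)."""
    k, n = np.asarray(k), np.asarray(n)
    nll = lambda t: -betabinom.logpmf(k, n, *np.exp(t)).sum()  # log-scale params
    res = minimize(nll, x0=np.zeros(2), method="Nelder-Mead")
    a, b = np.exp(res.x)
    return a, b, -res.fun

# hypothetical data: infected trees out of 10 examined in each of 10 plots
k = np.array([0, 1, 0, 4, 7, 2, 0, 9, 1, 3])
n = np.full(10, 10)
a, b, ll_bb = fit_betabinom(k, n)

# LR test against the binomial with a common infection probability
p_hat = k.sum() / n.sum()
ll_bin = binom.logpmf(k, n, p_hat).sum()
print(a, b, chi2.sf(2 * (ll_bb - ll_bin), df=1))
```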
A Note on Three Statistical Tests in the Logistic Regression DIF Procedure
ERIC Educational Resources Information Center
Paek, Insu
2012-01-01
Although logistic regression became one of the well-known methods for detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under maximum likelihood, do not seem to be consistently distinguished in the DIF literature. This paper provides a clarifying…
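Of the three tests, the LR test is the easiest to state compactly: fit the item response with and without the group terms and compare log-likelihoods. A minimal sketch under the usual 2-df formulation (a group main effect for uniform DIF plus a score-by-group interaction for non-uniform DIF), with simulated data; this illustrates the generic logistic-regression DIF procedure, not any dataset from the paper:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def dif_lr_test(item, total_score, group):
    """2-df LR test for uniform + non-uniform DIF in one item."""
    X0 = sm.add_constant(np.column_stack([total_score]))
    X1 = sm.add_constant(np.column_stack([total_score, group,
                                          total_score * group]))
    ll0 = sm.Logit(item, X0).fit(disp=0).llf
    ll1 = sm.Logit(item, X1).fit(disp=0).llf
    stat = 2.0 * (ll1 - ll0)
    return stat, chi2.sf(stat, df=2)

# hypothetical data: 400 examinees, an item favouring group 1
rng = np.random.default_rng(2)
score = rng.normal(size=400)
group = np.repeat([0, 1], 200)
p = 1 / (1 + np.exp(-(score + 0.8 * group)))
item = rng.binomial(1, p)
print(dif_lr_test(item, score, group))
```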
Langholz, Bryan; Thomas, Duncan C.; Stovall, Marilyn; Smith, Susan A.; Boice, John D.; Shore, Roy E.; Bernstein, Leslie; Lynch, Charles F.; Zhang, Xinbo; Bernstein, Jonine L.
2009-01-01
Methods for the analysis of individually matched case-control studies with location-specific radiation dose and tumor location information are described. These include likelihood methods for analyses that use only cases with precise tumor location information and methods that also include cases with imprecise tumor location information. The theory establishes that each of these likelihood-based methods estimates the same radiation rate ratio parameters, within the context of the appropriate model for location and subject-level covariate effects. The underlying assumptions are characterized and the potential strengths and limitations of each method are described. The methods are illustrated and compared using the WECARE study of radiation and asynchronous contralateral breast cancer. PMID:18647297
NASA Astrophysics Data System (ADS)
Barkley, Brett E.
A cooperative detection and tracking algorithm for multiple targets constrained to a road network is presented for fixed-wing Unmanned Air Vehicles (UAVs) with a finite field of view. Road networks of interest are formed into graphs with nodes that indicate the target likelihood ratio (before detection) and position probability (after detection). A Bayesian likelihood ratio tracker recursively assimilates target observations until the cumulative observations at a particular location pass a detection criterion. At this point, a target is considered detected and a position probability is generated for the target on the graph. Data association is subsequently used to route future measurements to update the likelihood ratio tracker (for undetected targets) or to update a position probability (for a previously detected target). Three strategies for motion planning of UAVs are proposed to balance searching for new targets with tracking known targets for a variety of scenarios. Performance was tested in Monte Carlo simulations for a variety of mission parameters, including tracking on road networks of varying complexity and using UAVs at various altitudes.
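The recursive update at the heart of such a tracker is a per-node accumulation of log-likelihood ratios until a detection threshold is crossed. A minimal sketch with a hypothetical two-point sensor model (a probability of detection and a false-alarm probability) and a hypothetical threshold; the paper's data association and motion planning layers are not shown:

```python
import numpy as np

def update_node_lrs(log_lr, obs, p_detect=0.9, p_false=0.05, threshold=40.0):
    """One Bayesian update of per-node target log-likelihood ratios.

    log_lr: current log-LR at each road-network node in the field of view.
    obs:    1 where the sensor reported a detection, 0 otherwise.
    Returns updated log-LRs and a mask of nodes passing the criterion.
    """
    obs = np.asarray(obs)
    llr_hit = np.log(p_detect / p_false)               # evidence from a hit
    llr_miss = np.log((1 - p_detect) / (1 - p_false))  # evidence from a miss
    log_lr = log_lr + np.where(obs == 1, llr_hit, llr_miss)
    return log_lr, log_lr > np.log(threshold)

log_lr = np.zeros(5)  # five nodes in view, even prior odds
log_lr, detected = update_node_lrs(log_lr, obs=[0, 1, 1, 0, 0])
print(log_lr, detected)
```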
Optimum detection of tones transmitted by a spacecraft
NASA Technical Reports Server (NTRS)
Simon, M. K.; Shihabi, M. M.; Moon, T.
1995-01-01
The performance of a scheme proposed for automated routine monitoring of deep-space missions is presented. The scheme uses four different tones (sinusoids) transmitted from the spacecraft (S/C) to a ground station with the positive identification of each of them used to indicate different states of the S/C. Performance is measured in terms of detection probability versus false alarm probability with detection signal-to-noise ratio as a parameter. The cases where the phase of the received tone is unknown and where both the phase and frequency of the received tone are unknown are treated separately. The decision rules proposed for detecting the tones are formulated from average-likelihood ratio and maximum-likelihood ratio tests, the former resulting in optimum receiver structures.
Recreating a functional ancestral archosaur visual pigment.
Chang, Belinda S W; Jönsson, Karolina; Kazmi, Manija A; Donoghue, Michael J; Sakmar, Thomas P
2002-09-01
The ancestors of the archosaurs, a major branch of the diapsid reptiles, originated more than 240 MYA near the dawn of the Triassic Period. We used maximum likelihood phylogenetic ancestral reconstruction methods and explored different models of evolution for inferring the amino acid sequence of a putative ancestral archosaur visual pigment. Three different types of maximum likelihood models were used: nucleotide-based, amino acid-based, and codon-based models. Where possible, within each type of model, likelihood ratio tests were used to determine which model best fit the data. Ancestral reconstructions of the ancestral archosaur node using the best-fitting models of each type were found to be in agreement, except for three amino acid residues at which one reconstruction differed from the other two. To determine if these ancestral pigments would be functionally active, the corresponding genes were chemically synthesized and then expressed in a mammalian cell line in tissue culture. The expressed artificial genes were all found to bind to 11-cis-retinal to yield stable photoactive pigments with λmax values of about 508 nm, which is slightly redshifted relative to that of extant vertebrate pigments. The ancestral archosaur pigments also activated the retinal G protein transducin, as measured in a fluorescence assay. Our results show that ancestral genes from ancient organisms can be reconstructed de novo and tested for function using a combination of phylogenetic and biochemical methods.
Comparison of two weighted integration models for the cueing task: linear and likelihood
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2003-01-01
In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue), compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained with a limited-capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited-capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models, a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratio (SNR) increases. To test these models, 3 observers performed a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNRs. Analysis of a limited-capacity attentional switching model was also included and rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.
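The two decision rules being compared can be written in a few lines. A minimal sketch for a two-location trial, assuming unit-variance Gaussian responses with signal mean d and hypothetical attention weights; the weights, d, and example responses are all illustrative, not the paper's fitted values:

```python
import numpy as np
from scipy.stats import norm

def linear_rule(x_cued, x_uncued, w=(0.8, 0.2)):
    """Linear model: weighted sum of the raw responses at the two locations."""
    return w[0] * x_cued + w[1] * x_uncued

def weighted_likelihood_rule(x_cued, x_uncued, d=1.0, w=(0.8, 0.2)):
    """Bayesian model: weighted sum of per-location likelihood ratios for
    signal (mean d) versus noise (mean 0) under unit-variance Gaussian noise."""
    lr = lambda x: norm.pdf(x, loc=d) / norm.pdf(x, loc=0)  # = exp(d*x - d*d/2)
    return w[0] * lr(x_cued) + w[1] * lr(x_uncued)

# For small d, exp(d*x - d*d/2) ~ 1 + d*x, so the two rules order trials almost
# identically at low SNR; they come apart as d (the SNR) grows.
x1, x2 = np.array([0.3, 1.4]), np.array([-0.2, 0.9])
print(linear_rule(x1, x2), weighted_likelihood_rule(x1, x2, d=2.0))
```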
Mehler, W Tyler; Keough, Michael J; Pettigrove, Vincent
2018-04-01
Three common false-negative scenarios have been encountered with amendment addition in whole-sediment toxicity identification evaluations (TIEs): dilution of toxicity by amendment addition (i.e., not toxic enough), not enough amendment present to reduce toxicity (i.e., too toxic), and the amendment itself eliciting a toxic response (i.e., secondary amendment effect). One such amendment for which all 3 types of false-negatives have been observed is the nonpolar organic amendment (activated carbon or powdered coconut charcoal). The objective of the present study was to reduce the likelihood of encountering false-negatives with this amendment and to increase the value of the whole-sediment TIE bioassay. To do this, the present study evaluated the effects of various activated carbon additions on the survival, growth, emergence, and mean development rate of Chironomus tepperi. Using this information, an alternative method for this amendment was developed, which utilized a combination of multiple amendment addition ratios based on wet weight (1%, lower likelihood of the secondary amendment effect; 5%, higher reduction of contaminant) and nonconventional endpoints (emergence, mean development rate). This alternative method was then validated in the laboratory (using spiked sediments) and with contaminated field sediments. Using these multiple activated carbon ratios in combination with additional endpoints (namely, emergence) reduced the likelihood of all 3 types of false-negatives and provided a more sensitive evaluation of risk. Environ Toxicol Chem 2018;37:1219-1230. © 2017 SETAC.
Code of Federal Regulations, 2010 CFR
2010-01-01
... that the facts that caused the deficient share-asset ratio no longer exist; and (ii) The likelihood of further depreciation of the share-asset ratio is not probable; and (iii) The return of the share-asset ratio to its normal limits within a reasonable time for the credit union concerned is probable; and (iv...
Parkinson's disease: a population-based investigation of life satisfaction and employment.
Gustafsson, Helena; Nordström, Peter; Stråhle, Stefan; Nordström, Anna
2015-01-01
To investigate relationships between individuals' socioeconomic situations and quality of life in working-aged subjects with Parkinson's disease. A population-based cohort comprising 1,432 people with Parkinson's disease and 1,135 matched controls, who responded to a questionnaire. Logistic regression analysis was performed to identify factors associated with life satisfaction and likelihood of employment. In multivariate analyses, Parkinson's disease was associated with an increased risk of dissatisfaction with life (odds ratio (OR) = 5.4, 95% confidence interval (95% CI) = 4.2-7.1) and reduced likelihood of employment (OR = 0.30, 95% CI = 0.25-0.37). Employers' support was associated with greater likelihood of employment (p < 0.001). Twenty-four percent of people who had had Parkinson's disease for ≥ 10 years remained employed, and 6% worked full-time. People with Parkinson's disease also more frequently experienced work demands that exceeded their capacity; this factor and unemployment independently correlated with greater risk of dissatisfaction with life (both p < 0.05). People with Parkinson's disease have an increased risk of dissatisfaction with life. Employment situation is important for general life satisfaction among working-aged individuals. People with Parkinson's disease appear to find it difficult to meet the challenge of achieving a balanced employment situation.
A Study of Dim Object Detection for the Space Surveillance Telescope
2013-03-21
ENG-13-M-32. Current methods of dim object detection for space surveillance make use of a Gaussian log-likelihood-ratio-test-based... quantitatively comparing the efficacy of two methods for dim object detection, termed in this paper the point detector and the correlator, both of which rely... applications. It is used in national defense for detecting satellites. It is used to detect space debris, which threatens both civilian and
Using permutations to detect dependence between time series
NASA Astrophysics Data System (ADS)
Cánovas, Jose S.; Guillamón, Antonio; Ruíz, María del Carmen
2011-07-01
In this paper, we propose an independence test between two time series based on permutations. The proposed test can be carried out by means of different common statistics such as Pearson’s chi-square or the likelihood ratio. We also point out why an exact test is necessary. Simulated and real data (exchange rate returns between several currencies) reveal the capacity of this test to detect linear and nonlinear dependences.
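A minimal sketch of the idea: encode each series by its ordinal (permutation) patterns, cross-tabulate the pattern pairs, and apply the likelihood ratio (G) statistic to the table. The asymptotic chi-square p-value is used below for brevity; as the paper points out, an exact test is preferable in practice, in part because overlapping windows are serially dependent. The pattern order and sample sizes are hypothetical:

```python
import numpy as np
from scipy.stats import chi2_contingency

def ordinal_patterns(x, m=3):
    """Encode each length-m window of x by its ordinal (permutation) pattern."""
    win = np.lib.stride_tricks.sliding_window_view(np.asarray(x), m)
    ranks = win.argsort(axis=1).argsort(axis=1)
    return (ranks * m ** np.arange(m)).sum(axis=1)

def permutation_independence_test(x, y, m=3):
    px, py = ordinal_patterns(x, m), ordinal_patterns(y, m)
    cx = np.unique(px, return_inverse=True)[1]
    cy = np.unique(py, return_inverse=True)[1]
    table = np.zeros((cx.max() + 1, cy.max() + 1))
    np.add.at(table, (cx, cy), 1)
    g, p, dof, _ = chi2_contingency(table, lambda_="log-likelihood")
    return g, p

rng = np.random.default_rng(3)
x = rng.normal(size=2000)
print(permutation_independence_test(x, x + rng.normal(size=2000)))  # dependent
print(permutation_independence_test(x, rng.normal(size=2000)))      # independent
```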
Adult Age Differences in Frequency Estimations of Happy and Angry Faces
ERIC Educational Resources Information Center
Nikitin, Jana; Freund, Alexandra M.
2015-01-01
With increasing age, the ratio of gains to losses becomes more negative, which is reflected in expectations that positive events occur with a high likelihood in young adulthood, whereas negative events occur with a high likelihood in old age. Little is known about expectations of social events. Given that younger adults are motivated to establish…
Likelihood ratio-based integrated personal risk assessment of type 2 diabetes.
Sato, Noriko; Htun, Nay Chi; Daimon, Makoto; Tamiya, Gen; Kato, Takeo; Kubota, Isao; Ueno, Yoshiyuki; Yamashita, Hidetoshi; Fukao, Akira; Kayama, Takamasa; Muramatsu, Masaaki
2014-01-01
To facilitate personalized health care for multifactorial diseases, risks of genetic and clinical/environmental factors should be assessed together for each individual in an integrated fashion. This approach is possible with the likelihood ratio (LR)-based risk assessment system, as this system can incorporate manifold tests. We examined the usefulness of this system for assessing type 2 diabetes (T2D). Our system employed 29 genetic susceptibility variants, body mass index (BMI), and hypertension as risk factors whose LRs can be estimated from openly available T2D association data for the Japanese population. The pretest probability was set at a sex- and age-appropriate population average of diabetes prevalence. The classification performance of our LR-based risk assessment was compared to that of a non-invasive screening test for diabetes called TOPICS (with score based on age, sex, family history, smoking, BMI, and hypertension) using receiver operating characteristic analysis with a community cohort (n = 1263). The area under the receiver operating characteristic curve (AUC) for the LR-based assessment and TOPICS was 0.707 (95% CI 0.665-0.750) and 0.719 (0.675-0.762), respectively. These AUCs were much higher than that of a genetic risk score constructed using the same genetic susceptibility variants, 0.624 (0.574-0.674). The use of ethnically matched LRs is necessary for proper personal risk assessment. In conclusion, although LR-based integrated risk assessment for T2D still requires additional tests that evaluate other factors, such as risks involved in missing heritability, our results indicate the potential usability of the LR-based assessment system and stress the importance of stratified epidemiological investigations in personalized medicine.
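The integration step itself is a chain of Bayes updates in odds form, one LR per risk factor; multiplying LRs in this way implicitly assumes the factors are approximately independent given disease status. A minimal sketch with hypothetical values (the study's own LRs come from Japanese T2D association data and are not reproduced here):

```python
def integrated_posttest_probability(pretest_prob, lrs):
    """Chain likelihood ratios from several factors onto a pretest probability."""
    odds = pretest_prob / (1.0 - pretest_prob)
    for lr in lrs:
        odds *= lr  # assumes (approximate) conditional independence of factors
    return odds / (1.0 + odds)

# e.g. 8% age/sex-specific prevalence, then hypothetical LRs for
# genotype, BMI category, and hypertension status
print(integrated_posttest_probability(0.08, [1.3, 2.1, 1.6]))  # ~0.28
```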
Elizabeth, Nabiwemba L; Christopher, Orach Garimoi; Patrick, Kolsteren
2013-04-12
Achieving Millennium Development Goal 4 is dependent on significantly reducing neonatal mortality. Low birth weight is an underlying factor in most neonatal deaths. In developing countries the missed opportunity for providing life saving care is mainly a result of failure to identify low birth weight newborns. This study aimed at identifying a reliable anthropometric measurement for screening low birth weight and determining an operational cut-off point in the Uganda setting. This simple measurement is required because of lack of weighing scales in the community, and sometimes in the health facilities. This was a hospital-based cross-sectional study. Two midwives weighed 706 newborns and measured their foot length, head, chest, thigh and mid-upper arm circumferences within 24 hours after birth. Data were analysed using STATA version 10.0. Correlation with birth weight using Pearson's correlation coefficient and Receiver Operating Characteristics curve analysis were done to determine the measure that best predicts birth weight. Sensitivity and specificity were calculated for a range of measures to obtain operational cut-off points; and Likelihood Ratios and Diagnostic Odds Ratio were determined for each cut-off point. Birth weights ranged from 1370-5350 grams with a mean of 3050 grams (SD 0.53) and 85 (12%) babies weighed less than 2500 grams. All anthropometric measurements had a positive correlation with birth weight, with foot length showing the strongest (r = 0.76) and thigh circumference the weakest (r = 0.62) correlations. Foot length had the highest predictive value for low birth weight (AUC = 0.97) followed by mid-upper arm circumference (AUC = 0.94). Foot length and chest circumference had the highest sensitivity (94%) and specificity (90%), respectively, for screening low birth weight babies at the selected cut-off points. Chest circumference had a significantly higher positive likelihood ratio (8.7) than any other measure, and foot length had the lowest negative likelihood ratio. Chest circumference and foot length had diagnostic odds ratios of 97 and 77, respectively. Foot length was easier to measure and it involved minimal exposure of the baby to cold. A cut-off of foot length 7.9 cm had sensitivity of 94% and specificity of 83% for predicting low birth weight. This study suggests foot length as the most appropriate predictor for low birth weight in comparison to chest, head, mid-upper arm and thigh circumference in the Uganda setting. Use of low cost and easy to use tools to identify low birth weight babies by village health teams could support community efforts to save newborns.
Automated cross-identifying radio to infrared surveys using the LRPY algorithm: a case study
NASA Astrophysics Data System (ADS)
Weston, S. D.; Seymour, N.; Gulyaev, S.; Norris, R. P.; Banfield, J.; Vaccari, M.; Hopkins, A. M.; Franzen, T. M. O.
2018-02-01
Cross-identifying complex radio sources with optical or infrared (IR) counterparts in surveys such as the Australia Telescope Large Area Survey (ATLAS) has traditionally been performed manually. However, with new surveys from the Australian Square Kilometre Array Pathfinder detecting many tens of millions of radio sources, such an approach is no longer feasible. This paper presents new software (LRPY - Likelihood Ratio in PYTHON) to automate the process of cross-identifying radio sources with catalogues at other wavelengths. LRPY implements the likelihood ratio (LR) technique with a modification to account for two galaxies contributing to a sole measured radio component. We demonstrate LRPY by applying it to ATLAS DR3 and a Spitzer-based multiwavelength fusion catalogue, identifying 3848 matched sources via our LR-based selection criteria. A subset of 1987 sources has flux density values for all IRAC bands, which allows us to use criteria to distinguish between active galactic nuclei (AGNs) and star-forming galaxies (SFGs). We find that 936 radio sources (≈47 per cent) meet both of the Lacy and Stern AGN selection criteria. Of the matched sources, 295 have spectroscopic redshifts and we examine the radio to IR flux ratio versus redshift, proposing an AGN selection criterion below the Elvis radio-loud AGN limit for this dataset. Taking the union of all three AGN selection criteria we identify 956 as AGNs (≈48 per cent). From this dataset, we find a decreasing fraction of AGNs with lower radio flux densities, consistent with other results in the literature.
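A minimal sketch of the per-candidate computation that such LR matching automates, in the commonly used q(m) f(r) / n(m) form with a circular Gaussian positional error model; all input values below are hypothetical, and the two-galaxy modification described above is not included:

```python
import numpy as np

def match_lr(r_arcsec, sigma_arcsec, q_m, n_m):
    """Likelihood ratio for a candidate counterpart at radial offset r.

    f(r): circular Gaussian positional error distribution;
    q(m): magnitude distribution of true counterparts;
    n(m): surface density of background sources at that magnitude.
    """
    f_r = (np.exp(-r_arcsec**2 / (2 * sigma_arcsec**2))
           / (2 * np.pi * sigma_arcsec**2))
    return q_m * f_r / n_m

# a 1.2" offset with 1" positional errors and plausible q(m), n(m) values
print(match_lr(1.2, 1.0, q_m=0.05, n_m=2e-3))
```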
Estimating hazard ratios in cohort data with missing disease information due to death.
Binder, Nadine; Herrnböck, Anne-Sophie; Schumacher, Martin
2017-03-01
In clinical and epidemiological studies information on the primary outcome of interest, that is, the disease status, is usually collected at a limited number of follow-up visits. The disease status can often only be retrieved retrospectively in individuals who are alive at follow-up, but will be missing for those who died before. Right-censoring the death cases at the last visit (ad-hoc analysis) yields biased hazard ratio estimates of a potential risk factor, and the bias can be substantial and occur in either direction. In this work, we investigate three different approaches that use the same likelihood contributions derived from an illness-death multistate model in order to more adequately estimate the hazard ratio by including the death cases into the analysis: a parametric approach, a penalized likelihood approach, and an imputation-based approach. We investigate to which extent these approaches allow for an unbiased regression analysis by evaluating their performance in simulation studies and on a real data example. In doing so, we use the full cohort with complete illness-death data as reference and artificially induce missing information due to death by setting discrete follow-up visits. Compared to an ad-hoc analysis, all considered approaches provide less biased or even unbiased results, depending on the situation studied. In the real data example, the parametric approach is seen to be too restrictive, whereas the imputation-based approach could almost reconstruct the original event history information. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Johnston, Heidi Bart; Ganatra, Bela; Nguyen, My Huong; Habib, Ndema; Afework, Mesganaw Fantahun; Harries, Jane; Iyengar, Kirti; Moodley, Jennifer; Lema, Hailu Yeneneh; Constant, Deborah; Sen, Swapnaleen
2016-01-01
To assess the accuracy of assessment of eligibility for early medical abortion by community health workers using a simple checklist toolkit. Diagnostic accuracy study. Ethiopia, India and South Africa. Two hundred seventeen women in Ethiopia, 258 in India and 236 in South Africa were enrolled into the study. A checklist toolkit to determine eligibility for early medical abortion was validated by comparing results of clinician and community health worker assessment of eligibility using the checklist toolkit with the reference standard exam. Accuracy was over 90% and the negative likelihood ratio <0.1 at all three sites when used by clinician assessors. Positive likelihood ratios were 4.3 in Ethiopia, 5.8 in India and 6.3 in South Africa. When used by community health workers, the overall accuracy of the toolkit was 92% in Ethiopia, 80% in India and 77% in South Africa; negative likelihood ratios were 0.08 in Ethiopia, 0.25 in India and 0.22 in South Africa; and positive likelihood ratios were 5.9 in Ethiopia and 2.0 in India and South Africa. The checklist toolkit, as used by clinicians, was excellent at ruling out participants who were not eligible, and moderately effective at ruling in participants who were eligible for medical abortion. Results were promising when used by community health workers, particularly in Ethiopia where they had more prior experience with the use of diagnostic aids and longer professional training. The checklist toolkit assessments resulted in some participants being wrongly assessed as eligible for medical abortion, which is an area of concern. Further research is needed to streamline the components of the tool, explore the optimal duration and content of training for community health workers, and test feasibility and acceptability.
Sviklāne, Laura; Olmane, Evija; Dzērve, Zane; Kupčs, Kārlis; Pīrāgs, Valdis; Sokolovska, Jeļizaveta
2018-01-01
Little is known about the diagnostic value of hepatic steatosis index (HSI) and fatty liver index (FLI), as well as their link to metabolic syndrome in type 1 diabetes mellitus. We have screened the effectiveness of FLI and HSI in an observational pilot study of 40 patients with type 1 diabetes. FLI and HSI were calculated for 201 patients with type 1 diabetes. Forty patients with FLI/HSI values corresponding to different risk of liver steatosis were invited for liver magnetic resonance study. In-phase/opposed-phase technique of magnetic resonance was used. Accuracy of indices was assessed from the area under the receiver operating characteristic curve. Twelve (30.0%) patients had liver steatosis. For FLI, sensitivity was 90%; specificity, 74%; positive likelihood ratio, 3.46; negative likelihood ratio, 0.14; positive predictive value, 0.64; and negative predictive value, 0.93. For HSI, sensitivity was 86%; specificity, 66%; positive likelihood ratio, 1.95; negative likelihood ratio, 0.21; positive predictive value, 0.50; and negative predictive value, 0.92. Area under the receiver operating characteristic curve for FLI was 0.86 (95% confidence interval [0.72; 0.99]); for HSI 0.75 [0.58; 0.91]. Liver fat correlated with liver enzymes, waist circumference, triglycerides, and C-reactive protein. FLI correlated with C-reactive protein, liver enzymes, and blood pressure. HSI correlated with waist circumference and C-reactive protein. FLI ≥ 60 and HSI ≥ 36 were significantly associated with metabolic syndrome and nephropathy. The tested indices, especially FLI, can serve as surrogate markers for liver fat content and metabolic syndrome in type 1 diabetes. © 2017 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
Dasgupta, Subhankar; Dasgupta, Shyamal; Sharma, Partha Pratim; Mukherjee, Amitabha; Ghosh, Tarun Kumar
2011-11-01
To investigate the effect of oral progesterone on the accuracy of imaging studies performed to detect endometrial pathology, in comparison to hysteroscopy-guided biopsy, in perimenopausal women on progesterone treatment for abnormal uterine bleeding. The study population comprised women aged 40-55 years with complaints of abnormal uterine bleeding who were also undergoing oral progesterone therapy. Women with a uterus ≥ 12 weeks' gestation size, previous abnormal endometrial biopsy, cervical lesion on speculum examination, abnormal Pap smear, active pelvic infection, adnexal mass on clinical examination or during ultrasound scan, and a positive pregnancy test were excluded. A transvaginal ultrasound followed by saline infusion sonography was performed. On the following day, a hysteroscopy followed by a guided biopsy of the endometrium or any endometrial lesion was performed. The results of the imaging studies were compared with those of hysteroscopy and guided biopsy. The final analysis included 83 patients. For detection of overall pathology, polyp and fibroid, transvaginal ultrasound had positive likelihood ratios of 1.65, 5.45 and 5.4, respectively, and negative likelihood ratios of 0.47, 0.6 and 0.43, respectively. For detection of overall pathology, polyp and fibroid, saline infusion sonography had positive likelihood ratios of 4.4, 5.35 and 11.8, respectively, and negative likelihood ratios of 0.3, 0.2 and 0.15, respectively. In perimenopausal women on oral progesterone therapy for abnormal uterine bleeding, imaging studies cannot be considered an accurate method for diagnosing endometrial pathology when compared to hysteroscopy and guided biopsy. © 2011 The Authors. Journal of Obstetrics and Gynaecology Research © 2011 Japan Society of Obstetrics and Gynecology.
Safety from Crime and Physical Activity among Older Adults: A Population-Based Study in Brazil
Weber Corseuil, Maruí; Hallal, Pedro Curi; Xavier Corseuil, Herton; Jayce Ceola Schneider, Ione; d'Orsi, Eleonora
2012-01-01
Objective. To evaluate the association between safety from crime and physical activity among older adults. Methods. A population-based survey including 1,656 older adults (60+ years) took place in Florianopolis, Brazil, in 2009-2010. Commuting and leisure time physical activity were assessed through the long version of the International Physical Activity Questionnaire. Perception of safety from crime was assessed using the Neighbourhood Environment Walkability Scale. Results. Perceiving the neighbourhood as safe during the day was related to a 25% increased likelihood of being active in leisure time (95% CI 1.02–1.53); general perception of safety was also associated with a 25% increase in the likelihood of being active in leisure time (95% CI 1.01–1.54). Street lighting was related to higher levels of commuting physical activity (prevalence ratio: 1.89; 95% CI 1.28–2.80). Conclusions. Safety investments are essential for promoting physical activity among older adults in Brazil. PMID:22291723
Estimating the variance for heterogeneity in arm-based network meta-analysis.
Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R
2018-04-19
Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.
Early pregnancy angiogenic markers and spontaneous abortion: an Odense Child Cohort study.
Andersen, Louise B; Dechend, Ralf; Karumanchi, S Ananth; Nielsen, Jan; Joergensen, Jan S; Jensen, Tina K; Christesen, Henrik T
2016-11-01
Spontaneous abortion is the most commonly observed adverse pregnancy outcome. The angiogenic factors soluble Fms-like kinase 1 and placental growth factor are critical for normal pregnancy and may be associated with spontaneous abortion. We investigated the association between maternal serum concentrations of soluble Fms-like kinase 1 and placental growth factor, and subsequent spontaneous abortion. In the prospective observational Odense Child Cohort, 1676 pregnant women donated serum in early pregnancy, gestational week <22 (median 83 days of gestation, interquartile range 71-103). Concentrations of soluble Fms-like kinase 1 and placental growth factor were determined with novel automated assays. Spontaneous abortion was defined as complete or incomplete spontaneous abortion, missed abortion, or blighted ovum <22+0 gestational weeks, and the prevalence was 3.52% (59 cases). The time-dependent effect of maternal serum concentrations of soluble Fms-like kinase 1 and placental growth factor on subsequent late first-trimester or second-trimester spontaneous abortion (n = 59) was evaluated using a Cox proportional hazards regression model, adjusting for body mass index, parity, season of blood sampling, and age. Furthermore, receiver operating characteristic analyses were employed to identify predictive values and optimal cut-off values. In the adjusted Cox regression analysis, increasing continuous concentrations of both soluble Fms-like kinase 1 and placental growth factor were significantly associated with a decreased hazard ratio for spontaneous abortion: soluble Fms-like kinase 1, 0.996 (95% confidence interval, 0.995-0.997), and placental growth factor, 0.89 (95% confidence interval, 0.86-0.93). When analyzed by receiver operating characteristic cut-offs, women with soluble Fms-like kinase 1 <742 pg/mL had an odds ratio for spontaneous abortion of 12.1 (95% confidence interval, 6.64-22.2), positive predictive value of 11.70%, negative predictive value of 98.90%, positive likelihood ratio of 3.64 (3.07-4.32), and negative likelihood ratio of 0.30 (0.19-0.48). For placental growth factor <19.7 pg/mL, odds ratio was 13.2 (7.09-24.4), positive predictive value was 11.80%, negative predictive value was 99.0%, positive likelihood ratio was 3.68 (3.12-4.34), and negative likelihood ratio was 0.28 (0.17-0.45). In the sensitivity analysis of 54 spontaneous abortions matched 1:4 to controls on gestational age at blood sampling, the highest area under the curve was seen for soluble Fms-like kinase 1 in prediction of first-trimester spontaneous abortion, 0.898 (0.834-0.962), and at the optimum cut-off of 725 pg/mL, negative predictive value was 51.4%, positive predictive value was 94.6%, positive likelihood ratio was 4.04 (2.57-6.35), and negative likelihood ratio was 0.22 (0.09-0.54). A strong, novel prospective association was identified between lower concentrations of soluble Fms-like kinase 1 and placental growth factor measured in early pregnancy and spontaneous abortion. A soluble Fms-like kinase 1 cut-off <742 pg/mL in maternal serum was optimal to stratify women at high vs low risk of spontaneous abortion. The cause and effect of angiogenic factor alterations in spontaneous abortions remain to be elucidated. Copyright © 2016 Elsevier Inc. All rights reserved.
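The reported likelihood ratios and predictive values are linked by Bayes' theorem in odds form, which is also what a Fagan nomogram computes graphically. A short sketch using the prevalence (3.52%) and LR+ (3.64) quoted above; note that it recovers the reported positive predictive value of roughly 11.7%:

```python
# Convert a pretest probability and a likelihood ratio into a posttest
# probability. The prevalence and LR+ below are taken from the abstract;
# the arithmetic, not the clinical interpretation, is the point.

def posttest_probability(pretest_prob, likelihood_ratio):
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

pretest = 0.0352            # cohort prevalence of spontaneous abortion
lr_pos = 3.64               # reported LR+ for sFlt-1 < 742 pg/mL
print(f"posttest probability: {posttest_probability(pretest, lr_pos):.3f}")
# prints ~0.117, matching the reported positive predictive value of 11.70%
```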
Durand, Eric; Bauer, Fabrice; Mansencal, Nicolas; Azarine, Arshid; Diebold, Benoit; Hagege, Albert; Perdrix, Ludivine; Gilard, Martine; Jobic, Yannick; Eltchaninoff, Hélène; Bensalah, Mourad; Dubourg, Benjamin; Caudron, Jérôme; Niarra, Ralph; Chatellier, Gilles; Dacher, Jean-Nicolas; Mousseaux, Elie
2017-08-15
To perform a head-to-head comparison of coronary CT angiography (CCTA) and dobutamine-stress echocardiography (DSE) in patients presenting with recent chest pain when troponin and ECG are negative. Two hundred seventeen patients with recent chest pain, normal ECG findings, and negative troponin were prospectively included in this multicenter study and were scheduled for CCTA and DSE. Invasive coronary angiography (ICA) was performed in patients when either DSE or CCTA was considered positive, when both were non-contributive, or in case of recurrent chest pain during 6-month follow-up. The presence of coronary artery stenosis was defined as a luminal obstruction >50% diameter in any coronary segment at ICA. ICA was performed in 75 (34.6%) patients. Coronary artery stenosis was identified in 37 (17%) patients. For CCTA, the sensitivity was 96.9% (95% CI 83.4-99.9), specificity 48.3% (29.4-67.5), positive likelihood ratio 2.06 (95% CI 1.36-3.11), and negative likelihood ratio 0.07 (95% CI 0.01-0.52). The sensitivity of DSE was 51.6% (95% CI 33.1-69.9), specificity 46.7% (28.3-65.7), positive likelihood ratio 1.03 (95% CI 0.62-1.72), and negative likelihood ratio 1.10 (95% CI 0.63-1.93). The CCTA:DSE ratio of true-positive and false-positive rates was 1.70 (95% CI 1.65-1.75) and 1.00 (95% CI 0.91-1.09), respectively, when non-contributive CCTA and DSE were both considered positive. Only one missed acute coronary syndrome was observed at six months. CCTA has higher diagnostic performance than DSE in the evaluation of patients with recent chest pain, normal ECG findings, and negative troponin to exclude coronary artery disease. Copyright © 2017. Published by Elsevier B.V.
Ruilong, Zong; Daohai, Xie; Li, Geng; Xiaohong, Wang; Chunjie, Wang; Lei, Tian
2017-01-01
To carry out a meta-analysis on the performance of fluorine-18-fluorodeoxyglucose (18F-FDG) PET/computed tomography (PET/CT) for the evaluation of solitary pulmonary nodules. In the meta-analysis, we performed searches of several electronic databases for relevant studies, including Google Scholar, PubMed, Cochrane Library, and several Chinese databases. The quality of all included studies was assessed by Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). Two observers independently extracted data from eligible articles. For the meta-analysis, the total sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratios were pooled. A summary receiver operating characteristic curve was constructed. The I² test was performed to assess the impact of study heterogeneity on the results of the meta-analysis. Meta-regression and subgroup analysis were carried out to investigate the potential covariates that might have considerable impacts on heterogeneity. Overall, 12 studies were included in this meta-analysis, including a total of 1297 patients and 1301 pulmonary nodules. The pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with corresponding 95% confidence intervals (CIs) were 0.82 (95% CI, 0.76-0.87), 0.81 (95% CI, 0.66-0.90), 4.3 (95% CI, 2.3-7.9), and 0.22 (95% CI, 0.16-0.30), respectively. Significant heterogeneity was observed in sensitivity (I² = 81.1%) and specificity (I² = 89.6%). Subgroup analysis showed that the best results for sensitivity (0.90; 95% CI, 0.68-0.86) and accuracy (0.93; 95% CI, 0.90-0.95) were found in prospective studies. The results of our analysis suggest that PET/CT is a useful tool for detecting malignant pulmonary nodules qualitatively. Although current evidence showed moderate accuracy for PET/CT in differentiating malignant from benign solitary pulmonary nodules, further work needs to be carried out to improve its reliability.
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2014-09-01
Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.
The evaluation of the OSGLR algorithm for restructurable controls
NASA Technical Reports Server (NTRS)
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
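The OSGLR algorithm itself is not spelled out in the abstract, but the role of age-weighting can be sketched with a much simpler scalar GLR detector for a persistent bias in Gaussian residuals. The forgetting factor, noise level and fault size below are all invented for illustration; this is not the OSGLR test:

```python
import numpy as np

# Age-weighted generalized likelihood ratio (GLR) detector for a constant
# bias in Gaussian residuals. The forgetting factor lam discounts old
# residuals, which is what limits the influence of slowly accumulating
# modeling error on the test statistic.

def age_weighted_glr(residuals, sigma=1.0, lam=0.98):
    s = 0.0   # age-weighted residual sum
    n = 0.0   # age-weighted effective sample size
    stats = []
    for r in residuals:
        s = lam * s + r
        n = lam * n + 1.0
        stats.append(s * s / (sigma * sigma * n))  # GLR statistic for a constant bias
    return np.array(stats)

rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, 500)
r[300:] += 1.5                        # simulated actuator bias appearing at t = 300
glr = age_weighted_glr(r)
print("max statistic before fault:", glr[:300].max().round(1))
print("max statistic after fault: ", glr[300:].max().round(1))
```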
flowVS: channel-specific variance stabilization in flow cytometry.
Azad, Ariful; Rajwa, Bartek; Pothen, Alex
2016-07-28
Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances. We present a variance-stabilization algorithm, called flowVS, that removes the mean-variance correlations from cell populations identified in each fluorescence channel. flowVS transforms each channel from all samples of a data set by the inverse hyperbolic sine (asinh) transformation. For each channel, the parameters of the transformation are optimally selected by Bartlett's likelihood-ratio test so that the populations attain homogeneous variances. The optimum parameters are then used to transform the corresponding channels in every sample. flowVS is therefore an explicit variance-stabilization method that stabilizes within-population variances in each channel by evaluating the homoskedasticity of clusters with a likelihood-ratio test. With two publicly available datasets, we show that flowVS removes the mean-variance dependence from raw FC data and makes the within-population variance relatively homogeneous. We demonstrate that alternative transformation techniques such as flowTrans, flowScape, logicle, and FCSTrans might not stabilize variance. Besides flow cytometry, flowVS can also be applied to stabilize variance in microarray data. With a publicly available data set we demonstrate that flowVS performs as well as the VSN software, a state-of-the-art approach developed for microarrays. The homogeneity of variance in cell populations across FC samples is desirable when extracting features uniformly and comparing cell populations with different levels of marker expressions. The newly developed flowVS algorithm solves the variance-stabilization problem in FC and microarrays by optimally transforming data with the help of Bartlett's likelihood-ratio test. On two publicly available FC datasets, flowVS stabilizes within-population variances more evenly than the available transformation and normalization techniques. flowVS-based variance stabilization can help in performing comparison and alignment of phenotypically identical cell populations across different samples. flowVS and the datasets used in this paper are publicly available in Bioconductor.
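The core selection step of flowVS can be sketched as follows: for each candidate asinh cofactor, transform the identified populations and keep the cofactor whose Bartlett statistic (variance homogeneity across populations) is smallest. This is only an illustration of the idea on synthetic data, not the Bioconductor implementation:

```python
import numpy as np
from scipy.stats import bartlett

# Sketch of the flowVS idea: choose the asinh cofactor that makes
# within-population variances most homogeneous, as judged by Bartlett's
# statistic across the populations of one channel.

def best_cofactor(populations, cofactors):
    """populations: list of 1-D arrays of raw channel intensities."""
    best_c, best_stat = None, np.inf
    for c in cofactors:
        transformed = [np.arcsinh(pop / c) for pop in populations]
        stat, _ = bartlett(*transformed)   # small statistic = similar variances
        if stat < best_stat:
            best_c, best_stat = c, stat
    return best_c, best_stat

rng = np.random.default_rng(1)
# Two synthetic populations whose raw standard deviation grows with the mean
pops = [rng.normal(200, 40, 800), rng.normal(2000, 400, 800)]
c, stat = best_cofactor(pops, cofactors=np.geomspace(10, 5000, 40))
print(f"selected cofactor: {c:.0f} (Bartlett statistic {stat:.2f})")
```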
Relationship Formation and Stability in Emerging Adulthood: Do Sex Ratios Matter?
ERIC Educational Resources Information Center
Warner, Tara D.; Manning, Wendy D.; Giordano, Peggy C.; Longmore, Monica A.
2011-01-01
Research links sex ratios with the likelihood of marriage and divorce. However, whether sex ratios similarly influence precursors to marriage (transitions in and out of dating or cohabiting relationships) is unknown. Utilizing data from the Toledo Adolescent Relationships Study and the 2000 U.S. Census, this study assesses whether sex ratios…
2013-01-01
Background Falls among the elderly are a major public health concern. Therefore, the possibility of a modeling technique which could better estimate fall probability is both timely and needed. Using biomedical, pharmacological and demographic variables as predictors, latent class analysis (LCA) is demonstrated as a tool for the prediction of falls among community dwelling elderly. Methods Using a retrospective data-set, a two-step LCA modeling approach was employed. First, we looked for the optimal number of latent classes for the seven medical indicators, along with the patients’ prescription medication and three covariates (age, gender, and number of medications). Second, the appropriate latent class structure, with the covariates, was modeled on the distal outcome (fall/no fall). The default estimator was maximum likelihood with robust standard errors. The Pearson chi-square, likelihood ratio chi-square, BIC, Lo-Mendell-Rubin Adjusted Likelihood Ratio test and the bootstrap likelihood ratio test were used for model comparisons. Results A review of the model fit indices with covariates shows that a six-class solution was preferred. The predictive probability for latent classes ranged from 84% to 97%. Entropy, a measure of classification accuracy, was good at 90%. Specific prescription medications were found to strongly influence group membership. Conclusions In conclusion, the LCA method was effective at finding relevant subgroups within a heterogeneous at-risk population for falling. This study demonstrated that LCA offers researchers a valuable tool to model medical data. PMID:23705639
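The model-comparison logic (fit a range of class counts, keep the best information criterion) can be sketched in Python. True LCA uses categorical indicators and dedicated software; sklearn's GaussianMixture is used below only as a readily available stand-in that exposes a BIC:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit candidate numbers of latent classes and keep the solution with the
# lowest BIC. Synthetic two-class data stand in for the clinical indicators.

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (200, 4)),      # synthetic "low-risk" class
               rng.normal(3, 1, (150, 4))])     # synthetic "high-risk" class

bics = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    bics[k] = gm.bic(X)

best_k = min(bics, key=bics.get)
print({k: round(v) for k, v in bics.items()})
print("number of classes preferred by BIC:", best_k)
```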
Experimental study of near-field entrainment of moderately overpressured jets
Solovitz, S.A.; Mastin, L.G.; Saffaraval, F.
2011-01-01
Particle image velocimetry (PIV) experiments have been conducted to study the velocity flow fields in the developing flow region of high-speed jets. These velocity distributions were examined to determine the entrained mass flow over a range of geometric and flow conditions, including overpressured cases up to an overpressure ratio of 2.83. In the region near the jet exit, all measured flows exhibited the same entrainment up until the location of the first shock when overpressured. Beyond this location, the entrainment was reduced with increasing overpressure ratio, falling to approximately 60% of the magnitudes seen when subsonic. Since entrainment ratios based on lower-speed, subsonic results are typically used in one-dimensional volcanological models of plume development, the current analytical methods will underestimate the likelihood of column collapse. In addition, the concept of the entrainment ratio normalization is examined in detail, as several key assumptions in this methodology do not apply when overpressured.
The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.
Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R
2013-01-01
In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
Xu, Maoqi; Chen, Liang
2018-01-01
The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomics studies of cancers and other complex diseases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Validation of the diagnostic score for acute lower abdominal pain in women of reproductive age.
Jearwattanakanok, Kijja; Yamada, Sirikan; Suntornlimsiri, Watcharin; Smuthtai, Waratsuda; Patumanond, Jayanton
2014-01-01
Background. The differential diagnosis of acute appendicitis, obstetric and gynecological conditions (OB-GYNc), or nonspecific abdominal pain in young adult females with lower abdominal pain is clinically challenging. The present study aimed to validate the recently developed clinical score for the diagnosis of acute lower abdominal pain in females of reproductive age. Method. Medical records of reproductive-age women (15-50 years) who were admitted for acute lower abdominal pain were collected. Validation data were obtained from patients admitted during a different period from the development data. Result. There were 302 patients in the validation cohort. For appendicitis, the score had a sensitivity of 91.9%, a specificity of 79.0%, and a positive likelihood ratio of 4.39. The sensitivity, specificity, and positive likelihood ratio for the diagnosis of OB-GYNc were 73.0%, 91.6%, and 8.73, respectively. The areas under the receiver operating characteristic (ROC) curves and the positive likelihood ratios for appendicitis and OB-GYNc in the validation data were not significantly different from those in the development data, implying similar performances. Conclusion. The clinical score developed for the diagnosis of acute lower abdominal pain in females of reproductive age may be applied to guide differential diagnoses in these patients.
Norström, Madelaine; Kristoffersen, Anja Bråthen; Görlach, Franziska Sophie; Nygård, Karin; Hopp, Petter
2015-01-01
In order to facilitate foodborne outbreak investigations there is a need to improve the methods for identifying the food products that should be sampled for laboratory analysis. The aim of this study was to examine the applicability of a likelihood ratio approach, previously developed on simulated data, to real outbreak data. We used human case and food product distribution data from the Norwegian enterohaemorrhagic Escherichia coli outbreak in 2006. The approach was adjusted to include time, space smoothing and to handle missing or misclassified information. The performance of the adjusted likelihood ratio approach on the data originating from the HUS outbreak and control data indicates that the adjusted approach is promising and could be a useful tool to assist and facilitate the investigation of foodborne outbreaks in the future, provided that good traceability is available and implemented in the distribution chain. However, the approach needs to be further validated on other outbreak data, including food products other than meat products, in order to draw a more general conclusion about the applicability of the developed approach. PMID:26237468
1996-09-01
… Generalized Likelihood Ratio (GLR) and voting techniques. The third class consisted of multiple hypothesis filter detectors, specifically the MMAE. … vector version, versus a tensor if we use the matrix version of the power spectral density estimate. Using this notation, we will derive an … as MATLAB, have an intrinsic sample covariance computation available, which makes this method quite easy to implement. In practice, the mean for the …
Use of prior odds for missing persons identifications.
Budowle, Bruce; Ge, Jianye; Chakraborty, Ranajit; Gill-King, Harrell
2011-06-27
Identification of missing persons from mass disasters is based on evaluation of a number of variables and observations regarding the combination of features derived from these variables. DNA typing now is playing a more prominent role in the identification of human remains, and particularly so for highly decomposed and fragmented remains. The strength of genetic associations, by either direct or kinship analyses, is often quantified by calculating a likelihood ratio. The likelihood ratio can be multiplied by prior odds based on nongenetic evidence to calculate the posterior odds, that is, by applying Bayes' Theorem, to arrive at a probability of identity. For the identification of human remains, the path creating the set and intersection of variables that contribute to the prior odds needs to be appreciated and well defined. Other than considering the total number of missing persons, the forensic DNA community has been silent on specifying the elements of prior odds computations. The variables include the number of missing individuals, eyewitness accounts, anthropological features, demographics and other identifying characteristics. The assumptions, supporting data and reasoning that are used to establish a prior probability that will be combined with the genetic data need to be considered and justified. Otherwise, data may be unintentionally or intentionally manipulated to achieve a probability of identity that cannot be supported and can thus misrepresent the uncertainty with associations. The forensic DNA community needs to develop guidelines for objectively computing prior odds.
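The quantitative step the abstract describes is Bayes' theorem in odds form: posterior odds = prior odds × likelihood ratio. A minimal sketch with invented numbers (how the prior should be set is precisely the contested point the authors raise):

```python
# Bayes' Theorem in odds form, as described in the abstract:
# posterior odds = prior odds x likelihood ratio. Numbers are hypothetical.

def posterior_probability(prior_prob, lr):
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * lr
    return post_odds / (1 + post_odds)

n_missing = 500                      # hypothetical closed-population disaster
prior = 1 / n_missing                # naive prior: one of the missing persons
lr = 1e6                             # hypothetical kinship likelihood ratio
print(f"probability of identity: {posterior_probability(prior, lr):.6f}")
```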
NASA Astrophysics Data System (ADS)
Yusof, Muhammad Mat; Sulaiman, Tajularipin; Khalid, Ruzelan; Hamid, Mohamad Shukri Abdul; Mansor, Rosnalini
2014-12-01
In professional sporting events, rating competitors before the tournament starts is a well-known approach to distinguishing the favorite team from the weaker teams. Various methodologies are used to rate competitors. In this paper, we explore four ways to rate competitors: least squares rating, maximum likelihood strength ratio, standing points in a large round-robin simulation, and previous league rank position. The tournament metric we used to evaluate the different rating approaches is the tournament outcome characteristics measure, defined as the probability that a particular team in the top 100q pre-tournament rank percentile progresses beyond round R, for all q and R. Based on the simulation results, we found that different rating approaches produce different effects for the teams. Our simulation results show that, of the eight teams participating in a knockout tournament with standard seeding, Perak has the highest probability of winning a tournament that uses the least squares rating approach, PKNS has the highest probability of winning using the maximum likelihood strength ratio and the large round-robin simulation approaches, while Perak has the highest probability of winning a tournament using the previous league season approach.
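The paper does not publish its formulas, but a least squares rating is conventionally computed Massey-style: choose ratings whose pairwise differences best fit the observed score margins, with a sum-to-zero constraint for identifiability. A sketch with hypothetical fixtures and margins:

```python
import numpy as np

# Massey-style least squares rating: solve for ratings r such that
# r_home - r_away best matches observed margins, with ratings summing to
# zero. Teams and results are hypothetical, not the paper's data.

teams = ["Perak", "PKNS", "Kedah", "Johor"]
games = [(0, 1, 2), (0, 2, 1), (1, 3, 3), (2, 3, -1), (1, 2, 2)]  # (home, away, margin)

n = len(teams)
rows, margins = [], []
for home, away, margin in games:
    row = np.zeros(n)
    row[home], row[away] = 1.0, -1.0
    rows.append(row)
    margins.append(margin)
rows.append(np.ones(n))    # identifiability constraint: ratings sum to zero
margins.append(0.0)

ratings, *_ = np.linalg.lstsq(np.array(rows), np.array(margins), rcond=None)
for team, r in sorted(zip(teams, ratings), key=lambda t: -t[1]):
    print(f"{team:6s} {r:+.2f}")
```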
[Usefulness of sputum Gram staining in community-acquired pneumonia].
Sato, Tadashi; Aoshima, Masahiro; Ohmagari, Norio; Tada, Hiroshi; Chohnabayashi, Naohiko
2002-07-01
To evaluate the usefulness of sputum Gram staining in community-acquired pneumonia (CAP), we reviewed 144 cases requiring hospitalization in the last 4 years. In the 97 cases in which both sputum Gram staining and culture were performed, the sensitivity was 75.5%, specificity 68.2%, positive predictive value 74.1%, negative predictive value 69.8%, positive likelihood ratio 2.37, negative likelihood ratio 0.36, and accuracy 72.2%. Concerning bacterial pneumonia (65 cases), we compared the Gram staining group (n = 33), which received initial antibiotic treatment based on sputum Gram staining, with the Empiric group (n = 32), which received antibiotics empirically. The success rates of the initial antibiotic treatment were 87.9% vs. 78.1% (P = 0.473); mean hospitalization periods were 9.67 vs. 11.75 days (P = 0.053); and periods of intravenous therapy were 6.73 vs. 7.91 days (P = 0.044), respectively. As for initial treatment, penicillins were used in the Gram staining group more frequently (P < 0.01). We conclude that sputum Gram staining is useful for shortening the treatment period and for the appropriate selection of initial antibiotics in bacterial pneumonia. We believe, therefore, that sputum Gram staining is indispensable as a diagnostic tool for CAP.
Top pair production in the dilepton decay channel with a tau lepton
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corbo, Matteo
2012-09-19
The top quark pair production and decay into leptons with at least one being a τ lepton is studied in the framework of the CDF experiment at the Tevatron proton-antiproton collider at Fermilab (USA). The selection requires an electron or a muon produced either by the τ lepton decay or by a W decay. The analysis uses the complete Run II data set, i.e. 9.0 fb^-1, selected by one trigger based on a low transverse momentum electron or muon plus one isolated charged track. The top quark pair production cross section at 1.96 TeV is measured at 8.2 ± 1.7 +1.2 -1.1 ± 0.5 pb, and the top branching ratio into the τ lepton is measured at 0.120 ± 0.027 +0.022 -0.019 ± 0.007, with statistical, systematic and luminosity uncertainties. These are to date the most accurate results in this top decay channel and are in good agreement with the results obtained using other decay channels of the top at the Tevatron. The branching ratio is also measured by separating the single-lepton from the two-lepton events with a log likelihood method. This is the first time these two signatures are separately identified. With a fit to data along the log-likelihood variable an alternative measurement of the branching ratio is made: 0.098 ± 0.022(stat.) ± 0.014(syst.); it is in good agreement with the expectations of the Standard Model (with lepton universality) within the experimental uncertainties. The branching ratio is constrained to be less than 0.159 at 95% confidence level. This limit translates into a limit on the top branching ratio into a potential charged Higgs boson.
Rossi, Maria C E; Nicolucci, Antonio; Pellegrini, Fabio; Comaschi, Marco; Ceriello, Antonio; Cucinotta, Domenico; Giorda, Carlo; Valentini, Umberto; Vespasiani, Giacomo; De Cosmo, Salvatore
2008-04-01
We evaluated to what extent the presence of risk factors and their interactions increased the likelihood of microalbuminuria (MAU) among individuals with type 2 diabetes. Fifty-five Italian diabetes outpatient clinics enrolled a sample of patients with type 2 diabetes, without urinary infections and overt diabetic nephropathy. A morning spot urine sample was collected to centrally determine the urinary albumin/creatinine ratio (ACR). A tree-based regression technique (RECPAM) and multivariate analyses were performed to investigate interactions between correlates of MAU. Of the 1841 patients recruited, 228 (12.4%) were excluded due to the presence of urinary infections and 56 (3.5%) for the presence of macroalbuminuria. Overall, the prevalence of MAU (ACR = 30-299 mg/g) was 19.1%. The RECPAM algorithm led to the identification of seven classes showing a marked difference in the likelihood of MAU. Non-smoker patients with HbA1c <7% and waist circumference ≤102 cm showed the lowest prevalence of MAU (7.5%), and represented the reference class. Patients with retinopathy, waist circumference >98 cm and HbA1c >8% showed the highest likelihood of MAU (odds ratio = 13.7; 95% confidence interval 6.8-27.6). In the other classes identified, the risk of MAU ranged between 3 and 5. Age, systolic blood pressure, HDL cholesterol levels and diabetes treatment represented additional, global correlates of MAU. The likelihood of MAU is strongly related to the interaction between diabetes severity, smoking habits and several components of the metabolic syndrome. In particular, abdominal obesity, elevated blood pressure levels and low HDL cholesterol levels substantially increase the risk of MAU. It is of primary importance to monitor MAU in high-risk individuals and aggressively intervene on modifiable risk factors.
Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.
Falk, Carl F; Biesanz, Jeremy C
2011-11-30
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
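The percentile bootstrap evaluated here can be sketched for the observed-variable case (the study itself used full latent-variable models): resample cases, re-estimate the a and b paths, and take percentiles of the a*b products:

```python
import numpy as np

# Percentile bootstrap for an indirect effect a*b in an observed-variable
# mediation model X -> M -> Y. Only the core resampling logic is shown;
# the simulated path coefficients (0.39) echo one of the design cells.

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=n)
M = 0.39 * X + rng.normal(size=n)          # a-path
Y = 0.39 * M + rng.normal(size=n)          # b-path

def ab_estimate(x, m, y):
    a = np.linalg.lstsq(np.c_[np.ones_like(x), x], m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.c_[np.ones_like(x), m, x], y, rcond=None)[0][1]  # b controls for X
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)            # resample cases with replacement
    boot.append(ab_estimate(X[idx], M[idx], Y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {ab_estimate(X, M, Y):.3f}, 95% PC CI [{lo:.3f}, {hi:.3f}]")
```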
Sinharay, Sandip
2017-03-01
Levine and Drasgow (1988) suggested an approach based on the Neyman-Pearson lemma to detect examinees whose response patterns are "aberrant" due to cheating, language issues, and so on. Belov (2016) used the approach of Levine and Drasgow (1988) to suggest a statistic based on the Neyman-Pearson Lemma (SBNPL) to detect item preknowledge when the investigator knows which items are compromised. This brief report proves that the SBNPL of Belov (2016) is equivalent to a statistic suggested for the same purpose by Drasgow, Levine, and Zickar 20 years ago.
Objectively combining AR5 instrumental period and paleoclimate climate sensitivity evidence
NASA Astrophysics Data System (ADS)
Lewis, Nicholas; Grünwald, Peter
2018-03-01
Combining instrumental period evidence regarding equilibrium climate sensitivity with largely independent paleoclimate proxy evidence should enable a more constrained sensitivity estimate to be obtained. Previous, subjective Bayesian approaches involved selection of a prior probability distribution reflecting the investigators' beliefs about climate sensitivity. Here a recently developed approach employing two different statistical methods—objective Bayesian and frequentist likelihood-ratio—is used to combine instrumental period and paleoclimate evidence based on data presented and assessments made in the IPCC Fifth Assessment Report. Probabilistic estimates from each source of evidence are represented by posterior probability density functions (PDFs) of physically-appropriate form that can be uniquely factored into a likelihood function and a noninformative prior distribution. The three-parameter form is shown accurately to fit a wide range of estimated climate sensitivity PDFs. The likelihood functions relating to the probabilistic estimates from the two sources are multiplicatively combined and a prior is derived that is noninformative for inference from the combined evidence. A posterior PDF that incorporates the evidence from both sources is produced using a single-step approach, which avoids the order-dependency that would arise if Bayesian updating were used. Results are compared with an alternative approach using the frequentist signed root likelihood ratio method. Results from these two methods are effectively identical, and provide a 5-95% range for climate sensitivity of 1.1-4.05 K (median 1.87 K).
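The multiplicative combination step can be sketched on a parameter grid. The sketch below multiplies two invented likelihood functions for sensitivity S and normalizes under a flat prior; the paper's actual contribution, deriving a prior that is noninformative for the combined evidence, is not reproduced here:

```python
import numpy as np

# Grid-based combination of two independent likelihood functions for
# climate sensitivity S. The two likelihood shapes are invented for
# illustration, and a flat prior is used purely for display purposes.

S = np.linspace(0.5, 8.0, 2000)                                    # sensitivity grid (K)
L_instr = np.exp(-0.5 * ((np.log(S) - np.log(2.0)) / 0.45) ** 2)   # "instrumental" likelihood
L_paleo = np.exp(-0.5 * ((np.log(S) - np.log(2.8)) / 0.35) ** 2)   # "paleoclimate" likelihood

post = L_instr * L_paleo            # single-step combination (no sequential updating)
post /= np.trapz(post, S)           # normalize to a density

cdf = np.cumsum(post) * (S[1] - S[0])
print("median:", round(S[np.searchsorted(cdf, 0.5)], 2), "K")
print("5-95% range:", round(S[np.searchsorted(cdf, 0.05)], 2),
      "-", round(S[np.searchsorted(cdf, 0.95)], 2), "K")
```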
Diagnostic Performance of Electronic Syndromic Surveillance Systems in Acute Care
Kashiouris, M.; O’Horo, J.C.; Pickering, B.W.; Herasevich, V.
2013-01-01
Context Healthcare Electronic Syndromic Surveillance (ESS) is the systematic collection, analysis and interpretation of ongoing clinical data with subsequent dissemination of results, which aid clinical decision-making. Objective To evaluate, classify and analyze the diagnostic performance, strengths and limitations of existing acute care ESS systems. Data Sources All studies available to us in the Ovid MEDLINE, Ovid EMBASE, CINAHL and Scopus databases, from as early as January 1972 through the first week of September 2012. Study Selection Prospective and retrospective trials examining the diagnostic performance of inpatient ESS and providing objective diagnostic data including sensitivity, specificity, positive and negative predictive values. Data Extraction Two independent reviewers extracted diagnostic performance data on ESS systems, including clinical area, number of decision points, sensitivity and specificity. Positive and negative likelihood ratios were calculated for each healthcare ESS system. A likelihood matrix summarizing the various ESS systems' performance was created. Results The described search strategy yielded 1639 articles. Of these, 1497 were excluded on abstract information. After full text review, abstraction and arbitration with a third reviewer, 33 studies met inclusion criteria, reporting 102,611 ESS decision points. The I2 value was high (98.8%), precluding meta-analysis. Performance was variable, with sensitivities ranging from 21%-100% and specificities ranging from 5%-100%. Conclusions There is significant heterogeneity in the diagnostic performance of the available ESS implementations in acute care, stemming from the wide spectrum of different clinical entities and ESS systems. Based on the results, we introduce a conceptual framework using a likelihood ratio matrix for evaluation and meaningful application of future, frontline clinical decision support systems. PMID:23874359
Radio Frequency Interference Detection for Passive Remote Sensing Using Eigenvalue Analysis
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Kim, Seung-Jun; Mohammed, Priscilla N.
2017-01-01
Radio frequency interference (RFI) can corrupt passive remote sensing measurements taken with microwave radiometers. With the increasingly utilized spectrum and the push for larger bandwidth radiometers, the likelihood of RFI contamination has grown significantly. In this work, an eigenvalue-based algorithm is developed to detect the presence of RFI and provide estimates of RFI-free radiation levels. Simulated tests show that the proposed detector outperforms conventional kurtosis-based RFI detectors in the low-to-medium interference-to-noise-power-ratio (INR) regime under continuous wave (CW) and quadrature phase shift keying (QPSK) RFIs.
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ ν, π+ → μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
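Per-event probability assignment of this kind reduces to weighting each process's PDF at the observed values and normalizing. A one-dimensional toy with invented Gaussian energy PDFs and yields (PEN's real PDFs are multidimensional and Monte Carlo derived):

```python
import numpy as np
from scipy.stats import norm

# Toy per-event likelihood assignment: for each event, weight each
# process's PDF at the observed energy by a prior yield and normalize.
# Both the PDF shapes and the yields below are invented.

processes = {
    "pi->e nu":  (norm(69.8, 2.0), 1.2e-4),   # (energy PDF in MeV, prior yield)
    "pi->mu->e": (norm(35.0, 15.0), 1.0),     # crude stand-in for the Michel spectrum
}

def process_probabilities(energy):
    weighted = {name: w * pdf.pdf(energy) for name, (pdf, w) in processes.items()}
    total = sum(weighted.values())
    return {name: v / total for name, v in weighted.items()}

for e in (68.0, 40.0):
    print(e, {k: round(v, 4) for k, v in process_probabilities(e).items()})
```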
On the occurrence of false positives in tests of migration under an isolation with migration model
Hey, Jody; Chung, Yujin; Sethuraman, Arun
2015-01-01
The population genetic study of divergence is often done using a Bayesian genealogy sampler, like those implemented in IMa2 and related programs, and these analyses frequently include a likelihood-ratio test of the null hypothesis of no migration between populations. Cruickshank and Hahn (2014, Molecular Ecology, 23, 3133–3157) recently reported a high rate of false positive test results with IMa2 for data simulated with small numbers of loci under models with no migration and recent splitting times. We confirm these findings and discover that they are caused by a failure of the assumptions underlying likelihood ratio tests that arises when using marginal likelihoods for a subset of model parameters. We also show that for small data sets, with little divergence between samples from two populations, an excellent fit can often be found by a model with a low migration rate and recent splitting time and a model with a high migration rate and a deep splitting time. PMID:26456794
Clinical Evaluation and Physical Exam Findings in Patients with Anterior Shoulder Instability.
Lizzio, Vincent A; Meta, Fabien; Fidai, Mohsin; Makhni, Eric C
2017-12-01
The goal of this paper is to provide an overview of the evaluation of the patient with suspected or known anteroinferior glenohumeral instability. There is a high rate of recurrent subluxations or dislocations in young patients with a history of anterior shoulder dislocation, and recurrent instability increases the likelihood of further damage to the glenohumeral joint. Proper identification and treatment of anterior shoulder instability can dramatically reduce the rate of recurrent dislocation and prevent subsequent complications. Overall, the anterior release or surprise test demonstrates the best sensitivity and specificity for clinically diagnosing anterior shoulder instability, although other tests also have favorable sensitivities, specificities, positive likelihood ratios, negative likelihood ratios, and inter-rater reliabilities. Anterior shoulder instability is a relatively common injury in the young and athletic population. The combination of history and the apprehension, relocation, release or surprise, anterior load, and anterior drawer exam maneuvers will optimize sensitivity and specificity for accurately diagnosing anterior shoulder instability in clinical practice.
Subjective global assessment of nutritional status in children.
Mahdavi, Aida Malek; Ostadrahimi, Alireza; Safaiyan, Abdolrasool
2010-10-01
This study aimed to compare subjective and objective nutritional assessments and to analyse the performance of subjective global assessment (SGA) of nutritional status in diagnosing undernutrition in paediatric patients. One hundred and forty children (aged 2-12 years) hospitalized consecutively in Tabriz Paediatric Hospital from June 2008 to August 2008 underwent subjective assessment using the SGA questionnaire and objective assessment, including anthropometric and biochemical measurements. Agreement between the two assessment methods was analysed by the kappa (κ) statistic. Statistical indicators (sensitivity, specificity, predictive values, error rates, accuracy, powers, likelihood ratios and odds ratio) comparing SGA with the objective assessment method were determined. The overall prevalence of undernutrition according to the SGA (70.7%) was higher than that by objective assessment of nutritional status (48.5%). Agreement between the two evaluation methods was only fair to moderate (κ = 0.336, P < 0.001). The sensitivity, specificity, and positive and negative predictive values of the SGA method for screening undernutrition in this population were 88.235%, 45.833%, 60.606% and 80.487%, respectively. Accuracy, positive power and negative power of the SGA method were 66.428%, 56.074% and 41.25%, respectively. The positive likelihood ratio, negative likelihood ratio and odds ratio of the SGA method were 1.628, 0.256 and 6.359, respectively. Our findings indicated that in assessing the nutritional status of children, there is not a good level of agreement between SGA and objective nutritional assessment. In addition, SGA is a highly sensitive tool for assessing nutritional status and could identify children at risk of developing undernutrition. © 2009 Blackwell Publishing Ltd.
Polcari, J.
2013-08-16
The signal processing concept of signal-to-noise ratio (SNR), in its role as a performance measure, is recast within the more general context of information theory, leading to a series of useful insights. Establishing generalized SNR (GSNR) as a rigorous information theoretic measure inherent in any set of observations significantly strengthens its quantitative performance pedigree while simultaneously providing a specific definition under general conditions. This directly leads to consideration of the log likelihood ratio (LLR): first, as the simplest possible information-preserving transformation (i.e., signal processing algorithm) and subsequently, as an absolute, comparable measure of information for any specific observation exemplar. Furthermore, the information accounting methodology that results permits practical use of both GSNR and LLR as diagnostic scalar performance measurements, directly comparable across alternative system/algorithm designs, applicable at any tap point within any processing string, in a form that is also comparable with the inherent performance bounds due to information conservation.
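The LLR of a single observation is a concrete, comparable scalar. For the simplest case of Gaussian noise-only versus Gaussian signal-plus-noise hypotheses (variances invented for illustration), it can be computed in closed form:

```python
import numpy as np

# Log likelihood ratio (LLR) of one observation under two zero-mean
# Gaussian hypotheses: H1 signal-plus-noise vs H0 noise-only.
# log[N(x;0,s1)/N(x;0,s0)] = 0.5*log(s0/s1) + 0.5*x^2*(1/s0 - 1/s1)

def llr(x, noise_var=1.0, signal_var=4.0):
    s1 = noise_var + signal_var        # total variance under H1
    s0 = noise_var                     # variance under H0
    return 0.5 * (np.log(s0 / s1) + x * x * (1.0 / s0 - 1.0 / s1))

rng = np.random.default_rng(4)
noise_only = rng.normal(0, 1.0, 5)
with_signal = rng.normal(0, np.sqrt(5.0), 5)
print("LLR (noise-only draws):  ", np.round(llr(noise_only), 2))
print("LLR (signal+noise draws):", np.round(llr(with_signal), 2))
```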
Shi, Hong-Bin; Yu, Jia-Xing; Yu, Jian-Xiu; Feng, Zheng; Zhang, Chao; Li, Guang-Yong; Zhao, Rui-Ning; Yang, Xiao-Bo
2017-08-03
Previous studies have revealed the importance of microRNAs' (miRNAs) function as biomarkers in diagnosing human bladder cancer (BC). However, the results are discordant. Consequently, the possibility of miRNAs to be BC biomarkers was summarized in this meta-analysis. In this study, the relevant articles were systematically searched from CBM, PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI). The bivariate model was used to calculate the pooled diagnostic parameters and summary receiver operator characteristic (SROC) curve in this meta-analysis, thereby estimating the whole predictive performance. STATA software was used during the whole analysis. Thirty-one studies from 10 articles, including 1556 cases and 1347 controls, were explored in this meta-analysis. In short, the pooled sensitivity, area under the SROC curve, specificity, positive likelihood ratio, diagnostic odds ratio, and negative likelihood ratio were 0.72 (95%CI 0.66-0.76), 0.80 (0.77-0.84), 0.76 (0.71-0.81), 3.0 (2.4-3.8), 8 (5.0-12.0), and 0.37 (0.30-0.46) respectively. Additionally, sub-group and meta-regression analyses revealed that there were significant differences between ethnicity, miRNA profiling, and specimen sub-groups. These results suggested that Asian population-based studies, multiple-miRNA profiling, and blood-based assays might yield a higher diagnostic accuracy than their counterparts. This meta-analysis demonstrated that miRNAs, particularly multiple miRNAs in the blood, might be novel, useful biomarkers with relatively high sensitivity and specificity and can be used for the diagnosis of BC. However, further prospective studies with more samples should be performed for further validation.
Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.
O'Connor, B P
1999-11-01
This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
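The first statistics these programs produce can be reproduced in a few lines of any language: count lag-1 transitions, compute expected frequencies under independence, and form Haberman's adjusted residuals. A sketch on a toy code stream (not the SAS/SPSS programs themselves):

```python
import numpy as np

# Lag-1 sequential statistics for a stream of categorical codes:
# transitional frequencies, expected frequencies under independence,
# and Haberman's adjusted residuals.

codes = list("ABACABACBABCAABACABB")        # toy coded event stream
labels = sorted(set(codes))
k = len(labels)
idx = {c: i for i, c in enumerate(labels)}

obs = np.zeros((k, k))
for prev, nxt in zip(codes, codes[1:]):     # count lag-1 transitions
    obs[idx[prev], idx[nxt]] += 1

n = obs.sum()
row, col = obs.sum(1, keepdims=True), obs.sum(0, keepdims=True)
exp = row @ col / n                          # expected counts under independence
adj = (obs - exp) / np.sqrt(exp * (1 - row / n) * (1 - col / n))

print("observed transitions:\n", obs)
print("adjusted residuals:\n", np.round(adj, 2))
```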
Nasal Airway Microbiota Profile and Severe Bronchiolitis in Infants: A Case-control Study.
Hasegawa, Kohei; Linnemann, Rachel W; Mansbach, Jonathan M; Ajami, Nadim J; Espinola, Janice A; Petrosino, Joseph F; Piedra, Pedro A; Stevenson, Michelle D; Sullivan, Ashley F; Thompson, Amy D; Camargo, Carlos A
2017-11-01
Little is known about the relationship of airway microbiota with bronchiolitis in infants. We aimed to identify nasal airway microbiota profiles and to determine their association with the likelihood of bronchiolitis in infants. A case-control study was conducted. As a part of a multicenter prospective study, we collected nasal airway samples from 40 infants hospitalized with bronchiolitis. We concurrently enrolled 110 age-matched healthy controls. By applying 16S ribosomal RNA gene sequencing and an unbiased clustering approach to these 150 nasal samples, we identified microbiota profiles and determined the association of microbiota profiles with likelihood of bronchiolitis. Overall, the median age was 3 months and 56% were male. Unbiased clustering of airway microbiota identified 4 distinct profiles: Moraxella-dominant profile (37%), Corynebacterium/Dolosigranulum-dominant profile (27%), Staphylococcus-dominant profile (15%) and mixed profile (20%). Proportion of bronchiolitis was lowest in infants with Moraxella-dominant profile (14%) and highest in those with Staphylococcus-dominant profile (57%), corresponding to an odds ratio of 7.80 (95% confidence interval, 2.64-24.9; P < 0.001). In the multivariable model, the association between Staphylococcus-dominant profile and greater likelihood of bronchiolitis persisted (odds ratio for comparison with Moraxella-dominant profile, 5.16; 95% confidence interval, 1.26-22.9; P = 0.03). By contrast, Corynebacterium/Dolosigranulum-dominant profile group had low proportion of infants with bronchiolitis (17%); the likelihood of bronchiolitis in this group did not significantly differ from those with Moraxella-dominant profile in both unadjusted and adjusted analyses. In this case-control study, we identified 4 distinct nasal airway microbiota profiles in infants. Moraxella-dominant and Corynebacterium/Dolosigranulum-dominant profiles were associated with low likelihood of bronchiolitis, while Staphylococcus-dominant profile was associated with high likelihood of bronchiolitis.
A new maximum-likelihood change estimator for two-pass SAR coherent change detection
Wahl, Daniel E.; Yocky, David A.; Jakowatz, Jr., Charles V.; ...
2016-01-11
In previous research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Previous coherence-based CCD approaches tend to show temporal change when there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper, we derive a new maximum-likelihood (ML) temporal change estimate, the complex reflectance change detection (CRCD) metric, to be used for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, this new estimate produces improved results in the coherent pair collects that we have tested.
Wang, Jiun-Hao; Chang, Hung-Hao
2010-10-26
In contrast to the considerable body of literature concerning disabilities in the general population, little information exists pertaining to disabilities in the farm population. Focusing on the disability issue among the insurants in the Farmers' Health Insurance (FHI) program in Taiwan, this paper examines the associations among socio-demographic characteristics, insured factors, and the introduction of the national health insurance program, as well as the types and payments of disabilities among the insurants. A unique dataset containing 1,594,439 insurants in 2008 was used in this research. A logistic regression model was estimated for the likelihood of receiving disability payments. Focusing on the recipients, a disability payment equation and a disability type equation were estimated, using the ordinary least squares method and a multinomial logistic model, respectively, to investigate the effects of the exogenous factors on the received payments and the likelihood of having different types of disabilities. Age and different job categories are significantly associated with the likelihood of receiving disability payments. Compared to those under age 45, the likelihood is higher among recipients aged 85 and above (the odds ratio is 8.04). Compared to hired workers, the odds ratios for the self-employed and for spouses of farm operators who were not members of farmers' associations are 0.97 and 0.85, respectively. In addition, older insurants are more likely to have eye problems; few differences in disability types are related to insured job categories. Results indicate that older farmers are more likely to receive disability payments, but the likelihood is not much different among insurants of various job categories. Among all of the selected types of disability, the highest likelihood is found for eye disability. In addition, the introduction of the national health insurance program decreased the likelihood of receiving disability payments. The experience in Taiwan can be valuable for other countries that are at an initial stage of implementing a universal health insurance program.
Average Likelihood Methods for Code Division Multiple Access (CDMA)
2014-05-01
… lengths in the range of 2^2 to 2^13 and possibly higher. Keywords: DS/CDMA signals, classification, balanced CDMA load, synchronous CDMA, decision … likelihood ratio test (ALRT). We begin this classification problem by finding the size of the spreading matrix that generated the DS-CDMA signal. As … Theoretical Background. The classification of DS/CDMA signals should not be confused with the problem of multiuser detection. The multiuser detection deals …
Xu, Xu Steven; Yuan, Min; Yang, Haitao; Feng, Yan; Xu, Jinfeng; Pinheiro, Jose
2017-01-01
Covariate analysis based on population pharmacokinetics (PPK) is used to identify clinically relevant factors. The likelihood ratio test (LRT) based on nonlinear mixed effect model fits is currently recommended for covariate identification, whereas individual empirical Bayesian estimates (EBEs) are considered unreliable due to the presence of shrinkage. The objectives of this research were to investigate the type I error rates of the LRT and EBE approaches, to confirm the similarity of power between the LRT and EBE approaches reported previously, and to explore the influence of shrinkage on LRT and EBE inferences. Using an oral one-compartment PK model with a single covariate affecting clearance, we conducted a wide range of simulations according to a two-way factorial design. The results revealed that the EBE-based regression not only provided almost identical power for detecting a covariate effect, but also controlled the false positive rate better than the LRT approach. Shrinkage of EBEs is likely not the root cause of the decrease in power or the inflated false positive rate, although the size of the covariate effect tends to be underestimated at high shrinkage. In summary, contrary to the current recommendations, EBEs may be a better choice for statistical tests in PPK covariate analysis compared to LRT. We proposed a three-step covariate modeling approach for population PK analysis to utilize the advantages of EBEs while overcoming their shortcomings, which not only markedly reduces the run time for population PK analysis, but also provides more accurate covariate tests.
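The LRT referred to here is, in generic form, a comparison of maximized log-likelihoods of nested models against a chi-square reference. A toy sketch with a Gaussian outcome and a single binary covariate (not a nonlinear mixed effects fit):

```python
import numpy as np
from scipy.stats import chi2, norm

# Generic likelihood ratio test for a covariate effect: 2*(ll_full -
# ll_reduced) is referred to a chi-square with df = number of extra
# parameters (here 1). Toy Gaussian data, not a PPK model.

rng = np.random.default_rng(5)
n = 100
covariate = rng.integers(0, 2, n)
y = 0.5 * covariate + rng.normal(size=n)        # simulated effect = 0.5

def gauss_ll(y, mu):
    sigma = np.std(y - mu)                       # ML estimate of sigma given mu
    return norm.logpdf(y, mu, sigma).sum()

ll_reduced = gauss_ll(y, y.mean())               # no covariate: common mean
mu_full = np.where(covariate == 1,
                   y[covariate == 1].mean(), y[covariate == 0].mean())
ll_full = gauss_ll(y, mu_full)                   # group-specific means

stat = 2 * (ll_full - ll_reduced)
print(f"LRT statistic = {stat:.2f}, p = {chi2.sf(stat, df=1):.4f}")
```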
Display size effects in visual search: analyses of reaction time distributions as mixtures.
Reynolds, Ann; Miller, Jeff
2009-05-01
In a reanalysis of data from Cousineau and Shiffrin (2004) and two new visual search experiments, we used a likelihood ratio test to examine the full distributions of reaction time (RT) for evidence that the display size effect is a mixture-type effect that occurs on only a proportion of trials, leaving RT in the remaining trials unaffected, as is predicted by serial self-terminating search models. Experiment 1 was a reanalysis of Cousineau and Shiffrin's data, for which a mixture effect had previously been established by a bimodal distribution of RTs, and the results confirmed that the likelihood ratio test could also detect this mixture. Experiment 2 applied the likelihood ratio test within a more standard visual search task with a relatively easy target/distractor discrimination, and Experiment 3 applied it within a target identification search task within the same types of stimuli. Neither of these experiments provided any evidence for the mixture-type display size effect predicted by serial self-terminating search models. Overall, these results suggest that serial self-terminating search models may generally be applicable only with relatively difficult target/distractor discriminations, and then only for some participants. In addition, they further illustrate the utility of analysing full RT distributions in addition to mean RT.
Weemhoff, M; Kluivers, K B; Govaert, B; Evers, J L H; Kessels, A G H; Baeten, C G
2013-03-01
This study concerns the level of agreement between transperineal ultrasound and evacuation proctography for diagnosing enteroceles and intussusceptions. In a prospective observational study, 50 consecutive women scheduled for evacuation proctography also underwent transperineal ultrasound. Sensitivity, specificity, positive (PPV) and negative predictive value, as well as the positive and negative likelihood ratios of transperineal ultrasound, were assessed against evacuation proctography. To determine the interobserver agreement of transperineal ultrasound, the quadratic weighted kappa was calculated. Furthermore, receiver operating characteristic curves were generated to show the diagnostic capability of transperineal ultrasound. For diagnosing intussusceptions (PPV 1.00), a positive finding on transperineal ultrasound was predictive of an abnormal evacuation proctography, but the sensitivity of transperineal ultrasound for intussusceptions was poor (0.25). For diagnosing enteroceles, the positive likelihood ratio was 2.10 and the negative likelihood ratio 0.85; there were many false-positive findings of enteroceles on ultrasonography (PPV 0.29). The interobserver agreement of the two ultrasonographers, assessed as the quadratic weighted kappa, was 0.44 for diagnosing enteroceles and 0.23 for diagnosing intussusceptions. An intussusception on ultrasound is predictive of an abnormal evacuation proctography. For diagnosing enteroceles, the diagnostic quality of transperineal ultrasound was limited compared with evacuation proctography.
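Editor's note: the positive and negative likelihood ratios quoted throughout these abstracts derive directly from sensitivity and specificity. A minimal sketch; the function name is illustrative, and the sensitivity/specificity pair below is back-solved to reproduce the enterocele figures rather than reported in the abstract.

```python
# Hedged sketch: likelihood ratios from sensitivity and specificity.
def likelihood_ratios(sensitivity, specificity):
    positive_lr = sensitivity / (1.0 - specificity)   # LR+ = sens / (1 - spec)
    negative_lr = (1.0 - sensitivity) / specificity   # LR- = (1 - sens) / spec
    return positive_lr, negative_lr

# Back-solved illustration: sens = 0.252, spec = 0.88 reproduces the
# reported enterocele values LR+ = 2.10 and LR- = 0.85.
plr, nlr = likelihood_ratios(0.252, 0.88)
print(f"LR+ = {plr:.2f}, LR- = {nlr:.2f}")
```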
2013-01-01
Background Achieving Millennium Development Goal 4 is dependent on significantly reducing neonatal mortality. Low birth weight is an underlying factor in most neonatal deaths. In developing countries the missed opportunity for providing life-saving care is mainly a result of failure to identify low birth weight newborns. This study aimed at identifying a reliable anthropometric measurement for screening low birth weight and determining an operational cut-off point in the Uganda setting. This simple measurement is required because of the lack of weighing scales in the community, and sometimes in the health facilities. Methods This was a hospital-based cross-sectional study. Two midwives weighed 706 newborns and measured their foot length, head, chest, thigh and mid-upper arm circumferences within 24 hours after birth. Data were analysed using STATA version 10.0. Correlation with birth weight using Pearson’s correlation coefficient and Receiver Operating Characteristic curve analysis were done to determine the measure that best predicts birth weight. Sensitivity and specificity were calculated for a range of measures to obtain operational cut-off points, and Likelihood Ratios and the Diagnostic Odds Ratio were determined for each cut-off point. Results Birth weights ranged from 1370–5350 grams with a mean of 3050 grams (SD 0.53 kg), and 85 (12%) babies weighed less than 2500 grams. All anthropometric measurements had a positive correlation with birth weight, with foot length showing the strongest (r = 0.76) and thigh circumference the weakest (r = 0.62) correlation. Foot length had the highest predictive value for low birth weight (AUC = 0.97), followed by mid-upper arm circumference (AUC = 0.94). Foot length and chest circumference had the highest sensitivity (94%) and specificity (90%), respectively, for screening low birth weight babies at the selected cut-off points. Chest circumference had a significantly higher positive likelihood ratio (8.7) than any other measure, and foot length had the lowest negative likelihood ratio. Chest circumference and foot length had diagnostic odds ratios of 97 and 77, respectively. Foot length was easier to measure and it involved minimal exposure of the baby to cold. A foot length cut-off of 7.9 cm had a sensitivity of 94% and specificity of 83% for predicting low birth weight. Conclusions This study suggests foot length as the most appropriate predictor of low birth weight, in comparison to chest, head, mid-upper arm and thigh circumference, in the Uganda setting. Use of low-cost and easy-to-use tools to identify low birth weight babies by village health teams could support community efforts to save newborns. PMID:23587297
Identifying Malignant Pleural Effusion by A Cancer Ratio (Serum LDH: Pleural Fluid ADA Ratio).
Verma, Akash; Abisheganaden, John; Light, R W
2016-02-01
We studied the diagnostic potential of serum lactate dehydrogenase (LDH) in malignant pleural effusion through a retrospective analysis of patients hospitalized with exudative pleural effusion in 2013. Serum LDH and the serum LDH: pleural fluid ADA ratio were significantly higher in cancer patients presenting with exudative pleural effusion. In multivariate logistic regression analysis, pleural fluid ADA was negatively correlated, 0.62 (0.45-0.85, p = 0.003), with malignancy, whereas serum LDH, 1.02 (1.0-1.03, p = 0.004), and the serum LDH: pleural fluid ADA ratio, 0.94 (0.99-1.0, p = 0.04), were positively correlated with malignant pleural effusion. For the serum LDH: pleural fluid ADA ratio, a cut-off level of >20 showed a sensitivity and specificity of 0.98 (95% CI 0.92-0.99) and 0.94 (95% CI 0.83-0.98), respectively. The positive likelihood ratio was 32.6 (95% CI 10.7-99.6), while the negative likelihood ratio at this cut-off was 0.03 (95% CI 0.01-0.15). Higher serum LDH and serum LDH: pleural fluid ADA ratio in patients presenting with exudative pleural effusion can distinguish between malignant and non-malignant effusion on the first day of hospitalization. A serum LDH: pleural fluid ADA ratio above 20 is highly predictive of malignancy in patients with exudative pleural effusion (whether lymphocytic or neutrophilic), with high sensitivity and specificity.
Can We Rule Out Meningitis from Negative Jolt Accentuation? A Retrospective Cohort Study.
Sato, Ryota; Kuriyama, Akira; Luthe, Sarah Kyuragi
2017-04-01
Jolt accentuation has been considered the most sensitive physical finding for predicting meningitis. However, only a few studies have assessed the diagnostic accuracy of jolt accentuation. We therefore aimed to evaluate the diagnostic accuracy of jolt accentuation and to investigate whether it can be extended to patients with mildly altered mental status. We performed a single-center, retrospective observational study of patients who presented to the emergency department of a Japanese tertiary care center from January 1, 2010 to March 31, 2016. Jolt accentuation evaluated in patients with fever, headache, and mildly altered mental status with Glasgow Coma Scale no lower than E2 or M4 was defined as "jolt accentuation in the broad sense." Jolt accentuation evaluated in patients with fever, headache, and no altered mental status was defined as "jolt accentuation in the narrow sense." We evaluated the sensitivity and specificity in both groups. Among 118 patients, the sensitivity and specificity of jolt accentuation in the broad sense were 70.7% (95% confidence interval (CI): 58.0%-80.8%) and 36.7% (95% CI: 25.6%-49.3%). The positive likelihood ratio and negative likelihood ratio were 1.12 (95% CI: 0.87-1.44) and 0.80 (95% CI: 0.48-1.34), respectively. Among 108 patients, the sensitivity and specificity of jolt accentuation in the narrow sense were 75.0% (95% CI: 61.8%-84.8%) and 35.1% (95% CI: 24.0%-48.0%). The positive likelihood ratio and negative likelihood ratio were 1.16 (95% CI: 0.90-1.48) and 0.71 (95% CI: 0.40-1.28), respectively. Jolt accentuation itself has limited value in the diagnosis of meningitis, regardless of altered mental status. Therefore, meningitis should not be ruled out by negative jolt accentuation. © 2017 American Headache Society.
Sull, Jae Woong; Liang, Kung-Yee; Hetmanski, Jacqueline B.; Fallin, M. Daniele; Ingersoll, Roxanne G.; Park, Ji Wan; Wu-Chou, Yah-Huei; Chen, Philip K.; Chong, Samuel S.; Cheah, Felicia; Yeow, Vincent; Park, Beyoung Yun; Jee, Sun Ha; Jabs, Ethylin W.; Redett, Richard; Scott, Alan F.; Beaty, Terri H.
2009-01-01
Isolated cleft palate is among the most common human birth defects. The TCOF1 gene has been suggested as a candidate gene for cleft palate based on animal models. This study tests for association between markers in TCOF1 and isolated, nonsyndromic cleft palate using a case-parent trio design considering parent-of-origin effects. Case-parent trios from three populations (comprising a total of 81 case-parent trios) were genotyped for single nucleotide polymorphisms (SNPs) in the TCOF1 gene. We used the transmission disequilibrium test and the transmission asymmetry test on individual SNPs. When all trios were combined and parent of origin was not considered, the odds ratio for transmission of the minor allele, OR(transmission), was significant for SNP rs15251 (OR = 2.88, P = 0.007), as well as for rs2255796 (OR = 2.08, P = 0.03) and rs2569062 (OR = 2.43, P = 0.041). The transmission asymmetry test also revealed one SNP (rs15251) showing excess maternal transmission, significant at the P = 0.005 level (OR = 6.50). Parent-of-origin effects were assessed using the parent-of-origin likelihood ratio test on both SNPs and haplotypes. Although the parent-of-origin likelihood ratio test did not reach significance for this SNP (P = 0.136), analysis of haplotypes of rs2255796 and rs15251 suggested excess maternal transmission. These data therefore suggest TCOF1 may influence risk of cleft palate through a parent-of-origin effect. PMID:18688869
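Editor's note: the transmission disequilibrium test used above is, at heart, a McNemar-type chi-square on transmitted versus non-transmitted alleles from heterozygous parents. A hedged sketch with hypothetical counts; the function name is ours.

```python
# Sketch of the classical TDT: b = transmissions of the minor allele from
# heterozygous parents, c = non-transmissions; (b - c)^2 / (b + c) ~ chi2(1).
from scipy.stats import chi2

def tdt(b, c):
    """Return the TDT statistic and its 1-df chi-square p-value."""
    statistic = (b - c) ** 2 / (b + c)
    return statistic, chi2.sf(statistic, 1)

stat, p = tdt(b=36, c=18)   # illustrative counts, OR(transmission) = b/c = 2.0
print(f"TDT chi2 = {stat:.2f}, p = {p:.4f}")  # 6.00, p ~ 0.0143
```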
Impact of a diagnosis-related group payment system on cesarean section in Korea.
Kim, Seung Ju; Han, Kyu-Tae; Kim, Sun Jung; Park, Eun-Cheol; Park, Hye Ki
2016-06-01
Cesarean sections (CSs) are the most expensive method of delivery, which may affect the physician's choice of treatment when providing health services to patients. We investigated the effects of the diagnosis-related group (DRG)-based payment system on CSs in Korea. We used National Health Insurance claim data from 2011 to 2014, which included 1,289,989 delivery cases at 674 hospitals, and a generalized estimating equation model to evaluate the association between the likelihood of cesarean delivery and the length of the DRG adoption period. A total of 477,309 (37.0%) deliveries were performed by CS. We found that a longer DRG adoption period was associated with a lower odds ratio of CS (odds ratio [OR]: 0.997, 95% CI: 0.996-0.998). In addition, a longer DRG adoption period was associated with a lower odds ratio for CS in hospitals that had voluntarily adopted the DRG system. Similar results were observed for urban hospitals, primiparas, and mothers under 28 or over 33 years of age. Our results suggest that the change in the reimbursement system was associated with a lower likelihood of CS, and the impact of DRG adoption on cesarean delivery can be expected to increase with time. These findings provide evidence that the reimbursement system is associated with the health provider's decision to provide health services for patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Liu, W; Yin, W; Zhang, R; Li, J; Zheng, Y
2015-06-01
The aim of this study was to evaluate the predictive value of panoramic radiography on inferior alveolar nerve (IAN) injury after extraction of the mandibular third molar. Relevant studies up to 1 June 2014 that discussed the association of panoramic radiography signs and post-mandibular third molar extraction IAN injury were systematically retrieved from the databases of PubMed, Embase, Springerlink, Web of Science and Cochrane library. The effect size of pooled sensitivity, specificity, positive likelihood ratios (PLR), negative likelihood ratios (NLR) and diagnostic odds ratio (DOR) with their 95% confidence intervals (CI) were statistically analysed with Meta-disc 1.4 software. Nine articles were included in this meta-analysis. The pooled estimates of sensitivity and specificity were 0.56 (95% CI: 0.50-0.61) and 0.86 (95% CI: 0.84-0.87), respectively. The overall PLR was 3.46 (95% CI: 2.02-5.92) and overall NLR was 0.58 (95% CI: 0.45-0.73). The pooled estimate of DOR was 6.49 (95% CI: 2.92-14.44). The area under the summary receiver operating characteristic curve was 0.7143 ± 0.0604. The meta-analysis indicated that interpretation of panoramic radiography based on darkening of the root had a high specificity in predicting IAN injury after mandibular third molar extraction. However, the ability of this panoramic radiography marker to detect true positive IAN injury was not satisfactory. © 2015 Australian Dental Association.
Ye, Meng; Huang, Tao; Ying, Ying; Li, Jinyun; Yang, Ping; Ni, Chao; Zhou, Chongchang; Chen, Si
2017-01-01
As a tumor suppressor gene, 14-3-3 σ has been reported to be frequently methylated in breast cancer. However, the clinical effect of 14-3-3 σ promoter methylation remains to be verified. This study was performed to assess the clinicopathological significance and diagnostic value of 14-3-3 σ promoter methylation in breast cancer. 14-3-3 σ promoter methylation was found to be notably higher in breast cancer than in benign lesions and normal breast tissue samples. We did not observe that 14-3-3 σ promoter methylation was linked to age, tumor grade, clinical stage, lymph node status, histological subtype, ER status, PR status, HER2 status, or overall survival of patients with breast cancer. The combined sensitivity, specificity, AUC (area under the curve), positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR), and post-test probability (if the pretest probability was 30%) of 14-3-3 σ promoter methylation in blood samples of breast cancer patients vs. healthy subjects were 0.69, 0.99, 0.86, 95, 0.31, 302, and 98%, respectively. Our findings suggest that 14-3-3 σ promoter methylation may be associated with the carcinogenesis of breast cancer and might represent a useful blood-based biomarker for the clinical diagnosis of breast cancer. PMID:27999208
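Editor's note: the reported post-test probability can be reproduced from the pretest probability and the positive likelihood ratio via Bayes' rule on the odds scale, which is the calculation a Fagan nomogram performs graphically. A short check of the abstract's numbers; the function name is ours.

```python
# Hedged sketch: pretest probability -> pretest odds -> posttest odds -> probability.
def posttest_probability(pretest_prob, likelihood_ratio):
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

p = posttest_probability(0.30, 95)   # pretest 30%, PLR 95 as in the abstract
print(f"post-test probability = {p:.3f}")  # ~0.976, i.e. the ~98% reported
```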
Validation of the portable Air-Smart Spirometer
Núñez Fernández, Marta; Pallares Sanmartín, Abel; Mouronte Roibas, Cecilia; Cerdeira Domínguez, Luz; Botana Rial, Maria Isabel; Blanco Cid, Nagore; Fernández Villar, Alberto
2018-01-01
Background The Air-Smart Spirometer is the first portable device accepted by the European Community (EC) that performs spirometric measurements by a turbine mechanism and displays the results on a smartphone or a tablet. Methods In this multicenter, descriptive, cross-sectional prospective study carried out in 2 hospital centers, we compared FEV1, FVC and the FEV1/FVC ratio measured with the Air-Smart Spirometer and a conventional spirometer, and analyzed the ability of this new portable device to detect obstruction. Patients were included for 2 consecutive months. We calculated sensitivity, specificity, positive and negative predictive value (PPV and NPV) and likelihood ratios (LR+, LR-), as well as the kappa index to evaluate the concordance between the two devices for the detection of obstruction. The agreement and relation between the values of FEV1 and FVC in absolute value and the FEV1/FVC ratio measured by both devices were analyzed by calculating the intraclass correlation coefficient (ICC) and the Pearson correlation coefficient (r), respectively. Results 200 patients (100 from each center) were included, with a mean age of 57 (± 14) years; 110 were men (55%). Obstruction was detected by conventional spirometry in 73 patients (40.1%). Using a FEV1/FVC ratio smaller than 0.7 to detect obstruction with the Air-Smart Spirometer, the kappa index was 0.88, sensitivity 90.4%, specificity 97.2%, PPV 95.7%, NPV 93.7%, positive likelihood ratio 32.29, and negative likelihood ratio 0.10. The ICC and r between FEV1, FVC, and the FEV1/FVC ratio measured by the Air-Smart Spirometer and the conventional spirometer were all higher than 0.94. Conclusion The Air-Smart Spirometer is a simple and very precise instrument for detecting obstructive airway diseases. It is easy to use, which could make it especially useful in non-specialized care and in other settings. PMID:29474502
Current-State Constrained Filter Bank for Wald Testing of Spacecraft Conjunctions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2012-01-01
We propose a filter bank consisting of an ordinary current-state extended Kalman filter, and two similar but constrained filters: one is constrained by a null hypothesis that the miss distance between two conjuncting spacecraft is inside their combined hard body radius at the predicted time of closest approach, and one is constrained by an alternative complementary hypothesis. The unconstrained filter is the basis of an initial screening for close approaches of interest. Once the initial screening detects a possibly risky conjunction, the unconstrained filter also governs measurement editing for all three filters, and predicts the time of closest approach. The constrained filters operate only when conjunctions of interest occur. The computed likelihoods of the innovations of the two constrained filters form a ratio for a Wald sequential probability ratio test. The Wald test guides risk mitigation maneuver decisions based on explicit false alarm and missed detection criteria. Since only current-state Kalman filtering is required to compute the innovations for the likelihood ratio, the present approach does not require the mapping of probability density forward to the time of closest approach. Instead, the hard-body constraint manifold is mapped to the filter update time by applying a sigma-point transformation to a projection function. Although many projectors are available, we choose one based on Lambert-style differential correction of the current-state velocity. We have tested our method using a scenario based on the Magnetospheric Multi-Scale mission, scheduled for launch in late 2014. This mission involves formation flight in highly elliptical orbits of four spinning spacecraft equipped with antennas extending 120 meters tip-to-tip. Eccentricities range from 0.82 to 0.91, and close approaches generally occur in the vicinity of perigee, where rapid changes in geometry may occur. Testing the method using two 12,000-case Monte Carlo simulations, we found the method achieved a missed detection rate of 0.1%, and a false alarm rate of 2%.
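Editor's note: the Wald sequential probability ratio test at the core of this scheme accumulates log-likelihood ratios against two thresholds fixed by explicit false-alarm and missed-detection rates. A minimal, generic sketch; the function, the stopping labels, and the specific alpha/beta values (chosen to echo the abstract's 2% false-alarm and 0.1% missed-detection figures) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of Wald's SPRT on a stream of log-likelihood-ratio increments.
import math

def sprt(loglik_ratios, alpha=0.02, beta=0.001):
    """Return 'H1', 'H0', or 'continue' after scanning increments in order."""
    upper = math.log((1.0 - beta) / alpha)   # crossing: accept H1 (risky conjunction)
    lower = math.log(beta / (1.0 - alpha))   # crossing: accept H0 (safe miss)
    total = 0.0
    for increment in loglik_ratios:
        total += increment
        if total >= upper:
            return "H1"
        if total <= lower:
            return "H0"
    return "continue"   # more data needed before either threshold is reached
```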
Sinharay, Sandip; Jensen, Jens Ledet
2018-06-27
In educational and psychological measurement, researchers and/or practitioners are often interested in examining whether the ability of an examinee is the same over two sets of items. Such problems can arise in measurement of change, detection of cheating on unproctored tests, erasure analysis, detection of item preknowledge, etc. Traditional frequentist approaches that are used in such problems include the Wald test, the likelihood ratio test, and the score test (e.g., Fischer, Appl Psychol Meas 27:3-26, 2003; Finkelman, Weiss, & Kim-Kang, Appl Psychol Meas 34:238-254, 2010; Glas & Dagohoy, Psychometrika 72:159-180, 2007; Guo & Drasgow, Int J Sel Assess 18:351-364, 2010; Klauer & Rettig, Br J Math Stat Psychol 43:193-206, 1990; Sinharay, J Educ Behav Stat 42:46-68, 2017). This paper shows that approaches based on higher-order asymptotics (e.g., Barndorff-Nielsen & Cox, Inference and asymptotics. Springer, London, 1994; Ghosh, Higher order asymptotics. Institute of Mathematical Statistics, Hayward, 1994) can also be used to test for the equality of the examinee ability over two sets of items. The modified signed likelihood ratio test (e.g., Barndorff-Nielsen, Biometrika 73:307-322, 1986) and the Lugannani-Rice approximation (Lugannani & Rice, Adv Appl Prob 12:475-490, 1980), both of which are based on higher-order asymptotics, are shown to provide some improvement over the traditional frequentist approaches in three simulations. Two real data examples are also provided.
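Editor's note: for orientation, the signed likelihood root and its modification referred to above take the following form; the notation is ours, and the exact adjustment term depends on the model.

```latex
% Sketch of the statistics involved (our notation; the precise adjustment
% is model-dependent and an assumption here). The signed likelihood root
% for testing theta = theta_0 is
\[
  r(\theta_0) \;=\; \operatorname{sign}(\hat{\theta}-\theta_0)\,
  \sqrt{2\left[\ell(\hat{\theta})-\ell(\theta_0)\right]},
\]
% and the modified signed likelihood ratio statistic of Barndorff-Nielsen is
\[
  r^{*}(\theta_0) \;=\; r(\theta_0) \;+\;
  \frac{1}{r(\theta_0)}\,\log\!\frac{u(\theta_0)}{r(\theta_0)},
\]
% where u is a Wald-type adjustment; r* is standard normal to higher-order
% accuracy than r, which is the source of the reported improvement.
```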
Validation of a school-based amblyopia screening protocol in a kindergarten population.
Casas-Llera, Pilar; Ortega, Paula; Rubio, Inmaculada; Santos, Verónica; Prieto, María J; Alio, Jorge L
2016-08-04
To validate a school-based amblyopia screening program model by comparing its outcomes to those of a state-of-the-art conventional ophthalmic clinic examination in a kindergarten population of children between the ages of 4 and 5 years. An amblyopia screening protocol, consisting of visual acuity measurement using Lea charts, an ocular alignment test, ocular motility assessment, and stereoacuity with the TNO random-dot test, was performed at school in a pediatric 4- to 5-year-old population by qualified healthcare professionals. The outcomes were validated in a selected group by a conventional ophthalmologic examination performed in a fully equipped ophthalmologic center. The ophthalmologic evaluation was used to confirm whether or not children were correctly classified by the screening protocol, and the sensitivity and specificity of the test model to detect amblyopia were established. A total of 18,587 4- to 5-year-old children underwent the amblyopia screening program during the 2010-2011 school year, and a population of 100 children was selected for the ophthalmologic validation screening. A sensitivity of 89.3%, specificity of 93.1%, positive predictive value of 83.3%, negative predictive value of 95.7%, positive likelihood ratio of 12.86, and negative likelihood ratio of 0.12 were obtained for the amblyopia screening validation model. The amblyopia screening protocol tested in this investigation shows high sensitivity and specificity in detecting high-risk cases of amblyopia compared with the standard ophthalmologic examination. This screening program may be highly relevant for amblyopia screening at schools.
Rampersaud, E; Morris, R W; Weinberg, C R; Speer, M C; Martin, E R
2007-01-01
Genotype-based likelihood-ratio tests (LRT) of association that examine maternal and parent-of-origin effects have been previously developed in the framework of log-linear and conditional logistic regression models. In the situation where parental genotypes are missing, the expectation-maximization (EM) algorithm has been incorporated in the log-linear approach to allow incomplete triads to contribute to the LRT. We present an extension to this model which we call the Combined_LRT that incorporates additional information from the genotypes of unaffected siblings to improve assignment of incompletely typed families to mating type categories, thereby improving inference of missing parental data. Using simulations involving a realistic array of family structures, we demonstrate the validity of the Combined_LRT under the null hypothesis of no association and provide power comparisons under varying levels of missing data and using sibling genotype data. We demonstrate the improved power of the Combined_LRT compared with the family-based association test (FBAT), another widely used association test. Lastly, we apply the Combined_LRT to a candidate gene analysis in Autism families, some of which have missing parental genotypes. We conclude that the proposed log-linear model will be an important tool for future candidate gene studies, for many complex diseases where unaffected siblings can often be ascertained and where epigenetic factors such as imprinting may play a role in disease etiology.
Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier
2010-05-01
PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
Liu, Bo-Ji; Li, Dan-Dan; Xu, Hui-Xiong; Guo, Le-Hang; Zhang, Yi-Feng; Xu, Jun-Mei; Liu, Chang; Liu, Lin-Na; Li, Xiao-Long; Xu, Xiao-Hong; Qu, Shen; Xing, Mingzhao
2015-12-01
The aim of this study was to evaluate the diagnostic performance of quantitative shear wave velocity (SWV) measurement on acoustic radiation force impulse (ARFI) elastography for differentiation between benign and malignant thyroid nodules using meta-analysis. The databases of PubMed and the Web of Science were searched. Studies published in English on assessment of the sensitivity and specificity of ARFI elastography for the differentiation of thyroid nodules were collected. The quantitative measurement of ARFI elastography was evaluated by SWV (m/s). Meta-Disc Version 1.4 software was used to describe and calculate the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio and summary receiver operating characteristic curves. We analyzed a total of 13 studies, which included 1,854 thyroid nodules (including 1,339 benign nodules and 515 malignant nodules) from 1,641 patients. The summary sensitivity and specificity for differential diagnosis between benign and malignant thyroid nodules by SWV were 0.81 (95% confidence interval [CI]: 0.77-0.84) and 0.84 (95% CI: 0.81-0.86), respectively. The pooled positive and negative likelihood ratios were 5.21 (95% CI: 3.56-7.62) and 0.23 (95% CI: 0.17-0.32), respectively. The pooled diagnostic odds ratio was 27.53 (95% CI: 14.58-52.01), and the area under the summary receiver operating characteristic curve was 0.91 (Q* = 0.84). In conclusion, SWV measurement on ARFI elastography has high sensitivity and specificity for differential diagnosis between benign and malignant thyroid nodules and can be used in combination with conventional ultrasound. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Srkalović Imširagić, Azijada; Begić, Dražen; Šimičević, Livija; Bajić, Žarko
2017-02-01
Following childbirth, a vast number of women experience some degree of mood swings, while some experience symptoms of postpartum posttraumatic stress disorder. Using a biopsychosocial model, the primary aim of this study was to identify predictors of posttraumatic stress disorder and its symptomatology following childbirth. This observational, longitudinal study included 372 postpartum women. In order to explore biopsychosocial predictors, participants completed several questionnaires 3-5 days after childbirth: the Impact of Events Scale Revised, the Big Five Inventory, The Edinburgh Postnatal Depression Scale, breastfeeding practice and social and demographic factors. Six to nine weeks after childbirth, participants re-completed the questionnaires regarding psychiatric symptomatology and breastfeeding practice. Using a multivariate level of analysis, the predictors that increased the likelihood of postpartum posttraumatic stress disorder symptomatology at the first study phase were: emergency caesarean section (odds ratio 2.48; confidence interval 1.13-5.43) and neuroticism personality trait (odds ratio 1.12; confidence interval 1.05-1.20). The predictor that increased the likelihood of posttraumatic stress disorder symptomatology at the second study phase was the baseline Impact of Events Scale Revised score (odds ratio 12.55; confidence interval 4.06-38.81). Predictors that decreased the likelihood of symptomatology at the second study phase were life in a nuclear family (odds ratio 0.27; confidence interval 0.09-0.77) and life in a city (odds ratio 0.29; confidence interval 0.09-0.94). Biopsychosocial theory is applicable to postpartum psychiatric disorders. In addition to screening for depression amongst postpartum women, there is a need to include other postpartum psychiatric symptomatology screenings in routine practice. Copyright © 2016 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
Kim, Hye Jeong; Kwak, Mi Kyung; Choi, In Ho; Jin, So-Young; Park, Hyeong Kyu; Byun, Dong Won; Suh, Kyoil; Yoo, Myung Hi
2018-02-23
The aim of this study was to address the role of the elasticity index as a possible predictive marker for detecting papillary thyroid carcinoma (PTC) and to quantitatively assess shear wave elastography (SWE) as a tool for differentiating PTC from benign thyroid nodules. One hundred and nineteen patients with thyroid nodules undergoing SWE before ultrasound-guided fine needle aspiration and core needle biopsy were analyzed. The mean (EMean), minimum (EMin), maximum (EMax), and standard deviation (ESD) of the SWE elasticity indices were measured. Among 105 nodules, 14 were PTC and 91 were benign. The EMean, EMin, and EMax values were significantly higher in PTCs than in benign nodules (EMean 37.4 in PTC vs. 23.7 in benign nodules, p = 0.005; EMin 27.9 vs. 17.8, p = 0.034; EMax 46.7 vs. 31.5, p < 0.001). The EMean, EMin, and EMax were significantly associated with PTC, with diagnostic odds ratios varying from 6.74 to 9.91, high specificities (86.4%, 86.4%, and 88.1%, respectively), and positive likelihood ratios (4.21, 3.69, and 4.82, respectively). The ESD values were significantly higher in PTC than in benign nodules (6.3 vs. 2.6, p < 0.001). ESD had the highest specificity (96.6%) when applied with a cut-off value of 6.5 kPa, with a positive likelihood ratio of 14.75 and a diagnostic odds ratio of 28.50. The shear elasticity index ESD, with higher likelihood ratios for PTC, will probably identify nodules that have a high potential for malignancy. It may help to identify and select malignant nodules while reducing unnecessary fine needle aspiration and core needle biopsies of benign nodules.
van Es, Andrew; Wiarda, Wim; Hordijk, Maarten; Alberink, Ivo; Vergeer, Peter
2017-05-01
For the comparative analysis of glass fragments, a method using Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) is in use at the NFI, giving measurements of the concentration of 18 elements. An important question is how to evaluate the results as evidence that a glass sample originates from a known glass source or from an arbitrary different glass source. One approach is the use of matching criteria, e.g. based on a t-test or overlap of confidence intervals. An important drawback of this method is the fact that the rarity of the glass composition is not taken into account. A similar match can have widely different evidential values. In addition the use of fixed matching criteria can give rise to a "fall off the cliff" effect: small differences may result in a match or a non-match. In this work a likelihood ratio system is presented, largely based on the two-level model as proposed by Aitken and Lucy [1], and Aitken, Zadora and Lucy [2]. Results show that the output from the two-level model gives good discrimination between same and different source hypotheses, but a post-hoc calibration step is necessary to improve the accuracy of the likelihood ratios. Subsequently, the robustness and performance of the LR system are studied. Results indicate that the output of the LR system is robust to the sample properties of the dataset used for calibration. Furthermore, the empirical upper and lower bound method [3], designed to deal with extrapolation errors in the density models, results in minimum and maximum values of the LR outputted by the system of 3.1×10⁻³ and 3.4×10⁴. Calibration of the system, as measured by empirical cross-entropy, shows good behavior over the complete prior range. Rates of misleading evidence are small: for same-source comparisons, 0.3% of LRs support a different-source hypothesis; for different-source comparisons, 0.2% supports a same-source hypothesis. The authors use the LR system in reporting of glass cases to support expert opinion in the interpretation of glass evidence for origin of source questions. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
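Editor's note: the two-level model underlying such likelihood ratio systems is easiest to see in a univariate caricature: between-source variation about a population mean, plus within-source measurement error. The sketch below is a deliberate simplification under stated assumptions (the NFI system is multivariate over 18 elements with kernel-density refinements and a calibration step that this omits); all names and numbers are illustrative.

```python
# Hedged univariate sketch of a two-level likelihood ratio:
# x_i = theta + e_i, with theta ~ N(mu, tau2) between sources and
# e_i ~ N(0, sigma2) within a source.
import numpy as np
from scipy.stats import multivariate_normal, norm

def two_level_lr(x_control, x_recovered, mu, tau2, sigma2):
    """LR = f(x_c, x_r | same source) / [f(x_c | diff) * f(x_r | diff)]."""
    var = tau2 + sigma2
    # Same source: a shared true value theta induces covariance tau2.
    cov_same = np.array([[var, tau2], [tau2, var]])
    num = multivariate_normal.pdf([x_control, x_recovered],
                                  mean=[mu, mu], cov=cov_same)
    # Different sources: the two measurements are independent.
    den = norm.pdf(x_control, mu, np.sqrt(var)) * norm.pdf(x_recovered, mu, np.sqrt(var))
    return num / den

# Illustrative values only: two nearby measurements yield LR > 1.
print(two_level_lr(1.02, 1.03, mu=1.0, tau2=0.01, sigma2=0.0004))
```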
Mo, Phoenix K H; Lau, Joseph T F
2015-12-01
This study examined illness representations of the novel influenza, Human Swine Influenza A (H1N1), and their association with H1N1 preventive behaviors among 300 Chinese adults, using a population-based randomized telephone survey. Results showed that relatively few participants thought H1N1 would have serious consequences (12%-15.7%) and few showed negative emotional responses toward H1N1 (9%-24.7%). The majority of the participants thought H1N1 could be controlled by treatment (70.4%-72.7%). Multiple logistic regression analyses showed that treatment control (odds ratio = 1.78) and psychological attribution (odds ratio = .75) were associated with intention to take up influenza vaccination. Emotional representations were associated with a lower likelihood of wearing a face mask (odds ratio = .77) and of hand washing (odds ratio = .67). Results confirm that illness representation variables are associated with H1N1 preventive behaviors. © The Author(s) 2014.
Yang, Dong; James, Stefan; de Faire, Ulf; Alfredsson, Lars; Jernberg, Tomas; Moradi, Tahereh
2013-01-01
To examine the relationship between sex, country of birth, and level of education as an indicator of socioeconomic position, and the likelihood of treatment in a coronary care unit (CCU) for a first-time myocardial infarction. Nationwide register-based study. Sweden. 199,906 patients (114,387 men and 85,519 women) of all ages who were admitted to hospital for first-time myocardial infarction between 2001 and 2009. Admission to a coronary care unit due to myocardial infarction. Despite increasing access to coronary care units over time, the proportion of women treated in a coronary care unit was 13% lower than that of men. Compared with men, the multivariable-adjusted odds ratio among women was 0.80 (95% confidence interval 0.77 to 0.82). This lower proportion of women treated in a CCU varied by age, year of diagnosis and country of birth. Overall, there was no evidence of a difference in the likelihood of treatment in a coronary care unit between Sweden-born and foreign-born patients. Compared with patients with high education, the adjusted odds ratio among patients with a low level of education was 0.93 (95% confidence interval 0.89 to 0.96). Foreign-born and Sweden-born first-time myocardial infarction patients had an equal opportunity of being treated in a coronary care unit in Sweden, in contrast to the situation in many other countries with large immigrant populations. However, the apparently lower rate of coronary care unit admission after first-time myocardial infarction among women and patients of low socioeconomic position warrants further investigation.
Robert, Jérôme; Pantel, Alix; Merens, Audrey; Meiller, Elodie; Lavigne, Jean-Philippe; Nicolas-Chanoine, Marie-Hélène
2017-01-17
Carbapenemase-producing Enterobacteriaceae (CPE) are difficult to identify among carbapenem non-susceptible Enterobacteriaceae (NSE). We designed phenotypic strategies giving priority to high sensitivity for screening putative CPE before further testing. The presence of carbapenemase-encoding genes in ertapenem NSE (MIC > 0.5 mg/l) consecutively isolated in 80 French laboratories between November 2011 and April 2012 was determined by the Check-MDR-CT103 array method. Using the Mueller-Hinton (MH) disk diffusion method, clinical diameter breakpoints of carbapenems other than ertapenem, piperacillin+tazobactam, ticarcillin+clavulanate and cefepime, as well as diameter cut-offs for these antibiotics and temocillin, were evaluated alone or in combination to determine their performance (sensitivity, specificity, positive and negative likelihood ratios) for identifying putative CPE among these ertapenem-NSE isolates. To increase the screening specificity, these antibiotics were also tested on cloxacillin-containing MH when carbapenem-NSE isolates belonged to species producing chromosomal cephalosporinase (AmpC), Escherichia coli excepted. Of the 349 ertapenem NSE, 52 (14.9%) were CPE, including 39 producing an OXA-48 group carbapenemase, eight KPC and five MBL. A screening strategy based on the following diameter cut-offs, ticarcillin+clavulanate <15 mm, temocillin <15 mm, meropenem or imipenem <22 mm, and cefepime <26 mm, showed 100% sensitivity and 68.1% specificity, with the best combination of likelihood ratios. The specificity increased when a diameter cut-off <32 mm for imipenem (76.1%) or meropenem (78.8%), further tested on cloxacillin-containing MH, was added to the previous strategy for AmpC-producing isolates. The proposed strategies, which increase the likelihood of CPE among ertapenem-NSE isolates, should be considered as a surrogate for carbapenemase production before further CPE confirmatory testing.
Matthews, Lynn T; Ribaudo, Heather B; Kaida, Angela; Bennett, Kara; Musinguzi, Nicholas; Siedner, Mark J; Kabakyenga, Jerome; Hunt, Peter W; Martin, Jeffrey N; Boum, Yap; Haberer, Jessica E; Bangsberg, David R
2016-04-01
HIV-infected women risk sexual and perinatal HIV transmission during conception, pregnancy, childbirth, and breastfeeding. We compared HIV-1 RNA suppression and medication adherence across periconception, pregnancy, and postpartum periods, among women on antiretroviral therapy (ART) in Uganda. We analyzed data from women in a prospective cohort study, aged 18-49 years, enrolled at ART initiation and with ≥1 pregnancy between 2005 and 2011. Participants were seen quarterly. The primary exposure of interest was pregnancy period, including periconception (3 quarters before pregnancy), pregnancy, postpartum (6 months after pregnancy outcome), or nonpregnancy related. Regression models using generalized estimating equations compared the likelihood of HIV-1 RNA ≤400 copies per milliliter, <80% average adherence based on electronic pill caps (medication event monitoring system), and likelihood of 72-hour medication gaps across each period. One hundred eleven women contributed 486 person-years of follow-up. Viral suppression was present at 89% of nonpregnancy, 97% of periconception, 93% of pregnancy, and 89% of postpartum visits, and was more likely during periconception (adjusted odds ratio, 2.15) compared with nonpregnant periods. Average ART adherence was 90% [interquartile range (IQR), 70%-98%], 93% (IQR, 82%-98%), 92% (IQR, 72%-98%), and 88% (IQR, 63%-97%) during nonpregnant, periconception, pregnant, and postpartum periods, respectively. Average adherence <80% was less likely during periconception (adjusted odds ratio, 0.68), and 72-hour gaps per 90 days were less frequent during periconception (adjusted relative risk, 0.72) and more frequent during postpartum (adjusted relative risk, 1.40). Women with pregnancy were virologically suppressed at most visits, with an increased likelihood of suppression and high adherence during periconception follow-up. Increased frequency of 72-hour gaps suggests a need for increased adherence support during postpartum periods.
NASA Technical Reports Server (NTRS)
Bueno, R. A.
1977-01-01
Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft applications are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found to be satisfactory, but problems may arise in correctly identifying the mode of a failure. These issues are closely examined, as is the sensitivity of GLR to modeling errors. The advantages and disadvantages of the technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
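Editor's note: a scalar caricature of the GLR idea may help here. With white, unit-variance innovations, a step bias beginning at time k is detected by maximizing a quadratic statistic over candidate onset times. This hedged sketch is not the flight-control implementation, which uses Kalman filter innovations and failure signatures; the function name is ours.

```python
# Hedged sketch: GLR detection of a step-mean change in a white Gaussian
# sequence. For onset k, the bias MLE is the tail mean and the GLR statistic
# is (sum of tail)^2 / (tail length); the detector maximizes over k.
import numpy as np

def glr_mean_shift(innovations):
    """Return the max GLR statistic and the most likely onset index."""
    y = np.asarray(innovations, dtype=float)
    n = len(y)
    best_stat, best_k = 0.0, None
    for k in range(n):
        tail = y[k:]
        stat = tail.sum() ** 2 / len(tail)
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_stat, best_k   # compare best_stat to a chi-square-based threshold
```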
Osteoporosis, vitamin C intake, and physical activity in Korean adults aged 50 years and over
Kim, Min Hee; Lee, Hae-Jeung
2016-01-01
[Purpose] To investigate associations between vitamin C intake, physical activity, and osteoporosis among Korean adults aged 50 and over. [Subjects and Methods] This study was based on bone mineral density measurement data from the 2008 to 2011 Korean National Health and Nutritional Examination Survey. The study sample comprised 3,047 subjects. The normal group was defined as T-score ≥ −1.0, and the osteoporosis group as T-score ≤ −2.5. The odds ratios for osteoporosis were assessed by logistic regression of each vitamin C intake quartile. [Results] Compared to the lowest quartile of vitamin C intake, the other quartiles showed a lower likelihood of osteoporosis after adjusting for age and gender. In the multi-variate model, the odds ratio for the likelihood of developing osteoporosis in the non-physical activity group significantly decreased to 0.66, 0.57, and 0.46 (p for trend = 0.0046). However, there was no significant decrease (0.98, 1.00, and 0.97) in the physical activity group. [Conclusion] Higher vitamin C intake levels were associated with a lower risk of osteoporosis in Korean adults aged over 50 with low levels of physical activity. However, no association was seen between vitamin C intake and osteoporosis risk in those with high physical activity levels. PMID:27134348
Posada, David; Buckley, Thomas R
2004-10-01
Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
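Editor's note: the AIC-based model averaging advocated above weights each candidate model by its Akaike weight, w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2) with Δ_i = AIC_i - AIC_min. A minimal sketch with illustrative AIC scores; the function name is ours.

```python
# Hedged sketch: Akaike weights for model averaging.
import numpy as np

def akaike_weights(aic_values):
    """Return normalized Akaike weights for a set of candidate models."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()          # AIC differences from the best model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

print(akaike_weights([3100.2, 3102.8, 3110.5]))  # illustrative AIC scores
```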
Irwin, R John; Irwin, Timothy C
2011-06-01
Making clinical decisions on the basis of diagnostic tests is an essential feature of medical practice and the choice of the decision threshold is therefore crucial. A test's optimal diagnostic threshold is the threshold that maximizes expected utility. It is given by the product of the prior odds of a disease and a measure of the importance of the diagnostic test's sensitivity relative to its specificity. Choosing this threshold is the same as choosing the point on the Receiver Operating Characteristic (ROC) curve whose slope equals this product. We contend that a test's likelihood ratio is the canonical decision variable and contrast diagnostic thresholds based on likelihood ratio with two popular rules of thumb for choosing a threshold. The two rules are appealing because they have clear graphical interpretations, but they yield optimal thresholds only in special cases. The optimal rule can be given similar appeal by presenting indifference curves, each of which shows a set of equally good combinations of sensitivity and specificity. The indifference curve is tangent to the ROC curve at the optimal threshold. Whereas ROC curves show what is feasible, indifference curves show what is desirable. Together they show what should be chosen. Copyright © 2010 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
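Editor's note: the optimal-threshold condition described above can be written in one line. The utility notation below is ours, since the abstract fixes no symbols: with pretest probability p, a positive call is optimal when the test result's likelihood ratio exceeds S*, which is also the slope of the ROC curve at the optimal operating point.

```latex
% Sketch (our notation; an assumption beyond the abstract's wording).
\[
  S^{*} \;=\; \frac{1-p}{p}\,\cdot\,
              \frac{U_{TN}-U_{FP}}{U_{TP}-U_{FN}},
\]
% where U_TP, U_FP, U_TN, U_FN are the utilities of the four decision
% outcomes; declare the test positive whenever LR(x) >= S*.
```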
Li, Bo; Sun, Zhiqiang; Li, Xiaohan; Li, Xiaoxi; Wang, Han; Chen, Weijiao; Chen, Peng; Qiao, Mengran; Mao, Yuanli
2017-04-01
There have been many inconsistent reports about the performance of histidine-rich protein 2 (HRP2) and lactate dehydrogenase (LDH) antigens as rapid diagnostic tests (RDTs) for the diagnosis of past Plasmodium falciparum infections. This meta-analysis was performed to determine the performance of pfHRP2 versus pLDH antigen RDTs in the detection of P. falciparum . After a systematic review of related studies, Meta-DiSc 1.4 software was used to calculate the pooled sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR). Forest plots and summary receiver operating characteristic curve (SROC) analysis were used to summarize the overall test performance. Fourteen studies which met the inclusion criteria were included in the meta-analysis. The summary performances for pfHRP2- and pLDH-based tests in the diagnosis of P. falciparum infections were as follows: pooled sensitivity, 96.3% (95.8-96.7%) vs. 82.6% (81.7-83.5%); specificity, 86.1% (85.3-86.8%) vs. 95.9% (95.4-96.3%); diagnostic odds ratio (DOR), 243.31 (97.679-606.08) vs. 230.59 (114.98-462.42); and area under ROCs, 0.9822 versus 0.9849 (all p < 0.001). The two RDTs performed satisfactorily for the diagnosis of P. falciparum , but the pLDH tests had higher specificity, whereas the pfHRP2 tests had better sensitivity. The pfHRP2 tests had slightly greater accuracy compared to the pLDH tests. A combination of both antigens might be a more reliable approach for the diagnosis of malaria.
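Editor's note: pooled sensitivity and specificity of the kind reported above can be illustrated crudely by summing 2x2 cells across studies. This is a simplification; Meta-DiSc's actual weighting differs, and the data below are hypothetical.

```python
# Hedged sketch: crude fixed-effect pooling of diagnostic 2x2 tables.
def pooled_sens_spec(studies):
    """studies: list of (TP, FP, FN, TN) tuples, one per study."""
    TP = sum(s[0] for s in studies)
    FP = sum(s[1] for s in studies)
    FN = sum(s[2] for s in studies)
    TN = sum(s[3] for s in studies)
    return TP / (TP + FN), TN / (TN + FP)

sens, spec = pooled_sens_spec([(95, 12, 5, 88), (180, 25, 9, 160)])
print(f"pooled sensitivity = {sens:.3f}, pooled specificity = {spec:.3f}")
```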
The continuum fusion theory of signal detection applied to a bi-modal fusion problem
NASA Astrophysics Data System (ADS)
Schaum, A.
2011-05-01
A new formalism has been developed that produces detection algorithms for model-based problems in which one or more parameter values are unknown. Continuum Fusion can be used to generate different flavors of algorithm for any composite hypothesis testing problem. The methodology is defined by a fusion logic that can be translated into max/min conditions. Here it is applied to a simple sensor fusion model, but one for which the generalized likelihood ratio test is intractable. By contrast, a fusion-based response to the same problem can be devised that is solvable in closed form and represents a good approximation to the GLR test.
Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai
2014-11-10
Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data based on an equal correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on score statistic performs well generally and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least square estimate and logarithmic transformation with Mantel-Haenszel estimate are recommended as they do not involve any computation of maximum likelihood estimates requiring iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.
Nadeau, Mélissa; Rosas-Arellano, M Patricia; Gurr, Kevin R; Bailey, Stewart I; Taylor, David C; Grewal, Ruby; Lawlor, D Kirk; Bailey, Chris S
2013-12-01
Intermittent claudication can be neurogenic or vascular. Physicians use a profile based on symptom attributes to differentiate the 2 types of claudication, and this guides their investigations for diagnosis of the underlying pathology. We evaluated the validity of these symptom attributes in differentiating neurogenic from vascular claudication. Patients with a diagnosis of lumbar spinal stenosis (LSS) or peripheral vascular disease (PVD) who reported claudication answered 14 questions characterizing their symptoms. We determined the sensitivity, specificity and positive and negative likelihood ratios (PLR and NLR) for neurogenic and vascular claudication for each symptom attribute. We studied 53 patients. The most sensitive symptom attribute to rule out LSS was the absence of "triggering of pain with standing alone" (sensitivity 0.97, NLR 0.050). Pain alleviators and symptom location data showed weak clinical significance for LSS and PVD. Constellations of symptoms yielded the strongest associations: patients with a positive shopping cart sign whose symptoms were located above the knees, triggered with standing alone and relieved with sitting had a strong likelihood of neurogenic claudication (PLR 13). Patients with symptoms in the calf that were relieved with standing alone had a strong likelihood of vascular claudication (PLR 20.0). The classic symptom attributes used to differentiate neurogenic from vascular claudication are at best weakly valid independently. However, certain constellations of symptoms are much more indicative of etiology. These results can guide general practitioners in their evaluation of and investigation for claudication.
76 FR 18221 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-01
... Ratio Standard for a State's Individual Market; Use: Under section 2718 of the Public Health Service Act... data allows for the calculation of an issuer's medical loss ratio (MLR) by market (individual, small... whether market destabilization has a high likelihood of occurring. Form Number: CMS-10361 (OMB Control No...
Silveira, Maria J; Copeland, Laurel A; Feudtner, Chris
2006-07-01
We tested whether local cultural and social values regarding the use of health care are associated with the likelihood of home death, using variation in local rates of home births as a proxy for geographic variation in these values. For each of 351,110 adult decedents in Washington state who died from 1989 through 1998, we calculated the home birth rate in each zip code during the year of death and then used multivariate regression modeling to estimate the relation between the likelihood of home death and the local rate of home births. Individuals residing in local areas with higher home birth rates had greater adjusted likelihood of dying at home (odds ratio [OR]=1.04 for each percentage point increase in home birth rate; 95% confidence interval [CI] = 1.03, 1.05). Moreover, the likelihood of dying at home increased with local wealth (OR=1.04 per $10,000; 95% CI=1.02, 1.06) but decreased with local hospital bed availability (OR=0.96 per 1000 beds; 95% CI=0.95, 0.97). The likelihood of home death is associated with local rates of home births, suggesting the influence of health care use preferences.
Statistics and Discoveries at the LHC (1/4)
Cowan, Glen
2018-02-09
The lectures will give an introduction to statistics as applied in particle physics and will provide all the necessary basics for data analysis at the LHC. Special emphasis will be placed on the problems and questions that arise when searching for new phenomena, including p-values, discovery significance, limit-setting procedures, and the treatment of small signals in the presence of large backgrounds. Specific issues that will be addressed include the advantages and drawbacks of different statistical test procedures (cut-based, likelihood-ratio, etc.), the look-elsewhere effect and the treatment of systematic uncertainties.
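Editor's note: the "discovery significance" referred to in these lectures is the one-sided Gaussian conversion of a p-value, Z = Phi^{-1}(1 - p). A two-line check; the 5-sigma discovery convention corresponds to p ≈ 2.9×10⁻⁷.

```python
# Hedged sketch: converting between p-values and Gaussian significance.
from scipy.stats import norm

def significance(p_value):
    """One-sided significance Z for a given p-value."""
    return norm.isf(p_value)   # inverse survival function of the standard normal

print(significance(2.87e-7))   # ~5.0 sigma
print(norm.sf(5.0))            # ~2.87e-7, the p-value at 5 sigma
```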
Statistics and Discoveries at the LHC (3/4)
Cowan, Glen
2018-02-19
The lectures will give an introduction to statistics as applied in particle physics and will provide all the necessary basics for data analysis at the LHC. Special emphasis will be placed on the problems and questions that arise when searching for new phenomena, including p-values, discovery significance, limit-setting procedures, and the treatment of small signals in the presence of large backgrounds. Specific issues that will be addressed include the advantages and drawbacks of different statistical test procedures (cut-based, likelihood-ratio, etc.), the look-elsewhere effect and the treatment of systematic uncertainties.
Statistics and Discoveries at the LHC (4/4)
Cowan, Glen
2018-05-22
The lectures will give an introduction to statistics as applied in particle physics and will provide all the necessary basics for data analysis at the LHC. Special emphasis will be placed on the problems and questions that arise when searching for new phenomena, including p-values, discovery significance, limit-setting procedures, and the treatment of small signals in the presence of large backgrounds. Specific issues that will be addressed include the advantages and drawbacks of different statistical test procedures (cut-based, likelihood-ratio, etc.), the look-elsewhere effect and the treatment of systematic uncertainties.
NASA Technical Reports Server (NTRS)
1976-01-01
Analytic techniques have been developed for detecting and identifying abrupt changes in dynamic systems. The GLR technique monitors the output of the Kalman filter and searches for the time at which the failure occurred, which keeps it sensitive to new data and consequently increases the chances of fast system recovery following detection of a failure. All failure detections are based on functional redundancy. Performance tests of the F-8 aircraft flight control system and computerized modelling of the technique are presented.
Statistics and Discoveries at the LHC (2/4)
Cowan, Glen
2018-04-26
The lectures will give an introduction to statistics as applied in particle physics and will provide all the necessary basics for data analysis at the LHC. Special emphasis will be placed on the problems and questions that arise when searching for new phenomena, including p-values, discovery significance, limit-setting procedures, and the treatment of small signals in the presence of large backgrounds. Specific issues that will be addressed include the advantages and drawbacks of different statistical test procedures (cut-based, likelihood-ratio, etc.), the look-elsewhere effect and the treatment of systematic uncertainties.
Comparison between presepsin and procalcitonin in early diagnosis of neonatal sepsis.
Iskandar, Agustin; Arthamin, Maimun Z; Indriana, Kristin; Anshory, Muhammad; Hur, Mina; Di Somma, Salvatore
2018-05-09
Neonatal sepsis remains one of the leading causes of morbidity and mortality worldwide in both term and preterm infants. Lower mortality rates are related to timely diagnostic evaluation and prompt initiation of empiric antibiotic therapy. Blood culture, the gold standard examination for sepsis, has several limitations for early diagnosis, so sepsis biomarkers could play an important role in this regard. This study aimed to compare the value of the two biomarkers presepsin and procalcitonin in the early diagnosis of neonatal sepsis. This was a prospective cross-sectional study performed in Saiful Anwar General Hospital, Malang, Indonesia, in 51 neonates who fulfilled the criteria of systemic inflammatory response syndrome (SIRS), with blood culture as the diagnostic gold standard for sepsis. In receiver operating characteristic (ROC) curve analysis, a presepsin cutoff of 706.5 pg/mL yielded: sensitivity = 85.7%, specificity = 68.8%, positive predictive value = 85.7%, negative predictive value = 68.8%, positive likelihood ratio = 2.75, negative likelihood ratio = 0.21, and accuracy = 80.4%. On the other hand, a procalcitonin cutoff value of 161.33 pg/mL yielded: sensitivity = 68.6%, specificity = 62.5%, positive predictive value = 80%, negative predictive value = 47.6%, positive likelihood ratio = 1.83, negative likelihood ratio = 0.5, and accuracy = 66.7%. In the early diagnosis of neonatal sepsis, compared with procalcitonin, presepsin seems to provide better early diagnostic value, with consequent faster therapeutic decision making and a possible positive impact on the outcome of neonates.
The effect of rare variants on inflation of the test statistics in case-control analyses.
Pirie, Ailith; Wood, Angela; Lush, Michael; Tyrer, Jonathan; Pharoah, Paul D P
2015-02-20
The detection of bias due to cryptic population structure is an important step in the evaluation of findings of genetic association studies. The standard method of measuring this bias in a genetic association study is to compare the observed median association test statistic to the expected median test statistic. This ratio is inflated in the presence of cryptic population structure. However, inflation may also be caused by the properties of the association test itself, particularly in the analysis of rare variants. We compared the properties of the three most commonly used association tests (the likelihood ratio test, the Wald test, and the score test) when testing rare variants for association using simulated data. We found evidence of inflation in the median test statistics of the likelihood ratio and score tests for tests of variants with less than 20 heterozygotes across the sample, regardless of the total sample size. The test statistics for the Wald test were under-inflated at the median for variants below the same minor allele frequency. In a genetic association study, if a substantial proportion of the genetic variants tested have rare minor allele frequencies, the properties of the association test may mask the presence or absence of bias due to population structure. The use of either the likelihood ratio test or the score test is likely to lead to inflation in the median test statistic in the absence of population structure. In contrast, the use of the Wald test is likely to result in under-inflation of the median test statistic, which may mask the presence of population structure.
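To make the comparison concrete, here is a minimal null simulation in the spirit of the study (Python with numpy/scipy/statsmodels; sample size, allele frequency, and simulation count are illustrative settings): likelihood ratio and Wald statistics for a non-associated rare variant are computed repeatedly, and the median of each is compared with the theoretical chi-squared median.

    # Null simulation: median inflation of LRT and Wald statistics for a rare
    # variant with no true association. All settings are illustrative.
    import numpy as np
    from scipy.stats import chi2
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, n_sims, maf = 2000, 500, 0.002        # few heterozygotes expected

    lrt_stats, wald_stats = [], []
    for _ in range(n_sims):
        g = rng.binomial(2, maf, size=n)     # genotype under Hardy-Weinberg
        y = rng.binomial(1, 0.5, size=n)     # phenotype independent of genotype
        if g.sum() == 0:
            continue
        try:
            full = sm.Logit(y, sm.add_constant(g.astype(float))).fit(disp=0)
            null = sm.Logit(y, np.ones((n, 1))).fit(disp=0)
        except Exception:                    # skip rare non-converging fits
            continue
        lrt_stats.append(2 * (full.llf - null.llf))
        wald_stats.append((full.params[1] / full.bse[1]) ** 2)

    expected_median = chi2.ppf(0.5, df=1)    # ~0.455 for 1 df
    print("lambda (LRT): ", np.median(lrt_stats) / expected_median)
    print("lambda (Wald):", np.median(wald_stats) / expected_median)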
Ermertcan, Aylin Türel; Oztürk, Ferdi; Gençoğlan, Gülsüm; Eskiizmir, Görkem; Temiz, Peyker; Horasan, Gönül Dinç
2011-03-01
The precision of clinical diagnosis of skin tumors is not commonly measured and, therefore, very little is known about the diagnostic ability of clinicians. This study aimed to compare clinical and histopathologic diagnoses of nonmelanoma skin cancers with regard to sensitivity, predictive values, pretest-posttest probabilities, and likelihood ratios. Two hundred nineteen patients with 241 nonmelanoma skin cancers were enrolled in this study. Of these patients, 49.4% were female and 50.6% were male. The mean age ± standard deviation (SD) was 63.66 ± 16.44 years for the female patients and 64.77 ± 14.88 years for the male patients. The mean duration of the lesions was 20.90 ± 32.95 months. One hundred forty-eight (61.5%) of the lesions were diagnosed as basal cell carcinoma (BCC) and 93 (38.5%) were diagnosed as squamous cell carcinoma (SCC) histopathologically. Sensitivity, positive predictive value, and posttest probability were calculated as 75.96%, 87.77%, and 87.78% for BCC and 70.37%, 37.25%, and 37.20% for SCC, respectively. The correlation between clinical and histopathologic diagnoses was found to be higher in BCC. Knowledge of sensitivity, predictive values, likelihood ratios, and posttest probabilities may have implications for the management of skin cancers. To prevent unnecessary surgeries and achieve high diagnostic accuracies, multidisciplinary approaches are recommended.
Analysis of case-parent trios at a locus with a deletion allele: association of GSTM1 with autism.
Buyske, Steven; Williams, Tanishia A; Mars, Audrey E; Stenroos, Edward S; Ming, Sue X; Wang, Rong; Sreenath, Madhura; Factura, Marivic F; Reddy, Chitra; Lambert, George H; Johnson, William G
2006-02-10
Certain loci on the human genome, such as glutathione S-transferase M1 (GSTM1), do not permit heterozygotes to be reliably determined by commonly used methods. Association of such a locus with a disease is therefore generally tested with a case-control design. When subjects have already been ascertained in a case-parent design however, the question arises as to whether the data can still be used to test disease association at such a locus. A likelihood ratio test was constructed that can be used with a case-parents design but has somewhat less power than a Pearson's chi-squared test that uses a case-control design. The test is illustrated on a novel dataset showing a genotype relative risk near 2 for the homozygous GSTM1 deletion genotype and autism. Although the case-control design will remain the mainstay for a locus with a deletion, the likelihood ratio test will be useful for such a locus analyzed as part of a larger case-parent study design. The likelihood ratio test has the advantage that it can incorporate complete and incomplete case-parent trios as well as independent cases and controls. Both analyses support (p = 0.046 for the proposed test, p = 0.028 for the case-control analysis) an association of the homozygous GSTM1 deletion genotype with autism.
Yang, Ji; Gu, Hongya; Yang, Ziheng
2004-01-01
Chalcone synthase (CHS) is a key enzyme in the biosynthesis of flavonoids, which are important for the pigmentation of flowers and act as attractants to pollinators. Genes encoding CHS constitute a multigene family in which the copy number varies among plant species and functional divergence appears to have occurred repeatedly. In morning glories (Ipomoea), five functional CHS genes (A-E) have been described. Phylogenetic analysis of the Ipomoea CHS gene family revealed that CHS A, B, and C experienced accelerated rates of amino acid substitution relative to CHS D and E. To examine whether the CHS genes of the morning glories underwent adaptive evolution, maximum-likelihood models of codon substitution were used to analyze the functional sequences in the Ipomoea CHS gene family. These models used the nonsynonymous/synonymous rate ratio (omega = dN/dS) as an indicator of selective pressure and allowed the ratio to vary among lineages or sites. Likelihood ratio tests suggested significant variation in selection pressure among amino acid sites, with a small proportion of them detected to be under positive selection along the branches ancestral to CHS A, B, and C. Positive Darwinian selection appears to have promoted the divergence of subfamily ABC and subfamily DE and is at least partially responsible for a rate increase following gene duplication.
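The mechanics of such a likelihood ratio test are simple once the two nested models (e.g., one fixing omega across sites and one allowing it to vary) have been fitted; a sketch with placeholder log-likelihood values:

    # Likelihood ratio test between nested codon models: twice the gain in
    # log-likelihood is referred to a chi-squared distribution. The lnL values
    # and degrees of freedom below are placeholders, not the study's numbers.
    from scipy.stats import chi2

    def lrt(lnL_null, lnL_alt, df):
        stat = 2.0 * (lnL_alt - lnL_null)
        return stat, chi2.sf(stat, df)

    stat, p = lrt(lnL_null=-2310.4, lnL_alt=-2304.9, df=2)
    print(f"2*delta_lnL = {stat:.1f}, p = {p:.4f}")   # 11.0, p ~ 0.004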
The Diagnostic Accuracy of Special Tests for Rotator Cuff Tear: The ROW Cohort Study
Jain, Nitin B.; Luz, Jennifer; Higgins, Laurence D.; Dong, Yan; Warner, Jon J.P.; Matzkin, Elizabeth; Katz, Jeffrey N.
2016-01-01
Objective The aim was to assess diagnostic accuracy of 15 shoulder special tests for rotator cuff tears. Design From 02/2011 to 12/2012, 208 participants with shoulder pain were recruited in a cohort study. Results Among tests for supraspinatus tears, Jobe’s test had a sensitivity of 88% (95% CI=80% to 96%), specificity of 62% (95% CI=53% to 71%), and likelihood ratio of 2.30 (95% CI=1.79 to 2.95). The full can test had a sensitivity of 70% (95% CI=59% to 82%) and a specificity of 81% (95% CI=74% to 88%). Among tests for infraspinatus tears, external rotation lag signs at 0° had a specificity of 98% (95% CI=96% to 100%) and a likelihood ratio of 6.06 (95% CI=1.30 to 28.33), and the Hornblower’s sign had a specificity of 96% (95% CI=93% to 100%) and likelihood ratio of 4.81 (95% CI=1.60 to 14.49). Conclusions Jobe’s test and full can test had high sensitivity and specificity for supraspinatus tears and Hornblower’s sign performed well for infraspinatus tears. In general, special tests described for subscapularis tears have high specificity but low sensitivity. These data can be used in clinical practice to diagnose rotator cuff tears and may reduce the reliance on expensive imaging. PMID:27386812
MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS
Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...
Castillo-Tandazo, Wilson; Flores-Fortty, Adolfo; Feraud, Lourdes; Tettamanti, Daniel
2013-01-01
Purpose To translate, cross-culturally adapt, and validate the Questionnaire for Diabetes-Related Foot Disease (Q-DFD), originally created and validated in Australia, for its use in Spanish-speaking patients with diabetes mellitus. Patients and methods The translation and cross-cultural adaptation were based on international guidelines. The Spanish version of the survey was applied to a community-based (sample A) and a hospital clinic-based sample (samples B and C). Samples A and B were used to determine criterion and construct validity comparing the survey findings with clinical evaluation and medical records, respectively; while sample C was used to determine intra- and inter-rater reliability. Results After completing the rigorous translation process, only four items were considered problematic and required a new translation. In total, 127 patients were included in the validation study: 76 to determine criterion and construct validity and 41 to establish intra- and inter-rater reliability. For an overall diagnosis of diabetes-related foot disease, a substantial level of agreement was obtained when we compared the Q-DFD with the clinical assessment (kappa 0.77, sensitivity 80.4%, specificity 91.5%, positive likelihood ratio [LR+] 9.46, negative likelihood ratio [LR−] 0.21); while an almost perfect level of agreement was obtained when it was compared with medical records (kappa 0.88, sensitivity 87%, specificity 97%, LR+ 29.0, LR− 0.13). Survey reliability showed substantial levels of agreement, with kappa scores of 0.63 and 0.73 for intra- and inter-rater reliability, respectively. Conclusion The translated and cross-culturally adapted Q-DFD showed good psychometric properties (validity, reproducibility, and reliability) that allow its use in Spanish-speaking diabetic populations. PMID:24039434
Wakefield, M A; Spittal, M J; Yong, H-H; Durkin, S J; Borland, R
2011-12-01
To assess the extent to which intensity and timing of televised anti-smoking advertising emphasizing the serious harms of smoking influences quit attempts. Using advertising gross rating points (GRPs), we estimated exposure to tobacco control and nicotine replacement therapy (NRT) advertising in the 3, 4-6, 7-9 and 10-12 months prior to follow-up of a replenished cohort of 3037 Australian smokers during 2002-08. Using generalized estimating equations, we related the intensity and timing of advertising exposure from each source to the likelihood of making a quit attempt in the 3 months prior to follow-up. Tobacco control advertising in the 3-month period prior to follow-up, but not in more distant past periods, was related to a higher likelihood of making a quit attempt. Each 1000 GRP increase per quarter was associated with an 11% increase in making a quit attempt [odds ratio (OR) = 1.11, 95% confidence interval (CI) 1.03-1.19, P = 0.009)]. NRT advertising was unrelated to quit attempts. Tobacco control advertising emphasizing the serious harms of smoking is associated with short-term increases in the likelihood of smokers making a quit attempt. Repeated cycles of higher intensity tobacco control media campaigns are needed to sustain high levels of quit attempts.
NASA Astrophysics Data System (ADS)
Coakley, Kevin J.; Vecchia, Dominic F.; Hussey, Daniel S.; Jacobson, David L.
2013-10-01
At the NIST Neutron Imaging Facility, we collect neutron projection data for both the dry and wet states of a Proton-Exchange-Membrane (PEM) fuel cell. Transmitted thermal neutrons captured in a scintillator doped with lithium-6 produce scintillation light that is detected by an amorphous silicon detector. Based on joint analysis of the dry and wet state projection data, we reconstruct a residual neutron attenuation image with a Penalized Likelihood method using an edge-preserving Huber penalty function whose two parameters control how well jumps in the reconstruction are preserved and how well noisy fluctuations are smoothed out. The choice of these parameters greatly influences the resulting reconstruction. We present a data-driven method that objectively selects these parameters, and study its performance for both simulated and experimental data. Before reconstruction, we transform the projection data so that the variance-to-mean ratio is approximately one. For both simulated and measured projection data, the Penalized Likelihood reconstruction is visually sharper than a reconstruction yielded by a standard Filtered Back Projection method. In an idealized simulation experiment, we demonstrate that the cross-validation procedure selects regularization parameters that yield a reconstruction that is nearly optimal according to a root-mean-square prediction error criterion.
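A sketch of the kind of objective involved, assuming a Poisson-style data fidelity term; the parameter names (beta, delta) and the exact form are illustrative, not the facility's implementation:

    # Penalized (negative log-)likelihood objective with an edge-preserving
    # Huber roughness penalty. beta/delta and the Poisson-style fidelity
    # term are illustrative assumptions.
    import numpy as np

    def huber(t, delta):
        # Quadratic near zero (smooths noise), linear in the tails (preserves jumps).
        a = np.abs(t)
        return np.where(a <= delta, 0.5 * t**2, delta * a - 0.5 * delta**2)

    def objective(image, data, beta, delta):
        fidelity = np.sum(image - data * np.log(image + 1e-12))   # data misfit
        roughness = (huber(np.diff(image, axis=0), delta).sum()
                     + huber(np.diff(image, axis=1), delta).sum())
        return fidelity + beta * roughness        # beta weights the smoothing

Here delta sets how large a neighbor difference must be before it is treated as a genuine jump rather than noise, and beta sets the weight of smoothing relative to data fidelity; this is the trade-off the data-driven selection procedure tunes.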
Golden, Sean K; Harringa, John B; Pickhardt, Perry J; Ebinger, Alexander; Svenson, James E; Zhao, Ying-Qi; Li, Zhanhai; Westergaard, Ryan P; Ehlenbach, William J; Repplinger, Michael D
2016-07-01
To determine whether clinical scoring systems or physician gestalt can obviate the need for computed tomography (CT) in patients with possible appendicitis. Prospective, observational study of patients with abdominal pain at an academic emergency department (ED) from February 2012 to February 2014. Patients over 11 years old who had a CT ordered for possible appendicitis were eligible. All parameters needed to calculate the scores were recorded on standardised forms prior to CT. Physicians also estimated the likelihood of appendicitis. Test characteristics were calculated using clinical follow-up as the reference standard. Receiver operating characteristic curves were drawn. Of the 287 patients (mean age (range), 31 (12-88) years; 60% women), the prevalence of appendicitis was 33%. The Alvarado score had a positive likelihood ratio (LR(+)) (95% CI) of 2.2 (1.7 to 3) and a negative likelihood ratio (LR(-)) of 0.6 (0.4 to 0.7). The modified Alvarado score (MAS) had LR(+) 2.4 (1.6 to 3.4) and LR(-) 0.7 (0.6 to 0.8). The Raja Isteri Pengiran Anak Saleha Appendicitis (RIPASA) score had LR(+) 1.3 (1.1 to 1.5) and LR(-) 0.5 (0.4 to 0.8). Physician-determined likelihood of appendicitis had LR(+) 1.3 (1.2 to 1.5) and LR(-) 0.3 (0.2 to 0.6). When combined with physician likelihoods, LR(+) and LR(-) were 3.67 and 0.48 (Alvarado), 2.33 and 0.45 (RIPASA), and 3.87 and 0.47 (MAS). The area under the curve was highest for physician-determined likelihood (0.72), but was not statistically significantly different from the clinical scores (RIPASA 0.67, Alvarado 0.72, MAS 0.7). Clinical scoring systems performed as well as physician gestalt in predicting appendicitis. These scores do not obviate the need for imaging for possible appendicitis when a physician deems it necessary. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Less-Complex Method of Classifying MPSK
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2006-01-01
An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. The prior optimal method is based on a maximum-likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: each likelihood function is an integral, over a full cycle of carrier phase, of a complicated sum of functions of the baseband sample values, the carrier phase, the carrier-signal and noise magnitudes, and M or M'. Then the likelihood ratio, defined as the ratio between the likelihood functions, is computed, leading to the choice of whichever hypothesis, M or M', is more likely. In the alternative method, the integral in each likelihood function is approximated by a sum over values of the integrand sampled at a number, l, of equally spaced values of carrier phase. Used in this way, l is a parameter that can be adjusted to trade computational complexity against the probability of misclassification. In the limit as l approaches infinity, one obtains the integral form of the likelihood function and thus recovers the ML classification. The present approximate method has been tested in comparison with the ML method by means of computational simulations. The results of the simulations have shown that the performance (as quantified by probability of misclassification) of the approximate method is nearly indistinguishable from that of the ML method (see figure).
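A compact numerical sketch of this approximation (Python; the signal model, amplitude scaling, and parameter values are illustrative assumptions, not the reference design): the phase integral in each hypothesis's likelihood is replaced by an average over l equally spaced phase offsets (n_phase below), and the log-likelihood ratio decides between M and M'.

    # Approximate MPSK classification: the carrier-phase integral in each
    # likelihood is replaced by an average over n_phase sampled offsets.
    import numpy as np

    def approx_log_likelihood(r, M, amp, n_phase=16):
        symbols = np.exp(1j * 2 * np.pi * np.arange(M) / M)   # MPSK constellation
        thetas = np.linspace(0, 2 * np.pi / M, n_phase, endpoint=False)
        vals = []
        for theta in thetas:
            # correlate each sample with each candidate symbol at this phase
            z = np.real(np.conj(r[:, None] * np.exp(-1j * theta)) * (amp * symbols))
            vals.append(np.log(np.mean(np.exp(z), axis=1)).sum())
        vals = np.array(vals)                    # average over sampled phases
        return vals.max() + np.log(np.mean(np.exp(vals - vals.max())))

    def classify(r, M1, M2, amp):
        llr = approx_log_likelihood(r, M1, amp) - approx_log_likelihood(r, M2, amp)
        return M1 if llr > 0 else M2

    rng = np.random.default_rng(1)
    data = np.exp(1j * 2 * np.pi * rng.integers(0, 4, 200) / 4)   # QPSK symbols
    r = 2.0 * data + (rng.normal(size=200) + 1j * rng.normal(size=200)) / np.sqrt(2)
    print(classify(r, M1=4, M2=2, amp=2.0))      # expect 4 (QPSK) at this SNR

Raising n_phase tightens the approximation to the integral at the cost of proportionally more computation, which is exactly the trade-off described above.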
Instrument for evaluation of sedentary lifestyle in patients with high blood pressure.
Lopes, Marcos Venícios de Oliveira; da Silva, Viviane Martins; de Araujo, Thelma Leite; Guedes, Nirla Gomes; Martins, Larissa Castelo Guedes; Teixeira, Iane Ximenes
2015-01-01
This article describes the diagnostic accuracy of the International Physical Activity Questionnaire to identify the nursing diagnosis of sedentary lifestyle. A diagnostic accuracy study was developed with 240 individuals with established high blood pressure. The analysis of diagnostic accuracy was based on measures of sensitivity, specificity, predictive values, likelihood ratios, efficiency, diagnostic odds ratio, Youden index, and area under the receiver-operating characteristic curve. Statistical differences between genders were observed for activities of moderate intensity and for total physical activity. Age was negatively correlated with activities of moderate intensity and total physical activity. The analysis of the area under the receiver-operating characteristic curve for moderate-intensity activities, walking, and total physical activity showed that the International Physical Activity Questionnaire presents moderate capacity to correctly classify individuals with and without sedentary lifestyle.
NASA Technical Reports Server (NTRS)
Sung, Q. C.; Miller, L. D.
1977-01-01
Three methods were tested for collecting the training sets needed to establish the spectral signatures of the land uses/land covers sought, given the difficulties of retrospective collection of representative ground control data. Computer preprocessing techniques applied to the digital images to improve the final classification results were geometric corrections, spectral band or image ratioing, and statistical cleaning of the representative training sets. A minimal level of statistical verification was made based upon comparisons between the airphoto estimates and the classification results. The verifications provided further support for the selection of MSS bands 5 and 7. They also indicated that the maximum likelihood ratioing technique can achieve classification results more consistent with the airphoto estimates than stepwise discriminant analysis.
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
Sull, Jae Woong; Liang, Kung-Yee; Hetmanski, Jacqueline B; Fallin, M Daniele; Ingersoll, Roxanne G; Park, Ji Wan; Wu-Chou, Yah-Huei; Chen, Philip K; Chong, Samuel S; Cheah, Felicia; Yeow, Vincent; Park, Beyoung Yun; Jee, Sun Ha; Jabs, Ethylin W; Redett, Richard; Scott, Alan F; Beaty, Terri H
2008-09-15
Isolated cleft palate is among the most common human birth defects. The TCOF1 gene has been suggested as a candidate gene for cleft palate based on animal models. This study tests for association between markers in TCOF1 and isolated, nonsyndromic cleft palate using a case-parent trio design considering parent-of-origin effects. Case-parent trios from three populations (comprising a total of 81 case-parent trios) were genotyped for single nucleotide polymorphisms (SNPs) in the TCOF1 gene. We used the transmission disequilibrium test and the transmission asymmetry test on individual SNPs. When all trios were combined, the odds ratio for transmission of the minor allele, OR(transmission), was significant for SNP rs15251 (OR = 2.88, P = 0.007), as well as for rs2255796 and rs2569062 (OR = 2.08, P = 0.03 and OR = 2.43, P = 0.041, respectively) when parent of origin was not considered. The transmission asymmetry test also revealed one SNP (rs15251) showing excess maternal transmission significant at the P = 0.005 level (OR = 6.50). Parent-of-origin effects were assessed using the parent-of-origin likelihood ratio test on both SNPs and haplotypes. While the parent-of-origin likelihood ratio test was only marginally significant for this SNP (P = 0.136), analysis of haplotypes of rs2255796 and rs15251 suggested excess maternal transmission. Therefore, these data suggest TCOF1 may influence risk of cleft palate through a parent-of-origin effect. Copyright 2008 Wiley-Liss, Inc.
Li, Yan-Wei; Zhou, Le-Shan; Li, Xing
2017-03-15
Fever is the most common complaint in the pediatric and emergency departments. Caregivers prefer to detect fever in their children by tactile assessment. To summarize the evidence on the accuracy of caregivers' tactile assessment for detecting fever in children. We performed a literature search of the Cochrane Library, PubMed, Web of Knowledge, EMBASE (Ovid), EBSCO, and Google Scholar, without restriction of publication date, to identify English-language articles assessing caregivers' ability to detect fever in children by tactile assessment. Quality assessment was based on the 2011 Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) criteria. Pooled estimates of sensitivity and specificity were calculated with use of a bivariate model and summary receiver operating characteristic plots for meta-analysis. Eleven articles were included in our analysis. The summary estimates for tactile assessment as a diagnostic tool revealed a sensitivity of 87.5% (95% CI 79.3% to 92.8%) and specificity of 54.6% (95% CI 38.5% to 69.9%). The pooled positive likelihood ratio was 1.93 (95% CI 1.39 to 2.67) and negative likelihood ratio was 0.23 (95% CI 0.15 to 0.36). Area under the curve was 0.82 (95% CI 0.7 to 0.85). The pooled diagnostic odds ratio was 8.46 (95% CI 4.54 to 15.76). Tactile assessment of fever in children by palpation has moderate diagnostic value. Caregivers' assessment of "no fever" by touch is quite accurate in ruling out fever, while an assessment of "fever" can be considered but needs confirmation.
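The pooled likelihood ratios and diagnostic odds ratio follow directly from the pooled sensitivity and specificity; a quick check (the small gap from the reported DOR of 8.46 reflects that each summary measure was pooled separately):

    # Likelihood ratios and diagnostic odds ratio from the pooled
    # sensitivity and specificity reported above.
    sens, spec = 0.875, 0.546
    lr_pos = sens / (1 - spec)        # ~1.93
    lr_neg = (1 - sens) / spec        # ~0.23
    dor = lr_pos / lr_neg             # ~8.4 (reported pooled DOR: 8.46)
    print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}, DOR = {dor:.1f}")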
Duration of Antimicrobial Treatment for Bacteremia in Canadian Critically Ill Patients.
Daneman, Nick; Rishu, Asgar H; Xiong, Wei; Bagshaw, Sean M; Dodek, Peter; Hall, Richard; Kumar, Anand; Lamontagne, Francois; Lauzier, Francois; Marshall, John; Martin, Claudio M; McIntyre, Lauralyn; Muscedere, John; Reynolds, Steve; Stelfox, Henry T; Cook, Deborah J; Fowler, Robert A
2016-02-01
The optimum duration of antimicrobial treatment for patients with bacteremia is unknown. Our objectives were to determine the duration of antimicrobial treatment provided to patients who have bacteremia in ICUs, to assess pathogen/patient factors related to treatment duration, and to assess the relationship between treatment duration and survival. Retrospective cohort study. Fourteen ICUs across Canada. Patients who had bacteremia and were present in the ICU at the time the culture was reported positive. Duration of antimicrobial treatment for patients who had bacteremia in the ICU. Among 1,202 ICU patients with bacteremia, the median duration of treatment was 14 days, but with wide variability (interquartile range, 9-17.5). Most patient characteristics were not associated with treatment duration. Coagulase-negative staphylococci were the only pathogens associated with shorter treatment (odds ratio, 2.82; 95% CI, 1.51-5.26). The urinary tract was the only source of infection associated with a trend toward a lower likelihood of shorter treatment (odds ratio, 0.67; 95% CI, 0.42-1.08); an unknown source of infection was associated with a greater likelihood of shorter treatment (odds ratio, 2.14; 95% CI, 1.17-3.91). The association of treatment duration and survival was unstable when analyzed based on timing of death. Critically ill patients who have bacteremia typically receive long courses of antimicrobials. Most patient/pathogen characteristics are not associated with treatment duration; survivor bias precludes a valid assessment of the association between treatment duration and survival. A definitive randomized controlled trial is needed to compare shorter versus longer antimicrobial treatment in patients who have bacteremia.
Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.
2014-01-01
Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157
Poortinga, Ernest; Lemmen, Craig; Jibson, Michael D
2006-01-01
We examined the clinical, criminal, and sociodemographic characteristics of all white-collar crime defendants referred to the evaluation unit of a state center for forensic psychiatry. With 29,310 evaluations in a 12-year period, we found 70 defendants charged with embezzlement, 3 with health care fraud, and no other white-collar defendants (based on the eight crimes widely accepted as white-collar offenses). In a case-control study design, the 70 embezzlement cases were compared with 73 defendants charged with other forms of nonviolent theft. White-collar defendants were found to have a higher likelihood of white race (adjusted odds ratio (adj. OR) = 4.51), more years of education (adj. OR = 3.471), and a lower likelihood of substance abuse (adj. OR = .28) than control defendants. Logistic regression modeling showed that the variance in the relationship between unipolar depression and white-collar crime was more economically accounted for by education, race, and substance abuse.
Church, Sheri A; Livingstone, Kevin; Lai, Zhao; Kozik, Alexander; Knapp, Steven J; Michelmore, Richard W; Rieseberg, Loren H
2007-02-01
Using likelihood-based variable selection models, we determined if positive selection was acting on 523 EST sequence pairs from two lineages of sunflower and lettuce. Variable rate models are generally not used for comparisons of sequence pairs due to the limited information and the inaccuracy of estimates of specific substitution rates. However, previous studies have shown that the likelihood ratio test (LRT) is reliable for detecting positive selection, even with low numbers of sequences. These analyses identified 56 genes that show a signature of selection, of which 75% were not identified by simpler models that average selection across codons. Subsequent mapping studies in sunflower showed that four of the five positively selected genes identified by these methods mapped to domestication QTLs. We discuss the validity and limitations of using variable rate models for comparisons of sequence pairs, as well as the limitations of using ESTs for identification of positively selected genes.
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
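A small simulation illustrating the special case described above (Python with statsmodels; the data-generating model is an arbitrary illustrative choice, not taken from the paper): the working model omits the curvature and interaction present in the truth, yet its treatment coefficient tracks the marginal log rate ratio.

    # Randomized trial with a deliberately misspecified main-terms Poisson
    # working model. Data-generating values are illustrative.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200_000
    w = rng.uniform(-1, 1, size=n)              # baseline covariate
    a = rng.binomial(1, 0.5, size=n)            # randomized treatment
    # True outcome model is NOT log-linear in (a, w): curvature + interaction
    mu = np.exp(0.2 + 0.5 * a + 0.8 * w**2 - 0.6 * a * w)
    y = rng.poisson(mu)

    # Empirical marginal log rate ratio: log E[Y|A=1] - log E[Y|A=0]
    marginal = np.log(y[a == 1].mean()) - np.log(y[a == 0].mean())

    # Misspecified main-terms Poisson working model
    X = sm.add_constant(np.column_stack([a, w]).astype(float))
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print("working-model treatment coefficient:", round(fit.params[1], 4))
    print("marginal log rate ratio:            ", round(marginal, 4))  # close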
NASA Astrophysics Data System (ADS)
Clark Lesko, Cherish Christina
Active learning methodologies (ALM) are associated with student success, but little research on this topic has been pursued at the community college level. At a local community college, students in science, technology, engineering, and math (STEM) courses exhibited lower than average grades. The purpose of this study was to examine whether the use of ALM predicted STEM course grades while controlling for academic discipline, course level, and class size. The theoretical framework was Vygotsky's social constructivism. Descriptive statistics and multinomial logistic regression were performed on data collected through an anonymous survey of 74 instructors of 272 courses during the 2016 fall semester. Results indicated that students were more likely to achieve passing grades when instructors employed in-class, highly structured activities, and writing-based ALM, and were less likely to achieve passing grades when instructors employed project-based or online ALM. The odds ratios indicated strong positive effects (greater likelihoods of receiving As, Bs, or Cs in comparison to the grade of F) for writing-based ALM (39.1-43.3%, 95% CI [10.7-80.3%]), highly structured activities (16.4-22.2%, 95% CI [1.8-33.7%]), and in-class ALM (5.0-9.0%, 95% CI [0.6-13.8%]). Project-based and online ALM showed negative effects (lower likelihoods of receiving As, Bs, or Cs in comparison to the grade of F) with odds ratios of 15.7-20.9%, 95% CI [9.7-30.6%] and 16.1-20.4%, 95% CI [5.9-25.2%] respectively. A white paper was developed with recommendations for faculty development, computer skills assessment and training, and active research on writing-based ALM. Improving student grades and STEM course completion rates could lead to higher graduation rates and lower college costs for at-risk students by reducing course repetition and time to degree completion.
Ali, Innocent M; Bigoga, Jude D; Forsah, Dorothy A; Cho-Ngwa, Fidelis; Tchinda, Vivian; Moor, Vicky Ama; Fogako, Josephine; Nyongalema, Philomena; Nkoa, Theresa; Same-Ekobo, Albert; Mbede, Joseph; Fondjo, Etienne; Mbacham, Wilfred F; Leke, Rose G F
2016-01-20
All suspected cases of malaria should receive a diagnostic test prior to treatment with artemisinin-based combinations, based on the new WHO malaria treatment guidelines. This study compared the accuracy and some operational characteristics of 22 different immunochromatographic antigen-capture point-of-care malaria tests (RDTs) in Cameroon to inform test procurement prior to deployment of artemisinin-based combinations for malaria treatment. One hundred human blood samples (50 positive and 50 negative) collected from consenting febrile patients in two health centres at Yaoundé were used for evaluation of the 22 RDTs, categorized as "Pf Only" (9) or "Pf + PAN" (13) based on the parasite antigen captured [histidine-rich protein II (HRP2), lactate dehydrogenase (pLDH), or aldolase]. RDTs were coded to blind the technicians performing the tests. The sensitivity, specificity, and predictive values of the positive and negative tests (PPV and NPV), as well as the likelihood ratios, were assessed. The reliability and some operational characteristics were determined as the mean values from two assessors, and Cohen's kappa statistic was then used to assess agreement. Light microscopy was the referent. Of all RDTs tested, 94.2% (21/22) had sensitivity values greater than 90%, among which 14 (63.6%) were "Pf + PAN" RDTs. The specificity was generally lower than the sensitivity for all RDTs and poorer for "Pf Only" RDTs. The predictive values and likelihood ratios were better for non-HRP2 analytes for "Pf + PAN" RDTs. The kappa value for most of the tests was around 67% (95% CI 50-69%), corresponding to moderate agreement. Overall, 94.2% (21/22) of RDTs tested had accuracy within the range recommended by the WHO, while one performed poorly, below acceptable levels. Seven "Pf + PAN" and 3 "Pf Only" RDTs were selected for further assessment based on performance characteristics. Harmonizing RDT presentation and procedures would prevent mistakes in test performance and interpretation.
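For reference, Cohen's kappa, the agreement statistic used here, corrects observed agreement for the agreement expected by chance; a generic sketch (the ratings are made up):

    # Cohen's kappa for agreement between two assessors. Ratings below are
    # illustrative, not the study's data.
    import numpy as np

    def cohens_kappa(r1, r2):
        r1, r2 = np.asarray(r1), np.asarray(r2)
        po = np.mean(r1 == r2)                            # observed agreement
        pe = sum(np.mean(r1 == c) * np.mean(r2 == c)      # chance agreement
                 for c in np.union1d(r1, r2))
        return (po - pe) / (1 - pe)

    rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
    rater2 = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
    print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")  # 0.58 here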
Accuracy of gestalt perception of acute chest pain in predicting coronary artery disease
das Virgens, Cláudio Marcelo Bittencourt; Lemos Jr, Laudenor; Noya-Rabelo, Márcia; Carvalhal, Manuela Campelo; Cerqueira Junior, Antônio Maurício dos Santos; Lopes, Fernanda Oliveira de Andrade; de Sá, Nicole Cruz; Suerdieck, Jéssica Gonzalez; de Souza, Thiago Menezes Barbosa; Correia, Vitor Calixto de Almeida; Sodré, Gabriella Sant'Ana; da Silva, André Barcelos; Alexandre, Felipe Kalil Beirão; Ferreira, Felipe Rodrigues Marques; Correia, Luís Cláudio Lemos
2017-01-01
AIM To test the accuracy and reproducibility of gestalt to predict obstructive coronary artery disease (CAD) in patients with acute chest pain. METHODS We studied individuals who were consecutively admitted to our Chest Pain Unit. At admission, investigators performed a standardized interview and recorded 14 chest pain features. Based on these features, a cardiologist who was blind to other clinical characteristics made an unstructured judgment of CAD probability, both numerically and categorically. As the reference standard for testing the accuracy of gestalt, angiography was required to rule in CAD, while either angiography or a non-invasive test could be used to rule it out. In order to assess reproducibility, a second cardiologist did the same procedure. RESULTS In a sample of 330 patients, the prevalence of obstructive CAD was 48%. Gestalt’s numerical probability was associated with CAD, but the area under the curve of 0.61 (95%CI: 0.55-0.67) indicated a low level of accuracy. Accordingly, the categorical definition of typical chest pain had a sensitivity of 48% (95%CI: 40%-55%) and specificity of 66% (95%CI: 59%-73%), yielding a negligible positive likelihood ratio of 1.4 (95%CI: 0.65-2.0) and negative likelihood ratio of 0.79 (95%CI: 0.62-1.02). Agreement between the two cardiologists was poor in the numerical classification (95% limits of agreement = -71% to 51%) and categorical definition of typical pain (Kappa = 0.29; 95%CI: 0.21-0.37). CONCLUSION Clinical judgment based on a combination of chest pain features is neither accurate nor reproducible in predicting obstructive CAD in the acute setting. PMID:28400920
Heinrich, Verena; Kamphans, Tom; Mundlos, Stefan; Robinson, Peter N; Krawitz, Peter M
2017-01-01
Next generation sequencing technology considerably changed the way we screen for pathogenic mutations in rare Mendelian disorders. However, the identification of the disease-causing mutation amongst thousands of variants of partly unknown relevance is still challenging, and efficient techniques that reduce the genomic search space play a decisive role. Often segregation or linkage analysis is used to prioritize candidates; however, these approaches require correct information about the degree of relationship among the sequenced samples. For quality assurance, an automated control of pedigree structures and sample assignment is therefore highly desirable in order to detect label mix-ups that might otherwise corrupt downstream analysis. We developed an algorithm based on likelihood ratios that discriminates between different classes of relationship for an arbitrary number of genotyped samples. By identifying the most likely class we are able to reconstruct entire pedigrees iteratively, even for highly consanguineous families. We tested our approach on exome data of different sequencing studies and achieved high precision for all pedigree predictions. By analyzing the precision for varying degrees of relatedness or inbreeding, we could show that a prediction is robust down to magnitudes of a few hundred loci. A Java standalone application that computes the relationships between multiple samples, as well as an R script that visualizes the pedigree information, is available for download and as a web service at www.gene-talk.de. Contact: heinrich@molgen.mpg.de. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Tong, Xiang; Wang, Ye; Wang, Chengdi; Jin, Jing; Tian, Panwen; Li, Weimin
2018-01-01
Objectives Although different methods have been established to detect epidermal growth factor receptor (EGFR) T790M mutation in circulating tumor DNA (ctDNA), a wide range of diagnostic accuracy values were reported in previous studies. The aim of this meta-analysis was to provide pooled diagnostic accuracy measures for droplet digital PCR (ddPCR) in the diagnosis of EGFR T790M mutation based on ctDNA. Materials and methods A systematic review and meta-analysis were carried out based on resources from PubMed, Web of Science, Embase and Cochrane Library up to October 11, 2017. Data were extracted to assess the pooled sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic OR (DOR), and area under the summary receiver operating characteristic (SROC) curve. Results Eleven of the 311 studies identified met the inclusion criteria. The sensitivity and specificity of ddPCR for the detection of T790M mutation in ctDNA ranged from 0.0% to 100.0% and 63.2% to 100.0%, respectively. In the pooled analysis, ddPCR had a performance of 70.1% (95% CI, 62.7%–76.7%) sensitivity, 86.9% (95% CI, 80.6%–91.7%) specificity, 3.67 (95% CI, 2.33–5.79) PLR, 0.41 (95% CI, 0.32–0.55) NLR, and 10.83 (95% CI, 5.86–20.03) DOR, with the area under the SROC curve being 0.82. Conclusion ddPCR showed good performance for detection of EGFR T790M mutation in ctDNA. PMID:29844700
Oshima, Shinji; Enjuji, Takako; Negishi, Akio; Akimoto, Hayato; Ohara, Kousuke; Okita, Mitsuyoshi; Numajiri, Sachihiko; Inoue, Naoko; Ohshima, Shigeru; Terao, Akira; Kobayashi, Daisuke
2017-09-01
In order to avoid adverse drug reactions (ADRs), pharmacists reconstruct ADR-related information based on various types of data gathered from patients and then provide this information to patients. Among the data provided to patients is the time-to-onset of ADRs after starting the medication (i.e., ADR onset timing information). However, a quantitative evaluation of the effect of onset timing information offered by pharmacists on the probability of ADRs occurring in patients receiving this information has not been reported to date. In this study, we extracted 40 ADR-drug combinations from the data in the Japanese Adverse Drug Event Report database. By applying Bayes' theorem to these combinations, we quantitatively evaluated the usefulness of onset timing information as an ADR detection predictor. As a result, when information on days after taking medication was added, 54 ADR-drug combinations showed a likelihood ratio (LR) in excess of 2. In particular, for the ADR-drug combination of anaphylactic shock with levofloxacin or loxoprofen, when the number of days elapsed between the start of medication and the onset of the ADR was 0, the likelihood ratios (LRs) increased to 138.7301 or 58.4516, respectively. When information from 1-7 d after starting medication was added to the combination of liver disorder and acetaminophen, the LR was 11.1775. The results of this study indicate the clinical usefulness of offering information on ADR onset timing.
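The Bayes'-theorem update underlying these LRs is the standard odds-form calculation; a minimal sketch (the pretest probability is an illustrative assumption):

    # Posttest probability from a pretest probability and a likelihood ratio
    # (Bayes' theorem in odds form). The pretest value is illustrative.
    def posttest_probability(pretest, lr):
        odds = pretest / (1 - pretest)        # probability -> odds
        post_odds = odds * lr                 # multiply by the LR
        return post_odds / (1 + post_odds)    # odds -> probability

    # e.g. an ADR with pretest probability 0.1% and the day-0 LR of 138.7301
    # reported above for anaphylactic shock with levofloxacin:
    print(round(posttest_probability(0.001, 138.7301), 3))   # ~0.122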
Williamson, Scott; Fledel-Alon, Adi; Bustamante, Carlos D
2004-09-01
We develop a Poisson random-field model of polymorphism and divergence that allows arbitrary dominance relations in a diploid context. This model provides a maximum-likelihood framework for estimating both selection and dominance parameters of new mutations using information on the frequency spectrum of sequence polymorphisms. This is the first DNA sequence-based estimator of the dominance parameter. Our model also leads to a likelihood-ratio test for distinguishing nongenic from genic selection; simulations indicate that this test is quite powerful when a large number of segregating sites are available. We also use simulations to explore the bias in selection parameter estimates caused by unacknowledged dominance relations. When inference is based on the frequency spectrum of polymorphisms, genic selection estimates of the selection parameter can be very strongly biased even for minor deviations from the genic selection model. Surprisingly, however, when inference is based on polymorphism and divergence (McDonald-Kreitman) data, genic selection estimates of the selection parameter are nearly unbiased, even for completely dominant or recessive mutations. Further, we find that weak overdominant selection can increase, rather than decrease, the substitution rate relative to levels of polymorphism. This nonintuitive result has major implications for the interpretation of several popular tests of neutrality.
Cheng, Juan-Juan; Zhao, Shi-Di; Gao, Ming-Zhu; Huang, Hong-Yu; Gu, Bing; Ma, Ping; Chen, Yan; Wang, Jun-Hong; Yang, Cheng-Jian; Yan, Zi-He
2015-01-01
Background Previous studies have reported that natriuretic peptides in the blood and pleural fluid (PF) are effective diagnostic markers for heart failure (HF). These natriuretic peptides include N-terminal pro-brain natriuretic peptide (NT-proBNP), brain natriuretic peptide (BNP), and midregion pro-atrial natriuretic peptide (MR-proANP). This systematic review and meta-analysis evaluates the diagnostic accuracy of blood and PF natriuretic peptides for HF in patients with pleural effusion. Methods PubMed and EMBASE databases were searched to identify articles published in English that investigated the diagnostic accuracy of BNP, NT-proBNP, and MR-proANP for HF. The last search was performed on 9 October 2014. The quality of the eligible studies was assessed using the revised Quality Assessment of Diagnostic Accuracy Studies tool. The diagnostic performance characteristics (sensitivity, specificity, and other measures of accuracy) were pooled and examined using a bivariate model. Results In total, 14 studies were included in the meta-analysis, including 12 studies reporting the diagnostic accuracy of PF NT-proBNP and 4 studies evaluating blood NT-proBNP. The summary estimates of PF NT-proBNP for HF had a diagnostic sensitivity of 0.94 (95% confidence interval [CI]: 0.90–0.96), specificity of 0.91 (95% CI: 0.86–0.95), positive likelihood ratio of 10.9 (95% CI: 6.4–18.6), negative likelihood ratio of 0.07 (95% CI: 0.04–0.12), and diagnostic odds ratio of 157 (95% CI: 57–430). The overall sensitivity of blood NT-proBNP for diagnosis of HF was 0.92 (95% CI: 0.86–0.95), with a specificity of 0.88 (95% CI: 0.77–0.94), positive likelihood ratio of 7.8 (95% CI: 3.7–16.3), negative likelihood ratio of 0.10 (95% CI: 0.06–0.16), and diagnostic odds ratio of 81 (95% CI: 27–241). The diagnostic accuracy of PF MR-proANP and blood and PF BNP was not analyzed due to the small number of related studies. Conclusions BNP, NT-proBNP, and MR-proANP, either in blood or PF, are effective tools for diagnosis of HF. Additional studies are needed to rigorously evaluate the diagnostic accuracy of PF and blood MR-proANP and BNP for the diagnosis of HF. PMID:26244664
Babafemi, Emmanuel O; Cherian, Benny P; Banting, Lee; Mills, Graham A; Ngianga, Kandala
2017-10-25
Rapid and accurate diagnosis of tuberculosis (TB) is key to managing the disease and to controlling and preventing its transmission. Many established diagnostic methods suffer from low sensitivity or delayed results and are inadequate for rapid detection of Mycobacterium tuberculosis (MTB) in pulmonary and extra-pulmonary clinical samples. This study examined whether a real-time polymerase chain reaction (RT-PCR) assay, with a turnaround time of 2 h, would prove effective for routine detection of MTB by clinical microbiology laboratories. A systematic literature search was performed for publications in any language on the detection of MTB in pathological samples by RT-PCR assay. The following sources were used: MEDLINE via PubMed, EMBASE, BIOSIS Citation Index, Web of Science, SCOPUS, ISI Web of Knowledge, the Cochrane Infectious Diseases Group Specialised Register, grey literature, and the World Health Organization and Centers for Disease Control and Prevention websites. Forty-six studies met the set inclusion criteria. Pooled summary estimates (95% CIs) were calculated for overall accuracy, and a bivariate meta-regression model was used for meta-analysis. Summary estimates for pulmonary TB (31 studies) were as follows: sensitivity 0.82 (95% CI 0.81-0.83), specificity 0.99 (95% CI 0.99-0.99), positive likelihood ratio 43.00 (28.23-64.81), negative likelihood ratio 0.16 (0.12-0.20), diagnostic odds ratio 324.26 (95% CI 189.08-556.09), and area under the curve 0.99. Summary estimates for extra-pulmonary TB (25 studies) were as follows: sensitivity 0.70 (95% CI 0.67-0.72), specificity 0.99 (95% CI 0.99-0.99), positive likelihood ratio 29.82 (17.86-49.78), negative likelihood ratio 0.33 (0.26-0.42), diagnostic odds ratio 125.20 (95% CI 65.75-238.36), and area under the curve 0.96. The RT-PCR assay demonstrated a high degree of sensitivity for pulmonary TB and good sensitivity for extra-pulmonary TB. It indicated a high degree of specificity for ruling in TB infection from sampling regimes; this was acceptable, but it may be better as a rule-out add-on diagnostic test. RT-PCR assays demonstrate both a high degree of sensitivity in pulmonary samples and rapidity of detection of TB, which is an important factor in achieving effective global control and for patient management in terms of initiating early and appropriate anti-tubercular therapy. PROSPERO registration: CRD42015027534.
Robustness of fit indices to outliers and leverage observations in structural equation modeling.
Yuan, Ke-Hai; Zhong, Xiaoling
2013-06-01
Normal-distribution-based maximum likelihood (NML) is the most widely used method in structural equation modeling (SEM), although practical data tend to be nonnormally distributed. The effect of nonnormally distributed data or data contamination on the normal-distribution-based likelihood ratio (LR) statistic is well understood due to many analytical and empirical studies. In SEM, fit indices are used as widely as the LR statistic. In addition to NML, robust procedures have been developed for more efficient and less biased parameter estimates with practical data. This article studies the effect of outliers and leverage observations on fit indices following NML and two robust methods. Analysis and empirical results indicate that good leverage observations following NML and one of the robust methods lead most fit indices to give more support to the substantive model. While outliers tend to make a good model superficially bad according to many fit indices following NML, they have little effect on those following the two robust procedures. Implications of the results to data analysis are discussed, and recommendations are provided regarding the use of estimation methods and interpretation of fit indices. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Lin, Chun Chieh; Bruinooge, Suanna S; Kirkwood, M Kelsey; Hershman, Dawn L; Jemal, Ahmedin; Guadagnolo, B Ashleigh; Yu, James B; Hopkins, Shane; Goldstein, Michael; Bajorin, Dean; Giordano, Sharon H; Kosty, Michael; Arnone, Anna; Hanley, Amy; Stevens, Stephanie; Olsen, Christine
2016-03-15
Trimodality therapy (chemoradiation and surgery) is the standard of care for stage II/III rectal cancer, but nearly one third of patients do not receive radiation therapy (RT). We examined the relationship of radiation oncologist density and travel distance to the receipt of RT. A retrospective study based on the National Cancer Data Base identified 26,845 patients aged 18 to 80 years with stage II/III rectal cancer diagnosed from 2007 to 2010. Radiation oncologists were identified through the Physician Compare dataset. Generalized estimating equations, clustering by hospital service area, were used to examine the association between geographic access and receipt of RT, controlling for patient sociodemographic and clinical characteristics. Of the 26,845 patients, 70% received RT within 180 days of diagnosis or within 90 days of surgery. Compared with a travel distance of <12.5 miles, patients diagnosed at a reporting facility who traveled ≥50 miles had a decreased likelihood of receipt of RT (50-249 miles, adjusted odds ratio 0.75, P<.001; ≥250 miles, adjusted odds ratio 0.46; P=.002), all else being equal. The density of radiation oncologists was not significantly associated with the receipt of RT. Patients who were female, nonwhite, and aged ≥50 years and had comorbidities were less likely to receive RT (P<.05). Patients who were uninsured but self-paid for their medical services, were initially diagnosed elsewhere but treated at a reporting facility, or resided in the Midwest had an increased likelihood of receipt of RT (P<.05). An increased travel burden was associated with a decreased likelihood of receiving RT for patients with stage II/III rectal cancer, all else being equal; radiation oncologist density was not. Further research on geographic access, together with transportation assistance programs or lodging services for patients with an unmet need, might help decrease geographic barriers and improve the quality of rectal cancer care. Copyright © 2016 Elsevier Inc. All rights reserved.
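For readers unfamiliar with the modeling approach, the sketch below shows how a GEE logistic model clustered on hospital service area could be specified in Python with statsmodels; the data frame and all column names (received_rt, distance_band, hsa_id, and so on) are hypothetical stand-ins for the study's actual variables:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical patient-level data; columns are illustrative stand-ins.
    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "received_rt": rng.integers(0, 2, n),
        "distance_band": rng.choice(["<12.5", "12.5-49", "50-249", ">=250"], n),
        "age": rng.integers(18, 81, n),
        "female": rng.integers(0, 2, n),
        "hsa_id": rng.integers(0, 40, n),        # hospital service area cluster
    })

    model = smf.gee("received_rt ~ C(distance_band) + age + female",
                    groups="hsa_id", data=df,
                    family=sm.families.Binomial(),
                    cov_struct=sm.cov_struct.Exchangeable())
    print(np.exp(model.fit().params))            # adjusted odds ratios

Exponentiating the fitted coefficients yields adjusted odds ratios comparable in form to those reported above.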
The Determinants of Place of Death: An Evidence-Based Analysis
Costa, V
2014-01-01
Background According to a conceptual model described in this analysis, place of death is determined by an interplay of factors associated with the illness, the individual, and the environment. Objectives Our objective was to evaluate the determinants of place of death for adult patients who have been diagnosed with an advanced, life-limiting condition and are not expected to stabilize or improve. Data Sources A literature search was performed using Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid Embase, EBSCO Cumulative Index to Nursing & Allied Health Literature (CINAHL), and EBM Reviews, for studies published from January 1, 2004, to September 24, 2013. Review Methods Different places of death are considered in this analysis—home, nursing home, inpatient hospice, and inpatient palliative care unit, compared with hospital. We selected factors to evaluate from a list of possible predictors—i.e., determinants—of death. We extracted the adjusted odds ratios and 95% confidence intervals of each determinant, performed a meta-analysis if appropriate, and conducted a stratified analysis if substantial heterogeneity was observed. Results From a literature search yielding 5,899 citations, we included 2 systematic reviews and 29 observational studies. Factors that increased the likelihood of home death included multidisciplinary home palliative care, patient preference, having an informal caregiver, and the caregiver's ability to cope. Factors increasing the likelihood of a nursing home death included the availability of palliative care in the nursing home and the existence of advance directives. A cancer diagnosis and the involvement of home care services increased the likelihood of dying in an inpatient palliative care unit. A cancer diagnosis and a longer time between referral to palliative care and death increased the likelihood of inpatient hospice death. The quality of the evidence was considered low. Limitations Our results are based on retrospective observational studies. Conclusions The results obtained were consistent with previously published systematic reviews. The analysis identified several factors that are associated with place of death. PMID:26351550
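Where such an analysis pools adjusted odds ratios across studies, the arithmetic is typically inverse-variance weighting on the log scale. A minimal fixed-effect sketch (all numbers invented for illustration; the review itself also used stratified analyses where heterogeneity was substantial):

    import numpy as np

    def pooled_or(ors, ci_low, ci_high):
        """Fixed-effect inverse-variance pooling of odds ratios.
        Each log-OR standard error is recovered from its 95% CI."""
        log_or = np.log(ors)
        se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
        w = 1.0 / se**2
        pooled = np.sum(w * log_or) / np.sum(w)
        pooled_se = np.sqrt(1.0 / np.sum(w))
        lo = np.exp(pooled - 1.96 * pooled_se)
        hi = np.exp(pooled + 1.96 * pooled_se)
        return np.exp(pooled), (lo, hi)

    # Three hypothetical studies of a determinant of home death.
    print(pooled_or(np.array([2.1, 1.8, 2.6]),
                    np.array([1.5, 1.2, 1.7]),
                    np.array([2.9, 2.7, 4.0])))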
Cosmological parameters from a re-analysis of the WMAP 7 year low-resolution maps
NASA Astrophysics Data System (ADS)
Finelli, F.; De Rosa, A.; Gruppuso, A.; Paoletti, D.
2013-06-01
Cosmological parameters from Wilkinson Microwave Anisotropy Probe (WMAP) 7-year data are re-analysed by substituting a pixel-based likelihood estimator for the one delivered publicly by the WMAP team. Our pixel-based estimator handles intensity and polarization exactly and jointly, allowing us to use low-resolution maps and noise covariance matrices in T, Q, U at the same resolution, which in this work is 3.6°. We describe the features and the performance of the code implementing our pixel-based likelihood estimator. We perform a battery of tests on the application of our pixel-based likelihood routine to the publicly available WMAP low-resolution foreground-cleaned products, in combination with the WMAP high-ℓ likelihood, reporting the differences in cosmological parameters with respect to those evaluated by the full public WMAP likelihood package. The differences are due not only to the treatment of polarization, but also to the marginalization over monopole and dipole uncertainties present in the WMAP pixel likelihood code for temperature. The credible central values of the cosmological parameters change by less than 1σ with respect to the evaluation by the full WMAP 7-year likelihood code, the largest difference being a shift to smaller values of the scalar spectral index n_S.
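Schematically, a pixel-based CMB likelihood is a zero-mean multivariate Gaussian in the stacked (T, Q, U) map vector, with covariance C = S(θ) + N built from the signal model and the noise covariance. A minimal sketch of the core evaluation follows (illustrative only; real pipelines add masking and monopole/dipole marginalization, as the abstract notes):

    import numpy as np

    def neg2_log_like(m, C):
        """-2 ln L for a zero-mean Gaussian pixel likelihood.
        m: stacked (T, Q, U) map vector; C: signal-plus-noise covariance.
        A Cholesky factorization gives both C^-1 m and ln det C stably."""
        L = np.linalg.cholesky(C)
        alpha = np.linalg.solve(L, m)               # solves L alpha = m
        logdet = 2.0 * np.sum(np.log(np.diag(L)))
        return alpha @ alpha + logdet

    # Tiny demo with a random symmetric positive-definite covariance.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 50))
    C = A @ A.T + 50 * np.eye(50)
    m = rng.multivariate_normal(np.zeros(50), C)
    print(neg2_log_like(m, C))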
Reliability of Soft Tissue Model Based Implant Surgical Guides; A Methodological Mistake.
Sabour, Siamak; Dastjerdi, Elahe Vahid
2012-08-20
We were interested to read the paper by Maney P and colleagues published in the July 2012 issue of J Oral Implantol. The authors, who aimed to assess the reliability of soft tissue model based implant surgical guides, reported that accuracy was evaluated using software.1 We found the manuscript title of Maney P, et al. incorrect and misleading. Moreover, they reported that twenty-two sites (46.81%) were considered accurate (13 of 24 maxillary and 9 of 23 mandibular sites). As the authors point out in their conclusion, soft tissue models do not always provide sufficient accuracy for implant surgical guide fabrication. Reliability (precision) and validity (accuracy) are two different methodological issues in research. Sensitivity, specificity, PPV, NPV, the positive likelihood ratio (true-positive rate/false-positive rate) and the negative likelihood ratio (false-negative rate/true-negative rate), as well as the diagnostic odds ratio (true results/false results; preferably more than 50), are among the measures used to evaluate the validity (accuracy) of a single test compared to a gold standard.2-4 It is not clear to which of the above-mentioned validity estimates the reported twenty-two accurate sites (46.81%) relate. Reliability (repeatability or reproducibility) is assessed by different statistical tests; using Pearson r, least squares, or the paired t-test is among the common mistakes in reliability analysis.5 Briefly, for quantitative variables the intraclass correlation coefficient (ICC) should be used, and for qualitative variables weighted kappa, with caution, because kappa has its own limitations too. Regarding reliability or agreement, it is good to know that in computing the kappa value only concordant cells are considered, whereas discordant cells should also be taken into account to reach a correct estimate of agreement (weighted kappa).2-4 As a take-home message, for reliability and validity analyses, appropriate tests should be applied.
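To make the letter's recommendation concrete, this is how weighted kappa (for qualitative ratings) can be computed in Python with scikit-learn; the rating data below are invented for illustration:

    from sklearn.metrics import cohen_kappa_score

    # Two raters scoring the same 10 sites on a 3-level accuracy scale (toy data).
    rater1 = [2, 1, 0, 2, 1, 1, 0, 2, 2, 1]
    rater2 = [2, 1, 1, 2, 0, 1, 0, 2, 1, 1]

    kappa = cohen_kappa_score(rater1, rater2)
    wkappa = cohen_kappa_score(rater1, rater2, weights="quadratic")
    print(f"kappa = {kappa:.2f}, quadratic-weighted kappa = {wkappa:.2f}")

Unlike unweighted kappa, the weighted form credits partial agreement on ordinal scales, which is exactly the point the letter raises about discordant cells.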
Clinical Diagnosis of Bordetella Pertussis Infection: A Systematic Review.
Ebell, Mark H; Marchello, Christian; Callahan, Maria
2017-01-01
Bordetella pertussis (BP) is a common cause of prolonged cough. Our objective was to perform an updated systematic review of the clinical diagnosis of BP without restriction by patient age. We identified prospective cohort studies of patients with cough or suspected pertussis and assessed study quality using QUADAS-2. We performed bivariate meta-analysis to calculate summary estimates of accuracy and created summary receiver operating characteristic curves to explore heterogeneity by vaccination status and age. Of 381 studies initially identified, 22 met our inclusion criteria, of which 14 had a low risk of bias. The overall clinical impression was the most accurate predictor of BP (positive likelihood ratio [LR+], 3.3; negative likelihood ratio [LR-], 0.63). The presence of whooping cough (LR+, 2.1) and posttussive vomiting (LR+, 1.7) somewhat increased the likelihood of BP, whereas the absence of paroxysmal cough (LR-, 0.58) and the absence of sputum (LR-, 0.63) decreased it. Whooping cough and posttussive vomiting have lower sensitivity in adults. Clinical criteria defined by the Centers for Disease Control and Prevention were sensitive (0.90) but nonspecific. Typical signs and symptoms of BP may be more sensitive but less specific in vaccinated patients. The clinician's overall impression was the most accurate way to determine the likelihood of BP infection when a patient initially presented. Clinical decision rules that combine signs, symptoms, and point-of-care tests have not yet been developed or validated. © Copyright 2017 by the American Board of Family Medicine.
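The clinical use of such likelihood ratios is Bayes' rule in odds form: posttest odds = pretest odds x LR. A minimal sketch (the 20% pretest probability is an arbitrary illustration, not a figure from the review):

    def posttest_probability(pretest_p, lr):
        """Update a pretest probability with a likelihood ratio (odds form of Bayes)."""
        pretest_odds = pretest_p / (1.0 - pretest_p)
        posttest_odds = pretest_odds * lr
        return posttest_odds / (1.0 + posttest_odds)

    # Positive overall clinical impression (LR+ 3.3) at 20% pretest probability.
    print(f"{posttest_probability(0.20, 3.3):.0%}")   # ~45%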
Significance of parametric spectral ratio methods in detection and recognition of whispered speech
NASA Astrophysics Data System (ADS)
Mathur, Arpit; Reddy, Shankar M.; Hegde, Rajesh M.
2012-12-01
In this article the significance of a new parametric spectral ratio method that can be used to detect whispered speech segments within normally phonated speech is described. Adaptation methods based on maximum likelihood linear regression (MLLR) are then used to realize a mismatched train-test style speech recognition system. The proposed parametric spectral ratio method computes a ratio spectrum of the linear prediction (LP) and minimum variance distortionless response (MVDR) spectra. The smoothed ratio spectrum is then used to detect whispered segments of speech within neutral speech segments effectively. The proposed LP-MVDR ratio method exhibits robustness at different SNRs, as indicated by the whisper diarization experiments conducted on the CHAINS and the cell phone whispered speech corpora. The proposed method also performs better than conventional methods for whisper detection. In order to integrate the proposed whisper detection method into a conventional speech recognition engine with minimal changes, adaptation methods based on MLLR are used herein. The hidden Markov models corresponding to neutral-mode speech are adapted to the whispered-mode speech data in the whispered regions detected by the proposed ratio method. The performance of this method is first evaluated on whispered speech data from the CHAINS corpus. The second set of experiments is conducted on the cell phone corpus of whispered speech. This corpus was collected using a setup that is used commercially for handling public transactions. The proposed whispered speech recognition system performs better than several conventional methods. The results indicate the feasibility of a whispered speech recognition system for cell phone based transactions.
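A compact sketch of the LP and MVDR envelopes and their ratio for one analysis frame, computed from the frame's autocorrelation sequence, is given below (illustrative only; the paper additionally smooths the ratio spectrum before using it for detection):

    import numpy as np
    from scipy.linalg import toeplitz, solve_toeplitz

    def lp_mvdr_ratio(frame, order=12, nfft=256):
        """Ratio of LP to MVDR spectral envelopes for a single frame."""
        r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
        a = solve_toeplitz(r[:order], -r[1:order + 1])   # Yule-Walker LP coefficients
        a = np.concatenate(([1.0], a))
        err = r[0] + a[1:] @ r[1:order + 1]              # prediction error power
        w = np.pi * np.arange(nfft) / nfft
        E = np.exp(-1j * np.outer(w, np.arange(order + 1)))
        lp = err / np.abs(E @ a) ** 2                    # LP (all-pole) envelope
        Rinv = np.linalg.inv(toeplitz(r))
        mvdr = 1.0 / np.real(np.einsum("fi,ij,fj->f", E.conj(), Rinv, E))
        return lp / mvdr

    frame = np.sin(0.3 * np.arange(400)) + 0.1 * np.random.default_rng(0).normal(size=400)
    ratio = lp_mvdr_ratio(frame)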
[Clinical examination and the Valsalva maneuver in heart failure].
Liniado, Guillermo E; Beck, Martín A; Gimeno, Graciela M; González, Ana L; Cianciulli, Tomás F; Castiello, Gustavo G; Gagliardi, Juan A
2018-01-01
Congestion in heart failure patients with reduced ejection fraction (HFrEF) is clinically relevant and closely linked to the clinical course. Bedside blood pressure measurement during the Valsalva maneuver (Val), added to the clinical examination, may improve the assessment of congestion when compared to NT-proBNP levels and left atrial pressure (LAP) estimation by Doppler echocardiography as surrogate markers of congestion in HFrEF. A clinical examination, LAP estimation and blood tests were performed in 69 ambulatory HFrEF patients with left ventricular ejection fraction ≤ 40% and sinus rhythm. The Framingham Heart Failure Score (HFS) was used to evaluate clinical congestion; Val was classified as normal or abnormal, NT-proBNP was classified as low (< 1000 pg/ml) or high (≥ 1000 pg/ml), and the ratio between Doppler early mitral inflow and tissue diastolic velocity was used to estimate LAP, classified as low (E/e' < 15) or high (E/e' ≥ 15). Of the 69 patients included, 27 had an HFS ≥ 2, and 13 of them had high NT-proBNP. An HFS ≥ 2 had 62% sensitivity, 70% specificity and a positive likelihood ratio of 2.08 (p = 0.01) to detect congestion. When Val was added to the clinical examination, the presence of an HFS ≥ 2 and abnormal Val showed 100% sensitivity, 64% specificity and a positive likelihood ratio of 2.8 (p = 0.0004). Compared with LAP, the presence of an HFS ≥ 2 and abnormal Val had 86% sensitivity, 54% specificity and a positive likelihood ratio of 1.86 (p = 0.03). In conclusion, an integrated clinical examination with the addition of the Valsalva maneuver may improve the assessment of congestion in patients with HFrEF.
Parsons, Brendon A; Marney, Luke C; Siegler, W Christopher; Hoggard, Jamin C; Wright, Bob W; Synovec, Robert E
2015-04-07
Comprehensive two-dimensional (2D) gas chromatography coupled with time-of-flight mass spectrometry (GC × GC-TOFMS) is a versatile instrumental platform capable of collecting highly informative, yet highly complex, chemical data for a variety of samples. Fisher-ratio (F-ratio) analysis applied to the supervised comparison of sample classes algorithmically reduces complex GC × GC-TOFMS data sets to find class distinguishing chemical features. F-ratio analysis, using a tile-based algorithm, significantly reduces the adverse effects of chromatographic misalignment and spurious covariance of the detected signal, enhancing the discovery of true positives while simultaneously reducing the likelihood of detecting false positives. Herein, we report a study using tile-based F-ratio analysis whereby four non-native analytes were spiked into diesel fuel at several concentrations ranging from 0 to 100 ppm. Spike level comparisons were performed in two regimes: comparing the spiked samples to the nonspiked fuel matrix and to each other at relative concentration factors of two. Redundant hits were algorithmically removed by refocusing the tiled results onto the original high resolution pixel level data. To objectively limit the tile-based F-ratio results to only features which are statistically likely to be true positives, we developed a combinatorial technique using null class comparisons, called null distribution analysis, by which we determined a statistically defensible F-ratio cutoff for the analysis of the hit list. After applying null distribution analysis, spiked analytes were reliably discovered at ∼1 to ∼10 ppm (∼5 to ∼50 pg using a 200:1 split), depending upon the degree of mass spectral selectivity and 2D chromatographic resolution, with minimal occurrence of false positives. To place the relevance of this work among other methods in this field, results are compared to those for pixel and peak table-based approaches.
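For orientation, the core of F-ratio analysis is a per-feature one-way ANOVA F statistic comparing class means against within-class scatter; in the tile-based variant, tiles of the chromatogram replace single pixels as the features. A minimal two-class sketch (hypothetical arrays standing in for tile-summed GC × GC-TOFMS signals):

    import numpy as np

    def fisher_ratio(class_a, class_b):
        """Per-feature F-ratio: between-class variance over within-class variance.
        class_a, class_b: (n_samples, n_features) arrays of tile-summed signals."""
        n_a, n_b = len(class_a), len(class_b)
        mean_a, mean_b = class_a.mean(axis=0), class_b.mean(axis=0)
        grand = np.vstack([class_a, class_b]).mean(axis=0)
        between = n_a * (mean_a - grand) ** 2 + n_b * (mean_b - grand) ** 2  # df = 1
        within = ((n_a - 1) * class_a.var(axis=0, ddof=1)
                  + (n_b - 1) * class_b.var(axis=0, ddof=1)) / (n_a + n_b - 2)
        return between / within

    rng = np.random.default_rng(1)
    spiked = rng.normal(1.0, 0.1, size=(6, 1000))
    spiked[:, 42] += 0.8                      # one "spiked analyte" feature
    blank = rng.normal(1.0, 0.1, size=(6, 1000))
    print(np.argmax(fisher_ratio(spiked, blank)))   # 42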
Tests for detecting overdispersion in models with measurement error in covariates.
Yang, Yingsi; Wong, Man Yu
2015-11-30
Measurement error in covariates can affect the accuracy of count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on the approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
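As background for the score tests discussed above, the classical no-measurement-error version is Dean's score test of Var(y) = μ against Var(y) = μ(1 + τμ) after a Poisson GLM fit; the statistic is asymptotically standard normal under equidispersion. A sketch on simulated data (illustrative of the baseline test only, not the authors' modified, error-corrected tests):

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    x = rng.normal(size=(200, 2))
    mu_true = np.exp(0.5 + x @ np.array([0.3, -0.2]))
    y = rng.negative_binomial(2, 2 / (2 + mu_true))     # overdispersed counts

    fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Poisson()).fit()
    mu = fit.fittedvalues
    stat = ((y - mu) ** 2 - y).sum() / np.sqrt(2 * (mu ** 2).sum())
    print(f"score statistic = {stat:.2f}, one-sided p = {norm.sf(stat):.3g}")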
Maini, Rohit; Moscona, John; Katigbak, Paul; Fernandez, Camilo; Sidhu, Gursukhmandeep; Saleh, Qusai; Irimpen, Anand; Samson, Rohan; LeJemtel, Thierry
2017-12-27
Fractional flow reserve (FFR) remains underutilized due to practical concerns related to the need for hyperemic agents. These concerns have prompted the study of the instantaneous wave-free ratio (iFR), a vasodilator-free index of coronary stenosis. Non-inferior cardiovascular outcomes have been demonstrated in two recent randomized clinical trials. We performed this meta-analysis to provide a necessary update of the diagnostic accuracy of iFR referenced to FFR, based on the addition of eight more recent studies and 3727 more lesions. We searched the PubMed, EMBASE, Central, ProQuest, and Web of Science databases for full-text articles published through May 31, 2017 to identify studies addressing the diagnostic accuracy of iFR referenced to FFR≤0.80. The following keywords were used: "instantaneous wave-free ratio" OR "iFR" AND "fractional flow reserve" OR "FFR." In total, 16 studies comprising 5756 lesions were identified. Pooled diagnostic accuracy estimates of iFR versus FFR≤0.80 were: sensitivity, 0.78 (95% CI, 0.76-0.79); specificity, 0.83 (0.81-0.84); positive likelihood ratio, 4.54 (3.85-5.35); negative likelihood ratio, 0.28 (0.24-0.32); diagnostic odds ratio, 17.38 (14.16-21.34); area under the summary receiver-operating characteristic curve, 0.87; and an overall diagnostic accuracy of 0.81 (0.78-0.84). In conclusion, iFR showed excellent agreement with FFR as a resting index of coronary stenosis severity without the undesired effects and cost of hyperemic agents. Considered along with its clinical outcome data and ease of application, the diagnostic accuracy of iFR supports its use as a suitable alternative to FFR for physiology-guided revascularization of moderate coronary stenoses. Copyright © 2017. Published by Elsevier Inc.
Diffuse prior monotonic likelihood ratio test for evaluation of fused image quality measures.
Wei, Chuanming; Kaplan, Lance M; Burks, Stephen D; Blum, Rick S
2011-02-01
This paper introduces a novel method to score how well proposed fused image quality measures (FIQMs) indicate the effectiveness of humans to detect targets in fused imagery. The human detection performance is measured via human perception experiments. A good FIQM should relate to perception results in a monotonic fashion. The method computes a new diffuse prior monotonic likelihood ratio (DPMLR) to facilitate the comparison of the H(1) hypothesis that the intrinsic human detection performance is related to the FIQM via a monotonic function against the null hypothesis that the detection and image quality relationship is random. The paper discusses many interesting properties of the DPMLR and demonstrates the effectiveness of the DPMLR test via Monte Carlo simulations. Finally, the DPMLR is used to score FIQMs with test cases considering over 35 scenes and various image fusion algorithms.
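A frequentist cousin of the DPMLR idea can be sketched with isotonic regression: fit the best monotone curve of detection performance versus the FIQM and compare its fit against a no-relationship baseline. This is only an analogue for intuition, not the paper's Bayesian diffuse-prior statistic, and all numbers below are invented:

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    fiqm = np.array([0.21, 0.35, 0.42, 0.55, 0.61, 0.74, 0.88])   # image quality scores
    pdet = np.array([0.30, 0.42, 0.40, 0.58, 0.66, 0.71, 0.90])   # human detection rates

    iso = IsotonicRegression(increasing=True).fit(fiqm, pdet)
    rss_mono = np.sum((pdet - iso.predict(fiqm)) ** 2)
    rss_null = np.sum((pdet - pdet.mean()) ** 2)
    print(f"monotone RSS = {rss_mono:.4f} vs null RSS = {rss_null:.4f}")

A much smaller residual under the monotone fit plays the same role as a large likelihood ratio in favor of H(1).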
[Accuracy of three methods for the rapid diagnosis of oral candidiasis].
Lyu, X; Zhao, C; Yan, Z M; Hua, H
2016-10-09
Objective: To explore a simple, rapid and efficient method for the diagnosis of oral candidiasis in clinical practice. Methods: A total of 124 consecutive patients with suspected oral candidiasis were enrolled from the Department of Oral Medicine, Peking University School and Hospital of Stomatology, Beijing, China. Exfoliated cells of the oral mucosa and saliva (or concentrated oral rinse) obtained from all participants were tested by three rapid smear methods (10% KOH smear, Gram-stained smear, Congo red stained smear). The diagnostic efficacy (sensitivity, specificity, Youden's index, likelihood ratio, consistency, predictive value and area under the curve (AUC)) of each of the above-mentioned three methods was assessed by comparing the results with the gold standard (combination of clinical diagnosis, laboratory diagnosis and expert opinion). Results: The Gram-stained smear of saliva (or concentrated oral rinse) demonstrated the highest sensitivity (82.3%). The 10% KOH smear of exfoliated cells showed the highest specificity (93.5%). The Congo red stained smear of saliva (or concentrated oral rinse) displayed the highest diagnostic efficacy (79.0% sensitivity, 80.6% specificity, 0.60 Youden's index, 4.08 positive likelihood ratio, 0.26 negative likelihood ratio, 80% consistency, 80.3% positive predictive value, 79.4% negative predictive value and 0.80 AUC). Conclusions: The Congo red stained smear of saliva (or concentrated oral rinse) could be used as a point-of-care tool for the rapid diagnosis of oral candidiasis in clinical practice. Trial registration: Chinese Clinical Trial Registry, ChiCTR-DDD-16008118.
Recognition of depressive symptoms by physicians.
Henriques, Sergio Gonçalves; Fráguas, Renério; Iosifescu, Dan V; Menezes, Paulo Rossi; Lucia, Mara Cristina Souza de; Gattaz, Wagner Farid; Martins, Milton Arruda
2009-01-01
To investigate the recognition of depressive symptoms of major depressive disorder (MDD) by general practitioners. MDD is underdiagnosed in medical settings, possibly because of difficulties in the recognition of specific depressive symptoms. A cross-sectional study of 316 outpatients at their first visit to a teaching general hospital. We evaluated the performance of 19 general practitioners using Primary Care Evaluation of Mental Disorders (PRIME-MD) to detect depressive symptoms and compared them to 11 psychiatrists using Structured Clinical Interview Axis I Disorders, Patient Version (SCID I/P). We measured likelihood ratios, sensitivity, specificity, and false positive and false negative frequencies. The lowest positive likelihood ratios were for psychomotor agitation/retardation (1.6) and fatigue (1.7), mostly because of a high rate of false positive results. The highest positive likelihood ratio was found for thoughts of suicide (8.5). The lowest sensitivity, 61.8%, was found for impaired concentration. The sensitivity for worthlessness or guilt in patients with medical illness was 67.2% (95% CI, 57.4-76.9%), which is significantly lower than that found in patients without medical illness, 91.3% (95% CI, 83.2-99.4%). Less adequately identified depressive symptoms were both psychological and somatic in nature. The presence of a medical illness may decrease the sensitivity of recognizing specific depressive symptoms. Programs for training physicians in the use of diagnostic tools should consider their performance in recognizing specific depressive symptoms. Such procedures could allow for the development of specific training to aid in the detection of the most misrecognized depressive symptoms.
Diagnostic accuracy of history and physical examination in bacterial acute rhinosinusitis.
Autio, Timo J; Koskenkorva, Timo; Närkiö, Mervi; Leino, Tuomo K; Koivunen, Petri; Alho, Olli-Pekka
2015-07-01
To evaluate the diagnostic accuracy of symptoms, the symptom progression pattern, and clinical signs in identifying bacterial acute rhinosinusitis (ARS). We conducted an inception cohort study among 50 military recruits with ARS. We collected symptoms daily from the onset of symptoms to approximately 10 days. At 9 to 10 days, standardized data on symptoms and physical findings were gathered. A positive culture of maxillary sinus aspirate was considered to be the reference standard for bacterial ARS. At 9 to 10 days, the presence or deterioration after 5 days of any of the symptoms could not be used to diagnose bacterial ARS. Toothache had an adequate positive likelihood ratio (positive likelihood ratio [LR+] 4.4) but was too rare to be used for screening. In contrast, several physical findings at 9 to 10 days were of more diagnostic use and frequent enough for screening. Moderate or profuse (vs. none/minimal) amount of secretion in nasal passage seen in anterior rhinoscopy satisfactorily either ruled in, if present (LR+ 3.2), or ruled out, if absent (negative likelihood ratio 0.2), bacterial ARS. If any secretion was seen in the posterior pharynx or middle meatus, the probability of bacterial ARS increased markedly (LR+ 5.3 and LR+ 11.0, respectively). We found symptoms or their change to be of little use in identifying bacterial ARS. In contrast, we observed several clinical findings after 9 to 10 days of symptoms to predict bacterial ARS quite accurately. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
The Diagnostic Accuracy of Cytology for the Diagnosis of Hepatobiliary and Pancreatic Cancers.
Al-Hajeili, Marwan; Alqassas, Maryam; Alomran, Astabraq; Batarfi, Bashaer; Basunaid, Bashaer; Alshail, Reem; Alaydarous, Shahad; Bokhary, Rana; Mosli, Mahmoud
2018-06-13
Although cytology testing is considered a valuable method to diagnose tumors that are difficult to access such as hepato-biliary-pancreatic (HBP) malignancies, its diagnostic accuracy remains unclear. We therefore aimed to investigate the diagnostic accuracy of cytology testing for HBP tumors. We performed a retrospective study of all cytology samples that were used to confirm radiologically detected HBP tumors between 2002 and 2016. The cytology techniques used in our center included fine needle aspiration (FNA), brush cytology, and aspiration of bile. Sensitivity, specificity, positive and negative predictive values, and likelihood ratios were calculated in comparison to histological confirmation. From a total of 133 medical records, we calculated an overall sensitivity of 76%, specificity of 74%, a negative likelihood ratio of 0.30, and a positive likelihood ratio of 2.9. Cytology was more accurate in diagnosing lesions of the liver (sensitivity 79%, specificity 57%) and biliary tree (sensitivity 100%, specificity 50%) compared to pancreatic (sensitivity 60%, specificity 83%) and gallbladder lesions (sensitivity 50%, specificity 85%). Cytology was more accurate in detecting primary cancers (sensitivity 77%, specificity 73%) when compared to metastatic cancers (sensitivity 73%, specificity 100%). FNA was the most frequently used cytological technique to diagnose HBP lesions (sensitivity 78.8%). Cytological testing is efficient in diagnosing HBP cancers, especially for hepatobiliary tumors. Given its relative simplicity, cost-effectiveness, and paucity of alternative diagnostic methods, cytology should still be considered as a first-line tool for diagnosing HBP malignancies. © 2018 S. Karger AG, Basel.
Kim, T J; Roesler, N M; von dem Knesebeck, O
2017-06-01
Numerous studies have investigated the association between education and overweight/obesity. Yet less is known about the relative importance of the causation (i.e. the influence of education on the risk of overweight/obesity) and selection (i.e. the influence of overweight/obesity on the likelihood of attaining education) hypotheses. A systematic review was performed to assess the linkage between education and overweight/obesity in prospective studies in general populations. Studies were searched within five databases, and study quality was appraised with the Newcastle-Ottawa scale. In total, 31 studies were considered for meta-analysis. Regarding causation (24 studies), the lower educated had a higher likelihood (odds ratio: 1.33, 1.21-1.47) and greater risk (risk ratio: 1.34, 1.08-1.66) of overweight/obesity when compared with the higher educated. However, these associations were no longer statistically significant when accounting for publication bias. Concerning selection (seven studies), overweight/obese individuals had a greater likelihood of lower education (odds ratio: 1.57, 1.10-2.25) when contrasted with the non-overweight or non-obese. Subgroup analyses were performed by stratifying the meta-analyses on different factors. Relationships between education and overweight/obesity were affected by study region, age group, gender and observation period. In conclusion, it is necessary to consider both causation and selection processes in order to tackle educational inequalities in obesity appropriately. © 2017 World Obesity Federation.
Agrawal, Swati; Cerdeira, Ana Sofia; Redman, Christopher; Vatish, Manu
2018-02-01
Preeclampsia is a major cause of morbidity and mortality worldwide. Numerous candidate biomarkers have been proposed for the diagnosis and prediction of preeclampsia. Measurement of the maternal circulating angiogenesis biomarker ratio of sFlt-1 (soluble FMS-like tyrosine kinase-1; an antiangiogenic factor) to PlGF (placental growth factor; an angiogenic factor) reflects the antiangiogenic balance that characterizes incipient or overt preeclampsia. The ratio increases before the onset of the disease and thus may help in predicting preeclampsia. We conducted a meta-analysis to explore the predictive accuracy of the sFlt-1/PlGF ratio in preeclampsia. We included 15 studies with 534 cases with preeclampsia and 19 587 controls. The ratio has a pooled sensitivity of 80% (95% confidence interval, 0.68-0.88), specificity of 92% (95% confidence interval, 0.87-0.96), positive likelihood ratio of 10.5 (95% confidence interval, 6.2-18.0), and a negative likelihood ratio of 0.22 (95% confidence interval, 0.13-0.35) in predicting preeclampsia in both high- and low-risk patients. Most of the studies did not distinguish between early- and late-onset disease, so that analysis could not be performed. The ratio can prove to be a valuable screening tool for preeclampsia and may also help in decision-making, treatment stratification, and better resource allocation. © 2017 American Heart Association, Inc.
Allele-sharing models: LOD scores and accurate linkage tests.
Kong, A; Cox, N J
1997-11-01
Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
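For context, a LOD score is simply a base-10 log-likelihood ratio comparing linkage at a recombination fraction θ against free recombination (θ = 1/2). A minimal sketch with hypothetical log-likelihood values:

    import numpy as np

    def lod_score(loglik_linked, loglik_unlinked):
        """LOD = log10 of L(theta) / L(theta = 1/2), given natural-log likelihoods."""
        return (loglik_linked - loglik_unlinked) / np.log(10.0)

    # Hypothetical ln-likelihoods from a pedigree computation.
    print(f"LOD = {lod_score(-120.3, -127.9):.2f}")   # 3.30; >3 is the classic threshold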
Distribution of Model-based Multipoint Heterogeneity Lod Scores
Xing, Chao; Morris, Nathan; Xing, Guan
2011-01-01
The distribution of two-point heterogeneity LOD scores (HLOD) has been intensively investigated because the conventional χ2 approximation to the likelihood ratio test is not directly applicable. However, there has been no study investigating the distribution of the multipoint HLOD despite its wide application. Here we point out that, compared with the two-point HLOD, the multipoint HLOD essentially tests for homogeneity given linkage and follows a relatively simple limiting distribution, (1/2)χ2(0) + (1/2)χ2(1), an equal mixture of a point mass at zero and a χ2 distribution with 1 degree of freedom, which can be obtained by established statistical theory. We further examine the theoretical result by simulation studies. PMID:21104892
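The mixture distribution has a practical consequence: for a positive observed statistic, the p-value is half the usual χ2(1) tail probability. A one-line check (a minimal sketch):

    from scipy.stats import chi2

    def hlod_mixture_pvalue(stat):
        """P-value under the (1/2)chi2(0) + (1/2)chi2(1) limiting distribution.
        chi2(0) is a point mass at zero, so only the chi2(1) half contributes."""
        return 1.0 if stat <= 0 else 0.5 * chi2.sf(stat, df=1)

    print(f"{hlod_mixture_pvalue(3.84):.4f}")   # ~0.025, half the chi2(1) p-value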
Factors predicting a home death among home palliative care recipients
Ko, Ming-Chung; Huang, Sheng-Jean; Chen, Chu-Chieh; Chang, Yu-Ping; Lien, Hsin-Yi; Lin, Jia-Yi; Woung, Lin-Chung; Chan, Shang-Yih
2017-01-01
Awareness of the factors affecting the place of death could improve communication between healthcare providers and patients and their families regarding patient preferences and the feasibility of dying in the preferred place. This study aimed to evaluate factors predicting a home death among home palliative care recipients. This is a population-based study using a nationally representative sample retrieved from the National Health Insurance Research Database. Subjects receiving home palliative care from 2010 to 2012 were analyzed to evaluate the association between a home death and various characteristics related to the illness, the individual, and health care utilization. A multiple logistic regression model was used to assess the independent effect of various characteristics on the likelihood of a home death. The overall rate of a home death for home palliative care recipients was 43.6%. Age; gender; urbanization of the area where the patients lived; illness; the total number of home visits by all health care professionals; the number of home visits by nurses; utilization of a nasogastric tube, endotracheal tube, or indwelling urinary catheter; the number of emergency department visits; and admission to an intensive care unit in the previous year were not significantly associated with the likelihood of a home death. Physician home visits increased the likelihood of a home death. Compared with subjects without physician home visits (31.4%), those with 1 physician home visit (53.0%, adjusted odds ratio [AOR]: 3.23, 95% confidence interval [CI]: 1.93–5.42) and those with ≥2 physician home visits (43.9%, AOR: 2.23, 95% CI: 1.06–4.70) had a higher likelihood of a home death. Compared with subjects hospitalized 0 to 6 times in the previous year, those hospitalized ≥7 times (AOR: 0.57, 95% CI: 0.34–0.95) had a lower likelihood of a home death. Among home palliative care recipients, physician home visits increased the likelihood of a home death, whereas ≥7 hospitalizations in the previous year decreased it. PMID:29019887
Research on the strategy of underwater united detection fusion and communication using multi-sensor
NASA Astrophysics Data System (ADS)
Xu, Zhenhua; Huang, Jianguo; Huang, Hai; Zhang, Qunfei
2011-09-01
To solve the distributed detection fusion problem in underwater target detection when the signal-to-noise ratio (SNR) of the acoustic channel is low, a new strategy for united detection fusion and communication using multiple sensors is proposed. The performance of detection fusion was studied and compared, based on the Neyman-Pearson principle, when binary phase shift keying (BPSK) and on-off keying (OOK) modes were used by the local sensors. A comparative simulation and analysis of the optimal likelihood ratio test and the proposed strategy was completed, and both the theoretical analysis and the simulation indicate that the proposed strategy can improve detection performance effectively. In theory, the proposed strategy of united detection fusion and communication is of great significance for the establishment of an underwater target detection system.
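For binary local decisions, the optimal likelihood ratio test used as the benchmark here reduces to the classical Chair-Varshney fusion rule: sum per-sensor log-likelihood-ratio weights and compare against a threshold. A minimal sketch with invented sensor operating points (illustrating the benchmark, not the paper's proposed strategy):

    import numpy as np

    def chair_varshney(decisions, pd, pf, prior_h1=0.5):
        """Optimal fusion of binary local decisions u_i in {0, 1}, given each
        sensor's detection probability pd_i and false-alarm probability pf_i."""
        decisions, pd, pf = map(np.asarray, (decisions, pd, pf))
        w1 = np.log(pd / pf)                    # weight when a sensor says "target"
        w0 = np.log((1 - pd) / (1 - pf))        # weight when a sensor says "no target"
        llr = np.where(decisions == 1, w1, w0).sum()
        threshold = np.log((1 - prior_h1) / prior_h1)
        return int(llr > threshold)

    print(chair_varshney([1, 1, 0], pd=[0.9, 0.8, 0.7], pf=[0.05, 0.1, 0.1]))   # 1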
Omar, Mohammad Ali; Laniado, Marc
2017-01-01
Introduction There are limited studies evaluating the 3 Incontinence Questionnaire (3IQ) against urodynamics-based diagnosis as a reference standard. The 3IQ has been proposed to be useful for evaluating women at the level of primary care. The aim of this study was to determine the correlation between the 3IQ and video-urodynamics (VUDS) in diagnosing types of urinary incontinence. Material and methods Prospective data were collected on 200 consecutive female patients referred by primary care physicians for urinary incontinence. The mean age was 55 years (range 15–83 years). The patients were evaluated using the 3IQ and video-urodynamics. The 3IQ-based diagnosis of the type of female urinary incontinence was compared to the VUDS-based results. Sensitivity, specificity, positive likelihood ratios and positive predictive values were calculated. Results On 3IQ-based self-evaluation, 28% of patients were classified as having stress urinary incontinence, 20% urge incontinence and 40% mixed incontinence. On video-urodynamics, urodynamic stress urinary incontinence (UDSUI) was detected in 56% of patients, detrusor overactivity (DO) in 15% and mixed urinary incontinence (MUI) in 19%. The 3IQ had a sensitivity and specificity, respectively, of 43% and 92% for UDSUI, 57% and 86% for DO, and 58% and 64% for MUI. The corresponding positive likelihood ratios (CI, 95%) were 5.4 (CI 2.6 to 11.3) for stress urinary incontinence, 4.0 (CI 2.5 to 6.5) for DO and 1.62 (1.2 to 2.3) for MUI. The respective positive predictive values were 87% (CI 75% to 95%), 42% (CI 26% to 58%) and 28% (18% to 39%). Conclusions In our study population, stress urinary incontinence was reasonably well predicted by the 3IQ, but the questionnaire under-performed in the diagnoses of detrusor overactivity and mixed urinary incontinence. PMID:29732212
A maximum likelihood convolutional decoder model vs experimental data comparison
NASA Technical Reports Server (NTRS)
Chen, R. Y.
1979-01-01
This article describes the comparison of a maximum likelihood convolutional decoder (MCD) prediction model and the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model is used to develop a subroutine that has been utilized by the Telemetry Analysis Program (TAP) to compute the MCD bit error rate for a given signal-to-noise ratio. The results indicate that the TAP predictions agree quite well with the experimental measurements. An optimal modulation index can also be found through TAP.
Gan, Fah Fatt; Tang, Xu; Zhu, Yexin; Lim, Puay Weng
2017-06-01
The traditional variable life-adjusted display (VLAD) is a graphical display of the difference between expected and actual cumulative deaths. The VLAD assumes binary outcomes: death within 30 days of an operation or survival beyond 30 days. Full recovery and bedridden for life, for example, are considered the same outcome. This binary classification results in a great loss of information. Although there are many grades of survival, binary outcomes are commonly used to classify surgical outcomes, and consequently quality monitoring procedures have been developed based on binary outcomes. With a more refined set of outcomes, the sensitivities of these procedures can be expected to improve. A likelihood ratio method is used to define a penalty-reward scoring system based on three or more surgical outcomes for the new VLAD. The likelihood ratio statistic W is based on testing the odds ratio of the cumulative probabilities of recovery R. Two methods of implementing the new VLAD are proposed. We accumulate the statistic W - W̄R to estimate the performance of a surgeon, where W̄R is the average of the W's of a historical data set. The accumulated sum will be zero on the historical data set; this ensures that if a new VLAD is plotted for a future surgeon whose performance is similar to this average, the plot will exhibit a horizontal trend. To illustrate the new VLAD, we consider 3-outcome surgical results: death within 30 days, partial recovery, and full recovery. In our first illustration, we show the effect of partial recoveries on the surgical results of a surgeon. In our second and third illustrations, the surgical results of two surgeons are compared using both the traditional VLAD based on binary-outcome data and the new VLAD based on 3-outcome data. A reversal in the relative performance of the surgeons is observed when the new VLAD is used. In our final illustration, we display the surgical results of four surgeons using the new VLAD based entirely on 3-outcome data. Full recovery and bedridden for life are two completely different outcomes; there is a great loss of information when different grades of 'successful' operations are naively classified as survival. When surgical outcomes are classified more accurately into more than two categories, the resulting new VLAD reveals the surgical results more accurately and fairly. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J
2013-01-01
Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties in the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters, we introduce profile likelihoods for diffusion processes. As a proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
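Profile likelihoods are conceptually simple: fix one parameter on a grid and re-optimize all the others at each grid point. A generic sketch is given below (illustrative only; the paper's inner problem is a PDE-constrained fit, for which the toy objective here is just a stand-in):

    import numpy as np
    from scipy.optimize import minimize

    def profile_likelihood(neg_loglik, theta_hat, idx, grid):
        """Profile neg_loglik over parameter `idx`, re-optimizing the rest."""
        values = []
        for v in grid:
            obj = lambda free, v=v: neg_loglik(np.insert(free, idx, v))
            free0 = np.delete(theta_hat, idx)
            values.append(minimize(obj, free0, method="Nelder-Mead").fun)
        return np.array(values)

    # Toy 2-parameter objective standing in for the PDE-constrained likelihood.
    nll = lambda t: (t[0] - 1.0) ** 2 + 0.5 * (t[1] + 2.0) ** 2 + 0.1 * t[0] * t[1]
    grid = np.linspace(0.0, 2.0, 5)
    print(profile_likelihood(nll, np.array([1.0, -2.0]), 0, grid))

Where the profile rises steeply on both sides of the optimum the parameter is practically identifiable; a flat profile signals the kind of non-identifiability the abstract warns about.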
Laurence, B.; Haywood, C; Lanzkron, S.
2014-01-01
Objective To determine if dental infections increase the likelihood of hospital admission among adult patients with sickle cell disease (SCD). Basic Research Design Cross-sectional analysis of data from the Nationwide Emergency Department Sample (NEDS) pooled for the years 2006 through 2008. Prevalence ratios (PR) for the effects of interest were estimated using Poisson regression with robust estimates of the variance. Participants Adults, aged 18 and over, diagnosed with SCD using ICD-9-CM codes, excluding participants discharged with a code for sickle cell trait. Main outcome measure Emergency department (ED) visit disposition, dichotomised as whether or not the ED visit ended in admission versus treatment and release. Results Among patients having a sickle cell crisis, those with dental infections were 72% more likely to be admitted than those without dental infections (PR=1.72, 95%CI 1.58-1.87). No association was observed among adult SCD patients not having a sickle cell crisis event. Based on preliminary data from this analysis, prevention of dental infection among patients with SCD could result in an estimated cost saving of $2.5 million per year. Conclusions Having a dental infection complicated by a sickle cell crisis significantly increases the likelihood of hospital admission among adult SCD patients presenting to the ED. PMID:24151791
Pal, Suvra; Balakrishnan, N
2017-10-01
In this paper, we consider a competing cause scenario and assume the number of competing causes to follow a Conway-Maxwell Poisson distribution, which can capture both the over-dispersion and under-dispersion usually encountered in discrete data. Assuming that the population of interest has a cured component and that the data are interval censored, as opposed to the usually considered right-censored data, the main contribution is in developing the steps of the expectation maximization algorithm for the determination of the maximum likelihood estimates of the model parameters of the flexible Conway-Maxwell Poisson cure rate model with Weibull lifetimes. An extensive Monte Carlo simulation study is carried out to demonstrate the performance of the proposed estimation method. Model discrimination within the Conway-Maxwell Poisson family is addressed using the likelihood ratio test and information-based criteria to select a suitable competing cause distribution that provides the best fit to the data. A simulation study is also carried out to demonstrate the loss in efficiency when selecting an improper competing cause distribution, which justifies the use of a flexible family of distributions for the number of competing causes. Finally, the proposed methodology and the flexibility of the Conway-Maxwell Poisson distribution are illustrated with two known data sets from the literature: smoking cessation data and breast cosmesis data.
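For reference, the Conway-Maxwell Poisson pmf is P(N = n) = λ^n / (n!)^ν / Z(λ, ν), where ν < 1 yields over-dispersion, ν > 1 under-dispersion, and ν = 1 recovers the Poisson. A minimal sketch with the normalizing constant truncated numerically (log-space for stability):

    import numpy as np

    def com_poisson_pmf(n, lam, nu, n_max=200):
        """Conway-Maxwell Poisson pmf with a truncated normalizer Z(lam, nu)."""
        ks = np.arange(n_max + 1)
        log_fact = np.cumsum(np.log(np.maximum(ks, 1)))   # log k! for k = 0..n_max
        log_terms = ks * np.log(lam) - nu * log_fact
        log_z = np.logaddexp.reduce(log_terms)
        return np.exp(log_terms[n] - log_z)

    print(f"{com_poisson_pmf(3, lam=2.0, nu=0.5):.4f}")   # an overdispersed case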
Wakefield, M. A.; Spittal, M. J.; Yong, H-H.; Durkin, S. J.; Borland, R.
2011-01-01
Objective: To assess the extent to which intensity and timing of televised anti-smoking advertising emphasizing the serious harms of smoking influences quit attempts. Methods: Using advertising gross rating points (GRPs), we estimated exposure to tobacco control and nicotine replacement therapy (NRT) advertising in the 3, 4–6, 7–9 and 10–12 months prior to follow-up of a replenished cohort of 3037 Australian smokers during 2002–08. Using generalized estimating equations, we related the intensity and timing of advertising exposure from each source to the likelihood of making a quit attempt in the 3 months prior to follow-up. Results: Tobacco control advertising in the 3-month period prior to follow-up, but not in more distant past periods, was related to a higher likelihood of making a quit attempt. Each 1000 GRP increase per quarter was associated with an 11% increase in making a quit attempt [odds ratio (OR) = 1.11, 95% confidence interval (CI) 1.03–1.19, P = 0.009)]. NRT advertising was unrelated to quit attempts. Conclusions: Tobacco control advertising emphasizing the serious harms of smoking is associated with short-term increases in the likelihood of smokers making a quit attempt. Repeated cycles of higher intensity tobacco control media campaigns are needed to sustain high levels of quit attempts. PMID:21730252
Pan, Chen-Wei; Liu, Hu; Sun, Hong-Peng; Xu, Yong
2015-01-01
Managing stairs is a challenging aspect of the activities of daily living for older people. We assessed whether older adults with visual impairment (VI) have greater difficulty managing stairs in daily life. This was a community-based cross-sectional study of a Chinese cohort aged 60 years and older in rural China. Visual acuity (VA) was measured in both eyes using a retro-illuminated Snellen chart with tumbling-E optotypes. VI (including blindness) was defined as presenting VA worse than 20/60 in either eye. Having any difficulty managing stairs was self-reported based on a question drawn from the Barthel Index. Information on participants' socioeconomic status, lifestyle-related factors, disease histories and medication intake was collected using a questionnaire. The Barthel Index Activities of Daily Living questionnaire was completed by 4597 (99.7%) participants, including 2218 men and 2379 women. The age of the participants ranged from 60 to 93 years, with a mean of 67.6 ± 6.3 years. In age- and gender-adjusted models, adults with VI had a higher likelihood of having difficulty managing stairs (odds ratio [OR] = 2.7; 95% confidence interval [CI] 2.0, 3.7) compared with those without. The association of VI with the likelihood of having difficulty managing stairs was stronger in older adults who lived alone (OR = 3.2; 95% CI 1.8, 4.5) than in those who lived with other family members (OR = 2.0; 95% CI 1.3, 4.3). Compared with hypertension, diabetes, obesity and cognitive dysfunction, VI had the greatest impact on people's ability to manage stairs. VI was associated with an increased likelihood of having difficulty managing stairs, especially in those who lived alone. However, whether this finding can be extrapolated to other populations warrants further study, as different environmental exposures such as illumination and types of stairs may alter the association observed here.
Diagnostic value of 3D time-of-flight MRA in trigeminal neuralgia.
Cai, Jing; Xin, Zhen-Xue; Zhang, Yu-Qiang; Sun, Jie; Lu, Ji-Liang; Xie, Feng
2015-08-01
The aim of this meta-analysis was to evaluate the diagnostic value of 3D time-of-flight magnetic resonance angiography (3D-TOF-MRA) in trigeminal neuralgia (TN). Relevant studies were identified by computerized database searches supplemented by manual search strategies. The studies were included in accordance with stringent inclusion and exclusion criteria. Following a multistep screening process, high quality studies related to the diagnostic value of 3D-TOF-MRA in TN were selected for meta-analysis. Statistical analyses were conducted using Statistical Analysis Software (version 8.2; SAS Institute, Cary, NC, USA) and Meta Disc (version 1.4; Unit of Clinical Biostatistics, Ramon y Cajal Hospital, Madrid, Spain). For the present meta-analysis, we initially retrieved 95 studies from database searches. A total of 13 studies were eventually enrolled containing a combined total of 1084 TN patients. The meta-analysis results demonstrated that the sensitivity and specificity of the diagnostic value of 3D-TOF-MRA in TN were 95% (95% confidence interval [CI] 0.93-0.96) and 77% (95% CI 0.66-0.86), respectively. The pooled positive likelihood ratio and negative likelihood ratio were 2.72 (95% CI 1.81-4.09) and 0.08 (95% CI 0.06-0.12), respectively. The pooled diagnostic odds ratio of 3D-TOF-MRA in TN was 52.92 (95% CI 26.39-106.11), and the corresponding area under the curve in the summary receiver operating characteristic curve based on the 3D-TOF-MRA diagnostic image of observers was 0.9695 (standard error 0.0165). Our results suggest that 3D-TOF-MRA has excellent sensitivity and specificity as a diagnostic tool for TN, and that it can accurately identify neurovascular compression in TN patients. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dhooria, Sahajal; Aggarwal, Ashutosh N; Gupta, Dheeraj; Behera, Digambar; Agarwal, Ritesh
2015-07-01
The use of endoscopic ultrasound with bronchoscope-guided fine-needle aspiration (EUS-B-FNA) has been described in the evaluation of mediastinal lymphadenopathy. Herein, we conduct a meta-analysis to estimate the overall diagnostic yield and safety of EUS-B-FNA combined with endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA), in the diagnosis of mediastinal lymphadenopathy. The PubMed and EmBase databases were searched for studies reporting the outcomes of EUS-B-FNA in diagnosis of mediastinal lymphadenopathy. The study quality was assessed using the QualSyst tool. The yield of EBUS-TBNA alone and the combined procedure (EBUS-TBNA and EUS-B-FNA) were analyzed by calculating the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio for each study, and pooling the study results using a random effects model. Heterogeneity and publication bias were assessed for individual outcomes. The additional diagnostic gain of EUS-B-FNA over EBUS-TBNA was calculated using proportion meta-analysis. Our search yielded 10 studies (1,080 subjects with mediastinal lymphadenopathy). The sensitivity of the combined procedure was significantly higher than EBUS-TBNA alone (91% vs 80%, P = .004), in staging of lung cancer (4 studies, 465 subjects). The additional diagnostic gain of EUS-B-FNA over EBUS-TBNA was 7.6% in the diagnosis of mediastinal adenopathy. No serious complication of EUS-B-FNA procedure was reported. Clinical and statistical heterogeneity was present without any evidence of publication bias. Combining EBUS-TBNA and EUS-B-FNA is an effective and safe method, superior to EBUS-TBNA alone, in the diagnosis of mediastinal lymphadenopathy. Good quality randomized controlled trials are required to confirm the results of this systematic review. Copyright © 2015 by Daedalus Enterprises.
Potential diagnostic value of serum p53 antibody for detecting colorectal cancer: A meta-analysis.
Meng, Rongqin; Wang, Yang; He, Liang; He, Yuanqing; Du, Zedong
2018-04-01
Numerous studies have assessed the diagnostic value of serum p53 (s-p53) antibody in patients with colorectal cancer (CRC); however, results remain controversial. The present study aimed to comprehensively and quantitatively summarize the potential diagnostic value of the s-p53 antibody in CRC. Databases, including PubMed and EmBase, were systematically searched for studies regarding s-p53 antibody diagnosis of CRC published on or prior to 31 July 2016. The quality of all included studies was assessed using the quality assessment of studies of diagnostic accuracy (QUADAS) tool. Pooled sensitivity, pooled specificity, positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were analyzed, together with overall accuracy measures using diagnostic odds ratios (DORs) and area under the curve (AUC) analysis. Publication bias and heterogeneity were also assessed. A total of 11 trials that enrolled a combined 3,392 participants were included in the meta-analysis. Approximately 72.73% (8/11) of the included studies were of high quality (QUADAS score >7), and all were retrospective case-control studies. The pooled sensitivity was 0.19 [95% confidence interval (CI), 0.18-0.21] and the pooled specificity was 0.93 (95% CI, 0.92-0.94). Results also demonstrated a PLR of 4.56 (95% CI, 3.27-6.34), an NLR of 0.78 (95% CI, 0.71-0.85) and a DOR of 6.70 (95% CI, 4.59-9.76). The area under the symmetrical summary receiver operating characteristic curve was 0.73. Furthermore, no evidence of publication bias or heterogeneity was observed in the meta-analysis. The meta-analysis data indicated that the s-p53 antibody possesses potential diagnostic value for CRC; however, its discriminatory power is somewhat limited by low sensitivity.
Zhang, Xia; Zhou, Jian-Guo; Wu, Hua-Lian; Ma, Hu; Jiang, Zhi-Xia
2017-01-01
Background Anaplastic lymphoma kinase (ALK) gene fusion has been reported in 3-5% of non-small cell lung carcinoma (NSCLC) patients, and polymerase chain reaction (PCR) is commonly used to detect the gene status, but its diagnostic capacity remains controversial. A systematic review and meta-analysis was conducted to clarify the diagnostic accuracy of PCR for detecting ALK gene rearrangement in NSCLC patients. Results 18 articles were enrolled, which included 21 studies, involving 2800 samples from NSCLC patients. The overall pooled parameters were calculated: sensitivity was 92.4% [95% confidence interval (CI): 82.2%-97.0%], specificity was 97.8% [95% CI: 95.1%-99.0%], PLR was 41.51 [95% CI: 18.10-95.22], NLR was 0.08 [95% CI: 0.03-0.19], DOR was 535.72 [95% CI: 128.48-2233.79], and AUROC was 0.99 [95% CI: 0.98-1.00]. Materials and Methods Relevant articles were searched from PubMed, EMBASE, Web of Science, Cochrane Library, American Society of Clinical Oncology (ASCO), European Society for Medical Oncology (ESMO), China National Knowledge Infrastructure (CNKI), China Wan Fang databases and the Chinese biomedical literature database (CBM). The diagnostic capacity of the PCR test was assessed by the pooled sensitivity and specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR), and area under the summary receiver operating characteristic curve (AUROC). Conclusions Based on the results from this review, PCR has good diagnostic performance for detecting ALK gene fusion in NSCLC patients. However, given the poor methodological quality of the enrolled trials, more well-designed multi-center trials should be performed. PMID:29088875
Rodríguez-Cortés, Alhelí; Ojeda, Ana; Todolí, Felicitat; Alberola, Jordi
2013-01-31
Leishmania infantum (syn. Leishmania chagasi) is the etiological agent of a widespread serious zoonotic disease that affects both humans and dogs. Prevalence and incidence of the canine infection are important parameters to determine the risk and the ways to control this reemergent zoonosis. Unfortunately, there is no gold standard test for Leishmania infection. Our aim was to assess the operative validity of commercial tests used to detect antibodies to Leishmania in serum samples from experimental infections. Three ELISA tests (LEISCAN(®) Leishmania ELISA Test, INGEZIM(®) LEISHMANIA, and INGEZIM(®) LEISHMANIA VET), three immunochromatographic tests (INGEZIM(®) LEISHMACROM, SNAP(®) Leishmania, and WITNESS(®) Leishmania), and one IFAT were evaluated. The LEISCAN(®) Leishmania ELISA Test achieved the highest sensitivity and accuracy (both 0.98). Specificity was 1 for all tests except the IFAT. All tests but the IFAT obtained a positive predictive value of 1, while the maximum negative predictive value was achieved by the LEISCAN(®) Leishmania ELISA Test (0.93). The best positive likelihood ratio was obtained by INGEZIM(®) LEISHMANIA VET (30.26), while the best negative likelihood ratio was obtained by the LEISCAN(®) Leishmania ELISA Test (0.02). The highest diagnostic odds ratio was achieved by the LEISCAN(®) Leishmania ELISA Test (729.00). The largest area under the ROC curve was obtained by the LEISCAN(®) Leishmania ELISA Test (0.981). Quantitative ELISA-based tests performed better than qualitative tests ("Rapid Tests"), and the test best suited to detect Leishmania in infected dogs and to provide clinically useful information was the LEISCAN(®) Leishmania ELISA Test. This and other results also point to the need to revise the status of the IFAT as a gold standard for the diagnosis of leishmaniasis. Copyright © 2012 Elsevier B.V. All rights reserved.
Nakagawa, Yoshihide; Amino, Mari; Inokuchi, Sadaki; Hayashi, Satoshi; Wakabayashi, Tsutomu; Noda, Tatsuya
2017-04-01
Amplitude spectral area (AMSA), an index for analysing ventricular fibrillation (VF) waveforms, is thought to predict the return of spontaneous circulation (ROSC) after electric shocks, but its validity is unconfirmed. We developed an equation to predict ROSC, where the change in AMSA (ΔAMSA) is added to AMSA measured immediately before the first shock (AMSA1). We examine the validity of this equation by comparing it with the conventional AMSA1-only equation. We retrospectively investigated 285 VF patients given prehospital electric shocks by emergency medical services. ΔAMSA was calculated by subtracting AMSA1 from last AMSA immediately before the last prehospital electric shock. Multivariate logistic regression analysis was performed using post-shock ROSC as a dependent variable. Analysis data were subjected to receiver operating characteristic curve analysis, goodness-of-fit testing using a likelihood ratio test, and the bootstrap method. AMSA1 (odds ratio (OR) 1.151, 95% confidence interval (CI) 1.086-1.220) and ΔAMSA (OR 1.289, 95% CI 1.156-1.438) were independent factors influencing ROSC induction by electric shock. Area under the curve (AUC) for predicting ROSC was 0.851 for AMSA1-only and 0.891 for AMSA1+ΔAMSA. Compared with the AMSA1-only equation, the AMSA1+ΔAMSA equation had significantly better goodness-of-fit (likelihood ratio test P<0.001) and showed good fit in the bootstrap method. Post-shock ROSC was accurately predicted by adding ΔAMSA to AMSA1. AMSA-based ROSC prediction enables application of electric shock to only those patients with high probability of ROSC, instead of interrupting chest compressions and delivering unnecessary shocks to patients with low probability of ROSC. Copyright © 2017 Elsevier B.V. All rights reserved.
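The goodness-of-fit comparison described above is a standard nested-model likelihood ratio test. The following Python sketch shows how an AMSA1-only and an AMSA1+ΔAMSA logistic model could be compared; the simulated data and coefficients are invented for illustration and are not the study's.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 285
amsa1 = rng.gamma(2.0, 5.0, n)           # simulated AMSA before first shock
delta = rng.normal(0.0, 3.0, n)          # simulated change in AMSA
logit = -3.0 + 0.14 * amsa1 + 0.25 * delta
rosc = rng.binomial(1, 1 / (1 + np.exp(-logit)))

reduced = sm.Logit(rosc, sm.add_constant(amsa1)).fit(disp=0)
full = sm.Logit(rosc, sm.add_constant(np.column_stack([amsa1, delta]))).fit(disp=0)

lr_stat = 2 * (full.llf - reduced.llf)   # likelihood ratio statistic
p_value = chi2.sf(lr_stat, df=1)         # one added parameter
print(f"LR = {lr_stat:.2f}, p = {p_value:.4g}")
```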
Haughton, Jannett; Gregorio, David; Pérez-Escamilla, Rafael
2011-01-01
This retrospective study aimed to identify factors associated with breastfeeding duration among women enrolled in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) of Hartford, Connecticut. The authors included mothers whose children were younger than 5 years and had stopped breastfeeding (N = 155). Women who had planned their pregnancies were twice as likely as those who did not plan them to breastfeed for more than 6 months (odds ratio, 2.15; 95% confidence interval, 1.00-4.64). One additional year of maternal age was associated with a 9% increase in the likelihood of breastfeeding for more than 6 months (odds ratio, 1.09; 95% confidence interval, 1.02-1.17). Time in the United States was inversely associated with the likelihood of breastfeeding for more than 6 months (odds ratio, 0.96; 95% confidence interval, 0.92-0.99). Return to work, sore nipples, lack of access to breast pumps, and free formula provided by WIC were identified as breastfeeding barriers. Findings can help WIC improve its breastfeeding promotion efforts. PMID:20689103
Grosu, Horiana B; Vial-Rodriguez, Macarena; Vakil, Erik; Casal, Roberto F; Eapen, George A; Morice, Rodolfo; Stewart, John; Sarkiss, Mona G; Ost, David E
2017-08-01
During diagnostic thoracoscopy, talc pleurodesis after biopsy is appropriate if the probability of malignancy is sufficiently high. Findings on direct visual assessment of the pleura during thoracoscopy, rapid onsite evaluation (ROSE) of touch preparations (touch preps) of thoracoscopic biopsy specimens, and preoperative imaging may help predict the likelihood of malignancy; however, data on the performance of these methods are limited. To assess the performance of ROSE of touch preps, direct visual assessment of the pleura during thoracoscopy, and preoperative imaging in diagnosing malignancy. Patients who underwent ROSE of touch preps during thoracoscopy for suspected malignancy were retrospectively reviewed. Malignancy was diagnosed on the basis of final pathologic examination of pleural biopsy specimens. ROSE results were categorized as malignant, benign, or atypical cells. Visual assessment results were categorized as tumor studding present or absent. Positron emission tomography (PET) and computed tomography (CT) findings were categorized as abnormal or normal pleura. Likelihood ratios were calculated for each category of test result. The study included 44 patients, 26 (59%) with a final pathologic diagnosis of malignancy. Likelihood ratios were as follows: for ROSE of touch preps: malignant, 1.97 (95% confidence interval [CI], 0.90-4.34); atypical cells, 0.69 (95% CI, 0.21-2.27); benign, 0.11 (95% CI, 0.01-0.93); for direct visual assessment: tumor studding present, 3.63 (95% CI, 1.32-9.99); tumor studding absent, 0.24 (95% CI, 0.09-0.64); for PET: abnormal pleura, 9.39 (95% CI, 1.42-62); normal pleura, 0.24 (95% CI, 0.11-0.52); and for CT: abnormal pleura, 13.15 (95% CI, 1.93-89.63); normal pleura, 0.28 (95% CI, 0.15-0.54). A finding of no malignant cells on ROSE of touch preps during thoracoscopy significantly lowers the likelihood of malignancy, whereas a finding of tumor studding on direct visual assessment during thoracoscopy only moderately increases the likelihood of malignancy. A positive finding on PET and/or CT significantly increases the likelihood of malignancy in a moderate-risk patient group and can be used as an adjunct to predict malignancy before pleurodesis.
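Category-specific (multi-level) likelihood ratios of the kind reported above are simple ratios of conditional result frequencies. A minimal Python sketch, using hypothetical counts rather than the study's data:

```python
import numpy as np

def multilevel_lrs(counts_diseased, counts_nondiseased):
    """Likelihood ratio for each test-result category:
    LR_k = P(result k | malignant) / P(result k | benign)."""
    d = np.asarray(counts_diseased, float)
    nd = np.asarray(counts_nondiseased, float)
    return (d / d.sum()) / (nd / nd.sum())

# Hypothetical counts per ROSE category (malignant, benign, atypical)
# among patients with and without a final diagnosis of malignancy
lrs = multilevel_lrs([18, 2, 6], [6, 8, 4])
print(dict(zip(["malignant", "benign", "atypical"], lrs.round(2))))
```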
Statistical inference for tumor growth inhibition T/C ratio.
Wu, Jianrong
2010-09-01
The tumor growth inhibition T/C ratio is commonly used to quantify treatment effects in drug screening tumor xenograft experiments. The T/C ratio is converted to an antitumor activity rating using an arbitrary cutoff point and often without any formal statistical inference. Here, we applied a nonparametric bootstrap method and a small sample likelihood ratio statistic to make a statistical inference of the T/C ratio, including both hypothesis testing and a confidence interval estimate. Furthermore, sample size and power are also discussed for statistical design of tumor xenograft experiments. Tumor xenograft data from an actual experiment were analyzed to illustrate the application.
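As a rough illustration of the nonparametric bootstrap approach to the T/C ratio, the Python sketch below computes a percentile bootstrap confidence interval. The tumor volumes, and the definition of T/C as a ratio of group mean volumes, are assumptions for the example and are not taken from the paper.

```python
import numpy as np

def tc_ratio_bootstrap(treated, control, n_boot=10_000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the tumor growth inhibition T/C ratio,
    here taken as mean(treated volume) / mean(control volume)."""
    rng = np.random.default_rng(seed)
    t, c = np.asarray(treated, float), np.asarray(control, float)
    boots = np.empty(n_boot)
    for b in range(n_boot):
        # resample each arm with replacement and recompute the ratio
        boots[b] = rng.choice(t, t.size).mean() / rng.choice(c, c.size).mean()
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return t.mean() / c.mean(), (lo, hi)

# Hypothetical end-of-study tumor volumes (mm^3)
ratio, ci = tc_ratio_bootstrap([120, 95, 140, 110, 88], [310, 280, 355, 330, 295])
print(f"T/C = {ratio:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```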
Paninski, Liam; Haith, Adrian; Szirtes, Gabor
2008-02-01
We recently introduced likelihood-based methods for fitting stochastic integrate-and-fire models to spike train data. The key component of this method involves the likelihood that the model will emit a spike at a given time t. Computing this likelihood is equivalent to computing a Markov first passage time density (the probability that the model voltage crosses threshold for the first time at time t). Here we detail an improved method for computing this likelihood, based on solving a certain integral equation. This integral equation method has several advantages over the techniques discussed in our previous work: in particular, the new method has fewer free parameters and is easily differentiable (for gradient computations). The new method is also easily adaptable for the case in which the model conductance, not just the input current, is time-varying. Finally, we describe how to incorporate large deviations approximations to very small likelihoods.
A score to estimate the likelihood of detecting advanced colorectal neoplasia at colonoscopy
Kaminski, Michal F; Polkowski, Marcin; Kraszewska, Ewa; Rupinski, Maciej; Butruk, Eugeniusz; Regula, Jaroslaw
2014-01-01
Objective This study aimed to develop and validate a model to estimate the likelihood of detecting advanced colorectal neoplasia in Caucasian patients. Design We performed a cross-sectional analysis of database records for 40-year-old to 66-year-old patients who entered a national primary colonoscopy-based screening programme for colorectal cancer in 73 centres in Poland in the year 2007. We used multivariate logistic regression to investigate the associations between clinical variables and the presence of advanced neoplasia in a randomly selected test set, and confirmed the associations in a validation set. We used model coefficients to develop a risk score for detection of advanced colorectal neoplasia. Results Advanced colorectal neoplasia was detected in 2544 of the 35 918 included participants (7.1%). In the test set, a logistic-regression model showed that independent risk factors for advanced colorectal neoplasia were: age, sex, family history of colorectal cancer, cigarette smoking (p<0.001 for these four factors), and Body Mass Index (p=0.033). In the validation set, the model was well calibrated (ratio of expected to observed risk of advanced neoplasia: 1.00 (95% CI 0.95 to 1.06)) and had moderate discriminatory power (c-statistic 0.62). We developed a score that estimated the likelihood of detecting advanced neoplasia in the validation set, from 1.32% for patients scoring 0, to 19.12% for patients scoring 7–8. Conclusions This developed and internally validated score, consisting of simple clinical factors, successfully estimates the likelihood of detecting advanced colorectal neoplasia in asymptomatic Caucasian patients. Once externally validated, it may be useful for counselling or designing primary prevention studies. PMID:24385598
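One common way to turn model coefficients into a clinical score of this kind is to scale the regression betas against a reference increment and round to integers. The Python sketch below illustrates the idea with hypothetical coefficients; it is not the published scoring system.

```python
def points_from_coefficients(coefs, base=None):
    """Convert logistic-regression coefficients (log odds) into integer
    score points by scaling each beta against a reference increment."""
    base = base or min(abs(b) for b in coefs.values())
    return {k: int(round(b / base)) for k, b in coefs.items()}

# Hypothetical per-factor betas for a screening model (illustrative only)
coefs = {"age 55-66": 0.55, "male sex": 0.50, "family history": 0.48,
         "current smoker": 0.95, "BMI >= 30": 0.25}
print(points_from_coefficients(coefs))   # e.g. {'age 55-66': 2, ...}
```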
NASA Astrophysics Data System (ADS)
Goodman, Steven N.
1989-11-01
This dissertation explores the use of a mathematical measure of statistical evidence, the log likelihood ratio, in clinical trials. The methods and thinking behind the use of an evidential measure are contrasted with traditional methods of analyzing data, which depend primarily on a p-value as an estimate of the statistical strength of an observed data pattern. It is contended that neither the behavioral dictates of Neyman-Pearson hypothesis testing methods, nor the coherency dictates of Bayesian methods, are realistic models on which to base inference. The use of the likelihood alone is applied to four aspects of trial design or conduct: the calculation of sample size, the monitoring of data, testing for the equivalence of two treatments, and meta-analysis, the combining of results from different trials. Finally, a more general model of statistical inference, using belief functions, is used to see if it is possible to separate the assessment of evidence from our background knowledge. It is shown that traditional and Bayesian methods can be modeled as two ends of a continuum of structured background knowledge, with traditional methods summarizing evidence at the point of maximum likelihood, assuming no structure, and Bayesian methods assuming complete knowledge. Both schools are seen to be missing a concept of ignorance: uncommitted belief. This concept provides the key to understanding the problem of sampling to a foregone conclusion and the role of frequency properties in statistical inference. The conclusion is that statistical evidence cannot be defined independently of background knowledge, and that frequency properties of an estimator are an indirect measure of uncommitted belief. Several likelihood summaries need to be used in clinical trials, with the quantitative disparity between summaries being an indirect measure of our ignorance. This conclusion is linked with parallel ideas in the philosophy of science and cognitive psychology.
Bayesian framework for the evaluation of fiber evidence in a double murder--a case report.
Causin, Valerio; Schiavone, Sergio; Marigo, Antonio; Carresi, Pietro
2004-05-10
Fiber evidence found on a suspect vehicle was the only useful trace to reconstruct the dynamics of the transportation of two corpses. Optical microscopy, UV-Vis microspectrophotometry and infrared analysis were employed to compare fibers recovered in the trunk of a car to those of the blankets composing the wrapping in which the victims had been hidden. A "pseudo-1:1" taping permitted reconstruction of the spatial distribution of the traces and further strengthened the support for one of the hypotheses. The Likelihood Ratio (LR) was calculated in order to quantify the support given by the forensic evidence to the explanations proposed, and a generalization of the Likelihood Ratio equation to cases analogous to this one was derived. Fibers were the only traces that helped corroborate the crime scenario, as DNA, fingerprint, and ballistic evidence were all absent.
Xiong, Yi-Quan; Ma, Shu-Juan; Zhou, Jun-Hua; Zhong, Xue-Shan; Chen, Qing
2016-06-01
Barrett's esophagus (BE) is considered the most important risk factor for development of esophageal adenocarcinoma. Confocal laser endomicroscopy (CLE) is a recently developed technique used to diagnose neoplasia in BE. This meta-analysis was performed to assess the accuracy of CLE for diagnosis of neoplasia in BE. We searched EMBASE, PubMed, Cochrane Library, and Web of Science to identify relevant studies among all articles published in English up to June 27, 2015. The quality of included studies was assessed using QUADAS-2. Per-patient and per-lesion pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with 95% confidence intervals (CIs) were calculated. In total, 14 studies were included in the final analysis, covering 789 patients with 4047 lesions. Seven studies were included in the per-patient analysis. Pooled sensitivity and specificity were 89% (95% CI: 0.82-0.94) and 83% (95% CI: 0.78-0.86), respectively. Ten studies were included in the per-lesion analysis. Compared with the per-patient analysis, the corresponding pooled sensitivity declined to 77% (95% CI: 0.73-0.81) and specificity increased to 89% (95% CI: 0.87-0.90). Subgroup analysis showed that probe-based CLE (pCLE) was superior to endoscope-based CLE (eCLE) in pooled specificity [91.4% (95% CI: 89.7-92.9) vs 86.1% (95% CI: 84.3-87.8)] and AUC for the sROC (0.885 vs 0.762). Confocal laser endomicroscopy is a valid method to accurately differentiate neoplasms from non-neoplasms in BE. It can be applied to BE surveillance and early diagnosis of esophageal adenocarcinoma. © 2015 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
A likelihood ratio model for the determination of the geographical origin of olive oil.
Własiuk, Patryk; Martyna, Agnieszka; Zadora, Grzegorz
2015-01-01
Food fraud or food adulteration may be of forensic interest, for instance in the case of suspected deliberate mislabeling. On account of its potential health benefits and nutritional qualities, geographical origin determination of olive oil might be of special interest. The use of a likelihood ratio (LR) model has certain advantages in contrast to typical chemometric methods because the LR model takes into account the information about the sample's rarity in a relevant population. Such properties are of particular interest to forensic scientists, and therefore it has been the aim of this study to examine the issue of olive oil classification with the use of different LR models and their pertinence under selected data pre-processing methods (logarithm-based data transformations) and a feature selection technique. This was carried out on data describing 572 Italian olive oil samples characterised by the content of 8 fatty acids in the lipid fraction. Three classification problems related to three regions of Italy (South, North and Sardinia) were considered with the use of LR models. The correct classification rate and empirical cross entropy were taken into account as measures of performance of each model. The application of LR models in determining the geographical origin of olive oil proved satisfactorily useful across the considered classification problems and many variants of data pre-processing, with rates of correct classification close to 100% and a considerable reduction of information loss. The work also presents a comparative study of the performance of linear discriminant analysis in the considered classification problems. An approach to choosing the value of the smoothing parameter for the kernel density estimation based LR models is also highlighted. Copyright © 2014 Elsevier B.V. All rights reserved.
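A minimal version of a kernel-density-based LR model can be written in a few lines of Python. The sketch below uses a single simulated fatty-acid feature and two regions, whereas the study used eight fatty acids and addressed rarity in a relevant population; it only illustrates the LR construction itself.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# Hypothetical log-transformed oleic acid content for two regions
south = rng.normal(7.2, 0.15, 300)
north = rng.normal(6.8, 0.20, 200)

kde_south = gaussian_kde(south)   # density under H1: sample is from the South
kde_north = gaussian_kde(north)   # density under H2: sample is from the North

def lr(x):
    """Likelihood ratio LR = f(x | South) / f(x | North)."""
    return kde_south(x) / kde_north(x)

print(lr(np.array([7.1])))  # LR > 1 supports the Southern-origin hypothesis
```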
How well do commonly used data presentation formats support comparative effectiveness evaluations?
Dolan, James G.; Qian, Feng; Veazie, Peter J.
2012-01-01
Background Good decisions depend on an accurate understanding of the comparative effectiveness of decision alternatives. The best way to convey data needed to support these comparisons is unknown. Objective To determine how well five commonly used data presentation formats convey comparative effectiveness information. Design Internet survey using a factorial design. Subjects 279 members of an online survey panel. Intervention Study participants compared outcomes associated with three hypothetical screening test options relative to five possible outcomes with probabilities ranging from 2 per 5,000 (0.04%) to 500 per 1,000 (50%). Data presentation formats included a table, a “magnified” bar chart, a risk scale, a frequency diagram, and an icon array. Measurements Outcomes included the number of correct ordinal judgments regarding the more likely of two outcomes, the ratio of perceived versus actual relative likelihoods of the paired outcomes, the inter-subject consistency of responses, and perceived clarity. Results The mean number of correct ordinal judgments was 12 of 15 (80%), with no differences among data formats. On average, there was a 3.3-fold difference between perceived and actual likelihood ratios (95% CI: 3.0 to 3.6). Comparative judgments based on flow charts, icon arrays, and tables were all significantly more accurate and consistent than those based on risk scales and bar charts, p < 0.001. The most clearly perceived formats were the table and the flow chart. Low subjective numeracy was associated with less accurate and more variable data interpretations and lower perceived clarity for icon displays, bar charts, and flow diagrams. Conclusions None of the data presentation formats studied can reliably provide patients, especially those with low subjective numeracy, with an accurate understanding of comparative effectiveness information. PMID:22618998
A mixture model-based approach to the clustering of microarray expression data.
McLachlan, G J; Bean, R W; Peel, D
2002-03-01
This paper introduces the software EMMIX-GENE that has been developed for the specific purpose of a model-based approach to the clustering of microarray expression data, in particular, of tissue samples on a very large number of genes. The latter is a nonstandard problem in parametric cluster analysis because the dimension of the feature space (the number of genes) is typically much greater than the number of tissues. A feasible approach is provided by first selecting a subset of the genes relevant for the clustering of the tissue samples by fitting mixtures of t distributions to rank the genes in order of increasing size of the likelihood ratio statistic for the test of one versus two components in the mixture model. The imposition of a threshold on the likelihood ratio statistic, used in conjunction with a threshold on the size of a cluster, allows the selection of a relevant set of genes. However, even this reduced set of genes will usually be too large for a normal mixture model to be fitted directly to the tissues, and so the use of mixtures of factor analyzers is exploited to effectively reduce the dimension of the feature space of genes. The usefulness of the EMMIX-GENE approach for the clustering of tissue samples is demonstrated on two well-known data sets on colon and leukaemia tissues. For both data sets, relevant subsets of the genes can be selected that reveal interesting clusterings of the tissues that are either consistent with the external classification of the tissues or with background and biological knowledge of these sets. EMMIX-GENE is available at http://www.maths.uq.edu.au/~gjm/emmix-gene/
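The gene-ranking step can be illustrated with the likelihood ratio statistic for one versus two mixture components. The Python sketch below uses Gaussian mixtures as a stand-in for the t mixtures fitted by EMMIX-GENE, on simulated expression values for a single gene.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def minus2loglr(x):
    """-2 log likelihood ratio for one vs two mixture components,
    fitted to the expression values of one gene across tissues."""
    x = np.asarray(x, float).reshape(-1, 1)
    ll = []
    for g in (1, 2):
        gm = GaussianMixture(n_components=g, n_init=5, random_state=0).fit(x)
        ll.append(gm.score(x) * len(x))   # score() is mean log-likelihood
    return 2 * (ll[1] - ll[0])

rng = np.random.default_rng(3)
bimodal = np.concatenate([rng.normal(-2, 1, 30), rng.normal(2, 1, 30)])
unimodal = rng.normal(0, 1, 60)
print(minus2loglr(bimodal), minus2loglr(unimodal))  # large vs near-zero
```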
2014-01-01
Background Fractal geometry has been the basis for the development of a diagnosis of preneoplastic and neoplastic cells that resolves the indeterminacy of atypical squamous cells of undetermined significance (ASCUS). Methods Pictures of 40 cervix cytology samples diagnosed with conventional parameters were taken. A blind study was developed in which the clinical diagnosis of 10 normal cells, 10 ASCUS, 10 L-SIL and 10 H-SIL was masked. Cellular nucleus and cytoplasm were evaluated in the generalized Box-Counting space, calculating the fractal dimension and the number of spaces occupied by the frontier of each object. Further, the number of pixels occupied by the surface of each object was calculated. Later, the mathematical features of the measures were studied to establish differences or equalities useful for diagnostic application. Finally, the sensitivity, specificity, negative likelihood ratio and diagnostic concordance with the Kappa coefficient were calculated. Results Simultaneous measures of the nuclear surface and the subtraction between the boundaries of cytoplasm and nucleus differentiate normality, L-SIL and H-SIL. Normality shows values less than or equal to 735 in nucleus surface and values greater than or equal to 161 in the cytoplasm-nucleus subtraction. L-SIL cells exhibit a nucleus surface with values greater than or equal to 972 and a cytoplasm-nucleus subtraction higher than 130. H-SIL cells show cytoplasm-nucleus values less than 120. The range between 120 and 130 in the cytoplasm-nucleus subtraction corresponds to evolution between L-SIL and H-SIL. Sensitivity and specificity values were 100%, the negative likelihood ratio was zero and the Kappa coefficient was equal to 1. Conclusions A new diagnostic methodology of clinical applicability was developed based on fractal and Euclidean geometry, which is useful for the evaluation of cervix cytology. PMID:24742118
Contemporary management of men with high-risk localized prostate cancer in the United States.
Weiner, A B; Matulewicz, R S; Schaeffer, E M; Liauw, S L; Feinglass, J M; Eggener, S E
2017-09-01
Surgery and radiation-based therapies are standard management options for men with clinically localized high-risk prostate cancer (PCa). Contemporary patterns of care are unknown. We hypothesized that the use of surgery has steadily increased in more recent years. Using the National Cancer Data Base for 2004-2013, all men diagnosed with high-risk localized PCa were identified using National Comprehensive Cancer Network criteria. Temporal trends in initial management were assessed. Multivariable logistic regression was used to evaluate demographic and clinical factors associated with undergoing radical prostatectomy (RP). In total, 127 391 men were identified. Use of RP increased from 26% in 2004 to 42% in 2013 (adjusted risk ratio (RR) 1.51, 95% CI 1.42-1.60, P<0.001), while external beam radiation therapy (EBRT) decreased from 49% to 42% (P<0.001). African American men had lower odds of undergoing RP (unadjusted rate of 28%, adjusted RR 0.69, 95% CI 0.66-0.72, P<0.001) compared with White men (37%). Age was inversely associated with the likelihood of receiving RP. Having private insurance was significantly associated with increased use of RP (vs Medicare, adjusted odds ratio 1.04, 95% CI 1.01-1.08, P=0.015). Biopsy Gleason scores 8-10, with and without any primary Gleason 5 pattern, were associated with decreased odds of RP (vs Gleason score ⩽6, both P<0.001). Academic and comprehensive cancer centers were more likely to perform RP compared with community hospitals (both P<0.001). The likelihood of receiving RP for high-risk PCa dramatically increased from 2004 to 2013. By 2013, the use of RP and EBRT was similar. African American men, elderly men and those without private insurance were less likely to receive RP.
Man, Wanrong; Hu, Jianqiang; Zhao, Zhijing; Zhang, Mingming; Wang, Tingting; Lin, Jie; Duan, Yu; Wang, Ling; Wang, Haichang; Sun, Dongdong; Li, Yan
2016-09-01
The instantaneous wave-free ratio (iFR) is a new vasodilator-free index of coronary stenosis severity. The aim of this meta-analysis is to assess the diagnostic performance of iFR for the evaluation of coronary stenosis severity with fractional flow reserve as the standard reference. We searched PubMed, EMBASE, CENTRAL, ProQuest, Web of Science, and the International Clinical Trials Registry Platform (ICTRP) for publications concerning the diagnostic value of iFR. We used a random-effects model to synthesize the available data on sensitivity, specificity, positive likelihood ratio (LR+), negative likelihood ratio (LR-), and diagnostic odds ratio (DOR). Overall test performance was summarized by the summary receiver operating characteristic curve (sROC) and the area under the curve (AUC). Eight studies with 1611 subjects were included in the meta-analysis. The pooled sensitivity, specificity, LR+, LR-, and DOR for iFR were, respectively, 73.3% (70.1-76.2%), 86.4% (84.3-88.3%), 5.71 (4.43-7.37), 0.29 (0.22-0.38), and 20.54 (16.11-26.20). The area under the summary receiver operating characteristic curve for iFR was 0.8786. No publication bias was identified. The available evidence suggests that iFR may be a new, simple, and promising technology for coronary stenosis physiological assessment.
A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.
Do, Nhu Tri; An, Beongku
2015-02-13
In this paper we propose a soft-hard combination scheme, called the SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster-based network in which Likelihood Ratio Test (LRT)-based soft combination is applied at each cluster, and weighted decision fusion rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: the structure of the SHC scheme reduces the complexity of cooperative detection, which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive the closed-form expression of the probability density function of the LRT value. The SHC scheme also takes into account the different effects of large-scale fading on different users in the wide area network. The simulation results show that the SHC scheme not only provides better sensing performance than the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared with the conventional soft combination scheme using the LRT.
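The per-cluster soft combination can be illustrated with the log likelihood ratio for a Gaussian signal in Gaussian noise, accumulated over samples. The Python sketch below is a simplified stand-in for the scheme's LRT stage; the zero-mean Gaussian signal model and the SNR handling are assumptions for illustration, not the paper's derivation.

```python
import numpy as np

def cluster_llr(samples, sigma2, snr):
    """Log likelihood ratio for zero-mean Gaussian samples:
    H1 variance = sigma2 * (1 + snr), H0 variance = sigma2."""
    s0, s1 = sigma2, sigma2 * (1 + snr)
    return (0.5 * np.sum(samples**2 * (1/s0 - 1/s1))
            - 0.5 * samples.size * np.log(s1 / s0))

rng = np.random.default_rng(4)
snr = 10 ** (-15 / 10)                  # -15 dB, the regime cited above
noise = rng.normal(0, 1, 2000)          # H0: noise only
signal = rng.normal(0, np.sqrt(1 + snr), 2000)  # H1: primary signal present
print(cluster_llr(noise, 1.0, snr), cluster_llr(signal, 1.0, snr))
```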
Physics-based, Bayesian sequential detection method and system for radioactive contraband
Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E
2014-03-18
A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy), low-count radionuclide measurements, i.e., an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing, in which a radionuclide is represented as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
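The two-threshold decision logic is that of Wald's sequential probability ratio test. A generic Python sketch follows; the exponential interarrival-time densities are hypothetical stand-ins for the physics-based models in the patent.

```python
import numpy as np
from scipy import stats

def sprt(stream, f0, f1, alpha=0.01, beta=0.01):
    """Wald SPRT: accumulate the log likelihood ratio per event until
    it crosses one of two thresholds (Wald's approximations)."""
    upper = np.log((1 - beta) / alpha)   # decide H1 (target present)
    lower = np.log(beta / (1 - alpha))   # decide H0 (target absent)
    llr = 0.0
    for n, x in enumerate(stream, 1):
        llr += np.log(f1(x) / f0(x))
        if llr >= upper:
            return "target", n
        if llr <= lower:
            return "not target", n
    return "undecided", n

# Hypothetical photon interarrival times: exponential, different rates
f0 = stats.expon(scale=1.0).pdf         # background radionuclide
f1 = stats.expon(scale=0.5).pdf         # target radionuclide
rng = np.random.default_rng(5)
print(sprt(rng.exponential(0.5, 500), f0, f1))
```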
Liu, Fang; Eugenio, Evercita C
2018-04-01
Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review of beta regression and zoib regression in their modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate via simulation studies the statistical and practical importance of correctly modeling the inflation at zero/one rather than replacing such values ad hoc with values close to zero/one; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than the MCMC algorithms used in Bayesian inference, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm, especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.
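For the likelihood-based approach, the beta regression log-likelihood is easy to write down and maximize directly. The Python sketch below fits a logit-link beta regression with a common precision parameter on simulated data; it is a minimal illustration under those assumptions, not one of the reviewed software packages.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def beta_reg_nll(params, X, y):
    """Negative log-likelihood of a beta regression with logit mean link;
    shape parameters are (mu * phi, (1 - mu) * phi)."""
    beta, log_phi = params[:-1], params[-1]
    mu = expit(X @ beta)
    phi = np.exp(log_phi)
    a, b = mu * phi, (1 - mu) * phi
    return -np.sum(gammaln(phi) - gammaln(a) - gammaln(b)
                   + (a - 1) * np.log(y) + (b - 1) * np.log1p(-y))

rng = np.random.default_rng(6)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
mu = expit(X @ np.array([0.3, 0.8]))
y = rng.beta(mu * 20, (1 - mu) * 20)      # simulated (0,1) outcome
fit = minimize(beta_reg_nll, x0=np.zeros(3), args=(X, y), method="BFGS")
print(fit.x)   # [intercept, slope, log phi] near [0.3, 0.8, log 20]
```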
Multibaseline gravitational wave radiometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talukder, Dipongkar; Bose, Sukanta; Mitra, Sanjit
2011-03-15
We present a statistic for the detection of stochastic gravitational wave backgrounds (SGWBs) using radiometry with a network of multiple baselines. We also quantitatively compare the sensitivities of existing baselines and their network to SGWBs. We assess how the measurement accuracy of signal parameters, e.g., the sky position of a localized source, can improve when using a network of baselines, as compared to any of the single participating baselines. The search statistic itself is derived from the likelihood ratio of the cross correlation of the data across all possible baselines in a detector network and is optimal in Gaussian noise. Specifically, it is the likelihood ratio maximized over the strength of the SGWB and is called the maximized-likelihood ratio (MLR). One of the main advantages of using the MLR over past search strategies for inferring the presence or absence of a signal is that the former does not require the deconvolution of the cross correlation statistic. Therefore, it does not suffer from errors inherent to the deconvolution procedure and is especially useful for detecting weak sources. In the limit of a single baseline, it reduces to the detection statistic studied by Ballmer [Classical Quantum Gravity 23, S179 (2006)] and Mitra et al. [Phys. Rev. D 77, 042002 (2008)]. Unlike past studies, here the MLR statistic enables us to compare quantitatively the performances of a variety of baselines searching for a SGWB signal in (simulated) data. Although we use simulated noise and SGWB signals for making these comparisons, our method can be straightforwardly applied to real data.
Accuracy of Urine Color to Detect Equal to or Greater Than 2% Body Mass Loss in Men.
McKenzie, Amy L; Muñoz, Colleen X; Armstrong, Lawrence E
2015-12-01
Clinicians and athletes can benefit from field-expedient measurement tools, such as urine color, to assess hydration state; however, the diagnostic efficacy of this tool has not been established. To determine the diagnostic accuracy of urine color assessment to distinguish a hypohydrated state (≥2% body mass loss [BML]) from a euhydrated state (<2% BML) after exercise in a hot environment. Controlled laboratory study. Environmental chamber in a laboratory. Twenty-two healthy men (age = 22 ± 3 years, height = 180.4 ± 8.7 cm, mass = 77.9 ± 12.8 kg, body fat = 10.6% ± 4.6%). Participants cycled at 68% ± 6% of their maximal heart rates in a hot environment (36°C ± 1°C) for 5 hours or until 5% BML was achieved. At the point of each 1% BML, we assessed urine color. Diagnostic efficacy of urine color was assessed using receiver operating characteristic curve analysis, sensitivity, specificity, and likelihood ratios. Urine color was useful as a diagnostic tool to identify hypohydration after exercise in the heat (area under the curve = 0.951, standard error = 0.022; P < .001). A urine color of 5 or greater identified BML ≥2% with 88.9% sensitivity and 84.8% specificity (positive likelihood ratio = 5.87, negative likelihood ratio = 0.13). Under the conditions of acute dehydration due to exercise in a hot environment, urine color assessment can be a valid, practical, inexpensive tool for assessing hydration status. Researchers should examine the utility of urine color to identify a hypohydrated state under different BML conditions.
Wongwai, Phanthipha; Anupongongarch, Pacharapan; Suwannaraj, Sirinya; Asawaphureekorn, Somkiat
2016-08-01
To evaluate the prevalence of visual impairment among children aged four to six years in Khon Kaen City Municipality, Thailand. The visual acuity test was performed on 1,286 children in kindergarten schools located in Khon Kaen Municipality. The first test of visual acuity was done by trained teachers and the second test by a pediatric ophthalmologist. The prevalence of visual impairment from both tests was recorded, along with the sensitivity, specificity, likelihood ratios, and predictive values of the test by teachers. The causes of visual impairment were also recorded. There were 39 children with visual impairment on the test by the teachers and 12 children on the test by the ophthalmologist. Myopia was the single cause of visual impairment. Mean spherical equivalent was 1.375 diopters (SD = 0.53). Median spherical equivalent was 1.375 diopters (minimum = 0.5, maximum = 4). The detection of visual impairment by trained teachers had a sensitivity of 1.00 (95% CI 0.76-1.00), specificity of 0.98 (95% CI 0.97-0.99), likelihood ratio for a positive test of 44.58 (95% CI 30.32-65.54), likelihood ratio for a negative test of 0.04 (95% CI 0.003-0.60), positive predictive value of 0.31 (95% CI 0.19-0.47), and negative predictive value of 1.00 (95% CI 0.99-1.00). The prevalence of visual impairment among children aged four to six years was 0.9%. Trained teachers can be examiners for screening purposes.
Jindal, Shveta; Dada, Tanuj; Sreenivas, V; Gupta, Viney; Sihota, Ramanjit; Panda, Anita
2010-01-01
Purpose: To compare the diagnostic performance of the Heidelberg retinal tomograph (HRT) glaucoma probability score (GPS) with that of Moorfields regression analysis (MRA). Materials and Methods: The study included 50 eyes of normal subjects and 50 eyes of subjects with early-to-moderate primary open angle glaucoma. Images were obtained by using HRT version 3.0. Results: The agreement coefficient (weighted k) for the overall MRA and GPS classification was 0.216 (95% CI: 0.119-0.315). The sensitivity and specificity were evaluated using the most specific (borderline results included as test negatives) and least specific criteria (borderline results included as test positives). The MRA sensitivity and specificity were 30.61 and 98% (most specific) and 57.14 and 98% (least specific). The GPS sensitivity and specificity were 81.63 and 73.47% (most specific) and 95.92 and 34.69% (least specific). The MRA gave a higher positive likelihood ratio (28.57 vs. 3.08) and the GPS a lower negative likelihood ratio (0.25 vs. 0.44). The sensitivity increased with increasing disc size for both MRA and GPS. Conclusions: There was poor agreement between the overall MRA and GPS classifications. GPS tended to have higher sensitivities, lower specificities, and lower likelihood ratios than the MRA. The disc size should be taken into consideration when interpreting the results of HRT, as both the GPS and MRA showed decreased sensitivity for smaller discs and the GPS showed decreased specificity for larger discs. PMID:20952832
A data fusion approach to indications and warnings of terrorist attacks
NASA Astrophysics Data System (ADS)
McDaniel, David; Schaefer, Gregory
2014-05-01
Indications and Warning (I&W) of terrorist attacks, particularly IED attacks, require detection of networks of agents and patterns of behavior. Social Network Analysis tries to detect a network; activity analysis tries to detect anomalous activities. This work builds on both to detect elements of an activity model of terrorist attack activity - the agents, resources, networks, and behaviors. The activity model is expressed as RDF triple statements where the tuple positions are elements or subsets of a formal ontology for activity models. The advantage of a model is that elements are interdependent and evidence for or against one will influence others, so that there is a multiplier effect. The advantage of the formality is that detection could occur hierarchically, that is, at different levels of abstraction. The model matching is expressed as a likelihood ratio between input text and the model triples. The likelihood ratio is designed to be analogous to the track correlation likelihood ratios common in JDL fusion level 1. This required development of a semantic distance metric for positive and null hypotheses as well as for complex objects. The metric uses the Web 1-Terabyte database of one- to five-gram frequencies for priors. This size requires the use of big data technologies, so a Hadoop cluster is used in conjunction with OpenNLP natural language and Mahout clustering software. Distributed data fusion Map Reduce jobs distribute parts of the data fusion problem to the Hadoop nodes. For the purposes of this initial testing, open source models and text inputs of similar complexity to terrorist events were used as surrogates for the intended counter-terrorist application.
Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.
Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier
2017-02-01
The data to which the authors refer throughout this article are likelihood ratios (LR) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. These data present a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can also be used as a baseline for comparing fingermark evidence in the same minutiae configuration as presented in (D. Meuwly, D. Ramos, R. Haraksim) [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports of forensic methods, according to [1]. Alongside the data, a justification and motivation for the use of the methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validating the LR methods, which are used to calculate the LR values from the data and the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared. But these images do not constitute the core data for the validation, unlike the LRs, which are shared.
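A widely used summary in this kind of LR validation is the log-likelihood-ratio cost (Cllr). The Python sketch below computes it from hypothetical same-source and different-source LR values; the validation criteria actually used in the article may differ.

```python
import numpy as np

def cllr(lr_same, lr_diff):
    """Log-likelihood-ratio cost: penalizes misleading LRs from
    same-source comparisons (which should be large) and
    different-source comparisons (which should be small)."""
    lr_same = np.asarray(lr_same, float)
    lr_diff = np.asarray(lr_diff, float)
    return 0.5 * (np.mean(np.log2(1 + 1 / lr_same)) +
                  np.mean(np.log2(1 + lr_diff)))

# Hypothetical LRs from same-source and different-source fingermark
# comparisons; a well-calibrated method yields Cllr well below 1
print(cllr([50, 200, 8, 1000], [0.02, 0.3, 0.005, 0.1]))
```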
Female breast symptoms in patients attended in the family medicine practice.
González-Pérez, Brian; Salas-Flores, Ricardo; Sosa-López, María Lucero; Barrientos-Guerrero, Carlos Eduardo; Hernández-Aguilar, Claudia Magdalena; Gómez-Contreras, Diana Edith; Sánchez-Garza, Jorge Arturo
2013-01-01
There are few studies on breast symptoms (BS) in patients attended at primary care units in Mexico. The aim was to determine the frequency and types of BS overall and by age-group, and to establish which BS were related to a diagnosis of breast cancer. Data from all female patients with a breast-disease-related diagnosis, attended from 2006 to 2010 at the Family Medicine Unit 38, were collected. The frequencies of BS were determined for four age-groups (< 19, 20-49, 50-69, > 70 years), and likelihood ratios for breast cancer were calculated for each breast-related symptom, with 95% confidence intervals (CI). The most frequent BS in the study population were lump/mass (71.7%) and breast pain (67.7%) of all breast complaints, and they were most often noted in women aged 20-49 years. Overall, 120 women had breast cancer diagnosed, with a mean age of 53.51 ± 12.7 years. Breast lump/mass had a positive likelihood ratio for breast cancer of 4.53 (95% CI = 2.51-8.17) and breast pain had a negative LR of 1.08 (95% CI = 1.05-1.11). Breast lump/mass was the predominant presenting complaint among females with breast symptoms in our primary care unit, and it was associated with an elevated positive likelihood of breast cancer.
Cherven, Brooke; Mertens, Ann; Meacham, Lillian R; Williamson, Rebecca; Boring, Cathy; Wasilewski-Masker, Karen
2014-01-01
Survivors of childhood cancer are at risk for a variety of treatment-related late effects and require lifelong individualized surveillance for early detection of late effects. This study assessed knowledge and perceptions of late effects risk before and after a survivor clinic visit. Young adult survivors (≥ 16 years) and parents of child survivors (< 16 years) were recruited prior to initial visit to a cancer survivor program. Sixty-five participants completed a baseline survey and 50 completed both a baseline and follow-up survey. Participants were found to have a low perceived likelihood of developing a late effect of cancer therapy and many incorrect perceptions of risk for individual late effects. Low knowledge before clinic (odds ratio = 9.6; 95% confidence interval, 1.7-92.8; P = .02) and low perceived likelihood of developing a late effect (odds ratio = 18.7; 95% confidence interval, 2.7-242.3; P = .01) were found to predict low knowledge of late effect risk at follow-up. This suggests that perceived likelihood of developing a late effect is an important factor in the individuals' ability to learn about their risk and should be addressed before initiation of education. © 2014 by Association of Pediatric Hematology/Oncology Nurses.
Staff gender ratio and aggression in a forensic psychiatric hospital.
Daffern, Michael; Mayer, Maggie; Martin, Trish
2006-06-01
Gender balance in acute psychiatric inpatient units remains a contentious issue. In terms of maintaining staff and patient safety, 'balance' is often considered by ensuring there are 'sufficient' male nurses present on each shift. In an ongoing programme of research into aggression, the authors investigated reported incidents of patient aggression and examined the gender ratio on each shift over a 6-month period. Contrary to the popular notion that a particular gender ratio might have some relationship with the likelihood of aggressive incidents, there was no statistically significant difference in the proportion of male staff working on the shifts when there was an aggressive incident compared with the shifts when there was no aggressive incident. Further, when an incident did occur, the severity of the incident bore no relationship with the proportion of male staff working on the shift. Nor did the gender of the shift leader have an impact on the decision to seclude the patient or the likelihood of completing an incident form following an aggressive incident. Staff confidence in managing aggression may be influenced by the presence of male staff. Further, aspects of prevention and management may be influenced by staff gender. However, results suggest there is no evidence that the frequency or severity of aggression is influenced by staff gender ratio.
Vilchez Barreto, Percy M; Gamboa, Ricardo; Santivañez, Saul; O'Neal, Seth E; Muro, Claudio; Lescano, Andrés G; Moyano, Luz-Maria; Gonzálvez, Guillermo; García, Hector H
2017-08-01
Hymenolepis nana, the dwarf tapeworm, is a common intestinal infection of children worldwide. We evaluated infection and risk factor data that were previously collected from 14,761 children aged 2-15 years during a large-scale program in northern Peru. We found that 1,124 of 14,761 children (7.61%) had H. nana infection, a likely underestimate given that only a single stool sample was examined by microscopy for diagnosis. The strongest association with infection was lack of adequate water (adjusted prevalence ratio [aPR] 2.22, 95% confidence interval [CI] 1.82-2.48) and sanitation infrastructure in the house (aPR 1.94, 95% CI 1.64-2.29). One quarter of those tested did not have a bathroom or latrine at home, which doubled their likelihood of infection. Similarly, one quarter did not have piped public water to the house, which also increased the likelihood of infection. Continued efforts to improve access to basic water and sanitation services will likely reduce the burden of infection in children for this and other intestinal infections.
Likelihood-based confidence intervals for estimating floods with given return periods
NASA Astrophysics Data System (ADS)
Martins, Eduardo Sávio P. R.; Clarke, Robin T.
1993-06-01
This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
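For the Gumbel case, a likelihood-based (profile likelihood) confidence interval for the T-year quantile can be computed by fixing the quantile, maximizing the log-likelihood over the scale parameter, and retaining quantile values whose profile log-likelihood lies within half a chi-squared critical value of the maximum. The Python sketch below illustrates this on simulated annual maxima; the optimizer bounds and grid choices are assumptions for the example, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gumbel_r, chi2

def gumbel_loglik(x, mu, beta):
    return np.sum(gumbel_r.logpdf(x, loc=mu, scale=beta))

def profile_ci(x, T=100, level=0.95):
    """Likelihood-ratio CI for the Gumbel T-year flood quantile
    q = mu - beta * ln(-ln(1 - 1/T))."""
    k = np.log(-np.log(1 - 1 / T))        # so mu = q + beta * k
    mu_hat, beta_hat = gumbel_r.fit(x)    # maximum-likelihood estimates
    q_hat = gumbel_r.ppf(1 - 1 / T, loc=mu_hat, scale=beta_hat)
    cut = gumbel_loglik(x, mu_hat, beta_hat) - 0.5 * chi2.ppf(level, df=1)

    def profile(q):                       # maximize over beta with q fixed
        res = minimize_scalar(
            lambda lb: -gumbel_loglik(x, q + np.exp(lb) * k, np.exp(lb)),
            bounds=(np.log(beta_hat) - 3, np.log(beta_hat) + 3),
            method="bounded")
        return -res.fun

    grid = np.linspace(q_hat * 0.7, q_hat * 1.6, 400)
    inside = grid[[profile(q) >= cut for q in grid]]
    return q_hat, (inside.min(), inside.max())

rng = np.random.default_rng(7)
flows = rng.gumbel(loc=800, scale=250, size=40)   # simulated annual maxima
print(profile_ci(flows, T=100))
```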
Weller, Daniel; Shiwakoti, Suvash; Bergholz, Peter; Grohn, Yrjo; Wiedmann, Martin
2015-01-01
Technological advancements, particularly in the field of geographic information systems (GIS), have made it possible to predict the likelihood of foodborne pathogen contamination in produce production environments using geospatial models. Yet, few studies have examined the validity and robustness of such models. This study was performed to test and refine the rules associated with a previously developed geospatial model that predicts the prevalence of Listeria monocytogenes in produce farms in New York State (NYS). Produce fields for each of four enrolled produce farms were categorized into areas of high or low predicted L. monocytogenes prevalence using rules based on a field's available water storage (AWS) and its proximity to water, impervious cover, and pastures. Drag swabs (n = 1,056) were collected from plots assigned to each risk category. Logistic regression, which tested the ability of each rule to accurately predict the prevalence of L. monocytogenes, validated the rules based on water and pasture. Samples collected near water (odds ratio [OR], 3.0) and pasture (OR, 2.9) showed a significantly increased likelihood of L. monocytogenes isolation compared to that for samples collected far from water and pasture. Generalized linear mixed models identified additional land cover factors associated with an increased likelihood of L. monocytogenes isolation, such as proximity to wetlands. These findings validated a subset of previously developed rules that predict L. monocytogenes prevalence in produce production environments. This suggests that GIS and geospatial models can be used to accurately predict L. monocytogenes prevalence on farms and can be used prospectively to minimize the risk of preharvest contamination of produce. PMID:26590280
Kim, Youngdeok; Barreira, Tiago V; Kang, Minsoo
2016-01-01
Independent associations of physical activity (PA) and sedentary behavior (SB) with obesity are well documented. However, little is known about the combined associations of these behaviors with obesity in adolescents. The present study examines the prevalence of concurrent levels of PA and SB, and their associations with obesity among US adolescents. Data from a total of 12 081 adolescents who participated in the Youth Risk Behavior Survey during 2012-2013 were analyzed. A latent class analysis was performed to identify latent subgroups with varying combined levels of subjectively measured PA and screen-based SB. Follow-up analysis examined differences between latent subgroups in the likelihood of being obese, as determined by the Centers for Disease Control and Prevention growth chart. Four latent subgroups with varying combined levels of PA and SB were identified across gender. The likelihood of being obese was significantly greater for the subgroups featuring either or both of Low PA and High SB when compared with High PA/Low SB across genders (odds ratio [OR] ranges, 2.1-2.7 for males and 9.6-23.5 for females). Low PA/High SB showed a greater likelihood of obesity compared with subgroups featuring either or both of High PA and Low SB (OR range, 2.2-23.5) for female adolescents only. The findings imply that promoting sufficient levels of PA while reducing SB should be encouraged in order to reduce obesity risk among adolescents, particularly for males. The risk of obesity for female adolescents can be reduced by engaging in either high levels of PA or low levels of SB.
Mkanta, William N.; Chumbler, Neale R.; Yang, Kai; Saigal, Romesh; Abdollahi, Mohammad; Mejia de Grubb, Maria C.; Ezekekwu, Emmanuel U.
2017-01-01
The ability to predict discharge destination would be a useful way of optimizing posthospital care. We conducted a cross-sectional, multistate study of inpatient services to assess the likelihood of home discharge in 2009 among Medicaid enrollees who were discharged following general hospitalizations. Analyses were conducted using hospitalization data from the states of California, Georgia, Michigan, and Mississippi. A total of 33,160 patients were included in the study, of whom 13,948 (42%) were discharged to their own homes and 19,212 (58%) were discharged to continue institution-based treatment. A multiple logistic regression model showed that gender, age, race, and having ambulatory care-sensitive conditions upon admission were significant predictors of home-based discharge. Females had higher odds of home discharge in the sample (odds ratio [OR] = 1.631; 95% confidence interval [CI], 1.520-1.751), while patients with ambulatory care-sensitive conditions were less likely to receive home discharges (OR = 0.739; 95% CI, 0.684-0.798). As the nation engages in the continued effort to improve the effectiveness of the health care system, cost savings are possible if providers and systems of care are able to identify admission factors with greater prospects for in-home services after discharge.
Romanens, Michel; Ackermann, Franz; Spence, John David; Darioli, Roger; Rodondi, Nicolas; Corti, Roberto; Noll, Georg; Schwenkglenks, Matthias; Pencina, Michael
2010-02-01
Cardiovascular risk assessment might be improved with the addition of emerging, new tests derived from atherosclerosis imaging, laboratory tests or functional tests. This article reviews relative risk, odds ratios, receiver-operating curves, posttest risk calculations based on likelihood ratios, the net reclassification improvement and integrated discrimination. This serves to determine whether a new test has an added clinical value on top of conventional risk testing and how this can be verified statistically. Two clinically meaningful examples serve to illustrate novel approaches. This work serves as a review and basic work for the development of new guidelines on cardiovascular risk prediction, taking into account emerging tests, to be proposed by members of the 'Taskforce on Vascular Risk Prediction' under the auspices of the Working Group 'Swiss Atherosclerosis' of the Swiss Society of Cardiology in the future.
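The posttest risk calculation reviewed here follows directly from Bayes' theorem in odds form: posttest odds equal pretest odds times the likelihood ratio. A minimal sketch with made-up numbers (a 10% pretest risk and a test with LR+ = 4 are assumptions for illustration only):

```python
def posttest_probability(pretest_p, lr):
    """Convert a pretest probability to a posttest probability via a likelihood ratio.

    posttest odds = pretest odds * LR;  probability = odds / (1 + odds)
    """
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1.0 + posttest_odds)

# hypothetical example: 10% pretest cardiovascular risk, positive test with LR+ = 4
print(posttest_probability(0.10, 4.0))   # -> about 0.31
```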
Concepts, challenges, and successes in modeling thermodynamics of metabolism.
Cannon, William R
2014-01-01
The modeling of the chemical reactions involved in metabolism is a daunting task. Ideally, the modeling of metabolism would use kinetic simulations, but these simulations require knowledge of the thousands of rate constants involved in the reactions. The measurement of rate constants is very labor intensive, and hence rate constants for most enzymatic reactions are not available. Consequently, constraint-based flux modeling has been the method of choice because it does not require the use of the rate constants of the law of mass action. However, this convenience also limits the predictive power of constraint-based approaches in that the law of mass action is used only as a constraint, making it difficult to predict metabolite levels or energy requirements of pathways. An alternative to both of these approaches is to model metabolism using simulations of states rather than simulations of reactions, in which the state is defined as the set of all metabolite counts or concentrations. While kinetic simulations model reactions based on the likelihood of the reaction derived from the law of mass action, states are modeled based on likelihood ratios of mass action. Both approaches provide information on the energy requirements of metabolic reactions and pathways. However, modeling states rather than reactions has the advantage that the parameters needed to model states (chemical potentials) are much easier to determine than the parameters needed to model reactions (rate constants). Herein, we discuss recent results, assumptions, and issues in using simulations of states to model metabolism.
Labronici, Pedro José; Ferreira, Leonardo Termis; Dos Santos Filho, Fernando Claudino; Pires, Robinson Esteves Santos; Gomes, Davi Coutinho Fonseca Fernandes; da Silva, Luiz Henrique Penteado; Gameiro, Vinicius Schott
2017-02-01
Several so-called casting indices are available for objective evaluation of plaster cast quality. The present study sought to investigate four of these indices (gap index, padding index, Canterbury index, and three-point index) as compared to a reference standard (cast index) for evaluation of plaster cast quality after closed reduction of pediatric displaced distal forearm fractures. Forty-three radiographs from patients with displaced distal forearm fractures requiring manipulation were reviewed. Accuracy, sensitivity, specificity, false-positive probability, false-negative probability, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio were calculated for each of the tested indices. Comparison among indices revealed diagnostic agreement in only 4.7% of cases. The strongest correlation with the cast index was found for the gap index, with a Spearman correlation coefficient of 0.94. The gap index also displayed the best agreement with the cast index, with both indices yielding the same result in 79.1% of assessments. When seeking to assess plaster cast quality, the cast index and gap index should be calculated; if both indices agree, a decision on quality can be made. If the cast and gap indices disagree, the padding index can be calculated as a tiebreaker, and the decision based on the most frequent of the three results. Calculation of the three-point index and Canterbury index appears unnecessary. Copyright © 2016 Elsevier Ltd. All rights reserved.
Pharmacokinetic Modeling of Intranasal Scopolamine in Plasma Saliva and Urine
NASA Technical Reports Server (NTRS)
Wu, L.; Tam, V. H.; Chow, D. S. L.; Putcha, L.
2015-01-01
An intranasal gel dosage formulation of scopolamine (INSCOP) was developed for the treatment of Space Motion Sickness (SMS). The bioavailability and pharmacokinetics (PK) were evaluated under IND (Investigational New Drug) guidelines. The aim of the project was to develop a PK model that can predict the relationships among plasma, saliva and urinary scopolamine concentrations using data collected from the IND clinical trial protocol with INSCOP. Twelve healthy human subjects were administered INSCOP at three dose levels (0.1, 0.2 and 0.4 mg). Serial blood, saliva and urine samples were collected between 5 min and 24 h after dosing, and scopolamine concentrations were measured using a validated LC-MS-MS assay. PK compartmental models, using actual dosing and sampling times, were established using Phoenix (version 1.2). Model selection was based on a likelihood ratio test on the difference in -2LL (twice the negative log-likelihood) and on comparison of quality-of-fit plots. Predictable correlations among scopolamine concentrations in plasma, saliva and urine were established, and for the first time the model satisfactorily predicted the population and individual PK of INSCOP in plasma, saliva and urine. The model can be utilized to predict the INSCOP plasma concentration from saliva and urine data, and it will be useful for monitoring the PK of scopolamine in space and other remote environments using non-invasive sampling of saliva and/or urine.
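The -2LL comparison described here is the standard likelihood ratio test for nested models; a minimal sketch with hypothetical -2LL values (not the study's results):

```python
from scipy.stats import chi2

def lrt_nested(m2ll_reduced, m2ll_full, df_extra):
    """Likelihood ratio test between nested models from their -2LL values.

    The difference in -2LL is asymptotically chi-square distributed with
    degrees of freedom equal to the number of extra parameters in the
    fuller model.
    """
    stat = m2ll_reduced - m2ll_full          # -2 log(L_reduced / L_full)
    p = chi2.sf(stat, df_extra)
    return stat, p

# hypothetical -2LL values from two candidate compartmental models
stat, p = lrt_nested(m2ll_reduced=412.6, m2ll_full=403.1, df_extra=2)
print(f"LRT statistic = {stat:.1f}, p = {p:.4f}")
```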
Tsutsumi, Akizumi; Inoue, Akiomi; Eguchi, Hisashi
2017-07-27
The manual for the Japanese Stress Check Program recommends use of the Brief Job Stress Questionnaire (BJSQ) from among the program's instruments and proposes criteria for defining "high-stress" workers. This study aimed to examine how accurately the BJSQ identifies workers with or without potential psychological distress. We used an online survey to administer the BJSQ with a psychological distress scale (K6) to randomly selected workers (n=1,650). We conducted receiver operating characteristics curve analyses to estimate the screening performance of the cutoff points that the Stress Check Program manual recommends for the BJSQ. Prevalence of workers with potential psychological distress defined as K6 score ≥13 was 13%. Prevalence of "high-risk" workers defined using criteria recommended by the program manual was 16.7% for the original version of the BJSQ. The estimated values were as follows: sensitivity, 60.5%; specificity, 88.9%; Youden index, 0.504; positive predictive value, 47.3%; negative predictive value, 93.8%; positive likelihood ratio, 6.0; and negative likelihood ratio, 0.4. Analyses based on the simplified BJSQ indicated lower sensitivity compared with the original version, although we expected roughly the same screening performance for the best scenario using the original version. Our analyses in which psychological distress measured by K6 was set as the target condition indicate less than half of the identified "high-stress" workers warrant consideration for secondary screening for psychological distress.
Smith, Vanessa; De Keyser, Filip; Pizzorni, Carmen; Van Praet, Jens T; Decuman, Saskia; Sulli, Alberto; Deschepper, Ellen; Cutolo, Maurizio
2011-01-01
Construction of a simple nailfold videocapillaroscopic (NVC) scoring modality as a prognostic index for digital trophic lesions for day-to-day clinical use. An association with a single, simple, (semi-)quantitatively scored NVC parameter, the mean score of capillary loss, was explored in 71 consecutive patients with systemic sclerosis (SSc), together with the reliability of reducing the number of investigated fields (F32-F16-F8-F4). The cut-off value of the prognostic index (mean score of capillary loss calculated over a reduced number of fields) for present/future digital trophic lesions was selected by receiver operating characteristic (ROC) analysis. Reduction in the number of fields for the mean score of capillary loss was reliable from F32 to F8 (intraclass correlation coefficient of F16/F32: 0.97; F8/F32: 0.90). Based on ROC analysis, a prognostic index (mean score of capillary loss as calculated over F8) with a cut-off value of 1.67 is proposed. This value has a sensitivity of 72.22/70.00, specificity of 70.59/69.77, positive likelihood ratio of 2.46/2.32 and negative likelihood ratio of 0.39/0.43 for present/future digital trophic lesions. A simple prognostic index for digital trophic lesions for daily use in SSc clinics is proposed, limited to the mean score of capillary loss as calculated over eight fields (8 fingers, 1 field per finger).
Ji, B; Jin, X-B
2017-08-01
We conducted this prospective comparative study to examine the hypothesis that varicocele is associated with hypogonadism and impaired erectile function, as reflected in International Index of Erectile Function-5 (IIEF-5) scores as well as nocturnal penile tumescence and rigidity (NPTR) parameters. From December 2014 to December 2015, a total of 130 males with varicocele complaining of infertility or scrotal discomfort and 130 age-matched healthy males chosen from volunteer healthy hospital staff as controls were recruited into this study. Serum testosterone (TT) levels and IIEF-5 scores as well as NPTR parameters were evaluated and compared between varicocele and control subjects. All participants were further classified as hypogonadal or not based on a cut-off value of 300 ng/dL. A total of 45 of 130 patients were identified as hypogonadal, whereas hypogonadism was not found in any control subject. A multivariate logistic regression with likelihood ratio test revealed that TT levels as well as grade III and II varicocele were significant indicators of hypogonadism (chi-square of likelihood ratio = 12.40, df = 3, p < .01). Furthermore, TT levels and infertility duration were associated with IIEF-5 scores in a multivariate linear regression analysis (adjusted R² = 0.545). In conclusion, the correlation of grade III and II varicocele with an increased risk of hypogonadism was confirmed in this study, and an impaired erectile function correlated with TT levels and infertility duration was also observed. © 2016 Blackwell Verlag GmbH.
A parimutuel gambling perspective to compare probabilistic seismicity forecasts
NASA Astrophysics Data System (ADS)
Zechar, J. Douglas; Zhuang, Jiancang
2014-10-01
Using analogies to gaming, we consider the problem of comparing multiple probabilistic seismicity forecasts. To measure relative model performance, we suggest a parimutuel gambling perspective which addresses shortcomings of other methods such as likelihood ratio, information gain and Molchan diagrams. We describe two variants of the parimutuel approach for a set of forecasts: head-to-head, in which forecasts are compared in pairs, and round table, in which all forecasts are compared simultaneously. For illustration, we compare the 5-yr forecasts of the Regional Earthquake Likelihood Models experiment for M4.95+ seismicity in California.
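The gambling perspective can be sketched as follows, under our reading of parimutuel scoring: each forecaster stakes one unit per bin, split according to its stated probability, and the pooled stakes on the realized outcome are shared in proportion to each forecaster's stake on that outcome. The sketch below uses made-up probabilities and is a zero-sum illustration of the idea, not the authors' code; the head-to-head variant is the two-forecaster case.

```python
import numpy as np

def parimutuel_gains(probs, outcomes):
    """Round-table parimutuel gambling score for k probabilistic forecasts.

    probs    : (k, n) forecast probabilities for n space-time-magnitude bins
    outcomes : (n,) array of 0/1 observed occurrences
    Each forecaster stakes 1 unit per bin, split p on 'occur' and 1-p on
    'not occur'; the pooled stakes on the realized outcome are shared in
    proportion to each forecaster's stake on it. Gains are zero-sum.
    """
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    k, n = probs.shape
    stake_on_outcome = np.where(outcomes == 1, probs, 1.0 - probs)    # (k, n)
    pool_share = k * stake_on_outcome / stake_on_outcome.sum(axis=0)  # (k, n)
    return (pool_share - 1.0).sum(axis=1)    # net gain per forecaster

# hypothetical head-to-head comparison (k = 2) over five bins
gains = parimutuel_gains([[0.9, 0.1, 0.2, 0.7, 0.4],
                          [0.5, 0.5, 0.5, 0.5, 0.5]],
                         [1, 0, 0, 1, 0])
print(gains)   # positive net gain -> that forecast outperformed the other
```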
1993-09-10
Baek, J., H. L. Gray, W. A. Woodward and M. D. Fisk (1993). A bootstrap generalized likelihood ratio test in discriminant analysis, Proc. 15th Annual Seismic Research Symposium, in press. Values of the likelihood ratio indicate whether the event does not belong to the first class; the bootstrap technique is used here as well to set the critical value of the test.
Håkonsen, Sasja Jul; Pedersen, Preben Ulrich; Bath-Hextall, Fiona; Kirkpatrick, Pamela
2015-05-15
Effective nutritional screening, nutritional care planning and nutritional support are essential in all settings, and there is no doubt that a health service seeking to increase safety and clinical effectiveness must take nutritional care seriously. Screening and early detection of malnutrition are crucial in identifying patients at nutritional risk. There is a high prevalence of malnutrition in hospitalized patients undergoing treatment for colorectal cancer. To synthesize the best available evidence regarding the diagnostic test accuracy of nutritional tools (sensitivity and specificity) used to identify malnutrition (specifically undernutrition) in patients with colorectal cancer (such as the Malnutrition Screening Tool and Nutritional Risk Index) compared to reference tests (such as the Subjective Global Assessment or Patient-Generated Subjective Global Assessment). The review considered patients with colorectal cancer requiring any or all of surgery, chemotherapy and/or radiotherapy in secondary care. Focus of the review: the diagnostic test accuracy of validated assessment tools/instruments (such as the Malnutrition Screening Tool and Nutritional Risk Index) in the diagnosis of malnutrition (specifically undernutrition) in patients with colorectal cancer, relative to reference tests (Subjective Global Assessment or Patient-Generated Subjective Global Assessment). Types of studies: diagnostic test accuracy studies regardless of study design. Studies published in English, German, Danish, Swedish and Norwegian were considered for inclusion in this review. Databases were searched from their inception to April 2014. Methodological quality was determined using the Quality Assessment of Diagnostic Accuracy Studies checklist. Data were collected using a data extraction form based on the Standards for Reporting Studies of Diagnostic Accuracy checklist. The accuracy of diagnostic tests is presented in terms of sensitivity, specificity, and positive and negative predictive values. In addition, the positive likelihood ratio (sensitivity/[1 - specificity]) and negative likelihood ratio ((1 - sensitivity)/specificity) were also calculated and presented in this review, to provide information about the likelihood that a given test result would be expected when the target condition is present compared with the likelihood that the same result would be expected when the condition is absent. Not all trials reported true positive, true negative, false positive and false negative rates; therefore, these rates were calculated based on the data in the published papers. A two-by-two truth table was reconstructed for each study, and sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio and negative likelihood ratio were calculated for each study. A summary receiver operating characteristic (SROC) curve was constructed to determine the relationship between sensitivity and specificity, and the area under the SROC curve, which measures the usefulness of a test, was calculated. Meta-analysis was not considered appropriate; therefore, data were synthesized in a narrative summary. One study evaluated the Malnutrition Screening Tool against the reference standard Patient-Generated Subjective Global Assessment: the sensitivity was 56% and the specificity 84%.
The positive likelihood ratio was 3.100, the negative likelihood ratio was 0.59, the diagnostic odds ratio (95% CI) was 5.20 (1.09-24.90), and the Area Under the Curve (AUC) represented only poor to fair diagnostic test accuracy. A total of two studies evaluated the diagnostic accuracy of the Malnutrition Universal Screening Tool (MUST) (index test) compared to both the Subjective Global Assessment (SGA) (reference standard) and the PG-SGA (reference standard) in patients with colorectal cancer. For MUST vs SGA, the sensitivity of the tool was 96%, specificity was 75%, LR+ 3.826, LR- 0.058, diagnostic OR (95% CI) 66.00 (6.61-659.24), and the AUC represented excellent diagnostic accuracy. For MUST vs PG-SGA, the sensitivity of the tool was 72%, specificity 48.9%, LR+ 1.382, LR- 0.579, diagnostic OR (95% CI) 2.39 (0.87-6.58), and the AUC indicated that the tool failed as a diagnostic test to identify patients with colorectal cancer at nutritional risk. The Nutrition Risk Index (NRI) was compared to the SGA, with a sensitivity of 95.2%, specificity of 62.5%, LR+ 2.521, LR- 0.087, diagnostic OR (95% CI) 28.89 (6.93-120.40), and an AUC representing good diagnostic accuracy. For NRI vs PG-SGA, the sensitivity of the tool was 68%, specificity 64%, LR+ 1.947, LR- 0.487, diagnostic OR (95% CI) 4.00 (1.23-13.01), and the AUC indicated poor diagnostic test accuracy. There is no single, specific tool used to screen or assess the nutritional status of colorectal cancer patients. All tools showed varied diagnostic accuracies when compared to the reference standards SGA and PG-SGA. Hence clinical judgment, perhaps combined with the SGA or PG-SGA, should play a major role. The PG-SGA offers several advantages over the SGA tool: 1) the patient completes the medical history component, thereby decreasing the amount of time involved; 2) it contains more nutrition impact symptoms, which are important to the patient with cancer; and 3) it has a scoring system that allows patients to be triaged for nutritional intervention. Therefore, the PG-SGA could be used as a nutrition assessment tool, as it allows quick identification and prioritization of colorectal cancer patients with malnutrition in combination with other parameters. This systematic review highlights the need for further studies investigating the diagnostic accuracy of existing nutritional screening tools in colorectal cancer patients. If new screening tools are developed, they should be developed and validated within the same clinical context and patient population (colorectal cancer patients). The Joanna Briggs Institute.
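The quantities reported throughout this review follow mechanically from each reconstructed two-by-two truth table; a minimal sketch with hypothetical counts (not those of any included study):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard accuracy measures from a two-by-two diagnostic truth table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1.0 - spec)          # positive likelihood ratio
    lr_neg = (1.0 - sens) / spec          # negative likelihood ratio
    dor = lr_pos / lr_neg                 # diagnostic odds ratio
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                lr_pos=lr_pos, lr_neg=lr_neg, dor=dor)

# hypothetical counts for an index tool against a reference standard
print(diagnostic_metrics(tp=28, fp=8, fn=22, tn=42))
```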
2014-10-06
The parameter space is a subset $\tilde{\Theta}$ of $\ell$-dimensional Euclidean space. The sub-$\sigma$-algebra $\mathcal{F}_n = \mathcal{F}^X_n = \sigma(X_1^n)$ of $\mathcal{F}$ is generated by the stochastic process $X_1^n = (X_1, \dots, X_n)$. The developed asymptotic hypothesis testing theory is based on the SLLN and rates of convergence in the strong law for the log-likelihood ratio (LLR) processes. Write
$$\lambda_n(\theta, \tilde{\theta}) = \log \frac{\mathrm{d}P^n_{\theta}}{\mathrm{d}P^n_{\tilde{\theta}}} = \sum_{k=1}^{n} \log \frac{p_{\theta}(X_k \mid X_1^{k-1})}{p_{\tilde{\theta}}(X_k \mid X_1^{k-1})}$$
for the log-likelihood ratio (LLR) process.
Distribution of model-based multipoint heterogeneity lod scores.
Xing, Chao; Morris, Nathan; Xing, Guan
2010-12-01
The distribution of two-point heterogeneity lod scores (HLOD) has been intensively investigated because the conventional χ² approximation to the likelihood ratio test is not directly applicable. However, no study has investigated the distribution of the multipoint HLOD despite its wide application. Here we point out that, compared with the two-point HLOD, the multipoint HLOD essentially tests for homogeneity given linkage and follows a relatively simple limiting distribution, ½χ²₀ + ½χ²₁, which can be obtained by established statistical theory. We further examine the theoretical result by simulation studies. © 2010 Wiley-Liss, Inc.
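Because half the limiting mass sits at zero (the χ²₀ component is a point mass), the p-value for a positive observed statistic is half the χ²₁ tail probability; a minimal sketch:

```python
from scipy.stats import chi2

def mixture_pvalue(stat):
    """P-value for a LRT statistic with limiting law 0.5*chi2_0 + 0.5*chi2_1.

    chi2_0 is a point mass at zero, so the upper tail for stat > 0 is half
    the chi2_1 tail.
    """
    return 0.5 * chi2.sf(stat, df=1) if stat > 0 else 1.0

print(mixture_pvalue(2.71))   # about 0.05: the 5% critical value of the mixture
```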
Estimating cost ratio distribution between fatal and non-fatal road accidents in Malaysia
NASA Astrophysics Data System (ADS)
Hamdan, Nurhidayah; Daud, Noorizam
2014-07-01
Road traffic crashes are a major global problem and should be treated as a shared responsibility. In Malaysia, road accidents killed 6,917 people and injured or disabled 17,522 in 2012, and the government spent about RM9.3 billion in 2009; the reported annual cost to the nation is approximately 1 to 2 percent of gross domestic product (GDP). The current cost ratio for fatal and non-fatal accidents used by the Ministry of Works Malaysia is simply based on an arbitrary value of 6:4 (equivalently 1.5:1), reflecting the fact that six factors are involved in the calculation of accident cost for a fatal accident and four factors for a non-fatal accident. This simple rule used by the authority to set the cost ratio is questionable, since there is little mathematical or conceptual evidence to explain how the ratio is determined. The main aim of this study is to determine a new accident cost ratio for fatal and non-fatal accidents in Malaysia based on a quantitative statistical approach. The cost ratio distributions are estimated based on the Weibull distribution. Owing to the unavailability of official accident cost data, insurance claim data for both fatal and non-fatal accidents were used as proxy information for the actual accident costs. Two types of parameter estimates are used in this study: maximum likelihood estimation (MLE) and robust estimation. The findings reveal that the accident cost ratio for fatal to non-fatal claims is 1.33 under MLE, while for robust estimation the ratio is slightly higher at 1.51. This study will help the authority to determine a more accurate cost ratio between fatal and non-fatal accidents than the official ratio set by the government, since the cost ratio is an important weighting in modeling road accident data. The study therefore provides guidance for revising the insurance-claim-based approach adopted by the Malaysian road authority and for assessing which estimation method is suitable for implementation in Malaysia.
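A minimal sketch of the MLE step: fit two-parameter Weibull distributions to fatal and non-fatal claim amounts by maximum likelihood and form the ratio of fitted means. The claim data here are synthetic stand-ins (the real insurance data are not available), and the robust-estimation variant is not shown.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
# hypothetical insurance-claim amounts (RM) as proxies for accident costs
fatal_claims = rng.weibull(1.4, 500) * 90_000
nonfatal_claims = rng.weibull(1.3, 2_000) * 60_000

# two-parameter Weibull MLE fits (location fixed at zero)
c_f, _, scale_f = weibull_min.fit(fatal_claims, floc=0)
c_nf, _, scale_nf = weibull_min.fit(nonfatal_claims, floc=0)

mean_f = weibull_min.mean(c_f, loc=0, scale=scale_f)
mean_nf = weibull_min.mean(c_nf, loc=0, scale=scale_nf)
print(f"estimated fatal:non-fatal cost ratio = {mean_f / mean_nf:.2f}")
```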
Revisiting tests for neglected nonlinearity using artificial neural networks.
Cho, Jin Seo; Ishida, Isao; White, Halbert
2011-05-01
Tests for regression neglected nonlinearity based on artificial neural networks (ANNs) have so far been studied by separately analyzing the two ways in which the null of regression linearity can hold. This implies that the asymptotic behavior of general ANN-based tests for neglected nonlinearity is still an open question. Here we analyze a convenient ANN-based quasi-likelihood ratio statistic for testing neglected nonlinearity, paying careful attention to both components of the null. We derive the asymptotic null distribution under each component separately and analyze their interaction. Somewhat remarkably, it turns out that the previously known asymptotic null distribution for the type 1 case still applies, but under somewhat stronger conditions than previously recognized. We present Monte Carlo experiments corroborating our theoretical results and showing that standard methods can yield misleading inference when our new, stronger regularity conditions are violated.
A more powerful exact test of noninferiority from binary matched-pairs data.
Lloyd, Chris J; Moldovan, Max V
2008-08-15
Assessing the therapeutic noninferiority of one medical treatment compared with another is often based on the difference in response rates from a matched binary pairs design. This paper develops a new exact unconditional test for noninferiority that is more powerful than available alternatives. There are two new elements presented in this paper. First, we introduce the likelihood ratio statistic as an alternative to the previously proposed score statistic of Nam (Biometrics 1997; 53:1422-1430). Second, we eliminate the nuisance parameter by estimation followed by maximization, as an alternative to the partial maximization of Berger and Boos (J. Am. Stat. Assoc. 1994; 89:1012-1016) or traditional full maximization. Based on an extensive numerical study, we recommend tests based on the score statistic, with the nuisance parameter controlled by estimation followed by maximization. 2008 John Wiley & Sons, Ltd.
Determining the accuracy of maximum likelihood parameter estimates with colored residuals
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Klein, Vladislav
1994-01-01
An important part of building high fidelity mathematical models based on measured data is calculating the accuracy associated with statistical estimates of the model parameters. Indeed, without some idea of the accuracy of parameter estimates, the estimates themselves have limited value. In this work, an expression based on theoretical analysis was developed to properly compute parameter accuracy measures for maximum likelihood estimates with colored residuals. This result is important because experience from the analysis of measured data reveals that the residuals from maximum likelihood estimation are almost always colored. The calculations involved can be appended to conventional maximum likelihood estimation algorithms. Simulated data runs were used to show that the parameter accuracy measures computed with this technique accurately reflect the quality of the parameter estimates from maximum likelihood estimation without the need for analysis of the output residuals in the frequency domain or heuristically determined multiplication factors. The result is general, although the application studied here is maximum likelihood estimation of aerodynamic model parameters from flight test data.
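One standard way to express the idea of correcting parameter accuracy for colored residuals is a sandwich-type covariance built from the empirical residual autocovariance. The sketch below illustrates that general construction for a linear-in-parameters model with AR(1) noise; it follows the sandwich idea rather than the paper's exact expression, and the data, model, and lag cutoff are all synthetic assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz

def colored_residual_cov(X, residuals, max_lag=50):
    """Sandwich-type covariance for least-squares/ML estimates with colored residuals.

    Builds a Toeplitz residual covariance from the empirical autocovariance
    sequence (truncated at max_lag) and propagates it through the usual
    (X'X)^-1 X' R X (X'X)^-1 form.
    """
    n = len(residuals)
    r = residuals - residuals.mean()
    acov = np.zeros(n)
    for k in range(min(max_lag + 1, n)):
        acov[k] = r[:n - k] @ r[k:] / n
    R = toeplitz(acov)                    # estimated residual covariance matrix
    XtX_inv = np.linalg.inv(X.T @ X)
    return XtX_inv @ X.T @ R @ X @ XtX_inv

# hypothetical linear model with AR(1) (colored) noise
rng = np.random.default_rng(0)
n = 400
X = np.column_stack([np.ones(n), np.linspace(0, 10, n)])
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.8 * noise[t - 1] + rng.normal(scale=0.5)
y = X @ np.array([1.0, 2.0]) + noise
beta = np.linalg.lstsq(X, y, rcond=None)[0]
cov = colored_residual_cov(X, y - X @ beta)
print("corrected standard errors:", np.sqrt(np.diag(cov)))
```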
Reed, S M; Howe, D K; Morrow, J K; Graves, A; Yeargan, M R; Johnson, A L; MacKay, R J; Furr, M; Saville, W J A; Williams, N M
2013-01-01
Recent work demonstrated the value of antigen-specific antibody indices (AI and C-value) to detect intrathecal antibody production against Sarcocystis neurona for antemortem diagnosis of equine protozoal myeloencephalitis (EPM). This study was conducted to assess whether the antigen-specific antibody indices can be reduced to a simple serum:cerebrospinal fluid (CSF) titer ratio while retaining accurate EPM diagnosis. Paired serum and CSF samples from 128 horses diagnosed by postmortem examination were analyzed. The sample set included 44 EPM cases, 35 cervical vertebral malformation (CVM) cases, 39 neurologic cases other than EPM or CVM, and 10 non-neurologic cases. Antibodies against S. neurona were measured in serum and CSF pairs using the SnSAG2 and SnSAG4/3 (SnSAG2, 4/3) ELISAs, and the ratio of each serum titer to the corresponding CSF titer was determined. Likelihood ratios and diagnostic sensitivity and specificity were calculated based on serum titers, CSF titers, and serum:CSF titer ratios. Excellent diagnostic sensitivity and specificity were obtained from the SnSAG2, 4/3 serum:CSF titer ratio. Sensitivity and specificity of 93.2% and 81.1%, respectively, were achieved using a ratio cutoff of ≤100, whereas sensitivity and specificity were 86.4% and 95.9%, respectively, if a more rigorous cutoff of ≤50 was used. Antibody titers in CSF also provided good diagnostic accuracy, whereas serum antibody titers alone yielded much lower sensitivity and specificity. These findings confirm the value of detecting intrathecal antibody production for antemortem diagnosis of EPM, and they further show that the antigen-specific antibody indices can be reduced in practice to a simple serum:CSF titer ratio. Copyright © 2013 by the American College of Veterinary Internal Medicine.
Shi, Jia-Xin; Li, Jia-Shu; Hu, Rong; Li, Chun-Hua; Wen, Yan; Zheng, Hong; Zhang, Feng; Li, Qin
2013-01-01
The serum soluble triggering receptor expressed on myeloid cells-1 (sTREM-1) is a useful biomarker in differentiating bacterial infections from others. However, the diagnostic value of sTREM-1 in bronchoalveolar lavage fluid (BALF) in lung infections has not been well established. We performed a meta-analysis to assess the accuracy of sTREM-1 in BALF for diagnosis of bacterial lung infections in intensive care unit (ICU) patients. We searched PUBMED, EMBASE and Web of Knowledge (from January 1966 to October 2012) databases for relevant studies that reported diagnostic accuracy data of BALF sTREM-1 in the diagnosis of bacterial lung infections in ICU patients. Pooled sensitivity, specificity, and positive and negative likelihood ratios were calculated by a bivariate regression analysis. Measures of accuracy and Q point value (Q*) were calculated using summary receiver operating characteristic (SROC) curve. The potential between-studies heterogeneity was explored by subgroup analysis. Nine studies were included in the present meta-analysis. Overall, the prevalence was 50.6%; the sensitivity was 0.87 (95% confidence interval (CI), 0.72-0.95); the specificity was 0.79 (95% CI, 0.56-0.92); the positive likelihood ratio (PLR) was 4.18 (95% CI, 1.78-9.86); the negative likelihood ratio (NLR) was 0.16 (95% CI, 0.07-0.36), and the diagnostic odds ratio (DOR) was 25.60 (95% CI, 7.28-89.93). The area under the SROC curve was 0.91 (95% CI, 0.88-0.93), with a Q* of 0.83. Subgroup analysis showed that the assay method and cutoff value influenced the diagnostic accuracy of sTREM-1. BALF sTREM-1 is a useful biomarker of bacterial lung infections in ICU patients. Further studies are needed to confirm the optimized cutoff value.
Mikula, A L; Hetzel, S J; Binkley, N; Anderson, P A
2017-05-01
Many osteoporosis-related vertebral fractures go unrecognized, but their detection is important because their presence increases future fracture risk. We found that height loss is a useful tool for detecting patients with vertebral fractures, low bone mineral density, and vitamin D deficiency, which may lead to improvements in patient care. This study aimed to determine whether and how height loss can be used to identify patients with vertebral fractures, low bone mineral density, and vitamin D deficiency. A hospital database search was performed in which four patient groups, comprising those with a diagnosis of osteoporosis-related vertebral fracture, osteoporosis, osteopenia, or vitamin D deficiency, and a control group were evaluated for chart-documented height loss over an average 3.5- to 4-year period. Data were retrieved from 66,021 patients (25,792 men and 40,229 women). Height losses of 1, 2, 3, and 4 cm had sensitivities of 42, 32, 19, and 14% for detecting vertebral fractures, respectively. Positive likelihood ratios for detecting vertebral fractures were 1.73, 2.35, and 2.89 at 2, 3, and 4 cm of height loss, respectively. Height loss had lower sensitivities and positive likelihood ratios for detecting low bone mineral density and vitamin D deficiency than for vertebral fractures. The specificity of 1, 2, 3, and 4 cm of height loss was 70, 82, 92, and 95%, respectively. The odds ratio for a patient who loses 1 cm of height being in one of the four diagnostic groups, compared with a patient who loses no height, was higher for younger and male patients. This study demonstrated that prospective height loss is an effective tool for identifying patients with vertebral fractures, low bone mineral density, and vitamin D deficiency, although a lack of height loss does not rule out these diagnoses. If significant height loss is present, the high positive likelihood ratios support a further workup.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, Thomas J.; Brown, Richard S.; Stephenson, John R.
Each year, millions of fish have telemetry tags (acoustic, radio, inductive) surgically implanted to assess their passage and survival through hydropower facilities. One route of passage of particular concern is through hydro turbines, in which fish may be exposed to a range of potential injuries, including barotrauma from rapid decompression. The change in pressure from acclimation to exposure (nadir) has been found to be an important factor in predicting the likelihood of mortality and injury for juvenile Chinook salmon undergoing rapid decompression associated with simulated turbine passage. The presence of telemetry tags has also been shown to influence the likelihood of injury and mortality for juvenile Chinook salmon. This research investigated the likelihood of mortality and injury for juvenile Chinook salmon carrying telemetry tags and exposed to a range of simulated turbine passage conditions. Several factors were examined as predictors of mortal injury for fish undergoing rapid decompression, and the ratio of pressure change and tag burden were determined to be the most predictive factors. As the ratio of pressure change and tag burden increase, the likelihood of mortal injury also increases. The results of this study suggest that previous survival estimates of juvenile Chinook salmon passing through hydro turbines may have been biased due to the presence of telemetry tags, and this has direct implications for the management of hydroelectric facilities. Realistic examples indicate how the bias in turbine passage survival estimates could be 20% or higher, depending on the mass of the implanted tags and the ratio of acclimation to exposure pressures. Bias would increase as the tag burden and pressure ratio increase, with direct implications for survival estimates. It is recommended that future survival studies use the smallest telemetry tags possible to minimize the potential bias that may be associated with carrying the tag.
The effect of mis-specification on mean and selection between the Weibull and lognormal models
NASA Astrophysics Data System (ADS)
Jia, Xiang; Nadarajah, Saralees; Guo, Bo
2018-02-01
The lognormal and Weibull models are commonly used to analyse data. Although selection procedures have been extensively studied, it is possible that the lognormal model is selected when the true model is Weibull, or vice versa. As the mean is important in applications, we focus on the effect of mis-specification on the mean. The effect on the lognormal mean is first considered when a lognormal sample is wrongly fitted by a Weibull model. The maximum likelihood estimate (MLE) and quasi-MLE (QMLE) of the lognormal mean are obtained based on the lognormal and Weibull models, respectively. The impact is then evaluated by computing the ratio of biases and the ratio of mean squared errors (MSEs) between the MLE and QMLE. For completeness, the theoretical results are demonstrated by simulation studies. Next, the effect of the reverse mis-specification on the Weibull mean is discussed. It is found that the ratio of biases and the ratio of MSEs are independent of the location and scale parameters of the lognormal and Weibull models. The influence can be ignored if certain special conditions hold. Finally, a model selection method is proposed by comparing the ratios concerning biases and MSEs. We also present published data to illustrate the study.
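The bias and MSE ratios studied here can be approximated by simulation; a sketch for one direction (lognormal data wrongly fitted by a Weibull model), with made-up parameter values. Note the bias ratio is numerically unstable when the correctly specified MLE bias is near zero.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(42)
mu, sigma, n, reps = 1.0, 0.5, 50, 500
true_mean = np.exp(mu + sigma**2 / 2)

mle, qmle = [], []
for _ in range(reps):
    x = rng.lognormal(mu, sigma, n)
    # correct model: lognormal MLE of the mean
    m_hat, s_hat = np.log(x).mean(), np.log(x).std()
    mle.append(np.exp(m_hat + s_hat**2 / 2))
    # mis-specified model: Weibull QMLE of the mean
    c, _, scale = weibull_min.fit(x, floc=0)
    qmle.append(weibull_min.mean(c, loc=0, scale=scale))

mle, qmle = np.array(mle), np.array(qmle)
bias_mle, bias_qmle = mle.mean() - true_mean, qmle.mean() - true_mean
mse_mle = ((mle - true_mean) ** 2).mean()
mse_qmle = ((qmle - true_mean) ** 2).mean()
print(f"bias ratio (QMLE/MLE) = {bias_qmle / bias_mle:.2f}, "
      f"MSE ratio = {mse_qmle / mse_mle:.2f}")
```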
A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits
Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling
2013-01-01
Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework, by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to misspecification of the time dependence. We implement an iterative procedure to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762
Al-Radi, Osman O; Harrell, Frank E; Caldarone, Christopher A; McCrindle, Brian W; Jacobs, Jeffrey P; Williams, M Gail; Van Arsdell, Glen S; Williams, William G
2007-04-01
The Aristotle Basic Complexity score and the Risk Adjustment in Congenital Heart Surgery system were developed by consensus to compare outcomes of congenital cardiac surgery. We compared the predictive value of the 2 systems. Of all index congenital cardiac operations at our institution from 1982 to 2004 (n = 13,675), we were able to assign an Aristotle Basic Complexity score, a Risk Adjustment in Congenital Heart Surgery score, and both scores to 13,138 (96%), 11,533 (84%), and 11,438 (84%) operations, respectively. Models of in-hospital mortality and length of stay were generated for Aristotle Basic Complexity and Risk Adjustment in Congenital Heart Surgery using an identical data set in which both Aristotle Basic Complexity and Risk Adjustment in Congenital Heart Surgery scores were assigned. The likelihood ratio test for nested models and paired concordance statistics were used. After adjustment for year of operation, the odds ratios for Aristotle Basic Complexity score 3 versus 6, 9 versus 6, 12 versus 6, and 15 versus 6 were 0.29, 2.22, 7.62, and 26.54 (P < .0001). Similarly, odds ratios for Risk Adjustment in Congenital Heart Surgery categories 1 versus 2, 3 versus 2, 4 versus 2, and 5/6 versus 2 were 0.23, 1.98, 5.80, and 20.71 (P < .0001). Risk Adjustment in Congenital Heart Surgery added significant predictive value over Aristotle Basic Complexity (likelihood ratio chi2 = 162, P < .0001), whereas Aristotle Basic Complexity contributed much less predictive value over Risk Adjustment in Congenital Heart Surgery (likelihood ratio chi2 = 13.4, P = .009). Neither system fully adjusted for the child's age. The Risk Adjustment in Congenital Heart Surgery scores were more concordant with length of stay compared with Aristotle Basic Complexity scores (P < .0001). The predictive value of Risk Adjustment in Congenital Heart Surgery is higher than that of Aristotle Basic Complexity. The use of Aristotle Basic Complexity or Risk Adjustment in Congenital Heart Surgery as risk stratification and trending tools to monitor outcomes over time and to guide risk-adjusted comparisons may be valuable.
The Fecal Microbiota Profile and Bronchiolitis in Infants
Linnemann, Rachel W.; Mansbach, Jonathan M.; Ajami, Nadim J.; Espinola, Janice A.; Petrosino, Joseph F.; Piedra, Pedro A.; Stevenson, Michelle D.; Sullivan, Ashley F.; Thompson, Amy D.; Camargo, Carlos A.
2016-01-01
BACKGROUND: Little is known about the association of gut microbiota, a potentially modifiable factor, with bronchiolitis in infants. We aimed to determine the association of fecal microbiota with bronchiolitis in infants. METHODS: We conducted a case–control study. As a part of multicenter prospective study, we collected stool samples from 40 infants hospitalized with bronchiolitis. We concurrently enrolled 115 age-matched healthy controls. By applying 16S rRNA gene sequencing and an unbiased clustering approach to these 155 fecal samples, we identified microbiota profiles and determined the association of microbiota profiles with likelihood of bronchiolitis. RESULTS: Overall, the median age was 3 months, 55% were male, and 54% were non-Hispanic white. Unbiased clustering of fecal microbiota identified 4 distinct profiles: Escherichia-dominant profile (30%), Bifidobacterium-dominant profile (21%), Enterobacter/Veillonella-dominant profile (22%), and Bacteroides-dominant profile (28%). The proportion of bronchiolitis was lowest in infants with the Enterobacter/Veillonella-dominant profile (15%) and highest in the Bacteroides-dominant profile (44%), corresponding to an odds ratio of 4.59 (95% confidence interval, 1.58–15.5; P = .008). In the multivariable model, the significant association between the Bacteroides-dominant profile and a greater likelihood of bronchiolitis persisted (odds ratio for comparison with the Enterobacter/Veillonella-dominant profile, 4.24; 95% confidence interval, 1.56–12.0; P = .005). In contrast, the likelihood of bronchiolitis in infants with the Escherichia-dominant or Bifidobacterium-dominant profile was not significantly different compared with those with the Enterobacter/Veillonella-dominant profile. CONCLUSIONS: In this case–control study, we identified 4 distinct fecal microbiota profiles in infants. The Bacteroides-dominant profile was associated with a higher likelihood of bronchiolitis. PMID:27354456
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.
2015-01-19
Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement from earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
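The core of such a Bayesian treatment can be sketched as a Poisson log-likelihood-ratio map over candidate source cells. The sketch below assumes an isotropic detector with a 1/r² response and a known source strength, with all numbers synthetic; the paper's detector models, FOV restriction, and terrain handling are not reproduced.

```python
import numpy as np

def localize_llr(counts, det_xy, bkg, grid_xy, strength):
    """Grid-based source localization from gross counts along a flight path.

    counts   : (m,) measured counts per dwell
    det_xy   : (m, 2) detector positions per measurement
    bkg      : expected background counts per dwell
    grid_xy  : (g, 2) candidate source positions
    strength : source counts per dwell at unit distance (assumed known)
    Returns the Poisson log-likelihood ratio of each candidate cell
    against a background-only model.
    """
    d2 = ((det_xy[:, None, :] - grid_xy[None, :, :]) ** 2).sum(-1) + 1.0  # (m, g)
    mu = bkg + strength / d2                  # expected counts per cell
    loglik = (counts[:, None] * np.log(mu) - mu).sum(axis=0)
    loglik0 = (counts * np.log(bkg) - bkg).sum()
    return loglik - loglik0

# hypothetical straight flight line over a 100 m x 100 m area
xs = np.linspace(0, 100, 51)
det = np.column_stack([xs, np.full_like(xs, 40.0)])
grid = np.array([[x, y] for x in range(0, 101, 10) for y in range(0, 101, 10)],
                dtype=float)
rng = np.random.default_rng(3)
true = np.array([60.0, 20.0])
d2 = ((det - true) ** 2).sum(1) + 1.0
obs = rng.poisson(50.0 + 2.0e4 / d2)          # synthetic counts
llr = localize_llr(obs, det, 50.0, grid, 2.0e4)
print("most likely cell:", grid[np.argmax(llr)])
```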
Gastrointestinal malignancies: when does race matter?
Fitzgerald, Timothy L; Bradley, Cathy J; Dahman, Bassam; Zervos, Emmanuel E
2009-11-01
African Americans have a poorer survival from gastrointestinal cancers. We hypothesized that socioeconomic status may explain much of this disparity. Four years of population-based Medicare and Medicaid administrative claims files were merged with the Michigan Tumor Registry. Data were identified for 18,260 patients with colorectal (n = 13,001), pancreatic (n = 2,427), gastric (n = 1,739), and esophageal (n = 1,093) cancer. Three outcomes were studied: the likelihood of late stage diagnosis, the likelihood of surgery after diagnosis, and survival. Bivariate analysis was used to compare stage and operation between African-American and Caucasian patients. Cox proportional hazard models were used to evaluate differences in survival. Statistical significance was defined as p < 0.05. In unadjusted analyses, relative to Caucasian patients, African-American patients with colorectal and esophageal cancer were more likely to present with metastatic disease, were less likely to have surgery, and were less likely to survive during the study period (p < 0.05). In a multivariate analysis, African-American patients had a higher likelihood of death from colorectal cancer than Caucasian patients. This difference, however, did not persist when late stage and surgery were taken into account (hazard ratio = 1.15, 95% CI = 1.06 to 1.24). No racial differences in survival were observed among patients with esophagus, gastric, or pancreatic cancer. These data suggest that improvements in screening and rates of operation may reduce differences in colorectal cancer outcomes between African-American and Caucasian patients. But race has little influence on survival of patients with pancreatic, esophageal, or gastric cancer.
Stafford, Mai; Cooper, Rachel; Cadar, Dorina; Carr, Ewan; Murray, Emily; Richards, Marcus; Stansfeld, Stephen; Zaninotto, Paola; Head, Jenny; Kuh, Diana
2017-01-01
Objective: Policy in many industrialized countries increasingly emphasizes extended working life. We examined associations between physical and cognitive capability in mid-adulthood and work in late adulthood. Methods: Using self-reported physical limitations and performance-based physical and cognitive capability at age 53, assessed by trained nurses in the Medical Research Council (MRC) National Survey of Health and Development, we examined prospective associations with extended working (captured by age at and reason for retirement from the main occupation, bridge employment in paid work after retirement from the main occupation, and voluntary work participation) up to age 68 among >2000 men and women. Results: The number of reported physical limitations at age 53 was associated with a higher likelihood of retiring for negative reasons and a lower likelihood of participating in bridge employment, adjusted for occupational class, education, partner's employment, work disability at age 53, and gender. Better performance on physical and cognitive tests was associated with a greater likelihood of participating in bridge or voluntary work. Cognitive capability in the top 10% compared with the middle 80% of the distribution was associated with an odds ratio for bridge employment of 1.71 [95% confidence interval (95% CI) 1.21-2.42]. Conclusions: The possibility of an extended working life is less likely to be realized by those with poorer midlife physical or cognitive capability, independently of education and social class. Interventions to promote capability, starting in mid-adulthood or earlier, could have long-term consequences for extending working lives.
Remontet, L; Bossard, N; Belot, A; Estève, J
2007-05-10
Relative survival provides a measure of the proportion of patients dying from the disease under study without requiring knowledge of the cause of death. We propose an overall strategy based on regression models to estimate relative survival and model the effects of potential prognostic factors. The baseline hazard was modelled up to 10 years of follow-up using parametric continuous functions. Six models including cubic regression splines were considered, and the Akaike Information Criterion was used to select the final model. This approach yielded smooth and reliable estimates of the mortality hazard and allowed us to deal with sparse data while taking into account all the available information. Splines were also used to model simultaneously non-linear effects of continuous covariates and time-dependent hazard ratios. This led to a graphical representation of the hazard ratio that can be useful for clinical interpretation. Estimates of these models were obtained by likelihood maximization. We showed that these estimates could also be obtained using standard algorithms for Poisson regression. Copyright 2006 John Wiley & Sons, Ltd.
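The Poisson-regression equivalence exploited here is commonly written as follows; this is a sketch of the standard excess-hazard decomposition used in relative-survival work, not an equation quoted from the paper. For a follow-up interval $j$ with $d_j$ observed deaths, $d_j^*$ expected deaths from population life tables, person-time $y_j$, and covariates $\mathbf{x}_j$,

```latex
\lambda_{\mathrm{obs}}(t) = \lambda_{\mathrm{pop}}(t) + \lambda_{\mathrm{excess}}(t),
\qquad
d_j \sim \operatorname{Poisson}(\mu_j), \quad
\mu_j = d_j^{*} + y_j \exp\!\big(\mathbf{x}_j^{\top}\boldsymbol{\beta}\big),
```

so maximizing the excess-hazard likelihood reduces to a generalized linear model that standard Poisson-regression software can fit.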
Dorizzi, R M; Maconi, M; Giavarina, D; Loza, G; Aman, M; Moreira, J; Bisoffi, Z; Gennuso, C
2009-10-01
The adoption of Evidence Based Laboratory Medicine (EBLM) has been hampered until today by the lack of effective tools. The SIMeL EBLM e-Thesaurus (an on-line repertoire of the diagnostic effectiveness of laboratory, radiology and cardiology tests) provides useful support to clinical laboratory professionals and to clinicians for the interpretation of diagnostic tests. The e-Thesaurus is an application developed using Microsoft Active Server Pages technology, runs on Microsoft Internet Information Server, and is available at the SIMeL website using a browser running JavaScript (Internet Explorer is recommended). It contains a database (in Italian, English and Spanish) of the sensitivity and specificity (including 95% confidence intervals), the positive and negative likelihood ratios, the Diagnostic Odds Ratio and the Number Needed to Diagnose of more than 2000 diagnostic tests (mostly laboratory, but also cardiology and radiology). The e-Thesaurus improves on the previous SIMeL paper and CD versions; its main features are search in three languages and easy, continuous updating.
NASA Astrophysics Data System (ADS)
Cui, Yong; Cao, Wenzhou; Li, Quan; Shen, Hua; Liu, Chao; Deng, Junpeng; Xu, Jiangfeng; Shao, Qiang
2016-05-01
Previous studies indicate that prostate cancer antigen 3 (PCA3) is highly expressed in prostatic tumors. However, its clinical value has not been characterized. The aim of this study was to investigate the clinical value of the urine PCA3 test in the diagnosis of prostate cancer by pooling the published data. Clinical trials utilizing the urine PCA3 test for diagnosing prostate cancer were retrieved from PubMed and Embase. A total of 46 clinical trials including 12,295 subjects were included in this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio (+LR), negative likelihood ratio (-LR), diagnostic odds ratio (DOR) and area under the curve (AUC) were 0.65 (95% confidence interval [CI]: 0.63-0.66), 0.73 (95% CI: 0.72-0.74), 2.23 (95% CI: 1.91-2.62), 0.48 (95% CI: 0.44-0.52), 5.31 (95% CI: 4.19-6.73) and 0.75 (95% CI: 0.74-0.77), respectively. In conclusion, the urine PCA3 test has acceptable sensitivity and specificity for the diagnosis of prostate cancer and can be used as a non-invasive method for that purpose.
On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood
ERIC Educational Resources Information Center
Karabatsos, George
2017-01-01
This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shumway, R.H.; McQuarrie, A.D.
Robust statistical approaches to the problem of discriminating between regional earthquakes and explosions are developed. We compare linear discriminant analysis using descriptive features like amplitude and spectral ratios with signal discrimination techniques using the original signal waveforms and spectral approximations to the log likelihood function. Robust information theoretic techniques are proposed, and all methods are applied to 8 earthquakes and 8 mining explosions in Scandinavia and to an event from Novaya Zemlya of unknown origin. It is noted that signal discrimination approaches based on discrimination information and Renyi entropy perform better in the test sample than conventional methods based on spectral ratios involving the P and S phases. Two techniques for identifying the ripple-firing pattern of typical mining explosions are proposed and shown to work well on simulated data and on several Scandinavian earthquakes and explosions. We use both cepstral analysis in the frequency domain and a time domain method based on the autocorrelation and partial autocorrelation functions. The proposed approach strips off underlying smooth spectral and seasonal spectral components corresponding to the echo pattern induced by two simple ripple-fired models. For two mining explosions, a pattern is identified, whereas for two earthquakes, no pattern is evident.
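Of the two ripple-fire detectors mentioned, the cepstral one is easy to sketch: a periodic echo shows up as a peak at its delay (quefrency) in the real cepstrum. A minimal illustration on synthetic data; the signal, echo coefficient, and sampling rate are all made up, and this is the generic technique rather than the authors' processing chain.

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log amplitude spectrum.

    A delayed echo (ripple-fire pattern) appears as a peak at the
    corresponding quefrency.
    """
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.irfft(np.log(spectrum + 1e-12))

# hypothetical waveform with an echo 0.10 s after the direct arrival (fs = 100 Hz)
fs = 100.0
rng = np.random.default_rng(7)
w = rng.normal(size=1024) * np.exp(-np.arange(1024) / 100.0)  # decaying burst
sig = w.copy()
sig[10:] += 0.6 * w[:-10]                 # echo delayed by 10 samples

cep = real_cepstrum(sig)
lag = np.argmax(cep[2:100]) + 2           # skip the low-quefrency envelope
print(f"dominant quefrency = {lag / fs:.2f} s")   # expect about 0.10 s
```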
Selection of a cardiac surgery provider in the managed care era.
Shahian, D M; Yip, W; Westcott, G; Jacobson, J
2000-11-01
Many health planners promote the use of competition to contain cost and improve quality of care. Using a standard econometric model, we examined the evidence for "value-based" cardiac surgery provider selection in eastern Massachusetts, where there is significant competition and managed care penetration. McFadden's conditional logit model was used to study cardiac surgery provider selection among 6952 patients and eight metropolitan Boston hospitals in 1997. Hospital predictor variables included beds, cardiac surgery case volume, objective clinical and financial performance, reputation (percent out-of-state referrals, cardiac residency program), distance from patient's home to hospital, and historical referral patterns. Subgroup analyses were performed for each major payer category. Distance from patient's home to hospital (odds ratio 0.90; P =.000) and the historical referral pattern from each patient's hometown (z = 45.305; P =.000) were important predictors in all models. A cardiac surgery residency enhanced the probability of selection (odds ratio 5.25; P =.000), as did percent out-of-state referrals (odds ratio 1.10; P =.001). Higher mortality rates were associated with decreased probability of selection (odds ratio 0.51; P =.027), but higher length of stay was paradoxically associated with greater probability (odds ratio 1.72; P =.000). Total hospital costs were irrelevant (odds ratio 1.00; P =.179). When analyzed by payer subgroup, Medicare patients appeared to select hospitals with both low mortality (odds ratio 0.43; P =.176) and short length of stay (odds ratio 0.76; P =.213), although the results did not achieve statistical significance. The commercial managed care subgroup exhibited the least "value-based" behavior. The odds ratio for length of stay was the highest of any group (odds ratio = 2.589; P =.000) and there was a subset of hospitals for which higher mortality was actually associated with greater likelihood of selection. The observable determinants of cardiac surgery provider selection are related to hospital reputation, historical referral patterns, and patient proximity, not objective clinical or cost performance. The paradoxic behavior of commercial managed care probably results from unobserved choice factors that are not primarily based on objective provider performance.
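McFadden's conditional logit, the model used in this study, scores each alternative (hospital) by a linear utility of alternative-specific attributes and fits the coefficients by maximum likelihood. The following hand-rolled sketch uses synthetic data and made-up attribute names (distance, reputation, volume), not the study's variables or estimates.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(beta, X, choice):
    """Conditional (McFadden) logit negative log-likelihood.

    X      : (n, J, p) attributes of J alternatives for n choosers
    choice : (n,) index of the chosen alternative
    """
    u = X @ beta                               # (n, J) utilities
    u -= u.max(axis=1, keepdims=True)          # numerical stability
    logp = u - np.log(np.exp(u).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(choice)), choice].sum()

# hypothetical data: 500 patients choosing among 8 hospitals on 3 attributes
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8, 3))
true_beta = np.array([-1.0, 0.5, 0.2])         # e.g. distance, reputation, volume
u = X @ true_beta
p = np.exp(u) / np.exp(u).sum(axis=1, keepdims=True)
choice = np.array([rng.choice(8, p=pi) for pi in p])

fit = minimize(neg_loglik, np.zeros(3), args=(X, choice), method="BFGS")
print("beta_hat:", fit.x, "odds ratios:", np.exp(fit.x))
```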
Revision of an automated microseismic location algorithm for DAS - 3C geophone hybrid array
NASA Astrophysics Data System (ADS)
Mizuno, T.; LeCalvez, J.; Raymer, D.
2017-12-01
Application of distributed acoustic sensing (DAS) has been studied in several areas of seismology. One of these areas is microseismic reservoir monitoring (e.g., Molteni et al., 2017, First Break). Considering the present limitations of DAS, which include a relatively low signal-to-noise ratio (SNR) and no 3C polarization measurements, a DAS - 3C geophone hybrid array is a practical option when using a single monitoring well. Considering the large volume of data from distributed sensing, microseismic event detection and location using a source-scanning type algorithm is a reasonable choice, especially for real-time monitoring. The algorithm must handle both strain rate along the borehole axis for DAS and particle velocity for 3C geophones. Only a small number of large-SNR events will be detected throughout a large aperture encompassing the hybrid array; therefore, the aperture is to be optimized dynamically to eliminate noisy channels for the majority of events. For such a hybrid array, coalescence microseismic mapping (CMM) (Drew et al., 2005, SPE) was revised. CMM forms a likelihood function of event location and origin time. At each receiver, a time function of event-arrival likelihood is inferred using an SNR function, and it is migrated in time and space to determine the hypocenter and origin-time likelihood. This algorithm was revised to dynamically optimize such a hybrid array by identifying receivers where a microseismic signal is possibly detected and using only those receivers to compute the likelihood function. Currently, peak SNR is used to select receivers. To prevent false results due to a small aperture, a minimum aperture threshold is employed. The algorithm refines the location likelihood using 3C geophone polarization. We tested this algorithm using a ray-based synthetic dataset. The method of Leaney (2014, PhD thesis, UBC) is used to compute particle velocity at the receivers. Strain rate along the borehole axis is computed from particle velocity as DAS microseismic synthetic data. The likelihood function formed by both DAS and geophone data behaves as expected, with the aperture dynamically selected depending on the SNR of the event. We conclude that this algorithm can be successfully applied to such hybrid arrays to monitor microseismic activity. A study using a recently acquired dataset is planned.
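A minimal sketch of the general source-scanning idea described here: migrate per-receiver arrival-likelihood (SNR) traces onto a grid of candidate hypocenters and origin times, with a dynamically selected aperture. It assumes a homogeneous velocity model and made-up thresholds and is not the authors' implementation; calling it with an (r, n) SNR array, receiver and grid coordinates, a P velocity, and candidate origin times returns a map whose argmax gives the event location and origin time.

```python
import numpy as np

def cmm_map(snr, t, rec_xyz, grid_xyz, v, origin_times, snr_min=3.0, min_rec=8):
    """Coalescence-style mapping sketch with a dynamically selected aperture.

    snr      : (r, n) per-receiver arrival-likelihood (e.g., SNR) time series
    t        : (n,) common time axis
    rec_xyz  : (r, 3) receiver coordinates
    grid_xyz : (g, 3) candidate hypocenters
    v        : scalar P velocity (homogeneous model assumed)
    Only receivers whose peak SNR exceeds snr_min are stacked, subject to a
    minimum-receiver threshold to avoid spuriously small apertures.
    """
    keep = snr.max(axis=1) >= snr_min
    if keep.sum() < min_rec:                  # fall back to the full array
        keep = np.ones(len(rec_xyz), dtype=bool)
    snr_k, rec_k = snr[keep], rec_xyz[keep]
    r = snr_k.shape[0]

    tt = np.linalg.norm(grid_xyz[:, None, :] - rec_k[None, :, :], axis=-1) / v
    dt = t[1] - t[0]
    best = np.full(len(grid_xyz), -np.inf)    # stacked likelihood per cell
    best_t0 = np.zeros(len(grid_xyz))
    for t0 in origin_times:
        # sample each receiver's trace at its predicted arrival time
        idx = np.clip(np.round((t0 + tt - t[0]) / dt).astype(int),
                      0, snr_k.shape[1] - 1)
        stack = snr_k[np.arange(r)[None, :], idx].sum(axis=1)
        upd = stack > best
        best[upd], best_t0[upd] = stack[upd], t0
    return best, best_t0
```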
Xu, Stanley; Hambidge, Simon J; McClure, David L; Daley, Matthew F; Glanz, Jason M
2013-08-30
In the examination of the association between vaccines and rare adverse events after vaccination in postlicensure observational studies, it is challenging to define appropriate risk windows because prelicensure RCTs provide little insight on the timing of specific adverse events. Past vaccine safety studies have often used prespecified risk windows based on prior publications, biological understanding of the vaccine, and expert opinion. Recently, a data-driven approach was developed to identify appropriate risk windows for vaccine safety studies that use the self-controlled case series design. This approach employs both the maximum incidence rate ratio and the linear relation between the estimated incidence rate ratio and the inverse of average person time at risk, given a specified risk window. In this paper, we present a scan statistic that can identify appropriate risk windows in vaccine safety studies using the self-controlled case series design while taking into account the dependence of time intervals within an individual and while adjusting for time-varying covariates such as age and seasonality. This approach uses the maximum likelihood ratio test based on fixed-effects models, which has been used for analyzing data from self-controlled case series design in addition to conditional Poisson models. Copyright © 2013 John Wiley & Sons, Ltd.
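As a toy illustration of the data-driven risk-window idea described above (a simplified sketch, not the paper's scan statistic: it ignores the within-person dependence, self-matching, and time-varying covariates such as age and seasonality that motivate the proposed method), one can scan candidate windows for the maximum incidence rate ratio:

def max_irr_window(event_days, followup_days, candidate_windows):
    """Toy scan: for each candidate risk window [0, w) after vaccination,
    compare the event rate inside the window with the rate outside it."""
    best = None
    for w in candidate_windows:
        inside = sum(1 for t in event_days if t < w)
        outside = len(event_days) - inside
        pt_in = sum(min(w, f) for f in followup_days)       # person-time inside
        pt_out = sum(max(f - w, 0) for f in followup_days)  # person-time outside
        if min(inside, outside, pt_in, pt_out) == 0:
            continue
        irr = (inside / pt_in) / (outside / pt_out)
        if best is None or irr > best[1]:
            best = (w, irr)
    return best

# hypothetical data: events clustered in the first week of a 42-day follow-up
events = [2, 3, 3, 5, 6, 20, 35]
print(max_irr_window(events, followup_days=[42] * 100, candidate_windows=range(3, 43, 7)))
# -> (10, 8.0): the 10-day window maximizes the incidence rate ratio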
Extending the Li&Ma method to include PSF information
NASA Astrophysics Data System (ADS)
Nievas-Rosillo, M.; Contreras, J. L.
2016-02-01
The so-called Li&Ma formula is still the most frequently used method for estimating the significance of observations carried out by Imaging Atmospheric Cherenkov Telescopes. In this work, a straightforward extension of the method for point sources that profits from the good imaging capabilities of current instruments is proposed. It is based on a likelihood ratio under the assumption of a well-known PSF and a smooth background. Its performance is tested with Monte Carlo simulations based on real observations, and its sensitivity is compared with that of standard methods, which do not incorporate PSF information. The gain in significance that can be attributed to the inclusion of the PSF is around 10% and can be boosted if a background model is assumed or a finer binning is used.
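For reference, the baseline being extended is the Li&Ma significance (their Eq. 17), the likelihood-ratio statistic for N_on counts in the on-source region and N_off counts in the off-source region with exposure ratio \alpha = t_on / t_off:

S = \sqrt{2} \left\{ N_{\mathrm{on}} \ln\!\left[ \frac{1+\alpha}{\alpha} \left( \frac{N_{\mathrm{on}}}{N_{\mathrm{on}}+N_{\mathrm{off}}} \right) \right] + N_{\mathrm{off}} \ln\!\left[ (1+\alpha) \left( \frac{N_{\mathrm{off}}}{N_{\mathrm{on}}+N_{\mathrm{off}}} \right) \right] \right\}^{1/2}

The proposed extension forms the analogous likelihood ratio under a known PSF and a smooth background, rather than treating the on-source counts as a single bin as the Li&Ma statistic does.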
Walsworth, Matthew K; Doukas, William C; Murphy, Kevin P; Mielcarek, Billie J; Michener, Lori A
2008-01-01
Glenoid labral tears present a diagnostic challenge. Combinations of items in the patient history and physical examination will provide stronger diagnostic accuracy for suggesting the presence or absence of a glenoid labral tear than will individual items. Cohort study (diagnosis); Level of evidence, 1. History and examination findings in patients with shoulder pain (N = 55) were compared with arthroscopic findings to determine diagnostic accuracy and intertester reliability. The intertester reliability of the crank, anterior slide, and active compression tests was 0.20 to 0.24. A combined history of popping or catching and positive crank or anterior slide results yielded specificities of 0.91 and 1.00 and positive likelihood ratios of 3.0 and infinity, respectively. A positive anterior slide result combined with either a positive active compression or crank result yielded specificities of 0.91 and positive likelihood ratios of 2.75 and 3.75, respectively. Requiring only a single positive finding in the combination of popping or catching and the anterior slide or crank yielded sensitivities of 0.82 and 0.89 and negative likelihood ratios of 0.31 and 0.33, respectively. The diagnostic accuracy of individual tests in previous studies is quite variable, which may be explained in part by the modest reliability of these tests. The combination of popping or catching with a positive crank or anterior slide result, or a positive anterior slide result with a positive active compression or crank test result, suggests the presence of a labral tear. The combined absence of popping or catching and a negative anterior slide or crank result suggests the absence of a labral tear.
Aragón-Sánchez, J; Lipsky, Benjamin A; Lázaro-Martínez, J L
2011-02-01
To investigate the accuracy of the sequential combination of the probe-to-bone test and plain X-rays for diagnosing osteomyelitis in the foot of patients with diabetes. We prospectively compiled data on a series of 338 patients with diabetes with 356 episodes of foot infection who were hospitalized in the Diabetic Foot Unit of La Paloma Hospital from 1 October 2002 to 30 April 2010. For each patient, we performed a probe-to-bone test at the time of the initial evaluation and then obtained plain X-rays of the involved foot. All patients with positive results on either the probe-to-bone test or plain X-ray underwent an appropriate surgical procedure, which included obtaining a bone specimen that was processed for histology and culture. We calculated the sensitivity, specificity, predictive values and likelihood ratios of the procedures, using the histopathological diagnosis of osteomyelitis as the criterion standard. Overall, 72.4% of patients had histologically proven osteomyelitis, 85.2% of whom had positive bone culture. The performance characteristics of both the probe-to-bone test and plain X-rays were excellent. The sequential diagnostic approach had a sensitivity of 0.97, specificity of 0.92, positive predictive value of 0.97, negative predictive value of 0.93, positive likelihood ratio of 12.8 and negative likelihood ratio of 0.02. Only 6.6% of patients with negative results on both diagnostic studies had osteomyelitis. Clinicians seeing patients in a setting similar to ours (specialized diabetic foot unit with a high prevalence of osteomyelitis) can confidently diagnose diabetic foot osteomyelitis when either the probe-to-bone test or a plain X-ray, or especially both, are positive. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.
Ablordeppey, Enyo A; Drewry, Anne M; Beyer, Alexander B; Theodoro, Daniel L; Fowler, Susan A; Fuller, Brian M; Carpenter, Christopher R
2017-04-01
We performed a systematic review and meta-analysis to examine the accuracy of bedside ultrasound for confirmation of central venous catheter position and exclusion of pneumothorax compared with chest radiography. PubMed, Embase, Cochrane Central Register of Controlled Trials, reference lists, conference proceedings and ClinicalTrials.gov were searched. Articles and abstracts describing the diagnostic accuracy of bedside ultrasound compared with chest radiography for confirmation of central venous catheters, in sufficient detail to reconstruct 2 × 2 contingency tables, were reviewed. Primary outcomes included the accuracy of confirming catheter positioning and detecting a pneumothorax. Secondary outcomes included feasibility, interrater reliability, and efficiency to complete bedside ultrasound confirmation of central venous catheter position. Investigators abstracted study details including research design and sonographic imaging technique to detect catheter malposition and procedure-related pneumothorax. Diagnostic accuracy measures included pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Fifteen studies with 1,553 central venous catheter placements were identified, with a pooled sensitivity and specificity of catheter malposition by ultrasound of 0.82 (0.77-0.86) and 0.98 (0.97-0.99), respectively. The pooled positive and negative likelihood ratios of catheter malposition by ultrasound were 31.12 (14.72-65.78) and 0.25 (0.13-0.47). The sensitivity and specificity of ultrasound for pneumothorax detection were nearly 100% in the included studies. Bedside ultrasound reduced mean central venous catheter confirmation time by 58.3 minutes. Risk of bias and clinical heterogeneity in the studies were high. Bedside ultrasound is faster than radiography at identifying pneumothorax after central venous catheter insertion, and when a catheter malposition exists, it will identify four out of every five malpositions, earlier than chest radiography would.
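For readers unfamiliar with the arithmetic, a minimal sketch (ours, not the review's; the actual meta-analytic pooling is more involved) of how likelihood ratios follow from sensitivity and specificity:

def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sensitivity / (1.0 - specificity), (1.0 - sensitivity) / specificity

lr_pos, lr_neg = likelihood_ratios(0.82, 0.98)
print(round(lr_pos, 1), round(lr_neg, 2))  # -> 41.0 0.18
# The reported pooled LRs (31.12 and 0.25) differ, likely because the
# meta-analysis pools the likelihood ratios across studies rather than
# deriving them from the pooled sensitivity and specificity.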
Simental-Mendía, Luis E; Simental-Mendía, Esteban; Rodríguez-Hernández, Heriberto; Rodríguez-Morán, Martha; Guerrero-Romero, Fernando
2016-01-01
Introduction and aim. Given that early identification of non-alcoholic fatty liver disease (NAFLD) is important for primary prevention of hepatic disease, the objectives of this study were to evaluate the efficacy of the product of triglyceride and glucose levels (TyG) for screening simple steatosis and non-alcoholic steatohepatitis (NASH) in asymptomatic women, and to compare its efficacy with that of other biomarkers for recognizing NAFLD. Asymptomatic women aged 20 to 65 years were enrolled in a cross-sectional study. The optimal values of TyG for screening simple steatosis and NASH were established on a receiver operating characteristic scatter plot; the sensitivity, specificity, and likelihood ratios of the TyG index were estimated against liver biopsy. According to sensitivity and specificity, the efficacy of TyG was compared with well-known clinical biomarkers for recognizing NAFLD. A total of 50 asymptomatic women were enrolled. The best cutoff point of TyG for screening simple steatosis was 4.58 (sensitivity 0.94, specificity 0.69); the best cutoff point of the TyG index for screening NASH was 4.59 (sensitivity 0.87, specificity 0.69). The positive and negative likelihood ratios were 3.03 and 0.08 for simple steatosis, and 2.80 and 0.18 for NASH. Compared with the SteatoTest, NashTest, Fatty Liver Index, and Algorithm, TyG proved to be the best screening test. TyG has high sensitivity and a low negative likelihood ratio; compared with other clinical biomarkers, it was the best test for screening simple steatosis and NASH.
A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation.
Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf
2017-07-01
This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayes' inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of the evidence for a trace specimen, e.g. a fingermark, and a reference specimen, e.g. a fingerprint, to originate from common or different sources. Some theoretical aspects of probabilities necessary for this Guideline were discussed prior to its elaboration, which started after a workshop of forensic researchers and practitioners involved in this topic. In the workshop, the following questions were addressed: "which aspects of a forensic evaluation scenario need to be validated?", "what is the role of the LR as part of a decision process?" and "how to deal with uncertainty in the LR calculation?". The question "what to validate?" focuses on the validation methods and criteria, and "how to validate?" deals with the implementation of the validation protocol. Answers to these questions were deemed necessary with several objectives. First, concepts typical of validation standards [1], such as performance characteristics, performance metrics and validation criteria, will be adapted or applied by analogy to the LR framework. Second, a validation strategy will be defined. Third, validation methods will be described. Finally, a validation protocol and an example of a validation report will be proposed, which can be applied in the forensic fields developing and validating LR methods for the evaluation of the strength of evidence at source level. Copyright © 2016. Published by Elsevier B.V.
Diagnostic Accuracy of the Slump Test for Identifying Neuropathic Pain in the Lower Limb.
Urban, Lawrence M; MacNeil, Brian J
2015-08-01
Diagnostic accuracy study with nonconsecutive enrollment. To assess the diagnostic accuracy of the slump test for neuropathic pain (NeP) in those with low to moderate levels of chronic low back pain (LBP), and to determine whether accuracy of the slump test improves by adding anatomical or qualitative pain descriptors. Neuropathic pain has been linked with poor outcomes, likely due to inadequate diagnosis, which precludes treatment specific for NeP. Current diagnostic approaches are time consuming or lack accuracy. A convenience sample of 21 individuals with LBP, with or without radiating leg pain, was recruited. A standardized neurosensory examination was used to determine the reference diagnosis for NeP. Afterward, the slump test was administered to all participants. Reports of pain location and quality produced during the slump test were recorded. The neurosensory examination designated 11 of the 21 participants with LBP/sciatica as having NeP. The slump test displayed high sensitivity (0.91), moderate specificity (0.70), a positive likelihood ratio of 3.03, and a negative likelihood ratio of 0.13. Adding the criterion of pain below the knee significantly increased specificity to 1.00 (positive likelihood ratio = 11.9). Pain-quality descriptors did not improve diagnostic accuracy. The slump test was highly sensitive in identifying NeP within the study sample. Adding a pain-location criterion improved specificity. Combining the diagnostic outcomes was very effective in identifying all those without NeP and half of those with NeP. Limitations arising from the small and narrow spectrum of participants with LBP/sciatica sampled within the study prevent application of the findings to a wider population. Diagnosis, level 4.
Evaluation of Smoking Prevention Television Messages Based on the Elaboration Likelihood Model
ERIC Educational Resources Information Center
Flynn, Brian S.; Worden, John K.; Bunn, Janice Yanushka; Connolly, Scott W.; Dorwaldt, Anne L.
2011-01-01
Progress in reducing youth smoking may depend on developing improved methods to communicate with higher risk youth. This study explored the potential of smoking prevention messages based on the Elaboration Likelihood Model (ELM) to address these needs. Structured evaluations of 12 smoking prevention messages based on three strategies derived from…
Anticipating cognitive effort: roles of perceived error-likelihood and time demands.
Dunn, Timothy L; Inzlicht, Michael; Risko, Evan F
2017-11-13
Why are some actions evaluated as effortful? In the present set of experiments, we address this question by examining individuals' perception of effort when faced with a trade-off between two putative cognitive costs: how much time a task takes vs. how error-prone it is. Specifically, we were interested in whether individuals anticipate engaging in a small amount of hard work (i.e., low time requirement but high error-likelihood) vs. a large amount of easy work (i.e., high time requirement but low error-likelihood) as being more effortful. In between-subject designs, Experiments 1 through 3 demonstrated that individuals anticipate options that are high in perceived error-likelihood (yet less time consuming) as more effortful than options that are perceived to be more time consuming (yet low in error-likelihood). Further, when asked to evaluate which of two tasks was (a) more effortful, (b) more error-prone, and (c) more time consuming, effort-based and error-based choices closely tracked one another, but this was not the case for time-based choices. Utilizing a within-subject design, Experiment 4 demonstrated an overall pattern of judgments similar to that of Experiments 1 through 3; however, judgments of error-likelihood and time demand predicted effort judgments to a similar degree. Results are discussed in the context of extant accounts of cognitive control, with consideration of how error-likelihood and time demands may independently and conjunctively factor into judgments of cognitive effort.
Effect of a laboratory result pager on provider behavior in a neonatal intensive care unit.
Samal, L; Stavroudis, Ta; Miller, Re; Lehmann, Hp; Lehmann, Cu
2011-01-01
A computerized laboratory result paging system (LRPS) that alerts providers about abnormal results ("push") may improve upon active laboratory result review ("pull"). However, implementing such a system in the intensive care setting may be hindered by a low signal-to-noise ratio, which may lead to alert fatigue. To evaluate the impact of an LRPS in a Neonatal Intensive Care Unit, we tallied, through paper chart review, provider orders following an abnormal laboratory result before and after implementation of an LRPS. Orders were compared with a predefined set of appropriate orders for such an abnormal result. The likelihood of a provider response in the post-implementation period, as compared with the pre-implementation period, was analyzed using logistic regression to control for potential confounders. The likelihood of a provider response to an abnormal laboratory result did not change significantly after implementation of an LRPS (odds ratio 0.90; 95% CI 0.63-1.30; p-value 0.58). However, when providers did respond to an alert, the type of response was different: the proportion of repeat laboratory tests increased (26/378 vs. 7/278, p-value = 0.02). Although the laboratory result pager altered healthcare provider behavior in the Neonatal Intensive Care Unit, it did not increase the overall likelihood of provider response.
Mundell, Benjamin F; Kremers, Hilal Maradit; Visscher, Sue; Hoppe, Kurtis M; Kaufman, Kenton R
2016-08-01
Prior studies have identified age as a factor in determining an individual's likelihood of receiving a prosthesis following a lower limb amputation. These studies are limited to specific subsets of the general population and are unable to account for preamputation characteristics within their study populations. Our study seeks to determine the effect of preamputation characteristics on the probability of receiving a prosthesis for the general population in the United States. To identify preamputation characteristics that predict the likelihood of receiving a prosthesis following an above-knee amputation. A retrospective, population-based cohort study. Olmsted County, Minnesota (2010 population: 144,248). Individuals (n = 93) over the age of 18 years who underwent an above-knee amputation, that is, knee disarticulation or transfemoral amputation, while residing in Olmsted County, MN, between 1987 and 2013. Characteristics affecting the receipt of a prosthesis were analyzed using logistic regression and a random forest algorithm for classification trees. Preamputation characteristics included age, gender, amputation etiology, year of amputation, mobility, cognitive ability, comorbidities, and time between surgery and the prosthesis decision. The association of preamputation characteristics with the receipt of a prosthesis following an above-knee amputation. Twenty-four of the participants received a prosthesis. The odds of receiving a prosthesis were almost 30 times higher in those able to walk independently prior to the amputation than in those who could not walk independently. A 10-year increase in age was associated with a 53.8% decrease in the odds of being fit for a prosthesis (odds ratio = 0.462, P =.030). In the random forest algorithm, time elapsed between surgery and the prosthesis decision was associated with a rising probability of receiving a prosthesis during the first 3 months. No other observed characteristics were associated with receipt of a prosthesis. The association of preamputation mobility and age with the likelihood of being fit for a prosthesis is well understood; the effect of age, after controlling for confounders, still persists. Copyright © 2016 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Phase History Decomposition for efficient Scatterer Classification in SAR Imagery
2011-09-15
[Only fragments of this report were recovered: an acknowledgment of advice on frequency parameter estimation and on the relationship between likelihood ratio testing and least-squares methods, plus table-of-contents and acronym-list entries (e.g., "MF: matched filter").]
NASA Technical Reports Server (NTRS)
Bueno, R.; Chow, E.; Gershwin, S. B.; Willsky, A. S.
1975-01-01
Research on the problems of failure detection and reliable system design for digital aircraft control systems is reported. Failure modes, cross-detection probability, wrong-time detection, application of performance tools, and the GLR computer package are discussed.
Capturing and Displaying Uncertainty in the Common Tactical/Environmental Picture
2003-09-30
[Report fragments:] …multistatic active detection, and incorporated this characterization into a Bayesian track-before-detect system called the Likelihood Ratio Tracker (LRT) … prediction uncertainty in a track-before-detect system for multistatic active sonar. The approach has worked well on limited simulation data.
Effects of Methamphetamine on Vigilance and Tracking during Extended Wakefulness.
1993-09-01
[Report fragments:] …the log likelihood ratio (Green & Swets, 1966; Macmillan & Creelman, 1990) was also derived from hit and false-alarm probabilities…
[Likelihood] Ratio Test Statistic for Sphericity of Complex Multivariate Normal Distribution
Fang, C.; Krishnaiah, P. R.; Nagarsenker, B. N.
1981-08-01
[Report fragments:] …and their applications in time series, the reader is referred to Krishnaiah (1976). … Krishnaiah, Lee and Chang (1976) approximated the null distribution of a certain power of the likelihood ratio statistic…
Localizing multiple X chromosome-linked retinitis pigmentosa loci using multilocus homogeneity tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ott, J.; Terwilliger, J.D.; Bhattacharya, S.
1990-01-01
Multilocus linkage analysis of 62 family pedigrees with X chromosome-linked retinitis pigmentosa (XLRP) was undertaken to determine the presence of possible multiple disease loci and to reliably estimate their map locations. Multilocus homogeneity tests furnish convincing evidence for the presence of two XLRP loci, the likelihood ratio being 6.4 × 10⁹:1 in favor of two versus a single XLRP locus, and gave accurate estimates of their map locations. In 60-75% of the families, the location of an XLRP gene was estimated at 1 centimorgan distal to OTC, and in 25-40% of the families, an XLRP locus was located halfway between DXS14 (p58-1) and DXZ1 (Xcen), with an estimated recombination fraction of 25% between the two XLRP loci. There is also good evidence for a third XLRP locus, midway between DXS28 (C7) and DXS164 (pERT87), supported by a likelihood ratio of 293:1 for three versus two XLRP loci.
NASA Astrophysics Data System (ADS)
Coelho, Carlos A.; Marques, Filipe J.
2013-09-01
In this paper, the authors combine the equicorrelation and equivariance test introduced by Wilks [13] with the likelihood ratio test (l.r.t.) for independence of groups of variables to obtain the l.r.t. of block equicorrelation and equivariance. This test, or its single-block version, may find applications in many areas, such as psychology, education, medicine and genetics, and is important "in many tests of multivariate analysis, e.g. in MANOVA, Profile Analysis, Growth Curve analysis, etc" [12, 9]. By decomposing the overall hypothesis into the hypothesis of independence of groups of variables and the hypothesis of equicorrelation and equivariance, we are able to obtain expressions for the overall l.r.t. statistic and its moments. From these we obtain a suitable factorization of the characteristic function (c.f.) of the logarithm of the l.r.t. statistic, which enables us to develop highly manageable and precise near-exact distributions for the test statistic.
Youngstrom, Eric A
2014-03-01
To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses.
Avoiding overstating the strength of forensic evidence: Shrunk likelihood ratios/Bayes factors.
Morrison, Geoffrey Stewart; Poh, Norman
2018-05-01
When strength of forensic evidence is quantified using sample data and statistical models, a concern may be raised as to whether the output of a model overestimates the strength of evidence. This is particularly the case when the amount of sample data is small, and hence sampling variability is high. This concern is related to concern about precision. This paper describes, explores, and tests three procedures which shrink the value of the likelihood ratio or Bayes factor toward the neutral value of one. The procedures are: (1) a Bayesian procedure with uninformative priors, (2) use of empirical lower and upper bounds (ELUB), and (3) a novel form of regularized logistic regression. As a benchmark, they are compared with linear discriminant analysis, and in some instances with non-regularized logistic regression. The behaviours of the procedures are explored using Monte Carlo simulated data, and tested on real data from comparisons of voice recordings, face images, and glass fragments. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
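As a toy illustration of the shared intuition (ours, not one of the paper's three procedures; the weight n/(n + tau) and the constant tau are invented for illustration), shrinkage can be expressed as down-weighting the log-LR toward zero, i.e., the LR toward one:

import math

def shrink_log_lr(lr, n, tau=10.0):
    """Down-weight the log-LR by n / (n + tau): with little sample data the
    reported LR is pulled toward the neutral value of one."""
    w = n / (n + tau)
    return math.exp(w * math.log(lr))

print(shrink_log_lr(100.0, n=5))    # ~4.6: heavy shrinkage with little data
print(shrink_log_lr(100.0, n=500))  # ~91.4: little shrinkage with ample data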
Micheyl, Christophe; Dai, Huanping
2010-01-01
The equal-variance Gaussian signal-detection-theory (SDT) decision model for the dual-pair change-detection (or “4IAX”) paradigm has been described in earlier publications. In this note, we consider the equal-variance Gaussian SDT model for the related dual-pair AB vs BA identification paradigm. The likelihood ratios, optimal decision rules, receiver operating characteristics (ROCs), and relationships between d' and proportion-correct (PC) are analyzed for two special cases: that of statistically independent observations, which is likely to apply in constant-stimuli experiments, and that of highly correlated observations, which is likely to apply in experiments where stimuli are roved widely across trials or pairs. A surprising outcome of this analysis is that although these two situations lead to different optimal decision rules, the predicted ROCs and proportions of correct responses (PCs) for these two cases are not substantially different, and are either identical or similar to those observed in the basic Yes-No paradigm. PMID:19633356
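For orientation, the textbook equal-variance Gaussian relations for the basic paradigms against which the dual-pair predictions are compared (standard results, not derived in the note) are

PC_{\mathrm{YN}} = \Phi\!\left(\frac{d'}{2}\right) \quad \text{(unbiased Yes-No)}, \qquad PC_{\mathrm{2AFC}} = \Phi\!\left(\frac{d'}{\sqrt{2}}\right),

where \Phi is the standard normal cumulative distribution function.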
Sousa, Carlos Augusto Moreira de; Bahia, Camila Alves; Constantino, Patrícia
2016-12-01
Brazil has the sixth-largest bicycle fleet in the world, and the bicycle is the most used individual transport vehicle in the country. Few studies address cyclists' accidents and the factors that contribute to or prevent them. VIVA is a cross-sectional survey and is part of the Violence and Accidents Surveillance System of the Brazilian Ministry of Health. We used complex sampling, with subsequent analysis by multivariate logistic regression and calculation of the respective odds ratios. Odds ratios showed a greater likelihood of cyclists' accidents among males, people with less schooling, and residents of urban and periurban areas. People who were not using the bike to commute to work were more likely to suffer an accident. The profile found in this study corroborates the findings of other studies, which indicate that the coexistence of cyclists and other means of transportation in the same urban space increases the likelihood of accidents. The construction of bicycle-exclusive spaces, together with educational campaigns, is required.
The Hypothesis-Driven Physical Examination.
Garibaldi, Brian T; Olson, Andrew P J
2018-05-01
The physical examination remains a vital part of the clinical encounter. However, physical examination skills have declined in recent years, in part because of decreased time at the bedside. Many clinicians question the relevance of physical examinations in the age of technology. A hypothesis-driven approach to teaching and practicing the physical examination emphasizes the performance of maneuvers that can alter the likelihood of disease. Likelihood ratios are diagnostic weights that allow clinicians to estimate the posttest probability of disease. This hypothesis-driven approach to the physical examination increases its value and efficiency while preserving its cultural role in the patient-physician relationship. Copyright © 2017 Elsevier Inc. All rights reserved.
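A minimal sketch (ours, not from the article) of the posttest-probability update that likelihood ratios enable, which a Fagan nomogram performs graphically:

def posttest_probability(pretest_prob, lr):
    """Bayes' rule in odds form: posttest odds = pretest odds x LR."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1.0 + posttest_odds)

# a finding with LR = 7.4 raises a 30% pretest probability to about 76%
print(round(posttest_probability(0.30, 7.4), 2))  # -> 0.76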
A Comparison of a Bayesian and a Maximum Likelihood Tailored Testing Procedure.
ERIC Educational Resources Information Center
McKinley, Robert L.; Reckase, Mark D.
A study was conducted to compare tailored testing procedures based on a Bayesian ability estimation technique and on a maximum likelihood ability estimation technique. The Bayesian tailored testing procedure selected items so as to minimize the posterior variance of the ability estimate distribution, while the maximum likelihood tailored testing…
An ERTS-1 investigation for Lake Ontario and its basin
NASA Technical Reports Server (NTRS)
Polcyn, F. C.; Falconer, A. (Principal Investigator); Wagner, T. W.; Rebel, D. L.
1975-01-01
The author has identified the following significant results. Methods of manual, semi-automatic, and automatic (computer) data processing were evaluated, as were the requirements for spatial physiographic and limnological information. The coupling of specially processed ERTS data with simulation models of the watershed precipitation/runoff process provides potential for water resources management. Optimal and full use of the data requires a mix of data processing and analysis techniques, including single band editing, two band ratios, and multiband combinations. A combination of maximum likelihood ratio and near-IR/red band ratio processing was found to be particularly useful.
A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.
Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying
2018-06-13
The most widely used localization technology is the two-step method that localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that directly localizes transmitters from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimations are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and filter sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to the performance of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.
Likelihood-based modification of experimental crystal structure electron density maps
Terwilliger, Thomas C [Santa Fe, NM]
2005-04-16
A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if the structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize its likelihood for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.
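In schematic form (our notation, paraphrasing the two terms described above), the quantity being maximized is

L(\{F_h\}) = L_{\mathrm{obs}}\left(\{F_h^{\mathrm{OBS}}\} \mid \{F_h\}\right) \times L_{\mathrm{map}}\left(\rho(\{F_h\})\right), \qquad \{\hat{F}_h\} = \arg\max_{\{F_h\}} L(\{F_h\}),

where \rho(\{F_h\}) is the electron density map computed from the trial structure factors.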
Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.
Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping
2015-06-07
Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology trials, to reduce the number of patients placed on ineffective experimental therapies. Recently, Koyama and Chen (2008) discussed how to conduct proper inference for such studies, because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies when the actual second-stage sample sizes differ from the planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihoods. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the resulting estimate based on our method appears to have certain advantages in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverage. We also illustrate the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan; reported p-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
Bechard, Lori J; Duggan, Christopher; Touger-Decker, Riva; Parrott, J Scott; Rothpletz-Puglia, Pamela; Byham-Gray, Laura; Heyland, Daren; Mehta, Nilesh M
2016-08-01
To determine the influence of admission anthropometry on clinical outcomes in mechanically ventilated children in the PICU. Data from two multicenter cohort studies were compiled to examine the unique contribution of nutritional status, defined by body mass index z score, to 60-day mortality, hospital-acquired infections, length of hospital stay, and ventilator-free days, using multivariate analysis. Ninety PICUs from 16 countries with eight or more beds. Children aged 1 month to 18 years, admitted to each participating PICU and requiring mechanical ventilation for more than 48 hours. Data from 1,622 eligible patients, 54.8% men and mean (SD) age 4.5 years (5.1), were analyzed. Subjects were classified as underweight (17.9%), normal weight (54.2%), overweight (14.5%), and obese (13.4%) based on body mass index z score at admission. After adjusting for severity of illness and site, the odds of 60-day mortality were higher in underweight (odds ratio, 1.53; p < 0.001) children. The odds of hospital-acquired infections were higher in underweight (odds ratio, 1.88; p = 0.008) and obese (odds ratio, 1.64; p < 0.001) children. Hazard ratios for hospital discharge were lower among underweight (hazard ratio, 0.71; p < 0.001) and obese (hazard ratio, 0.82; p = 0.04) children. Underweight was associated with 1.3 (p = 0.001) and 1.6 (p < 0.001) fewer ventilator-free days than normal weight and overweight, respectively. Malnutrition is prevalent in mechanically ventilated children on admission to PICUs worldwide. Classification as underweight or obese was associated with higher risk of hospital-acquired infections and lower likelihood of hospital discharge. Underweight children had a higher risk of mortality and fewer ventilator-free days.
Dias-Silva, Diogo; Pimentel-Nunes, Pedro; Magalhães, Joana; Magalhães, Ricardo; Veloso, Nuno; Ferreira, Carlos; Figueiredo, Pedro; Moutinho, Pedro; Dinis-Ribeiro, Mário
2014-06-01
A simplified narrow-band imaging (NBI) endoscopy classification of gastric precancerous and cancerous lesions was derived and validated in a multicenter study. This classification comes with the need for dissemination through adequate training. To address the learning curve of this classification by endoscopists with differing expertise and to assess the feasibility of a YouTube-based learning program to disseminate it. Prospective study. Five centers. Six gastroenterologists (3 trainees, 3 fully trained endoscopists [FTs]). Participants took 20 tests, provided through a Web-based program, each containing 10 randomly ordered NBI videos of gastric mucosa. Feedback was sent 7 days after every test submission. Measures of the accuracy of the NBI classification over time. From the first to the last 50 videos, a learning curve was observed, with a 10% increase in global accuracy for both trainees (from 64% to 74%) and FTs (from 56% to 65%). After 200 videos, sensitivity and specificity of 80% and higher for intestinal metaplasia were observed in half the participants, and a specificity for dysplasia greater than 95%, along with a relevant likelihood ratio for a positive result of 7 to 28 and a likelihood ratio for a negative result of 0.21 to 0.82, were achieved by all of the participants. No consistent learning curve was observed for the identification of Helicobacter pylori gastritis or for sensitivity to dysplasia. The trainees had better results on all of the parameters, except specificity for dysplasia, compared with the FTs. Globally, participants agreed that the program's structure was adequate, except for the feedback, which should have consisted of a more detailed explanation of each answer. No formal sample size estimate. A Web-based learning program can be used to teach and disseminate classifications in the endoscopy field. In this study, an NBI classification of gastric mucosal features appears to be easily learned for the identification of gastric preneoplastic lesions. Copyright © 2014 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
Kim, Hyun Suk; Choi, Hong Yeop; Lee, Gyemin; Ye, Sung-Joon; Smith, Martin B; Kim, Geehyun
2018-03-01
The aim of this work is to develop a gamma-ray/neutron dual-particle imager, based on rotational modulation collimators (RMCs) and pulse shape discrimination (PSD)-capable scintillators, for possible applications in radioactivity monitoring as well as nuclear security and safeguards. A Monte Carlo simulation study was performed to design an RMC system for dual-particle imaging, and modulation patterns were obtained for gamma-ray and neutron sources in various configurations. We applied an image reconstruction algorithm utilizing the maximum-likelihood expectation-maximization method, based on analytical modeling of source-detector configurations, to the Monte Carlo simulation results. Both gamma-ray and neutron source distributions were reconstructed and evaluated in terms of signal-to-noise ratio, showing the viability of developing an RMC-based gamma-ray/neutron dual-particle imager using PSD-capable scintillators.
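For context, maximum-likelihood expectation-maximization reconstructions of this kind typically iterate the standard emission-tomography ML-EM update (generic notation, ours, not taken from the paper):

\lambda_j^{(k+1)} = \frac{\lambda_j^{(k)}}{\sum_i a_{ij}} \sum_i a_{ij} \frac{y_i}{\sum_{j'} a_{ij'} \lambda_{j'}^{(k)}},

where y_i are the measured modulation counts, a_{ij} is the analytically modeled response of measurement bin i to source pixel j, and \lambda_j is the source-intensity image.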
NASA Technical Reports Server (NTRS)
Tranter, W. H.; Turner, M. D.
1977-01-01
Techniques are developed to estimate power gain, delay, signal-to-noise ratio, and mean square error in digital computer simulations of lowpass and bandpass systems. The techniques are applied to analog and digital communications. The signal-to-noise ratio estimates are shown to be maximum likelihood estimates in additive white Gaussian noise. The methods are seen to be especially useful for digital communication systems where the mapping from the signal-to-noise ratio to the error probability can be obtained. Simulation results show the techniques developed to be accurate and quite versatile in evaluating the performance of many systems through digital computer simulation.
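A minimal numpy sketch in the spirit of these techniques (ours, not the paper's implementation; it assumes the system output lags the reference by a nonnegative integer delay):

import numpy as np

def estimate_gain_delay_snr(x, y):
    """Estimate delay from the cross-correlation peak, gain by least squares
    on the aligned records, and SNR as fitted-signal power over residual power."""
    c = np.correlate(y, x, mode="full")
    d = int(np.argmax(c)) - (len(x) - 1)        # model: y[n] ~ g * x[n - d]
    xs, ys = x[: len(x) - d], y[d:]             # align records (assumes d >= 0)
    g = float(np.dot(xs, ys) / np.dot(xs, xs))  # least-squares gain
    resid = ys - g * xs
    return g, d, float(np.sum((g * xs) ** 2) / np.sum(resid ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = np.concatenate([np.zeros(5), 2.0 * x])[:4096] + 0.1 * rng.standard_normal(4096)
print(estimate_gain_delay_snr(x, y))  # gain ~2.0, delay 5, SNR ~400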
Estimating Function Approaches for Spatial Point Processes
NASA Astrophysics Data System (ADS)
Deng, Chong
Spatial point pattern data consist of locations of events that are often of interest in biological and ecological studies. Such data are commonly viewed as a realization of a stochastic process called a spatial point process. To fit a parametric spatial point process model to such data, likelihood-based methods have been widely studied. However, while maximum likelihood estimation is often too computationally intensive for Cox and cluster processes, pairwise likelihood methods such as composite likelihood and Palm likelihood usually suffer from a loss of information due to ignoring the correlation among pairs. For many types of correlated data other than spatial point processes, when likelihood-based approaches are not desirable, estimating functions have been widely used for model fitting. In this dissertation, we explore estimating function approaches for fitting spatial point process models. These approaches, which are based on asymptotically optimal estimating function theory, can be used to incorporate the correlation among data and yield more efficient estimators. We conducted a series of studies to demonstrate that these estimating function approaches are good alternatives for balancing the trade-off between computational complexity and estimation efficiency. First, we propose a new estimating procedure that improves the efficiency of the pairwise composite likelihood method in estimating clustering parameters. Our approach combines estimating functions derived from pairwise composite likelihood estimation with estimating functions that account for correlations among the pairwise contributions. Our method can be used to fit a variety of parametric spatial point process models and can yield more efficient estimators for the clustering parameters than pairwise composite likelihood estimation. We demonstrate its efficacy through a simulation study and an application to the longleaf pine data. Second, we further explore the quasi-likelihood approach to fitting the second-order intensity function of spatial point processes. The original second-order quasi-likelihood is barely feasible due to the intense computation and high memory requirement needed to solve a large linear system. Motivated by the existence of geometric regular patterns in stationary point processes, we find a lower-dimensional representation of the optimal weight function and propose a reduced second-order quasi-likelihood approach. Through a simulation study, we show that the proposed method not only demonstrates superior performance in fitting the clustering parameter but also benefits from a relaxed constraint on the tuning parameter H. Third, we study the quasi-likelihood type estimating function that is optimal in a certain class of first-order estimating functions for estimating the regression parameter in spatial point process models. By using a novel spectral representation, we construct an implementation that is computationally much more efficient and can be applied to more general setups than the original quasi-likelihood method.
Manabe, Sho; Morimoto, Chie; Hamano, Yuya; Fujimoto, Shuntaro; Tamaki, Keiji
2017-01-01
In criminal investigations, forensic scientists need to evaluate DNA mixtures. The estimation of the number of contributors and evaluation of the contribution of a person of interest (POI) from these samples are challenging. In this study, we developed a new open-source software, "Kongoh," for interpreting DNA mixtures based on a quantitative continuous model. The model uses quantitative information from peak heights in the DNA profile and considers the effects of artifacts and allelic drop-out. Using this software, the likelihoods of 1-4 persons' contributions are calculated, and the optimal number of contributors is automatically determined; this differs from other open-source software. Therefore, we can eliminate the need to manually determine the number of contributors before the analysis. Kongoh also considers allele- or locus-specific effects of biological parameters based on experimental data. We then validated Kongoh by calculating the likelihood ratio (LR) of a POI's contribution for true contributors and non-contributors, using 2-4 person mixtures analyzed through a 15 short tandem repeat typing system. Most LR values obtained from Kongoh during true-contributor testing strongly supported the POI's contribution, even for small amounts of DNA or degraded samples. Kongoh correctly rejected a false hypothesis in the non-contributor testing, generated reproducible LR values, and demonstrated higher accuracy in estimating the number of contributors than other software based on the quantitative continuous model. Therefore, Kongoh is useful for accurately interpreting DNA evidence such as mixtures and small amounts of or degraded DNA.
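Schematically, the reported likelihood ratio is the standard forensic comparison of the same evidence under two competing hypotheses,

LR = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)},

where E is the observed profile (peak heights), H_p states that the POI plus unknowns contributed, and H_d that unknowns alone contributed, with the peak-height model accounting for artifacts and allelic drop-out as described above.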
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2003-01-01
Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
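A minimal sketch of the general recipe (ours, far simpler than FORMIND: a hypothetical one-parameter stochastic simulator, a normal likelihood approximation fitted to its output, and a Metropolis sampler with a flat prior):

import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, n_rep=200):
    """Hypothetical stand-in for a stochastic process model: one summary
    statistic per replicate run."""
    return rng.normal(theta, 1.0, size=n_rep)

def approx_loglik(theta, observed):
    """Parametric likelihood approximation: fit a normal to the simulated
    summaries and evaluate the observed summary under it."""
    sims = simulator(theta)
    mu, sd = sims.mean(), sims.std(ddof=1)
    return -0.5 * ((observed - mu) / sd) ** 2 - np.log(sd) - 0.5 * np.log(2.0 * np.pi)

def metropolis(observed, n_iter=2000, step=0.5):
    theta, ll = 0.0, approx_loglik(0.0, observed)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        ll_prop = approx_loglik(prop, observed)
        if np.log(rng.uniform()) < ll_prop - ll:  # flat prior cancels in the ratio
            theta, ll = prop, ll_prop
        chain.append(theta)
    return np.array(chain)

print(metropolis(observed=1.7)[500:].mean())  # posterior mean near 1.7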
Diagnostic Capability of Spectral Domain Optical Coherence Tomography for Glaucoma
Wu, Huijuan; de Boer, Johannes F.; Chen, Teresa C.
2012-01-01
Purpose: To determine the diagnostic capability of spectral domain optical coherence tomography (OCT) in glaucoma patients with visual field (VF) defects. Design: Prospective, cross-sectional study. Methods: Setting: Participants were recruited from a university hospital clinic. Study Population: One eye each of 85 normal subjects and 61 glaucoma patients [with average VF mean deviation (MD) of -9.61 ± 8.76 dB] was randomly selected for the study. A subgroup of the glaucoma patients with early VF defects was analyzed separately. Observation Procedures: Spectralis OCT circular scans were performed to obtain peripapillary retinal nerve fiber layer (RNFL) thicknesses. The RNFL diagnostic parameters based on the normative database were used alone or in combination for identifying glaucomatous RNFL thinning. Main Outcome Measures: To evaluate diagnostic performance, calculations included areas under the receiver operating characteristic curve (AROC), sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio. Results: Overall RNFL thickness had the highest AROC value (0.952 for all patients, 0.895 for the early glaucoma subgroup). For all patients, the highest sensitivity (98.4%, CI 96.3-100%) was achieved by using two criteria: ≥1 RNFL sector being abnormal at the <5% level, and an overall classification of borderline or outside normal limits, with specificities of 88.9% (CI 84.0-94.0%) and 87.1% (CI 81.6-92.5%), respectively, for these two criteria. Conclusions: Statistical parameters for evaluating the diagnostic performance of the Spectralis spectral domain OCT were good for early perimetric glaucoma and excellent for moderately advanced perimetric glaucoma. PMID:22265147
Divorce or end of cohabitation among Danish women evaluated for fertility problems.
Kjaer, Trille; Albieri, Vanna; Jensen, Allan; Kjaer, Susanne K; Johansen, Christoffer; Dalton, Susanne O
2014-03-01
Couples with fertility problems may experience marital or sexual distress, which could potentially result in dissolved relationships. We investigated the likelihood of ending a relationship among women who did not have a child after a fertility evaluation. Longitudinal cohort study. Danish women ever referred for primary or secondary fertility problems to a public Danish hospital or private fertility clinic between 1990 and 2006. A total of 47,515 women. The data were linked to Danish administrative population-based registries containing demographic and socioeconomic information. Discrete-time survival models were used with person-period data. Each woman was followed from the year of her initial fertility evaluation through to 2007. Effects of parity after a fertility evaluation on the likelihood of ending a marital or cohabitation relationship. After up to 12 years of follow-up, nearly 27% of the women were no longer living with the person with whom they had lived at the time of the fertility evaluation. Women who did not have a child after the evaluation had significantly higher odds ratios for ending a relationship up to 12 years after the evaluation (with odds ratios up to 3.13, 95% CI 2.88-3.41) than women who had a child, regardless of their parity before the evaluation. Parity after a fertility evaluation may be an important component in the longitudinal relationships of couples with fertility problems. Studies with detailed information on marital quality and relational well-being of couples with fertility problems are needed. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.
Case finding of lifestyle and mental health disorders in primary care: validation of the ‘CHAT’ tool
Goodyear-Smith, Felicity; Coupe, Nicole M; Arroll, Bruce; Elley, C Raina; Sullivan, Sean; McGill, Anne-Thea
2008-01-01
Background: Primary care is accessible and ideally placed for case finding of patients with lifestyle and mental health risk factors and subsequent intervention. The short self-administered Case-finding and Help Assessment Tool (CHAT) was developed for lifestyle and mental health assessment of adult patients in primary health care. This tool checks for tobacco use, alcohol and other drug misuse, problem gambling, depression, anxiety and stress, abuse, anger problems, inactivity, and eating disorders. It is well accepted by patients, GPs and nurses. Aim: To assess criterion-based validity of CHAT against a composite gold standard. Design of study: Conducted according to the Standards for Reporting of Diagnostic Accuracy statement for diagnostic tests. Setting: Primary care practices in Auckland, New Zealand. Method: One thousand consecutive adult patients completed CHAT and a composite gold standard. Sensitivities, specificities, positive and negative predictive values, and likelihood ratios were calculated. Results: Response rates for each item ranged from 79.6% to 99.8%. CHAT was sensitive and specific for almost all issues screened, except exercise and eating disorders. Sensitivity ranged from 96% (95% confidence interval [CI] = 87 to 99%) for major depression to 26% (95% CI = 22 to 30%) for exercise. Specificity ranged from 97% (95% CI = 96 to 98%) for problem gambling and problem drug use to 40% (95% CI = 36 to 45%) for exercise. All had high likelihood ratios (3-30), except exercise and eating disorders. Conclusion: CHAT is a valid and acceptable case-finding tool for most common lifestyle and mental health conditions. PMID:18186993
Cui, Jiangyu; Zhou, Yumin; Tian, Jia; Wang, Xinwang; Zheng, Jingping; Zhong, Nanshan; Ran, Pixin
2012-12-01
COPD is often underdiagnosed in primary care settings where spirometry is unavailable. This study aimed to develop a simple, economical and applicable model for COPD screening in those settings. First, we established a discriminant function model based on Bayes' rule by stepwise discriminant analysis, using data from 243 COPD patients and 112 non-COPD subjects from our COPD survey in urban and rural communities and local primary care settings in Guangdong Province, China. We then used this model to discriminate COPD in an additional 150 subjects (50 non-COPD and 100 COPD) recruited by the same methods as those used to establish the model. All participants completed pre- and post-bronchodilator spirometry and questionnaires. COPD was diagnosed according to the Global Initiative for Chronic Obstructive Lung Disease criteria. The sensitivity and specificity of the discriminant function model were assessed. The established discriminant function model included nine variables: age, gender, smoking index, body mass index, occupational exposure, living environment, wheezing, cough and dyspnoea. The sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, accuracy and error rate of the function model in discriminating COPD were 89.00%, 82.00%, 4.94, 0.13, 86.66% and 13.34%, respectively. The accuracy and kappa value of the function model in predicting COPD stage were 70% and 0.61 (95% CI, 0.50 to 0.71), respectively. This discriminant function model may be used for COPD screening in primary care settings in China as an alternative to spirometry.
Mosquera, Victor X; Marini, Milagros; Muñiz, Javier; Asorey-Veiga, Vanesa; Adrio-Nazar, Belen; Boix, Ricardo; Lopez-Perez, José M; Pradas-Montilla, Gonzalo; Cuenca, José J
2012-09-01
To develop a risk score based on physical examination and chest X-ray findings to rapidly identify major trauma patients at risk of acute traumatic aortic injury (ATAI). A multicenter retrospective study was conducted with 640 major trauma patients with associated blunt chest trauma, classified into ATAI (aortic injury) and NATAI (no aortic injury) groups. The score data set included 76 consecutive ATAI and 304 NATAI patients from a single center, whereas the validation data set included 52 consecutive ATAI and 208 NATAI patients from three independent institutions. Bivariate analysis identified variables potentially influencing the presentation of aortic injury. Variables confirmed by logistic regression were assigned a score according to their corresponding beta coefficient, rounded to the closest integer value (1-4). Predictors of aortic injury included widened mediastinum, hypotension less than 90 mmHg, long bone fracture, pulmonary contusion, left scapula fracture, hemothorax, and pelvic fracture. The area under the receiver operating characteristic curve was 0.96. In the score data set, sensitivity was 93.42%, specificity 85.85%, Youden's index 0.79, positive likelihood ratio 6.60, and negative likelihood ratio 0.08. In the validation data set, sensitivity was 92.31% and specificity 85.1%. Given the relative infrequency of traumatic aortic injury, which often leads to missed or delayed diagnosis, application of our score has the potential to draw necessary clinical attention to the possibility of aortic injury, thus providing the chance of prompt, specific diagnostic and therapeutic management.
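The scoring construction described above, integer points derived from logistic regression beta coefficients, can be sketched as follows; the beta values are hypothetical, and only the predictor names come from the abstract.

```python
# Each predictor confirmed by logistic regression gets an integer point value
# from its beta coefficient, clamped to the 1-4 range used in the study.
# Beta values below are hypothetical placeholders.
betas = {
    "widened_mediastinum": 2.7, "hypotension_lt_90": 1.4,
    "long_bone_fracture": 0.9, "pulmonary_contusion": 1.1,
    "left_scapula_fracture": 2.2, "hemothorax": 1.8, "pelvic_fracture": 0.8,
}
points = {k: max(1, min(4, round(b))) for k, b in betas.items()}

def atai_score(findings):
    """Sum the point values of the findings present in a given patient."""
    return sum(points[f] for f in findings)

print(atai_score({"widened_mediastinum", "hemothorax"}))
```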
Direct bound on the total decay width of the top quark in pp̄ collisions at √s = 1.96 TeV.
Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; 
Lazzizzera, I; Lecompte, T; Lee, E; Lee, H S; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, A; Yamamoto, K; Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S
2009-01-30
We present the first direct experimental bound on the total decay width of the top quark, Γ_t, using 955 pb⁻¹ of the Tevatron's pp̄ collisions recorded by the Collider Detector at Fermilab. We identify 253 top-antitop pair candidate events. The distribution of reconstructed top quark mass from these events is fitted to templates representing different values of the top quark width. Using a confidence interval based on likelihood-ratio ordering, we extract an upper limit at 95% C.L. of Γ_t < 13.1 GeV for an assumed top quark mass of 175 GeV/c².
2013-01-01
Background Mental health problems are common in the work force and influence work capacity and sickness absence. The aim was to examine self-assessed mental health problems and work capacity as determinants of time until return to work (RTW). Methods Employed women and men (n=6140), aged 19–64 years, registered as sick with all-cause sickness absence between February 18 and April 15, 2008, received a self-administered questionnaire covering health and work situation (response rate 54%). Demographic data were collected from official registers. This follow-up study included 2502 individuals. Of these, 1082 were currently off sick when answering the questionnaire. Register data on the total number of benefit-compensated sick-leave days at the end of 2008 were used to determine the time until RTW. Self-reported persistent mental illness, the WHO (Ten) Mental Well-Being Index and self-assessed work capacity in relation to knowledge, mental, collaborative and physical demands at work were used as determinants. Multinomial and binary logistic regression analyses were used to estimate odds ratios with 95% confidence intervals (CI) for the likelihood of RTW. Results The likelihood of prolonged RTW (≥105 days) was higher among those with persistent mental illness (OR = 2.97; 95% CI, 2.10-4.20) and those with low mental well-being (OR = 2.89; 95% CI, 2.31-3.62) after adjusting for gender, age, SES, hours worked and sick leave 2007. In an analysis of employees who were off sick when they answered the questionnaire, the likelihood of prolonged RTW (≥105 days) was higher among those who reported low capacity to work in relation to knowledge, mental, collaborative and physical demands at work. In a multivariable analysis, the likelihood of prolonged RTW (≥105 days) among those with low mental well-being remained significant (OR = 1.93; 95% CI, 1.46-2.55) even after adjustment for all dimensions of capacity to work. Conclusion Self-assessed persistent mental illness, low mental well-being and low work capacity increased the likelihood of a prolonged time until RTW. This study is unique because it is based on new sick-leave spells and is the first to show that low mental well-being was a strong determinant of RTW even after adjustment for work capacity. Our findings support the importance of identifying individuals with low mental well-being as a way to promote RTW. PMID:24124982
Macera, Márcia A C; Louzada, Francisco; Cancho, Vicente G; Fontes, Cor J F
2015-03-01
In this paper, we introduce a new model for recurrent event data characterized by a fully parametric baseline rate function based on the exponential-Poisson distribution. The model arises from a latent competing risk scenario, in the sense that there is no information about which cause was responsible for the event occurrence; the time of each recurrence is then given by the minimum lifetime value among all latent causes. The new model includes the classical homogeneous Poisson process as a particular case. The properties of the proposed model are discussed, including its hazard rate function, survival function, and ordinary moments. The inferential procedure is based on the maximum likelihood approach. We consider the important issue of model selection between the proposed model and its particular case by means of the likelihood ratio test and the score test. Goodness of fit of the recurrent event models is assessed using Cox-Snell residuals. A simulation study evaluates the performance of the estimation procedure for small and moderate sample sizes. Applications to two real data sets are provided to illustrate the proposed methodology. One of them, first analyzed by our team of researchers, concerns the recurrence of malaria, an infectious disease caused by a protozoan parasite that infects red blood cells. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
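Model selection between a full model and a nested special case by the likelihood ratio test, as between the exponential-Poisson model and the homogeneous Poisson process above, follows the generic pattern below; the log-likelihood values are hypothetical, and the chi-squared reference assumes the usual regularity conditions hold.

```python
from scipy.stats import chi2

# Generic likelihood ratio test between a full model and a nested special case.
loglik_full, k_full = -412.3, 2      # e.g. exponential-Poisson fit (illustrative)
loglik_nested, k_nested = -418.9, 1  # e.g. homogeneous Poisson fit (illustrative)

lr_stat = 2 * (loglik_full - loglik_nested)
p_value = chi2.sf(lr_stat, df=k_full - k_nested)
print(f"LR = {lr_stat:.2f}, p = {p_value:.4f}")
```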
The prospect of predictive testing for personal risk: attitudes and decision making.
Wroe, A L; Salkovskis, P M; Rimes, K A
1998-06-01
As predictive tests for medical problems such as genetic disorders become more widely available, it becomes increasingly important to understand the processes involved in the decision whether or not to seek testing. This study investigates the decision to pursue the possibility of testing. Individuals (one group who had already contemplated the possibility of predictive testing and one group who had not) were asked to consider predictive testing for several diseases. They rated the likelihood of opting for testing and specified the reasons which they believed had affected their decision. The ratio of the number of reasons stated for testing to the number of reasons stated against testing was a good predictor of the stated likelihood of testing, particularly when the reasons were weighted by utility (importance). Those who had previously contemplated testing specified more emotional reasons. It is proposed that the decision process is internally logical, although it may seem illogical to others because the decision rests on idiosyncratic premises (or reasons). It is concluded that utility theory is a useful basis for describing how people make decisions related to predictive testing; modifications of the theory are proposed.
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-01-01
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about human brain function and health. For this reason, recent clinical brain research and brain-computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event-related potential (ERP) signal that represents a brain's response to a particular stimulus or task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may differ in each trial. We evaluate the performance for different types of signals, such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at an expected delay error of 10°. PMID:27322267
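For contrast with the joint-ML schemes proposed above, the sketch below shows a conventional realign-and-average baseline that uses the running average as the reference signal; it is a simplified illustration under assumed integer-sample delays, not the authors' estimator.

```python
import numpy as np

# Each trial's delay is estimated by maximizing cross-correlation with the
# current average, trials are realigned, and the average is recomputed.
def realign_average(trials, n_iter=5):
    avg = trials.mean(axis=0)
    delays = np.zeros(len(trials), dtype=int)
    for _ in range(n_iter):
        for i, x in enumerate(trials):
            xc = np.correlate(x, avg, mode="full")
            delays[i] = xc.argmax() - (len(avg) - 1)  # lag of best match
        aligned = np.stack([np.roll(x, -d) for x, d in zip(trials, delays)])
        avg = aligned.mean(axis=0)
    return avg, delays
```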
Spatial scan statistics for detection of multiple clusters with arbitrary shapes.
Lin, Pei-Sheng; Kung, Yi-Hung; Clayton, Murray
2016-12-01
In applying scan statistics for public health research, it would be valuable to develop a detection method for multiple clusters that accommodates spatial correlation and covariate effects in an integrated model. In this article, we connect the concepts of the likelihood ratio (LR) scan statistic and the quasi-likelihood (QL) scan statistic to provide a series of detection procedures sufficiently flexible to apply to clusters of arbitrary shape. First, we use an independent scan model for detection of clusters and then a variogram tool to examine the existence of spatial correlation and regional variation based on residuals of the independent scan model. When the estimate of regional variation is significantly different from zero, a mixed QL estimating equation is developed to estimate coefficients of geographic clusters and covariates. We use the Benjamini-Hochberg procedure (1995) to find a threshold for p-values to address the multiple testing problem. A quasi-deviance criterion is used to regroup the estimated clusters to find geographic clusters with arbitrary shapes. We conduct simulations to compare the performance of the proposed method with other scan statistics. For illustration, the method is applied to enterovirus data from Taiwan. © 2016, The International Biometric Society.
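The Benjamini-Hochberg step-up procedure used above to threshold cluster p-values can be implemented in a few lines; this is a standard textbook version, not code from the paper.

```python
import numpy as np

# Benjamini-Hochberg: reject the hypotheses with the k smallest p-values,
# where k is the largest i such that p_(i) <= q * i / m.
def benjamini_hochberg(pvals, q=0.05):
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    below = p[order] <= q * (np.arange(1, m + 1) / m)
    if not below.any():
        return np.zeros(m, dtype=bool)
    k = np.max(np.where(below)[0])
    reject = np.zeros(m, dtype=bool)
    reject[order[: k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20]))  # first two rejected
```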
Wu, Yufeng
2012-03-01
Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution © 2011 The Society for the Study of Evolution.
Liquid Water Oceans in Ice Giants
NASA Technical Reports Server (NTRS)
Wiktorowicz, Sloane J.; Ingersoll, Andrew P.
2007-01-01
Aptly named, ice giants such as Uranus and Neptune contain significant amounts of water. While this water cannot be present near the cloud tops, it must be abundant in the deep interior. We investigate the likelihood of a liquid water ocean existing in the hydrogen-rich region between the cloud tops and deep interior. Starting from an assumed temperature at a given upper tropospheric pressure (the photosphere), we follow a moist adiabat downward. The mixing ratio of water to hydrogen in the gas phase is small in the photosphere and increases with depth. The mixing ratio in the condensed phase is near unity in the photosphere and decreases with depth; this gives two possible outcomes. If at some pressure level the mixing ratio of water in the gas phase is equal to that in the deep interior, then that level is the cloud base. The gas below the cloud base has constant mixing ratio. Alternately, if the mixing ratio of water in the condensed phase reaches that in the deep interior, then the surface of a liquid ocean will occur. Below this ocean surface, the mixing ratio of water will be constant. A cloud base occurs when the photospheric temperature is high. For a family of ice giants with different photospheric temperatures, the cooler ice giants will have warmer cloud bases. For an ice giant with a cool enough photospheric temperature, the cloud base will exist at the critical temperature. For still cooler ice giants, ocean surfaces will result. A high mixing ratio of water in the deep interior favors a liquid ocean. We find that Neptune is both too warm (photospheric temperature too high) and too dry (mixing ratio of water in the deep interior too low) for liquid oceans to exist at present. To have a liquid ocean, Neptune's deep-interior water-to-gas ratio would have to be higher than current models allow, and the density at 19 kbar would have to be approximately 0.8 g/cm³. Such a high density is inconsistent with gravitational data obtained during the Voyager flyby. In our model, Neptune's water cloud base occurs around 660 K and 11 kbar, and the density there is consistent with Voyager gravitational data. As Neptune cools, the probability of a liquid ocean increases. Extrasolar "hot Neptunes," which presumably migrate inward toward their parent stars, cannot harbor liquid water oceans unless they have lost almost all of the hydrogen and helium from their deep interiors.
Application of the Elaboration Likelihood Model of Attitude Change to Assertion Training.
ERIC Educational Resources Information Center
Ernst, John M.; Heesacker, Martin
1993-01-01
College students (n=113) participated in study comparing effects of elaboration likelihood model (ELM) based assertion workshop with those of typical assertion workshop. ELM-based workshop was significantly better at producing favorable attitude change, greater intention to act assertively, and more favorable evaluations of workshop content.…
How the economic recession has changed the likelihood of reporting poor self-rated health in Spain.
Arroyo, Elena; Renart, Gemma; Saez, Marc
2015-12-18
Between 2006 and 2011, self-rated health (SRH) (the subjective report of an individual's health status) actually improved in Spain despite the country being in the grips of a serious economic recession. This study examines whether the likelihood of reporting poor health changed because of the global financial crisis. It also attempts to estimate the differences between SRH and other self-perceived measures of health among groups before and during the economic crisis in Spain. Cross-sectional population-based surveys were conducted in Spain (ENSE 2006 and ENSE 2011) and in Catalonia (ESCA 2006 and ESCA 2011) in 2006 and again in 2011. We used random-effects logistic models (dependent variable SRH: 1 poor, 0 good), exact matching, and propensity score matching. The results for the ENSE explanatory variables are the same in both 2006 and 2011: all diseases except obesity, which is unrelated to SRH, negatively affect SRH, whereas alcohol habits positively affect SRH. The ESCA results show that in 2006 all diseases are significant and have large odds ratios (ORs), so individuals suffering from any of these diseases are more likely to report poor health. In 2011 the same pattern holds, except that allergies, obesity, high cholesterol and hypertension are not statistically significant. Drinking habits had a positive effect on SRH in 2006 and 2011, whereas smoking is unrelated to SRH. When the likelihood of reporting poor health in 2006 is added as a variable to the 2011 logistic regression, it is not significant in either the ENSE or the ESCA data, nor is it significant when controlling for age, gender, employment status or education. The results of our analysis show that the financial crisis did not alter the likelihood of reporting poor health in 2011; there are no differences in perceived health between 2006 and 2011.
Risk factors for classical hysterotomy by gestational age.
Osmundson, Sarah S; Garabedian, Matthew J; Lyell, Deirdre J
2013-10-01
To examine the likelihood of classical hysterotomy across preterm gestational ages and to identify factors that increase its occurrence. This is a secondary analysis of a prospective observational cohort, collected by the Maternal-Fetal Medicine Network, of all women with singleton gestations who underwent a cesarean delivery with a known hysterotomy type. Comparisons were made based on gestational age. Factors thought to influence hysterotomy type were studied, including maternal age, body mass index, parity, birth weight, small-for-gestational-age (SGA) status, fetal presentation, labor preceding delivery, and emergent delivery. Approximately 36,000 women were eligible for analysis, of whom 34,454 (95.7%) underwent low transverse hysterotomy and 1,562 (4.3%) underwent classical hysterotomy. The median gestational age of women undergoing a classical hysterotomy was 32 weeks, and the incidence peaked between 24 0/7 weeks and 25 6/7 weeks (53.2%), declining with each additional week of gestation thereafter (P for trend <.001). In multivariable regression, the likelihood of classical hysterotomy was increased with SGA status (n=258; odds ratio [OR] 2.71; confidence interval [CI] 1.78-4.13), birth weight 1,000 g or less (n=467; OR 1.51; CI 1.03-2.24), and noncephalic presentation (n=783; OR 2.03; CI 1.52-2.72). The likelihood of classical hysterotomy was decreased when labor preceded delivery between 23 0/7 and 27 6/7 weeks of gestation and after 32 weeks of gestation, and was increased by multiparity and previous cesarean delivery between 28 0/7 and 31 6/7 weeks of gestation and after 32 weeks of gestation. Emergent delivery did not predict classical hysterotomy. Fifty percent of women at 23-26 weeks of gestation who undergo cesarean delivery have a classical hysterotomy, and the risk declines steadily thereafter. This likelihood is increased by fetal factors, especially SGA status and noncephalic presentation. Level of evidence: II.
Liu, Wen-Jun; Li, Gui-Zhen; Liu, Hai-Feng; Lei, Jun-Qiang
2018-04-01
We sought to perform a meta-analysis to comprehensively evaluate the diagnostic accuracy of dual-source computed tomography angiography (DSCTA) in detecting coronary in-stent restenosis (CISR) when compared to invasive coronary angiography. Stent-based studies in which DSCTA was used as a diagnostic tool for CISR, published through October 2017, were retrieved from several major databases (PubMed, Embase, Scopus, The Cochrane Library, and Web of Science). Study inclusion, data extraction, and risk-of-bias assessment were conducted by two researchers independently. Pooled sensitivity (SEN), specificity (SPE), positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR), and area under the summary receiver operating characteristic (SROC) curve (AUC) were calculated to assess the diagnostic value. In addition, heterogeneity and subgroup analyses were also carried out. A total of 13 studies with 894 patients and 1384 assessable stents were included. The pooled results for DSCTA diagnosing CISR were as follows: SEN 0.92 (95% confidence interval [CI] 0.87-0.96), SPE 0.91 (95% CI 0.87-0.94), PLR 9.83 (95% CI 6.93-13.94), NLR 0.09 (95% CI 0.05-0.15), DOR 114.73 (95% CI 64.12-205.28), and AUC 0.97 (95% CI 0.95-0.98). The subgroup analysis suggested that DSCTA performed significantly better in CISR detection when the stent diameter was ≥3 mm than when it was <3 mm (0.98 [0.97-0.99] vs 0.82 [0.79-0.86]; P < .05). This study revealed that DSCTA has excellent diagnostic performance for detecting CISR and may serve as an alternative for further evaluation of patients with suspected CISR, especially for stent diameters ≥3 mm. © 2018 Wiley Periodicals, Inc.
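A minimal sketch of study-level pooling on the log diagnostic odds ratio scale is shown below; the 2x2 counts are hypothetical, and published meta-analyses such as this one typically use bivariate random-effects models rather than this simple fixed-effect version.

```python
import numpy as np

# Inverse-variance pooling of log-DORs with a 0.5 continuity correction.
studies = [(40, 5, 3, 60), (55, 8, 4, 90), (30, 2, 5, 45)]  # (tp, fp, fn, tn), illustrative

log_dors, weights = [], []
for tp, fp, fn, tn in studies:
    tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    log_dors.append(np.log((tp * tn) / (fp * fn)))
    weights.append(1 / (1 / tp + 1 / fp + 1 / fn + 1 / tn))  # 1 / Var(log DOR)

pooled = np.average(log_dors, weights=weights)
se = 1 / np.sqrt(np.sum(weights))
print(f"pooled DOR = {np.exp(pooled):.1f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.1f}-{np.exp(pooled + 1.96 * se):.1f})")
```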
Psychopathology among New York City public school children 6 months after September 11.
Hoven, Christina W; Duarte, Cristiane S; Lucas, Christopher P; Wu, Ping; Mandell, Donald J; Goodwin, Renee D; Cohen, Michael; Balaban, Victor; Woodruff, Bradley A; Bin, Fan; Musa, George J; Mei, Lori; Cantor, Pamela A; Aber, J Lawrence; Cohen, Patricia; Susser, Ezra
2005-05-01
Children exposed to a traumatic event may be at higher risk for developing mental disorders. The prevalence of child psychopathology, however, has not been assessed in a population-based sample exposed to different levels of mass trauma or across a range of disorders. To determine prevalence and correlates of probable mental disorders among New York City, NY, public school students 6 months following the September 11, 2001, World Trade Center attack. Survey. New York City public schools. A citywide, random, representative sample of 8236 students in grades 4 through 12, including oversampling in closest proximity to the World Trade Center site (ground zero) and other high-risk areas. Children were screened for probable mental disorders with the Diagnostic Interview Schedule for Children Predictive Scales. One or more of 6 probable anxiety/depressive disorders were identified in 28.6% of all children. The most prevalent were probable agoraphobia (14.8%), probable separation anxiety (12.3%), and probable posttraumatic stress disorder (10.6%). Higher levels of exposure correspond to higher prevalence for all probable anxiety/depressive disorders. Girls and children in grades 4 and 5 were the most affected. In logistic regression analyses, child's exposure (adjusted odds ratio, 1.62), exposure of a child's family member (adjusted odds ratio, 1.80), and the child's prior trauma (adjusted odds ratio, 2.01) were related to increased likelihood of probable anxiety/depressive disorders. Results were adjusted for different types of exposure, sociodemographic characteristics, and child mental health service use. A high proportion of New York City public school children had a probable mental disorder 6 months after September 11, 2001. The data suggest that there is a relationship between level of exposure to trauma and likelihood of child anxiety/depressive disorders in the community. The results support the need to apply wide-area epidemiological approaches to mental health assessment after any large-scale disaster.
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
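A minimal EM iteration for a two-component univariate Gaussian mixture, the estimator class compared in this study, looks as follows; the data and initialization are illustrative, with the sample size set to 160 to mirror the study design.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 80), rng.normal(3, 1.5, 80)])  # n = 160

pi, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibility of component 1 for each point
    pdf = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    r1, r2 = pi * pdf(mu[0], sd[0]), (1 - pi) * pdf(mu[1], sd[1])
    g = r1 / (r1 + r2)
    # M-step: update mixing proportion, means, and standard deviations
    pi = g.mean()
    mu = np.array([np.sum(g * x) / g.sum(), np.sum((1 - g) * x) / (1 - g).sum()])
    sd = np.sqrt(np.array([np.sum(g * (x - mu[0]) ** 2) / g.sum(),
                           np.sum((1 - g) * (x - mu[1]) ** 2) / (1 - g).sum()]))
print(pi, mu, sd)
```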
On the use of the likelihood ratio for forensic evaluation: response to Fenton et al.
Biedermann, Alex; Hicks, Tacha; Taroni, Franco; Champod, Christophe; Aitken, Colin
2014-07-01
This letter to the Editor comments on the article When 'neutral' evidence still has probative value (with implications from the Barry George Case) by N. Fenton et al. [1] (2014). Copyright © 2014 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
Diffuse Prior Monotonic Likelihood Ratio Test for Evaluation of Fused Image Quality Measures
2011-02-01
Human Behavior Drift Detection in a Smart Home Environment.
Masciadri, Andrea; Trofimova, Anna A; Matteucci, Matteo; Salice, Fabio
2017-01-01
The proposed system aims to support independent living for elderly people by providing an early indicator of changes in habits that might be relevant for the diagnosis of disease. It relies on a Hidden Markov Model to describe behavior observed from sensor data, while a likelihood ratio test quantifies the variation between different time periods.
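One way to realize this idea, sketched here under the assumption that the third-party hmmlearn package is available, is to fit an HMM on a reference period and flag a drop in the average log-likelihood of recent data; the features and threshold are illustrative, not those of the paper.

```python
import numpy as np
from hmmlearn import hmm  # third-party package, assumed available

rng = np.random.default_rng(1)
reference = rng.normal(0, 1, size=(500, 3))      # stand-in for sensor features
recent = rng.normal(0.8, 1, size=(200, 3))       # later period, shifted habits

model = hmm.GaussianHMM(n_components=3, random_state=1).fit(reference)
ll_ref = model.score(reference) / len(reference)  # average log-likelihood
ll_new = model.score(recent) / len(recent)
if ll_ref - ll_new > 0.5:                         # threshold is illustrative
    print("possible behavior drift:", ll_ref - ll_new)
```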
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
On the Nature of SEM Estimates of ARMA Parameters.
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2002-01-01
Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…
UWB pulse detection and TOA estimation using GLRT
NASA Astrophysics Data System (ADS)
Xie, Yan; Janssen, Gerard J. M.; Shakeri, Siavash; Tiberius, Christiaan C. J. M.
2017-12-01
In this paper, a novel statistical approach is presented for time-of-arrival (TOA) estimation based on first path (FP) pulse detection using a sub-Nyquist sampling ultra-wide band (UWB) receiver. The TOA measurement accuracy, which cannot be improved by averaging of the received signal, can be enhanced by the statistical processing of a number of TOA measurements. The TOA statistics are modeled and analyzed for a UWB receiver using threshold crossing detection of a pulse signal with noise. The detection and estimation scheme based on the Generalized Likelihood Ratio Test (GLRT) detector, which captures the full statistical information of the measurement data, is shown to achieve accurate TOA estimation and allows for a trade-off between the threshold level, the noise level, the amplitude and the arrival time of the first path pulse, and the accuracy of the obtained final TOA.
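A textbook special case of the GLRT, a pulse of known shape but unknown amplitude in white Gaussian noise, reduces to a matched-filter statistic, as sketched below; this is far simpler than the paper's sub-Nyquist receiver model and is included only to illustrate the detector structure.

```python
import numpy as np

# GLRT with the amplitude maximized out: under H0 the statistic is chi-squared
# with 1 degree of freedom, so the threshold sets the false-alarm rate.
def glrt_detect(x, s, sigma2, threshold):
    a_hat = (s @ x) / (s @ s)                 # ML amplitude estimate under H1
    stat = (s @ x) ** 2 / (sigma2 * (s @ s))  # generalized likelihood ratio statistic
    return stat > threshold, a_hat, stat

rng = np.random.default_rng(2)
s = np.exp(-0.5 * ((np.arange(64) - 20) / 3.0) ** 2)  # assumed pulse template
x = 0.7 * s + rng.normal(0, 1.0, 64)
print(glrt_detect(x, s, 1.0, threshold=10.8))  # ~0.1% false-alarm threshold
```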
Extended target recognition in cognitive radar networks.
Wei, Yimin; Meng, Huadong; Liu, Yimin; Wang, Xiqin
2010-01-01
We address the problem of adaptive waveform design for extended target recognition in cognitive radar networks. A closed-loop active target recognition radar system is extended to the case of a centralized cognitive radar network, in which a generalized likelihood ratio (GLR) based sequential hypothesis testing (SHT) framework is employed. Using Doppler velocities measured by multiple radars, the target aspect angle for each radar is calculated. The joint probability of each target hypothesis is then updated using observations from different radar line of sights (LOS). Based on these probabilities, a minimum correlation algorithm is proposed to adaptively design the transmit waveform for each radar in an amplitude fluctuation situation. Simulation results demonstrate performance improvements due to the cognitive radar network and adaptive waveform design. Our minimum correlation algorithm outperforms the eigen-waveform solution and other non-cognitive waveform design approaches.
Relationship formation and stability in emerging adulthood: do sex ratios matter?
Warner, Tara D.; Manning, Wendy D.; Giordano, Peggy C.; Longmore, Monica A.
2013-01-01
Research links sex ratios with the likelihood of marriage and divorce. However, whether sex ratios similarly influence precursors to marriage—transitions in and out of dating or cohabiting relationships—is unknown. Utilizing data from the Toledo Adolescent Relationships Study (TARS) and the 2000 census, this study assesses whether sex ratios influence the formation and stability of emerging adults’ romantic relationships. Findings show that relationship formation is unaffected by partner availability, yet the presence of partners increases women’s odds of cohabiting, decreases men’s odds of cohabiting, and increases number of dating partners and cheating among men. It appears that sex ratios influence not only transitions in and out of marriage, but also the process through which individuals search for and evaluate partners prior to marriage. PMID:24265510
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit to the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional MCMC, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss differences of this approach to Approximate Bayesian Computing (ABC), another commonly used method to generate simulation-based likelihood approximations. Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
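The core of a parametric likelihood approximation of the kind described above can be illustrated in a toy setting: simulate repeatedly at a candidate parameter, fit a Gaussian to a summary statistic, and evaluate the observed summary under that fit. The simulator below is a trivial stand-in for a forest model, and a real application would embed this evaluation in an MCMC loop.

```python
import numpy as np

def simulator(theta, rng, n=100):
    # Trivial stochastic "model": mean of Poisson draws; a placeholder only.
    return rng.poisson(theta, size=n).mean()

def approx_loglik(theta, observed_summary, n_sims=200, seed=3):
    rng = np.random.default_rng(seed)
    sims = np.array([simulator(theta, rng) for _ in range(n_sims)])
    mu, sd = sims.mean(), sims.std(ddof=1)    # parametric (Gaussian) approximation
    z = (observed_summary - mu) / sd
    return -0.5 * z**2 - np.log(sd * np.sqrt(2 * np.pi))

print(approx_loglik(theta=4.0, observed_summary=4.2))
```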
Project STYLE: a multisite RCT for HIV prevention among youths in mental health treatment.
Brown, Larry K; Hadley, Wendy; Donenberg, Geri R; DiClemente, Ralph J; Lescano, Celia; Lang, Delia M; Crosby, Richard; Barker, David; Oster, Danielle
2014-03-01
The study examined the efficacy of family-based and adolescent-only HIV prevention programs in decreasing HIV risk and improving parental monitoring and sexual communication among youths in mental health treatment. In a randomized controlled trial (RCT), 721 adolescents (ages 13-18 years) and their caregivers from mental health settings in three U.S. cities were randomly assigned to one of three theory-based, structured group interventions: family-based HIV prevention, adolescent-only HIV prevention, and adolescent-only health promotion. Interventions were delivered during an all-day workshop. Assessments were completed at baseline and three months postintervention. Compared with those in the health intervention, adolescents in the HIV prevention interventions reported fewer unsafe sex acts (adjusted rate ratio=.49, p=.01), greater condom use (adjusted relative change=59%, p=.01), and greater likelihood of avoiding sex (adjusted odds ratio=1.44, p=.05). They also showed improved HIV knowledge (p<.01) and self-efficacy (p<.05). The family-based intervention, compared with the other interventions, produced significant improvements in parent-teen sexual communication (p<.01), parental monitoring (p<.01), and parental permissiveness (p=.05). This RCT found that the HIV prevention interventions reduced sexual risk behavior over three months in a large, diverse sample of youths in mental health treatment and that the family-based intervention improved parental monitoring and communication with teens about sex. These interventions show promise.
Henry, Brandon Michael; Roy, Joyeeta; Ramakrishnan, Piravin Kumar; Vikse, Jens; Tomaszewski, Krzysztof A; Walocha, Jerzy A
2016-07-01
Several studies have explored the use of serum procalcitonin (PCT) in differentiating between bacterial and viral etiologies in children with suspected meningitis. We pooled these studies into a meta-analysis to determine the PCT diagnostic accuracy. All major databases were searched through March 2015. No date or language restrictions were applied. Eight studies (n = 616 pediatric patients) were included. Serum PCT assay was found to be very accurate for differentiating the etiology of pediatric meningitis, with pooled sensitivity and specificity of 0.96 (95% CI = 0.92-0.98) and 0.89 (95% CI = 0.86-0.92), respectively. The pooled positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio (DOR), and area under the curve (AUC) for PCT were 7.5 (95% CI = 5.6-10.1), 0.08 (95% CI = 0.04-0.14), 142.3 (95% CI = 59.5-340.4), and 0.97 (SE = 0.01), respectively. In 6 studies, PCT was found to be superior to CRP, whose DOR was only 16.7 (95% CI = 8.8-31.7). Our meta-analysis demonstrates that serum PCT assay is a highly accurate and powerful test for rapidly differentiating between bacterial and viral meningitis in children. © The Author(s) 2015.
Assessment of parametric uncertainty for groundwater reactive transport modeling.
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
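Simple residual diagnostics of the kind motivating the generalized likelihood function, a normality test and a lag-1 autocorrelation check, are sketched below on synthetic residuals; this is an illustration of the checks, not the study's actual analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
e = rng.standard_t(df=3, size=300)    # heavy-tailed, non-Gaussian stand-in residuals
e = e + 0.6 * np.roll(e, 1)           # induce serial correlation

w_stat, p_norm = stats.shapiro(e)     # Shapiro-Wilk normality test
r1 = np.corrcoef(e[:-1], e[1:])[0, 1]  # lag-1 autocorrelation of time-ordered residuals
print(f"Shapiro-Wilk p = {p_norm:.3g}, lag-1 autocorrelation = {r1:.2f}")
```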
NASA Astrophysics Data System (ADS)
Trifonov, A. P.; Korchagin, Yu. E.; Korol'kov, S. V.
2018-05-01
We synthesize the quasi-likelihood, maximum-likelihood, and quasioptimal algorithms for estimating the arrival time and duration of a radio signal with unknown amplitude and initial phase. The discrepancies between the hardware and software realizations of the estimation algorithm are shown. The characteristics of the synthesized-algorithm operation efficiency are obtained. Asymptotic expressions for the biases, variances, and the correlation coefficient of the arrival-time and duration estimates, which hold true for large signal-to-noise ratios, are derived. The accuracy losses of the estimates of the radio-signal arrival time and duration because of the a priori ignorance of the amplitude and initial phase are determined.
Generalized likelihood ratios for quantitative diagnostic test scores.
Tandberg, D; Deely, J J; O'Malley, A J
1997-11-01
The reduction of quantitative diagnostic test scores to the dichotomous case is a wasteful and unnecessary simplification in the era of high-speed computing. Physicians could make better use of the information embedded in quantitative test results if modern generalized curve estimation techniques were applied to the likelihood functions of Bayes' theorem. Hand calculations could be completely avoided and computed graphical summaries provided instead. Graphs showing posttest probability of disease as a function of pretest probability with confidence intervals (POD plots) would enhance acceptance of these techniques if they were immediately available at the computer terminal when test results were retrieved. Such constructs would also provide immediate feedback to physicians when a valueless test had been ordered.
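The posttest-probability computation that a POD plot would summarize is Bayes' theorem in odds form, shown below with illustrative numbers.

```python
# Posttest odds = pretest odds x likelihood ratio.
def posttest_probability(pretest_prob, likelihood_ratio):
    pre_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# e.g. a test result carrying LR = 4.0 applied at a 30% pretest probability
print(f"{posttest_probability(0.30, 4.0):.2f}")  # -> 0.63
```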
Male sexual strategies modify ratings of female models with specific waist-to-hip ratios.
Brase, Gary L; Walker, Gary
2004-06-01
Female waist-to-hip ratio (WHR) has generally been an important predictor of ratings of physical attractiveness and related characteristics. Individual differences in ratings do exist, however, and may be related to differences in the reproductive tactics of the male raters, such as pursuit of short-term or long-term relationships and adjustments based on perceptions of one's own quality as a mate. Forty males, categorized according to sociosexual orientation and physical qualities (WHR, Body Mass Index, and self-rated desirability), rated female models on both attractiveness and the likelihood that they would approach them. Sociosexually restricted males were less likely to approach females rated as most attractive (with 0.68-0.72 WHR), as compared with unrestricted males. Males with lower scores in terms of physical qualities gave ratings indicating more favorable evaluations of female models with lower WHR. The results indicate that attractiveness and willingness to approach are overlapping but distinguishable constructs, both of which are influenced by variations in characteristics of the raters.
Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems
NASA Astrophysics Data System (ADS)
El-Ghandour, Osama M.; Saha, Debabrata
1991-05-01
A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. Symbol error rate is found to be approximately twice the symbol error rate in a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the error is due to AWGN, the ratio of double error rate to single error rate can be very high, and the ratio may approach zero at high SNR. To improve error rate, differential detection through maximum-likelihood decoding based on multiple or N symbol observations is considered. If N and SNR are large this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.
Savage, Nathan J; Fritz, Julie M; Thackeray, Anne
2014-07-01
Cross-sectional diagnostic accuracy study. To investigate the relationship between history and physical examination findings and the outcome of electrodiagnostic testing in patients with sciatica referred to physical therapy. Electrodiagnostic testing is routinely used to evaluate patients with sciatica. Recent evidence suggests that the presence of radiculopathy identified with electrodiagnostic testing may predict better functional outcomes in these patients. While some patient history and physical examination findings have been shown to predict the presence of disc herniation or neurological insult, little is known about their relationship to the results of electrodiagnostic testing. Electrodiagnostic testing was performed on 38 patients with sciatica who participated in a randomized trial that compared different physical therapy interventions. The diagnostic gold standard was the presence or absence of radiculopathy, based on the results of the needle electromyographic examination. Diagnostic sensitivity and specificity values were calculated, along with corresponding likelihood ratios, for select patient history and physical examination variables. No significant relationship was found between select patient history and physical examination findings, analyzed individually or in combination, and the outcome of electrodiagnostic testing. Diagnostic sensitivity values ranged from 0.03 (95% confidence interval [CI]: 0.00, 0.24) to a high of 0.95 (95% CI: 0.72, 0.99), and specificity values ranged from 0.10 (95% CI: 0.02, 0.34) to a high of 0.95 (95% CI: 0.72, 0.99). Positive likelihood ratios ranged from 0.15 (95% CI: 0.01, 2.87) to a high of 2.33 (95% CI: 0.71, 7.70), and negative likelihood ratios ranged from 2.00 (95% CI: 0.35, 11.48) to a low of 0.50 (95% CI: 0.03, 8.10). In this investigation, the relationship between patient history and physical examination findings and the outcome of electrodiagnostic testing among patients with sciatica was not found to be statistically significant or clinically meaningful. However, given the small sample size and corresponding large CIs, these results should be considered with caution, recognizing that some of the history and physical examination variables may prove useful in future research. These findings suggest that electrodiagnostic testing is essential to identify the subgroup of patients with sciatica who have measurable nerve injury consistent with radiculopathy, which may be an important prognostic factor for recovery. Level of Evidence Diagnosis, level 3b-. J Orthop Sports Phys Ther 2014;44(7):508-517. Epub 22 May 2014. doi:10.2519/jospt.2014.5002.
Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders
2013-10-01
Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model, local autoregressive average (LAURA), was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard, the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. Reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results, as compared with the discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method is feasible in clinical practice and has a good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.