Likelihood ratios for glaucoma diagnosis using spectral-domain optical coherence tomography.
Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M; Weinreb, Robert N; Medeiros, Felipe A
2013-11-01
To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral-domain optical coherence tomography (spectral-domain OCT). Observational cohort study. A total of 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the receiver operating characteristic (ROC) curve. Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive likelihood ratios (ie, likelihood ratios greater than 1), whereas RNFL thickness values higher than 86 μm were associated with negative likelihood ratios (ie, likelihood ratios smaller than 1). A modified Fagan nomogram was provided to assist calculation of posttest probability of disease from the calculated likelihood ratios and pretest probability of disease. The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision making. Copyright © 2013. Published by Elsevier Inc.
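The posttest computation that the modified Fagan nomogram performs graphically is just Bayes' theorem in odds form. A minimal sketch (our illustration, not the authors' code; the example numbers are hypothetical):

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Bayes' theorem in odds form: posttest odds = pretest odds * LR."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# A 10% pretest probability of glaucoma and an RNFL measurement carrying a
# likelihood ratio of 5 yield a posttest probability of about 36%.
print(posttest_probability(0.10, 5.0))
```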
Exclusion probabilities and likelihood ratios with applications to kinship problems.
Slooten, Klaas-Jan; Egeland, Thore
2014-05-01
In forensic genetics, DNA profiles are compared in order to make inferences, paternity cases being a standard example. The statistical evidence can be summarized and reported in several ways. For example, in a paternity case, the likelihood ratio (LR) and the probability of not excluding a random man as father (RMNE) are two common summary statistics. There has been a long debate on the merits of the two statistics, also in the context of DNA mixture interpretation, and no general consensus has been reached. In this paper, we show that the RMNE is a certain weighted average of inverse likelihood ratios. This is true in any forensic context. We show that the likelihood ratio in favor of the correct hypothesis is, in expectation, bigger than the reciprocal of the RMNE probability. However, with the exception of pathological cases, it is also possible to obtain smaller likelihood ratios. We illustrate this result for paternity cases. Moreover, some theoretical properties of the likelihood ratio for a large class of general pairwise kinship cases, including expected value and variance, are derived. The practical implications of the findings are discussed and exemplified.
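In symbols, the two central claims of this abstract can be written as follows (our notation: $H_p$ is the hypothesis of the tested relationship, $g$ ranges over possible profiles, and the weights $w_g$ are left unspecified here):

```latex
% RMNE as a weighted average of inverse likelihood ratios,
% and the resulting lower bound on the expected LR under H_p:
\mathrm{RMNE} = \sum_{g} w_g \,\mathrm{LR}(g)^{-1}, \qquad
\mathbb{E}\left[\mathrm{LR} \mid H_p\right] \ge \frac{1}{\mathrm{RMNE}} .
```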
Optimal Methods for Classification of Digitally Modulated Signals
2013-03-01
Instead of using a ratio of likelihood functions, the proposed approach uses the Kullback-Leibler (KL) divergence. ... Blind demodulation was used to develop classification algorithms for a wider set of signal types. Two methodologies were used: the likelihood ratio test ...
Investigation of Acoustic Structure Quantification in the Diagnosis of Thyroiditis.
Park, Jisang; Hong, Hyun Sook; Kim, Chul-Hee; Lee, Eun Hye; Jeong, Sun Hye; Lee, A Leum; Lee, Heon
2016-03-01
The objective of this study was to evaluate the ability of acoustic structure quantification (ASQ) to diagnose thyroiditis. The echogenicity of 439 thyroid lobes, as determined using ASQ, was quantified and analyzed retrospectively. Thyroiditis was categorized into five subgroups. The results were presented in a modified chi-square histogram as the mode, average, ratio, blue mode, and blue average. We determined the cutoff values of ASQ from ROC analysis to detect and differentiate thyroiditis from a normal thyroid gland. We obtained data on the sensitivity and specificity of the cutoff values to distinguish between euthyroid patients with thyroiditis and patients with a normal thyroid gland. The mean ASQ values for patients with thyroiditis were statistically significantly greater than those for patients with a normal thyroid gland (p < 0.001). The AUCs were as follows: 0.93 for the ratio, 0.91 for the average, 0.90 for the blue average, 0.87 for the mode, and 0.87 for the blue mode. For the diagnosis of thyroiditis, the cutoff values were greater than 0.27 for the ratio, greater than 116.7 for the average, and greater than 130.7 for the blue average. The sensitivities and specificities were as follows: 84.0% and 96.6% for the ratio, 85.3% and 83.0% for the average, and 79.1% and 93.2% for the blue average, respectively. The ASQ parameters were successful in distinguishing patients with thyroiditis from patients with a normal thyroid gland, with likelihood ratios of 24.7 for the ratio, 5.0 for the average, and 11.6 for the blue average. With the use of the aforementioned cutoff values, the sensitivities and specificities for distinguishing between patients with thyroiditis and euthyroid patients without thyroiditis were 77.05% and 94.92% for the ratio, 85.25% and 82.20% for the average, and 77.05% and 92.37% for the blue average, respectively. ASQ can provide objective and quantitative analysis of thyroid echogenicity. ASQ parameters were successful in distinguishing between patients with thyroiditis and individuals without thyroiditis, with likelihood ratios of 24.7 for the ratio, 5.0 for the average, and 11.6 for the blue average.
Sinharay, Sandip
2017-09-01
Benefiting from item preknowledge is a major type of fraudulent behavior during educational assessments. Belov suggested the posterior shift statistic for detection of item preknowledge and showed its performance to be better on average than that of seven other statistics for detection of item preknowledge for a known set of compromised items. Sinharay suggested a statistic based on the likelihood ratio test for detection of item preknowledge; the advantage of the statistic is that its null distribution is known. Results from simulated and real data and adaptive and nonadaptive tests are used to demonstrate that the Type I error rate and power of the statistic based on the likelihood ratio test are very similar to those of the posterior shift statistic. Thus, the statistic based on the likelihood ratio test appears promising in detecting item preknowledge when the set of compromised items is known.
Average Likelihood Methods for Code Division Multiple Access (CDMA)
2014-05-01
... lengths in the range of 2² to 2¹³ and possibly higher. Keywords: DS/CDMA signals, classification, balanced CDMA load, synchronous CDMA, decision ... average likelihood ratio test (ALRT). We begin this classification problem by finding the size of the spreading matrix that generated the DS-CDMA signal. ... The classification of DS/CDMA signals should not be confused with the problem of multiuser detection. Multiuser detection deals ...
Optimum detection of tones transmitted by a spacecraft
NASA Technical Reports Server (NTRS)
Simon, M. K.; Shihabi, M. M.; Moon, T.
1995-01-01
The performance of a scheme proposed for automated routine monitoring of deep-space missions is presented. The scheme uses four different tones (sinusoids) transmitted from the spacecraft (S/C) to a ground station with the positive identification of each of them used to indicate different states of the S/C. Performance is measured in terms of detection probability versus false alarm probability with detection signal-to-noise ratio as a parameter. The cases where the phase of the received tone is unknown and where both the phase and frequency of the received tone are unknown are treated separately. The decision rules proposed for detecting the tones are formulated from average-likelihood ratio and maximum-likelihood ratio tests, the former resulting in optimum receiver structures.
Exclusion probabilities and likelihood ratios with applications to mixtures.
Slooten, Klaas-Jan; Egeland, Thore
2016-01-01
The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
NASA Astrophysics Data System (ADS)
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding
NASA Technical Reports Server (NTRS)
Mahmoud, Saad; Hi, Jianjun
2012-01-01
The Low Density Parity Check (LDPC) Code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio of signal amplitude to noise variance. Accurately estimating this ratio has shown as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up Table. The Pilot-Guided estimation method has shown that the maximum likelihood estimate of signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference of the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which is good for faster-changing channels compared to the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulated results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value. The magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation. This method is more complicated than the Pilot-Guided method due to the gain control circuitry, but does not have the real-time computation complexity of the Blind estimation method. Each of these methods can be used to provide an accurate estimation of the combining ratio, and the final selection of the estimation method depends on other design constraints.
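As a concrete illustration, here is a minimal NumPy sketch of the pilot-guided estimator as described above (function names and simulation parameters are ours): amplitude from the mean inner product with the known ASM symbols, noise variance from the mean squared sequence minus the squared amplitude, and the combining ratio as their quotient.

```python
import numpy as np

def pilot_guided_combining_ratio(rx, asm):
    """Estimate amplitude / noise variance from received soft symbols `rx`
    that line up with a known +/-1 synchronization pattern `asm`."""
    rx = np.asarray(rx, dtype=float)
    asm = np.asarray(asm, dtype=float)
    amp = np.mean(rx * asm)                  # ML signal-amplitude estimate
    noise_var = np.mean(rx ** 2) - amp ** 2  # ML noise-variance estimate
    return amp / noise_var                   # combining ratio a / sigma^2

# Quick check on a simulated AWGN channel (hypothetical parameters):
rng = np.random.default_rng(0)
asm = rng.choice([-1.0, 1.0], size=1024)     # known sync-marker symbols
a_true, sigma = 1.0, 0.8
rx = a_true * asm + rng.normal(0.0, sigma, size=asm.size)
print(pilot_guided_combining_ratio(rx, asm))  # should approach 1/0.64 ~ 1.56
```

In practice the averages would accumulate over several frames' worth of ASMs, which is the latency cost noted above.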
On the Nature of SEM Estimates of ARMA Parameters.
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2002-01-01
Reexamined the nature of structural equation modeling (SEM) estimates of autoregressive moving average (ARMA) models, replicated the simulation experiments of P. Molenaar, and examined the behavior of the log-likelihood ratio test. Simulation studies indicate that estimates of ARMA parameters observed with SEM software are identical to those…
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Using Fit Indexes to Select a Covariance Model for Longitudinal Data
ERIC Educational Resources Information Center
Liu, Siwei; Rovine, Michael J.; Molenaar, Peter C. M.
2012-01-01
This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error…
Genetic and phenotypic parameter estimates for feed intake and other traits in growing beef cattle
USDA-ARS?s Scientific Manuscript database
Genetic parameters for dry matter intake (DMI), residual feed intake (RFI), average daily gain (ADG), mid-period body weight (MBW), gain to feed ratio (G:F) and flight speed (FS) were estimated using 1165 steers from a mixed-breed population using restricted maximum likelihood methodology applied to...
Posada, David
2006-01-01
ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
Posada, David; Buckley, Thomas R
2004-10-01
Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
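For reference, the AIC-based model averaging applied in such analyses conventionally uses Akaike weights; in standard notation (a textbook formulation, not quoted from the paper):

```latex
\Delta_i = \mathrm{AIC}_i - \min_j \mathrm{AIC}_j, \qquad
w_i = \frac{\exp(-\Delta_i / 2)}{\sum_j \exp(-\Delta_j / 2)}, \qquad
\bar{\hat{\theta}} = \sum_i w_i \, \hat{\theta}_i ,
```

where $w_i$ is the weight of model $i$ and $\bar{\hat{\theta}}$ is the model-averaged estimate of a shared parameter; a parameter's relative importance can be depicted by summing the weights of the models that contain it.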
Matthews, Lynn T; Ribaudo, Heather B; Kaida, Angela; Bennett, Kara; Musinguzi, Nicholas; Siedner, Mark J; Kabakyenga, Jerome; Hunt, Peter W; Martin, Jeffrey N; Boum, Yap; Haberer, Jessica E; Bangsberg, David R
2016-04-01
HIV-infected women risk sexual and perinatal HIV transmission during conception, pregnancy, childbirth, and breastfeeding. We compared HIV-1 RNA suppression and medication adherence across periconception, pregnancy, and postpartum periods, among women on antiretroviral therapy (ART) in Uganda. We analyzed data from women in a prospective cohort study, aged 18-49 years, enrolled at ART initiation and with ≥1 pregnancy between 2005 and 2011. Participants were seen quarterly. The primary exposure of interest was pregnancy period, including periconception (3 quarters before pregnancy), pregnancy, postpartum (6 months after pregnancy outcome), or nonpregnancy related. Regression models using generalized estimating equations compared the likelihood of HIV-1 RNA ≤400 copies per milliliter, <80% average adherence based on electronic pill caps (medication event monitoring system), and likelihood of 72-hour medication gaps across each period. One hundred eleven women contributed 486 person-years of follow-up. Viral suppression was present at 89% of nonpregnancy, 97% of periconception, 93% of pregnancy, and 89% of postpartum visits, and was more likely during periconception (adjusted odds ratio, 2.15) compared with nonpregnant periods. Average ART adherence was 90% [interquartile range (IQR), 70%-98%], 93% (IQR, 82%-98%), 92% (IQR, 72%-98%), and 88% (IQR, 63%-97%) during nonpregnant, periconception, pregnant, and postpartum periods, respectively. Average adherence <80% was less likely during periconception (adjusted odds ratio, 0.68), and 72-hour gaps per 90 days were less frequent during periconception (adjusted relative risk, 0.72) and more frequent during postpartum (adjusted relative risk, 1.40). Women with pregnancy were virologically suppressed at most visits, with an increased likelihood of suppression and high adherence during periconception follow-up. Increased frequency of 72-hour gaps suggests a need for increased adherence support during postpartum periods.
Choosing relatives for DNA identification of missing persons.
Ge, Jianye; Budowle, Bruce; Chakraborty, Ranajit
2011-01-01
DNA-based analysis is integral to missing person identification cases. When direct references are not available, indirect relative references can be used to identify missing persons by kinship analysis. Generally, more reference relatives render greater accuracy of identification. However, it is costly to type multiple references. Thus, at times, decisions may need to be made on which relatives to type. In this study, pedigrees for 37 common reference scenarios with 13 CODIS STRs were simulated to rank the information content of different combinations of relatives. The results confirm that first-order relatives (parents and children) are the most preferred references for identifying missing persons; fullsibs are also informative. Less genetic dependence between references provides a higher likelihood ratio on average. Distant relatives may not be helpful solely by autosomal markers. But lineage-based Y chromosome and mitochondrial DNA markers can increase the likelihood ratio or serve as filters to exclude putative relationships. © 2010 American Academy of Forensic Sciences.
Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.
Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram
2017-02-01
In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts. Some of them are "true zeros," indicating that the drug-adverse event pairs cannot occur; these are distinguished from the other zero counts, which are modeled zeros and simply indicate that the drug-adverse event pairs have not occurred yet or have not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation-maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
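The zero-inflated Poisson building block mixes a point mass at zero with a Poisson count; in standard form, with $\pi$ the true-zero proportion and $\lambda$ the Poisson mean (generic notation, not the paper's):

```latex
P(N = 0) = \pi + (1 - \pi)\, e^{-\lambda}, \qquad
P(N = k) = (1 - \pi)\, \frac{e^{-\lambda} \lambda^{k}}{k!}, \quad k \ge 1 .
```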
Likelihood Ratio Tests for Special Rasch Models
ERIC Educational Resources Information Center
Hessen, David J.
2010-01-01
In this article, a general class of special Rasch models for dichotomous item scores is considered. Although Andersen's likelihood ratio test can be used to test whether a Rasch model fits to the data, the test does not differentiate between special Rasch models. Therefore, in this article, new likelihood ratio tests are proposed for testing…
Cha, Kenny H.; Hadjiiski, Lubomir; Samala, Ravi K.; Chan, Heang-Ping; Caoili, Elaine M.; Cohan, Richard H.
2016-01-01
Purpose: The authors are developing a computerized system for bladder segmentation in CT urography (CTU) as a critical component for computer-aided detection of bladder cancer. Methods: A deep-learning convolutional neural network (DL-CNN) was trained to distinguish between the inside and the outside of the bladder using 160 000 regions of interest (ROI) from CTU images. The trained DL-CNN was used to estimate the likelihood of an ROI being inside the bladder for ROIs centered at each voxel in a CTU case, resulting in a likelihood map. Thresholding and hole-filling were applied to the map to generate the initial contour for the bladder, which was then refined by 3D and 2D level sets. The segmentation performance was evaluated using 173 cases: 81 cases in the training set (42 lesions, 21 wall thickenings, and 18 normal bladders) and 92 cases in the test set (43 lesions, 36 wall thickenings, and 13 normal bladders). The computerized segmentation accuracy using the DL likelihood map was compared to that using a likelihood map generated by Haar features and a random forest classifier, and that using our previous conjoint level set analysis and segmentation system (CLASS) without using a likelihood map. All methods were evaluated relative to the 3D hand-segmented reference contours. Results: With DL-CNN-based likelihood map and level sets, the average volume intersection ratio, average percent volume error, average absolute volume error, average minimum distance, and the Jaccard index for the test set were 81.9% ± 12.1%, 10.2% ± 16.2%, 14.0% ± 13.0%, 3.6 ± 2.0 mm, and 76.2% ± 11.8%, respectively. With the Haar-feature-based likelihood map and level sets, the corresponding values were 74.3% ± 12.7%, 13.0% ± 22.3%, 20.5% ± 15.7%, 5.7 ± 2.6 mm, and 66.7% ± 12.6%, respectively. With our previous CLASS with local contour refinement (LCR) method, the corresponding values were 78.0% ± 14.7%, 16.5% ± 16.8%, 18.2% ± 15.0%, 3.8 ± 2.3 mm, and 73.9% ± 13.5%, respectively. Conclusions: The authors demonstrated that the DL-CNN can overcome the strong boundary between two regions that have large difference in gray levels and provides a seamless mask to guide level set segmentation, which has been a problem for many gradient-based segmentation methods. Compared to our previous CLASS with LCR method, which required two user inputs to initialize the segmentation, DL-CNN with level sets achieved better segmentation performance while using a single user input. Compared to the Haar-feature-based likelihood map, the DL-CNN-based likelihood map could guide the level sets to achieve better segmentation. The results demonstrate the feasibility of our new approach of using DL-CNN in combination with level sets for segmentation of the bladder. PMID:27036584
Likelihood Ratios for the Emergency Physician.
Peng, Paul; Coyle, Andrew
2018-04-26
The concept of likelihood ratios was introduced more than 40 years ago, yet this powerful metric has still not seen wider application or discussion in the medical decision-making process. There is concern that clinicians-in-training are still being taught an over-simplified approach to diagnostic test performance and have limited exposure to likelihood ratios. Even those familiar with likelihood ratios might perceive them as mathematically cumbersome in application, if not difficult to determine for a particular disease process. This article is protected by copyright. All rights reserved.
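For readers who want the arithmetic the authors argue is under-taught, the standard definitions for a dichotomous test, and the update they enable, are:

```latex
\mathrm{LR}^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}}, \qquad
\mathrm{LR}^{-} = \frac{1 - \text{sensitivity}}{\text{specificity}}, \qquad
\text{posttest odds} = \text{pretest odds} \times \mathrm{LR} .
```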
Wang, Lina; Li, Hao; Yang, Zhongyuan; Guo, Zhuming; Zhang, Quan
2015-07-01
This study was designed to assess the efficiency of the serum thyrotropin to thyroglobulin ratio for thyroid nodule evaluation in euthyroid patients. Cross-sectional study. Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China. Retrospective analysis was performed for 400 previously untreated cases presenting with thyroid nodules. Thyroid function was tested with commercially available radioimmunoassays. The receiver operating characteristic curves were constructed to determine cutoff values. The efficacy of the thyrotropin:thyroglobulin ratio and thyroid-stimulating hormone for thyroid nodule evaluation was evaluated in terms of sensitivity, specificity, positive predictive value, positive likelihood ratio, negative likelihood ratio, and odds ratio. In receiver operating characteristic curve analysis, the area under the curve was 0.746 for the thyrotropin:thyroglobulin ratio and 0.659 for thyroid-stimulating hormone. With a cutoff point value of 24.97 IU/g for the thyrotropin:thyroglobulin ratio, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 78.9%, 60.8%, 75.5%, 2.01, and 0.35, respectively. The odds ratio for the thyrotropin:thyroglobulin ratio indicating malignancy was 5.80. With a cutoff point value of 1.525 µIU/mL for thyroid-stimulating hormone, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 74.0%, 53.2%, 70.8%, 1.58, and 0.49, respectively. The odds ratio indicating malignancy for thyroid-stimulating hormone was 3.23. Increasing preoperative serum thyrotropin:thyroglobulin ratio is a risk factor for thyroid carcinoma, and the correlation of the thyrotropin:thyroglobulin ratio to malignancy is higher than that for serum thyroid-stimulating hormone. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
A Sm-Nd isotopic study of atmospheric dusts and particulates from major river systems
NASA Technical Reports Server (NTRS)
Goldstein, S. L.; Onions, R. K.; Hamilton, P. J.
1984-01-01
Nd-143/Nd-144 ratios, together with Sm and Nd abundances, are given for particulates from major and minor rivers as well as continental sediments and aeolian dusts collected over the Atlantic, Pacific, and Indian Oceans. In combination with data from the literature, the present results have implications for the age, history, and composition of the sedimentary mass and the continental crust. It is noted that the average ratio of Sm/Nd is about 0.19 in the upper continental crust, and has remained so since the early Archean, thereby precluding the likelihood of major mafic-to-felsic or felsic-to-mafic trends in the overall composition of the upper continental crust through earth history. The average 'crustal residence age' of the entire sedimentary mass is about 1.9 Ga.
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-01-01
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of an event-related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate for the uncertain delays, which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
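For context, a minimal sketch of the conventional baseline mentioned at the end of the abstract, in which each trial is aligned to the running average by cross-correlation before re-averaging (all names and parameters are illustrative; this is not the authors' joint-ML scheme):

```python
import numpy as np

def align_trials_to_average(trials, max_lag, n_iter=5):
    """Average-referenced delay compensation for ERP trials.

    trials: (n_trials, n_samples) array. Each pass re-estimates every trial's
    delay as the lag maximizing correlation with the current average, then
    re-averages the shifted trials.
    """
    aligned = np.array(trials, dtype=float)
    lags = np.arange(-max_lag, max_lag + 1)
    for _ in range(n_iter):
        reference = aligned.mean(axis=0)
        for i, trial in enumerate(np.asarray(trials, dtype=float)):
            scores = [np.dot(np.roll(trial, k), reference) for k in lags]
            aligned[i] = np.roll(trial, lags[int(np.argmax(scores))])
            # np.roll is a circular shift; a real implementation would zero-pad.
    return aligned, aligned.mean(axis=0)
```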
NGS-based likelihood ratio for identifying contributors in two- and three-person DNA mixtures.
Chan Mun Wei, Joshua; Zhao, Zicheng; Li, Shuai Cheng; Ng, Yen Kaow
2018-06-01
DNA fingerprinting, also known as DNA profiling, serves as a standard procedure in forensics to identify a person by the short tandem repeat (STR) loci in their DNA. By comparing the STR loci between DNA samples, practitioners can calculate a probability of match to identify the contributors of a DNA mixture. Most existing methods are based on 13 core STR loci which were identified by the Federal Bureau of Investigation (FBI). Analyses based on these loci of DNA mixtures for forensic purposes are highly variable in procedures, and suffer from subjectivity as well as bias in complex mixture interpretation. With the emergence of next-generation sequencing (NGS) technologies, the sequencing of billions of DNA molecules can be parallelized, thus greatly increasing throughput and reducing the associated costs. This allows the creation of new techniques that incorporate more loci to enable complex mixture interpretation. In this paper, we propose a likelihood ratio computation that uses NGS data for DNA testing on mixed samples. We have applied the method to 4480 simulated DNA mixtures, constructed in various mixture proportions from eight unrelated whole-genome sequencing datasets. The results confirm the feasibility of utilizing NGS data in DNA mixture interpretation. We observed an average likelihood ratio as high as 285,978 for two-person mixtures. Using our method, all 224 identity tests for two-person and three-person mixtures were correctly resolved. Copyright © 2018 Elsevier Ltd. All rights reserved.
Newman, Phil; Adams, Roger; Waddington, Gordon
2012-09-01
To examine the relationship between two clinical test results and future diagnosis of medial tibial stress syndrome (MTSS) in personnel at a military trainee establishment. Data from a preparticipation musculoskeletal screening test performed on 384 Australian Defence Force Academy Officer Cadets were compared against 693 injuries reported by 326 of the Officer Cadets in the following 16 months. Data were held in an Injury Surveillance database and analysed using χ² and Fisher's Exact tests, and Receiver Operating Characteristic curve analysis. Diagnosis of MTSS was confirmed by an independent blinded health practitioner. The palpation and oedema clinical tests were each found to be significant predictors of later onset of MTSS. Specifically: Shin palpation test OR 4.63, 95% CI 2.5 to 8.5, Positive Likelihood Ratio 3.38, Negative Likelihood Ratio 0.732, Pearson χ² p<0.001; Shin oedema test OR 76.1, 95% CI 9.6 to 602.7, Positive Likelihood Ratio 7.26, Negative Likelihood Ratio 0.095, Fisher's Exact p<0.001; Combined Shin Palpation Test and Shin Oedema Test Positive Likelihood Ratio 7.94, Negative Likelihood Ratio <0.001, Fisher's Exact p<0.001. Female gender was found to be an independent risk factor (OR 2.97, 95% CI 1.66 to 5.31, Positive Likelihood Ratio 2.09, Negative Likelihood Ratio 0.703, Pearson χ² p<0.001) for developing MTSS. The tests for MTSS employed here are components of a normal clinical examination used to diagnose MTSS. This paper confirms that these tests and female gender can also be confidently applied in predicting those in an asymptomatic population who are at greater risk of developing MTSS symptoms with activity at some point in the future.
Development of the Average Likelihood Function for Code Division Multiple Access (CDMA) Using BPSK and QPSK Symbols
2015-01-01
This research has the purpose of establishing a foundation for new classification and estimation of CDMA signals. Keywords: DS/CDMA signals, BPSK, QPSK ...
The likelihood ratio as a random variable for linked markers in kinship analysis.
Egeland, Thore; Slooten, Klaas
2016-11-01
The likelihood ratio is the fundamental quantity that summarizes the evidence in forensic cases. Therefore, it is important to understand the theoretical properties of this statistic. This paper is the last in a series of three, and the first to study linked markers. We show that for all non-inbred pairwise kinship comparisons, the expected likelihood ratio in favor of a type of relatedness depends on the allele frequencies only via the number of alleles, also for linked markers, and also if the true relationship is another one than is tested for by the likelihood ratio. Exact expressions for the expectation and variance are derived for all these cases. Furthermore, we show that the expected likelihood ratio is a non-increasing function of the recombination rate on [0, 0.5] when the actual relationship is the one investigated by the LR. Besides being of theoretical interest, exact expressions such as those obtained here can be used for software validation, as they allow the correctness to be verified to arbitrary precision. The paper also presents results and advice of practical importance. For example, we argue that the logarithm of the likelihood ratio behaves in a fundamentally different way than the likelihood ratio itself in terms of expectation and variance, in agreement with its interpretation as weight of evidence. Equipped with the results presented and freely available software, one may check calculations and software and also do power calculations.
Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin
2016-01-01
In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived in order to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood-ratio approach, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As will be shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
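A simplified one-dimensional sketch of the idea (ours, for illustration: the paper's method is multivariate, and here the within-source spread sigma_w is assumed known and constant while the between-source density is a fitted GMM). For Gaussian components, both integrals in the two-level likelihood ratio are analytic:

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_between_source_gmm(source_means, n_components=3, seed=0):
    """Fit the between-source distribution of a 1-D feature with a GMM."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed)
    gmm.fit(np.asarray(source_means).reshape(-1, 1))
    return gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()

def two_level_lr(x1, x2, sigma_w, w, m, s2):
    """LR for 'same source' vs 'different sources' for two observations.

    Within-source: x | mu ~ N(mu, sigma_w^2); between-source: mu ~ GMM(w, m, s2).
    Marginalizing mu per component gives closed-form Gaussians.
    """
    d, a = x1 - x2, 0.5 * (x1 + x2)
    # Same source: density of (difference, midpoint) under a shared mu.
    num = norm.pdf(d, 0.0, np.sqrt(2.0) * sigma_w) * np.sum(
        w * norm.pdf(a, m, np.sqrt(s2 + 0.5 * sigma_w ** 2)))
    # Different sources: product of the two marginal densities.
    marg = lambda x: np.sum(w * norm.pdf(x, m, np.sqrt(s2 + sigma_w ** 2)))
    return num / (marg(x1) * marg(x2))
```

Replacing `GaussianMixture` with a kernel density over the source means would recover the KDF baseline the paper compares against.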
Wang, Liang; Xia, Yu; Jiang, Yu-Xin; Dai, Qing; Li, Xiao-Yi
2012-11-01
To assess the efficacy of sonography for discriminating nodular Hashimoto thyroiditis from papillary thyroid carcinoma in patients with sonographically evident diffuse Hashimoto thyroiditis. This study included 20 patients with 24 surgically confirmed Hashimoto thyroiditis nodules and 40 patients with 40 papillary thyroid carcinoma nodules; all had sonographically evident diffuse Hashimoto thyroiditis. A retrospective review of the sonograms was performed, and significant benign and malignant sonographic features were selected by univariate and multivariate analyses. The combined likelihood ratio was calculated as the product of each feature's likelihood ratio for papillary thyroid carcinoma. We compared the abilities of the original sonographic features and combined likelihood ratios in diagnosing nodular Hashimoto thyroiditis and papillary thyroid carcinoma by their sensitivity, specificity, and Youden index. The diagnostic capabilities of the sonographic features varied greatly, with Youden indices ranging from 0.175 to 0.700. Compared with single features, combinations of features were unable to improve the Youden indices effectively because the sensitivity and specificity usually changed in opposite directions. For combined likelihood ratios, however, the sensitivity improved greatly without an obvious reduction in specificity, which resulted in the maximum Youden index (0.825). With a combined likelihood ratio greater than 7.00 as the diagnostic criterion for papillary thyroid carcinoma, sensitivity reached 82.5%, whereas specificity remained at 100.0%. With a combined likelihood ratio less than 1.00 for nodular Hashimoto thyroiditis, sensitivity and specificity were 90.0% and 92.5%, respectively. Several sonographic features of nodular Hashimoto thyroiditis and papillary thyroid carcinoma in a background of diffuse Hashimoto thyroiditis were significantly different. The combined likelihood ratio may be superior to original sonographic features for discrimination of nodular Hashimoto thyroiditis from papillary thyroid carcinoma; therefore, it is a promising risk index for thyroid nodules and warrants further investigation.
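Since the combined likelihood ratio is defined as the product of the per-feature LRs, it implicitly treats the sonographic features as conditionally independent given the diagnosis. A toy illustration with hypothetical numbers:

```python
import numpy as np

# Hypothetical per-feature likelihood ratios for one nodule:
feature_lrs = [3.5, 0.9, 4.2]
combined_lr = np.prod(feature_lrs)  # 13.23
# Under the study's criteria, > 7.00 suggests papillary thyroid carcinoma
# and < 1.00 suggests nodular Hashimoto thyroiditis.
print(combined_lr)
```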
NASA Technical Reports Server (NTRS)
Cash, W.
1979-01-01
Many problems in the experimental estimation of parameters for models can be solved through use of the likelihood ratio test. Applications of the likelihood ratio, with particular attention to photon counting experiments, are discussed. The procedures presented solve a greater range of problems than those currently in use, yet are no more difficult to apply. The procedures are proved analytically, and examples from current problems in astronomy are discussed.
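For Poisson (photon-counting) data, the quantities involved take a familiar form; in standard notation (textbook results, not quoted from the paper), with $n_i$ the observed and $e_i(\theta)$ the model-predicted counts in bin $i$:

```latex
\ln L(\theta) = \sum_i \left[ n_i \ln e_i(\theta) - e_i(\theta) - \ln n_i! \right],
\qquad
-2 \ln \frac{L(\hat{\theta}_0)}{L(\hat{\theta})} \;\xrightarrow{\;d\;}\; \chi^2_{p},
```

where $\hat{\theta}_0$ maximizes the restricted model, $\hat{\theta}$ the full model, and $p$ is the number of additional free parameters.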
A Computer-Aided Diagnosis System for Breast Cancer Combining Mammography and Proteomics
2007-05-01
findings in both Data sets C and M. The likelihood ratio is the probability of the features under the malignant case divided by the probability of ... likelihood ratio value as a classification decision variable, the probabilities of detection and false alarm are calculated as follows: Pdfusion ... lowered the fused classifier's performance to near chance levels. A genetic algorithm searched over the likelihood-ratio threshold values for each ...
Measuring coherence of computer-assisted likelihood ratio methods.
Haraksim, Rudolf; Ramos, Daniel; Meuwly, Didier; Berger, Charles E H
2015-04-01
Measuring the performance of forensic evaluation methods that compute likelihood ratios (LRs) is relevant for both the development and the validation of such methods. A framework of performance characteristics categorized as primary and secondary is introduced in this study to help achieve such development and validation. Ground-truth labelled fingerprint data is used to assess the performance of an example likelihood ratio method in terms of those performance characteristics. Discrimination, calibration, and especially the coherence of this LR method are assessed as a function of the quantity and quality of the trace fingerprint specimen. Assessment of the coherence revealed a weakness of the comparison algorithm in the computer-assisted likelihood ratio method used. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Interpreting DNA mixtures with the presence of relatives.
Hu, Yue-Qing; Fung, Wing K
2003-02-01
The assessment of DNA mixtures with the presence of relatives is discussed in this paper. The kinship coefficients are incorporated into the evaluation of the likelihood ratio and we first derive a unified expression of joint genotypic probabilities. A general formula and seven types of detailed expressions for calculating likelihood ratios are then developed for the case that a relative of the tested suspect is an unknown contributor to the mixed stain. These results can also be applied to the case of a non-tested suspect with one tested relative. Moreover, the formula for calculating the likelihood ratio when there are two related unknown contributors is given. Data for a real situation are given for illustration, and the effect of kinship on the likelihood ratio is shown therein. Some interesting findings are obtained.
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1990-01-01
A real-time digital signal combining system for use with Ka-band feed arrays is proposed. The combining system attempts to compensate for signal-to-noise ratio (SNR) loss resulting from antenna deformations induced by gravitational and atmospheric effects. The combining weights are obtained directly from the observed samples by using a sliding-window implementation of a vector maximum-likelihood parameter estimator. It is shown that with averaging times of about 0.1 second, combining loss for a seven-element array can be limited to about 0.1 dB in a realistic operational environment. This result suggests that the real-time combining system proposed here is capable of recovering virtually all of the signal power captured by the feed array, even in the presence of severe wind gusts and similar disturbances.
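A minimal sketch of a sliding-window combining-weight estimator of this general kind (our simplification, not the paper's exact vector maximum-likelihood estimator: here each window's weights are taken as the dominant eigenvector of the sample covariance, a standard blind stand-in):

```python
import numpy as np

def sliding_window_weights(samples, window):
    """Blind combining weights for an array feed, one vector per window.

    samples: (n_elements, n_samples) complex array of feed outputs.
    """
    n_el, n = samples.shape
    weights = []
    for start in range(0, n - window + 1, window):
        block = samples[:, start:start + window]
        cov = block @ block.conj().T / window  # sample covariance matrix
        vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
        weights.append(vecs[:, -1])            # dominant eigenvector
    return np.array(weights)
```

With roughly 0.1-second windows, as in the paper, the weight updates would track gravity- and wind-induced deformation while averaging out noise.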
Explaining the effect of event valence on unrealistic optimism.
Gold, Ron S; Brown, Mark G
2009-05-01
People typically exhibit 'unrealistic optimism' (UO): they believe they have a lower chance of experiencing negative events and a higher chance of experiencing positive events than does the average person. UO has been found to be greater for negative than positive events. This 'valence effect' has been explained in terms of motivational processes. An alternative explanation is provided by the 'numerosity model', which views the valence effect simply as a by-product of a tendency for likelihood estimates pertaining to the average member of a group to increase with the size of the group. Predictions made by the numerosity model were tested in two studies. In each, UO for a single event was assessed. In Study 1 (n = 115 students), valence was manipulated by framing the event either negatively or positively, and participants estimated their own likelihood and that of the average student at their university. In Study 2 (n = 139 students), valence was again manipulated and participants again estimated their own likelihood; additionally, group size was manipulated by having participants estimate the likelihood of the average student in a small, medium-sized, or large group. In each study, the valence effect was found, but was due to an effect on estimates of own likelihood, not the average person's likelihood. In Study 2, valence did not interact with group size. The findings contradict the numerosity model, but are in accord with the motivational explanation. Implications for health education are discussed.
Skewed sex ratios in India: "physician, heal thyself".
Patel, Archana B; Badhoniya, Neetu; Mamtani, Manju; Kulkarni, Hemant
2013-06-01
Sex selection, a gender discrimination of the worst kind, is highly prevalent across all strata of Indian society. Physicians have a crucial role in this practice and implementation of the Indian Government's Pre-Natal Diagnostic Techniques Act in 1996 to prevent the misuse of ultrasound techniques for the purpose of prenatal sex determination. Little is known about family preferences, let alone preferences among families of physicians. We investigated the sex ratios in 946 nuclear families with 1,624 children, for which either one or both parents were physicians. The overall child sex ratio was more skewed than the national average of 914. The conditional sex ratios decreased with increasing number of previous female births, and a previous birth of a daughter in the family was associated with a 38 % reduced likelihood of a subsequent female birth. The heavily skewed sex ratios in the families of physicians are indicative of a deeply rooted social malady that could pose a critical challenge in correcting the sex ratios in India.
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
Maximum likelihood estimation of signal-to-noise ratio and combiner weight
NASA Technical Reports Server (NTRS)
Kalson, S.; Dolinar, S. J.
1986-01-01
An algorithm for estimating signal-to-noise ratio and combiner weight parameters for a discrete time series is presented. The algorithm is based upon the joint maximum likelihood estimate of the signal and noise power. The discrete-time series are the sufficient statistics obtained after matched filtering of a biphase-modulated signal in additive white Gaussian noise, before maximum likelihood decoding is performed.
Mikula, A L; Hetzel, S J; Binkley, N; Anderson, P A
2017-05-01
Many osteoporosis-related vertebral fractures are unappreciated, but their detection is important as their presence increases future fracture risk. We found height loss is a useful tool in detecting patients with vertebral fractures, low bone mineral density, and vitamin D deficiency, which may lead to improvements in patient care. This study aimed to determine if and how height loss can be used to identify patients with vertebral fractures, low bone mineral density, and vitamin D deficiency. A hospital database search was performed in which four patient groups, comprising those with a diagnosis of osteoporosis-related vertebral fracture, osteoporosis, osteopenia, or vitamin D deficiency, and a control group were evaluated for chart-documented height loss over an average 3.5- to 4-year period. Data were retrieved from 66,021 patients (25,792 men and 40,229 women). A height loss of 1, 2, 3, and 4 cm had a sensitivity of 42, 32, 19, and 14% in detecting vertebral fractures, respectively. Positive likelihood ratios for detecting vertebral fractures were 1.73, 2.35, and 2.89 at 2, 3, and 4 cm of height loss, respectively. Height loss had lower sensitivities and positive likelihood ratios for detecting low bone mineral density and vitamin D deficiency compared to vertebral fractures. The specificity of 1, 2, 3, and 4 cm of height loss was 70, 82, 92, and 95%, respectively. The odds ratio for a patient who loses 1 cm of height being in one of the four diagnostic groups, compared to a patient who loses no height, was higher for younger and male patients. This study demonstrated that prospective height loss is an effective tool to identify patients with vertebral fractures, low bone mineral density, and vitamin D deficiency, although a lack of height loss does not rule out these diagnoses. If significant height loss is present, the high positive likelihood ratios support a further workup.
Ma, Chunming; Liu, Yue; Lu, Qiang; Lu, Na; Liu, Xiaoli; Tian, Yiming; Wang, Rui; Yin, Fuzai
2016-02-01
The blood pressure-to-height ratio (BPHR) has been shown to be an accurate index for screening hypertension in children and adolescents. The aim of the present study was to perform a meta-analysis to assess the performance of the BPHR for the assessment of hypertension. Electronic and manual searches were performed to identify studies of the BPHR. After methodological quality assessment and data extraction, pooled estimates of the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, area under the receiver operating characteristic curve and summary receiver operating characteristics were assessed systematically, along with the extent of heterogeneity. Six studies were identified for analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio and diagnostic odds ratio values of the BPHR, for assessment of hypertension, were 96% [95% confidence interval (CI)=0.95-0.97], 90% (95% CI=0.90-0.91), 10.68 (95% CI=8.03-14.21), 0.04 (95% CI=0.03-0.07) and 247.82 (95% CI=114.50-536.34), respectively. The area under the receiver operating characteristic curve was 0.9472. The BPHR had high diagnostic accuracy for identifying hypertension in children and adolescents.
ERIC Educational Resources Information Center
Levy, Roy
2010-01-01
SEMModComp, a software package for conducting likelihood ratio tests for mean and covariance structure modeling is described. The package is written in R and freely available for download or on request.
Validation of software for calculating the likelihood ratio for parentage and kinship.
Drábek, J
2009-03-01
Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to suit general requirements for the competence of testing and calibration laboratories (EN/ISO/IEC 17025:2005 norm) per se. The software in question can be considered critical, as it directly weighs the forensic evidence, allowing judges to decide on guilt or innocence or to identify persons or kin (e.g., in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios, I assessed available vendors, chose two programs (Paternity Index and familias) for testing, and finally validated them using tests derived from elaboration of the available guidelines for the fields of forensics, biomedicine, and software engineering. MS Excel calculation using known likelihood ratio formulas or peer-reviewed results of difficult paternity cases were used as a reference. Using seven testing cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of the two software programs fulfills the criteria needed for our purpose across the whole spectrum of functions under validation, with the exception of providing algebraic formulas in cases of mutation and/or silent alleles.
Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les
2008-01-01
To compare three predictive models based on logistic regression for estimating adjusted likelihood ratios that allow for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
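The Spiegelhalter and Knill-Jones idea can be sketched in a few lines: compute each test's unadjusted log likelihood ratios, use them as logistic regression predictors, and read the fitted coefficients as shrinkage factors. The data below are synthetic and the code only illustrates the mechanics, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 4000
disease = rng.random(n) < 0.3
# two correlated binary tests driven by the same latent severity
latent = rng.normal(disease.astype(float), 1.0)
test1 = latent + rng.normal(0, 0.5, n) > 0.5
test2 = latent + rng.normal(0, 0.5, n) > 0.6

def log_lrs(test, disease):
    """Unadjusted log likelihood ratios for a positive and a negative result."""
    sens = test[disease].mean()
    spec = (~test[~disease]).mean()
    return np.log(sens / (1 - spec)), np.log((1 - sens) / spec)

# each subject's predictor is the log LR of their observed test result
X = np.column_stack([np.where(t, *log_lrs(t, disease)) for t in (test1, test2)])
model = LogisticRegression().fit(X, disease)
print("shrinkage factors:", model.coef_.ravel())  # typically < 1 for correlated tests
```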
Validation of DNA-based identification software by computation of pedigree likelihood ratios.
Slooten, K
2011-08-01
Disaster victim identification (DVI) can be aided by DNA evidence, by comparing the DNA profiles of unidentified individuals with those of surviving relatives. The DNA evidence is used optimally when such a comparison is done by calculating the appropriate likelihood ratios. Though conceptually simple, the calculations can be quite involved, especially with large pedigrees, precise mutation models, etc. In this article we describe a series of test cases designed to check whether software designed to calculate such likelihood ratios computes them correctly. The cases include both simple and more complicated pedigrees, including inbred ones. We show how to calculate the likelihood ratio numerically and algebraically, including a general mutation model and the possibility of allelic dropout. In Appendix A we show how to derive such algebraic expressions mathematically. We have set up these cases to validate new software, called Bonaparte, which performs pedigree likelihood ratio calculations in a DVI context. Bonaparte has been developed by SNN Nijmegen (The Netherlands) for the Netherlands Forensic Institute (NFI). It is available free of charge for non-commercial purposes (see www.dnadvi.nl for details). Commercial licenses can also be obtained. The software uses Bayesian networks and the junction tree algorithm to perform its calculations. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Chen, Helen; Bautista, Dianne; Ch'ng, Ying Chia; Li, Wenyun; Chan, Edwin; Rush, A John
2013-06-01
The Edinburgh Postnatal Depression Scale (EPDS) may not be a uniformly valid postnatal depression (PND) screen across populations. We evaluated the performance of a Chinese translation of the 10-item (HK-EPDS) and six-item (HK-EPDS-6) versions in post-partum women in Singapore. Chinese-speaking post-partum obstetric clinic patients were recruited for this study. They completed the HK-EPDS, from which we derived the six-item HK-EPDS-6. All women were clinically assessed for PND based on Diagnostic and Statistical Manual, Fourth Edition-Text Revision criteria. Receiver operating characteristic (ROC) curve analyses and likelihood ratio computations informed scale cutoff choices. Clinical fitness was judged by thresholds for internal consistency [α ≥ 0.70] and for diagnostic performance by true-positive rate (>85%), false-positive rate (≤10%), positive likelihood ratio (>1), negative likelihood ratio (<0.2), area under the ROC curve (AUC, ≥90%) and effect size (≥0.80). Based on clinical interview, the prevalence of PND was 6.2% in 487 post-partum women. HK-EPDS internal consistency was 0.84. At a cutoff of 13 or more, the true-positive rate was 86.7%, false-positive rate 3.3%, positive likelihood ratio 26.4, negative likelihood ratio 0.14, AUC 94.4% and effect size 0.81. For the HK-EPDS-6, internal consistency was 0.76. At a cutoff of 8 or more, we found a true-positive rate of 86.7%, false-positive rate 6.6%, positive likelihood ratio 13.2, negative likelihood ratio 0.14, AUC 92.9% and effect size 0.98. The HK-EPDS (cutoff ≥13) and HK-EPDS-6 (cutoff ≥8) are fit for PND screening in general-population post-partum women. The brief six-item version appears to be clinically suitable for quick screening in Chinese-speaking women. Copyright © 2013 Wiley Publishing Asia Pty Ltd.
Contagion in Mass Killings and School Shootings.
Towers, Sherry; Gomez-Lievano, Andres; Khan, Maryam; Mubayi, Anuj; Castillo-Chavez, Carlos
2015-01-01
Several past studies have found that media reports of suicides and homicides appear to subsequently increase the incidence of similar events in the community, apparently due to the coverage planting the seeds of ideation in at-risk individuals to commit similar acts. Here we explore whether or not contagion is evident in more high-profile incidents, such as school shootings and mass killings (incidents with four or more people killed). We fit a contagion model to recent data sets related to such incidents in the US, with terms that take into account the fact that a school shooting or mass murder may temporarily increase the probability of a similar event in the immediate future, by assuming an exponential decay in contagiousness after an event. We find significant evidence that mass killings involving firearms are incented by similar events in the immediate past. On average, this temporary increase in probability lasts 13 days, and each incident incites at least 0.30 new incidents (p = 0.0015). We also find significant evidence of contagion in school shootings, for which an incident is contagious for an average of 13 days, and incites an average of at least 0.22 new incidents (p = 0.0001). All p-values are assessed based on a likelihood ratio test comparing the likelihood of a contagion model to that of a null model with no contagion. On average, mass killings involving firearms occur approximately every two weeks in the US, while school shootings occur on average monthly. We find that state prevalence of firearm ownership is significantly associated with the state incidence of mass killings with firearms, school shootings, and mass shootings.
Masch, William R; Cohan, Richard H; Ellis, James H; Dillman, Jonathan R; Rubin, Jonathan M; Davenport, Matthew S
2016-02-01
The purpose of this study was to determine the clinical effectiveness of prospectively reported sonographic twinkling artifact for the diagnosis of renal calculus in patients without known urolithiasis. All ultrasound reports finalized in one health system from June 15, 2011, to June 14, 2014, that contained the words "twinkle" or "twinkling" in reference to suspected renal calculus were identified. Patients with known urolithiasis or lack of a suitable reference standard (unenhanced abdominal CT with ≤ 2.5-mm slice thickness performed ≤ 30 days after ultrasound) were excluded. The sensitivity, specificity, and positive likelihood ratio of sonographic twinkling artifact for the diagnosis of renal calculus were calculated by renal unit and stratified by two additional diagnostic features for calcification (echogenic focus, posterior acoustic shadowing). Eighty-five patients formed the study population. Isolated sonographic twinkling artifact had sensitivity of 0.78 (82/105), specificity of 0.40 (26/65), and a positive likelihood ratio of 1.30 for the diagnosis of renal calculus. Specificity and positive likelihood ratio improved and sensitivity declined when the following additional diagnostic features were present: sonographic twinkling artifact and echogenic focus (sensitivity, 0.61 [64/105]; specificity, 0.65 [42/65]; positive likelihood ratio, 1.72); sonographic twinkling artifact and posterior acoustic shadowing (sensitivity, 0.31 [33/105]; specificity, 0.95 [62/65]; positive likelihood ratio, 6.81); all three features (sensitivity, 0.31 [33/105]; specificity, 0.95 [62/65]; positive likelihood ratio, 6.81). Isolated sonographic twinkling artifact has a high false-positive rate (60%) for the diagnosis of renal calculus in patients without known urolithiasis.
Van Hoeyveld, Erna; Nickmans, Silvie; Ceuppens, Jan L; Bossuyt, Xavier
2015-10-23
Cut-off values and predictive values are used for the clinical interpretation of specific IgE antibody results. However, cut-off levels are not well defined, and predictive values depend on the prevalence of disease. The objective of this study was to document clinically relevant diagnostic accuracy of specific IgE for inhalant allergens (grass pollen and birch pollen) based on test-result-interval-specific likelihood ratios. Likelihood ratios are independent of the prevalence and make it possible to provide diagnostic accuracy information for test result intervals. In a prospective study we included consecutive adult patients presenting at an allergy clinic with complaints of rhinitis or rhinoconjunctivitis. The standard for diagnosis was a suggestive clinical history of grass or birch pollen allergy and a positive skin test. Specific IgE was determined with the ImmunoCAP Fluorescence Enzyme Immuno-Assay. We established test-result-interval likelihood ratios of specific IgE for clinical allergy to inhalant allergens (grass pollen, rPhl p 1,5; birch pollen, rBet v 1). The likelihood ratios for allergy increased with increasing specific IgE antibody levels. The likelihood ratio was <0.03 for specific IgE <0.1 kU/L, between 0.1 and 1.4 for specific IgE between 0.1 kU/L and 0.35 kU/L, between 1.4 and 4.2 for specific IgE between 0.35 kU/L and 3.5 kU/L, >6.3 for specific IgE >0.7 kU/L, and very high (∞) for specific IgE >3.5 kU/L. Test-result-interval likelihood ratios provide a useful tool for the interpretation of specific IgE test results for inhalant allergens. Copyright © 2015 Elsevier B.V. All rights reserved.
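Interval-specific likelihood ratios of this kind are simple to compute: within each test-result interval, divide the fraction of allergic patients falling in that interval by the fraction of non-allergic patients falling in it. A sketch with invented counts (the intervals mirror the abstract; the numbers do not reproduce the study):

```python
# interval LR = P(result in interval | allergic) / P(result in interval | not)
intervals    = ["<0.1", "0.1-0.35", "0.35-3.5", ">3.5"]   # specific IgE, kU/L
allergic     = [  2,  10,  60, 128]    # hypothetical counts per interval
non_allergic = [140,  40,  18,   2]

n_a, n_n = sum(allergic), sum(non_allergic)
for name, a, b in zip(intervals, allergic, non_allergic):
    print(f"IgE {name} kU/L: interval LR = {(a / n_a) / (b / n_n):.2f}")
# The LR rises with the IgE level, as in the abstract; unlike predictive
# values, interval LRs do not depend on the prevalence in the sample.
```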
el Galta, Rachid; Uitte de Willige, Shirley; de Visser, Marieke C H; Helmer, Quinta; Hsu, Li; Houwing-Duistermaat, Jeanine J
2007-09-24
In this paper, we propose a one degree of freedom test for association between a candidate gene and a binary trait. This method is a generalization of Terwilliger's likelihood ratio statistic and is especially powerful for the situation of one associated haplotype. As an alternative to the likelihood ratio statistic, we derive a score statistic, which has a tractable expression. For haplotype analysis, we assume that phase is known. By means of a simulation study, we compare the performance of the score statistic to Pearson's chi-square statistic and the likelihood ratio statistic proposed by Terwilliger. We illustrate the method on three candidate genes studied in the Leiden Thrombophilia Study. We conclude that the statistic follows a chi-square distribution under the null hypothesis and that the score statistic is more powerful than Terwilliger's likelihood ratio statistic when the associated haplotype has a frequency between 0.1 and 0.4 and has a small impact on the studied disorder. With regard to Pearson's chi-square statistic, the score statistic has more power when the associated haplotype has a frequency above 0.2 and the number of variants is above five.
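For reference, the generic recipe behind a one-degree-of-freedom likelihood ratio test such as the ones compared here: twice the log-likelihood difference between the alternative and null fits is referred to a chi-square distribution with 1 df. The fitted log-likelihood values below are placeholders, not values from the study.

```python
from scipy import stats

ll_null, ll_alt = -1250.4, -1244.1      # hypothetical maximized log-likelihoods
lrt = 2.0 * (ll_alt - ll_null)
p_value = stats.chi2.sf(lrt, df=1)      # 1 df: one extra free parameter
print(f"LRT = {lrt:.2f}, p = {p_value:.4f}")
```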
Prolonged Operative Duration Increases Risk of Surgical Site Infections: A Systematic Review
Chen, Brian Po-Han; Soleas, Ireena M.; Ferko, Nicole C.; Cameron, Chris G.; Hinoul, Piet
2017-01-01
Background: The incidence of surgical site infection (SSI) across surgical procedures, specialties, and conditions is reported to vary from 0.1% to 50%. Operative duration is often cited as an independent and potentially modifiable risk factor for SSI. The objective of this systematic review was to provide an in-depth understanding of the relation between operating time and SSI. Patients and Methods: This review included 81 prospective and retrospective studies. Along with study design, likelihood of SSI, mean operative times, time thresholds, effect measures, confidence intervals, and p values were extracted. Three meta-analyses were conducted, whereby odds ratios were pooled by hourly operative time thresholds, increments of increasing operative time, and surgical specialty. Results: Pooled analyses demonstrated that the association between extended operative time and SSI typically remained statistically significant, with close to twice the likelihood of SSI observed across various time thresholds. The likelihood of SSI increased with increasing time increments; for example, a 13%, 17%, and 37% increased likelihood for every 15 min, 30 min, and 60 min of surgery, respectively. On average, across various procedures, the mean operative time was approximately 30 min longer in patients with SSIs compared with those patients without. Conclusions: Prolonged operative time can increase the risk of SSI. Given the importance of SSIs on patient outcomes and health care economics, hospitals should focus efforts to reduce operative time. PMID:28832271
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size and marginalized over it. By exploiting ignorability, they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
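The phenomenon is easy to reproduce in a toy simulation with invented design parameters: a two-stage trial that stops at n1 when the interim mean exceeds a cutoff. The stratum-specific averages look biased, which is the false impression the paper warns about, while the marginal bias is far smaller and vanishes asymptotically.

```python
import numpy as np

rng = np.random.default_rng(4)
mu, n1, n2, cutoff, reps = 0.0, 50, 100, 0.0, 50_000

x1 = rng.normal(mu, 1.0, (reps, n1))        # stage-1 data
x2 = rng.normal(mu, 1.0, (reps, n2 - n1))   # stage-2 data, used if trial continues
m1 = x1.mean(axis=1)
stop = m1 > cutoff                          # deterministic stopping rule
overall = np.where(stop, m1, (x1.sum(axis=1) + x2.sum(axis=1)) / n2)

print("bias | stopped (N=n1):  ", overall[stop].mean() - mu)   # noticeably positive
print("bias | continued (N=n2):", overall[~stop].mean() - mu)  # negative
print("marginal bias:          ", overall.mean() - mu)         # much smaller
```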
A quantum framework for likelihood ratios
NASA Astrophysics Data System (ADS)
Bond, Rachael L.; He, Yang-Hui; Ormerod, Thomas C.
The ability to calculate precise likelihood ratios is fundamental to science, from Quantum Information Theory through to Quantum State Estimation. However, there is no assumption-free statistical methodology to achieve this. For instance, in the absence of data relating to covariate overlap, the widely used Bayes’ theorem either defaults to the marginal probability driven “naive Bayes’ classifier”, or requires the use of compensatory expectation-maximization techniques. This paper takes an information-theoretic approach in developing a new statistical formula for the calculation of likelihood ratios based on the principles of quantum entanglement, and demonstrates that Bayes’ theorem is a special case of a more general quantum mechanical expression.
Likelihood ratio decisions in memory: three implied regularities.
Glanzer, Murray; Hilford, Andrew; Maloney, Laurence T
2009-06-01
We analyze four general signal detection models for recognition memory that differ in their distributional assumptions. Our analyses show that a basic assumption of signal detection theory, the likelihood ratio decision axis, implies three regularities in recognition memory: (1) the mirror effect, (2) the variance effect, and (3) the z-ROC length effect. For each model, we present the equations that produce the three regularities and show, in computed examples, how they do so. We then show that the regularities appear in data from a range of recognition studies. The analyses and data in our study support the following generalization: Individuals make efficient recognition decisions on the basis of likelihood ratios.
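A compact way to see the likelihood ratio decision axis at work in the standard equal-variance signal detection setup (our own toy numbers, not the authors' data): with new-item strength N(0,1) and old-item strength N(d,1), the log likelihood ratio is d*x - d^2/2, so the LR = 1 criterion sits at x = d/2, and hit and correct-rejection rates rise together as d grows, which is the mirror effect.

```python
from scipy import stats

for d in (0.5, 1.0, 1.5, 2.0):
    criterion = d / 2.0                                  # where log LR(x) = 0
    hit_rate = 1 - stats.norm(d, 1).cdf(criterion)       # P(call "old" | old)
    correct_rejection = stats.norm(0, 1).cdf(criterion)  # P(call "new" | new)
    print(f"d = {d}: hits {hit_rate:.3f}, correct rejections {correct_rejection:.3f}")
# Both rates increase together with memory strength d: the mirror effect.
```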
Mwanza, Jean-Claude; Budenz, Donald L; Godfrey, David G; Neelakantan, Arvind; Sayyad, Fouad E; Chang, Robert T; Lee, Richard K
2014-04-01
To evaluate the glaucoma diagnostic performance of ganglion cell inner-plexiform layer (GCIPL) parameters used individually and in combination with retinal nerve fiber layer (RNFL) or optic nerve head (ONH) parameters measured with Cirrus HD-OCT (Carl Zeiss Meditec, Inc, Dublin, CA). Prospective cross-sectional study. Fifty patients with early perimetric glaucoma and 49 age-matched healthy subjects. Three peripapillary RNFL and 3 macular GCIPL scans were obtained in 1 eye of each participant. A patient was considered glaucomatous if at least 2 of the 3 RNFL or GCIPL scans had the average or at least 1 sector measurement flagged at 1% to 5% or less than 1%. The diagnostic performance was determined for each GCIPL, RNFL, and ONH parameter as well as for binary or-logic and and-logic combinations of GCIPL with RNFL or ONH parameters. Sensitivity, specificity, positive likelihood ratio (PLR), and negative likelihood ratio (NLR). Among GCIPL parameters, the minimum had the best diagnostic performance (sensitivity, 82.0%; specificity, 87.8%; PLR, 6.69; and NLR, 0.21). Inferior quadrant was the best RNFL parameter (sensitivity, 74%; specificity, 95.9%; PLR, 18.13; and NLR, 0.27), as was rim area (sensitivity, 68%; specificity, 98%; PLR, 33.3; and NLR, 0.33) among ONH parameters. The or-logic combination of minimum GCIPL and average RNFL provided the overall best diagnostic performance (sensitivity, 94%; specificity, 85.7%; PLR, 6.58; and NLR, 0.07) as compared with the best RNFL, best ONH, and best and-logic combination (minimum GCIPL and inferior quadrant RNFL; sensitivity, 64%; specificity, 100%; PLR, infinity; and NLR, 0.36). The binary or-logic combination of minimum GCIPL and average RNFL or rim area provides better diagnostic performance than and-logic combinations or the best single GCIPL, RNFL, or ONH parameter. This finding may be clinically valuable for the diagnosis of early glaucoma. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
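The direction of the or-logic/and-logic trade-off can be sketched by combining the two single-parameter results quoted above (minimum GCIPL and inferior quadrant RNFL) under an independence assumption; this is a simplification, since the measurements are correlated within the same eyes, but the output lands close to the reported combinations.

```python
# or-logic: positive if either test is positive; and-logic: both must be positive.
# Independence between the tests is assumed here purely for illustration.
def or_logic(s1, c1, s2, c2):
    return 1 - (1 - s1) * (1 - s2), c1 * c2

def and_logic(s1, c1, s2, c2):
    return s1 * s2, 1 - (1 - c1) * (1 - c2)

# minimum GCIPL: 82.0% / 87.8%; inferior quadrant RNFL: 74% / 95.9% (from above)
for name, rule in (("or-logic", or_logic), ("and-logic", and_logic)):
    sens, spec = rule(0.82, 0.878, 0.74, 0.959)
    print(f"{name}: sensitivity {sens:.2f}, specificity {spec:.2f}")
# roughly 0.95/0.84 and 0.61/0.99, near the reported 94%/85.7% and 64%/100%
```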
Likelihood-Ratio DIF Testing: Effects of Nonnormality
ERIC Educational Resources Information Center
Woods, Carol M.
2008-01-01
Differential item functioning (DIF) occurs when an item has different measurement properties for members of one group versus another. Likelihood-ratio (LR) tests for DIF based on item response theory (IRT) involve statistically comparing IRT models that vary with respect to their constraints. A simulation study evaluated how violation of the…
David, Ingrid; Bouvier, Frédéric; Ricard, Edmond; Ruesche, Julien; Weisbecker, Jean-Louis
2013-09-30
The pre-weaning growth of lambs, an important component of meat production, depends on maternal and direct effects. These effects cannot be observed directly and models used to study pre-weaning growth assume that they are additive. However, it is reasonable to suggest that the influence of direct effects on growth may differ depending on the value of maternal effects i.e. an interaction may exist between the two components. To test this hypothesis, an experiment was carried out in Romane sheep in order to obtain observations of maternal phenotypic effects (milk yield and milk quality) and pre-weaning growth of the lambs. The experiment consisted of mating ewes that had markedly different maternal genetic effects with rams that contributed very different genetic effects in four replicates of a 3 × 2 factorial plan. Milk yield was measured using the lamb suckling weight differential technique and milk composition (fat and protein contents) was determined by infrared spectroscopy at 15, 21 and 35 days after lambing. Lambs were weighed at birth and then at 15, 21 and 35 days. An interaction between genotype (of the lamb) and environment (milk yield and quality) for average daily gain was tested using a restricted likelihood ratio test, comparing a linear reaction norm model (interaction model) to a classical additive model (no interaction model). A total of 1284 weights of 442 lambs born from 166 different ewes were analysed. On average, the ewes produced 2.3 ± 0.8 L milk per day. The average protein and fat contents were 50 ± 4 g/L and 60 ± 18 g/L, respectively. The mean 0-35 day average daily gain was 207 ± 46 g/d. Results of the restricted likelihood ratio tests did not highlight any significant interactions between the genotype of the lambs and milk production of the ewe. Our results support the hypothesis of additivity of maternal and direct effects on growth that is currently applied in genetic evaluation models.
Li, Zhanzhan; Zhou, Qin; Li, Yanyan; Yan, Shipeng; Fu, Jun; Huang, Xinqiong; Shen, Liangfang
2017-02-28
We conducted a meta-analysis to evaluate the diagnostic value of mean cerebral blood volume for differentiating tumor recurrence from radiation injury in glioma patients. We performed systematic electronic searches for eligible studies up to August 8, 2016. Bivariate mixed-effects models were used to estimate the combined sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio and their 95% confidence intervals (CIs). Fifteen studies with a total of 576 participants were enrolled. The pooled sensitivity and specificity were 0.88 (95%CI: 0.82-0.92) and 0.85 (95%CI: 0.68-0.93). The pooled positive likelihood ratio was 5.73 (95%CI: 2.56-12.81), the negative likelihood ratio was 0.15 (95%CI: 0.10-0.22), and the diagnostic odds ratio was 39.34 (95%CI: 13.96-110.84). The area under the summary receiver operating characteristic curve was 0.91 (95%CI: 0.88-0.93). However, the Deek's plot suggested that publication bias may exist (t=2.30, P=0.039). Mean cerebral blood volume measurement thus seems to be very sensitive and highly specific for differentiating recurrence from radiation injury in glioma patients. The results should be interpreted with caution because of the potential bias.
Shih, Weichung Joe; Li, Gang; Wang, Yining
2016-03-01
Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
Ran, Li; Zhao, Wenli; Zhao, Ye; Bu, Huaien
2017-07-01
Contrast-enhanced ultrasound (CEUS) is considered a novel method for diagnosing pancreatic cancer, but there is currently no conclusive evidence of its accuracy. We aimed to evaluate the diagnostic accuracy of CEUS in discriminating pancreatic carcinoma from other pancreatic lesions. Relevant studies were selected from the PubMed, Cochrane Library, Elsevier, CNKI, VIP, and WANFANG databases dating from January 2006 to May 2017. The following terms were used as keywords: "pancreatic cancer" OR "pancreatic carcinoma," "contrast-enhanced ultrasonography" OR "contrast-enhanced ultrasound" OR "CEUS," and "diagnosis." The selection criteria were as follows: pancreatic carcinomas diagnosed by CEUS, with surgical pathology or biopsy as the main reference standard (where a clinical diagnosis was involved, the particular criteria had to be stated); SonoVue or Levovist as the contrast agent; true positive, false positive, false negative, and true negative rates obtainable or calculable to construct the 2 × 2 contingency table; English or Chinese articles; and at least 20 patients enrolled in each group. The Quality Assessment for Studies of Diagnostic Accuracy was employed to evaluate the quality of the articles. Pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, summary receiver-operating characteristic curves, and the area under the curve were evaluated to estimate the overall diagnostic efficiency. Pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with 95% confidence intervals (CIs) were calculated with fixed-effect models. Eight of 184 records were eligible for the meta-analysis after independent scrutiny by 2 reviewers. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.86 (95% CI 0.81-0.90), 0.75 (95% CI 0.68-0.82), 3.56 (95% CI 2.64-4.78), 0.19 (95% CI 0.13-0.27), and 22.260 (95% CI 8.980-55.177), respectively. The area under the SROC curve was 0.9088. CEUS has a satisfying pooled sensitivity and specificity for discriminating pancreatic cancer from other pancreatic lesions.
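Meta-analyses of this kind pool accuracy measures on the log scale with inverse-variance weights. A hedged sketch with invented 2x2 counts (fixed-effect pooling of the positive likelihood ratio; the study also used SROC methods, which this does not reproduce):

```python
import numpy as np

# (TP, FP, FN, TN) per study: hypothetical counts, not the meta-analysis data
studies = [(45, 12, 8, 60), (88, 25, 12, 95), (30, 9, 6, 41)]

log_lrs, weights = [], []
for tp, fp, fn, tn in studies:
    sens = tp / (tp + fn)
    spec = tn / (fp + tn)
    log_lrs.append(np.log(sens / (1 - spec)))
    var = 1/tp - 1/(tp + fn) + 1/fp - 1/(fp + tn)  # delta-method variance of log LR+
    weights.append(1 / var)

pooled = np.exp(np.average(log_lrs, weights=weights))
se = 1 / np.sqrt(sum(weights))
lo, hi = np.exp(np.log(pooled) + np.array([-1.96, 1.96]) * se)
print(f"pooled LR+ = {pooled:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```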
Interpretation of diagnostic data: 5. How to do it with simple maths.
1983-11-01
The use of simple maths with the likelihood ratio strategy fits in nicely with our clinical views. By making the most out of the entire range of diagnostic test results (i.e., several levels, each with its own likelihood ratio, rather than a single cut-off point and a single ratio) and by permitting us to keep track of the likelihood that a patient has the target disorder at each point along the diagnostic sequence, this strategy allows us to place patients at an extremely high or an extremely low likelihood of disease. Thus, the numbers of patients with ultimately false-positive results (who suffer the slings of labelling and the arrows of needless therapy) and of those with ultimately false-negative results (who therefore miss their chance for diagnosis and, possibly, efficacious therapy) will be dramatically reduced. The following guidelines will be useful in interpreting signs, symptoms and laboratory tests with the likelihood ratio strategy: Seek out, and demand from the clinical or laboratory experts who ought to know, the likelihood ratios for key symptoms and signs, and several levels (rather than just the positive and negative results) of diagnostic test results. Identify, when feasible, the logical sequence of diagnostic tests. Estimate the pretest probability of disease for the patient, and, using either the nomogram or the conversion formulas, apply the likelihood ratio that corresponds to the first diagnostic test result. While remembering that the resulting post-test probability or odds from the first test becomes the pretest probability or odds for the next diagnostic test, repeat the process for all the pertinent symptoms, signs and laboratory studies that pertain to the target disorder. However, these combinations may not be independent, and convergent diagnostic tests, if treated as independent, will combine to overestimate the final post-test probability of disease. You are now far more sophisticated in interpreting diagnostic tests than most of your teachers. In the last part of our series we will show you some rather complex strategies that combine diagnosis and therapy, quantify our as yet nonquantified ideas about use, and require the use of at least a hand calculator.
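The "simple maths" of the strategy fits in a few lines of Python: convert the pretest probability to odds, multiply by each likelihood ratio in the diagnostic sequence, and convert back. The pretest probability and the LRs below are illustrative only, and the chaining assumes the tests are independent, exactly the caveat raised above for convergent tests.

```python
def post_test_probability(pretest_prob, likelihood_ratios):
    odds = pretest_prob / (1 - pretest_prob)       # probability -> odds
    for lr in likelihood_ratios:
        odds *= lr    # post-test odds of one test = pretest odds of the next
    return odds / (1 + odds)                       # odds -> probability

# e.g. 20% pretest probability, then three test results with LRs 3.5, 1.8, 0.6
print(f"{post_test_probability(0.20, [3.5, 1.8, 0.6]):.2f}")   # about 0.49
```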
Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures
ERIC Educational Resources Information Center
Atar, Burcu; Kamata, Akihito
2011-01-01
The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…
Understanding the properties of diagnostic tests - Part 2: Likelihood ratios.
Ranganathan, Priya; Aggarwal, Rakesh
2018-01-01
Diagnostic tests are used to identify subjects with and without disease. In a previous article in this series, we examined some attributes of diagnostic tests - sensitivity, specificity, and predictive values. In this second article, we look at likelihood ratios, which are useful for the interpretation of diagnostic test results in everyday clinical practice.
Church, Sheri A; Livingstone, Kevin; Lai, Zhao; Kozik, Alexander; Knapp, Steven J; Michelmore, Richard W; Rieseberg, Loren H
2007-02-01
Using likelihood-based variable selection models, we determined whether positive selection was acting on 523 EST sequence pairs from two lineages of sunflower and lettuce. Variable rate models are generally not used for comparisons of sequence pairs because of the limited information and the inaccuracy of estimates of specific substitution rates. However, previous studies have shown that the likelihood ratio test (LRT) is reliable for detecting positive selection, even with low numbers of sequences. These analyses identified 56 genes that show a signature of selection, of which 75% were not identified by simpler models that average selection across codons. Subsequent mapping studies in sunflower showed that four of five positively selected genes identified by these methods mapped to domestication QTLs. We discuss the validity and limitations of using variable rate models for comparisons of sequence pairs, as well as the limitations of using ESTs for the identification of positively selected genes.
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
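A condensed sketch of the empirical likelihood ratio construction described here, on synthetic raster data rather than the Atchison County grids: for each class of a predictor, the likelihood ratio is that class's frequency among landslide cells divided by its frequency among non-landslide cells, and under conditional independence the per-predictor ratios multiply into a relative hazard.

```python
import numpy as np

rng = np.random.default_rng(5)
n_cells = 10_000
slope_class = rng.integers(0, 4, n_cells)   # binned slope angle (synthetic)
litho_class = rng.integers(0, 3, n_cells)   # coded bedrock lithology (synthetic)
# synthetic truth: steeper slopes and lithology class 2 slide more often
p = 0.02 * (1 + slope_class) * np.where(litho_class == 2, 2.0, 1.0)
slide = rng.random(n_cells) < p

def class_likelihood_ratios(classes, slide):
    return {c: (classes == c)[slide].mean() / (classes == c)[~slide].mean()
            for c in np.unique(classes)}

lr_slope = class_likelihood_ratios(slope_class, slide)
lr_litho = class_likelihood_ratios(litho_class, slide)
# combined relative hazard per cell under conditional independence
hazard = np.array([lr_slope[s] * lr_litho[l]
                   for s, l in zip(slope_class, litho_class)])
print("top-decile hazard threshold:", round(float(np.quantile(hazard, 0.9)), 3))
```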
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
Apa, Hurşit; Gözmen, Salih; Bayram, Nuri; Çatkoğlu, Asl; Devrim, Fatma; Karaarslan, Utku; Günay, İlker; Ünal, Nurettin; Devrim, İlker
2013-09-01
The aim of this study was to compare the body temperature measurements of infrared tympanic and forehead noncontact thermometers with those of an axillary digital thermometer. Fifty randomly selected pediatric patients who were hospitalized in Dr Behcet Uz Children's Training and Research Hospital, Pediatric Infectious Disease Unit, between March 2012 and September 2012 were included in the study. Body temperature measurements were performed using an axillary thermometer (Microlife MT 3001), a tympanic thermometer (Microlife Ear Thermometer IR 100), and a noncontact thermometer (ThermoFlash LX-26). We performed 1639 temperature readings with each method. The average difference between the means (SD) of the axillary and tympanic temperatures was -0.20°C (0.61°C) (95% confidence interval, -1.41°C to 1.00°C). The average difference between the means (SD) of the axillary and forehead temperatures was -0.38°C (0.55°C) (95% confidence interval, -1.47°C to 0.70°C). The Bland-Altman plot showed that most of the data points were tightly clustered around the zero line of the difference between the 2 temperature readings. With the axillary method as the criterion standard, positive likelihood ratios were 17.9 and 16.5 and negative likelihood ratios were 0.2 and 0.4 for tympanic and forehead measurements, respectively. The results demonstrated that the infrared tympanic thermometer could be a good option for the measurement of fever in the pediatric population. The noncontact infrared thermometer is very useful for the screening of fever in the pediatric population, but it must be used with caution because it has a high bias.
Meta-analysis: accuracy of rapid tests for malaria in travelers returning from endemic areas.
Marx, Arthur; Pewsner, Daniel; Egger, Matthias; Nüesch, Reto; Bucher, Heiner C; Genton, Blaise; Hatz, Christoph; Jüni, Peter
2005-05-17
Microscopic diagnosis of malaria is unreliable outside specialized centers. Rapid tests have become available in recent years, but their accuracy has not been assessed systematically. To determine the accuracy of rapid diagnostic tests for ruling out malaria in nonimmune travelers returning from malaria-endemic areas. The authors searched MEDLINE, EMBASE, CAB Health, and CINAHL (1988 to September 2004); hand-searched conference proceedings; checked reference lists; and contacted experts and manufacturers. Diagnostic accuracy studies in nonimmune individuals with suspected malaria were included if they compared rapid tests with expert microscopic examination or polymerase chain reaction tests. Data on study and patient characteristics and results were extracted in duplicate. The main outcome was the likelihood ratio for a negative test result (negative likelihood ratio) for Plasmodium falciparum malaria. Likelihood ratios were combined by using random-effects meta-analysis, stratified by the antigen targeted (histidine-rich protein-2 [HRP-2] or parasite lactate dehydrogenase [LDH]) and by test generation. Nomograms of post-test probabilities were constructed. The authors included 21 studies and 5747 individuals. For P. falciparum, HRP-2-based tests were more accurate than parasite LDH-based tests: Negative likelihood ratios were 0.08 and 0.13, respectively (P = 0.019 for difference). Three-band HRP-2 tests had similar negative likelihood ratios but higher positive likelihood ratios compared with 2-band tests (34.7 for 2-band vs. 98.5 for 3-band tests; P = 0.003). For P. vivax, negative likelihood ratios tended to be closer to 1.0 for HRP-2-based tests than for parasite LDH-based tests (0.24 vs. 0.13; P = 0.22), but analyses were based on a few heterogeneous studies. Negative likelihood ratios for the diagnosis of P. malariae or P. ovale were close to 1.0 for both types of tests. In febrile travelers returning from sub-Saharan Africa, the typical probability of P. falciparum malaria is estimated at 1.1% (95% CI, 0.6% to 1.9%) after a negative 3-band HRP-2 test result and 97% (CI, 92% to 99%) after a positive test result. Few studies evaluated 3-band HRP-2 tests. The evidence is also limited for species other than P. falciparum because of the few available studies and their more heterogeneous results. Further studies are needed to determine whether the use of rapid diagnostic tests improves outcomes in returning travelers with suspected malaria. Rapid malaria tests may be a useful diagnostic adjunct to microscopy in centers without major expertise in tropical medicine. Initial decisions on treatment initiation and choice of antimalarial drugs can be based on travel history and post-test probabilities after rapid testing. Expert microscopy is still required for species identification and confirmation.
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’-looking brain maps and operational superiority (a lower average error rate). Finally, we include a case study on cognitive control-related activation in the prefrontal cortex of the human brain. PMID:26272730
van Werkhoven, J M; Gaemperli, O; Schuijf, J D; Jukema, J W; Kroft, L J; Leschka, S; Alkadhi, H; Valenta, I; Pundziute, G; de Roos, A; van der Wall, E E; Kaufmann, P A; Bax, J J
2009-10-01
To assess whether multislice computed tomography coronary angiography (MSCTA) may be useful for risk stratification of patients with suspected coronary artery disease (CAD) at intermediate pretest likelihood according to Diamond and Forrester. MSCTA images were evaluated for the presence of significant CAD in 316 patients with suspected CAD (60% male, average (SD) age 57 (11) years) and an intermediate pretest likelihood according to Diamond and Forrester. Patients were followed up to determine the occurrence of an event, a combined end point of all-cause mortality, non-fatal infarction and unstable angina requiring revascularisation. Significant CAD was seen in 89 patients (28%), whereas normal MSCTA or non-significant CAD was seen in the remaining 227 (72%) patients. During follow-up (median 621 days; 25th-75th centile, 408-835), an event occurred in 13 patients (4.8%). The annualised event rate was 0.8% in patients with normal MSCT, 2.2% in patients with non-significant CAD and 6.5% in patients with significant CAD. Moreover, MSCTA remained a significant predictor (p<0.05) of events after multivariate correction (hazard ratio = 3.460; 95% CI 1.142 to 10.480). The results suggest that in patients with an intermediate pretest likelihood, MSCTA is highly effective in re-stratifying patients into either a low or high post-test risk group. These results further emphasise the usefulness of non-invasive imaging with MSCTA in this patient population.
On the Likelihood Ratio Test for the Number of Factors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Hayashi, Kentaro; Bentler, Peter M.; Yuan, Ke-Hai
2007-01-01
In the exploratory factor analysis, when the number of factors exceeds the true number of factors, the likelihood ratio test statistic no longer follows the chi-square distribution due to a problem of rank deficiency and nonidentifiability of model parameters. As a result, decisions regarding the number of factors may be incorrect. Several…
ERIC Educational Resources Information Center
Moses, Tim
2008-01-01
Nine statistical strategies for selecting equating functions in an equivalent groups design were evaluated. The strategies of interest were likelihood ratio chi-square tests, regression tests, Kolmogorov-Smirnov tests, and significance tests for equated score differences. The most accurate strategies in the study were the likelihood ratio tests…
Detecting Growth Shape Misspecifications in Latent Growth Models: An Evaluation of Fit Indexes
ERIC Educational Resources Information Center
Leite, Walter L.; Stapleton, Laura M.
2011-01-01
In this study, the authors compared the likelihood ratio test and fit indexes for detection of misspecifications of growth shape in latent growth models through a simulation study and a graphical analysis. They found that the likelihood ratio test, MFI, and root mean square error of approximation performed best for detecting model misspecification…
[Waist-to-height ratio is an indicator of metabolic risk in children].
Valle-Leal, Jaime; Abundis-Castro, Leticia; Hernández-Escareño, Juan; Flores-Rubio, Salvador
2016-01-01
Abdominal fat, particularly visceral fat, is associated with a high risk of metabolic complications. The waist-to-height ratio (WHtR) is used to assess abdominal fat in individuals of all ages. To determine the ability of the waist-to-height ratio to detect metabolic risk in Mexican schoolchildren. A study was conducted on children between 6 and 12 years of age. Obesity was diagnosed as a body mass index (BMI) ≥ 85th percentile, and a WHtR ≥ 0.5 was considered abdominal obesity. Blood levels of glucose, cholesterol and triglycerides were measured. The sensitivity, specificity, positive and negative predictive values, area under the curve, positive likelihood ratio and negative likelihood ratio of the WHtR and BMI were calculated in order to identify metabolic alterations. WHtR and BMI were compared to determine which had the better diagnostic efficiency. Of the 223 children included in the study, 51 had hypertriglyceridaemia, 27 hypercholesterolaemia, and 9 hyperglycaemia. On comparing the diagnostic efficiency of the WHtR with that of BMI, the sensitivity was 100% vs. 56% for hyperglycaemia, 93% vs. 70% for hypercholesterolaemia, and 76% vs. 59% for hypertriglyceridaemia. The specificity, negative predictive value, positive predictive value, positive likelihood ratio, negative likelihood ratio, and area under the curve were also higher for the WHtR. The WHtR is a more efficient indicator than BMI for identifying metabolic risk in Mexican school-age children. Copyright © 2015 Sociedad Chilena de Pediatría. Publicado por Elsevier España, S.L.U. All rights reserved.
NASA Astrophysics Data System (ADS)
Zeng, X.
2015-12-01
A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through each model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models in a numerical experiment with a synthetic groundwater model. BMA predictions depend on the model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators: the arithmetic mean estimator (AME), the harmonic mean estimator (HME), the stabilized harmonic mean estimator (SHME), and the thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods estimated repeatedly by TIE show significantly less variability than those obtained with the other estimators. In addition, the SG surrogates efficiently facilitate BMA prediction, especially for BMA-TIE. The number of model executions needed to build the surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
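The two simplest of the four estimators are easy to state in code. In the conjugate toy model below (our own example, not the groundwater model), the arithmetic mean estimator averages the likelihood over prior draws and the harmonic mean estimator averages the inverse likelihood over posterior draws; the exact marginal likelihood is available for comparison, and the HME's well-known instability is visible if the seed is varied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=20)     # synthetic data; unknown mean, sigma = 1
prior = stats.norm(0.0, 1.0)          # prior on the mean

def log_lik(theta):
    return stats.norm(theta[:, None], 1.0).logpdf(y).sum(axis=1)

# AME: average the likelihood over draws from the prior
th_prior = prior.rvs(size=50_000, random_state=rng)
ame = np.exp(log_lik(th_prior)).mean()

# HME: average the inverse likelihood over draws from the (conjugate) posterior
tau2 = 1.0 / (1.0 + len(y))
posterior = stats.norm(tau2 * y.sum(), np.sqrt(tau2))
th_post = posterior.rvs(size=50_000, random_state=rng)
hme = 1.0 / np.mean(np.exp(-log_lik(th_post)))

# exact marginal likelihood for this model: y ~ N(0, I + 11')
n = len(y)
exact = np.exp(stats.multivariate_normal(
    np.zeros(n), np.eye(n) + np.ones((n, n))).logpdf(y))
print(ame, hme, exact)
```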
ERIC Educational Resources Information Center
Yuan, Ke-Hai
2008-01-01
In the literature of mean and covariance structure analysis, noncentral chi-square distribution is commonly used to describe the behavior of the likelihood ratio (LR) statistic under alternative hypothesis. Due to the inaccessibility of the rather technical literature for the distribution of the LR statistic, it is widely believed that the…
Namkaew, Montakarn; Wiwatanadate, Phongtape
2012-09-01
To assess the dose response between fluoride exposure from water and chronic pain. Using a retrospective cohort design, the study was conducted in two sub-districts of San Kamphaeng district, Poo-kha and On-tai. Five hundred and thirty-four residents aged ≥50 years were interviewed about their sources of drinking water and assessed for chronic pain. Each water source was sampled for fluoride measurement, from which the average daily fluoride dose was estimated. Binary logistic regression with a forward stepwise (likelihood ratio) model selection technique was used to examine the association between the average daily fluoride dose and chronic pain. We found associations between the average daily fluoride dose and lower back pain [odds ratio (OR) = 5.12; 95% confidence interval (CI), 1.59-16.98], and between residence in the high-fluoride area versus the low-fluoride area and lower back pain (OR = 1.58; 95% CI, 1.10-2.28; relative risk = 1.22; 95% CI, 1.14-1.31). Other risk factors, such as a family history of body pain and a history of injury of the lower body, were also associated with lower back pain. However, there were no relationships between the average daily fluoride dose and leg or knee pain. To prevent further lower back pain, we recommend that the water in this area be treated to reduce its fluoride content. © 2012 Blackwell Publishing Ltd.
Spread of risk across financial markets: better to invest in the peripheries
NASA Astrophysics Data System (ADS)
Pozzi, F.; Di Matteo, T.; Aste, T.
2013-04-01
Risk is not uniformly spread across financial markets and this fact can be exploited to reduce investment risk contributing to improve global financial stability. We discuss how, by extracting the dependency structure of financial equities, a network approach can be used to build a well-diversified portfolio that effectively reduces investment risk. We find that investments in stocks that occupy peripheral, poorly connected regions in financial filtered networks, namely Minimum Spanning Trees and Planar Maximally Filtered Graphs, are most successful in diversifying, improving the ratio between returns' average and standard deviation, reducing the likelihood of negative returns, while keeping profits in line with the general market average even for small baskets of stocks. On the contrary, investments in subsets of central, highly connected stocks are characterized by greater risk and worse performance. This methodology has the added advantage of visualizing portfolio choices directly over the graphic layout of the network.
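The filtering step can be sketched with standard tools: compute correlation distances d_ij = sqrt(2(1 - rho_ij)), extract the Minimum Spanning Tree, and rank stocks by a simple peripherality proxy. The returns below are synthetic (a one-factor toy market), so this only illustrates the mechanics of the selection, not the paper's results.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_stocks, n_days = 30, 500
# one common market factor with stock-specific loadings induces correlations
beta = rng.uniform(0.2, 1.2, n_stocks)
returns = beta[:, None] * rng.normal(size=(1, n_days)) + rng.normal(size=(n_stocks, n_days))

rho = np.corrcoef(returns)
dist = np.sqrt(2.0 * (1.0 - rho))           # correlation distance

G = nx.Graph()
for i in range(n_stocks):
    for j in range(i + 1, n_stocks):
        G.add_edge(i, j, weight=dist[i, j])
mst = nx.minimum_spanning_tree(G)

# peripheral stocks: low degree centrality in the filtered network
centrality = nx.degree_centrality(mst)
basket = sorted(centrality, key=centrality.get)[:10]
print("peripheral basket:", basket)
```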
Yin, Wesley; Horblyuk, Ruslan; Perkins, Julia Jane; Sison, Steve; Smith, Greg; Snider, Julia Thornton; Wu, Yanyu; Philipson, Tomas J
2017-02-01
Determine workplace productivity losses attributable to breast cancer progression. Longitudinal analysis linking 2005 to 2012 medical and pharmacy claims and workplace absence data in the US. Patients were commercially insured women aged 18 to 64 years diagnosed with breast cancer. Productivity was measured as employment status and total quarterly workplace hours missed, and valued using average US wages. Six thousand four hundred and nine women were included. Breast cancer progression was associated with a lower probability of employment (hazard ratio [HR] = 0.65, P < 0.01) and increased workplace hours missed. The annual value of missed work was $24,166 for non-metastatic and $30,666 for metastatic patients. Thus, progression to metastatic disease is associated with an additional $6,500 in lost work time (P < 0.05), or 14% of average US wages. Breast cancer progression leads to diminished likelihood of employment, increased workplace hours missed, and increased cost burden.
Urabe, Naohisa; Sakamoto, Susumu; Sano, Go; Suzuki, Junko; Hebisawa, Akira; Nakamura, Yasuhiko; Koyama, Kazuya; Ishii, Yoshikazu; Tateda, Kazuhiro; Homma, Sakae
2017-01-01
We evaluated the usefulness of an Aspergillus galactomannan (GM) test, a β-d-glucan (βDG) test, and two different Aspergillus PCR assays of bronchoalveolar lavage fluid (BALF) samples for the diagnosis of chronic pulmonary aspergillosis (CPA). BALF samples from 30 patients with and 120 patients without CPA were collected. We calculated the sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio for each test individually and in combination with other tests. The optical density index values, as determined by receiver operating characteristic analysis, for the diagnosis of CPA were 0.5 and 100 for GM and βDG testing of BALF, respectively. The sensitivity and specificity of the GM test, βDG test, and PCR assays 1 and 2 were 77.8% and 90.0%, 77.8% and 72.5%, 86.7% and 84.2%, and 66.7% and 94.2%, respectively. A comparison of the PCR assays showed that PCR assay 1 had a better sensitivity, a better negative predictive value, and a better negative likelihood ratio and PCR assay 2 had a better specificity, a better positive predictive value, and a better positive likelihood ratio. The combination of the GM and βDG tests had the highest diagnostic odds ratio. The combination of the GM and βDG tests on BALF was more useful than any single test for diagnosing CPA. PMID:28330887
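For reference, every accuracy measure quoted here derives from a single 2x2 contingency table. A minimal sketch follows; the counts are illustrative only, chosen to land near the GM test's reported sensitivity and specificity, and are not taken from the study.

```python
# Diagnostic accuracy metrics from a 2x2 contingency table (hypothetical counts).
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),                   # positive predictive value
        "npv": tn / (tn + fn),                   # negative predictive value
        "lr_pos": sens / (1 - spec),             # positive likelihood ratio
        "lr_neg": (1 - sens) / spec,             # negative likelihood ratio
        "dor": (sens / (1 - spec)) / ((1 - sens) / spec),  # diagnostic odds ratio
    }

print(diagnostic_metrics(tp=21, fp=12, fn=6, tn=108))  # sens ~77.8%, spec ~90.0%
```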
A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grana, Justin; Wolpert, David; Neil, Joshua
The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviates significantly from the baseline model. But many anomaly detectors do not take into account plausible attacker behavior. As a result, anomaly detectors are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real-world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present Receiver Operating Characteristic (ROC) curves for various network parameterizations that show for any rate of true positives, the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.
A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks
Grana, Justin; Wolpert, David; Neil, Joshua; ...
2016-03-11
The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviates significantly from the baseline model. But many anomaly detectors do not take into account plausible attacker behavior. As a result, anomaly detectors are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real-world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present Receiver Operating Characteristic (ROC) curves for various network parameterizations that show for any rate of true positives, the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.
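A minimal sketch of the Monte Carlo marginalization step, using a toy Poisson traffic model in place of the paper's attacker-traversal model; the baseline rate, post-compromise rate boost, and uniform prior over the compromise time are our assumptions, not the paper's.

```python
# Monte Carlo estimate of a log likelihood ratio that integrates over the
# unknown compromise time (toy Poisson stand-in for the network model).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

def log_lik_normal(counts, rate=1.0):
    return poisson.logpmf(counts, rate).sum()

def log_lik_attack(counts, t_comp, rate=1.0, boost=3.0):
    # Event rate is boosted from the (unknown) compromise time onward.
    rates = np.where(np.arange(len(counts)) >= t_comp, rate * boost, rate)
    return poisson.logpmf(counts, rates).sum()

def monte_carlo_log_lr(counts, n_samples=2000):
    T = len(counts)
    t_draws = rng.integers(0, T, size=n_samples)   # uniform prior on t_comp
    terms = np.array([log_lik_attack(counts, t) for t in t_draws])
    log_attack = np.logaddexp.reduce(terms) - np.log(n_samples)  # log-mean-exp
    return log_attack - log_lik_normal(counts)

print(monte_carlo_log_lr(rng.poisson(1.0, size=50)))  # benign traffic: near or below 0
```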
1990-11-01
This fragment concerns computing the approximate likelihood function in some time series models, including the first-order moving average model; useful suggestions have been the Cholesky decomposition of the covariance matrix and the rank-one inverse update $(Q + aa')^{-1} = Q^{-1} - \frac{Q^{-1}aa'Q^{-1}}{1 + a'Q^{-1}a}$, a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and ...
Horsch, Karla; Pesce, Lorenzo L.; Giger, Maryellen L.; Metz, Charles E.; Jiang, Yulei
2012-01-01
Purpose: The authors developed scaling methods that monotonically transform the output of one classifier to the “scale” of another. Such transformations affect the distribution of classifier output while leaving the ROC curve unchanged. In particular, they investigated transformations between radiologists and computer classifiers, with the goal of addressing the problem of comparing and interpreting case-specific values of output from two classifiers. Methods: Using both simulated and radiologists’ rating data of breast imaging cases, the authors investigated a likelihood-ratio-scaling transformation, based on “matching” classifier likelihood ratios. For comparison, three other scaling transformations were investigated that were based on matching classifier true positive fraction, false positive fraction, or cumulative distribution function, respectively. The authors explored modifying the computer output to reflect the scale of the radiologist, as well as modifying the radiologist’s ratings to reflect the scale of the computer. They also evaluated how dataset size affects the transformations. Results: When ROC curves of two classifiers differed substantially, the four transformations were found to be quite different. The likelihood-ratio scaling transformation was found to vary widely from radiologist to radiologist. Similar results were found for the other transformations. Our simulations explored the effect of database sizes on the accuracy of the estimation of our scaling transformations. Conclusions: The likelihood-ratio-scaling transformation that the authors have developed and evaluated was shown to be capable of transforming computer and radiologist outputs to a common scale reliably, thereby allowing the comparison of the computer and radiologist outputs on the basis of a clinically relevant statistic. PMID:22559651
Closed-loop carrier phase synchronization techniques motivated by likelihood functions
NASA Technical Reports Server (NTRS)
Tsou, H.; Hinedi, S.; Simon, M.
1994-01-01
This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
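A minimal sketch of the sequential test machinery with Wald's classical thresholds; the per-epoch likelihood ratios are taken as given inputs (in the paper they reduce to a simple function of the collision-probability estimates), and the error rates below are illustrative.

```python
# Wald Sequential Probability Ratio Test over a stream of likelihood ratios.
import math

def wald_sprt(lr_stream, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1 (e.g., collision risk)
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    log_lr = 0.0
    for n, lr in enumerate(lr_stream, start=1):
        log_lr += math.log(lr)
        if log_lr >= upper:
            return "accept H1", n
        if log_lr <= lower:
            return "accept H0", n
    return "continue sampling", None

# With these illustrative ratios the evidence accumulates toward H0 by epoch 7.
print(wald_sprt([0.8, 0.7, 0.9, 0.6, 0.5, 0.4, 0.6]))
```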
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.
Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
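A minimal sketch of a centered likelihood ratio (score-function) sensitivity estimator on a toy Poisson observable, not the stochastic dynamics treated in the paper; the centering of the observable is the variance-reduction idea.

```python
# d/dtheta E[f(X)] = Cov(f(X), d/dtheta log p(X; theta)) for X ~ Poisson(theta),
# whose score is x/theta - 1. Centering f reduces variance without adding bias.
import numpy as np

rng = np.random.default_rng(1)

def centered_lr_sensitivity(theta, f, n=200_000):
    x = rng.poisson(theta, size=n)
    score = x / theta - 1.0            # derivative of the Poisson log-pmf in theta
    fx = f(x)
    return np.mean((fx - fx.mean()) * score)

# Check: E[X^2] = theta + theta^2, so the exact sensitivity is 1 + 2*theta.
theta = 2.0
print(centered_lr_sensitivity(theta, lambda x: x.astype(float) ** 2), 1 + 2 * theta)
```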
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics
NASA Astrophysics Data System (ADS)
Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc
2016-03-01
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
Physician Bayesian updating from personal beliefs about the base rate and likelihood ratio.
Rottman, Benjamin Margolin
2017-02-01
Whether humans can accurately make decisions in line with Bayes' rule has been one of the most important yet contentious topics in cognitive psychology. Though a number of paradigms have been used for studying Bayesian updating, rarely have subjects been allowed to use their own preexisting beliefs about the prior and the likelihood. A study is reported in which physicians judged the posttest probability of a diagnosis for a patient vignette after receiving a test result, and the physicians' posttest judgments were compared to the normative posttest calculated from their own beliefs in the sensitivity and false positive rate of the test (likelihood ratio) and prior probability of the diagnosis. On the one hand, the posttest judgments were strongly related to the physicians' beliefs about both the prior probability as well as the likelihood ratio, and the priors were used considerably more strongly than in previous research. On the other hand, both the prior and the likelihoods were still not used quite as much as they should have been, and there was evidence of other nonnormative aspects to the updating, such as updating independent of the likelihood beliefs. By focusing on how physicians use their own prior beliefs for Bayesian updating, this study provides insight into how well experts perform probabilistic inference in settings in which they rely upon their own prior beliefs rather than experimenter-provided cues. It suggests that there is reason to be optimistic about experts' abilities, but that there is still considerable need for improvement.
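The normative benchmark in this design is the odds form of Bayes' rule; a minimal sketch, with illustrative numbers:

```python
# Convert pretest probability to odds, multiply by the likelihood ratio,
# convert back. For a positive result, LR+ = sensitivity / (1 - specificity).
def posttest_probability(pretest: float, likelihood_ratio: float) -> float:
    pretest_odds = pretest / (1.0 - pretest)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# e.g., a 10% prior and LR+ = 0.9 / 0.1 = 9 gives a posttest probability near 0.50
print(posttest_probability(0.10, 9.0))
```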
Equivalence of binormal likelihood-ratio and bi-chi-squared ROC curve models
Hillis, Stephen L.
2015-01-01
A basic assumption for a meaningful diagnostic decision variable is that there is a monotone relationship between it and its likelihood ratio. This relationship, however, generally does not hold for a decision variable that results in a binormal ROC curve. As a result, receiver operating characteristic (ROC) curve estimation based on the assumption of a binormal ROC-curve model produces improper ROC curves that have “hooks,” are not concave over the entire domain, and cross the chance line. Although in practice this “improperness” is usually not noticeable, sometimes it is evident and problematic. To avoid this problem, Metz and Pan proposed basing ROC-curve estimation on the assumption of a binormal likelihood-ratio (binormal-LR) model, which states that the decision variable is an increasing transformation of the likelihood-ratio function of a random variable having normal conditional diseased and nondiseased distributions. However, their development is not easy to follow. I show that the binormal-LR model is equivalent to a bi-chi-squared model in the sense that the families of corresponding ROC curves are the same. The bi-chi-squared formulation provides an easier-to-follow development of the binormal-LR ROC curve and its properties in terms of well-known distributions. PMID:26608405
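A minimal sketch of the conventional binormal ROC form, TPF = Phi(a + b * Phi^{-1}(FPF)), showing the "improperness" discussed above: for slope b != 1 the curve eventually crosses the chance line. Parameter values are illustrative.

```python
# Binormal ROC curve; flags grid points where the curve dips below chance.
import numpy as np
from scipy.stats import norm

def binormal_roc(fpf, a=1.0, b=0.5):
    return norm.cdf(a + b * norm.ppf(fpf))

fpf = np.linspace(1e-6, 1 - 1e-6, 9)
tpf = binormal_roc(fpf)
for x, y in zip(fpf, tpf):
    print(f"FPF={x:.3f}  TPF={y:.3f}  {'below chance' if y < x else ''}")
```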
Urabe, Naohisa; Sakamoto, Susumu; Sano, Go; Suzuki, Junko; Hebisawa, Akira; Nakamura, Yasuhiko; Koyama, Kazuya; Ishii, Yoshikazu; Tateda, Kazuhiro; Homma, Sakae
2017-06-01
We evaluated the usefulness of an Aspergillus galactomannan (GM) test, a β-d-glucan (βDG) test, and two different Aspergillus PCR assays of bronchoalveolar lavage fluid (BALF) samples for the diagnosis of chronic pulmonary aspergillosis (CPA). BALF samples from 30 patients with and 120 patients without CPA were collected. We calculated the sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio for each test individually and in combination with other tests. The optical density index values, as determined by receiver operating characteristic analysis, for the diagnosis of CPA were 0.5 and 100 for GM and βDG testing of BALF, respectively. The sensitivity and specificity of the GM test, βDG test, and PCR assays 1 and 2 were 77.8% and 90.0%, 77.8% and 72.5%, 86.7% and 84.2%, and 66.7% and 94.2%, respectively. A comparison of the PCR assays showed that PCR assay 1 had a better sensitivity, a better negative predictive value, and a better negative likelihood ratio and PCR assay 2 had a better specificity, a better positive predictive value, and a better positive likelihood ratio. The combination of the GM and βDG tests had the highest diagnostic odds ratio. The combination of the GM and βDG tests on BALF was more useful than any single test for diagnosing CPA. Copyright © 2017 American Society for Microbiology.
An improved push-pull voltage fed converter using a tapped output-filter inductor
NASA Technical Reports Server (NTRS)
Wester, G. W.
1983-01-01
A new concept of using a tapped output-filter inductor and an auxiliary commutating diode to reduce the likelihood of transformer core saturation in a push-pull, voltage-fed converter is presented. The linearized circuit model and transfer functions are derived with a hybrid approach using both state-space and circuit averaging. Operation of the new converter - including parasitic effects - is discussed, and a design equation for the inductor tap ratio is established. It is predicted and experimentally confirmed that the new converter has more symmetrical transformer core operation, and the potential exists for lower transistor turn-on current and reduced transistor voltage stress. These benefits reduce switching loss and enhance transistor reliability.
NASA Astrophysics Data System (ADS)
Pan, Zhen; Anderes, Ethan; Knox, Lloyd
2018-05-01
One of the major targets for next-generation cosmic microwave background (CMB) experiments is the detection of the primordial B-mode signal. Planning is under way for Stage-IV experiments that are projected to have instrumental noise small enough to make lensing and foregrounds the dominant source of uncertainty for estimating the tensor-to-scalar ratio r from polarization maps. This makes delensing a crucial part of future CMB polarization science. In this paper we present a likelihood method for estimating the tensor-to-scalar ratio r from CMB polarization observations, which combines the benefits of a full-scale likelihood approach with the tractability of the quadratic delensing technique. This method is a pixel space, all-order likelihood analysis of the quadratic delensed B modes, and it essentially builds upon the quadratic delenser by taking into account all-order lensing and pixel space anomalies. Its tractability relies on a crucial factorization of the pixel space covariance matrix of the polarization observations which allows one to compute the full Gaussian approximate likelihood profile, as a function of r, at the same computational cost of a single likelihood evaluation.
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1985-01-01
The application of the Generalized Likelihood Ratio technique to the detection and identification of aircraft control element failures has been evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that the technique has potential but that the effects of wind turbulence and Kalman filter model errors are problems which must be overcome.
Harrell-Williams, Leigh; Wolfe, Edward W
2014-01-01
Previous research has investigated the influence of sample size, model misspecification, test length, ability distribution offset, and generating model on the likelihood ratio (LR) difference test in applications of item response models. This study extended that research to the evaluation of dimensionality using the multidimensional random coefficients multinomial logit model (MRCMLM). Logistic regression analysis of simulated data reveals that sample size and test length have a large effect on the capacity of the LR difference test to correctly identify unidimensionality, with shorter tests and smaller sample sizes leading to smaller Type I error rates. Higher levels of simulated misfit resulted in fewer incorrect decisions than data with no or little misfit. However, Type I error rates indicate that the LR difference test is not suitable under any of the simulated conditions for evaluating dimensionality in applications of the MRCMLM.
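For reference, the test statistic itself is simple; a minimal sketch with hypothetical log-likelihoods and a chi-squared reference distribution:

```python
# Likelihood ratio difference test for nested models: twice the difference in
# maximized log-likelihoods, referred to chi-squared with df = parameter difference.
from scipy.stats import chi2

def lr_difference_test(loglik_full, loglik_reduced, df_diff):
    stat = 2.0 * (loglik_full - loglik_reduced)
    p_value = chi2.sf(stat, df_diff)
    return stat, p_value

# e.g., a two-dimensional vs a unidimensional model differing by 2 parameters
print(lr_difference_test(-1040.3, -1043.8, df_diff=2))
```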
Krishnamoorthy, K; Oral, Evrim
2017-12-01
A standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT, an available modified likelihood ratio test (MLRT), and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
Hone, Thomas; Habicht, Jarno; Domente, Silviu; Atun, Rifat
2016-01-01
Background Moldova is the poorest country in Europe. Economic constraints mean that Moldova faces challenges in protecting individuals from excessive costs, improving population health and securing health system sustainability. The Moldovan government has introduced a state benefit package and expanded health insurance coverage to reduce the burden of health care costs for citizens. This study examines the effects of expanded health insurance by examining factors associated with health insurance coverage, likelihood of incurring out-of-pocket (OOP) payments for medicines or services, and the likelihood of forgoing health care when unwell. Methods Using publicly available databases and the annual Moldova Household Budgetary Survey, we examine trends in health system financing, health care utilization, health insurance coverage, and costs incurred by individuals for the years 2006–2012. We perform logistic regression to assess the likelihood of having health insurance, incurring a cost for health care, and forgoing health care when ill, controlling for socio-economic and demographic covariates. Findings Private expenditure accounted for 55.5% of total health expenditures in 2012, and 83.2% of private health expenditure is OOP payments, especially for medicines. Health care utilization, at 6.93 outpatient visits per person, is in line with EU averages. Being uninsured is associated with being aged 25–49 years, self-employed, an unpaid family worker, or unemployed, although we find a lower likelihood of being uninsured for some of these groups over time. Over time, the likelihood of OOP payments for medicines increased (odds ratio OR = 1.422 in 2012 compared to 2006), but fell for health care services (OR = 0.873 in 2012 compared to 2006). No insurance, older age, and male sex were associated with increased likelihood of forgoing health care when sick, and we found the likelihood of forgoing health care to be increasing over time (OR = 1.295 in 2012 compared to 2009). Conclusions Moldova has achieved improvements in health insurance coverage with reductions in OOP payments for services, which are modest but are eroded by the increasing likelihood of OOP payments for medicines. Insurance coverage was an important determinant of health care costs incurred by patients and of patients forgoing health care. Improvements notwithstanding, there is an unfinished agenda of attaining universal health coverage in Moldova to protect individuals from health care costs. PMID:27909581
Hone, Thomas; Habicht, Jarno; Domente, Silviu; Atun, Rifat
2016-12-01
Moldova is the poorest country in Europe. Economic constraints mean that Moldova faces challenges in protecting individuals from excessive costs, improving population health and securing health system sustainability. The Moldovan government has introduced a state benefit package and expanded health insurance coverage to reduce the burden of health care costs for citizens. This study examines the effects of expanded health insurance by examining factors associated with health insurance coverage, likelihood of incurring out-of-pocket (OOP) payments for medicines or services, and the likelihood of forgoing health care when unwell. Using publicly available databases and the annual Moldova Household Budgetary Survey, we examine trends in health system financing, health care utilization, health insurance coverage, and costs incurred by individuals for the years 2006-2012. We perform logistic regression to assess the likelihood of having health insurance, incurring a cost for health care, and forgoing health care when ill, controlling for socio-economic and demographic covariates. Private expenditure accounted for 55.5% of total health expenditures in 2012, and 83.2% of private health expenditure is OOP payments, especially for medicines. Health care utilization, at 6.93 outpatient visits per person, is in line with EU averages. Being uninsured is associated with being aged 25-49 years, self-employed, an unpaid family worker, or unemployed, although we find a lower likelihood of being uninsured for some of these groups over time. Over time, the likelihood of OOP payments for medicines increased (odds ratio OR = 1.422 in 2012 compared to 2006), but fell for health care services (OR = 0.873 in 2012 compared to 2006). No insurance, older age, and male sex were associated with increased likelihood of forgoing health care when sick, and we found the likelihood of forgoing health care to be increasing over time (OR = 1.295 in 2012 compared to 2009). Moldova has achieved improvements in health insurance coverage with reductions in OOP payments for services, which are modest but are eroded by the increasing likelihood of OOP payments for medicines. Insurance coverage was an important determinant of health care costs incurred by patients and of patients forgoing health care. Improvements notwithstanding, there is an unfinished agenda of attaining universal health coverage in Moldova to protect individuals from health care costs.
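A minimal sketch of the kind of logistic model described above; variable names and data are hypothetical stand-ins, not the Household Budgetary Survey.

```python
# Logistic regression for the probability of being uninsured by survey year
# and employment category, reported as odds ratios.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "uninsured": rng.integers(0, 2, n),
    "year": rng.choice([2006, 2009, 2012], n),
    "self_employed": rng.integers(0, 2, n),
    "age_25_49": rng.integers(0, 2, n),
})
fit = smf.logit("uninsured ~ C(year) + self_employed + age_25_49", data=df).fit(disp=0)
print(np.exp(fit.params))   # exponentiated coefficients = odds ratios
```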
Rock, Cheryl L.; Natarajan, Loki; Pu, Minya; Thomson, Cynthia A.; Flatt, Shirley W.; Caan, Bette J.; Gold, Ellen B.; Al-Delaimy, Wael K.; Newman, Vicky A.; Hajek, Richard A.; Stefanick, Marcia L.; Pierce, John P.
2009-01-01
In some cohort studies, a high-vegetable diet has been associated with greater likelihood of recurrence-free survival in women diagnosed with breast cancer. Carotenoids are obtained primarily from vegetables and fruit, and they exhibit biological activities that may specifically reduce the progression of mammary carcinogenesis. The present analysis examines the relationship between plasma carotenoids at enrollment and 1, 2 or 3, 4 and 6 years and breast cancer-free survival in the Women’s Healthy Eating and Living (WHEL) Study participants (n = 3043), who had been diagnosed with early stage breast cancer. The primary endpoint was time to a second breast cancer event (a recurrence or new primary breast cancer). An average carotenoid concentration over time was estimated for each participant as the average area under the plasma carotenoid curve (AUC) formed by the plasma carotenoid concentrations at scheduled clinic visits. Multiple regression Cox proportional hazards analysis with adjustment for prognostic and other factors was used to examine the association between carotenoids and breast cancer-free survival. A total of 508 (16.7%) breast cancer events occurred over a median 7.12 years follow-up. Compared to the lowest tertile, the hazard ratio for the medium/high plasma carotenoid tertiles was 0.67 (95% confidence interval 0.54–0.83) after adjustment. The interaction between study group and tertile of average carotenoid concentration over time was not significant (P = 0.23). Higher biological exposure to carotenoids, when assessed over the time frame of the study, was associated with greater likelihood of breast cancer-free survival regardless of study group assignment. PMID:19190138
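The exposure summary is essentially a time-weighted mean; a minimal sketch with hypothetical visit times and concentrations:

```python
# Average area under a concentration-time curve (trapezoidal rule) divided by
# follow-up time gives a time-weighted mean exposure. Values are hypothetical.
import numpy as np

def average_auc(times, conc):
    areas = 0.5 * (conc[1:] + conc[:-1]) * np.diff(times)   # trapezoids
    return areas.sum() / (times[-1] - times[0])

times = np.array([0.0, 1.0, 2.5, 4.0, 6.0])   # years: enrollment and follow-ups
conc = np.array([1.8, 2.1, 2.0, 2.4, 2.2])    # plasma carotenoids, illustrative units
print(average_auc(times, conc))
```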
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2003-01-01
Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
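A minimal sketch of fitting a stationary ARMA model by exact Gaussian maximum likelihood, here via statsmodels rather than the raw-ML structural equation modeling route the article demonstrates; the simulated series is illustrative.

```python
# Simulate an ARMA(1,1) series and recover its parameters by maximum likelihood.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
e = rng.normal(size=300)
y = np.empty(300)
y[0] = e[0]
for t in range(1, 300):   # y_t = 0.6 y_{t-1} + e_t + 0.3 e_{t-1}
    y[t] = 0.6 * y[t - 1] + e[t] + 0.3 * e[t - 1]

fit = ARIMA(y, order=(1, 0, 1)).fit()   # exact Gaussian maximum likelihood
print(fit.params)
```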
On the Power Functions of Test Statistics in Order Restricted Inference.
1984-10-01
F. T. Wright, Department of Mathematics. SUMMARY: We study the power functions of both the likelihood ratio and contrast statistics for detecting a totally ordered trend in a collection of samples from normal populations. Bartholomew (1959 a,b; 1961) studied the likelihood ratio tests (LRTs) for H0 versus H1 - H0, assuming in one case that ...
Three regularities of recognition memory: the role of bias.
Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok
2015-12-01
A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.
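A minimal sketch of the likelihood ratio decision variable assumed above, for the equal-variance normal (signal detection) case; the d' value and criterion are illustrative.

```python
# Ratio of the "old" (signal) density to the "new" (noise) density at a given
# memory-strength value x; respond "old" when the ratio exceeds a criterion c.
from scipy.stats import norm

def likelihood_ratio(x, d_prime=1.0):
    return norm.pdf(x, loc=d_prime, scale=1.0) / norm.pdf(x, loc=0.0, scale=1.0)

# c = 1 is unbiased; c > 1 is a conservative bias. LR = 1 exactly at x = d'/2.
for x in (-1.0, 0.5, 2.0):
    print(x, likelihood_ratio(x))
```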
Detection methods for non-Gaussian gravitational wave stochastic backgrounds
NASA Astrophysics Data System (ADS)
Drasco, Steve; Flanagan, Éanna É.
2003-04-01
A gravitational wave stochastic background can be produced by a collection of independent gravitational wave events. There are two classes of such backgrounds, one for which the ratio of the average time between events to the average duration of an event is small (i.e., many events are on at once), and one for which the ratio is large. In the first case the signal is continuous, sounds something like a constant hiss, and has a Gaussian probability distribution. In the second case, the discontinuous or intermittent signal sounds something like popcorn popping, and is described by a non-Gaussian probability distribution. In this paper we address the issue of finding an optimal detection method for such a non-Gaussian background. As a first step, we examine the idealized situation in which the event durations are short compared to the detector sampling time, so that the time structure of the events cannot be resolved, and we assume white, Gaussian noise in two collocated, aligned detectors. For this situation we derive an appropriate version of the maximum likelihood detection statistic. We compare the performance of this statistic to that of the standard cross-correlation statistic both analytically and with Monte Carlo simulations. In general the maximum likelihood statistic performs better than the cross-correlation statistic when the stochastic background is sufficiently non-Gaussian, resulting in a gain factor in the minimum gravitational-wave energy density necessary for detection. This gain factor ranges roughly between 1 and 3, depending on the duty cycle of the background, for realistic observing times and signal strengths for both ground and space based detectors. The computational cost of the statistic, although significantly greater than that of the cross-correlation statistic, is not unreasonable. Before the statistic can be used in practice with real detector data, further work is required to generalize our analysis to accommodate separated, misaligned detectors with realistic, colored, non-Gaussian noise.
Too Many Men? Sex Ratios and Women’s Partnering Behavior in China
Trent, Katherine; South, Scott J.
2011-01-01
The relative numbers of women and men are changing dramatically in China, but the consequences of these imbalanced sex ratios have received little attention. We merge data from the Chinese Health and Family Life Survey with community-level data from Chinese censuses to examine the relationship between cohort- and community-specific sex ratios and women’s partnering behavior. Consistent with demographic-opportunity theory and sociocultural theory, we find that high sex ratios (indicating more men relative to women) are associated with an increased likelihood that women marry before age 25. However, high sex ratios are also associated with an increased likelihood that women engage in premarital and extramarital sexual relationships and have had more than one sexual partner, findings consistent with demographic-opportunity theory but inconsistent with sociocultural theory. PMID:22199403
Potential epidemiological and economical impact of two rotavirus vaccines in Colombia.
De la Hoz, Fernando; Alvis, Nelson; Narváez, Javier; Cediel, Natalia; Gamboa, Oscar; Velandia, Martha
2010-05-14
A complete economic study was carried out to assess the economic impact of two rotavirus vaccines in Colombia. A Markov decision model was built to assess the health outcomes from birth to 24 months of age for three hypothetical cohorts: one unvaccinated, one vaccinated with 2 doses of Rotarix, and the third with 3 doses of Rotateq. Without vaccination, the annual number of diarrhea cases in children under 2 years would be 1,293,159, with 105,378 medical visits and 470 deaths (95% CI 295-560) related to rotavirus. Without vaccination, rotavirus disease would cost around USD$8 million including direct and indirect costs. Assuming a cost per dose of USD$7.5, the average cost-effectiveness ratio would be USD$663/DALY with Rotarix and USD$1391/DALY with Rotateq. When the price per dose falls below USD$7, both vaccines yield a similar average cost-effectiveness ratio (USD$1063/DALY). The incremental cost-effectiveness ratio of Rotateq versus Rotarix was USD$7787/DALY. The cost-effectiveness ratio was influenced mainly by vaccine cost and cost per hospitalized case. Other programmatic aspects, such as the number of doses to be applied, the likelihood of completing the vaccination schedule with shorter versus longer schedules, and storage space within the cold chain, should be considered in deciding which vaccine should be introduced. In conclusion, vaccinating against rotavirus in Colombia with either vaccine would be very cost-effective. If the cost per vaccinated child falls below USD$3 per dose, vaccination would be cost saving. Copyright 2010 Elsevier Ltd. All rights reserved.
Ko, Wilson; Tranbaugh, Robert; Marmur, Jonathan D.; Supino, Phyllis G.; Borer, Jeffrey S.
2012-01-01
Background During the past 2 decades, percutaneous coronary intervention (PCI) has increased dramatically compared with coronary artery bypass grafting (CABG) for patients with coronary artery disease. However, although the evidence available to all practitioners is similar, the relative distribution of PCI and CABG appears to differ among hospitals and regions. Methods and Results We reviewed the published data from the mandatory New York State Department of Health annual cardiac procedure reports issued from 1994 through 2008 to define trends in PCI and CABG utilization in New York and to compare the PCI/CABG ratios in the metropolitan area to the remainder of the State. During this 15-year interval, the procedure volume changes for CABG, for all cardiac surgeries, for non-CABG cardiac surgeries, and for PCI for New York State were −40%, −20%, +17.5%, and +253%, respectively; for the Manhattan programs, the changes were similar as follows: −61%, −23%, +14%, and +284%. The average PCI/CABG ratio in New York State increased from 1.12 in 1994 to 5.14 in 2008; however, in Manhattan, the average PCI/CABG ratio increased from 1.19 to 8.04 (2008 range: 3.78 to 16.2). The 2008 PCI/CABG ratios of the Manhattan programs were higher than the ratios for New York City programs outside Manhattan, in Long Island, in the northern counties contiguous to New York City, and in the rest of New York State; their averages were 5.84, 5.38, 3.31, and 3.24, respectively. In Manhattan, a patient had a 56% greater chance of receiving PCI than CABG as compared with the rest of New York State; in one Manhattan program, the likelihood was 215% higher. Conclusions There are substantial regional and statewide differences in the utilization of PCI versus CABG among cardiac centers in New York, possibly related to patient characteristics, physician biases, and hospital culture. Understanding these disparities may facilitate the selection of the most appropriate, effective, and evidence-based revascularization strategy. (J Am Heart Assoc. 2012;1:e001446 doi: 10.1161/JAHA.112.001446.) PMID:23130131
Ko, Wilson; Tranbaugh, Robert; Marmur, Jonathan D; Supino, Phyllis G; Borer, Jeffrey S
2012-04-01
During the past 2 decades, percutaneous coronary intervention (PCI) has increased dramatically compared with coronary artery bypass grafting (CABG) for patients with coronary artery disease. However, although the evidence available to all practitioners is similar, the relative distribution of PCI and CABG appears to differ among hospitals and regions. We reviewed the published data from the mandatory New York State Department of Health annual cardiac procedure reports issued from 1994 through 2008 to define trends in PCI and CABG utilization in New York and to compare the PCI/CABG ratios in the metropolitan area to the remainder of the State. During this 15-year interval, the procedure volume changes for CABG, for all cardiac surgeries, for non-CABG cardiac surgeries, and for PCI for New York State were -40%, -20%, +17.5%, and +253%, respectively; for the Manhattan programs, the changes were similar as follows: -61%, -23%, +14%, and +284%. The average PCI/CABG ratio in New York State increased from 1.12 in 1994 to 5.14 in 2008; however, in Manhattan, the average PCI/CABG ratio increased from 1.19 to 8.04 (2008 range: 3.78 to 16.2). The 2008 PCI/CABG ratios of the Manhattan programs were higher than the ratios for New York City programs outside Manhattan, in Long Island, in the northern counties contiguous to New York City, and in the rest of New York State; their averages were 5.84, 5.38, 3.31, and 3.24, respectively. In Manhattan, a patient had a 56% greater chance of receiving PCI than CABG as compared with the rest of New York State; in one Manhattan program, the likelihood was 215% higher. There are substantial regional and statewide differences in the utilization of PCI versus CABG among cardiac centers in New York, possibly related to patient characteristics, physician biases, and hospital culture. Understanding these disparities may facilitate the selection of the most appropriate, effective, and evidence-based revascularization strategy. (J Am Heart Assoc. 2012;1:e001446 doi: 10.1161/JAHA.112.001446.).
Predictors of intraoperative hypotension and bradycardia.
Cheung, Christopher C; Martyn, Alan; Campbell, Norman; Frost, Shaun; Gilbert, Kenneth; Michota, Franklin; Seal, Douglas; Ghali, William; Khan, Nadia A
2015-05-01
Perioperative hypotension and bradycardia in the surgical patient are associated with adverse outcomes, including stroke. We developed and evaluated a new preoperative risk model in predicting intraoperative hypotension or bradycardia in patients undergoing elective noncardiac surgery. Prospective data were collected in 193 patients undergoing elective, noncardiac surgery. Intraoperative hypotension was defined as systolic blood pressure <90 mm Hg for >5 minutes or a 35% decrease in the mean arterial blood pressure. Intraoperative bradycardia was defined as a heart rate of <60 beats/min for >5 minutes. A logistic regression model was developed for predicting intraoperative hypotension or bradycardia with bootstrap validation. Model performance was assessed using area under the receiver operating curves and Hosmer-Lemeshow tests. A total of 127 patients developed hypotension or bradycardia. The average age of participants was 67.6 ± 11.3 years, and 59.1% underwent major surgery. A final 5-item score was developed, including preoperative Heart rate (<60 beats/min), preoperative hypotension (<110/60 mm Hg), Elderly age (>65 years), preoperative renin-Angiotensin blockade (angiotensin-converting enzyme inhibitors, angiotensin receptor blockers, or beta-blockers), Revised cardiac risk index (≥3 points), and Type of surgery (major surgery), entitled the "HEART" score. The HEART score was moderately predictive of intraoperative bradycardia or hypotension (odds ratio, 2.51; 95% confidence interval, 1.79-3.53; C-statistic, 0.75). Maximum points on the HEART score were associated with an increased likelihood ratio for intraoperative bradycardia or hypotension (likelihood ratio, +3.64). The 5-point HEART score was predictive of intraoperative hypotension or bradycardia. These findings suggest a role for using the HEART score to better risk-stratify patients preoperatively and may help guide decisions on perioperative management of blood pressure and heart rate-lowering medications and anesthetic agents. Copyright © 2015 Elsevier Inc. All rights reserved.
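A minimal sketch of the scoring rule as we read the abstract; variable names are ours, and we treat the two preoperative hemodynamic markers as a single "H" item to match the five-letter acronym, which is our interpretation rather than the authors' specification.

```python
# One point per HEART item present; thresholds follow the abstract's text.
def heart_score(hr_below_60_or_bp_below_110_60: bool,
                elderly_over_65: bool,
                on_ras_blockade_or_beta_blocker: bool,
                rcri_at_least_3: bool,
                major_surgery: bool) -> int:
    return sum([hr_below_60_or_bp_below_110_60, elderly_over_65,
                on_ras_blockade_or_beta_blocker, rcri_at_least_3, major_surgery])

print(heart_score(True, True, False, False, True))   # 3 of 5 points
```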
Liou, Kevin; Negishi, Kazuaki; Ho, Suyen; Russell, Elizabeth A; Cranney, Greg; Ooi, Sze-Yuan
2016-08-01
Global longitudinal strain (GLS) is well validated and has important applications in contemporary clinical practice. The aim of this analysis was to evaluate the accuracy of resting peak GLS in the diagnosis of obstructive coronary artery disease (CAD). A systematic literature search was performed through July 2015 using four databases. Data were extracted independently by two authors and correlated before analyses. Using a random-effect model, the pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, and summary area under the curve for GLS were estimated with their respective 95% CIs. Screening of 1,669 articles yielded 10 studies with 1,385 patients appropriate for inclusion in the analysis. The mean age and left ventricular ejection fraction were 59.9 years and 61.1%. On the whole, 54.9% and 20.9% of the patients had hypertension and diabetes, respectively. Overall, abnormal GLS detected moderate to severe CAD with a pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio of 74.4%, 72.1%, 2.9, and 0.35 respectively. The area under the curve and diagnostic odds ratio were 0.81 and 8.5. The mean values of GLS for those with and without CAD were -16.5% (95% CI, -15.8% to -17.3%) and -19.7% (95% CI, -18.8% to -20.7%), respectively. Subgroup analyses for patients with severe CAD and normal left ventricular ejection fractions yielded similar results. Current evidence supports the use of GLS in the detection of moderate to severe obstructive CAD in symptomatic patients. GLS may complement existing diagnostic algorithms and act as an early adjunctive marker of cardiac ischemia. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Wan, Bing; Wang, Siqi; Tu, Mengqi; Wu, Bo; Han, Ping; Xu, Haibo
2017-03-01
The purpose of this meta-analysis was to evaluate the diagnostic accuracy of perfusion magnetic resonance imaging (MRI) as a method for differentiating glioma recurrence from pseudoprogression. The PubMed, Embase, Cochrane Library, and Chinese Biomedical databases were searched comprehensively for relevant studies up to August 3, 2016 according to specific inclusion and exclusion criteria. The quality of the included studies was assessed according to the quality assessment of diagnostic accuracy studies (QUADAS-2). After performing heterogeneity and threshold effect tests, pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were calculated. Publication bias was evaluated visually by a funnel plot and quantitatively using Deek funnel plot asymmetry test. The area under the summary receiver operating characteristic curve was calculated to demonstrate the diagnostic performance of perfusion MRI. Eleven studies covering 416 patients and 418 lesions were included in this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.88 (95% confidence interval [CI] 0.84-0.92), 0.77 (95% CI 0.69-0.84), 3.93 (95% CI 2.83-5.46), 0.16 (95% CI 0.11-0.22), and 27.17 (95% CI 14.96-49.35), respectively. The area under the summary receiver operating characteristic curve was 0.8899. There was no notable publication bias. Sensitivity analysis showed that the meta-analysis results were stable and credible. While perfusion MRI is not the ideal diagnostic method for differentiating glioma recurrence from pseudoprogression, it could improve diagnostic accuracy. Therefore, further research on combining perfusion MRI with other imaging modalities is warranted.
Zhou, Jian; Zhang, Luqing; Yang, Duoxing; Braun, Anika; Han, Zhenhua
2017-01-01
Granite is a typical crystalline material, often used as a building material, but also a candidate host rock for the repository of high-level radioactive waste. The petrographic texture—including mineral constituents, grain shape, size, and distribution—controls the fracture initiation, propagation, and coalescence within granitic rocks. In this paper, experimental laboratory tests and numerical simulations of a grain-based approach in two-dimensional Particle Flow Code (PFC2D) were conducted on the mechanical strength and failure behavior of Alashan granite, in which the grain-like structure of granitic rock was considered. The microparameters for simulating Alashan granite were calibrated based on real laboratory strength values and strain-stress curves. The unconfined uniaxial compressive test and Brazilian indirect tensile test were performed using a grain-based approach to examine and discuss the influence of mineral grain size and distribution on the strength and patterns of microcracks in granitic rocks. The results show it is possible to reproduce the uniaxial compressive strength (UCS) and uniaxial tensile strength (UTS) of Alashan granite using the grain-based approach in PFC2D, and the average mineral size has a positive relationship with the UCS and UTS. During the modeling, most of the generated microcracks were tensile cracks. Moreover, the ratio of the different types of generated microcracks is related to the average grain size. When the average grain size in numerical models is increased, the ratio of the number of intragrain tensile cracks to the number of intergrain tensile cracks increases, and the UCS of rock samples also increases with this ratio. However, the variation in grain size distribution does not have a significant influence on the likelihood of generated microcracks. PMID:28773201
Zhou, Jian; Zhang, Luqing; Yang, Duoxing; Braun, Anika; Han, Zhenhua
2017-07-21
Granite is a typical crystalline material, often used as a building material, but also a candidate host rock for the repository of high-level radioactive waste. The petrographic texture-including mineral constituents, grain shape, size, and distribution-controls the fracture initiation, propagation, and coalescence within granitic rocks. In this paper, experimental laboratory tests and numerical simulations of a grain-based approach in two-dimensional Particle Flow Code (PFC2D) were conducted on the mechanical strength and failure behavior of Alashan granite, in which the grain-like structure of granitic rock was considered. The microparameters for simulating Alashan granite were calibrated based on real laboratory strength values and strain-stress curves. The unconfined uniaxial compressive test and Brazilian indirect tensile test were performed using a grain-based approach to examine and discuss the influence of mineral grain size and distribution on the strength and patterns of microcracks in granitic rocks. The results show it is possible to reproduce the uniaxial compressive strength (UCS) and uniaxial tensile strength (UTS) of Alashan granite using the grain-based approach in PFC2D, and the average mineral size has a positive relationship with the UCS and UTS. During the modeling, most of the generated microcracks were tensile cracks. Moreover, the ratio of the different types of generated microcracks is related to the average grain size. When the average grain size in numerical models is increased, the ratio of the number of intragrain tensile cracks to the number of intergrain tensile cracks increases, and the UCS of rock samples also increases with this ratio. However, the variation in grain size distribution does not have a significant influence on the likelihood of generated microcracks.
Xu, Mei-Mei; Jia, Hong-Yu; Yan, Li-Li; Li, Shan-Shan; Zheng, Yue
2017-01-01
Background: This meta-analysis aimed to provide a pooled analysis of prospective controlled trials comparing the diagnostic accuracy of 22-G and 25-G needles on endoscopic ultrasonography (EUS-FNA) of the solid pancreatic mass. Methods: We established a rigorous study protocol according to Cochrane Collaboration recommendations. We systematically searched the PubMed and Embase databases to identify articles to include in the meta-analysis. Sensitivity, specificity, and corresponding 95% confidence intervals were calculated for 22-G and 25-G needles of individual studies from the contingency tables. Results: Eleven prospective controlled trials included a total of 837 patients (412 with 22-G vs 425 with 25-G). Our outcomes revealed that 25-G needles (92% [95% CI, 89%–95%]) have higher sensitivity than 22-G needles (88% [95% CI, 84%–91%]) on solid pancreatic mass EUS-FNA (P = 0.046). However, there were no significant differences between the 2 groups in overall diagnostic specificity (P = 0.842). The pooled positive likelihood ratio was 12.61 (95% CI, 5.65–28.14) and the negative likelihood ratio was 0.16 (95% CI, 0.12–0.21) for the 22-G needle. The pooled positive likelihood ratio was 8.44 (95% CI, 3.87–18.42) and the negative likelihood ratio was 0.13 (95% CI, 0.09–0.18) for the 25-G needle. The area under the summary receiver operating characteristic curve was 0.97 for the 22-G needle and 0.96 for the 25-G needle. Conclusion: Compared with 22-G EUS-FNA needles, 25-G needles showed superior sensitivity in the evaluation of solid pancreatic lesions by EUS-FNA. PMID:28151856
Accuracy of diagnostic tests to detect asymptomatic bacteriuria during pregnancy.
Mignini, Luciano; Carroli, Guillermo; Abalos, Edgardo; Widmer, Mariana; Amigot, Susana; Nardin, Juan Manuel; Giordano, Daniel; Merialdi, Mario; Arciero, Graciela; Del Carmen Hourquescos, Maria
2009-02-01
A dipslide is a plastic paddle coated with agar that is attached to a plastic cap that screws onto a sterile plastic vial. Our objective was to estimate the diagnostic accuracy of the dipslide culture technique to detect asymptomatic bacteriuria during pregnancy and to evaluate the accuracy of nitrite and leucocyte esterase dipsticks for screening. This was an ancillary study within a trial comparing single-day with 7-day therapy in treating asymptomatic bacteriuria. Clean-catch midstream samples were collected from pregnant women seeking routine care. Positive and negative likelihood ratios, sensitivity, and specificity were estimated for the culture-based dipslide (for detection) and for chemical dipsticks measuring nitrites, leukocyte esterase, or both (for screening), using traditional urine culture as the "gold standard." A total of 3,048 eligible pregnant women were screened. The prevalence of asymptomatic bacteriuria was 15%, with Escherichia coli the most prevalent organism. The likelihood ratio for detecting asymptomatic bacteriuria with a positive dipslide test was 225 (95% confidence interval [CI] 113-449), increasing the probability of asymptomatic bacteriuria to 98%; the likelihood ratio for a negative dipslide test was 0.02 (95% CI 0.01-0.05), reducing the probability of bacteriuria to less than 1%. The positive likelihood ratio of leukocyte esterase and nitrite dipsticks (when both or either one was positive) was 6.95 (95% CI 5.80-8.33), increasing the probability of bacteriuria to only 54%; the negative likelihood ratio was 0.50 (95% CI 0.45-0.57), reducing the probability to 8%. A pregnant woman with a positive dipslide test is very likely to have a definitive diagnosis of asymptomatic bacteriuria, whereas a negative result effectively rules out the presence of bacteriuria. Dipsticks that measure nitrites and leukocyte esterase have low sensitivity for use in screening for asymptomatic bacteriuria during gestation. ISRCTN, isrctn.org, 1196608 II.
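The quoted probabilities follow directly from the odds form of Bayes' rule; a worked check using the abstract's own numbers:

```python
# Reproduces the posttest probabilities quoted above from a 15% prevalence.
def posttest(prob: float, lr: float) -> float:
    odds = prob / (1.0 - prob) * lr
    return odds / (1.0 + odds)

print(posttest(0.15, 225))    # ~0.98: positive dipslide -> about 98%
print(posttest(0.15, 0.02))   # ~0.004: negative dipslide -> under 1%
```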
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2014-09-01
Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to yield shorter confidence intervals on average and higher probabilities of P-values below important thresholds than alternative approaches. The bias-adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.
1982-04-01
S. (1979), "Conflict Among Criteria for Testing Hypothesis: Extension and Comments," Econometrica, 47, 203-207 Breusch , T. S. and Pagan , A. R. (1980...Savin, N. E. (1977), "Conflict Among Criteria for Testing Hypothesis in the Multivariate Linear Regression Model," Econometrica, 45, 1263-1278 Breusch , T...VNCLASSIFIED RAND//-6756NL U l~ I- THE RELATION AMONG THE LIKELIHOOD RATIO-, WALD-, AND LAGRANGE MULTIPLIER TESTS AND THEIR APPLICABILITY TO SMALL SAMPLES
A Likelihood Ratio Test Regarding Two Nested But Oblique Order Restricted Hypotheses.
1982-11-01
Technical Report #90. American Mathematical Society 1979 subject classification: Primary 62F03; Secondary 62E15. Key words and phrases: order ... model. A likelihood ratio test for these two restrictions is studied. The investigation was stimulated partly by a problem encountered in psychiatric research: Winokur et al. (1971) studied data on psychiatric illnesses afflicting ...
Li, Xiang; Kuk, Anthony Y C; Xu, Jinfeng
2014-12-10
Human biomonitoring of exposure to environmental chemicals is important. Individual monitoring is not viable because of low individual exposure level or insufficient volume of materials and the prohibitive cost of taking measurements from many subjects. Pooling of samples is an efficient and cost-effective way to collect data. Estimation is, however, complicated as individual values within each pool are not observed but are only known up to their average or weighted average. The distribution of such averages is intractable when the individual measurements are lognormally distributed, which is a common assumption. We propose to replace the intractable distribution of the pool averages by a Gaussian likelihood to obtain parameter estimates. If the pool size is large, this method produces statistically efficient estimates, but regardless of pool size, the method yields consistent estimates as the number of pools increases. An empirical Bayes (EB) Gaussian likelihood approach, as well as its Bayesian analog, is developed to pool information from various demographic groups by using a mixed-effect formulation. We also discuss methods to estimate the underlying mean-variance relationship and to select a good model for the means, which can be incorporated into the proposed EB or Bayes framework. By borrowing strength across groups, the EB estimator is more efficient than the individual group-specific estimator. Simulation results show that the EB Gaussian likelihood estimates outperform a previous method proposed for the National Health and Nutrition Examination Surveys with much smaller bias and better coverage in interval estimation, especially after correction of bias. Copyright © 2014 John Wiley & Sons, Ltd.
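The core device in the abstract above, replacing the intractable distribution of pool averages with a moment-matched Gaussian, can be sketched as follows. This is an illustrative reconstruction on simulated data, not the authors' code; the simulation settings and names are assumptions:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    true_mu, true_sigma, pool_size, n_pools = 1.0, 0.5, 10, 200

    # Each observation is the average of pool_size lognormal individuals.
    pools = rng.lognormal(true_mu, true_sigma,
                          size=(n_pools, pool_size)).mean(axis=1)

    def neg_log_lik(params):
        mu, log_sigma = params
        s2 = np.exp(2 * log_sigma)
        mean = np.exp(mu + s2 / 2)                    # E[X] for a lognormal
        var = (np.exp(s2) - 1) * np.exp(2 * mu + s2)  # Var[X] for a lognormal
        # Gaussian approximation to the distribution of a pool average.
        return -norm.logpdf(pools, loc=mean,
                            scale=np.sqrt(var / pool_size)).sum()

    fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
    print(fit.x[0], np.exp(fit.x[1]))                 # close to (1.0, 0.5)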
Large signal-to-noise ratio quantification in MLE for ARARMAX models
NASA Astrophysics Data System (ADS)
Zou, Yiqun; Tang, Xiafei
2014-06-01
It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid potential local convergence in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we construct the amplitude coefficient, which is equivalent to the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by minimising an elaborately designed multi-variable cost function that unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for derivation of the threshold and on a real gas turbine engine system for model identification. Finally, graphical validation of the threshold on a two-dimensional plot is discussed.
Slower nicotine metabolism among postmenopausal Polish smokers.
Kosmider, Leon; Delijewski, Marcin; Koszowski, Bartosz; Sobczak, Andrzej; Benowitz, Neal L; Goniewicz, Maciej L
2018-06-01
A non-invasive phenotypic indicator of the rate of nicotine metabolism is the nicotine metabolite ratio (NMR), defined as the ratio of two major metabolites of nicotine, trans-3'-hydroxycotinine/cotinine. The rate of nicotine metabolism has important clinical implications for the likelihood of successful quitting with nicotine replacement therapy (NRT). We conducted a study to measure NMR among Polish smokers. In a cross-sectional study of 180 daily cigarette smokers (42% men; average age 34.6±13.0), we collected spot urine samples and measured trans-3'-hydroxycotinine (3-HC) and cotinine levels by LC-MS/MS. We calculated the NMR (molar ratio) and analyzed variations in NMR among groups of smokers. In the whole study group, the average NMR was 4.8 (IQR 3.4-7.3). Women below 51 years had a significantly greater NMR than the rest of the population (6.4; IQR 4.1-8.8 vs. 4.3; IQR 2.8-6.4). No differences were found among age groups of male smokers. This is the first study to describe variations in nicotine metabolism among Polish smokers. Our findings indicate that young women metabolize nicotine faster than the rest of the population. This finding is consistent with the known effects of estrogen to induce CYP2A6 activity. Young women may require higher doses of NRT or non-nicotine medications for the most effective smoking cessation treatment. Copyright © 2017 Institute of Pharmacology, Polish Academy of Sciences. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Hoffbeck, Joseph P.; Landgrebe, David A.
1994-01-01
Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
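The affine-invariance claim is easy to verify numerically. The sketch below (our construction, not the paper's code) classifies simulated two-band data by Gaussian maximum likelihood before and after an arbitrary non-singular affine transformation and confirms that the labels agree:

    import numpy as np

    rng = np.random.default_rng(1)
    X0 = rng.normal([0.0, 0.0], 1.0, size=(50, 2))   # training class 0
    X1 = rng.normal([3.0, 1.0], 1.0, size=(50, 2))   # training class 1
    test = rng.normal([1.5, 0.5], 1.5, size=(20, 2))

    def ml_classify(train0, train1, pts):
        # Assign each point to the class with the larger Gaussian log-likelihood.
        def log_lik(train, pts):
            m, S = train.mean(0), np.cov(train.T)
            d = pts - m
            q = np.einsum('ij,jk,ik->i', d, np.linalg.inv(S), d)
            return -0.5 * (q + np.log(np.linalg.det(S)))
        return (log_lik(train1, pts) > log_lik(train0, pts)).astype(int)

    A = np.array([[2.0, 0.3], [-0.5, 1.0]])          # any non-singular matrix
    b = np.array([5.0, -2.0])
    t = lambda Z: Z @ A.T + b                        # affine transformation

    same = np.array_equal(ml_classify(X0, X1, test),
                          ml_classify(t(X0), t(X1), t(test)))
    print(same)                                      # True: labels unchanged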
Order-restricted inference for means with missing values.
Wang, Heng; Zhong, Ping-Shou
2017-09-01
Missing values appear very often in many applications, but the problem of missing values has not received much attention in testing order-restricted alternatives. Under the missing at random (MAR) assumption, we impute the missing values nonparametrically using kernel regression. For data with imputation, the classical likelihood ratio test designed for testing order-restricted means is no longer applicable since the likelihood does not exist. This article proposes a novel method for constructing test statistics for assessing means with an increasing or decreasing order based on the jackknife empirical likelihood (JEL) ratio. It is shown that the JEL ratio statistic evaluated under the null hypothesis converges to a chi-bar-square distribution whose weights depend on the missing probabilities and the nonparametric imputation. A simulation study shows that the proposed test performs well under various missing scenarios and is robust for normally and nonnormally distributed data. The proposed method is applied to an Alzheimer's Disease Neuroimaging Initiative data set to find a biomarker for the diagnosis of Alzheimer's disease. © 2017, The International Biometric Society.
Bazot, Marc; Daraï, Emile
2018-03-01
The aim of the present review, conducted according to PRISMA statement recommendations, was to evaluate the contribution of transvaginal sonography (TVS) and magnetic resonance imaging (MRI) to the diagnosis of adenomyosis. Although there is a lack of consensus on adenomyosis classification, three subtypes are described: internal adenomyosis, external adenomyosis, and adenomyomas. Using TVS, whatever the subtype, pooled sensitivities, pooled specificities, and pooled positive likelihood ratios are 0.72-0.82, 0.85-0.81, and 4.67-3.7, respectively, but with high heterogeneity between the studies. MRI has a pooled sensitivity of 0.77, specificity of 0.89, positive likelihood ratio of 6.5, and negative likelihood ratio of 0.2 for all subtypes. Our results suggest that MRI is more useful than TVS in the diagnosis of adenomyosis. Further studies are required to determine the performance of direct signs (cystic component) and indirect signs (characteristics of the junctional zone) to avoid misdiagnosis of adenomyosis. Copyright © 2018 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Sun, Changling; Zhang, Yayun; Han, Xue; Du, Xiaodong
2018-03-01
Objective The purpose of this study was to verify the effectiveness of the narrow band imaging (NBI) system in diagnosing nasopharyngeal cancer (NPC) as compared with white light endoscopy. Data Sources PubMed, Cochrane Library, EMBASE, CNKI, and Wan Fang databases. Review Methods Data analyses were performed with Meta-Disc. The updated Quality Assessment of Diagnostic Accuracy Studies-2 tool was used to assess study quality and potential bias. Publication bias was assessed with a Deeks asymmetry test. The registry number of the protocol published on PROSPERO is CRD42015026244. Results This meta-analysis included 10 studies of 1337 lesions. For NBI diagnosis of NPC, the pooled values were as follows: sensitivity, 0.83 (95% CI, 0.80-0.86); specificity, 0.91 (95% CI, 0.89-0.93); positive likelihood ratio, 8.82 (95% CI, 5.12-15.21); negative likelihood ratio, 0.18 (95% CI, 0.12-0.27); and diagnostic odds ratio, 65.73 (95% CI, 36.74-117.60). The area under the curve was 0.9549. For white light endoscopy in diagnosing NPC, the pooled values were as follows: sensitivity, 0.79 (95% CI, 0.75-0.83); specificity, 0.87 (95% CI, 0.84-0.90); positive likelihood ratio, 5.02 (95% CI, 1.99-12.65); negative likelihood ratio, 0.34 (95% CI, 0.24-0.49); and diagnostic odds ratio, 16.89 (95% CI, 5.98-47.66). The area under the curve was 0.8627. The evaluation of heterogeneity, calculated per the diagnostic odds ratio, gave an I² of 0.326. No marked publication bias (P = .68) existed in this meta-analysis. Conclusion The sensitivity and specificity of NBI for the diagnosis of NPC are similar to those of white light endoscopy, and the potential value of NBI for the diagnosis of NPC needs to be validated further.
Carey, David L; Blanch, Peter; Ong, Kok-Leong; Crossley, Kay M; Crow, Justin; Morris, Meg E
2017-08-01
(1) To investigate whether a daily acute:chronic workload ratio informs injury risk in Australian football players; (2) to identify which combination of workload variable, acute and chronic time window best explains injury likelihood. Workload and injury data were collected from 53 athletes over 2 seasons in a professional Australian football club. Acute:chronic workload ratios were calculated daily for each athlete, and modelled against non-contact injury likelihood using a quadratic relationship. 6 workload variables, 8 acute time windows (2-9 days) and 7 chronic time windows (14-35 days) were considered (336 combinations). Each parameter combination was compared for injury likelihood fit (using R²). The ratio of moderate speed running workload (18-24 km/h) in the previous 3 days (acute time window) compared with the previous 21 days (chronic time window) best explained the injury likelihood in matches (R²=0.79) and in the immediate 2 or 5 days following matches (R²=0.76-0.82). The 3:21 acute:chronic workload ratio discriminated between high-risk and low-risk athletes (relative risk=1.98-2.43). Using the previous 6 days to calculate the acute workload time window yielded similar results. The choice of acute time window significantly influenced model performance and appeared to reflect the competition and training schedule. Daily workload ratios can inform injury risk in Australian football. Clinicians and conditioning coaches should consider the sport-specific schedule of competition and training when choosing acute and chronic time windows. For Australian football, the ratio of moderate speed running in a 3-day or 6-day acute time window and a 21-day chronic time window best explained injury risk. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
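The daily acute:chronic workload ratio itself is just a pair of rolling averages. A minimal sketch with synthetic data (the load distribution is an assumption of ours) using the paper's best-performing 3-day and 21-day windows:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    days = pd.date_range("2016-01-01", periods=60, freq="D")
    # Hypothetical daily moderate-speed running distance in metres.
    load = pd.Series(rng.gamma(4.0, 200.0, size=60), index=days)

    acute = load.rolling(window=3, min_periods=3).mean()      # 3-day window
    chronic = load.rolling(window=21, min_periods=21).mean()  # 21-day window
    acwr = acute / chronic

    print(acwr.dropna().tail())  # ratios well above 1 flag recent load spikes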
Carey, David L; Blanch, Peter; Ong, Kok-Leong; Crossley, Kay M; Crow, Justin; Morris, Meg E
2017-01-01
Aims (1) To investigate whether a daily acute:chronic workload ratio informs injury risk in Australian football players; (2) to identify which combination of workload variable, acute and chronic time window best explains injury likelihood. Methods Workload and injury data were collected from 53 athletes over 2 seasons in a professional Australian football club. Acute:chronic workload ratios were calculated daily for each athlete, and modelled against non-contact injury likelihood using a quadratic relationship. 6 workload variables, 8 acute time windows (2–9 days) and 7 chronic time windows (14–35 days) were considered (336 combinations). Each parameter combination was compared for injury likelihood fit (using R²). Results The ratio of moderate speed running workload (18–24 km/h) in the previous 3 days (acute time window) compared with the previous 21 days (chronic time window) best explained the injury likelihood in matches (R²=0.79) and in the immediate 2 or 5 days following matches (R²=0.76–0.82). The 3:21 acute:chronic workload ratio discriminated between high-risk and low-risk athletes (relative risk=1.98–2.43). Using the previous 6 days to calculate the acute workload time window yielded similar results. The choice of acute time window significantly influenced model performance and appeared to reflect the competition and training schedule. Conclusions Daily workload ratios can inform injury risk in Australian football. Clinicians and conditioning coaches should consider the sport-specific schedule of competition and training when choosing acute and chronic time windows. For Australian football, the ratio of moderate speed running in a 3-day or 6-day acute time window and a 21-day chronic time window best explained injury risk. PMID:27789430
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
Sample Size Bias in Judgments of Perceptual Averages
ERIC Educational Resources Information Center
Price, Paul C.; Kimura, Nicole M.; Smith, Andrew R.; Marshall, Lindsay D.
2014-01-01
Previous research has shown that people exhibit a sample size bias when judging the average of a set of stimuli on a single dimension. The more stimuli there are in the set, the greater people judge the average to be. This effect has been demonstrated reliably for judgments of the average likelihood that groups of people will experience negative,…
Is there a successful business case for telepharmacy?
Khan, Shamima; Snyder, Herbert W; Rathke, Ann M; Scott, David M; Peterson, Charles D
2008-04-01
The purpose of this study was to assess the financial operation of a Single Business Unit (SBU), consisting of one central retail pharmacy and two remote retail telepharmacies. Analyses of income statements and balance sheets for three consecutive years (2002-2004) were conducted. Several items from these statements were compared to the industry average. Gross profit increased from $260,093 in 2002 to $502,262 in 2004. The net operating income percent was 2.9 percentage points below the industry average in 2002, 3.9 percentage points below in 2003, and 1.3 percentage points above in 2004. The inventory turnover ratio remained consistently below the industry average, but it also increased over the period. This is an area of concern, given the high cost of pharmaceuticals and a higher likelihood of obsolescence that exists with a time-sensitive inventory. Despite these concerns, the overall trend for the SBU is positive. The rate of growth between 2002 and 2004 shows that it is getting close to median sales as reported in the NCPA Digest. The results of this study indicate that multiple locations become profitable when a sufficient volume of patients (sales) is reached, combined with efficient use of the pharmacist's time.
Tangen, C M; Koch, G G
1999-03-01
In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is an (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.
Haidar, Ziad A; Papanna, Ramesha; Sibai, Baha M; Tatevian, Nina; Viteri, Oscar A; Vowels, Patricia C; Blackwell, Sean C; Moise, Kenneth J
2017-08-01
Traditionally, 2-dimensional ultrasound parameters have been used for the diagnosis of a suspected morbidly adherent placenta previa. More objective techniques have not been well studied yet. The objective of the study was to determine the ability of prenatal 3-dimensional power Doppler analysis of flow and vascular indices to predict the morbidly adherent placenta objectively. A prospective cohort study was performed in women between 28 and 32 gestational weeks with known placenta previa. Patients underwent a two-dimensional gray-scale ultrasound that determined management decisions. 3-Dimensional power Doppler volumes were obtained during the same examination and vascular, flow, and vascular flow indices were calculated after manual tracing of the viewed placenta in the sweep; data were blinded to obstetricians. Morbidly adherent placenta was confirmed by histology. Severe morbidly adherent placenta was defined as increta/percreta on histology, blood loss >2000 mL, and >2 units of PRBC transfused. Sensitivities, specificities, predictive values, and likelihood ratios were calculated. Student t and χ² tests, logistic regression, receiver-operating characteristic curves, and intra- and interrater agreements using Kappa statistics were performed. The following results were found: (1) 50 women were studied: 23 had morbidly adherent placenta, of which 12 (52.2%) were severe morbidly adherent placenta; (2) 2-dimensional parameters diagnosed morbidly adherent placenta with a sensitivity of 82.6% (95% confidence interval, 60.4-94.2), a specificity of 88.9% (95% confidence interval, 69.7-97.1), a positive predictive value of 86.3% (95% confidence interval, 64.0-96.4), a negative predictive value of 85.7% (95% confidence interval, 66.4-95.3), a positive likelihood ratio of 7.4 (95% confidence interval, 2.5-21.9), and a negative likelihood ratio of 0.2 (95% confidence interval, 0.08-0.48); (3) mean values of the vascular index (32.8 ± 7.4) and the vascular flow index (14.2 ± 3.8) were higher in morbidly adherent placenta (P < .001); (4) area under the receiver-operating characteristic curve for the vascular and vascular flow indices were 0.99 and 0.97, respectively; (5) the vascular index ≥21 predicted morbidly adherent placenta with a sensitivity and a specificity of 95% (95% confidence interval, 88.2-96.9) and 91%, respectively (95% confidence interval, 87.5-92.4), 92% positive predictive value (95% confidence interval, 85.5-94.3), 90% negative predictive value (95% confidence interval, 79.9-95.3), positive likelihood ratio of 10.55 (95% confidence interval, 7.06-12.75), and negative likelihood ratio of 0.05 (95% confidence interval, 0.03-0.13); and (6) for the severe morbidly adherent placenta, 2-dimensional ultrasound had a sensitivity of 33.3% (95% confidence interval, 11.3-64.6), a specificity of 81.8% (95% confidence interval, 47.8-96.8), a positive predictive value of 66.7% (95% confidence interval, 24.1-94.1), a negative predictive value of 52.9% (95% confidence interval, 28.5-76.1), a positive likelihood ratio of 1.83 (95% confidence interval, 0.41-8.11), and a negative likelihood ratio of 0.81 (95% confidence interval, 0.52-1.26).
A vascular index ≥31 predicted the diagnosis of a severe morbidly adherent placenta with a 100% sensitivity (95% confidence interval, 72-100), a 90% specificity (95% confidence interval, 81.7-93.8), an 88% positive predictive value (95% confidence interval, 55.0-91.3), a 100% negative predictive value (95% confidence interval, 90.9-100), a positive likelihood ratio of 10.0 (95% confidence interval, 3.93-16.13), and a negative likelihood ratio of 0 (95% confidence interval, 0-0.34). Intrarater and interrater agreements were 94% (P < .001) and 93% (P < .001), respectively. The vascular index accurately predicts the morbidly adherent placenta in patients with placenta previa. In addition, 3-dimensional power Doppler vascular and vascular flow indices were more predictive of severe cases of morbidly adherent placenta compared with 2-dimensional ultrasound. This objective technique may limit the variations in diagnosing morbidly adherent placenta because of the subjectivity of 2-dimensional ultrasound interpretations. Copyright © 2017 Elsevier Inc. All rights reserved.
Predictors and overestimation of recalled mobile phone use among children and adolescents.
Aydin, Denis; Feychting, Maria; Schüz, Joachim; Andersen, Tina Veje; Poulsen, Aslak Harbo; Prochazka, Michaela; Klæboe, Lars; Kuehni, Claudia E; Tynes, Tore; Röösli, Martin
2011-12-01
A growing body of literature addresses possible health effects of mobile phone use in children and adolescents by relying on the study participants' retrospective reconstruction of mobile phone use. In this study, we used data from the international case-control study CEFALO to compare self-reported with objectively operator-recorded mobile phone use. The aim of the study was to assess predictors of the level of mobile phone use as well as factors that are associated with overestimating one's own mobile phone use. For the cumulative number and duration of calls as well as for the time since first subscription, we calculated the ratio of self-reported to operator-recorded mobile phone use. We used multiple linear regression models to assess possible predictors of the average number and duration of calls per day and logistic regression models to assess possible predictors of overestimation. The cumulative number and duration of calls as well as the time since first subscription of mobile phones were overestimated on average by the study participants. The likelihood of overestimating the number and duration of calls was not significantly different for controls compared to cases (OR=1.1, 95% CI: 0.5 to 2.5 and OR=1.9, 95% CI: 0.85 to 4.3, respectively). However, the likelihood of overestimating was associated with other health-related factors such as age and sex. As a consequence, such factors act as confounders in studies relying solely on self-reported mobile phone use and have to be considered in the analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.
Dai, Cong; Jiang, Min; Sun, Ming-Jun; Cao, Qin
2018-05-01
Fecal immunochemical test (FIT) is a promising marker for assessment of inflammatory bowel disease activity. However, the utility of FIT for predicting mucosal healing (MH) of ulcerative colitis (UC) patients has yet to be clearly demonstrated. The objective of our study was to perform a diagnostic test accuracy meta-analysis evaluating the diagnostic accuracy of FIT in predicting MH of UC patients. We systematically searched the databases from inception to November 2017 for studies that evaluated MH in UC. The methodological quality of each study was assessed according to the Quality Assessment of Diagnostic Accuracy Studies checklist. The extracted data were pooled using a summary receiver operating characteristic curve model. A random-effects model was used to summarize the diagnostic odds ratio, sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Six studies comprising 625 UC patients were included in the meta-analysis. The pooled sensitivity and specificity values for predicting MH in UC were 0.77 (95% confidence interval [CI], 0.72-0.81) and 0.81 (95% CI, 0.76-0.85), respectively. The FIT level had a high rule-in value (positive likelihood ratio, 3.79; 95% CI, 2.85-5.03) and a moderate rule-out value (negative likelihood ratio, 0.26; 95% CI, 0.16-0.43) for predicting MH in UC. The results of the receiver operating characteristic curve analysis (area under the curve, 0.88; standard error of the mean, 0.02) and diagnostic odds ratio (18.08; 95% CI, 9.57-34.13) also revealed improved discrimination for identifying MH in UC with FIT concentration. Our meta-analysis has found that FIT is a simple, reliable non-invasive marker for predicting MH in UC patients. © 2018 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
A likelihood ratio test for evolutionary rate shifts and functional divergence among proteins
Knudsen, Bjarne; Miyamoto, Michael M.
2001-01-01
Changes in protein function can lead to changes in the selection acting on specific residues. This can often be detected as evolutionary rate changes at the sites in question. A maximum-likelihood method for detecting evolutionary rate shifts at specific protein positions is presented. The method determines significance values of the rate differences to give a sound statistical foundation for the conclusions drawn from the analyses. A statistical test for detecting slowly evolving sites is also described. The methods are applied to a set of Myc proteins for the identification of both conserved sites and those with changing evolutionary rates. Those positions with conserved and changing rates are related to the structures and functions of their proteins. The results are compared with an earlier Bayesian method, thereby highlighting the advantages of the new likelihood ratio tests. PMID:11734650
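The general recipe behind such tests, comparing a constrained and an unconstrained model and referring twice the log-likelihood difference to a chi-square distribution, can be sketched on a toy example. Below, two hypothetical groups of counts are tested for a common Poisson rate; this illustrates only the likelihood ratio test logic and is not the authors' software:

    import numpy as np
    from scipy.stats import poisson, chi2

    counts_a = np.array([3, 5, 4, 6, 2])      # hypothetical counts, group A
    counts_b = np.array([9, 11, 8, 12, 10])   # hypothetical counts, group B

    def loglik(x, rate):
        return poisson.logpmf(x, rate).sum()

    pooled = np.concatenate([counts_a, counts_b]).mean()   # H0: one shared rate
    ll0 = loglik(counts_a, pooled) + loglik(counts_b, pooled)
    ll1 = loglik(counts_a, counts_a.mean()) + loglik(counts_b, counts_b.mean())

    lrt = 2 * (ll1 - ll0)             # likelihood ratio statistic
    print(lrt, chi2.sf(lrt, df=1))    # small p-value => evidence of a rate shift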
Martell, R F; Desmet, A L
2001-12-01
This study departed from previous research on gender stereotyping in the leadership domain by adopting a more comprehensive view of leadership and using a diagnostic-ratio measurement strategy. One hundred and fifty-one managers (95 men and 56 women) judged the leadership effectiveness of male and female middle managers by providing likelihood ratings for 14 categories of leader behavior. As expected, the likelihood ratings for some leader behaviors were greater for male managers, whereas for other leader behaviors, the likelihood ratings were greater for female managers or were no different. Leadership ratings revealed some evidence of a same-gender bias. Providing explicit verification of managerial success had only a modest effect on gender stereotyping. The merits of adopting a probabilistic approach in examining the perception and treatment of stigmatized groups are discussed.
Identical twins in forensic genetics - Epidemiology and risk based estimation of weight of evidence.
Tvedebrink, Torben; Morling, Niels
2015-12-01
The increase in the number of forensic genetic loci used for identification purposes results in infinitesimal random match probabilities. These probabilities are computed under assumptions made for rather simple population genetic models. Often, the forensic expert reports likelihood ratios where the alternative hypothesis is assumed not to encompass close relatives. However, this approach implies that important factors present in real human populations are discarded, and it may be very unfavourable to the defendant. In this paper, we discuss some important aspects concerning the closest familial relationship, identical (monozygotic) twins, when reporting the weight of evidence. This can be done even when the suspect has no knowledge of an identical twin or when official records hold no twin information about the suspect. The derived expressions are not original, as several authors have previously published results accounting for close familial relationships. However, we revisit the discussion to increase awareness among forensic genetic practitioners and include new information on medical and societal factors to assess the risk of not considering a monozygotic twin as the true perpetrator. Accounting for a monozygotic twin in the weight of evidence implies that the likelihood ratio is truncated at a maximal value that depends on the prevalence of monozygotic twins and the societal efficiency of recognising a monozygotic twin. If a monozygotic twin is considered as an alternative proposition, data relevant to Danish society suggest that the threshold of likelihood ratios should be approximately between 150,000 and 2,000,000 in order to take the risk of an unrecognised identical, monozygotic twin into consideration. In other societies, the threshold of the likelihood ratio in crime cases may reach other, often lower, values depending on the recognition of monozygotic twins and the age of the suspect. In general, more strictly kept registries will imply larger thresholds on the likelihood ratio as the monozygotic twin explanation becomes less probable. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
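The truncation argument can be made concrete with a back-of-the-envelope calculation. In the sketch below (our illustration; both input values are assumptions, not the paper's figures), the effective likelihood ratio is capped at roughly the reciprocal of the probability that the suspect has an unrecognised monozygotic twin:

    mz_co_twin_prevalence = 0.008  # assumed share of people with an MZ co-twin
    p_unrecognised = 8e-4          # assumed chance the twin goes unrecognised

    q = mz_co_twin_prevalence * p_unrecognised
    print(1.0 / q)  # ~156,000: the order of the lower threshold quoted above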
Empirical likelihood inference in randomized clinical trials.
Zhang, Biao
2017-01-01
In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to undertake adjustment for baseline characteristics in order to increase the precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as the existing efficient adjusted estimators [1] when separate treatment-specific working regression models are correctly specified, yet is at least as efficient as the existing efficient adjusted estimators [1] for any given treatment-specific working regression models, whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study to compare the finite sample performance of various methods along with some results on analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema
2016-08-10
Many chemometric tools are invaluable and have proven effective in data mining and substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data due to the rapid development of advanced analytical techniques delivering much information in a single measurement run. This especially concerns spectra, which are frequently used as the subject of comparative analysis in, e.g., forensic sciences. In the present study, microtraces collected from the scenes of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry and car paints were analysed using Raman spectroscopy. In the forensic context, analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences, using the likelihood ratio approach. However, for proper construction of LR models for highly multivariate data, such as spectra, chemometric tools must be employed for substantial data compression. Conversion from the classical feature representation to a distance representation was proposed for revealing hidden data peculiarities, and linear discriminant analysis was further applied for minimising the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem of highly multivariate and correlated data after proper extraction of the most relevant features and variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.
flowVS: channel-specific variance stabilization in flow cytometry.
Azad, Ariful; Rajwa, Bartek; Pothen, Alex
2016-07-28
Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances. We present a variance-stabilization algorithm, called flowVS, that removes the mean-variance correlations from cell populations identified in each fluorescence channel. flowVS transforms each channel from all samples of a data set by the inverse hyperbolic sine (asinh) transformation. For each channel, the parameters of the transformation are optimally selected by Bartlett's likelihood-ratio test so that the populations attain homogeneous variances. The optimum parameters are then used to transform the corresponding channels in every sample. flowVS is therefore an explicit variance-stabilization method that stabilizes within-population variances in each channel by evaluating the homoskedasticity of clusters with a likelihood-ratio test. With two publicly available datasets, we show that flowVS removes the mean-variance dependence from raw FC data and makes the within-population variance relatively homogeneous. We demonstrate that alternative transformation techniques such as flowTrans, flowScape, logicle, and FCSTrans might not stabilize variance. Besides flow cytometry, flowVS can also be applied to stabilize variance in microarray data. With a publicly available data set we demonstrate that flowVS performs as well as the VSN software, a state-of-the-art approach developed for microarrays. The homogeneity of variance in cell populations across FC samples is desirable when extracting features uniformly and comparing cell populations with different levels of marker expressions. The newly developed flowVS algorithm solves the variance-stabilization problem in FC and microarrays by optimally transforming data with the help of Bartlett's likelihood-ratio test. On two publicly available FC datasets, flowVS stabilizes within-population variances more evenly than the available transformation and normalization techniques. flowVS-based variance stabilization can help in performing comparison and alignment of phenotypically identical cell populations across different samples. flowVS and the datasets used in this paper are publicly available in Bioconductor.
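The channel-wise optimisation can be sketched compactly: choose the asinh cofactor that makes Bartlett's statistic for equality of population variances as small as possible. flowVS itself is an R/Bioconductor package, so the Python sketch below only mirrors the idea; the synthetic populations and the cofactor grid are assumptions:

    import numpy as np
    from scipy.stats import bartlett

    rng = np.random.default_rng(3)
    # Hypothetical populations in one channel with mean-linked variance.
    pops = [rng.normal(m, 0.4 * m, size=500) for m in (200.0, 900.0, 4000.0)]

    def bartlett_stat(cofactor):
        # Transform each population and measure variance homogeneity.
        transformed = [np.arcsinh(p / cofactor) for p in pops]
        stat, _pval = bartlett(*transformed)
        return stat

    cofactors = np.logspace(0, 4, 200)   # candidate cofactors 1 .. 10^4
    best = min(cofactors, key=bartlett_stat)
    print(best, bartlett_stat(best))     # cofactor that best equalises variances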
Maximum likelihood estimation for periodic autoregressive moving average models
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
Diagnostic Capability of Spectral Domain Optical Coherence Tomography for Glaucoma
Wu, Huijuan; de Boer, Johannes F.; Chen, Teresa C.
2012-01-01
Purpose To determine the diagnostic capability of spectral domain optical coherence tomography (OCT) in glaucoma patients with visual field (VF) defects. Design Prospective, cross-sectional study. Methods Setting Participants were recruited from a university hospital clinic. Study Population One eye of 85 normal subjects and 61 glaucoma patients [with average VF mean deviation (MD) of -9.61 ± 8.76 dB] was randomly selected for the study. A subgroup of the glaucoma patients with early VF defects was analyzed separately. Observation Procedures Spectralis OCT circular scans were performed to obtain peripapillary retinal nerve fiber layer (RNFL) thicknesses. The RNFL diagnostic parameters based on the normative database were used alone or in combination for identifying glaucomatous RNFL thinning. Main Outcome Measures To evaluate diagnostic performance, calculations included areas under the receiver operating characteristic curve (AROC), sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio. Results Overall RNFL thickness had the highest AROC value (0.952 for all patients, 0.895 for the early glaucoma subgroup). For all patients, the highest sensitivity (98.4%, CI 96.3-100%) was achieved by using two criteria: ≥1 RNFL sectors being abnormal at the < 5% level, and overall classification of borderline or outside normal limits, with specificities of 88.9% (CI 84.0-94.0%) and 87.1% (CI 81.6-92.5%) respectively for these two criteria. Conclusions Statistical parameters for evaluating the diagnostic performance of the Spectralis spectral domain OCT were good for early perimetric glaucoma and excellent for moderately-advanced perimetric glaucoma. PMID:22265147
Yadav, Ravi K; Begum, Viquar U; Addepalli, Uday K; Senthil, Sirisha; Garudadri, Chandra S; Rao, Harsha L
2016-02-01
To compare the abilities of retinal nerve fiber layer (RNFL) parameters of variable corneal compensation (VCC) and enhanced corneal compensation (ECC) algorithms of scanning laser polarimetry (GDx) in detecting various severities of glaucoma. Two hundred and eighty-five eyes of 194 subjects from the Longitudinal Glaucoma Evaluation Study who underwent GDx VCC and ECC imaging were evaluated. Abilities of RNFL parameters of GDx VCC and ECC to diagnose glaucoma were compared using area under receiver operating characteristic curves (AUC), sensitivities at fixed specificities, and likelihood ratios. After excluding 5 eyes that failed to satisfy manufacturer-recommended quality parameters with ECC and 68 with VCC, 56 eyes of 41 normal subjects and 161 eyes of 121 glaucoma patients [36 eyes with preperimetric glaucoma, 52 eyes with early (MD>-6 dB), 34 with moderate (MD between -6 and -12 dB), and 39 with severe glaucoma (MD<-12 dB)] were included for the analysis. Inferior RNFL, average RNFL, and nerve fiber indicator parameters showed the best AUCs and sensitivities both with GDx VCC and ECC in diagnosing all severities of glaucoma. AUCs and sensitivities of all RNFL parameters were comparable between the VCC and ECC algorithms (P>0.20 for all comparisons). Likelihood ratios associated with the diagnostic categorization of RNFL parameters were comparable between the VCC and ECC algorithms. In scans satisfying the manufacturer-recommended quality parameters, which were significantly greater with ECC than VCC algorithm, diagnostic abilities of GDx ECC and VCC in glaucoma were similar.
The status of women at one academic medical center. Breaking through the glass ceiling.
Nickerson, K G; Bennett, N M; Estes, D; Shea, S
1990-10-10
Despite recent gains in admission to medical school and in obtaining junior faculty positions, women remain underrepresented at senior academic ranks and in leadership positions in medicine. This discrepancy has been interpreted as evidence of a "glass ceiling" that prevents all but a few exceptional women from gaining access to leadership positions. We analyzed data from Columbia University College of Physicians & Surgeons, New York, NY, for all faculty hired from 1969 through 1988 and found that the likelihood of promotion on the tenure track was 0.40 for women and 0.48 for men (ratio, 0.82; 95% confidence interval, 0.56 to 1.20); on the clinical track the likelihood of promotion was 0.75 for women and 0.72 for men (ratio, 1.04; 95% confidence interval, 0.56 to 1.94). Additional analysis of current faculty showed that in the academic year 1988-1989 the proportion of women at each tenure track rank at the College of Physicians & Surgeons equaled or exceeded the national proportion of women graduating from medical school, once allowance was made for the average time lag necessary to attain each rank. On the clinical track women were somewhat overrepresented, particularly at the junior rank. National data that describe medical school faculty, which combine tenure and clinical tracks, showed that in 1988 women were proportionately represented at each rank once the lead time from graduation was considered. We conclude that objective evidence shows that women can succeed and are succeeding in gaining promotions in academic medicine.
Using the β-binomial distribution to characterize forest health
S.J. Zarnoch; R.L. Anderson; R.M. Sheffield
1995-01-01
The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
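For the model above, maximum likelihood estimation is a short numerical exercise. A sketch on simulated plot data (plot size, parameter values, and names are assumptions of ours):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import betabinom

    rng = np.random.default_rng(4)
    n_trees, true_a, true_b = 20, 2.0, 6.0    # trees per plot; beta parameters
    infected = betabinom.rvs(n_trees, true_a, true_b, size=100, random_state=rng)

    def neg_log_lik(log_params):
        a, b = np.exp(log_params)             # keep both parameters positive
        return -betabinom.logpmf(infected, n_trees, a, b).sum()

    fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
    print(np.exp(fit.x))                      # estimates near (2.0, 6.0)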
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
ERIC Educational Resources Information Center
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
A Note on Three Statistical Tests in the Logistic Regression DIF Procedure
ERIC Educational Resources Information Center
Paek, Insu
2012-01-01
Although logistic regression became one of the well-known methods in detecting differential item functioning (DIF), its three statistical tests, the Wald, likelihood ratio (LR), and score tests, which are readily available under the maximum likelihood, do not seem to be consistently distinguished in DIF literature. This paper provides a clarifying…
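Of the three tests, the likelihood ratio test is the easiest to reproduce: fit nested logistic models with and without the group terms and refer twice the log-likelihood difference to a chi-square distribution. An illustrative sketch on simulated item responses (the data-generating values are ours):

    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(5)
    n = 1000
    ability = rng.normal(size=n)
    group = rng.integers(0, 2, size=n)        # 0 = reference, 1 = focal
    logit = 0.8 * ability - 0.6 * group       # simulated uniform DIF
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    X0 = sm.add_constant(np.column_stack([ability]))
    X1 = sm.add_constant(np.column_stack([ability, group, ability * group]))

    ll0 = sm.Logit(y, X0).fit(disp=0).llf     # ability only
    ll1 = sm.Logit(y, X1).fit(disp=0).llf     # plus group and interaction
    lr = 2 * (ll1 - ll0)
    print(lr, chi2.sf(lr, df=2))              # df=2: two added terms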
NASA Astrophysics Data System (ADS)
Barkley, Brett E.
A cooperative detection and tracking algorithm for multiple targets constrained to a road network is presented for fixed-wing Unmanned Air Vehicles (UAVs) with a finite field of view. Road networks of interest are formed into graphs with nodes that indicate the target likelihood ratio (before detection) and position probability (after detection). A Bayesian likelihood ratio tracker recursively assimilates target observations until the cumulative observations at a particular location pass a detection criterion. At this point, a target is considered detected and a position probability is generated for the target on the graph. Data association is subsequently used to route future measurements to update the likelihood ratio tracker (for undetected target) or to update a position probability (a previously detected target). Three strategies for motion planning of UAVs are proposed to balance searching for new targets with tracking known targets for a variety of scenarios. Performance was tested in Monte Carlo simulations for a variety of mission parameters, including tracking on road networks with varying complexity and using UAVs at various altitudes.
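The recursive update at a single node reduces to multiplying the running likelihood ratio by the sensor's likelihood ratio for each observation. A toy sketch (sensor characteristics and threshold are assumed, not taken from the work above):

    P_DETECT, P_FALSE_ALARM = 0.8, 0.1   # assumed sensor characteristics
    THRESHOLD = 100.0                    # assumed detection criterion

    def update_lr(lr, cue_observed):
        # Bayesian likelihood ratio update for one look at a node.
        if cue_observed:
            return lr * (P_DETECT / P_FALSE_ALARM)
        return lr * ((1.0 - P_DETECT) / (1.0 - P_FALSE_ALARM))

    lr = 1.0                               # prior likelihood ratio at the node
    for cue in [True, False, True, True]:  # hypothetical observation sequence
        lr = update_lr(lr, cue)
    print(lr, lr >= THRESHOLD)             # ~114 -> detection criterion passed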
A New Monte Carlo Method for Estimating Marginal Likelihoods.
Wang, Yu-Bo; Chen, Ming-Hui; Kuo, Lynn; Lewis, Paul O
2018-06-01
Evaluating the marginal likelihood in Bayesian analysis is essential for model selection. Estimators based on a single Markov chain Monte Carlo sample from the posterior distribution include the harmonic mean estimator and the inflated density ratio estimator. We propose a new class of Monte Carlo estimators based on this single Markov chain Monte Carlo sample. This class can be thought of as a generalization of the harmonic mean and inflated density ratio estimators using a partition weighted kernel (likelihood times prior). We show that our estimator is consistent and has better theoretical properties than the harmonic mean and inflated density ratio estimators. In addition, we provide guidelines on choosing optimal weights. Simulation studies were conducted to examine the empirical performance of the proposed estimator. We further demonstrate the desirable features of the proposed estimator with two real data sets: one is from a prostate cancer study using an ordinal probit regression model with latent variables; the other is for the power prior construction from two Eastern Cooperative Oncology Group phase III clinical trials using the cure rate survival model with similar objectives.
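The harmonic mean estimator that the proposed class generalises can be demonstrated on a conjugate model where the marginal likelihood is known in closed form. The sketch below is illustrative only and does not implement the paper's partition-weighted estimator:

    import numpy as np
    from scipy.special import logsumexp
    from scipy.stats import norm

    rng = np.random.default_rng(6)
    y = rng.normal(0.5, 1.0, size=50)   # y_i ~ N(theta, 1), prior theta ~ N(0, 1)
    n, ybar = len(y), y.mean()

    # Exact posterior is N(n*ybar/(n+1), 1/(n+1)); sample from it directly.
    theta = rng.normal(n * ybar / (n + 1), np.sqrt(1.0 / (n + 1)), size=100_000)
    loglik = norm.logpdf(y[:, None], loc=theta[None, :]).sum(axis=0)

    # Harmonic mean estimate: 1 / E_post[1/L(theta)], computed in log space.
    log_hm = -(logsumexp(-loglik) - np.log(loglik.size))

    # Exact log marginal likelihood for this normal-normal model.
    log_true = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
                - 0.5 * (np.sum(y**2) - n**2 * ybar**2 / (n + 1)))
    print(log_hm, log_true)  # the gap illustrates the HM estimator's instability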
NASA Astrophysics Data System (ADS)
Núñez, M.; Robie, T.; Vlachos, D. G.
2017-10-01
Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
Program for Weibull Analysis of Fatigue Data
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2005-01-01
A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) maximum-likelihood estimates of the Weibull-distribution parameters; (2) data for contour plots of relative likelihood for two parameters; (3) data for contour plots of joint confidence regions; (4) data for the profile likelihood of the Weibull-distribution parameters; (5) data for the profile likelihood of any percentile of the distribution; and (6) likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this software is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
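The core likelihood computation is compact enough to sketch. The program above is Fortran; the Python sketch below (simulation settings are ours) shows two-parameter Weibull maximum likelihood with type-I censoring, where failures contribute the log-density and suspensions contribute the log-survival function:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(7)
    shape_true, scale_true, censor_time = 1.8, 100.0, 120.0
    t = scale_true * rng.weibull(shape_true, size=60)   # latent failure times
    obs = np.minimum(t, censor_time)                    # type-I censoring
    failed = t <= censor_time                           # censoring indicator

    def neg_log_lik(log_params):
        k, lam = np.exp(log_params)                     # shape, scale > 0
        z = obs / lam
        log_pdf = np.log(k / lam) + (k - 1) * np.log(z) - z**k
        log_surv = -z**k                                # log S(t) for a Weibull
        return -(log_pdf[failed].sum() + log_surv[~failed].sum())

    fit = minimize(neg_log_lik, x0=[0.0, np.log(obs.mean())],
                   method="Nelder-Mead")
    print(np.exp(fit.x))                                # near (1.8, 100.0)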
Measures of accuracy and performance of diagnostic tests.
Drobatz, Kenneth J
2009-05-01
Diagnostic tests are integral to the practice of veterinary cardiology, any other specialty, and general veterinary medicine. Developing and understanding diagnostic tests is one of the cornerstones of clinical research. This manuscript describes diagnostic test properties, including sensitivity, specificity, predictive value, likelihood ratio, and the receiver operating characteristic curve, drawing on a review of practical book chapters and standard statistics manuscripts. Each of these measures is described and illustrated. A basic understanding of how diagnostic tests are developed and interpreted is essential in reviewing clinical scientific papers and understanding evidence-based medicine.
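All of the listed measures follow from a 2x2 table of test results against the reference standard. A worked sketch with hypothetical counts:

    tp, fp, fn, tn = 90, 15, 10, 185   # hypothetical 2x2 table

    sensitivity = tp / (tp + fn)                # P(test+ | disease)
    specificity = tn / (tn + fp)                # P(test- | no disease)
    ppv = tp / (tp + fp)                        # P(disease | test+)
    npv = tn / (tn + fn)                        # P(no disease | test-)
    lr_pos = sensitivity / (1 - specificity)    # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity    # negative likelihood ratio

    print(sensitivity, specificity, ppv, npv, lr_pos, lr_neg)
    # 0.90, 0.925, ~0.857, ~0.949, 12.0, ~0.108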
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2005-04-01
We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×10¹⁹ eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
Impact of traffic oscillations on freeway crash occurrences.
Zheng, Zuduo; Ahn, Soyoung; Monsere, Christopher M
2010-03-01
Traffic oscillations are typical features of congested traffic flow that are characterized by recurring decelerations followed by accelerations (stop-and-go driving). The negative environmental impacts of these oscillations are widely accepted, but their impact on traffic safety has been debated. This paper describes the impact of freeway traffic oscillations on traffic safety. This study employs a matched case-control design using high-resolution traffic and crash data from a freeway segment. Traffic conditions prior to each crash were taken as cases, while traffic conditions during the same periods on days without crashes were taken as controls. These were also matched by presence of congestion, geometry and weather. A total of 82 cases and about 80,000 candidate controls were extracted from more than three years of data from 2004 to 2007. Conditional logistic regression models were developed based on the case-control samples. To verify consistency in the results, 20 different sets of controls were randomly extracted from the candidate pool for varying control-case ratios. The results reveal that the standard deviation of speed (thus, oscillations) is a significant variable, with an average odds ratio of about 1.08. This implies that the likelihood of a (rear-end) crash increases by about 8% with an additional unit increase in the standard deviation of speed. The average traffic states prior to crashes were less significant than the speed variations in congestion. Published by Elsevier Ltd.
Code of Federal Regulations, 2010 CFR
2010-01-01
... that the facts that caused the deficient share-asset ratio no longer exist; and (ii) The likelihood of further depreciation of the share-asset ratio is not probable; and (iii) The return of the share-asset ratio to its normal limits within a reasonable time for the credit union concerned is probable; and (iv...
Adult Age Differences in Frequency Estimations of Happy and Angry Faces
ERIC Educational Resources Information Center
Nikitin, Jana; Freund, Alexandra M.
2015-01-01
With increasing age, the ratio of gains to losses becomes more negative, which is reflected in expectations that positive events occur with a high likelihood in young adulthood, whereas negative events occur with a high likelihood in old age. Little is known about expectations of social events. Given that younger adults are motivated to establish…
Change-in-ratio estimators for populations with more than two subclasses
Udevitz, Mark S.; Pollock, Kenneth H.
1991-01-01
Change-in-ratio methods have been developed to estimate the size of populations with two or three population subclasses. Most of these methods require the often unreasonable assumption of equal sampling probabilities for individuals in all subclasses. This paper presents new models based on the weaker assumption that ratios of sampling probabilities are constant over time for populations with three or more subclasses. Estimation under these models requires that a value be assumed for one of these ratios when there are two samples. Explicit expressions are given for the maximum likelihood estimators under models for two samples with three or more subclasses and for three samples with two subclasses. A numerical method using readily available statistical software is described for obtaining the estimators and their standard errors under all of the models. Likelihood ratio tests that can be used in model selection are discussed. Emphasis is on the two-sample, three-subclass models for which Monte-Carlo simulation results and an illustrative example are presented.
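For orientation, the classical two-subclass change-in-ratio estimator that these models generalize has a closed form; a minimal sketch with hypothetical harvest numbers (the equal-sampling-probability assumption criticized above is baked in):

```python
def cir_initial_size(p1, p2, removals_x, removals_total):
    """Classical two-subclass change-in-ratio estimate of initial population size.

    p1, p2         : proportion of subclass x in the pre- and post-removal samples
    removals_x     : known removals of subclass x between samples
    removals_total : known total removals
    Assumes equal sampling probabilities for all subclasses, the assumption
    the paper's models relax.
    """
    return (removals_x - removals_total * p2) / (p1 - p2)

# Hypothetical: 60% males before harvest, 40% after, 500 of 700 removals male.
print(cir_initial_size(0.60, 0.40, 500, 700))  # -> 1100.0 animals initially
```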
Johnston, Heidi Bart; Ganatra, Bela; Nguyen, My Huong; Habib, Ndema; Afework, Mesganaw Fantahun; Harries, Jane; Iyengar, Kirti; Moodley, Jennifer; Lema, Hailu Yeneneh; Constant, Deborah; Sen, Swapnaleen
2016-01-01
To assess the accuracy of assessment of eligibility for early medical abortion by community health workers using a simple checklist toolkit. Diagnostic accuracy study. Ethiopia, India and South Africa. Two hundred seventeen women in Ethiopia, 258 in India and 236 in South Africa were enrolled into the study. A checklist toolkit to determine eligibility for early medical abortion was validated by comparing results of clinician and community health worker assessment of eligibility using the checklist toolkit with the reference standard exam. Accuracy was over 90% and the negative likelihood ratio <0.1 at all three sites when used by clinician assessors. Positive likelihood ratios were 4.3 in Ethiopia, 5.8 in India and 6.3 in South Africa. When used by community health workers, the overall accuracy of the toolkit was 92% in Ethiopia, 80% in India and 77% in South Africa; negative likelihood ratios were 0.08 in Ethiopia, 0.25 in India and 0.22 in South Africa; and positive likelihood ratios were 5.9 in Ethiopia and 2.0 in India and South Africa. The checklist toolkit, as used by clinicians, was excellent at ruling out participants who were not eligible, and moderately effective at ruling in participants who were eligible for medical abortion. Results were promising when used by community health workers, particularly in Ethiopia, where they had more prior experience with use of diagnostic aids and longer professional training. The checklist toolkit assessments resulted in some participants being wrongly assessed as eligible for medical abortion, which is an area of concern. Further research is needed to streamline the components of the tool, explore optimal duration and content of training for community health workers, and test feasibility and acceptability.
Predicting Rotator Cuff Tears Using Data Mining and Bayesian Likelihood Ratios
Lu, Hsueh-Yi; Huang, Chen-Yuan; Su, Chwen-Tzeng; Lin, Chen-Chiang
2014-01-01
Objectives Rotator cuff tear is a common cause of shoulder diseases. Correct diagnosis of rotator cuff tears can save patients from further invasive, costly and painful tests. This study used predictive data mining and Bayesian theory to improve the accuracy of diagnosing rotator cuff tears by clinical examination alone. Methods In this retrospective study, 169 patients who had a preliminary diagnosis of rotator cuff tear on the basis of clinical evaluation followed by confirmatory MRI between 2007 and 2011 were identified. MRI was used as a reference standard to classify rotator cuff tears. The predictor variable was the clinical assessment results, which consisted of 16 attributes. This study employed 2 data mining methods (ANN and the decision tree) and a statistical method (logistic regression) to classify the rotator cuff diagnosis into “tear” and “no tear” groups. Likelihood ratio and Bayesian theory were applied to estimate the probability of rotator cuff tears based on the results of the prediction models. Results Our proposed data mining procedures outperformed the classic statistical method. The correction rate, sensitivity, specificity and area under the ROC curve of predicting a rotator cuff tear were statistically better in the ANN and decision tree models compared to logistic regression. Based on likelihood ratios derived from our prediction models, Fagan's nomogram could be constructed to assess the probability of a patient who has a rotator cuff tear using a pretest probability and a prediction result (tear or no tear). Conclusions Our predictive data mining models, combined with likelihood ratios and Bayesian theory, appear to be good tools to classify rotator cuff tears as well as determine the probability of the presence of the disease to enhance diagnostic decision making for rotator cuff tears. PMID:24733553
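The post-test probability calculation that Fagan's nomogram performs graphically is one application of Bayes' rule on the odds scale; a minimal sketch with illustrative numbers (not taken from the study):

```python
def posttest_probability(pretest_p, lr):
    """Update a pretest disease probability with a likelihood ratio."""
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1.0 + posttest_odds)

# Hypothetical: 30% pretest probability of a tear and a "tear" prediction
# carrying a likelihood ratio of 5 yields roughly a 68% posttest probability.
print(posttest_probability(0.30, 5.0))
```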
Sviklāne, Laura; Olmane, Evija; Dzērve, Zane; Kupčs, Kārlis; Pīrāgs, Valdis; Sokolovska, Jeļizaveta
2018-01-01
Little is known about the diagnostic value of hepatic steatosis index (HSI) and fatty liver index (FLI), as well as their link to metabolic syndrome in type 1 diabetes mellitus. We have screened the effectiveness of FLI and HSI in an observational pilot study of 40 patients with type 1 diabetes. FLI and HSI were calculated for 201 patients with type 1 diabetes. Forty patients with FLI/HSI values corresponding to different risk of liver steatosis were invited for liver magnetic resonance study. In-phase/opposed-phase technique of magnetic resonance was used. Accuracy of indices was assessed from the area under the receiver operating characteristic curve. Twelve (30.0%) patients had liver steatosis. For FLI, sensitivity was 90%; specificity, 74%; positive likelihood ratio, 3.46; negative likelihood ratio, 0.14; positive predictive value, 0.64; and negative predictive value, 0.93. For HSI, sensitivity was 86%; specificity, 66%; positive likelihood ratio, 1.95; negative likelihood ratio, 0.21; positive predictive value, 0.50; and negative predictive value, 0.92. Area under the receiver operating characteristic curve for FLI was 0.86 (95% confidence interval [0.72; 0.99]); for HSI 0.75 [0.58; 0.91]. Liver fat correlated with liver enzymes, waist circumference, triglycerides, and C-reactive protein. FLI correlated with C-reactive protein, liver enzymes, and blood pressure. HSI correlated with waist circumference and C-reactive protein. FLI ≥ 60 and HSI ≥ 36 were significantly associated with metabolic syndrome and nephropathy. The tested indices, especially FLI, can serve as surrogate markers for liver fat content and metabolic syndrome in type 1 diabetes. © 2017 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.
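The likelihood ratios reported above follow directly from sensitivity and specificity; this sketch reproduces the FLI figures quoted in the abstract.

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios of a binary diagnostic test."""
    lr_positive = sensitivity / (1.0 - specificity)
    lr_negative = (1.0 - sensitivity) / specificity
    return lr_positive, lr_negative

# FLI as reported: sensitivity 0.90, specificity 0.74.
print(likelihood_ratios(0.90, 0.74))  # -> (~3.46, ~0.14), matching the abstract
```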
Dasgupta, Subhankar; Dasgupta, Shyamal; Sharma, Partha Pratim; Mukherjee, Amitabha; Ghosh, Tarun Kumar
2011-11-01
To investigate the effect of oral progesterone on the accuracy of imaging studies performed to detect endometrial pathology in comparison to hysteroscopy-guided biopsy in perimenopausal women on progesterone treatment for abnormal uterine bleeding. The study population comprised women aged 40-55 years with complaints of abnormal uterine bleeding who were also undergoing oral progesterone therapy. Women with a uterus ≥ 12 weeks' gestation size, previous abnormal endometrial biopsy, cervical lesion on speculum examination, abnormal Pap smear, active pelvic infection, adnexal mass on clinical examination or during ultrasound scan and a positive pregnancy test were excluded. A transvaginal ultrasound followed by saline infusion sonography was performed. On the following day, a hysteroscopy followed by a guided biopsy of the endometrium or any endometrial lesion was performed. The imaging results were then compared with the findings of hysteroscopy and guided biopsy. The final analysis included 83 patients. For detection of overall pathology, polyps and fibroids, transvaginal ultrasound had a positive likelihood ratio of 1.65, 5.45 and 5.4, respectively, and a negative likelihood ratio of 0.47, 0.6 and 0.43, respectively. For detection of overall pathology, polyps and fibroids, saline infusion sonography had a positive likelihood ratio of 4.4, 5.35 and 11.8, respectively, and a negative likelihood ratio of 0.3, 0.2 and 0.15, respectively. In perimenopausal women on oral progesterone therapy for abnormal uterine bleeding, imaging studies cannot be considered as an accurate method for diagnosing endometrial pathology when compared to hysteroscopy and guided biopsy. © 2011 The Authors. Journal of Obstetrics and Gynaecology Research © 2011 Japan Society of Obstetrics and Gynecology.
Using DNA fingerprints to infer familial relationships within NHANES III households
Katki, Hormuzd A.; Sanders, Christopher L.; Graubard, Barry I.; Bergen, Andrew W.
2009-01-01
Developing, targeting, and evaluating genomic strategies for population-based disease prevention require population-based data. In response to this urgent need, genotyping has been conducted within the Third National Health and Nutrition Examination Survey (NHANES III), the nationally representative household-interview health survey in the U.S. However, before these genetic analyses can occur, family relationships within households must be accurately ascertained. Unfortunately, reported family relationships within NHANES III households based on questionnaire data are incomplete and inconclusive with regard to actual biological relatedness of family members. We inferred family relationships within households using DNA fingerprints (Identifiler®) that contain the DNA loci used by law enforcement agencies for forensic identification of individuals. However, performance of these loci for relationship inference is not well understood. We evaluated two competing statistical methods for relationship inference on pairs of household members: an exact likelihood ratio that relies on allele frequencies versus an Identical By State (IBS) likelihood ratio that only requires matching alleles. We modified these methods to account for genotyping errors and population substructure. The two methods usually agree on the rankings of the most likely relationships. However, the IBS method underestimates the likelihood ratio by not accounting for the informativeness of matching rare alleles. The likelihood ratio is sensitive to estimates of population substructure, and parent-child relationships are sensitive to the specified genotyping error rate. These loci were unable to distinguish second-degree relationships and cousins from being unrelated. The genetic data are also useful for verifying reported relationships and identifying data quality issues. An important by-product is the first explicitly nationally representative estimates of allele frequencies at these ubiquitous forensic loci. PMID:20664713
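As a toy illustration of the allele-frequency-based ("exact") likelihood ratio discussed above, the sketch below computes a single-locus parent-child versus unrelated LR by enumerating allele transmission. It ignores mutation, genotyping error and population substructure, which the paper shows are consequential, and the genotypes and frequencies are invented.

```python
def parent_child_lr(parent, child, freq):
    """Single-locus likelihood ratio: P(genotypes | parent-child) / P(| unrelated).

    parent, child : tuples of two allele labels
    freq          : population allele frequencies (assumed known without error)
    """
    def p_transmit(a):                     # chance the parent passes allele a
        return parent.count(a) / 2.0

    c1, c2 = child
    if c1 == c2:                           # homozygous child
        numerator = p_transmit(c1) * freq[c1]
        denominator = freq[c1] ** 2
    else:                                  # heterozygous child
        numerator = p_transmit(c1) * freq[c2] + p_transmit(c2) * freq[c1]
        denominator = 2.0 * freq[c1] * freq[c2]
    return numerator / denominator

# Hypothetical STR locus: parent 12/14, child 14/16, rare shared allele 14.
print(parent_child_lr((12, 14), (14, 16), {12: 0.05, 14: 0.10, 16: 0.20}))
```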
Early pregnancy angiogenic markers and spontaneous abortion: an Odense Child Cohort study.
Andersen, Louise B; Dechend, Ralf; Karumanchi, S Ananth; Nielsen, Jan; Joergensen, Jan S; Jensen, Tina K; Christesen, Henrik T
2016-11-01
Spontaneous abortion is the most commonly observed adverse pregnancy outcome. The angiogenic factors soluble Fms-like kinase 1 and placental growth factor are critical for normal pregnancy and may be associated with spontaneous abortion. We investigated the association between maternal serum concentrations of soluble Fms-like kinase 1 and placental growth factor, and subsequent spontaneous abortion. In the prospective observational Odense Child Cohort, 1676 pregnant women donated serum in early pregnancy, gestational week <22 (median 83 days of gestation, interquartile range 71-103). Concentrations of soluble Fms-like kinase 1 and placental growth factor were determined with novel automated assays. Spontaneous abortion was defined as complete or incomplete spontaneous abortion, missed abortion, or blighted ovum <22+0 gestational weeks, and the prevalence was 3.52% (59 cases). The time-dependent effect of maternal serum concentrations of soluble Fms-like kinase 1 and placental growth factor on subsequent late first-trimester or second-trimester spontaneous abortion (n = 59) was evaluated using a Cox proportional hazards regression model, adjusting for body mass index, parity, season of blood sampling, and age. Furthermore, receiver operating characteristic analyses were employed to identify predictive values and optimal cut-off values. In the adjusted Cox regression analysis, increasing continuous concentrations of both soluble Fms-like kinase 1 and placental growth factor were significantly associated with a decreased hazard ratio for spontaneous abortion: soluble Fms-like kinase 1, 0.996 (95% confidence interval, 0.995-0.997), and placental growth factor, 0.89 (95% confidence interval, 0.86-0.93). When analyzed by receiver operating characteristic cut-offs, women with soluble Fms-like kinase 1 <742 pg/mL had an odds ratio for spontaneous abortion of 12.1 (95% confidence interval, 6.64-22.2), positive predictive value of 11.70%, negative predictive value of 98.90%, positive likelihood ratio of 3.64 (3.07-4.32), and negative likelihood ratio of 0.30 (0.19-0.48). For placental growth factor <19.7 pg/mL, odds ratio was 13.2 (7.09-24.4), positive predictive value was 11.80%, negative predictive value was 99.0%, positive likelihood ratio was 3.68 (3.12-4.34), and negative likelihood ratio was 0.28 (0.17-0.45). In the sensitivity analysis of 54 spontaneous abortions matched 1:4 to controls on gestational age at blood sampling, the highest area under the curve was seen for soluble Fms-like kinase 1 in prediction of first-trimester spontaneous abortion, 0.898 (0.834-0.962), and at the optimum cut-off of 725 pg/mL, negative predictive value was 51.4%, positive predictive value was 94.6%, positive likelihood ratio was 4.04 (2.57-6.35), and negative likelihood ratio was 0.22 (0.09-0.54). A strong, novel prospective association was identified between lower concentrations of soluble Fms-like kinase 1 and placental growth factor measured in early pregnancy and spontaneous abortion. A soluble Fms-like kinase 1 cut-off <742 pg/mL in maternal serum was optimal to stratify women at high vs low risk of spontaneous abortion. The cause and effect of angiogenic factor alterations in spontaneous abortions remain to be elucidated. Copyright © 2016 Elsevier Inc. All rights reserved.
Durand, Eric; Bauer, Fabrice; Mansencal, Nicolas; Azarine, Arshid; Diebold, Benoit; Hagege, Albert; Perdrix, Ludivine; Gilard, Martine; Jobic, Yannick; Eltchaninoff, Hélène; Bensalah, Mourad; Dubourg, Benjamin; Caudron, Jérôme; Niarra, Ralph; Chatellier, Gilles; Dacher, Jean-Nicolas; Mousseaux, Elie
2017-08-15
To perform a head-to-head comparison of coronary CT angiography (CCTA) and dobutamine-stress echocardiography (DSE) in patients presenting with recent chest pain when troponin and ECG are negative. Two hundred seventeen patients with recent chest pain, normal ECG findings, and negative troponin were prospectively included in this multicenter study and were scheduled for CCTA and DSE. Invasive coronary angiography (ICA) was performed in patients when either DSE or CCTA was considered positive, when both were non-contributive, or in case of recurrent chest pain during 6-month follow-up. The presence of coronary artery stenosis was defined as a luminal obstruction >50% diameter in any coronary segment at ICA. ICA was performed in 75 (34.6%) patients. Coronary artery stenosis was identified in 37 (17%) patients. For CCTA, the sensitivity was 96.9% (95% CI 83.4-99.9), specificity 48.3% (29.4-67.5), positive likelihood ratio 2.06 (95% CI 1.36-3.11), and negative likelihood ratio 0.07 (95% CI 0.01-0.52). The sensitivity of DSE was 51.6% (95% CI 33.1-69.9), specificity 46.7% (28.3-65.7), positive likelihood ratio 1.03 (95% CI 0.62-1.72), and negative likelihood ratio 1.10 (95% CI 0.63-1.93). The CCTA:DSE ratio of true-positive and false-positive rates was 1.70 (95% CI 1.65-1.75) and 1.00 (95% CI 0.91-1.09), respectively, when non-contributive CCTA and DSE were both considered positive. Only one missed acute coronary syndrome was observed at six months. CCTA has higher diagnostic performance than DSE in the evaluation of patients with recent chest pain, normal ECG findings, and negative troponin to exclude coronary artery disease. Copyright © 2017. Published by Elsevier B.V.
Ruilong, Zong; Daohai, Xie; Li, Geng; Xiaohong, Wang; Chunjie, Wang; Lei, Tian
2017-01-01
To carry out a meta-analysis on the performance of fluorine-18-fluorodeoxyglucose (18F-FDG) PET/computed tomography (PET/CT) for the evaluation of solitary pulmonary nodules. In the meta-analysis, we performed searches of several electronic databases for relevant studies, including Google Scholar, PubMed, Cochrane Library, and several Chinese databases. The quality of all included studies was assessed by Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). Two observers independently extracted data from eligible articles. For the meta-analysis, the total sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratios were pooled. A summary receiver operating characteristic curve was constructed. The I² test was performed to assess the impact of study heterogeneity on the results of the meta-analysis. Meta-regression and subgroup analysis were carried out to investigate the potential covariates that might have considerable impacts on heterogeneity. Overall, 12 studies were included in this meta-analysis, including a total of 1297 patients and 1301 pulmonary nodules. The pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with corresponding 95% confidence intervals (CIs) were 0.82 (95% CI, 0.76-0.87), 0.81 (95% CI, 0.66-0.90), 4.3 (95% CI, 2.3-7.9), and 0.22 (95% CI, 0.16-0.30), respectively. Significant heterogeneity was observed in sensitivity (I²=81.1%) and specificity (I²=89.6%). Subgroup analysis showed that the best results for sensitivity (0.90; 95% CI, 0.68-0.86) and accuracy (0.93; 95% CI, 0.90-0.95) were present in a prospective study. The results of our analysis suggest that PET/CT is a useful tool for detecting malignant pulmonary nodules qualitatively. Although current evidence showed moderate accuracy for PET/CT in differentiating malignant from benign solitary pulmonary nodules, further work needs to be carried out to improve its reliability.
Davidov, Ori; Rosen, Sophia
2011-04-01
In medical studies, endpoints are often measured for each patient longitudinally. The mixed-effects model has been a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, in hearing loss studies, we expect hearing to deteriorate with time. This means that hearing thresholds, which reflect hearing acuity, will, on average, increase over time. Therefore, the regression coefficients associated with the mean effect of time on hearing ability will be constrained. Such constraints should be accounted for in the analysis. We propose maximum likelihood estimation procedures, based on the expectation-conditional maximization either (ECME) algorithm, to estimate the parameters of the model while accounting for the constraints on them. The proposed methods improve, in terms of mean squared error, on the unconstrained estimators. In some settings, the improvement may be substantial. Hypothesis testing procedures that incorporate the constraints are developed. Specifically, likelihood ratio, Wald, and score tests are proposed and investigated. Their empirical significance levels and power are studied using simulations. It is shown that incorporating the constraints improves the mean squared error of the estimates and the power of the tests. These improvements may be substantial. The methodology is used to analyze a hearing loss study.
The Sequential Probability Ratio Test and Binary Item Response Models
ERIC Educational Resources Information Center
Nydick, Steven W.
2014-01-01
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
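A minimal sketch of Wald's SPRT decision rule described above, using approximate boundaries and a Bernoulli response model as a stand-in for the IRT likelihood (all probabilities are illustrative):

```python
import math
import random

def sprt(log_lrs, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test on per-item log-likelihood ratios."""
    upper = math.log((1 - beta) / alpha)   # cross above -> classify in category 1
    lower = math.log(beta / (1 - alpha))   # cross below -> classify in category 0
    total = 0.0
    for i, llr in enumerate(log_lrs, start=1):
        total += llr
        if total >= upper:
            return "category 1", i         # decision reached after i items
        if total <= lower:
            return "category 0", i
    return "undecided", len(log_lrs)

# An examinee just above the cutoff answers correctly with p1=0.7 vs p0=0.5 below.
random.seed(1)
p0, p1 = 0.5, 0.7
responses = [random.random() < 0.7 for _ in range(200)]
llrs = [math.log((p1 if r else 1 - p1) / (p0 if r else 1 - p0)) for r in responses]
print(sprt(llrs))
```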
Relationship Formation and Stability in Emerging Adulthood: Do Sex Ratios Matter?
ERIC Educational Resources Information Center
Warner, Tara D.; Manning, Wendy D.; Giordano, Peggy C.; Longmore, Monica A.
2011-01-01
Research links sex ratios with the likelihood of marriage and divorce. However, whether sex ratios similarly influence precursors to marriage (transitions in and out of dating or cohabiting relationships) is unknown. Utilizing data from the Toledo Adolescent Relationships Study and the 2000 U.S. Census, this study assesses whether sex ratios…
Human variability in mercury toxicokinetics and steady state biomarker ratios.
Bartell, S M; Ponce, R A; Sanga, R N; Faustman, E M
2000-10-01
Regulatory guidelines regarding methylmercury exposure depend on dose-response models relating observed mercury concentrations in maternal blood, cord blood, and maternal hair to developmental neurobehavioral endpoints. Generalized estimates of the maternal blood-to-hair, blood-to-intake, or hair-to-intake ratios are necessary for linking exposure to biomarker-based dose-response models. Most assessments have used point estimates for these ratios; however, significant interindividual and interstudy variability has been reported. For example, a maternal ratio of 250 ppm in hair per mg/L in blood is commonly used in models, but a 1990 WHO review reports mean ratios ranging from 140 to 370 ppm per mg/L. To account for interindividual and interstudy variation in applying these ratios to risk and safety assessment, some researchers have proposed representing the ratios with probability distributions and conducting probabilistic assessments. Such assessments would allow regulators to consider the range and likelihood of mercury exposures in a population, rather than limiting the evaluation to an estimate of the average exposure or a single conservative exposure estimate. However, no consensus exists on the most appropriate distributions for representing these parameters. We discuss published reviews of blood-to-hair and blood-to-intake steady state ratios for mercury and suggest statistical approaches for combining existing datasets to form generalized probability distributions for mercury distribution ratios. Although generalized distributions may not be applicable to all populations, they allow a more informative assessment than point estimates where individual biokinetic information is unavailable. Whereas development and use of these distributions will improve existing exposure and risk models, additional efforts in data generation and model development are required.
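The probabilistic treatment the authors advocate can be sketched as simple Monte Carlo propagation. The lognormal form and its parameters below are assumptions, chosen only so the median matches the common 250 ppm per mg/L point estimate and the spread loosely covers the 140-370 interstudy range; they are not fitted to the reviewed datasets.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed distribution for the hair-to-blood ratio (ppm per mg/L).
ratio = rng.lognormal(mean=np.log(250.0), sigma=0.25, size=100_000)

blood_hg = 0.004            # hypothetical maternal blood mercury, mg/L
hair_hg = ratio * blood_hg  # implied hair concentration, ppm
print(np.percentile(hair_hg, [2.5, 50.0, 97.5]))  # range instead of a point value
```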
2013-01-01
Background Falls among the elderly are a major public health concern. Therefore, the possibility of a modeling technique which could better estimate fall probability is both timely and needed. Using biomedical, pharmacological and demographic variables as predictors, latent class analysis (LCA) is demonstrated as a tool for the prediction of falls among community dwelling elderly. Methods Using a retrospective dataset, a two-step LCA modeling approach was employed. First, we looked for the optimal number of latent classes for the seven medical indicators, along with the patients’ prescription medication and three covariates (age, gender, and number of medications). Second, the appropriate latent class structure, with the covariates, was modeled on the distal outcome (fall/no fall). The default estimator was maximum likelihood with robust standard errors. The Pearson chi-square, likelihood ratio chi-square, BIC, Lo-Mendell-Rubin Adjusted Likelihood Ratio test and the bootstrap likelihood ratio test were used for model comparisons. Results A review of the model fit indices with covariates shows that a six-class solution was preferred. The predictive probability for latent classes ranged from 84% to 97%. Entropy, a measure of classification accuracy, was good at 90%. Specific prescription medications were found to strongly influence group membership. Conclusions In conclusion, the LCA method was effective at finding relevant subgroups within a heterogeneous at-risk population for falling. This study demonstrated that LCA offers researchers a valuable tool to model medical data. PMID:23705639
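The class-enumeration step above, comparing fit indices across models with increasing numbers of latent classes, can be sketched with a Gaussian mixture standing in for categorical LCA (scikit-learn has no LCA); the data and the true class count are simulated.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated two-class profiles standing in for the medical indicator variables.
X = np.vstack([rng.normal(0.0, 1.0, (150, 3)), rng.normal(2.5, 1.0, (100, 3))])

for k in range(1, 5):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    print(k, round(gm.bic(X), 1))  # prefer the class count that minimizes BIC
```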
A model for evidence accumulation in the lexical decision task.
Wagenmakers, Eric-Jan; Steyvers, Mark; Raaijmakers, Jeroen G W; Shiffrin, Richard M; van Rijn, Hedderik; Zeelenberg, René
2004-05-01
We present a new model for lexical decision, REM-LD, that is based on REM theory. REM-LD uses a principled (i.e., Bayes' rule) decision process that simultaneously considers the diagnosticity of the evidence for the 'WORD' response and the 'NONWORD' response. The model calculates the odds ratio that the presented stimulus is a word or a nonword by averaging likelihood ratios for lexical entries from a small neighborhood of similar words. We report two experiments that used a signal-to-respond paradigm to obtain information about the time course of lexical processing. Experiment 1 verified the prediction of the model that the frequency of the word stimuli affects performance for nonword stimuli. Experiment 2 was done to study the effects of nonword lexicality, word frequency, and repetition priming and to demonstrate how REM-LD can account for the observed results. We discuss how REM-LD could be extended to account for effects of phonology such as the pseudohomophone effect, and how REM-LD can predict response times in the traditional 'respond-when-ready' paradigm.
Implementation and performance evaluation of acoustic denoising algorithms for UAV
NASA Astrophysics Data System (ADS)
Chowdhury, Ahmed Sony Kamal
Unmanned Aerial Vehicles (UAVs) have become a popular alternative for wildlife monitoring and border surveillance applications. Eliminating the UAV's background noise and effectively classifying the target audio signal are still major challenges. The main goal of this thesis is to remove the UAV's background noise by means of acoustic denoising techniques. Existing denoising algorithms, such as Adaptive Least Mean Square (LMS), Wavelet Denoising, Time-Frequency Block Thresholding, and Wiener Filter, were implemented and their performance evaluated. The denoising algorithms were evaluated for average Signal to Noise Ratio (SNR), Segmental SNR (SSNR), Log Likelihood Ratio (LLR), and Log Spectral Distance (LSD) metrics. To evaluate the effectiveness of the denoising algorithms on classification of target audio, we implemented Support Vector Machine (SVM) and Naive Bayes classification algorithms. Simulation results demonstrate that the LMS and Discrete Wavelet Transform (DWT) denoising algorithms offered superior performance to the other algorithms. Finally, we implemented the LMS and DWT algorithms on a DSP board for hardware evaluation. Experimental results showed that the LMS algorithm's performance was more robust than that of DWT across various noise types when classifying target audio signals.
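A minimal adaptive-noise-cancellation sketch of the LMS algorithm evaluated in the thesis, assuming a reference channel correlated with the rotor noise; the signal shapes, sampling rate and step size are illustrative.

```python
import numpy as np

def lms_denoise(d, x, mu=0.01, order=16):
    """LMS adaptive noise cancellation.

    d : primary channel (target audio plus UAV noise)
    x : reference channel correlated with the noise in d
    Returns the error signal, which approximates the denoised audio.
    """
    w = np.zeros(order)
    e = np.zeros(len(d))
    for i in range(order, len(d)):
        xi = x[i - order:i][::-1]   # most recent reference samples
        y = w @ xi                  # current estimate of the noise in d[i]
        e[i] = d[i] - y             # residual = cleaned sample
        w += 2.0 * mu * e[i] * xi   # stochastic-gradient weight update
    return e

# Demo: a 440 Hz target buried in a 120 Hz hum at 8 kHz sampling (hypothetical).
t = np.arange(16000) / 8000.0
hum = np.sin(2 * np.pi * 120.0 * t)
target = 0.3 * np.sin(2 * np.pi * 440.0 * t)
cleaned = lms_denoise(target + hum, hum + 0.05 * np.random.randn(len(t)))
```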
The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.
Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R
2013-01-01
In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
Xu, Maoqi; Chen, Liang
2018-01-01
Individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about the read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomic studies of cancers and other complex diseases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Validation of the diagnostic score for acute lower abdominal pain in women of reproductive age.
Jearwattanakanok, Kijja; Yamada, Sirikan; Suntornlimsiri, Watcharin; Smuthtai, Waratsuda; Patumanond, Jayanton
2014-01-01
Background. The differential diagnosis of acute appendicitis, obstetric and gynecological conditions (OB-GYNc), or nonspecific abdominal pain in young adult females with lower abdominal pain is clinically challenging. The present study aimed to validate the recently developed clinical score for the diagnosis of acute lower abdominal pain in women of reproductive age. Method. Medical records of women of reproductive age (15-50 years) who were admitted for acute lower abdominal pain were collected. Validation data were obtained from patients admitted during a different period from the development data. Result. There were 302 patients in the validation cohort. For appendicitis, the score had a sensitivity of 91.9%, a specificity of 79.0%, and a positive likelihood ratio of 4.39. The sensitivity, specificity, and positive likelihood ratio in diagnosis of OB-GYNc were 73.0%, 91.6%, and 8.73, respectively. The areas under the receiver operating characteristic (ROC) curves and the positive likelihood ratios for appendicitis and OB-GYNc in the validation data were not significantly different from those in the development data, implying similar performance. Conclusion. The clinical score developed for the diagnosis of acute lower abdominal pain in women of reproductive age may be applied to guide differential diagnoses in these patients.
Norström, Madelaine; Kristoffersen, Anja Bråthen; Görlach, Franziska Sophie; Nygård, Karin; Hopp, Petter
2015-01-01
In order to facilitate foodborne outbreak investigations there is a need to improve the methods for identifying the food products that should be sampled for laboratory analysis. The aim of this study was to examine the applicability of a likelihood ratio approach, previously developed on simulated data, to real outbreak data. We used human case and food product distribution data from the Norwegian enterohaemorrhagic Escherichia coli outbreak in 2006. The approach was adjusted to include time, space smoothing and to handle missing or misclassified information. The performance of the adjusted likelihood ratio approach on the data from the HUS outbreak and on control data indicates that the approach is promising and could become a useful tool for assisting and facilitating the investigation of foodborne outbreaks, provided that good traceability is available and implemented in the distribution chain. However, the approach needs to be further validated on other outbreak data, including food products other than meat, before more general conclusions about its applicability can be drawn. PMID:26237468
Mapping Quantitative Traits in Unselected Families: Algorithms and Examples
Dupuis, Josée; Shi, Jianxin; Manning, Alisa K.; Benjamin, Emelia J.; Meigs, James B.; Cupples, L. Adrienne; Siegmund, David
2009-01-01
Linkage analysis has been widely used to identify, from family data, genetic variants influencing quantitative traits. Common approaches have both strengths and limitations. Likelihood ratio tests typically computed in variance component analysis can accommodate large families but are highly sensitive to departure from normality assumptions. Regression-based approaches are more robust but their use has primarily been restricted to nuclear families. In this paper, we develop methods for mapping quantitative traits in moderately large pedigrees. Our methods are based on the score statistic, which, in contrast to the likelihood ratio statistic, can use nonparametric estimators of variability to achieve robustness of the false positive rate against departures from the hypothesized phenotypic model. Because the score statistic is easier to calculate than the likelihood ratio statistic, our basic mapping methods utilize relatively simple computer code that performs statistical analysis on output from any program that computes estimates of identity-by-descent. This simplicity also permits development and evaluation of methods to deal with multivariate and ordinal phenotypes, and with gene-gene and gene-environment interaction. We demonstrate our methods on simulated data and on fasting insulin, a quantitative trait measured in the Framingham Heart Study. PMID:19278016
Loneliness and social isolation as risk factors for mortality: a meta-analytic review.
Holt-Lunstad, Julianne; Smith, Timothy B; Baker, Mark; Harris, Tyler; Stephenson, David
2015-03-01
Actual and perceived social isolation are both associated with increased risk for early mortality. In this meta-analytic review, our objective is to establish the overall and relative magnitudes of the associations of social isolation and loneliness with mortality and to examine possible moderators. We conducted a literature search of studies (January 1980 to February 2014) using MEDLINE, CINAHL, PsycINFO, Social Work Abstracts, and Google Scholar. The included studies provided quantitative data on mortality as affected by loneliness, social isolation, or living alone. Across studies in which several possible confounds were statistically controlled for, the weighted average effect sizes were as follows: social isolation odds ratio (OR) = 1.29, loneliness OR = 1.26, and living alone OR = 1.32, corresponding to an average of 29%, 26%, and 32% increased likelihood of mortality, respectively. We found no differences between measures of objective and subjective social isolation. Results remain consistent across gender, length of follow-up, and world region, but initial health status has an influence on the findings. Results also differ across participant age, with social deficits being more predictive of death in samples with an average age younger than 65 years. Overall, the influence of both objective and subjective social isolation on risk for mortality is comparable with well-established risk factors for mortality. © The Author(s) 2015.
Costa, Rui Miguel; Miller, Geoffrey F; Brody, Stuart
2012-12-01
Research indicates that (i) women's orgasm during penile-vaginal intercourse (PVI) is influenced by fitness-related male partner characteristics, (ii) penis size is important for many women, and (iii) preference for a longer penis is associated with greater vaginal orgasm consistency (triggered by PVI without concurrent clitoral masturbation). To test the hypothesis that vaginal orgasm frequency is associated with women's reporting that a longer than average penis is more likely to provoke their PVI orgasm. Three hundred twenty-three women reported in an online survey their past month frequency of various sexual behaviors (including PVI, vaginal orgasm, and clitoral orgasm), the effects of a longer than average penis on likelihood of orgasm from PVI, and the importance they attributed to PVI and to noncoital sex. Univariate analyses of covariance with dependent variables being frequencies of various sexual behaviors and types of orgasm and with independent variable being women reporting vs. not reporting that a longer than average penis is important for their orgasm from PVI. Likelihood of orgasm with a longer penis was related to greater vaginal orgasm frequency but unrelated to frequencies of other sexual behaviors, including clitoral orgasm. In binary logistic regression, likelihood of orgasm with a longer penis was related to greater importance attributed to PVI and lesser importance attributed to noncoital sex. Women who prefer deeper penile-vaginal stimulation are more likely to have vaginal orgasm, consistent with vaginal orgasm evolving as part of a female mate choice system favoring somewhat larger than average penises. Future research could extend the findings by overcoming limitations related to more precise measurement of penis length (to the pubis and pressed close to the pubic bone) and girth, and large representative samples. Future experimental research might assess to what extent different penis sizes influence women's satisfaction and likelihood of vaginal orgasm. © 2012 International Society for Sexual Medicine.
NASA Astrophysics Data System (ADS)
Hasan, Husna; Salam, Norfatin; Kassim, Suraiya
2013-04-01
Extreme temperatures at several stations in Malaysia are modeled by fitting the annual maxima to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are used to detect stochastic trends among the stations. The Mann-Kendall (MK) test suggests a non-stationary model. Three models are considered for stations with a trend, and the Likelihood Ratio test is used to determine the best-fitting model. The results show that the Subang and Bayan Lepas stations favour a model that is linear in the location parameter, while the Kota Kinabalu and Sibu stations are better fitted by a model with a trend in the logarithm of the scale parameter. The return level, i.e. the level of maximum temperature expected to be exceeded once, on average, in a given number of years, is also obtained.
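A minimal sketch of the stationary-versus-trend model comparison via a likelihood ratio test, run on simulated annual maxima; the series, the trend size and the use of scipy are assumptions, not the paper's data or software.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
years = np.arange(40.0)
# Simulated annual maximum temperatures with a weak linear warming trend.
data = stats.genextreme.rvs(c=0.1, loc=35.0 + 0.03 * years, scale=1.0,
                            size=len(years), random_state=rng)

# M0: stationary GEV.
c0, loc0, scale0 = stats.genextreme.fit(data)
ll0 = stats.genextreme.logpdf(data, c0, loc0, scale0).sum()

# M1: location linear in time, mu(t) = b0 + b1 * t.
def negloglik(theta):
    c, b0, b1, log_scale = theta
    return -stats.genextreme.logpdf(data, c, b0 + b1 * years,
                                    np.exp(log_scale)).sum()

res = optimize.minimize(negloglik, x0=[c0, loc0, 0.0, np.log(scale0)],
                        method="Nelder-Mead")
lrt = 2.0 * ((-res.fun) - ll0)                      # twice the log-likelihood gain
print("LRT:", lrt, "p:", stats.chi2.sf(lrt, df=1))  # one extra parameter
```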
UWB pulse detection and TOA estimation using GLRT
NASA Astrophysics Data System (ADS)
Xie, Yan; Janssen, Gerard J. M.; Shakeri, Siavash; Tiberius, Christiaan C. J. M.
2017-12-01
In this paper, a novel statistical approach is presented for time-of-arrival (TOA) estimation based on first path (FP) pulse detection using a sub-Nyquist sampling ultra-wide band (UWB) receiver. The TOA measurement accuracy, which cannot be improved by averaging of the received signal, can be enhanced by the statistical processing of a number of TOA measurements. The TOA statistics are modeled and analyzed for a UWB receiver using threshold crossing detection of a pulse signal with noise. The detection and estimation scheme based on the Generalized Likelihood Ratio Test (GLRT) detector, which captures the full statistical information of the measurement data, is shown to achieve accurate TOA estimation and allows for a trade-off between the threshold level, the noise level, the amplitude and the arrival time of the first path pulse, and the accuracy of the obtained final TOA.
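A much-simplified sketch of GLRT pulse detection with unknown amplitude and arrival time in white Gaussian noise; the template, noise level and per-delay chi-square threshold are illustrative, and the first-path logic and sub-Nyquist sampling of the paper's receiver are not modeled.

```python
import numpy as np
from scipy.stats import chi2

def glrt_toa(x, s, sigma2, pfa=1e-3):
    """Detect a known-shape, unknown-amplitude pulse; return (detected, TOA index)."""
    m = len(s)
    T = np.empty(len(x) - m + 1)
    for tau in range(len(T)):
        corr = s @ x[tau:tau + m]
        T[tau] = corr**2 / (sigma2 * (s @ s))  # GLRT statistic at delay tau
    tau_hat = int(np.argmax(T))
    threshold = chi2.ppf(1.0 - pfa, df=1)      # per-delay false-alarm control only
    return bool(T[tau_hat] > threshold), tau_hat

# Demo: a Gaussian-monocycle-like template embedded at sample 500 in noise.
rng = np.random.default_rng(0)
s = np.diff(np.exp(-0.5 * ((np.arange(64) - 32.0) / 6.0) ** 2))
s /= np.linalg.norm(s)                 # unit-energy template
x = rng.normal(0.0, 0.2, 2048)
x[500:500 + len(s)] += 1.2 * s
print(glrt_toa(x, s, sigma2=0.04))     # expect (True, ~500)
```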
Decision Fusion with Channel Errors in Distributed Decode-Then-Fuse Sensor Networks
Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Zhong, Xionghu
2015-01-01
Decision fusion for distributed detection in sensor networks under non-ideal channels is investigated in this paper. Usually, the local decisions are transmitted to the fusion center (FC) and decoded, and a fusion rule is then applied to achieve a global decision. We propose an optimal likelihood ratio test (LRT)-based fusion rule to take the uncertainty of the decoded binary data due to modulation, reception mode and communication channel into account. The average bit error rate (BER) is employed to characterize such an uncertainty. Further, the detection performance is analyzed under both non-identical and identical local detection performance indices. In addition, the performance of the proposed method is compared with the existing optimal and suboptimal LRT fusion rules. The results show that the proposed fusion rule is more robust compared to these existing ones. PMID:26251908
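A minimal sketch of an LRT fusion rule of the kind analyzed above, folding each channel's average BER into the conditional probabilities of the received bits; the sensor operating points and BERs are made-up numbers.

```python
import math

def fused_llr(bits, pd, pf, ber):
    """Log-likelihood ratio fusion of local binary decisions over noisy channels.

    bits : decisions as decoded at the fusion center (0/1)
    pd   : each sensor's local probability of detection
    pf   : each sensor's local probability of false alarm
    ber  : average bit error rate of each sensor's channel
    """
    llr = 0.0
    for u, d, f, e in zip(bits, pd, pf, ber):
        p1 = d * (1 - e) + (1 - d) * e   # P(receive 1 | H1), channel folded in
        p0 = f * (1 - e) + (1 - f) * e   # P(receive 1 | H0)
        llr += math.log((p1 if u else 1 - p1) / (p0 if u else 1 - p0))
    return llr

# Three hypothetical sensors; declare H1 when the fused LLR exceeds 0.
print(fused_llr([1, 0, 1], pd=[0.9, 0.8, 0.85], pf=[0.1, 0.15, 0.1],
                ber=[0.05, 0.2, 0.1]))
```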
Prediction of hamstring injury in professional soccer players by isokinetic measurements
Dauty, Marc; Menu, Pierre; Fouasson-Chailloux, Alban; Ferréol, Sophie; Dubois, Charles
2016-01-01
Summary Objectives Previous studies investigating the ability of isokinetic strength ratios to predict hamstring injuries in soccer players have reported conflicting results. Hypothesis To determine whether isokinetic ratios are able to predict hamstring injury occurring during the season in professional soccer players. Study Design Case-control study; Level of evidence: 3. Methods From 2001 to 2011, 350 isokinetic tests were performed in 136 professional soccer players at the beginning of the soccer season. Fifty-seven players suffered hamstring injury during the season that followed the isokinetic tests. These players were compared with the 79 uninjured players. The bilateral concentric ratio (hamstring-to-hamstring), ipsilateral concentric ratio (hamstring-to-quadriceps), and mixed ratio (eccentric/concentric hamstring-to-quadriceps) were studied. The predictive ability of each ratio was established based on the likelihood ratio and post-test probability. Results The mixed ratio (30°/s eccentric/240°/s concentric hamstring-to-quadriceps) <0.8, ipsilateral ratio (180°/s concentric hamstring-to-quadriceps) <0.47, and bilateral ratio (60°/s concentric hamstring-to-hamstring) <0.85 were the most predictive of hamstring injury. The ipsilateral ratio <0.47 allowed prediction of the severity of the hamstring injury, and was also influenced by the length of time since administration of the isokinetic tests. Conclusion Isokinetic ratios are useful for predicting the likelihood of hamstring injury in professional soccer players during the competitive season. PMID:27331039
Diagnostic capability of spectral-domain optical coherence tomography for glaucoma.
Wu, Huijuan; de Boer, Johannes F; Chen, Teresa C
2012-05-01
To determine the diagnostic capability of spectral-domain optical coherence tomography in glaucoma patients with visual field defects. Prospective, cross-sectional study. Participants were recruited from a university hospital clinic. One eye of 85 normal subjects and 61 glaucoma patients with average visual field mean deviation of -9.61 ± 8.76 dB was selected randomly for the study. A subgroup of the glaucoma patients with early visual field defects was analyzed separately. Spectralis optical coherence tomography (Heidelberg Engineering, Inc) circular scans were performed to obtain peripapillary retinal nerve fiber layer (RNFL) thicknesses. The RNFL diagnostic parameters based on the normative database were used alone or in combination for identifying glaucomatous RNFL thinning. To evaluate diagnostic performance, calculations included areas under the receiver operating characteristic curve, sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio. Overall RNFL thickness had the highest area under the receiver operating characteristic curve values: 0.952 for all patients and 0.895 for the early glaucoma subgroup. For all patients, the highest sensitivity (98.4%; 95% confidence interval, 96.3% to 100%) was achieved by using 2 criteria: ≥1 RNFL sector being abnormal at the < 5% level and overall classification of borderline or outside normal limits, with specificities of 88.9% (95% confidence interval, 84.0% to 94.0%) and 87.1% (95% confidence interval, 81.6% to 92.5%), respectively, for these 2 criteria. Statistical parameters for evaluating the diagnostic performance of the Spectralis spectral-domain optical coherence tomography were good for early perimetric glaucoma and were excellent for moderately advanced perimetric glaucoma. Copyright © 2012 Elsevier Inc. All rights reserved.
Likelihood ratio-based integrated personal risk assessment of type 2 diabetes.
Sato, Noriko; Htun, Nay Chi; Daimon, Makoto; Tamiya, Gen; Kato, Takeo; Kubota, Isao; Ueno, Yoshiyuki; Yamashita, Hidetoshi; Fukao, Akira; Kayama, Takamasa; Muramatsu, Masaaki
2014-01-01
To facilitate personalized health care for multifactorial diseases, risks of genetic and clinical/environmental factors should be assessed together for each individual in an integrated fashion. This approach is possible with the likelihood ratio (LR)-based risk assessment system, as this system can incorporate manifold tests. We examined the usefulness of this system for assessing type 2 diabetes (T2D). Our system employed 29 genetic susceptibility variants, body mass index (BMI), and hypertension as risk factors whose LRs can be estimated from openly available T2D association data for the Japanese population. The pretest probability was set at a sex- and age-appropriate population average of diabetes prevalence. The classification performance of our LR-based risk assessment was compared to that of a non-invasive screening test for diabetes called TOPICS (with score based on age, sex, family history, smoking, BMI, and hypertension) using receiver operating characteristic analysis with a community cohort (n = 1263). The area under the receiver operating characteristic curve (AUC) for the LR-based assessment and TOPICS was 0.707 (95% CI 0.665-0.750) and 0.719 (0.675-0.762), respectively. These AUCs were much higher than that of a genetic risk score constructed using the same genetic susceptibility variants, 0.624 (0.574-0.674). The use of ethnically matched LRs is necessary for proper personal risk assessment. In conclusion, although LR-based integrated risk assessment for T2D still requires additional tests that evaluate other factors, such as risks involved in missing heritability, our results indicate the potential usability of the LR-based assessment system and stress the importance of stratified epidemiological investigations in personalized medicine.
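The integration step is multiplication of likelihood ratios on the odds scale, under the usual assumption that the factors are conditionally independent given disease status; a sketch with invented LRs rather than the paper's estimates.

```python
def integrated_posttest(pretest_p, lrs):
    """Combine likelihood ratios from several risk factors on the odds scale."""
    odds = pretest_p / (1.0 - pretest_p)
    for lr in lrs:                 # assumes conditional independence of factors
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical: 8% age- and sex-specific prevalence, then LRs for a risk
# genotype (1.3), elevated BMI (1.8), and hypertension (1.5).
print(integrated_posttest(0.08, [1.3, 1.8, 1.5]))  # -> ~0.23
```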
How well do commonly used data presentation formats support comparative effectiveness evaluations?
Dolan, James G.; Qian, Feng; Veazie, Peter J.
2012-01-01
Background Good decisions depend on an accurate understanding of the comparative effectiveness of decision alternatives. The best way to convey data needed to support these comparisons is unknown. Objective To determine how well five commonly used data presentation formats convey comparative effectiveness information. Design Internet survey using a factorial design. Subjects 279 members of an online survey panel. Intervention Study participants compared outcomes associated with three hypothetical screening test options relative to five possible outcomes with probabilities ranging from 2 per 5,000 (0.04%) to 500 per 1,000 (50%). Data presentation formats included a table, a “magnified” bar chart, a risk scale, a frequency diagram, and an icon array. Measurements Outcomes included the number of correct ordinal judgments regarding the more likely of two outcomes, the ratio of perceived versus actual relative likelihoods of the paired outcomes, the inter-subject consistency of responses, and perceived clarity. Results The mean number of correct ordinal judgments was 12 of 15 (80%), with no differences among data formats. On average, there was a 3.3-fold difference between perceived and actual likelihood ratios (95% CI: 3.0 to 3.6). Comparative judgments based on flow charts, icon arrays, and tables were all significantly more accurate and consistent than those based on risk scales and bar charts, p < 0.001. The most clearly perceived formats were the table and the flow chart. Low subjective numeracy was associated with less accurate and more variable data interpretations and lower perceived clarity for icon displays, bar charts, and flow diagrams. Conclusions None of the data presentation formats studied can reliably provide patients, especially those with low subjective numeracy, with an accurate understanding of comparative effectiveness information. PMID:22618998
How well do commonly used data presentation formats support comparative effectiveness evaluations?
Dolan, James G; Qian, Feng; Veazie, Peter J
2012-01-01
Good decisions depend on an accurate understanding of the comparative effectiveness of decision alternatives. The best way to convey data needed to support these comparisons is unknown. To determine how well 5 commonly used data presentation formats convey comparative effectiveness information. The study was an Internet survey using a factorial design. Participants consisted of 279 members of an online survey panel. Study participants compared outcomes associated with 3 hypothetical screening test options relative to 5 possible outcomes with probabilities ranging from 2 per 5000 (0.04%) to 500 per 1000 (50%). Data presentation formats included a table, a "magnified" bar chart, a risk scale, a frequency diagram, and an icon array. Outcomes included the number of correct ordinal judgments regarding the more likely of 2 outcomes, the ratio of perceived versus actual relative likelihoods of the paired outcomes, the intersubject consistency of responses, and perceived clarity. The mean number of correct ordinal judgments was 12 of 15 (80%), with no differences among data formats. On average, there was a 3.3-fold difference between perceived and actual likelihood ratios (95% confidence interval = 3.0-3.6). Comparative judgments based on flowcharts, icon arrays, and tables were all significantly more accurate and consistent than those based on risk scales and bar charts (P < 0.001). The most clearly perceived formats were the table and the flowchart. Low subjective numeracy was associated with less accurate and more variable data interpretations and lower perceived clarity for icon displays, bar charts, and flow diagrams. None of the data presentation formats studied can reliably provide patients, especially those with low subjective numeracy, with an accurate understanding of comparative effectiveness information.
Xu, Stanley; Hambidge, Simon J; McClure, David L; Daley, Matthew F; Glanz, Jason M
2013-08-30
In the examination of the association between vaccines and rare adverse events after vaccination in postlicensure observational studies, it is challenging to define appropriate risk windows because prelicensure RCTs provide little insight on the timing of specific adverse events. Past vaccine safety studies have often used prespecified risk windows based on prior publications, biological understanding of the vaccine, and expert opinion. Recently, a data-driven approach was developed to identify appropriate risk windows for vaccine safety studies that use the self-controlled case series design. This approach employs both the maximum incidence rate ratio and the linear relation between the estimated incidence rate ratio and the inverse of average person time at risk, given a specified risk window. In this paper, we present a scan statistic that can identify appropriate risk windows in vaccine safety studies using the self-controlled case series design while taking into account the dependence of time intervals within an individual and while adjusting for time-varying covariates such as age and seasonality. This approach uses the maximum likelihood ratio test based on fixed-effects models, which has been used for analyzing data from self-controlled case series design in addition to conditional Poisson models. Copyright © 2013 John Wiley & Sons, Ltd.
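A much-simplified sketch of scanning candidate risk windows with a likelihood ratio statistic: events pooled across cases are compared with a uniform-rate null via a binomial log-likelihood ratio. Unlike the method above, this ignores within-person dependence and age/season adjustment, and all numbers are simulated.

```python
import numpy as np

def scan_risk_windows(event_days, T, windows):
    """Return (llr, window, events_in_window) for the best-supported window.

    event_days : day of each adverse event relative to vaccination
    T          : total observation period per person, in days
    windows    : candidate risk-window lengths [0, w) to scan
    """
    days = np.asarray(event_days, dtype=float)
    n = len(days)
    best = None
    for w in windows:
        k = int((days < w).sum())
        p0, p1 = w / T, k / n
        if k == 0 or p1 <= p0:            # keep only elevated-rate windows
            continue
        llr = k * np.log(p1 / p0)         # binomial log-likelihood ratio
        if k < n:
            llr += (n - k) * np.log((1 - p1) / (1 - p0))
        if best is None or llr > best[0]:
            best = (llr, w, k)
    return best

rng = np.random.default_rng(3)
# 60 background events over 42 days plus 30 excess events in the first 14 days.
events = np.concatenate([rng.uniform(0, 42, 60), rng.uniform(0, 14, 30)])
print(scan_risk_windows(events, T=42, windows=range(7, 42, 7)))
```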
Aiken, Linda H; Sloane, Douglas M; Bruyneel, Luk; Van den Heede, Koen; Griffiths, Peter; Busse, Reinhard; Diomidous, Marianna; Kinnunen, Juha; Kózka, Maria; Lesaffre, Emmanuel; McHugh, Matthew D; Moreno-Casbas, M T; Rafferty, Anne Marie; Schwendimann, Rene; Scott, P Anne; Tishelman, Carol; van Achterberg, Theo; Sermeus, Walter
2014-01-01
Summary Background Austerity measures and health-system redesign to minimise hospital expenditures risk adversely affecting patient outcomes. The RN4CAST study was designed to inform decision making about nursing, one of the largest components of hospital operating expenses. We aimed to assess whether differences in patient to nurse ratios and nurses’ educational qualifications in nine of the 12 RN4CAST countries with similar patient discharge data were associated with variation in hospital mortality after common surgical procedures. Methods For this observational study, we obtained discharge data for 422 730 patients aged 50 years or older who underwent common surgeries in 300 hospitals in nine European countries. Administrative data were coded with a standard protocol (variants of the ninth or tenth versions of the International Classification of Diseases) to estimate 30 day in-hospital mortality by use of risk adjustment measures including age, sex, admission type, 43 dummy variables suggesting surgery type, and 17 dummy variables suggesting comorbidities present at admission. Surveys of 26 516 nurses practising in study hospitals were used to measure nurse staffing and nurse education. We used generalised estimating equations to assess the effects of nursing factors on the likelihood of surgical patients dying within 30 days of admission, before and after adjusting for other hospital and patient characteristics. Findings An increase in a nurse’s workload by one patient increased the likelihood of an inpatient dying within 30 days of admission by 7% (odds ratio 1·068, 95% CI 1·031–1·106), and every 10% increase in bachelor’s degree nurses was associated with a decrease in this likelihood by 7% (0·929, 0·886–0·973). These associations imply that patients in hospitals in which 60% of nurses had bachelor’s degrees and nurses cared for an average of six patients would have almost 30% lower mortality than patients in hospitals in which only 30% of nurses had bachelor’s degrees and nurses cared for an average of eight patients. Interpretation Nurse staffing cuts to save money might adversely affect patient outcomes. An increased emphasis on bachelor’s education for nurses could reduce preventable hospital deaths. Funding European Union’s Seventh Framework Programme, National Institute of Nursing Research, National Institutes of Health, the Norwegian Nurses Organisation and the Norwegian Knowledge Centre for the Health Services, Swedish Association of Health Professionals, the regional agreement on medical training and clinical research between Stockholm County Council and Karolinska Institutet, Committee for Health and Caring Sciences and Strategic Research Program in Care Sciences at Karolinska Institutet, Spanish Ministry of Science and Innovation. PMID:24581683
Aiken, Linda H; Sloane, Douglas M; Bruyneel, Luk; Van den Heede, Koen; Griffiths, Peter; Busse, Reinhard; Diomidous, Marianna; Kinnunen, Juha; Kózka, Maria; Lesaffre, Emmanuel; McHugh, Matthew D; Moreno-Casbas, M T; Rafferty, Anne Marie; Schwendimann, Rene; Scott, P Anne; Tishelman, Carol; van Achterberg, Theo; Sermeus, Walter
2014-05-24
Austerity measures and health-system redesign to minimise hospital expenditures risk adversely affecting patient outcomes. The RN4CAST study was designed to inform decision making about nursing, one of the largest components of hospital operating expenses. We aimed to assess whether differences in patient to nurse ratios and nurses' educational qualifications in nine of the 12 RN4CAST countries with similar patient discharge data were associated with variation in hospital mortality after common surgical procedures. For this observational study, we obtained discharge data for 422,730 patients aged 50 years or older who underwent common surgeries in 300 hospitals in nine European countries. Administrative data were coded with a standard protocol (variants of the ninth or tenth versions of the International Classification of Diseases) to estimate 30 day in-hospital mortality by use of risk adjustment measures including age, sex, admission type, 43 dummy variables suggesting surgery type, and 17 dummy variables suggesting comorbidities present at admission. Surveys of 26,516 nurses practising in study hospitals were used to measure nurse staffing and nurse education. We used generalised estimating equations to assess the effects of nursing factors on the likelihood of surgical patients dying within 30 days of admission, before and after adjusting for other hospital and patient characteristics. An increase in a nurse's workload by one patient increased the likelihood of an inpatient dying within 30 days of admission by 7% (odds ratio 1·068, 95% CI 1·031-1·106), and every 10% increase in bachelor's degree nurses was associated with a decrease in this likelihood by 7% (0·929, 0·886-0·973). These associations imply that patients in hospitals in which 60% of nurses had bachelor's degrees and nurses cared for an average of six patients would have almost 30% lower mortality than patients in hospitals in which only 30% of nurses had bachelor's degrees and nurses cared for an average of eight patients. Nurse staffing cuts to save money might adversely affect patient outcomes. An increased emphasis on bachelor's education for nurses could reduce preventable hospital deaths. European Union's Seventh Framework Programme, National Institute of Nursing Research, National Institutes of Health, the Norwegian Nurses Organisation and the Norwegian Knowledge Centre for the Health Services, Swedish Association of Health Professionals, the regional agreement on medical training and clinical research between Stockholm County Council and Karolinska Institutet, Committee for Health and Caring Sciences and Strategic Research Program in Care Sciences at Karolinska Institutet, Spanish Ministry of Science and Innovation. Copyright © 2014 Elsevier Ltd. All rights reserved.
On the occurrence of false positives in tests of migration under an isolation with migration model
Hey, Jody; Chung, Yujin; Sethuraman, Arun
2015-01-01
The population genetic study of divergence is often done using a Bayesian genealogy sampler, like those implemented in IMa2 and related programs, and these analyses frequently include a likelihood-ratio test of the null hypothesis of no migration between populations. Cruickshank and Hahn (2014, Molecular Ecology, 23, 3133–3157) recently reported a high rate of false positive test results with IMa2 for data simulated with small numbers of loci under models with no migration and recent splitting times. We confirm these findings and discover that they are caused by a failure of the assumptions underlying likelihood ratio tests that arises when using marginal likelihoods for a subset of model parameters. We also show that for small data sets, with little divergence between samples from two populations, an excellent fit can often be found both by a model with a low migration rate and a recent splitting time and by a model with a high migration rate and a deep splitting time. PMID:26456794
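For readers unfamiliar with the test under discussion, here is a minimal sketch of a nested likelihood-ratio test of "no migration"; the log likelihoods are hypothetical, and the 50:50 chi-square mixture for a boundary parameter is a textbook approximation rather than IMa2's actual procedure:

```python
from scipy.stats import chi2

loglik_null = -1234.7  # maximized log likelihood, migration fixed at 0 (hypothetical)
loglik_full = -1231.2  # maximized log likelihood, migration rate free (hypothetical)

lrt = 2 * (loglik_full - loglik_null)
# Because the null pins the migration rate to the boundary of its parameter
# space, the reference distribution is often taken as a 50:50 mixture of a
# point mass at 0 and chi-square(1); this still presumes the regularity
# conditions that the study above shows can fail for marginal likelihoods.
p_value = 0.5 * chi2.sf(lrt, df=1)
print(f"LRT = {lrt:.2f}, approximate p = {p_value:.4f}")
```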
Clinical Evaluation and Physical Exam Findings in Patients with Anterior Shoulder Instability.
Lizzio, Vincent A; Meta, Fabien; Fidai, Mohsin; Makhni, Eric C
2017-12-01
The goal of this paper is to provide an overview of the evaluation of the patient with suspected or known anteroinferior glenohumeral instability. There is a high rate of recurrent subluxations or dislocations in young patients with a history of anterior shoulder dislocation, and recurrent instability increases the likelihood of further damage to the glenohumeral joint. Proper identification and treatment of anterior shoulder instability can dramatically reduce the rate of recurrent dislocation and prevent subsequent complications. Overall, the anterior release or surprise test demonstrates the best sensitivity and specificity for clinically diagnosing anterior shoulder instability, although other tests also have favorable sensitivities, specificities, positive likelihood ratios, negative likelihood ratios, and inter-rater reliabilities. Anterior shoulder instability is a relatively common injury in the young and athletic population. The combination of history taking and the apprehension, relocation, release or surprise, anterior load, and anterior drawer exam maneuvers will optimize sensitivity and specificity for accurately diagnosing anterior shoulder instability in clinical practice.
Genetic modelling of test day records in dairy sheep using orthogonal Legendre polynomials.
Kominakis, A; Volanis, M; Rogdakis, E
2001-03-01
Test day milk yields of three lactations in Sfakia sheep were analyzed by fitting a random regression (RR) model, regressing on orthogonal polynomials of the stage of the lactation period, i.e. days in milk. Univariate (UV) and multivariate (MV) analyses were also performed for four stages of the lactation period, represented by average days in milk, i.e. 15, 45, 70 and 105 days, to compare estimates obtained from RR models with estimates from UV and MV analyses. The total numbers of test day records were 790, 1314 and 1041, obtained from 214, 342 and 303 ewes in the first, second and third lactation, respectively. Error variances and covariances between regression coefficients were estimated by restricted maximum likelihood. Models were compared using likelihood ratio tests (LRTs). Log likelihoods were not significantly reduced when the rank of the orthogonal Legendre polynomials (LPs) of lactation stage was reduced from 4 to 2 and homogeneous variances for lactation stages within lactations were considered. Mean weighted heritability estimates with RR models were 0.19, 0.09 and 0.08 for the first, second and third lactation, respectively. The corresponding estimates obtained from UV analyses were 0.14, 0.12 and 0.08. Mean permanent environmental variance, as a proportion of the total, was high at all stages and lactations, ranging from 0.54 to 0.71. Within lactations, genetic and permanent environmental correlations between lactation stages ranged from 0.36 to 0.99 and from 0.76 to 0.99, respectively. Genetic parameters for additive genetic and permanent environmental effects obtained from RR models differed from those obtained from UV and MV analyses.
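A short sketch of the orthogonal Legendre basis such models regress on: days in milk are rescaled to [-1, 1] and each stage receives one covariate per polynomial. The stages and the order-2 truncation come from the abstract; the code itself is only illustrative:

```python
import numpy as np
from numpy.polynomial import legendre

days = np.array([15, 45, 70, 105])                           # lactation stages above
x = 2 * (days - days.min()) / (days.max() - days.min()) - 1  # map to [-1, 1]

order = 2  # the reduced rank the likelihood ratio tests found adequate
basis = np.column_stack(
    [legendre.legval(x, [0] * k + [1]) for k in range(order + 1)]
)
print(basis.round(3))  # one row per stage, one column per Legendre polynomial
```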
Olsen, Jonathan R; Mitchell, Richard; Mutrie, Nanette; Foley, Louise; Ogilvie, David
2017-12-01
This study aimed to describe active travel (walking or cycling) in Scotland and explore potential demographic, geographic, and socio-economic inequalities in active travel. We extracted data for the period 2012-13 (39,585 journey stages) from the Scottish Household Survey. Survey travel diaries recorded all journeys made on the previous day by sampled individuals aged 16+ living within Scotland, and the stages within each journey. Descriptive statistics were calculated for journey stages, mode, purpose and distance. Logistic regression models were fitted to examine the relationship between the likelihood of a journey stage being active and age, sex, area deprivation and urban/rural classification. A quarter of all journey stages were walked or cycled (26%, n: 10,280/39,585); 96% of these were walked. Those living in the least deprived areas travelled a greater average distance per active journey stage than those in the most deprived. The likelihood of an active journey stage was higher for those living in the most deprived areas than for those in the least deprived (Odds Ratio (OR) 1.21, 95% CI 1.04-1.41) and higher for younger than for older age groups (OR for the older group 0.44, 95% CI 0.34-0.58). In conclusion, socio-economic inequalities in active travel were identified, but - contrary to the trends for many health-beneficial behaviours - with a greater likelihood of active travel in more deprived areas. This indicates a potential contribution to protecting and improving health for those whose health status tends to be worse. Walking was the most common mode of active travel, and should be promoted as much as cycling.
Elliston, Katherine G; Ferguson, Stuart G; Schüz, Natalie; Schüz, Benjamin
2017-04-01
Individual eating behavior is a risk factor for obesity and highly dependent on internal and external cues. Many studies also suggest that the food environment (i.e., food outlets) influences eating behavior. This study therefore examines the momentary food environment (at the time of eating) and the role of cues simultaneously in predicting everyday eating behavior in adults with overweight and obesity. Intensive longitudinal study using ecological momentary assessment (EMA) over 14 days in 51 adults with overweight and obesity (average body mass index = 30.77; SD = 4.85) with a total of 745 participant days of data. Multiple daily assessments of eating (meals, high- or low-energy snacks) and randomly timed assessments. Cues and the momentary food environment were assessed during both assessment types. Random effects multinomial logistic regression shows that both internal (affect) and external (food availability, social situation, observing others eat) cues were associated with increased likelihood of eating. The momentary food environment predicted meals and snacking on top of cues, with a higher likelihood of high-energy snacks when fast food restaurants were close by (odds ratio [OR] = 1.89, 95% confidence interval [CI] = 1.22, 2.93) and a higher likelihood of low-energy snacks in proximity to supermarkets (OR = 2.29, 95% CI = 1.38, 3.82). Real-time eating behavior, both in terms of main meals and snacks, is associated with internal and external cues in adults with overweight and obesity. In addition, perceptions of the momentary food environment influence eating choices, emphasizing the importance of an integrated perspective on eating behavior and obesity prevention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Analyzing Personalized Policies for Online Biometric Verification
Sadhwani, Apaar; Yang, Yan; Wein, Lawrence M.
2014-01-01
Motivated by India’s nationwide biometric program for social inclusion, we analyze verification (i.e., one-to-one matching) in the case where we possess similarity scores for 10 fingerprints and two irises between a resident’s biometric images at enrollment and his biometric images during his first verification. At subsequent verifications, we allow individualized strategies based on these 12 scores: we acquire a subset of the 12 images, get new scores for this subset that quantify the similarity to the corresponding enrollment images, and use the likelihood ratio (i.e., the likelihood of observing these scores if the resident is genuine divided by the corresponding likelihood if the resident is an imposter) to decide whether a resident is genuine or an imposter. We also consider two-stage policies, where additional images are acquired in a second stage if the first-stage results are inconclusive. Using performance data from India’s program, we develop a new probabilistic model for the joint distribution of the 12 similarity scores and find near-optimal individualized strategies that minimize the false reject rate (FRR) subject to constraints on the false accept rate (FAR) and mean verification delay for each resident. Our individualized policies achieve the same FRR as a policy that acquires (and optimally fuses) 12 biometrics for each resident, which represents a five (four, respectively) log reduction in FRR relative to fingerprint (iris, respectively) policies previously proposed for India’s biometric program. The mean delay is sec for our proposed policy, compared to 30 sec for a policy that acquires one fingerprint and 107 sec for a policy that acquires all 12 biometrics. This policy acquires iris scans from 32–41% of residents (depending on the FAR) and acquires an average of 1.3 fingerprints per resident. PMID:24787752
Analyzing personalized policies for online biometric verification.
Sadhwani, Apaar; Yang, Yan; Wein, Lawrence M
2014-01-01
Motivated by India's nationwide biometric program for social inclusion, we analyze verification (i.e., one-to-one matching) in the case where we possess similarity scores for 10 fingerprints and two irises between a resident's biometric images at enrollment and his biometric images during his first verification. At subsequent verifications, we allow individualized strategies based on these 12 scores: we acquire a subset of the 12 images, get new scores for this subset that quantify the similarity to the corresponding enrollment images, and use the likelihood ratio (i.e., the likelihood of observing these scores if the resident is genuine divided by the corresponding likelihood if the resident is an imposter) to decide whether a resident is genuine or an imposter. We also consider two-stage policies, where additional images are acquired in a second stage if the first-stage results are inconclusive. Using performance data from India's program, we develop a new probabilistic model for the joint distribution of the 12 similarity scores and find near-optimal individualized strategies that minimize the false reject rate (FRR) subject to constraints on the false accept rate (FAR) and mean verification delay for each resident. Our individualized policies achieve the same FRR as a policy that acquires (and optimally fuses) 12 biometrics for each resident, which represents a five (four, respectively) log reduction in FRR relative to fingerprint (iris, respectively) policies previously proposed for India's biometric program. The mean delay is [Formula: see text] sec for our proposed policy, compared to 30 sec for a policy that acquires one fingerprint and 107 sec for a policy that acquires all 12 biometrics. This policy acquires iris scans from 32-41% of residents (depending on the FAR) and acquires an average of 1.3 fingerprints per resident.
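A minimal sketch of the likelihood-ratio decision rule described above. The Gaussian score distributions and the independence across biometrics are illustrative assumptions only; the paper fits a joint model to the 12 scores:

```python
import numpy as np
from scipy.stats import norm

genuine = norm(loc=0.8, scale=0.10)   # hypothetical genuine-score distribution
imposter = norm(loc=0.4, scale=0.15)  # hypothetical imposter-score distribution

def log_likelihood_ratio(scores):
    # Sum of per-biometric LLRs, assuming independence across biometrics.
    return float(np.sum(genuine.logpdf(scores) - imposter.logpdf(scores)))

scores = np.array([0.75, 0.82, 0.69])  # similarity scores for an acquired subset
threshold = 0.0                        # tuned in practice to meet the FAR constraint
print("genuine" if log_likelihood_ratio(scores) > threshold else "imposter")
```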
Schweitzer, Cedric; Korobelnik, Jean-Francois; Le Goff, Melanie; Rahimian, Olivier; Malet, Florence; Rougier, Marie-Benedicte; Delyfer, Marie-Noelle; Dartigues, Jean-Francois; Delcourt, Cecile
2016-11-01
To assess the diagnostic accuracy of spectral-domain optical coherence tomography (SD-OCT) in discriminating between glaucoma and control subjects in an elderly population. The antioxidants, essential lipids, nutrition and ocular maladies study (ALIENOR: "Antioxydants, Lipides Essentiels, Nutrition et Maladies Oculaires") is a population-based study. From 2009 to 2010, a total of 624 subjects, aged 74 years or older, underwent a complete eye examination, including optic disc color photography and SD-OCT examination of the macula and the optic nerve head. Glaucoma diagnosis was made using retinophotography of the optic nerve head and International Society for Epidemiologic and Geographical Ophthalmology criteria. Average and sectorial peripapillary retinal nerve fiber layer thicknesses (RNFLT) were compared between glaucoma and control subjects using area under the receiver operating characteristic curves (AUC), positive and negative likelihood ratios (LR+/LR-), and diagnostic odds ratios (DOR). A total of 532 subjects had complete data; 492 were classified as controls and 40 were classified as glaucoma. Mean age was 82.1 ± 4.2 years and average RNFLT was significantly different between the two groups (controls: 88.7 ± 12.2 μm, glaucoma: 65.4 ± 14.4 μm, P < 0.001). Highest AUC values were observed for average (0.895), temporal-inferior (0.874), and temporal-superior (0.868) RNFLT. Temporal-superior RNFLT had the highest DOR (25.31; LR+, 4.65; LR-, 0.18), followed by average RNFLT (DOR: 24.80; LR+, 6.36; LR-, 0.26). When using the normative database provided by the machine, DOR increased to 31.03 (LR+, 1.75; LR-, 0.06) if at least one parameter was considered abnormal (at P < 0.05). Parameters of SD-OCT RNFL may provide valuable information in a screening strategy to improve glaucoma detection in a general population of elderly people.
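Two small consistency checks on the indices reported above: the diagnostic odds ratio equals LR+ divided by LR-, and a likelihood ratio carries a pretest probability to a posttest probability through odds (the 10% pretest figure below is arbitrary):

```python
lr_pos, lr_neg = 6.36, 0.26            # average RNFLT figures from the abstract
print(f"DOR = {lr_pos / lr_neg:.1f}")  # ~24.5 vs the reported 24.80 (rounding)

pretest = 0.10                         # illustrative pretest probability
pre_odds = pretest / (1 - pretest)
post_odds = pre_odds * lr_pos
print(f"posttest probability after a positive test: {post_odds / (1 + post_odds):.2f}")
```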
Gan, Fah Fatt; Tang, Xu; Zhu, Yexin; Lim, Puay Weng
2017-06-01
The traditional variable life-adjusted display (VLAD) is a graphical display of the difference between expected and actual cumulative deaths. The VLAD assumes binary outcomes: death within 30 days of an operation or survival beyond 30 days. Full recovery and bedridden for life, for example, are considered the same outcome. This binary classification results in a great loss of information. Although there are many grades of survival, the binary outcomes are commonly used to classify surgical outcomes. Consequently, quality monitoring procedures are developed based on binary outcomes. With a more refined set of outcomes, the sensitivities of these procedures can be expected to improve. A likelihood ratio method is used to define a penalty-reward scoring system based on three or more surgical outcomes for the new VLAD. The likelihood ratio statistic W is based on testing the odds ratio of cumulative probabilities of recovery R. Two methods of implementing the new VLAD are proposed. We accumulate the statistic W − W̄_R to estimate the performance of a surgeon, where W̄_R is the average of the W's of a historical data set. The accumulated sum will be zero based on the historical data set. This ensures that if a new VLAD is plotted for a future surgeon of performance similar to this average performance, the plot will exhibit a horizontal trend. For illustration of the new VLAD, we consider 3-outcome surgical results: death within 30 days, partial and full recoveries. In our first illustration, we show the effect of partial recoveries on surgical results of a surgeon. In our second and third illustrations, the surgical results of two surgeons are compared using both the traditional VLAD based on binary-outcome data and the new VLAD based on 3-outcome data. A reversal in relative performance of surgeons is observed when the new VLAD is used. In our final illustration, we display the surgical results of four surgeons using the new VLAD based completely on 3-outcome data. Full recovery and bedridden for life are two completely different outcomes. There is a great loss of information when different grades of 'successful' operations are naively classified as survival. When surgical outcomes are classified more accurately into more than two categories, the resulting new VLAD will reveal more accurately and fairly the surgical results. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
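A minimal sketch of how such a chart is drawn: per-case scores minus the historical mean are accumulated and plotted against case number, so a surgeon matching the historical average tracks a horizontal line. The scores and mean below are made-up placeholders, not the paper's likelihood-ratio-derived values:

```python
import itertools

w_scores = {"death": -1.0, "partial": 0.2, "full": 0.5}  # hypothetical W values
historical_mean = 0.31                                   # hypothetical W-bar_R

outcomes = ["full", "full", "partial", "death", "full", "partial"]
vlad = list(itertools.accumulate(w_scores[o] - historical_mean for o in outcomes))
print([round(v, 2) for v in vlad])  # the plotted VLAD ordinates
```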
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
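A self-contained toy example of the method: for a conjugate 1-D Gaussian model the power posteriors can be sampled exactly, so the path-sampling identity log Z = integral over t of E_t[log L] can be evaluated directly. Real applications replace the exact sampling with MCMC at each power coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=20)  # toy data; likelihood variance known (1)
n, xbar = len(data), data.mean()

prior_var = 10.0**2                   # prior: theta ~ N(0, 10^2)
temps = np.linspace(0.0, 1.0, 21)     # the path from prior (t=0) to posterior (t=1)
expect_loglik = []
for t in temps:
    post_var = 1.0 / (1.0 / prior_var + t * n)  # power-posterior variance
    post_mean = post_var * t * n * xbar         # power-posterior mean
    theta = rng.normal(post_mean, np.sqrt(post_var), size=5000)
    loglik = (-0.5 * n * np.log(2 * np.pi)
              - 0.5 * ((data[None, :] - theta[:, None]) ** 2).sum(axis=1))
    expect_loglik.append(loglik.mean())         # Monte Carlo estimate of E_t[log L]

e = np.array(expect_loglik)                     # trapezoidal integration over t
log_marginal = float(np.sum(0.5 * (e[1:] + e[:-1]) * np.diff(temps)))
print(f"log marginal likelihood estimate: {log_marginal:.2f}")
```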
Subjective global assessment of nutritional status in children.
Mahdavi, Aida Malek; Ostadrahimi, Alireza; Safaiyan, Abdolrasool
2010-10-01
This study aimed to compare subjective and objective nutritional assessments and to analyse the performance of subjective global assessment (SGA) of nutritional status in diagnosing undernutrition in paediatric patients. One hundred and forty children (aged 2-12 years) hospitalized consecutively in Tabriz Paediatric Hospital from June 2008 to August 2008 underwent subjective assessment using the SGA questionnaire and objective assessment, including anthropometric and biochemical measurements. Agreement between the two assessment methods was analysed by the kappa (κ) statistic. Statistical indicators (sensitivity, specificity, predictive values, error rates, accuracy, powers, likelihood ratios and odds ratio) comparing the SGA with the objective assessment method were determined. The overall prevalence of undernutrition according to the SGA (70.7%) was higher than that by objective assessment of nutritional status (48.5%). Agreement between the two evaluation methods was only fair to moderate (κ = 0.336, P < 0.001). The sensitivity, specificity, positive and negative predictive value of the SGA method for screening undernutrition in this population were 88.235%, 45.833%, 60.606% and 80.487%, respectively. Accuracy, positive and negative power of the SGA method were 66.428%, 56.074% and 41.25%, respectively. The positive likelihood ratio, negative likelihood ratio and odds ratio of the SGA method were 1.628, 0.256 and 6.359, respectively. Our findings indicated that in assessing nutritional status of children, there is not a good level of agreement between SGA and objective nutritional assessment. In addition, SGA is a highly sensitive tool for assessing nutritional status and could identify children at risk of developing undernutrition. © 2009 Blackwell Publishing Ltd.
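A generic sketch of the indicators compared in this study, computed from one 2x2 table; the counts are hypothetical, not the study's data:

```python
tp, fp, fn, tn = 60, 26, 8, 22  # SGA vs objective assessment, illustrative counts

sens = tp / (tp + fn)
spec = tn / (tn + fp)
ppv, npv = tp / (tp + fp), tn / (tn + fn)
lr_pos, lr_neg = sens / (1 - spec), (1 - sens) / spec
odds_ratio = (tp * tn) / (fp * fn)

# Cohen's kappa for agreement between the two methods:
n = tp + fp + fn + tn
p_observed = (tp + tn) / n
p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (p_observed - p_expected) / (1 - p_expected)
print(f"sens={sens:.2f} spec={spec:.2f} LR+={lr_pos:.2f} "
      f"LR-={lr_neg:.2f} OR={odds_ratio:.2f} kappa={kappa:.2f}")
```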
Polcari, J.
2013-08-16
The signal processing concept of signal-to-noise ratio (SNR), in its role as a performance measure, is recast within the more general context of information theory, leading to a series of useful insights. Establishing generalized SNR (GSNR) as a rigorous information theoretic measure inherent in any set of observations significantly strengthens its quantitative performance pedigree while simultaneously providing a specific definition under general conditions. This directly leads to consideration of the log likelihood ratio (LLR): first, as the simplest possible information-preserving transformation (i.e., signal processing algorithm) and subsequently, as an absolute, comparable measure of information for any specific observation exemplar. Furthermore, the information accounting methodology that results permits practical use of both GSNR and LLR as diagnostic scalar performance measurements, directly comparable across alternative system/algorithm designs, applicable at any tap point within any processing string, in a form that is also comparable with the inherent performance bounds due to information conservation.
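The simplest concrete instance of the LLR viewpoint above is Gaussian signal-versus-noise detection, where the LLR is linear in the data and LLRs add across independent observations. A sketch with illustrative numbers:

```python
import numpy as np

def llr(x, signal_mean, noise_sigma):
    # log[ p(x | N(mu, sigma)) / p(x | N(0, sigma)) ] for each observation x
    return (signal_mean * x - 0.5 * signal_mean**2) / noise_sigma**2

x = np.array([0.3, 1.1, -0.2, 0.9])           # observations (illustrative)
per_sample = llr(x, signal_mean=1.0, noise_sigma=1.0)
print(f"total LLR = {per_sample.sum():.2f}")  # > 0 favors "signal present"
```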
Nasal Airway Microbiota Profile and Severe Bronchiolitis in Infants: A Case-control Study.
Hasegawa, Kohei; Linnemann, Rachel W; Mansbach, Jonathan M; Ajami, Nadim J; Espinola, Janice A; Petrosino, Joseph F; Piedra, Pedro A; Stevenson, Michelle D; Sullivan, Ashley F; Thompson, Amy D; Camargo, Carlos A
2017-11-01
Little is known about the relationship of airway microbiota with bronchiolitis in infants. We aimed to identify nasal airway microbiota profiles and to determine their association with the likelihood of bronchiolitis in infants. A case-control study was conducted. As part of a multicenter prospective study, we collected nasal airway samples from 40 infants hospitalized with bronchiolitis. We concurrently enrolled 110 age-matched healthy controls. By applying 16S ribosomal RNA gene sequencing and an unbiased clustering approach to these 150 nasal samples, we identified microbiota profiles and determined the association of microbiota profiles with the likelihood of bronchiolitis. Overall, the median age was 3 months and 56% were male. Unbiased clustering of airway microbiota identified 4 distinct profiles: a Moraxella-dominant profile (37%), a Corynebacterium/Dolosigranulum-dominant profile (27%), a Staphylococcus-dominant profile (15%) and a mixed profile (20%). The proportion with bronchiolitis was lowest in infants with the Moraxella-dominant profile (14%) and highest in those with the Staphylococcus-dominant profile (57%), corresponding to an odds ratio of 7.80 (95% confidence interval, 2.64-24.9; P < 0.001). In the multivariable model, the association between the Staphylococcus-dominant profile and a greater likelihood of bronchiolitis persisted (odds ratio for comparison with the Moraxella-dominant profile, 5.16; 95% confidence interval, 1.26-22.9; P = 0.03). By contrast, the Corynebacterium/Dolosigranulum-dominant profile group had a low proportion of infants with bronchiolitis (17%); the likelihood of bronchiolitis in this group did not significantly differ from that in the Moraxella-dominant profile group in both unadjusted and adjusted analyses. In this case-control study, we identified 4 distinct nasal airway microbiota profiles in infants. The Moraxella-dominant and Corynebacterium/Dolosigranulum-dominant profiles were associated with a low likelihood of bronchiolitis, while the Staphylococcus-dominant profile was associated with a high likelihood of bronchiolitis.
NASA Astrophysics Data System (ADS)
Tremmel, M.; Governato, F.; Volonteri, M.; Quinn, T. R.; Pontzen, A.
2018-04-01
We present the first self-consistent prediction for the distribution of formation time-scales for close supermassive black hole (SMBH) pairs following galaxy mergers. Using ROMULUS25, the first large-scale cosmological simulation to accurately track the orbital evolution of SMBHs within their host galaxies down to sub-kpc scales, we predict an average formation rate density of close SMBH pairs of 0.013 cMpc-3 Gyr-1. We find that it is relatively rare for galaxy mergers to result in the formation of close SMBH pairs with sub-kpc separation and those that do form are often the result of Gyr of orbital evolution following the galaxy merger. The likelihood and time-scale to form a close SMBH pair depends strongly on the mass ratio of the merging galaxies, as well as the presence of dense stellar cores. Low stellar mass ratio mergers with galaxies that lack a dense stellar core are more likely to become tidally disrupted and deposit their SMBH at large radii without any stellar core to aid in their orbital decay, resulting in a population of long-lived `wandering' SMBHs. Conversely, SMBHs in galaxies that remain embedded within a stellar core form close pairs in much shorter time-scales on average. This time-scale is a crucial, though often ignored or very simplified, ingredient to models predicting SMBH mergers rates and the connection between SMBH and star formation activity.
Chaikriangkrai, Kongkiat; Jhun, Hye Yeon; Shantha, Ghanshyam Palamaner Subash; Abdulhak, Aref Bin; Tandon, Rudhir; Alqasrawi, Musab; Klappa, Anthony; Pancholy, Samir; Deshmukh, Abhishek; Bhama, Jay; Sigurdsson, Gardar
2018-07-01
In aortic stenosis patients referred for surgical and transcatheter aortic valve replacement (AVR), the evidence of diagnostic accuracy of coronary computed tomography angiography (CCTA) has been limited. The objective of this study was to investigate the diagnostic accuracy of CCTA for significant coronary artery disease (CAD) in patients referred for AVR using invasive coronary angiography (ICA) as the gold standard. We searched databases for all diagnostic studies of CCTA in patients referred for AVR, which reported diagnostic testing characteristics on patient-based analysis required to pool summary sensitivity, specificity, positive-likelihood ratio, and negative-likelihood ratio. Significant CAD in both CCTA and ICA was defined by >50% stenosis in any coronary artery, coronary stent, or bypass graft. Thirteen studies evaluated 1498 patients (mean age, 74 y; 47% men; 76% transcatheter AVR). The pooled prevalence of significant stenosis determined by ICA was 43%. Hierarchical summary receiver-operating characteristic analysis demonstrated a summary area under curve of 0.96. The pooled sensitivity, specificity, and positive-likelihood and negative-likelihood ratios of CCTA in identifying significant stenosis determined by ICA were 95%, 79%, 4.48, and 0.06, respectively. In subgroup analysis, the diagnostic profiles of CCTA were comparable between surgical and transcatheter AVR. Despite the higher prevalence of significant CAD in patients with aortic stenosis than with other valvular heart diseases, our meta-analysis has shown that CCTA has a suitable diagnostic accuracy profile as a gatekeeper test for ICA. Our study illustrates a need for further study of the potential role of CCTA in preoperative planning for AVR.
Wang, Jiun-Hao; Chang, Hung-Hao
2010-10-26
In contrast to the considerable body of literature concerning the disabilities of the general population, little information exists pertaining to the disabilities of the farm population. Focusing on the disability issue among the insurants in the Farmers' Health Insurance (FHI) program in Taiwan, this paper examines the associations among socio-demographic characteristics, insured factors, and the introduction of the national health insurance program, as well as the types and payments of disabilities among the insurants. A unique dataset containing 1,594,439 insurants in 2008 was used in this research. A logistic regression model was estimated for the likelihood of receiving disability payments. Focusing on the recipients, we estimated a disability payment equation using the ordinary least squares method and a disability type equation using a multinomial logistic model, to investigate the effects of the exogenous factors on received payments and the likelihood of having different types of disabilities. Age and different job categories are significantly associated with the likelihood of receiving disability payments. Compared to those under age 45, the likelihood is higher among recipients aged 85 and above (the odds ratio is 8.04). Compared to hired workers, the odds ratios for the self-employed and spouses of farm operators who were not members of farmers' associations are 0.97 and 0.85, respectively. In addition, older insurants are more likely to have eye problems; few differences in disability types are related to insured job categories. Results indicate that older farmers are more likely to receive disability payments, but the likelihood does not differ much among insurants in the various job categories. Among all of the selected types of disability, the highest likelihood is found for eye disability. In addition, the introduction of the national health insurance program decreased the likelihood of receiving disability payments. The experience in Taiwan can be valuable for other countries that are in the initial stages of implementing a universal health insurance program.
Maximum likelihood convolutional decoding (MCD) performance due to system losses
NASA Technical Reports Server (NTRS)
Webster, L.
1976-01-01
A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.
Balluz, Lina; Wen, Xiao-Jun; Town, Machell; Shire, Jeffrey D; Qualter, Judy; Mokdad, Ali
2007-01-01
Ischemic heart disease (IHD) is one of the most common health threats to the adult population of the U.S. and other countries. The objective of this study was to examine the association between exposure to elevated annual average levels of particulate matter 2.5 (PM2.5) air quality index (AQI) and IHD in the general population. We combined data from the Behavioral Risk Factor Surveillance System and the U.S. Environmental Protection Agency air quality database. We analyzed the data using SUDAAN software to adjust for the effects of sampling bias, weights, and design effects. The prevalence of IHD was 9.6% among respondents who were exposed to an annual average PM2.5 AQI > 60 compared with 5.9% among respondents exposed to an annual average PM2.5 AQI ≤ 60. The respondents with higher levels of PM2.5 AQI exposure were more likely to have IHD (adjusted odds ratio = 1.72, 95% confidence interval 1.11, 2.66) than respondents with lower levels of exposure after adjusting for age, gender, race/ethnicity, education, smoking, body mass index, diabetes, hypertension, and hypercholesterolemia. Our study suggests that exposure to relatively higher levels of average annual PM2.5 AQI may increase the likelihood of IHD. In addition to encouraging health-related behavioral changes to reduce IHD, efforts should also focus on implementing appropriate measures to reduce exposure to unhealthy AQI levels.
Competition between learned reward and error outcome predictions in anterior cingulate cortex.
Alexander, William H; Brown, Joshua W
2010-02-15
The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.
Display size effects in visual search: analyses of reaction time distributions as mixtures.
Reynolds, Ann; Miller, Jeff
2009-05-01
In a reanalysis of data from Cousineau and Shiffrin (2004) and two new visual search experiments, we used a likelihood ratio test to examine the full distributions of reaction time (RT) for evidence that the display size effect is a mixture-type effect that occurs on only a proportion of trials, leaving RT in the remaining trials unaffected, as is predicted by serial self-terminating search models. Experiment 1 was a reanalysis of Cousineau and Shiffrin's data, for which a mixture effect had previously been established by a bimodal distribution of RTs, and the results confirmed that the likelihood ratio test could also detect this mixture. Experiment 2 applied the likelihood ratio test within a more standard visual search task with a relatively easy target/distractor discrimination, and Experiment 3 applied it within a target identification search task within the same types of stimuli. Neither of these experiments provided any evidence for the mixture-type display size effect predicted by serial self-terminating search models. Overall, these results suggest that serial self-terminating search models may generally be applicable only with relatively difficult target/distractor discriminations, and then only for some participants. In addition, they further illustrate the utility of analysing full RT distributions in addition to mean RT.
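A simplified sketch of the mixture-test logic: fit one- and two-component models to simulated response times and compare them with a likelihood ratio. Gaussian components and scikit-learn are stand-ins for the RT distributions and fitting machinery actually used, and mixture LRTs are compared against a simulated null in practice because standard chi-square regularity conditions fail:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical RTs: 60% of trials fast, 40% slowed by extended search.
rts = np.concatenate([rng.normal(450, 40, 120), rng.normal(620, 60, 80)])
X = rts.reshape(-1, 1)

# score() returns the mean log likelihood per sample; multiply by n for the total.
loglik = {k: GaussianMixture(n_components=k, random_state=0).fit(X).score(X) * len(X)
          for k in (1, 2)}
lrt = 2 * (loglik[2] - loglik[1])
print(f"mixture LRT statistic = {lrt:.1f}")
```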
Weemhoff, M; Kluivers, K B; Govaert, B; Evers, J L H; Kessels, A G H; Baeten, C G
2013-03-01
This study concerns the level of agreement between transperineal ultrasound and evacuation proctography for diagnosing enteroceles and intussusceptions. In a prospective observational study, 50 consecutive women who were planned to have an evacuation proctography underwent transperineal ultrasound too. Sensitivity, specificity, positive (PPV) and negative predictive value, as well as the positive and negative likelihood ratio of transperineal ultrasound were assessed in comparison to evacuation proctography. To determine the interobserver agreement of transperineal ultrasound, the quadratic weighted kappa was calculated. Furthermore, receiver operating characteristic curves were generated to show the diagnostic capability of transperineal ultrasound. For diagnosing intussusceptions (PPV 1.00), a positive finding on transperineal ultrasound was predictive of an abnormal evacuation proctography. Sensitivity of transperineal ultrasound was poor for intussusceptions (0.25). For diagnosing enteroceles, the positive likelihood ratio was 2.10 and the negative likelihood ratio, 0.85. There are many false-positive findings of enteroceles on ultrasonography (PPV 0.29). The interobserver agreement of the two ultrasonographers assessed as the quadratic weighted kappa of diagnosing enteroceles was 0.44 and that of diagnosing intussusceptions was 0.23. An intussusception on ultrasound is predictive of an abnormal evacuation proctography. For diagnosing enteroceles, the diagnostic quality of transperineal ultrasound was limited compared to evacuation proctography.
Is it possible to predict office hysteroscopy failure?
Cobellis, Luigi; Castaldi, Maria Antonietta; Giordano, Valentino; De Franciscis, Pasquale; Signoriello, Giuseppe; Colacurci, Nicola
2014-10-01
The purpose of this study was to develop a clinical tool, the HFI (Hysteroscopy Failure Index), which gives criteria to predict hysteroscopic examination failure. This was a retrospective diagnostic test study, aimed at validating the HFI, set at the Department of Gynaecology, Obstetrics and Reproductive Science of the Second University of Naples, Italy. The HFI was applied to our database of 995 consecutive women who underwent office hysteroscopy to assess abnormal uterine bleeding (AUB), infertility, cervical polyps, and abnormal sonographic patterns (postmenopausal endometrial thickness of more than 5 mm, endometrial hyperechogenic spots, irregular endometrial line, suspicion of uterine septa). Demographic characteristics, previous surgery, recurrent infections, sonographic data, estro-progestin use, IUD use and menopausal status were collected. Receiver operating characteristic (ROC) curve analysis was used to assess the ability of the model to identify failed hysteroscopies, that is, the number of correctly identified failures (true positives) divided by the total number of failed hysteroscopies (true positives + false negatives). Positive and negative likelihood ratios with 95% CI were calculated. The HFI score is able to predict office hysteroscopy failure in 76% of cases. Moreover, the positive likelihood ratio was 11.37 (95% CI: 8.49-15.21), and the negative likelihood ratio was 0.33 (95% CI: 0.27-0.41). The Hysteroscopy Failure Index was able to retrospectively predict office hysteroscopy failure. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
A LANDSAT study of ephemeral and perennial rangeland vegetation and soils
NASA Technical Reports Server (NTRS)
Bentley, R. G., Jr. (Principal Investigator); Salmon-Drexler, B. C.; Bonner, W. J.; Vincent, R. K.
1976-01-01
The author has identified the following significant results. Several methods of computer processing were applied to LANDSAT data for mapping vegetation characteristics of perennial rangeland in Montana and ephemeral rangeland in Arizona. The choice of optimal processing technique was dependent on prescribed mapping and site condition. Single channel level slicing and ratioing of channels were used for simple enhancement. Predictive models for mapping percent vegetation cover based on data from field spectra and LANDSAT data were generated by multiple linear regression of six unique LANDSAT spectral ratios. Ratio gating logic and maximum likelihood classification were applied successfully to recognize plant communities in Montana. Maximum likelihood classification did little to improve recognition of terrain features when compared to a single channel density slice in sparsely vegetated Arizona. LANDSAT was found to be more sensitive to differences between plant communities based on percentages of vigorous vegetation than to actual physical or spectral differences among plant species.
Identifying Malignant Pleural Effusion by A Cancer Ratio (Serum LDH: Pleural Fluid ADA Ratio).
Verma, Akash; Abisheganaden, John; Light, R W
2016-02-01
We studied the diagnostic potential of serum lactate dehydrogenase (LDH) in malignant pleural effusion through a retrospective analysis of patients hospitalized with exudative pleural effusion in 2013. Serum LDH and the serum LDH: pleural fluid ADA ratio were significantly higher in cancer patients presenting with exudative pleural effusion. In multivariate logistic regression analysis, pleural fluid ADA was negatively associated with malignancy (0.62, 0.45-0.85, p = 0.003), whereas serum LDH (1.02, 1.0-1.03, p = 0.004) and the serum LDH: pleural fluid ADA ratio (0.94, 0.99-1.0, p = 0.04) were positively associated with malignant pleural effusion. For the serum LDH: pleural fluid ADA ratio, a cut-off level of >20 showed a sensitivity and specificity of 0.98 (95% CI 0.92-0.99) and 0.94 (95% CI 0.83-0.98), respectively. The positive likelihood ratio was 32.6 (95% CI 10.7-99.6), while the negative likelihood ratio at this cut-off was 0.03 (95% CI 0.01-0.15). Higher serum LDH and serum LDH: pleural fluid ADA ratios in patients presenting with exudative pleural effusion can distinguish between malignant and non-malignant effusion on the first day of hospitalization. The cut-off level for the serum LDH: pleural fluid ADA ratio of >20 is highly predictive of malignancy in patients with exudative pleural effusion (whether lymphocytic or neutrophilic), with high sensitivity and specificity.
Navathe, Amol S; Volpp, Kevin G; Konetzka, R Tamara; Press, Matthew J; Zhu, Jingsan; Chen, Wei; Lindrooth, Richard C
2012-08-01
Quality of care may be linked to the profitability of admissions in addition to level of reimbursement. Prior policy reforms reduced payments that differentially affected the average profitability of various admission types. The authors estimated a Cox competing risks model, controlling for the simultaneous risk of mortality post discharge, to determine whether the average profitability of hospital service lines to which a patient was admitted was associated with the likelihood of readmission within 30 days. The sample included 12,705,933 Medicare Fee for Service discharges from 2,438 general acute care hospitals during 1997, 2001, and 2005. There was no evidence of an association between changes in average service line profitability and changes in readmission risk, even when controlling for risk of mortality. These findings are reassuring in that the profitability of patients' admissions did not affect readmission rates, and together with other evidence may suggest that readmissions are not an unambiguous quality indicator for in-hospital care.
Can We Rule Out Meningitis from Negative Jolt Accentuation? A Retrospective Cohort Study.
Sato, Ryota; Kuriyama, Akira; Luthe, Sarah Kyuragi
2017-04-01
Jolt accentuation has been considered to be the most sensitive physical finding to predict meningitis. However, there are only a few studies assessing the diagnostic accuracy of jolt accentuation. Therefore, we aimed to evaluate the diagnostic accuracy of jolt accentuation and investigate whether it can be extended to patients with mild altered mental status. We performed a single center, retrospective observational study on patients who presented to the emergency department in a Japanese tertiary care center from January 1, 2010 to March 31, 2016. Jolt accentuation evaluated in patients with fever, headache, and mild altered mental status with Glasgow Coma Scale no lower than E2 or M4 was defined as "jolt accentuation in the broad sense." Jolt accentuation evaluated in patients with fever, headache, and no altered mental status was defined as "jolt accentuation in the narrow sense." We evaluated the sensitivity and specificity in both groups. Among 118 patients, the sensitivity and specificity of jolt accentuation in the broad sense were 70.7% (95% confidence interval (CI): 58.0%-80.8%) and 36.7% (95% CI: 25.6%-49.3%). The positive likelihood ratio and negative likelihood ratio were 1.12 (95% CI: 0.87-1.44) and 0.80 (95% CI: 0.48-1.34), respectively. Among 108 patients, the sensitivity and specificity of jolt accentuation in the narrow sense were 75.0% (95% CI: 61.8%-84.8%) and 35.1% (95% CI: 24.0%-48.0%). The positive likelihood ratio and negative likelihood ratio were 1.16 (95% CI: 0.90-1.48) and 0.71 (95% CI: 0.40-1.28), respectively. Jolt accentuation itself has a limited value in the diagnosis of meningitis regardless of altered mental status. Therefore, meningitis should not be ruled out by negative jolt accentuation. © 2017 American Headache Society.
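The reported broad-sense likelihood ratios follow directly from the quoted sensitivity and specificity; a two-line check:

```python
sens, spec = 0.707, 0.367           # broad-sense figures from the abstract
print(round(sens / (1 - spec), 2))  # LR+ ~ 1.12, as reported
print(round((1 - sens) / spec, 2))  # LR- ~ 0.80, as reported
```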
NASA Astrophysics Data System (ADS)
Sembiring, J.; Jones, F.
2018-03-01
The red cell distribution width (RDW) to platelet ratio (RPR) can predict liver fibrosis and cirrhosis in chronic hepatitis B with relatively high accuracy. RPR has been reported to be superior to other non-invasive methods for predicting liver fibrosis, such as the AST to ALT ratio, the AST to platelet ratio index, and FIB-4. The aim of this study was to assess the diagnostic accuracy of the RDW to platelet ratio for liver fibrosis in chronic hepatitis B patients, compared with FibroScan. This cross-sectional study was conducted at Adam Malik Hospital from January to June 2015. We examined 34 chronic hepatitis B patients, recording RDW, platelet count, and FibroScan measurements. Data were statistically analyzed. In the ROC analysis, RPR had an accuracy of 72.3% (95% CI: 84.1% - 97%). In this study, the RPR had a moderate ability to predict fibrosis degree (p = 0.029, AUC > 70%). The cutoff value for RPR was 0.0591; sensitivity and specificity were 71.4% and 60%, positive predictive value (PPV) was 55.6%, negative predictive value (NPV) was 75%, the positive likelihood ratio was 1.79, and the negative likelihood ratio was 0.48. RPR has the ability to predict the degree of liver fibrosis in chronic hepatitis B patients with moderate accuracy.
Validation of the portable Air-Smart Spirometer
Núñez Fernández, Marta; Pallares Sanmartín, Abel; Mouronte Roibas, Cecilia; Cerdeira Domínguez, Luz; Botana Rial, Maria Isabel; Blanco Cid, Nagore; Fernández Villar, Alberto
2018-01-01
Background The Air-Smart Spirometer is the first portable device accepted by the European Community (EC) that performs spirometric measurements by a turbine mechanism and displays the results on a smartphone or a tablet. Methods In this multicenter, descriptive and cross-sectional prospective study carried out in 2 hospital centers, we compared FEV1, FVC, and the FEV1/FVC ratio measured with the Air-Smart Spirometer and a conventional spirometer, and analyzed the ability of this new portable device to detect obstruction. Patients were included over 2 consecutive months. We calculated sensitivity, specificity, positive and negative predictive values (PPV and NPV) and likelihood ratios (LR+, LR-), as well as the kappa index to evaluate the concordance between the two devices for the detection of obstruction. The agreement and relation between the values of FEV1 and FVC in absolute terms and the FEV1/FVC ratio measured by both devices were analyzed by calculating the intraclass correlation coefficient (ICC) and the Pearson correlation coefficient (r), respectively. Results 200 patients (100 from each center) were included, with a mean age of 57 (± 14) years; 110 were men (55%). Obstruction was detected by conventional spirometry in 73 patients (40.1%). Using a FEV1/FVC ratio smaller than 0.7 to detect obstruction with the Air-Smart Spirometer, the kappa index was 0.88, with a sensitivity of 90.4%, specificity of 97.2%, PPV of 95.7%, NPV of 93.7%, positive likelihood ratio of 32.29, and negative likelihood ratio of 0.10. The ICC and r between FEV1, FVC, and the FEV1/FVC ratio measured by the Air-Smart Spirometer and the conventional spirometer were all higher than 0.94. Conclusion The Air-Smart Spirometer is a simple and very precise instrument for detecting obstructive airway diseases. It is easy to use, which could make it especially useful in non-specialized care and in other settings. PMID:29474502
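As a check on the reported kappa of 0.88, the agreement table can be approximately reconstructed from the quoted counts, sensitivity, and specificity; the integer cell counts below involve our own rounding:

```python
tp, fn, fp, tn = 66, 7, 4, 123  # inferred from n=200, 73 obstructed, sens/spec above

n = tp + fn + fp + tn
p_observed = (tp + tn) / n
p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (p_observed - p_expected) / (1 - p_expected)
print(round(kappa, 2))  # ~0.88, matching the reported kappa index
```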
Risk Indicators for Periodontitis in US Adults: NHANES 2009 to 2012.
Eke, Paul I; Wei, Liang; Thornton-Evans, Gina O; Borrell, Luisa N; Borgnakke, Wenche S; Dye, Bruce; Genco, Robert J
2016-10-01
Through the use of optimal surveillance measures and standard case definitions, it is now possible to more accurately determine population-average risk profiles for severe (SP) and non-severe periodontitis (NSP) in adults (aged 30 years and older) in the United States. Data from the 2009 to 2012 National Health and Nutrition Examination Survey were used, which, for the first time, used the "gold standard" full-mouth periodontitis surveillance protocol to classify severity of periodontitis following suggested Centers for Disease Control/American Academy of Periodontology case definitions. Probabilities of periodontitis by: 1) sociodemographics, 2) behavioral factors, and 3) comorbid conditions were assessed using prevalence ratios (PRs) estimated by predicted marginal probability from multivariable generalized logistic regression models. Analyses were further stratified by sex for each classification of periodontitis. Likelihood of total periodontitis (TP) increased with age for overall and NSP relative to non-periodontitis. Compared with non-Hispanic whites, TP was more likely in Hispanics (adjusted [a]PR = 1.38; 95% confidence interval [CI]: 1.26 to 1.52) and non-Hispanic blacks (aPR = 1.35; 95% CI: 1.22 to 1.50), whereas SP was most likely in non-Hispanic blacks (aPR = 1.82; 95% CI: 1.44 to 2.31). There was at least a 50% greater likelihood of TP in current smokers compared with non-smokers. In males, likelihood of TP in adults aged 65 years and older was greater (aPR = 2.07; 95% CI: 1.76 to 2.43) than in adults aged 30 to 44 years. This probability was even greater in women (aPR = 3.15; 95% CI: 2.63 to 3.77). Likelihood of TP was higher in current smokers relative to non-smokers regardless of sex and periodontitis classification. TP was more likely in men with uncontrolled diabetes mellitus (DM) compared with adults without DM. Assessment of risk profiles for periodontitis in adults in the United States based on gold standard periodontal measures shows important differences by severity of disease and sex. Cigarette smoking, specifically current smoking, remains an important modifiable risk for all levels of periodontitis severity. Higher likelihood of TP in older adults and in males with uncontrolled DM is noteworthy. These findings could improve identification of target populations for effective public health interventions to improve periodontal health of adults in the United States.
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
Liu, Bo-Ji; Li, Dan-Dan; Xu, Hui-Xiong; Guo, Le-Hang; Zhang, Yi-Feng; Xu, Jun-Mei; Liu, Chang; Liu, Lin-Na; Li, Xiao-Long; Xu, Xiao-Hong; Qu, Shen; Xing, Mingzhao
2015-12-01
The aim of this study was to evaluate the diagnostic performance of quantitative shear wave velocity (SWV) measurement on acoustic radiation force impulse (ARFI) elastography for differentiation between benign and malignant thyroid nodules using meta-analysis. The databases of PubMed and the Web of Science were searched. Studies published in English on assessment of the sensitivity and specificity of ARFI elastography for the differentiation of thyroid nodules were collected. The quantitative measurement of ARFI elastography was evaluated by SWV (m/s). Meta-Disc Version 1.4 software was used to describe and calculate the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio and summary receiver operating characteristic curves. We analyzed a total of 13 studies, which included 1,854 thyroid nodules (including 1,339 benign nodules and 515 malignant nodules) from 1,641 patients. The summary sensitivity and specificity for differential diagnosis between benign and malignant thyroid nodules by SWV were 0.81 (95% confidence interval [CI]: 0.77-0.84) and 0.84 (95% CI: 0.81-0.86), respectively. The pooled positive and negative likelihood ratios were 5.21 (95% CI: 3.56-7.62) and 0.23 (95% CI: 0.17-0.32), respectively. The pooled diagnostic odds ratio was 27.53 (95% CI: 14.58-52.01), and the area under the summary receiver operating characteristic curve was 0.91 (Q* = 0.84). In conclusion, SWV measurement on ARFI elastography has high sensitivity and specificity for differential diagnosis between benign and malignant thyroid nodules and can be used in combination with conventional ultrasound. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Srkalović Imširagić, Azijada; Begić, Dražen; Šimičević, Livija; Bajić, Žarko
2017-02-01
Following childbirth, a vast number of women experience some degree of mood swings, while some experience symptoms of postpartum posttraumatic stress disorder. Using a biopsychosocial model, the primary aim of this study was to identify predictors of posttraumatic stress disorder and its symptomatology following childbirth. This observational, longitudinal study included 372 postpartum women. In order to explore biopsychosocial predictors, participants completed several questionnaires 3-5 days after childbirth: the Impact of Events Scale Revised, the Big Five Inventory, The Edinburgh Postnatal Depression Scale, breastfeeding practice and social and demographic factors. Six to nine weeks after childbirth, participants re-completed the questionnaires regarding psychiatric symptomatology and breastfeeding practice. Using a multivariate level of analysis, the predictors that increased the likelihood of postpartum posttraumatic stress disorder symptomatology at the first study phase were: emergency caesarean section (odds ratio 2.48; confidence interval 1.13-5.43) and neuroticism personality trait (odds ratio 1.12; confidence interval 1.05-1.20). The predictor that increased the likelihood of posttraumatic stress disorder symptomatology at the second study phase was the baseline Impact of Events Scale Revised score (odds ratio 12.55; confidence interval 4.06-38.81). Predictors that decreased the likelihood of symptomatology at the second study phase were life in a nuclear family (odds ratio 0.27; confidence interval 0.09-0.77) and life in a city (odds ratio 0.29; confidence interval 0.09-0.94). Biopsychosocial theory is applicable to postpartum psychiatric disorders. In addition to screening for depression amongst postpartum women, there is a need to include other postpartum psychiatric symptomatology screenings in routine practice. Copyright © 2016 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
Kim, Hye Jeong; Kwak, Mi Kyung; Choi, In Ho; Jin, So-Young; Park, Hyeong Kyu; Byun, Dong Won; Suh, Kyoil; Yoo, Myung Hi
2018-02-23
The aim of this study was to address the role of the elasticity index as a possible predictive marker for detecting papillary thyroid carcinoma (PTC) and quantitatively assess shear wave elastography (SWE) as a tool for differentiating PTC from benign thyroid nodules. One hundred and nineteen patients with thyroid nodules undergoing SWE before ultrasound-guided fine needle aspiration and core needle biopsy were analyzed. The mean (EMean), minimum (EMin), maximum (EMax), and standard deviation (ESD) of SWE elasticity indices were measured. Among 105 nodules, 14 were PTC and 91 were benign. The EMean, EMin, and EMax values were significantly higher in PTCs than benign nodules (EMean 37.4 in PTC vs. 23.7 in benign nodules, p = 0.005; EMin 27.9 vs. 17.8, p = 0.034; EMax 46.7 vs. 31.5, p < 0.001). The EMean, EMin, and EMax were significantly associated with PTC with diagnostic odds ratios varying from 6.74 to 9.91, high specificities (86.4%, 86.4%, and 88.1%, respectively), and positive likelihood ratios (4.21, 3.69, and 4.82, respectively). The ESD values were significantly higher in PTC than in benign nodules (6.3 vs. 2.6, p < 0.001). ESD had the highest specificity (96.6%) when applied with a cut-off value of 6.5 kPa. It had a positive likelihood ratio of 14.75 and a diagnostic odds ratio of 28.50. The shear elasticity index of ESD, with higher likelihood ratios for PTC, will probably identify nodules that have a high potential for malignancy. It may help to identify and select malignant nodules, while reducing unnecessary fine needle aspiration and core needle biopsies of benign nodules.
Shifts in the seasonal distribution of deaths in Australia, 1968-2007
NASA Astrophysics Data System (ADS)
Bennett, Charmian M.; Dear, Keith B. G.; McMichael, Anthony J.
2014-07-01
Studies in temperate countries have shown that both hot weather in summer and cold weather in winter increase short-term (daily) mortality. The gradual warming, decade on decade, that Australia has experienced since the 1960s, might therefore be expected to have differentially affected mortality in the two seasons, and thus indicate an early impact of climate change on human health. Failure to detect such a signal would challenge the widespread assumption that the effect of weather on mortality implies a similar effect of a change from the present to projected future climate. We examine the ratio of summer to winter deaths against a background of rising average annual temperatures over four decades: the ratio has increased from 0.71 to 0.86 since 1968. The same trend, albeit of varying strength, is evident in all states of Australia, in four age groups (aged 55 years and above) and in both sexes. Analysis of cause-specific mortality suggests that the change has so far been driven more by reduced winter mortality than by increased summer mortality. Furthermore, comparisons of this seasonal mortality ratio calculated in the warmest subsets of seasons in each decade, with that calculated in the coldest seasons, show that particularly warm annual conditions, which mimic the expected temperatures of future climate change, increase the likelihood of higher ratios (approaching 1:1). Overall, our results indicate that gradual climate change, as well as short-term weather variations, affect patterns of mortality.
NASA Technical Reports Server (NTRS)
Bueno, R. A.
1977-01-01
Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft applications are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found to be satisfactory, but problems may arise in correctly identifying the mode of a failure. These issues are closely examined, as is the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
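For intuition, the core of a GLR failure detector of this kind is a maximization over candidate failure-onset times of a normalized statistic on the filter innovations. The sketch below assumes the simplest failure signature, a constant bias appearing in otherwise white Gaussian residuals; the flight formulation is considerably more elaborate:

```python
import numpy as np

def glr_mean_jump(residuals, sigma2):
    """GLR statistic for a step change in the mean of white Gaussian
    innovations: max over onset times k of (sum_{t>=k} r_t)^2 / (sigma2*(n-k))."""
    n = len(residuals)
    best = 0.0
    for k in range(n - 1):
        s = residuals[k:].sum()
        best = max(best, s * s / (sigma2 * (n - k)))
    return best  # compare against a chi-square(1) threshold

rng = np.random.default_rng(0)
r = rng.normal(size=200)
r[150:] += 1.5                       # inject a bias failure at t = 150
print(glr_mean_jump(r, sigma2=1.0))  # large value -> declare a failure
```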
NASA Technical Reports Server (NTRS)
Hall, Steven R.; Walker, Bruce K.
1990-01-01
A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
NASA Astrophysics Data System (ADS)
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
Unified halo-independent formalism from convex hulls for direct dark matter searches
NASA Astrophysics Data System (ADS)
Gelmini, Graciela B.; Huh, Ji-Haeng; Witte, Samuel J.
2017-12-01
Using the Fenchel-Eggleston theorem for convex hulls (an extension of the Caratheodory theorem), we prove that any likelihood can be maximized by either (1) a dark matter speed distribution F(v) in Earth's frame or (2) a Galactic velocity distribution f_gal(u), consisting of a sum of delta functions. The former case applies only to time-averaged rate measurements, and the maximum number of delta functions is (N-1), where N is the total number of data entries. The second case applies to any harmonic expansion coefficient of the time-dependent rate, and the maximum number of terms is N. Using time-averaged rates, the aforementioned form of F(v) results in a piecewise constant unmodulated halo function η̃⁰_BF(v_min) (which is an integral of the speed distribution) with at most (N-1) downward steps. The authors had previously proven this result for likelihoods comprised of at least one extended likelihood, and found the best-fit halo function to be unique. This uniqueness, however, cannot be guaranteed in the more general analysis applied to arbitrary likelihoods. Thus we introduce a method for determining whether there exists a unique best-fit halo function, and provide a procedure for constructing either a pointwise confidence band, if the best-fit halo function is unique, or a degeneracy band, if it is not. Using measurements of modulation amplitudes, the aforementioned form of f_gal(u), which is a sum of Galactic streams, yields a periodic time-dependent halo function η̃_BF(v_min, t) which at any fixed time is a piecewise constant function of v_min with at most N downward steps. In this case, we explain how to construct pointwise confidence and degeneracy bands from the time-averaged halo function. Finally, we show that requiring an isotropic Galactic velocity distribution leads to a Galactic speed distribution F(u) that is once again a sum of delta functions, and produces a time-dependent η̃_BF(v_min, t) function (and a time-averaged η̃⁰_BF(v_min)) that is piecewise linear, differing significantly from best-fit halo functions obtained without the assumption of isotropy.
A soft-hard combination-based cooperative spectrum sensing scheme for cognitive radio networks.
Do, Nhu Tri; An, Beongku
2015-02-13
In this paper we propose a soft-hard combination scheme, called the SHC scheme, for cooperative spectrum sensing in cognitive radio networks. The SHC scheme deploys a cluster-based network in which likelihood ratio test (LRT)-based soft combination is applied at each cluster, and a weighted decision fusion rule-based hard combination is utilized at the fusion center. The novelties of the SHC scheme are as follows: the structure of the SHC scheme reduces the complexity of cooperative detection, which is an inherent limitation of soft combination schemes. By using the LRT, we can detect primary signals in a low signal-to-noise ratio regime (around an average of -15 dB). In addition, the computational complexity of the LRT is reduced since we derive the closed-form expression of the probability density function of the LRT value. The SHC scheme also takes into account the different effects of large-scale fading on different users in the wide area network. The simulation results show that the SHC scheme not only provides better sensing performance compared to the conventional hard combination schemes, but also reduces sensing overhead in terms of reporting time compared to the conventional soft combination scheme using the LRT.
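As background, the per-sample LRT for detecting a zero-mean Gaussian primary signal in Gaussian noise reduces to a weighted energy statistic. A minimal sketch, assuming all variances are known; the paper's cluster aggregation and fusion rule are not reproduced here:

```python
import numpy as np

def llr_gaussian_energy(y, noise_var, signal_var):
    """Per-sample log-likelihood ratio for H1: y ~ N(0, noise+signal)
    versus H0: y ~ N(0, noise); summing over samples gives the
    cluster-level soft statistic."""
    v0, v1 = noise_var, noise_var + signal_var
    return 0.5 * (np.log(v0 / v1) + y ** 2 * (1.0 / v0 - 1.0 / v1))

samples = np.random.default_rng(1).normal(size=1000)  # noise-only data
print(llr_gaussian_energy(samples, noise_var=1.0, signal_var=0.03).sum())
```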
Achieving serum urate goal: a comparative effectiveness study between allopurinol and febuxostat.
Hatoum, Hind; Khanna, Dinesh; Lin, Swu-Jane; Akhras, Kasem S; Shiozawa, Aki; Khanna, Puja
2014-03-01
Febuxostat is recommended as 1 of 2 first-line urate-lowering therapies (ULT) for treating gout in the 2012 American College of Rheumatology Guidelines. Several efficacy trials have compared febuxostat with allopurinol treatment, but real-world comparative data are limited. We compared the effectiveness of the 2 agents in reaching the serum urate (sUA) goal (< 6 mg/dL) within 6 months (main endpoint), factors affecting the likelihood of reaching goal, and outcomes in allopurinol patients who were switched to febuxostat therapy after failing to reach the sUA goal. Data from the General Electric Electronic Medical Record database on adult patients with newly diagnosed gout who had started treatment with allopurinol or febuxostat in 2009 or thereafter were analyzed. Descriptive statistics, bivariate analyses, and logistic regressions were used. Allopurinol (n = 17 199) and febuxostat (n = 1190) patients had a mean ± standard deviation (SD) age of 63.7 (± 13.37) years; most patients were men and white. Average daily medication doses (mg) in the first 6 months were 184.9 ± 96.7 and 48.4 ± 15.8 for allopurinol- and febuxostat-treated patients, respectively; 4.8% of allopurinol-treated patients switched to febuxostat, whereas 25.7% of febuxostat-treated patients switched to allopurinol. Febuxostat patients had lower estimated glomerular filtration rates and were more likely to have diabetes mellitus or tophi at baseline (P < 0.05); 29.2% and 42.2% of patients in the allopurinol and febuxostat groups, respectively, achieved goal sUA levels (P < 0.0001). Febuxostat was significantly more effective in patients reaching the sUA goal (adjusted odds ratio, 1.73; 95% CI, 1.48-2.01). Older patients and women had a greater likelihood of reaching the sUA goal; however, patients with higher Charlson Comorbidity Index scores, black patients, and those with estimated glomerular filtration rates between 15 and 60 mL/min had a reduced likelihood of attaining goal (P < 0.05). Among allopurinol-treated patients who were switched to febuxostat after failing to reach goal, 244 (48.3%) reached goal on febuxostat (median = 62.5 days), with an average 39% sUA reduction achieved within 6 months. Patients who did not reach goal had a 14.3% sUA reduction. These real-life data support the effectiveness of febuxostat in managing patients with gout.
76 FR 18221 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-01
... Ratio Standard for a State's Individual Market; Use: Under section 2718 of the Public Health Service Act... data allows for the calculation of an issuer's medical loss ratio (MLR) by market (individual, small... whether market destabilization has a high likelihood of occurring. Form Number: CMS-10361 (OMB Control No...
Bivariate categorical data analysis using normal linear conditional multinomial probability model.
Sun, Bingrui; Sutradhar, Brajendra
2015-02-10
Bivariate multinomial data, such as the left and right eye retinopathy status data, are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities that are complicated functions of the marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretation of the correlations between the two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.
How to get statistically significant effects in any ERP experiment (and why you shouldn't).
Luck, Steven J; Gaspelin, Nicholas
2017-01-01
ERP experiments generate massive datasets, often containing thousands of values for each participant, even after averaging. The richness of these datasets can be very useful in testing sophisticated hypotheses, but this richness also creates many opportunities to obtain effects that are statistically significant but do not reflect true differences among groups or conditions (bogus effects). The purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant but bogus effects, with the likelihood of obtaining at least one such bogus effect exceeding 50% in many experiments. We focus on two specific problems: using the grand-averaged data to select the time windows and electrode sites for quantifying component amplitudes and latencies, and using one or more multifactor statistical analyses. Reanalyses of prior data and simulations of typical experimental designs are used to show how these problems can greatly increase the likelihood of significant but bogus results. Several strategies are described for avoiding these problems and for increasing the likelihood that significant effects actually reflect true differences among groups or conditions. © 2016 Society for Psychophysiological Research.
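The headline figure here (a greater than 50% chance of at least one bogus effect) follows directly from the multiplicity of implicit comparisons. A worked illustration, assuming independent tests at α = 0.05:

```python
# Familywise false-positive probability across m independent tests:
# P(at least one significant result under the null) = 1 - (1 - alpha)^m.
alpha = 0.05
for m in (1, 5, 14, 20, 50):
    print(m, round(1 - (1 - alpha) ** m, 3))
# m = 14 already exceeds 50% (~0.512); real ERP analyses can be worse
# because window/site selection is tuned on the observed data.
```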
How to Get Statistically Significant Effects in Any ERP Experiment (and Why You Shouldn’t)
Luck, Steven J.; Gaspelin, Nicholas
2016-01-01
Event-related potential (ERP) experiments generate massive data sets, often containing thousands of values for each participant, even after averaging. The richness of these data sets can be very useful in testing sophisticated hypotheses, but this richness also creates many opportunities to obtain effects that are statistically significant but do not reflect true differences among groups or conditions (bogus effects). The purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant-but-bogus effects, with the likelihood of obtaining at least one such bogus effect exceeding 50% in many experiments. We focus on two specific problems: using the grand average data to select the time windows and electrode sites for quantifying component amplitudes and latencies, and using one or more multi-factor statistical analyses. Re-analyses of prior data and simulations of typical experimental designs are used to show how these problems can greatly increase the likelihood of significant-but-bogus results. Several strategies are described for avoiding these problems and for increasing the likelihood that significant effects actually reflect true differences among groups or conditions. PMID:28000253
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Silveira, Maria J; Copeland, Laurel A; Feudtner, Chris
2006-07-01
We tested whether local cultural and social values regarding the use of health care are associated with the likelihood of home death, using variation in local rates of home births as a proxy for geographic variation in these values. For each of 351,110 adult decedents in Washington state who died from 1989 through 1998, we calculated the home birth rate in the decedent's zip code during the year of death and then used multivariate regression modeling to estimate the relation between the likelihood of home death and the local rate of home births. Individuals residing in local areas with higher home birth rates had a greater adjusted likelihood of dying at home (odds ratio [OR] = 1.04 for each percentage point increase in home birth rate; 95% confidence interval [CI] = 1.03, 1.05). Moreover, the likelihood of dying at home increased with local wealth (OR = 1.04 per $10,000; 95% CI = 1.02, 1.06) but decreased with local hospital bed availability (OR = 0.96 per 1000 beds; 95% CI = 0.95, 0.97). The likelihood of home death is associated with local rates of home births, suggesting the influence of health care use preferences.
Handwriting individualization using distance and rarity
NASA Astrophysics Data System (ADS)
Tang, Yi; Srihari, Sargur; Srinivasan, Harish
2012-01-01
Forensic individualization is the task of associating observed evidence with a specific source. The likelihood ratio (LR) is a quantitative measure that expresses the degree of uncertainty in individualization, where the numerator represents the likelihood that the evidence corresponds to the known and the denominator the likelihood that it does not correspond to the known. Since the number of parameters needed to compute the LR is exponential with the number of feature measurements, a commonly used simplification is the use of likelihoods based on distance (or similarity) given the two alternative hypotheses. This paper proposes an intermediate method which decomposes the LR as the product of two factors, one based on distance and the other on rarity. It was evaluated using a data set of handwriting samples, by determining whether two writing samples were written by the same/different writer(s). The accuracy of the distance and rarity method, as measured by error rates, is significantly better than the distance method.
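The factorization described above can be sketched numerically: the distance term compares how typical the observed dissimilarity is among same-source versus different-source pairs, and the rarity term up-weights matches on uncommon features. A minimal sketch, assuming kernel density estimates are adequate for the distance distributions and that `feature_prob` (the population probability of the shared characteristics) is supplied; this is an illustration, not the paper's exact estimator:

```python
import numpy as np
from scipy.stats import gaussian_kde

def lr_distance_rarity(dist, same_dists, diff_dists, feature_prob):
    """LR factored as (distance term) x (rarity term):
    p(dist | same source) / p(dist | different sources) * 1 / P(features)."""
    p_same = gaussian_kde(same_dists)(dist)[0]
    p_diff = gaussian_kde(diff_dists)(dist)[0]
    return (p_same / p_diff) / feature_prob

rng = np.random.default_rng(0)
same = rng.normal(0.2, 0.05, 500)   # distances among same-writer pairs
diff = rng.normal(0.6, 0.10, 500)   # distances among different-writer pairs
print(lr_distance_rarity(0.25, same, diff, feature_prob=0.1))
```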
Comparison between presepsin and procalcitonin in early diagnosis of neonatal sepsis.
Iskandar, Agustin; Arthamin, Maimun Z; Indriana, Kristin; Anshory, Muhammad; Hur, Mina; Di Somma, Salvatore
2018-05-09
Neonatal sepsis remains one of the leading causes of morbidity and mortality worldwide in both term and preterm infants. Lower mortality rates are related to timely diagnostic evaluation and prompt initiation of empiric antibiotic therapy. Blood culture, the gold standard examination for sepsis, has several limitations for early diagnosis, so sepsis biomarkers could play an important role in this regard. This study aimed to compare the value of the two biomarkers presepsin and procalcitonin in the early diagnosis of neonatal sepsis. This was a prospective cross-sectional study performed in Saiful Anwar General Hospital, Malang, Indonesia, in 51 neonates who fulfilled the criteria for systemic inflammatory response syndrome (SIRS), with blood culture as the diagnostic gold standard for sepsis. In receiver operating characteristic (ROC) curve analyses, a presepsin cutoff of 706.5 pg/mL yielded: sensitivity = 85.7%, specificity = 68.8%, positive predictive value = 85.7%, negative predictive value = 68.8%, positive likelihood ratio = 2.75, negative likelihood ratio = 0.21, and accuracy = 80.4%. On the other hand, a procalcitonin cutoff value of 161.33 pg/mL yielded: sensitivity = 68.6%, specificity = 62.5%, positive predictive value = 80%, negative predictive value = 47.6%, positive likelihood ratio = 1.83, negative likelihood ratio = 0.5, and accuracy = 66.7%. In the early diagnosis of neonatal sepsis, compared with procalcitonin, presepsin seems to provide better early diagnostic value, with consequent possible faster therapeutic decision making and a possible positive impact on the outcome of neonates.
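Likelihood ratios such as these are applied via Bayes' theorem on the odds scale, which is exactly the calculation a Fagan nomogram performs graphically. A minimal sketch, with the 40% pretest probability chosen purely for illustration:

```python
def posttest_probability(pretest_p, lr):
    """Posttest probability from pretest probability and a likelihood
    ratio: posttest odds = pretest odds x LR."""
    pre_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Illustrative 40% pretest probability of sepsis:
print(posttest_probability(0.40, 2.75))  # positive presepsin -> ~0.65
print(posttest_probability(0.40, 0.21))  # negative presepsin -> ~0.12
```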
The effect of rare variants on inflation of the test statistics in case-control analyses.
Pirie, Ailith; Wood, Angela; Lush, Michael; Tyrer, Jonathan; Pharoah, Paul D P
2015-02-20
The detection of bias due to cryptic population structure is an important step in the evaluation of findings of genetic association studies. The standard method of measuring this bias in a genetic association study is to compare the observed median association test statistic to the expected median test statistic. This ratio is inflated in the presence of cryptic population structure. However, inflation may also be caused by the properties of the association test itself particularly in the analysis of rare variants. We compared the properties of the three most commonly used association tests: the likelihood ratio test, the Wald test and the score test when testing rare variants for association using simulated data. We found evidence of inflation in the median test statistics of the likelihood ratio and score tests for tests of variants with less than 20 heterozygotes across the sample, regardless of the total sample size. The test statistics for the Wald test were under-inflated at the median for variants below the same minor allele frequency. In a genetic association study, if a substantial proportion of the genetic variants tested have rare minor allele frequencies, the properties of the association test may mask the presence or absence of bias due to population structure. The use of either the likelihood ratio test or the score test is likely to lead to inflation in the median test statistic in the absence of population structure. In contrast, the use of the Wald test is likely to result in under-inflation of the median test statistic which may mask the presence of population structure.
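The inflation measure referred to here is the ratio of the observed median test statistic to the median of its null chi-square distribution (the genomic-control lambda). A minimal sketch of that comparison, assuming 1-df statistics:

```python
import numpy as np
from scipy.stats import chi2

def inflation_factor(test_stats):
    """Genomic-control lambda: observed median statistic divided by the
    expected median of chi-square(1), approximately 0.4549."""
    return np.median(test_stats) / chi2.ppf(0.5, df=1)

# Under a well-calibrated null, lambda should be ~1:
null_stats = chi2.rvs(df=1, size=100_000, random_state=0)
print(inflation_factor(null_stats))
```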
Application of an Elongated Kelvin Model to Space Shuttle Foams
NASA Technical Reports Server (NTRS)
Sullivan, Roy M.; Ghosn, Louis J.; Lerch, Bradley A.
2008-01-01
Spray-on foam insulation is applied to the exterior of the Space Shuttle's External Tank to limit propellant boil-off and to prevent ice formation. The Space Shuttle foams are rigid closed-cell polyurethane foams. The two foams used most extensively on the Space Shuttle External Tank are BX-265 and NCFI24-124. Since the catastrophic loss of the Space Shuttle Columbia, numerous studies have been conducted to mitigate the likelihood and the severity of foam shedding during the Shuttle's ascent to space. Due to the foaming and rising process, the foam microstructures are elongated in the rise direction. As a result, these two foams exhibit non-isotropic mechanical behavior. In this paper, a detailed microstructural characterization of the two foams is presented. The key features of the foam cells are summarized and the average cell dimensions in the two foams are compared. Experimental studies to measure the room temperature mechanical response of the two foams in the two principal material directions (parallel to the rise and perpendicular to the rise) are also reported. The measured elastic modulus, proportional limit stress, ultimate tensile stress and the Poisson's ratios for the two foams are compared. The generalized elongated Kelvin foam model previously developed by the authors is reviewed and the equations which result from this model are presented. The resulting equations show that the ratio of the elastic modulus in the rise direction to that in the perpendicular-to-rise direction, as well as the ratio of the strengths in the two material directions, is only a function of the microstructural dimensions. Using the measured microstructural dimensions and the measured stiffness ratio, the foam tensile strength ratio and Poisson's ratios are predicted for both foams. The predicted tensile strength ratio is in close agreement with the measured strength ratios for both BX-265 and NCFI24-124. The comparison between the predicted Poisson's ratios and the measured values is not as favorable.
Ermertcan, Aylin Türel; Oztürk, Ferdi; Gençoğlan, Gülsüm; Eskiizmir, Görkem; Temiz, Peyker; Horasan, Gönül Dinç
2011-03-01
The precision of clinical diagnosis of skin tumors is not commonly measured and, therefore, very little is known about the diagnostic ability of clinicians. This study aimed to compare clinical and histopathologic diagnoses of nonmelanoma skin cancers with regard to sensitivity, predictive values, pretest-posttest probabilities, and likelihood ratios. Two hundred nineteen patients with 241 nonmelanoma skin cancers were enrolled in this study. Of these patients, 49.4% were female and 50.6% were male. The mean age ± standard deviation (SD) was 63.66 ± 16.44 years for the female patients and 64.77 ± 14.88 years for the male patients. The mean duration of the lesions was 20.90 ± 32.95 months. One hundred forty-eight (61.5%) of the lesions were diagnosed as basal cell carcinoma (BCC) and 93 (38.5%) were diagnosed as squamous cell carcinoma (SCC) histopathologically. Sensitivity, positive predictive value, and posttest probability were calculated as 75.96%, 87.77%, and 87.78% for BCC and 70.37%, 37.25%, and 37.20% for SCC, respectively. The correlation between clinical and histopathologic diagnoses was found to be higher in BCC. Knowledge of sensitivity, predictive values, likelihood ratios, and posttest probabilities may have implications for the management of skin cancers. To prevent unnecessary surgeries and achieve high diagnostic accuracies, multidisciplinary approaches are recommended.
Identifying common donors in DNA mixtures, with applications to database searches.
Slooten, K
2017-01-01
Several methods exist to compute the likelihood ratio LR(M, g) evaluating the possible contribution of a person of interest with genotype g to a mixed trace M. In this paper we generalize this LR to a likelihood ratio LR(M1, M2) involving two possibly mixed traces M1 and M2, where the question is whether there is a donor in common to both traces. In case one of the traces is in fact a single genotype, this likelihood ratio reduces to the usual LR(M, g). We explain how our method conceptually is a logical consequence of the fact that LR calculations of the form LR(M, g) can be equivalently regarded as a probabilistic deconvolution of the mixture. Based on simulated data, and using a semi-continuous mixture evaluation model, we derive ROC curves of our method applied to various types of mixtures. From these data we conclude that searches for a common donor are often feasible in the sense that a very small false positive rate can be combined with a high probability to detect a common donor if there is one. We also show how database searches comparing all traces to each other can be carried out efficiently, as illustrated by the application of the method to the mixed traces in the Dutch DNA database. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Analysis of case-parent trios at a locus with a deletion allele: association of GSTM1 with autism.
Buyske, Steven; Williams, Tanishia A; Mars, Audrey E; Stenroos, Edward S; Ming, Sue X; Wang, Rong; Sreenath, Madhura; Factura, Marivic F; Reddy, Chitra; Lambert, George H; Johnson, William G
2006-02-10
Certain loci on the human genome, such as glutathione S-transferase M1 (GSTM1), do not permit heterozygotes to be reliably determined by commonly used methods. Association of such a locus with a disease is therefore generally tested with a case-control design. When subjects have already been ascertained in a case-parent design however, the question arises as to whether the data can still be used to test disease association at such a locus. A likelihood ratio test was constructed that can be used with a case-parents design but has somewhat less power than a Pearson's chi-squared test that uses a case-control design. The test is illustrated on a novel dataset showing a genotype relative risk near 2 for the homozygous GSTM1 deletion genotype and autism. Although the case-control design will remain the mainstay for a locus with a deletion, the likelihood ratio test will be useful for such a locus analyzed as part of a larger case-parent study design. The likelihood ratio test has the advantage that it can incorporate complete and incomplete case-parent trios as well as independent cases and controls. Both analyses support (p = 0.046 for the proposed test, p = 0.028 for the case-control analysis) an association of the homozygous GSTM1 deletion genotype with autism.
Analysis of case-parent trios at a locus with a deletion allele: association of GSTM1 with autism
Buyske, Steven; Williams, Tanishia A; Mars, Audrey E; Stenroos, Edward S; Ming, Sue X; Wang, Rong; Sreenath, Madhura; Factura, Marivic F; Reddy, Chitra; Lambert, George H; Johnson, William G
2006-01-01
Background Certain loci on the human genome, such as glutathione S-transferase M1 (GSTM1), do not permit heterozygotes to be reliably determined by commonly used methods. Association of such a locus with a disease is therefore generally tested with a case-control design. When subjects have already been ascertained in a case-parent design however, the question arises as to whether the data can still be used to test disease association at such a locus. Results A likelihood ratio test was constructed that can be used with a case-parents design but has somewhat less power than a Pearson's chi-squared test that uses a case-control design. The test is illustrated on a novel dataset showing a genotype relative risk near 2 for the homozygous GSTM1 deletion genotype and autism. Conclusion Although the case-control design will remain the mainstay for a locus with a deletion, the likelihood ratio test will be useful for such a locus analyzed as part of a larger case-parent study design. The likelihood ratio test has the advantage that it can incorporate complete and incomplete case-parent trios as well as independent cases and controls. Both analyses support (p = 0.046 for the proposed test, p = 0.028 for the case-control analysis) an association of the homozygous GSTM1 deletion genotype with autism. PMID:16472391
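The likelihood ratio test used in these trio analyses follows the generic Wilks construction: twice the log-likelihood difference between nested models is referred to a chi-square distribution. A minimal sketch of that generic machinery (not the paper's specific trio likelihood); the log-likelihood values are hypothetical, chosen so the p-value lands near the reported 0.046:

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_null, loglik_alt, df=1):
    """Wilks LRT for nested models: 2*(l_alt - l_null) ~ chi-square(df)
    under the null hypothesis; returns the statistic and p-value."""
    stat = 2.0 * (loglik_alt - loglik_null)
    return stat, chi2.sf(stat, df)

# Hypothetical log-likelihoods for illustration:
print(likelihood_ratio_test(-1042.3, -1040.3))  # stat = 4.0, p ~ 0.046
```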
Yang, Ji; Gu, Hongya; Yang, Ziheng
2004-01-01
Chalcone synthase (CHS) is a key enzyme in the biosynthesis of flavonoids, which are important for the pigmentation of flowers and act as attractants to pollinators. Genes encoding CHS constitute a multigene family in which the copy number varies among plant species and functional divergence appears to have occurred repeatedly. In morning glories (Ipomoea), five functional CHS genes (A-E) have been described. Phylogenetic analysis of the Ipomoea CHS gene family revealed that CHS A, B, and C experienced accelerated rates of amino acid substitution relative to CHS D and E. To examine whether the CHS genes of the morning glories underwent adaptive evolution, maximum-likelihood models of codon substitution were used to analyze the functional sequences in the Ipomoea CHS gene family. These models used the nonsynonymous/synonymous rate ratio (ω = dN/dS) as an indicator of selective pressure and allowed the ratio to vary among lineages or sites. Likelihood ratio tests suggested significant variation in selection pressure among amino acid sites, with a small proportion of them detected to be under positive selection along the branches ancestral to CHS A, B, and C. Positive Darwinian selection appears to have promoted the divergence of subfamily ABC and subfamily DE and is at least partially responsible for a rate increase following gene duplication.
The Diagnostic Accuracy of Special Tests for Rotator Cuff Tear: The ROW Cohort Study
Jain, Nitin B.; Luz, Jennifer; Higgins, Laurence D.; Dong, Yan; Warner, Jon J.P.; Matzkin, Elizabeth; Katz, Jeffrey N.
2016-01-01
Objective The aim was to assess diagnostic accuracy of 15 shoulder special tests for rotator cuff tears. Design From 02/2011 to 12/2012, 208 participants with shoulder pain were recruited in a cohort study. Results Among tests for supraspinatus tears, Jobe’s test had a sensitivity of 88% (95% CI=80% to 96%), specificity of 62% (95% CI=53% to 71%), and likelihood ratio of 2.30 (95% CI=1.79 to 2.95). The full can test had a sensitivity of 70% (95% CI=59% to 82%) and a specificity of 81% (95% CI=74% to 88%). Among tests for infraspinatus tears, external rotation lag signs at 0° had a specificity of 98% (95% CI=96% to 100%) and a likelihood ratio of 6.06 (95% CI=1.30 to 28.33), and the Hornblower’s sign had a specificity of 96% (95% CI=93% to 100%) and likelihood ratio of 4.81 (95% CI=1.60 to 14.49). Conclusions Jobe’s test and full can test had high sensitivity and specificity for supraspinatus tears and Hornblower’s sign performed well for infraspinatus tears. In general, special tests described for subscapularis tears have high specificity but low sensitivity. These data can be used in clinical practice to diagnose rotator cuff tears and may reduce the reliance on expensive imaging. PMID:27386812
Bayesian Hierarchical Random Effects Models in Forensic Science.
Aitken, Colin G G
2018-01-01
Statistical modeling of the evaluation of evidence with the use of the likelihood ratio has a long history. It dates from the Dreyfus case at the end of the nineteenth century, through the work at Bletchley Park in the Second World War, to the present day. The development received a significant boost in 1977 with a seminal work by Dennis Lindley, which introduced a Bayesian hierarchical random effects model for the evaluation of evidence with an example of refractive index measurements on fragments of glass. Many models have been developed since then. The methods have now become sufficiently well-developed and widespread that it is timely to provide a software package to assist in their implementation. With that in mind, a project (SAILR: Software for the Analysis and Implementation of Likelihood Ratios) was funded by the European Network of Forensic Science Institutes through their Monopoly programme to develop a software package, for use by forensic scientists worldwide, that would assist in the statistical analysis and implementation of the approach based on likelihood ratios. It is the purpose of this document to provide a short review of a small part of this history. The review also provides a background, or landscape, for the development of some of the models within the SAILR package; references to SAILR are made as appropriate.
Pessimistic orientation in relation to telomere length in older men: the VA Normative Aging Study
Ikeda, Ai; Schwartz, Joel; Peters, Junenette L.; Baccarelli, Andrea A.; Hoxha, Mirjam; Dioni, Laura; Spiro, Avron; Sparrow, David; Vokonas, Pantel; Kubzansky, Laura D.
2014-01-01
Background Recent research suggests pessimistic orientation is associated with shorter leukocyte telomere length (LTL). However, this is the first study to look not only at effects of pessimistic orientation on average LTL at multiple time points, but also at effects on the rate of change in LTL over time. Methods Participants were older men from the VA Normative Aging Study (n=490). The Life Orientation Test (LOT) was used to measure optimistic and pessimistic orientations at study baseline, and relative LTL by telomere to single copy gene ratio (T:S ratio) was obtained repeatedly over the course of the study (1999-2008). A total of 1,010 observations were included in the analysis. Linear mixed effect models with a random subject intercept were used to estimate associations. Results Higher pessimistic orientation scores were associated with shorter average LTL (percent difference by 1-SD increase in pessimistic orientation (95% CI): -3.08 (-5.62, -0.46)), and the finding was maintained after adjusting for the higher likelihood that healthier individuals return for follow-up visits (-3.44 (-5.95,-0.86)). However, pessimistic orientation scores were not associated with rate of change in LTL over time. No associations were found between overall optimism and optimistic orientation subscale scores and LTL. Conclusion Higher pessimistic orientation scores were associated with shorter LTL in older men. While there was no evidence that pessimistic orientation was associated with rate of change in LTL over time, higher levels of pessimistic orientation were associated with shorter LTL at baseline and this association persisted over time. PMID:24636503
Automatic classification of retinal vessels into arteries and veins
NASA Astrophysics Data System (ADS)
Niemeijer, Meindert; van Ginneken, Bram; Abràmoff, Michael D.
2009-02-01
Separating the retinal vascular tree into arteries and veins is important for quantifying vessel changes that preferentially affect either the veins or the arteries. For example, the ratio of arterial to venous diameter, the retinal a/v ratio, is well established to be predictive of stroke and other cardiovascular events in adults, as well as the staging of retinopathy of prematurity in premature infants. This work presents a supervised, automatic method that can determine whether a vessel is an artery or a vein based on intensity and derivative information. After thinning of the vessel segmentation, vessel crossing and bifurcation points are removed, leaving a set of vessel segments containing centerline pixels. A set of features is extracted from each centerline pixel, and using these each pixel is assigned a soft label indicating the likelihood that it is part of a vein. As all centerline pixels in a connected segment should be of the same type, we average the soft labels and assign this average label to each centerline pixel in the segment. We train and test the algorithm using the data (40 color fundus photographs) from the DRIVE database with an enhanced reference standard. In the enhanced reference standard, a fellowship-trained retinal specialist (MDA) labeled all vessels for which it was possible to visually determine whether it was a vein or an artery. After applying the proposed method to the 20 images of the DRIVE test set, we obtained an area under the receiver operating characteristic (ROC) curve of 0.88 for correctly assigning centerline pixels to either the vein or artery classes.
Cobb, Nathan K; Mays, Darren; Graham, Amanda L
2013-12-01
Social networks are a prominent component of online smoking cessation interventions. This study applied sentiment analysis-a data processing technique that codes textual data for emotional polarity-to examine how exposure to messages about the cessation drug varenicline affects smokers' decision making around its use. Data were from QuitNet, an online social network dedicated to smoking cessation and relapse prevention. Self-reported medication choice at registration and at 30 days was coded among new QuitNet registrants who participated in at least one forum discussion mentioning varenicline between January 31, 2005 and March 9, 2008. Commercially available software was used to code the sentiment of forum messages mentioning varenicline that occurred during this time frame. Logistic regression analyses examined whether forum message exposure predicted medication choice. The sample of 2132 registrants comprised mostly women (78.3%), white participants (83.4%), averaged 41.2 years of age (SD = 10.9), and smoked on average 21.5 (SD = 9.7) cigarettes/day. After adjusting for potential confounders, as exposure to positive varenicline messages outweighed negative messages, the odds of switching to varenicline (odds ratio = 2.05, 95% confidence interval = 1.66 to 2.54) and continuing to use varenicline (odds ratio = 2.46, 95% confidence interval = 1.96 to 3.10) statistically significantly increased. Sentiment analysis is a useful tool for analyzing text-based data to examine their impact on behavior change. Greater exposure to positive sentiment in online conversations about varenicline is associated with a greater likelihood that smokers will choose to use varenicline in a quit attempt.
van Diedenhoven, Bastiaan; Ackerman, Andrew S.; Fridlind, Ann M.; Cairns, Brian
2017-01-01
The use of ensemble-average values of aspect ratio and distortion parameter of hexagonal ice prisms for the estimation of ensemble-average scattering asymmetry parameters is evaluated. Using crystal aspect ratios greater than unity generally leads to ensemble-average values of aspect ratio that are inconsistent with the ensemble-average asymmetry parameters. When a definition of aspect ratio is used that limits the aspect ratio to below unity (α≤1) for both hexagonal plates and columns, the effective asymmetry parameters calculated using ensemble-average aspect ratios are generally consistent with ensemble-average asymmetry parameters, especially if aspect ratios are geometrically averaged. Ensemble-average distortion parameters generally also yield effective asymmetry parameters that are largely consistent with ensemble-average asymmetry parameters. In the case of mixtures of plates and columns, it is recommended to geometrically average the α≤1 aspect ratios and to subsequently calculate the effective asymmetry parameter using a column or plate geometry when the contribution by columns to a given mixture’s total projected area is greater or lower than 50%, respectively. In addition, we show that ensemble-average aspect ratios, distortion parameters and asymmetry parameters can generally be retrieved accurately from simulated multi-directional polarization measurements based on mixtures of varying columns and plates. However, such retrievals tend to be somewhat biased toward yielding column-like aspect ratios. Furthermore, generally large retrieval errors can occur for mixtures with approximately equal contributions of columns and plates and for ensembles with strong contributions of thin plates. PMID:28983127
Golden, Sean K; Harringa, John B; Pickhardt, Perry J; Ebinger, Alexander; Svenson, James E; Zhao, Ying-Qi; Li, Zhanhai; Westergaard, Ryan P; Ehlenbach, William J; Repplinger, Michael D
2016-07-01
To determine whether clinical scoring systems or physician gestalt can obviate the need for computed tomography (CT) in patients with possible appendicitis. Prospective, observational study of patients with abdominal pain at an academic emergency department (ED) from February 2012 to February 2014. Patients over 11 years old who had a CT ordered for possible appendicitis were eligible. All parameters needed to calculate the scores were recorded on standardised forms prior to CT. Physicians also estimated the likelihood of appendicitis. Test characteristics were calculated using clinical follow-up as the reference standard. Receiver operating characteristic curves were drawn. Of the 287 patients (mean age (range), 31 (12-88) years; 60% women), the prevalence of appendicitis was 33%. The Alvarado score had a positive likelihood ratio (LR(+)) (95% CI) of 2.2 (1.7 to 3) and a negative likelihood ratio (LR(-)) of 0.6 (0.4 to 0.7). The modified Alvarado score (MAS) had LR(+) 2.4 (1.6 to 3.4) and LR(-) 0.7 (0.6 to 0.8). The Raja Isteri Pengiran Anak Saleha Appendicitis (RIPASA) score had LR(+) 1.3 (1.1 to 1.5) and LR(-) 0.5 (0.4 to 0.8). Physician-determined likelihood of appendicitis had LR(+) 1.3 (1.2 to 1.5) and LR(-) 0.3 (0.2 to 0.6). When combined with physician likelihoods, LR(+) and LR(-) was 3.67 and 0.48 (Alvarado), 2.33 and 0.45 (RIPASA), and 3.87 and 0.47 (MAS). The area under the curve was highest for physician-determined likelihood (0.72), but was not statistically significantly different from the clinical scores (RIPASA 0.67, Alvarado 0.72, MAS 0.7). Clinical scoring systems performed equally well as physician gestalt in predicting appendicitis. These scores do not obviate the need for imaging for possible appendicitis when a physician deems it necessary. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Combining evidence using likelihood ratios in writer verification
NASA Astrophysics Data System (ADS)
Srihari, Sargur; Kovalenko, Dimitry; Tang, Yi; Ball, Gregory
2013-01-01
Forensic identification is the task of determining whether or not observed evidence arose from a known source. It involves determining a likelihood ratio (LR): the ratio of the joint probability of the evidence and source under the identification hypothesis (that the evidence came from the source) and under the exclusion hypothesis (that the evidence did not arise from the source). In LR-based decision methods, particularly handwriting comparison, a variable number of input evidences is used. A decision based on many pieces of evidence can result in nearly the same LR as one based on few pieces of evidence. We consider methods for distinguishing between such situations. One of these is to provide confidence intervals together with the decisions, and another is to combine the inputs using weights. We propose a new method that generalizes the Bayesian approach and uses an explicitly defined discount function. Empirical evaluation with several data sets, including synthetically generated ones and handwriting comparison, shows greater flexibility of the proposed method.
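Under an independence assumption, combining evidence multiplies the individual LRs, so log-LRs add; weighting the terms is one simple way to discount less reliable inputs. The sketch below shows that additive combination only; the paper's explicitly defined discount function is not reproduced here:

```python
import numpy as np

def combined_log_lr(log_lrs, weights=None):
    """Combine per-evidence log-likelihood ratios by weighted addition
    (equal weights reduce to the independent-evidence product rule)."""
    log_lrs = np.asarray(log_lrs, dtype=float)
    w = np.ones_like(log_lrs) if weights is None else np.asarray(weights)
    return float(np.sum(w * log_lrs))

# Three pieces of evidence, the third judged half as reliable:
print(combined_log_lr([1.2, 0.8, 2.0], weights=[1.0, 1.0, 0.5]))
```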
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
ERIC Educational Resources Information Center
Suh, Youngsuk; Talley, Anna E.
2015-01-01
This study compared and illustrated four differential distractor functioning (DDF) detection methods for analyzing multiple-choice items. The log-linear approach, two item response theory-model-based approaches with likelihood ratio tests, and the odds ratio approach were compared to examine the congruence among the four DDF detection methods.…
PBOOST: a GPU-based tool for parallel permutation tests in genome-wide association studies.
Yang, Guangyuan; Jiang, Wei; Yang, Qiang; Yu, Weichuan
2015-05-01
The importance of testing associations allowing for interactions has been demonstrated by Marchini et al. (2005). A fast method detecting associations allowing for interactions has been proposed by Wan et al. (2010a). The method is based on the likelihood ratio test with the assumption that the statistic follows the χ² distribution. Many single nucleotide polymorphism (SNP) pairs with significant associations allowing for interactions have been detected using their method. However, the assumption of the χ² test requires the expected values in each cell of the contingency table to be at least five. This assumption is violated in some identified SNP pairs. In this case, the likelihood ratio test may not be applicable any more. The permutation test is an ideal approach to checking the P-values calculated in the likelihood ratio test because of its non-parametric nature. The P-values of SNP pairs having significant associations with disease are always extremely small. Thus, we need a huge number of permutations to achieve correspondingly high resolution for the P-values. In order to investigate whether the P-values from likelihood ratio tests are reliable, a fast permutation tool to accomplish a large number of permutations is desirable. We developed a permutation tool named PBOOST. It is based on GPU with highly reliable P-value estimation. Using simulation data, we found that the P-values from likelihood ratio tests have a relative error of >100% when 50% of cells in the contingency table have an expected count less than five or when there is a zero expected count in any of the contingency table cells. In terms of speed, PBOOST completed 10⁷ permutations for a single SNP pair from the Wellcome Trust Case Control Consortium (WTCCC) genome data (Wellcome Trust Case Control Consortium, 2007) within 1 min on a single Nvidia Tesla M2090 device, while it took 60 min on a single CPU Intel Xeon E5-2650 to finish the same task. More importantly, when simultaneously testing 256 SNP pairs for 10⁷ permutations, our tool took only 5 min, while the CPU program took 10 h. By permuting on a GPU cluster consisting of 40 nodes, we completed 10¹² permutations for all 280 SNP pairs reported with P-values smaller than 1.6 × 10⁻¹² in the WTCCC datasets in 1 week. The source code and sample data are available at http://bioinformatics.ust.hk/PBOOST.zip. gyang@ust.hk; eeyu@ust.hk Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
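The label-permutation logic that such a tool parallelizes can be sketched in a few lines of CPU code; PBOOST's contribution is doing this at GPU scale, which this sketch does not attempt. The statistic function and the add-one p-value correction are generic choices, not PBOOST internals:

```python
import numpy as np

def permutation_pvalue(stat_fn, genotype, phenotype, n_perm, rng=None):
    """Monte Carlo permutation p-value for an association statistic:
    shuffle phenotype labels to break any genotype-phenotype link."""
    rng = np.random.default_rng() if rng is None else rng
    observed = stat_fn(genotype, phenotype)
    exceed = 0
    for _ in range(n_perm):
        permuted = rng.permutation(phenotype)
        if stat_fn(genotype, permuted) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)  # add-one avoids zero p-values
```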
Cheng, Juan-Juan; Zhao, Shi-Di; Gao, Ming-Zhu; Huang, Hong-Yu; Gu, Bing; Ma, Ping; Chen, Yan; Wang, Jun-Hong; Yang, Cheng-Jian; Yan, Zi-He
2015-01-01
Background Previous studies have reported that natriuretic peptides in the blood and pleural fluid (PF) are effective diagnostic markers for heart failure (HF). These natriuretic peptides include N-terminal pro-brain natriuretic peptide (NT-proBNP), brain natriuretic peptide (BNP), and midregion pro-atrial natriuretic peptide (MR-proANP). This systematic review and meta-analysis evaluates the diagnostic accuracy of blood and PF natriuretic peptides for HF in patients with pleural effusion. Methods PubMed and EMBASE databases were searched to identify articles published in English that investigated the diagnostic accuracy of BNP, NT-proBNP, and MR-proANP for HF. The last search was performed on 9 October 2014. The quality of the eligible studies was assessed using the revised Quality Assessment of Diagnostic Accuracy Studies tool. The diagnostic performance characteristics (sensitivity, specificity, and other measures of accuracy) were pooled and examined using a bivariate model. Results In total, 14 studies were included in the meta-analysis, including 12 studies reporting the diagnostic accuracy of PF NT-proBNP and 4 studies evaluating blood NT-proBNP. The summary estimates of PF NT-proBNP for HF had a diagnostic sensitivity of 0.94 (95% confidence interval [CI]: 0.90–0.96), specificity of 0.91 (95% CI: 0.86–0.95), positive likelihood ratio of 10.9 (95% CI: 6.4–18.6), negative likelihood ratio of 0.07 (95% CI: 0.04–0.12), and diagnostic odds ratio of 157 (95% CI: 57–430). The overall sensitivity of blood NT-proBNP for diagnosis of HF was 0.92 (95% CI: 0.86–0.95), with a specificity of 0.88 (95% CI: 0.77–0.94), positive likelihood ratio of 7.8 (95% CI: 3.7–16.3), negative likelihood ratio of 0.10 (95% CI: 0.06–0.16), and diagnostic odds ratio of 81 (95% CI: 27–241). The diagnostic accuracy of PF MR-proANP and blood and PF BNP was not analyzed due to the small number of related studies. Conclusions BNP, NT-proBNP, and MR-proANP, either in blood or PF, are effective tools for diagnosis of HF. Additional studies are needed to rigorously evaluate the diagnostic accuracy of PF and blood MR-proANP and BNP for the diagnosis of HF. PMID:26244664
Babafemi, Emmanuel O; Cherian, Benny P; Banting, Lee; Mills, Graham A; Ngianga, Kandala
2017-10-25
Rapid and accurate diagnosis of tuberculosis (TB) is key to managing the disease and to controlling and preventing its transmission. Many established diagnostic methods suffer from low sensitivity or delayed results and are inadequate for rapid detection of Mycobacterium tuberculosis (MTB) in pulmonary and extra-pulmonary clinical samples. This study examined whether a real-time polymerase chain reaction (RT-PCR) assay, with a turnaround time of 2 h, would prove effective for routine detection of MTB by clinical microbiology laboratories. A systematic literature search was performed for publications in any language on the detection of MTB in pathological samples by RT-PCR assay. The following sources were used: MEDLINE via PubMed, EMBASE, BIOSIS Citation Index, Web of Science, SCOPUS, ISI Web of Knowledge and the Cochrane Infectious Diseases Group Specialised Register, grey literature, and the World Health Organization and Centers for Disease Control and Prevention websites. Forty-six studies met the set inclusion criteria. Pooled summary estimates (95% CIs) were calculated for overall accuracy, and a bivariate meta-regression model was used for meta-analysis. Summary estimates for pulmonary TB (31 studies) were as follows: sensitivity 0.82 (95% CI 0.81-0.83), specificity 0.99 (95% CI 0.99-0.99), positive likelihood ratio 43.00 (28.23-64.81), negative likelihood ratio 0.16 (0.12-0.20), diagnostic odds ratio 324.26 (95% CI 189.08-556.09) and area under the curve 0.99. Summary estimates for extra-pulmonary TB (25 studies) were as follows: sensitivity 0.70 (95% CI 0.67-0.72), specificity 0.99 (95% CI 0.99-0.99), positive likelihood ratio 29.82 (17.86-49.78), negative likelihood ratio 0.33 (0.26-0.42), diagnostic odds ratio 125.20 (95% CI 65.75-238.36) and area under the curve 0.96. The RT-PCR assay demonstrated a high degree of sensitivity for pulmonary TB and good sensitivity for extra-pulmonary TB. It indicated a high degree of specificity for ruling in TB infection from sampling regimes. This was acceptable, though it may serve better as a rule-out add-on diagnostic test. RT-PCR assays demonstrate both a high degree of sensitivity in pulmonary samples and rapidity of detection of TB, which is an important factor in achieving effective global control and for patient management in terms of initiating early and appropriate anti-tubercular therapy. PROSPERO CRD42015027534.
Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation
NASA Astrophysics Data System (ADS)
Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.
2016-12-01
With the growing impacts of climate change and human activities on the water cycle, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is assigned a weight determined by the model's prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE searches the parameter space from the low-likelihood region to the high-likelihood region gradually, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, it is natural to incorporate a robust and efficient sampling algorithm, DREAMzs, into the local sampling step of NSE. The comparison results demonstrated that the improved NSE could improve the efficiency of marginal likelihood estimation significantly. However, both the improved and original NSEs suffer from heavy instability. In addition, the heavy computational cost of a huge number of model executions is overcome by using adaptive sparse grid surrogates.
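For orientation, the core nested-sampling bookkeeping can be sketched compactly: live points are repeatedly culled at the current likelihood floor while the remaining prior volume shrinks geometrically. The sketch below uses naive rejection sampling for the constrained draw, which is precisely the step that M-H or DREAMzs replaces in practice; all function names are illustrative:

```python
import numpy as np

def nested_sampling_logz(loglike, prior_sample, n_live=100, n_iter=600):
    """Minimal nested-sampling sketch for the log marginal likelihood
    (evidence) log Z, where Z = integral of L(theta) d(prior)."""
    rng = np.random.default_rng(0)
    live = [prior_sample(rng) for _ in range(n_live)]
    live_ll = np.array([loglike(t) for t in live])
    logz, log_x = -np.inf, 0.0          # X = remaining prior volume
    for i in range(n_iter):
        worst = int(np.argmin(live_ll))
        log_x_new = -(i + 1) / n_live   # E[log X] shrinks by 1/n_live
        log_w = np.log(np.exp(log_x) - np.exp(log_x_new))
        logz = np.logaddexp(logz, live_ll[worst] + log_w)
        # replace the culled point with a prior draw above its likelihood
        # (rejection sampling; real NSE uses smarter local moves here)
        while True:
            theta = prior_sample(rng)
            if loglike(theta) > live_ll[worst]:
                break
        live[worst], live_ll[worst] = theta, loglike(theta)
        log_x = log_x_new
    # add the remaining live points' contribution
    log_w = log_x - np.log(n_live)
    for ll in live_ll:
        logz = np.logaddexp(logz, ll + log_w)
    return logz

# Toy check: standard normal likelihood, uniform prior on [-5, 5];
# Z ~ 1/10, so log Z should approach -log(10) ~ -2.30.
logz = nested_sampling_logz(
    loglike=lambda t: -0.5 * t ** 2 - 0.5 * np.log(2 * np.pi),
    prior_sample=lambda rng: rng.uniform(-5, 5),
)
print(logz)
```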
NASA Astrophysics Data System (ADS)
Clark Lesko, Cherish Christina
Active learning methodologies (ALM) are associated with student success, but little research on this topic has been pursued at the community college level. At a local community college, students in science, technology, engineering, and math (STEM) courses exhibited lower than average grades. The purpose of this study was to examine whether the use of ALM predicted STEM course grades while controlling for academic discipline, course level, and class size. The theoretical framework was Vygotsky's social constructivism. Descriptive statistics and multinomial logistic regression were performed on data collected through an anonymous survey of 74 instructors of 272 courses during the 2016 fall semester. Results indicated that students were more likely to achieve passing grades when instructors employed in-class ALM, highly structured activities, and writing-based ALM, and were less likely to achieve passing grades when instructors employed project-based or online ALM. The odds ratios indicated strong positive effects (greater likelihoods of receiving As, Bs, or Cs in comparison to the grade of F) for writing-based ALM (39.1-43.3%, 95% CI [10.7-80.3%]), highly structured activities (16.4-22.2%, 95% CI [1.8-33.7%]), and in-class ALM (5.0-9.0%, 95% CI [0.6-13.8%]). Project-based and online ALM showed negative effects (lower likelihoods of receiving As, Bs, or Cs in comparison to the grade of F), with odds ratios of 15.7-20.9%, 95% CI [9.7-30.6%], and 16.1-20.4%, 95% CI [5.9-25.2%], respectively. A white paper was developed with recommendations for faculty development, computer skills assessment and training, and active research on writing-based ALM. Improving student grades and STEM course completion rates could lead to higher graduation rates and lower college costs for at-risk students by reducing course repetition and time to degree completion.
Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders
2013-10-01
Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling the inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method based on frequency analysis and the voltage distribution of the ictal activity. A distributed source model, local autoregressive average (LAURA), was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined against the reference standard: the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcomes of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data and before the surgical decision. The reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for concordant results than for discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) for ictal EEG signals selected with a standardized method is feasible in clinical practice and has good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
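The chance-corrected agreement reported here (kappa = 0.61) is Cohen's kappa. A minimal sketch of the computation from a 2x2 agreement table follows; the counts are invented for illustration and are not the study's data.

    def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
        """Cohen's kappa for a 2x2 agreement table:
        a = both methods positive, b = method 1 positive / method 2 negative,
        c = method 1 negative / method 2 positive, d = both negative."""
        n = a + b + c + d
        p_observed = (a + d) / n
        # Chance agreement from each method's marginal positive rate.
        p1, p2 = (a + b) / n, (a + c) / n
        p_expected = p1 * p2 + (1 - p1) * (1 - p2)
        return (p_observed - p_expected) / (1 - p_expected)

    # Hypothetical counts for 33 patients (localization vs. reference standard).
    print(round(cohens_kappa(14, 6, 2, 11), 2))  # ~0.52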
Chen, Hsin-Hao; Sun, Fang-Ju; Yeh, Tzu-Lin; Liu, Hsueh-Erh; Huang, Hsiu-Li; Kuo, Benjamin Ing-Tiau; Huang, Hsin-Yi
2018-01-01
Abstract Background The prevalence of cognitive impairment is increasing due to the aging population, and early detection is essential clinically. The Ascertain Dementia 8 (AD8) questionnaire is a brief informant-based measure recently developed to assess early cognitive impairment; however, its overall diagnostic performance is controversial. The objective of this meta-analysis was to assess the diagnostic accuracy of the AD8 for cognitive impairment. Methods All relevant studies were collected from databases including MEDLINE, EMBASE and the Cochrane Library up to April 2017. We used QUADAS-2 to assess methodological quality after the systematic search. The accuracy data and potential confounding variables were extracted from the eligible studies, which included both English and non-English publications. All analyses were performed using the Midas module in Stata 14.0 and Meta-DiSc 1.4 software. Results Seven relevant studies including 3728 subjects were collected and classified into two subgroups according to the severity of cognitive impairment. In both subgroups, the overall sensitivity (0.72, 0.91) was superior to the specificity (0.67, 0.78), and the pooled negative likelihood ratio (0.17, 0.13) was better than the positive likelihood ratio (2.52, 3.94). The areas under the summary receiver operating characteristic curves were 0.83 and 0.92, respectively. Meta-regression analysis showed that location (community versus non-community) may be the source of heterogeneity. The average administration time was less than 3 minutes. Conclusion Our findings suggest that the AD8 is a competitive tool for clinically screening cognitive impairment and has an optimal administration time for the busy primary care setting. Subjects with an AD8 score ≥2 should be strongly suspected of having cognitive impairment, and a further definitive diagnosis is needed. PMID:29045636
Mogendi, Joseph Birundu; De Steur, Hans; Gellynck, Xavier; Saeed, Hibbah Araba; Makokha, Anselimo
2015-06-01
Although it is crucial to identify those children likely to be treated in an appropriate nutrition rehabilitation programme and to discharge them at the appropriate time, there is no gold standard for such identification. The current study examined the appropriateness of using mid-upper arm circumference (MUAC) for the identification, follow-up and discharge of malnourished children. We also assessed its discrepancy with weight-for-height (WH) based diagnosis, the rate of recovery, and the discharge criteria of the children during nutrition rehabilitation. The study presents findings from 156 children (aged 6-59 months) attending a supplementary feeding programme at Makadara and Jericho Health Centres, Eastern District of Nairobi, Kenya. Records of age, weight, height and mid-upper arm circumference were collected at three stages of nutrition rehabilitation: admission, follow-up and discharge. The values obtained were then used to calculate z-scores as defined by WHO Anthro, while estimating different diagnostic indices. A single MUAC cut-off (< 12.5 cm) was found to exhibit high values of sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio and negative likelihood ratio at both admission and discharge. In addition, children recorded a high rate of recovery at 86 days, an average MUAC increment of 0.98 cm at a rate of 0.14 mm/day, and a weight gain of 13.49 g/day, higher in females than in their male counterparts. Nevertheless, children admitted on the basis of low MUAC had a significantly higher MUAC gain than those admitted on the basis of WH, at 0.19 mm/day and 0.13 mm/day respectively. Mid-upper arm circumference can be an appropriate tool for identifying malnourished children for admission to nutrition rehabilitation programmes. Our results confirm the appropriateness of this tool for monitoring recovery trends and discharging the children thereafter. In principle, the tool has the potential to minimize nutrition rehabilitation costs, particularly in community therapeutic centres in developing countries.
Vécsei, Edith; Steinwendner, Stephanie; Kogler, Hubert; Innerhofer, Albina; Hammer, Karin; Haas, Oskar A; Amann, Gabriele; Chott, Andreas; Vogelsang, Harald; Schoenlechner, Regine; Huf, Wolfgang; Vécsei, Andreas
2014-02-13
In diagnosing celiac disease (CD), serological tests are highly valuable. However, their role in following up children with CD after prescription of a gluten-free diet is unclear. This study aimed to compare the performance of antibody tests in predicting small-intestinal mucosal status in diagnosis vs. follow-up of pediatric CD. We conducted a prospective cohort study at a tertiary-care center. 148 children underwent esophagogastroduodenoscopy with biopsies either for symptoms ± positive CD antibodies (group A; n = 95) or to follow up CD diagnosed ≥ 1 year before study enrollment (group B; n = 53). Using biopsy (Marsh ≥ 2) as the criterion standard, areas under ROC curves (AUCs) and likelihood ratios were calculated to estimate the performance of antibody tests against tissue transglutaminase (TG2), deamidated gliadin peptide (DGP) and endomysium (EMA). AUCs were higher when tests were used for CD diagnosis vs. follow-up: 1 vs. 0.86 (P = 0.100) for TG2-IgA, 0.85 vs. 0.74 (P = 0.421) for TG2-IgG, 0.97 vs. 0.61 (P = 0.004) for DGP-IgA, and 0.99 vs. 0.88 (P = 0.053) for DGP-IgG, respectively. Empirical power was 85% for the DGP-IgA comparison, and on average 33% (range 13-43%) for the non-significant comparisons. Among group B children, 88.7% showed mucosal healing (median 2.2 years after primary diagnosis). Only the negative likelihood ratio of EMA was low enough (0.097) to effectively rule out persistent mucosal injury. However, of 12 EMA-positive children with mucosal healing, 9 subsequently turned EMA-negative. Among the CD antibodies examined, negative EMA most reliably predicts mucosal healing. In general, however, antibody tests, especially DGP-IgA, are of limited value in predicting mucosal status in the early years post-diagnosis but may be sufficient after a longer period of time.
Clinical Diagnosis of Bordetella Pertussis Infection: A Systematic Review.
Ebell, Mark H; Marchello, Christian; Callahan, Maria
2017-01-01
Bordetella pertussis (BP) is a common cause of prolonged cough. Our objective was to perform an updated systematic review of the clinical diagnosis of BP without restriction by patient age. We identified prospective cohort studies of patients with cough or suspected pertussis and assessed study quality using QUADAS-2. We performed bivariate meta-analysis to calculate summary estimates of accuracy and created summary receiver operating characteristic curves to explore heterogeneity by vaccination status and age. Of 381 studies initially identified, 22 met our inclusion criteria, of which 14 had a low risk of bias. The overall clinical impression was the most accurate predictor of BP (positive likelihood ratio [LR+], 3.3; negative likelihood ratio [LR-], 0.63). The presence of whooping cough (LR+, 2.1) and posttussive vomiting (LR+, 1.7) somewhat increased the likelihood of BP, whereas the absence of paroxysmal cough (LR-, 0.58) and the absence of sputum (LR-, 0.63) decreased it. Whooping cough and posttussive vomiting have lower sensitivity in adults. Clinical criteria defined by the Centers for Disease Control and Prevention were sensitive (0.90) but nonspecific. Typical signs and symptoms of BP may be more sensitive but less specific in vaccinated patients. The clinician's overall impression was the most accurate way to determine the likelihood of BP infection when a patient initially presented. Clinical decision rules that combine signs, symptoms, and point-of-care tests have not yet been developed or validated. © Copyright 2017 by the American Board of Family Medicine.
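Likelihood ratios such as these are applied through the odds form of Bayes' theorem, the same arithmetic a Fagan nomogram performs graphically: convert the pretest probability to odds, multiply by the LR, and convert back. A brief sketch, pairing the overall-clinical-impression LRs with an assumed 10% pretest probability (the assumption is ours, purely for illustration):

    def post_test_probability(pretest_prob: float, lr: float) -> float:
        """Apply a likelihood ratio via the odds form of Bayes' theorem."""
        pretest_odds = pretest_prob / (1.0 - pretest_prob)
        post_odds = pretest_odds * lr
        return post_odds / (1.0 + post_odds)

    pretest = 0.10  # assumed pretest probability of pertussis
    print(f"positive impression: {post_test_probability(pretest, 3.3):.0%}")   # ~27%
    print(f"negative impression: {post_test_probability(pretest, 0.63):.0%}")  # ~7%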
Li, Shi; Mukherjee, Bhramar; Batterman, Stuart; Ghosh, Malay
2013-12-01
Case-crossover designs are widely used to study short-term exposure effects on the risk of acute adverse health events. While the frequentist literature on this topic is vast, there is no Bayesian work in this general area. The contribution of this paper is twofold. First, the paper establishes Bayesian equivalence results that require characterization of the set of priors under which the posterior distributions of the risk ratio parameters based on a case-crossover and time-series analysis are identical. Second, the paper studies inferential issues under case-crossover designs in a Bayesian framework. Traditionally, a conditional logistic regression is used for inference on risk-ratio parameters in case-crossover studies. We consider instead a more general full likelihood-based approach which makes less restrictive assumptions on the risk functions. Formulation of a full likelihood leads to growth in the number of parameters proportional to the sample size. We propose a semi-parametric Bayesian approach using a Dirichlet process prior to handle the random nuisance parameters that appear in a full likelihood formulation. We carry out a simulation study to compare the Bayesian methods based on full and conditional likelihood with the standard frequentist approaches for case-crossover and time-series analysis. The proposed methods are illustrated through the Detroit Asthma Morbidity, Air Quality and Traffic study, which examines the association between acute asthma risk and ambient air pollutant concentrations. © 2013, The International Biometric Society.
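To make the contrast concrete, the sketch below writes out the conditional logistic log-likelihood used in the traditional case-crossover analysis, in which the stratum-specific nuisance parameters cancel; the full-likelihood and Bayesian machinery of the paper generalizes away from exactly this function. Data and names are illustrative, not the paper's code.

    import numpy as np

    def conditional_loglik(beta: np.ndarray, strata: list) -> float:
        """Conditional logistic log-likelihood for case-crossover data.

        Each stratum is an (M+1) x p array of exposures; by convention row 0
        is the case period and rows 1..M are the matched control periods.
        The stratum intercepts cancel from the conditional likelihood:
            P(case | stratum) = exp(x_case . beta) / sum_j exp(x_j . beta)"""
        total = 0.0
        for x in strata:
            scores = x @ beta
            # Stable log-softmax of the case row (row 0).
            m = scores.max()
            total += scores[0] - (m + np.log(np.exp(scores - m).sum()))
        return total

    # Tiny illustrative dataset: 3 strata, one exposure (e.g., pollutant level),
    # case period listed first in each stratum.
    strata = [np.array([[1.2], [0.4], [0.3]]),
              np.array([[0.9], [0.8], [0.2]]),
              np.array([[1.5], [0.6], [1.0]])]
    for b in (0.0, 0.5, 1.0):
        print(b, round(conditional_loglik(np.array([b]), strata), 3))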
[Clinical examination and the Valsalva maneuver in heart failure].
Liniado, Guillermo E; Beck, Martín A; Gimeno, Graciela M; González, Ana L; Cianciulli, Tomás F; Castiello, Gustavo G; Gagliardi, Juan A
2018-01-01
Congestion in heart failure patients with reduced ejection fraction (HFrEF) is clinically relevant and closely linked to the clinical course. Bedside blood pressure measurement during the Valsalva maneuver (Val), added to the clinical examination, may improve the assessment of congestion when compared to NT-proBNP levels and left atrial pressure (LAP) estimation by Doppler echocardiography, used as surrogate markers of congestion in HFrEF. A clinical examination, LAP estimation and blood tests were performed in 69 ambulatory HFrEF patients with left ventricular ejection fraction ≤ 40% and sinus rhythm. The Framingham Heart Failure Score (HFS) was used to evaluate clinical congestion; Val was classified as normal or abnormal, NT-proBNP was classified as low (< 1000 pg/ml) or high (≥ 1000 pg/ml), and the ratio between Doppler early mitral inflow and tissue diastolic velocity was used to estimate LAP, classified as low (E/e' < 15) or high (E/e' ≥ 15). Of the 69 patients included, 27 had an HFS ≥ 2, and 13 of them had high NT-proBNP. An HFS ≥ 2 had 62% sensitivity, 70% specificity and a positive likelihood ratio of 2.08 (p = 0.01) for detecting congestion. When Val was added to the clinical examination, the presence of an HFS ≥ 2 and abnormal Val showed 100% sensitivity, 64% specificity and a positive likelihood ratio of 2.8 (p = 0.0004). Compared with LAP, the presence of an HFS ≥ 2 and abnormal Val had 86% sensitivity, 54% specificity and a positive likelihood ratio of 1.86 (p = 0.03). In conclusion, an integrated clinical examination with the addition of the Valsalva maneuver may improve the assessment of congestion in patients with HFrEF.
Blanch, Peter; Gabbett, Tim J
2016-04-01
The return to sport from injury is a difficult multifactorial decision, and the risk of reinjury is an important component. Most protocols for ascertaining return-to-play status involve assessment of the healing status of the original injury and functional tests, which have little proven predictive ability. Little attention has been paid to ascertaining whether an athlete has completed sufficient training to be prepared for competition. Recently, we have completed a series of studies in cricket, rugby league and Australian rules football showing that when an athlete's training and playing load for a given week (acute load) spikes above what they have been doing on average over the past 4 weeks (chronic load), they are more likely to be injured. This spike in the acute:chronic workload ratio may result from a single unusual week or from an ebbing of the athlete's training load over a period of time, as in recuperation from injury. Our findings demonstrate a strong predictive (R(2) = 0.53) polynomial relationship between the acute:chronic workload ratio and injury likelihood. In the elite team setting, it is possible to quantify the loads we expect athletes to endure when returning to sport, so assessment of the acute:chronic workload ratio should be included in the return-to-play decision-making process. Published by the BMJ Publishing Group Limited.
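The ratio itself is simple to compute from a weekly load series: the most recent week's load divided by the rolling four-week average. A minimal sketch follows (illustrative load numbers; note that some implementations exclude the current week from the chronic average, which is a convention choice):

    def acute_chronic_ratio(weekly_loads: list) -> float:
        """Acute:chronic workload ratio: the latest week's load divided by
        the average load over the last four weeks (including the latest)."""
        if len(weekly_loads) < 4:
            raise ValueError("need at least four weeks of load data")
        acute = weekly_loads[-1]
        chronic = sum(weekly_loads[-4:]) / 4.0
        return acute / chronic

    # A training spike after two easy weeks pushes the ratio well above 1.
    print(round(acute_chronic_ratio([400, 250, 220, 600]), 2))  # 1.63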
Office manager and nurse perspectives on facilitators of adult immunization.
Nowalk, Mary Patricia; Tabbarah, Melissa; Hart, Jonathan A; Fox, Dwight E; Raymund, Mahlon; Wilson, Stephen A; Zimmerman, Richard K
2009-10-01
To assess which characteristics of primary care practices serving low- to middle-income white and minority patients relate to pneumococcal polysaccharide vaccine (PPV) and influenza vaccination rates. In an intentional sample of 18 primary care practices, PPV and influenza vaccination rates were determined for a sample of 2289 patients ≥65 years old using medical record review. Office managers and lead nurses were surveyed about their office systems for providing adult immunizations, their beliefs about PPV and influenza vaccines, and their own vaccination status. Hierarchical linear modeling (HLM) analyses were used to account for the clustered nature of the data. Sampled patients were most frequently female (61%) and white (83%), and averaged 76 years of age. Weighted vaccination rates were 61.1% for PPV and 52.5% for influenza; rates varied by practice. Using HLM, with patient age and race entered as level 1 variables and office factors entered as level 2 variables, the time allotted for an annual well visit was associated with a higher likelihood of influenza vaccination (odds ratio [OR] = 1.04; 95% confidence interval [CI] = 1.02, 1.07; P = .003). Nurse influenza vaccination status was associated with a higher likelihood of PPV vaccination (OR = 3.81; 95% CI = 1.49, 9.78; P = .009). In addition to race and age, visit length and the nurses' vaccination status were associated with adult vaccination rates. Quality improvement initiatives for adult vaccination might include strengthening the social influence of providers and/or ensuring that adequate time is scheduled for preventive care.
Assessing Success on the Uniform CPA Exam: A Logit Approach.
ERIC Educational Resources Information Center
Brahmasrene, Tantatape; Whitten, Donna
2001-01-01
A logit model was used to test the likelihood of success of 231 candidates on the Uniform Certified Public Accountants Examination. Significant determinants of success included undergraduate grade point average, age, private accounting experience, and gender. (SK)
NASA Astrophysics Data System (ADS)
Aminah, Agustin Siti; Pawitan, Gandhi; Tantular, Bertho
2017-03-01
So far, most of the data published by Statistics Indonesia (BPS), the provider of national statistics, are limited to the district level. Sample sizes at smaller area levels are insufficient, so direct estimation of poverty indicators there produces high standard errors, and analyses based on them are unreliable. To solve this problem, an estimation method that can provide better accuracy by combining survey data with other auxiliary data is required. One method often used for this purpose is small area estimation (SAE). Among the many methods used in SAE is empirical best linear unbiased prediction (EBLUP). The EBLUP method under the maximum likelihood (ML) procedure does not account for the loss of degrees of freedom due to estimating β with β̂. This drawback motivates the use of the restricted maximum likelihood (REML) procedure. This paper proposes EBLUP with the REML procedure for estimating poverty indicators by modeling average household expenditure per capita, and implements a bootstrap procedure to calculate the mean squared error (MSE) in order to compare the accuracy of the EBLUP method with that of direct estimation. Results show that the EBLUP method reduced the MSE in small area estimation.
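As a toy illustration of the REML-based mixed model underlying EBLUP, the sketch below fits a random-intercept model to simulated unit-level data and forms area predictions as the fixed part plus the predicted random effect. It assumes statsmodels' MixedLM interface and is a simplified unit-level stand-in for the paper's model, not its implementation.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    # Simulated unit-level data: 20 small areas, one auxiliary covariate,
    # and a random area effect.
    n_areas, n_per = 20, 15
    area = np.repeat(np.arange(n_areas), n_per)
    x = rng.uniform(0, 1, n_areas * n_per)
    u = rng.normal(0, 0.5, n_areas)                  # true area effects
    y = 2.0 + 3.0 * x + u[area] + rng.normal(0, 1.0, x.size)

    exog = sm.add_constant(x)
    # REML (statsmodels' default) avoids the ML degrees-of-freedom loss
    # that motivates the paper.
    fit = sm.MixedLM(y, exog, groups=area).fit(reml=True)

    # EBLUP-style area predictions: fixed part at the area-mean covariate
    # plus the predicted random intercept.
    x_bar = np.array([x[area == g].mean() for g in range(n_areas)])
    re = np.array([fit.random_effects[g].iloc[0] for g in range(n_areas)])
    eblup = fit.fe_params.values @ np.vstack([np.ones(n_areas), x_bar]) + re
    print(np.round(eblup[:5], 2))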
Transfer Entropy as a Log-Likelihood Ratio
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Bossomaier, Terry
2012-09-01
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
Transfer entropy as a log-likelihood ratio.
Barnett, Lionel; Bossomaier, Terry
2012-09-28
Transfer entropy, an information-theoretic measure of time-directed information transfer between joint processes, has steadily gained popularity in the analysis of complex stochastic dynamics in diverse fields, including the neurosciences, ecology, climatology, and econometrics. We show that for a broad class of predictive models, the log-likelihood ratio test statistic for the null hypothesis of zero transfer entropy is a consistent estimator for the transfer entropy itself. For finite Markov chains, furthermore, no explicit model is required. In the general case, an asymptotic χ2 distribution is established for the transfer entropy estimator. The result generalizes the equivalence in the Gaussian case of transfer entropy and Granger causality, a statistical notion of causal influence based on prediction via vector autoregression, and establishes a fundamental connection between directed information transfer and causality in the Wiener-Granger sense.
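In the Gaussian special case mentioned in the abstract, transfer entropy equals Granger causality, and the log-likelihood-ratio reading becomes concrete: fit the target series with and without the source's past and compare residual variances. A small numpy sketch under those assumptions (lag 1, linear-Gaussian), illustrative rather than the paper's estimator:

    import numpy as np

    rng = np.random.default_rng(0)

    def gaussian_transfer_entropy(x: np.ndarray, y: np.ndarray) -> float:
        """Lag-1 Gaussian transfer entropy from x to y:
        TE = 0.5 * log(var(restricted residuals) / var(full residuals)).
        This is the per-sample log-likelihood ratio of the model using x's
        past against the model without it; 2*n*TE is asymptotically
        chi-squared with 1 degree of freedom under the null of zero TE."""
        y_now, y_past, x_past = y[1:], y[:-1], x[:-1]
        # Restricted model: y_now ~ y_past (slope + intercept via polyfit).
        r_rest = y_now - np.polyval(np.polyfit(y_past, y_now, 1), y_past)
        # Full model: y_now ~ 1 + y_past + x_past, by ordinary least squares.
        design = np.column_stack([np.ones_like(y_past), y_past, x_past])
        coef, *_ = np.linalg.lstsq(design, y_now, rcond=None)
        r_full = y_now - design @ coef
        return 0.5 * np.log(r_rest.var() / r_full.var())

    # x drives y with one step of delay; y does not drive x.
    n = 5000
    x = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.5 * rng.standard_normal()
    print("TE x->y:", round(gaussian_transfer_entropy(x, y), 3))  # clearly positive
    print("TE y->x:", round(gaussian_transfer_entropy(y, x), 3))  # near zero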
Diffuse prior monotonic likelihood ratio test for evaluation of fused image quality measures.
Wei, Chuanming; Kaplan, Lance M; Burks, Stephen D; Blum, Rick S
2011-02-01
This paper introduces a novel method to score how well proposed fused image quality measures (FIQMs) indicate the effectiveness of humans to detect targets in fused imagery. The human detection performance is measured via human perception experiments. A good FIQM should relate to perception results in a monotonic fashion. The method computes a new diffuse prior monotonic likelihood ratio (DPMLR) to facilitate the comparison of the H(1) hypothesis that the intrinsic human detection performance is related to the FIQM via a monotonic function against the null hypothesis that the detection and image quality relationship is random. The paper discusses many interesting properties of the DPMLR and demonstrates the effectiveness of the DPMLR test via Monte Carlo simulations. Finally, the DPMLR is used to score FIQMs with test cases considering over 35 scenes and various image fusion algorithms.
Ebell, Mark H; Jang, Woncheol; Shen, Ye; Geocadin, Romergryko G
2013-11-11
Informing patients and providers of the likelihood of survival after in-hospital cardiac arrest (IHCA), neurologically intact or with minimal deficits, may be useful when discussing do-not-attempt-resuscitation orders. To develop a simple prearrest point score that can identify patients unlikely to survive IHCA, neurologically intact or with minimal deficits. The study included 51,240 inpatients experiencing an index episode of IHCA between January 1, 2007, and December 31, 2009, in 366 hospitals participating in the Get With the Guidelines-Resuscitation registry. Dividing data into training (44.4%), test (22.2%), and validation (33.4%) data sets, we used multivariate methods to select the best independent predictors of good neurologic outcome, created a series of candidate decision models, and used the test data set to select the model that best classified patients as having a very low (<1%), low (1%-3%), average (>3%-15%), or higher than average (>15%) likelihood of survival after in-hospital cardiopulmonary resuscitation for IHCA with good neurologic status. The final model was evaluated using the validation data set. Survival to discharge after in-hospital cardiopulmonary resuscitation for IHCA with good neurologic status (neurologically intact or with minimal deficits) based on a Cerebral Performance Category score of 1. The best performing model was a simple point score based on 13 prearrest variables. The C statistic was 0.78 when applied to the validation set. It identified the likelihood of a good outcome as very low in 9.4% of patients (good outcome in 0.9%), low in 18.9% (good outcome in 1.7%), average in 54.0% (good outcome in 9.4%), and above average in 17.7% (good outcome in 27.5%). Overall, the score can identify more than one-quarter of patients as having a low or very low likelihood of survival to discharge, neurologically intact or with minimal deficits after IHCA (good outcome in 1.4%). The Good Outcome Following Attempted Resuscitation (GO-FAR) scoring system identifies patients who are unlikely to benefit from a resuscitation attempt should they experience IHCA. This information can be used as part of a shared decision regarding do-not-attempt-resuscitation orders.
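The C statistic quoted above (0.78) has a simple interpretation: the probability that a randomly chosen patient with a good outcome received a higher predicted score than a randomly chosen patient without one, with ties counted as half. A minimal sketch on toy data (not the registry's):

    def c_statistic(scores, outcomes):
        """C statistic (AUC): probability that a randomly chosen patient
        with the outcome scores higher than one without it; ties count half."""
        pos = [s for s, o in zip(scores, outcomes) if o == 1]
        neg = [s for s, o in zip(scores, outcomes) if o == 0]
        wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
                   for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Toy data: higher score = higher predicted chance of a good outcome.
    print(c_statistic([0.9, 0.8, 0.7, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0, 0]))  # ~0.89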
Mohd-Sidik, Sherina; Arroll, Bruce; Goodyear-Smith, Felicity; Zain, Azhar M D
2011-01-01
To determine the diagnostic accuracy of the two questions with help question (TQWHQ) in the Malay language. The two questions are case-finding questions for depression, and a question on whether help is needed was added to increase the specificity of the two questions. This cross-sectional validation study was conducted in a government-funded primary care clinic in Malaysia. The participants were 146 consecutive women patients receiving no psychotropic drugs and who were Malay speakers. The main outcome measures were the sensitivity, specificity, and likelihood ratios of the two questions and the help question. The two questions showed a sensitivity of 99% (95% confidence interval 88% to 99.9%) and a specificity of 70% (62% to 78%). The likelihood ratio for a positive test was 3.3 (2.5 to 4.5) and the likelihood ratio for a negative test was 0.01 (0.00 to 0.57). The addition of the help question to the two questions increased the specificity to 95% (89% to 98%). The two questions on depression detected most cases of depression in this study. The questions have the advantage of brevity. The addition of the help question increased the specificity of the two questions. Based on these findings, the TQWHQ can be strongly recommended for the detection of depression in government primary care clinics in Malaysia. Translation did not appear to affect the validity of the TQWHQ.
[Accuracy of three methods for the rapid diagnosis of oral candidiasis].
Lyu, X; Zhao, C; Yan, Z M; Hua, H
2016-10-09
Objective: To explore a simple, rapid and efficient method for the diagnosis of oral candidiasis in clinical practice. Methods: A total of 124 consecutive patients with suspected oral candidiasis were enrolled from the Department of Oral Medicine, Peking University School and Hospital of Stomatology, Beijing, China. Exfoliated cells of the oral mucosa and saliva (or concentrated oral rinse) obtained from all participants were tested by three rapid smear methods (10% KOH smear, Gram-stained smear, Congo red stained smear). The diagnostic efficacy (sensitivity, specificity, Youden's index, likelihood ratios, consistency, predictive values and area under the curve (AUC)) of each of the three methods was assessed by comparing the results with the gold standard (a combination of clinical diagnosis, laboratory diagnosis and expert opinion). Results: The Gram-stained smear of saliva (or concentrated oral rinse) demonstrated the highest sensitivity (82.3%). The 10% KOH smear of exfoliated cells showed the highest specificity (93.5%). The Congo red stained smear of saliva (or concentrated oral rinse) displayed the highest overall diagnostic efficacy (79.0% sensitivity, 80.6% specificity, 0.60 Youden's index, 4.08 positive likelihood ratio, 0.26 negative likelihood ratio, 80% consistency, 80.3% positive predictive value, 79.4% negative predictive value and 0.80 AUC). Conclusions: The Congo red stained smear of saliva (or concentrated oral rinse) could be used as a point-of-care tool for the rapid diagnosis of oral candidiasis in clinical practice. Trial registration: Chinese Clinical Trial Registry, ChiCTR-DDD-16008118.
Recognition of depressive symptoms by physicians.
Henriques, Sergio Gonçalves; Fráguas, Renério; Iosifescu, Dan V; Menezes, Paulo Rossi; Lucia, Mara Cristina Souza de; Gattaz, Wagner Farid; Martins, Milton Arruda
2009-01-01
To investigate the recognition of depressive symptoms of major depressive disorder (MDD) by general practitioners. MDD is underdiagnosed in medical settings, possibly because of difficulties in the recognition of specific depressive symptoms. A cross-sectional study of 316 outpatients at their first visit to a teaching general hospital. We evaluated the performance of 19 general practitioners using Primary Care Evaluation of Mental Disorders (PRIME-MD) to detect depressive symptoms and compared them to 11 psychiatrists using Structured Clinical Interview Axis I Disorders, Patient Version (SCID I/P). We measured likelihood ratios, sensitivity, specificity, and false positive and false negative frequencies. The lowest positive likelihood ratios were for psychomotor agitation/retardation (1.6) and fatigue (1.7), mostly because of a high rate of false positive results. The highest positive likelihood ratio was found for thoughts of suicide (8.5). The lowest sensitivity, 61.8%, was found for impaired concentration. The sensitivity for worthlessness or guilt in patients with medical illness was 67.2% (95% CI, 57.4-76.9%), which is significantly lower than that found in patients without medical illness, 91.3% (95% CI, 83.2-99.4%). Less adequately identified depressive symptoms were both psychological and somatic in nature. The presence of a medical illness may decrease the sensitivity of recognizing specific depressive symptoms. Programs for training physicians in the use of diagnostic tools should consider their performance in recognizing specific depressive symptoms. Such procedures could allow for the development of specific training to aid in the detection of the most misrecognized depressive symptoms.
Gallo, Jiri; Juranova, Jarmila; Svoboda, Michal; Zapletalova, Jana
2017-09-01
The aim of this study was to evaluate the characteristics of the synovial fluid (SF) white cell count (SWCC) and neutrophil/lymphocyte percentages in the diagnosis of prosthetic joint infection (PJI) at particular threshold values. This was a prospective study of 391 patients in whom SF specimens were collected before total joint replacement revisions. SF was aspirated before incision of the joint capsule. The PJI diagnosis was based only on non-SF data. Receiver operating characteristic plots were constructed for the SWCC and differential leukocyte counts in the aspirated fluid. Binomial logistic regression was used to distinguish infected and non-infected cases in the combined data. PJI was diagnosed in 78 patients, and aseptic revision in 313 patients. The areas under the curve (AUC) for the SWCC, the neutrophil percentage and the lymphocyte percentage were 0.974, 0.962, and 0.951, respectively. The optimal cut-offs for PJI were 3,450 cells/μL, 74.6% neutrophils, and 14.6% lymphocytes. Positive likelihood ratios for the SWCC, neutrophil and lymphocyte percentages were 19.0, 10.4, and 9.5, respectively; negative likelihood ratios were 0.06, 0.076, and 0.092, respectively. Based on the AUC, the present study identified cut-off values for the SWCC and differential leukocyte count for the diagnosis of PJI. The likelihood ratio for a positive/negative SWCC can significantly change the pre-test probability of PJI.
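Cut-offs like 3,450 cells/μL are typically found by scanning the ROC curve for the threshold that maximizes Youden's J (sensitivity + specificity - 1). A compact numpy sketch on synthetic counts (illustrative values, not the study's data):

    import numpy as np

    rng = np.random.default_rng(7)

    def best_cutoff(values: np.ndarray, infected: np.ndarray):
        """Scan every observed value as a candidate cut-off (test positive
        when value >= cut-off) and keep the one maximizing Youden's J."""
        best = (np.nan, -1.0, 0.0, 0.0)
        for c in np.unique(values):
            pred = values >= c
            sens = (pred & infected).sum() / infected.sum()
            spec = (~pred & ~infected).sum() / (~infected).sum()
            if sens + spec - 1 > best[1]:
                best = (c, sens + spec - 1, sens, spec)
        return best

    # Synthetic synovial WBC counts (cells/uL); infected joints run higher.
    aseptic = rng.lognormal(mean=6.0, sigma=0.8, size=300)
    septic = rng.lognormal(mean=9.3, sigma=0.9, size=80)
    values = np.concatenate([aseptic, septic])
    labels = np.concatenate([np.zeros(300, bool), np.ones(80, bool)])
    cutoff, j, sens, spec = best_cutoff(values, labels)
    print(f"cut-off ~ {cutoff:.0f} cells/uL, J = {j:.2f}, "
          f"sens = {sens:.2f}, spec = {spec:.2f}")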
Diagnostic accuracy of history and physical examination in bacterial acute rhinosinusitis.
Autio, Timo J; Koskenkorva, Timo; Närkiö, Mervi; Leino, Tuomo K; Koivunen, Petri; Alho, Olli-Pekka
2015-07-01
To evaluate the diagnostic accuracy of symptoms, the symptom progression pattern, and clinical signs in identifying bacterial acute rhinosinusitis (ARS). We conducted an inception cohort study among 50 military recruits with ARS. We collected symptoms daily from the onset of symptoms to approximately 10 days. At 9 to 10 days, standardized data on symptoms and physical findings were gathered. A positive culture of maxillary sinus aspirate was considered the reference standard for bacterial ARS. At 9 to 10 days, neither the presence of any of the symptoms nor their deterioration after 5 days could be used to diagnose bacterial ARS. Toothache had an adequate positive likelihood ratio (LR+ 4.4) but was too rare to be used for screening. In contrast, several physical findings at 9 to 10 days were of more diagnostic use and frequent enough for screening. A moderate or profuse (vs. none/minimal) amount of secretion in the nasal passage seen on anterior rhinoscopy satisfactorily either ruled in bacterial ARS, if present (LR+ 3.2), or ruled it out, if absent (negative likelihood ratio 0.2). If any secretion was seen in the posterior pharynx or middle meatus, the probability of bacterial ARS increased markedly (LR+ 5.3 and LR+ 11.0, respectively). We found symptoms, or changes in them, to be of little use in identifying bacterial ARS. In contrast, we observed several clinical findings after 9 to 10 days of symptoms that predicted bacterial ARS quite accurately. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
The Diagnostic Accuracy of Cytology for the Diagnosis of Hepatobiliary and Pancreatic Cancers.
Al-Hajeili, Marwan; Alqassas, Maryam; Alomran, Astabraq; Batarfi, Bashaer; Basunaid, Bashaer; Alshail, Reem; Alaydarous, Shahad; Bokhary, Rana; Mosli, Mahmoud
2018-06-13
Although cytology testing is considered a valuable method to diagnose tumors that are difficult to access such as hepato-biliary-pancreatic (HBP) malignancies, its diagnostic accuracy remains unclear. We therefore aimed to investigate the diagnostic accuracy of cytology testing for HBP tumors. We performed a retrospective study of all cytology samples that were used to confirm radiologically detected HBP tumors between 2002 and 2016. The cytology techniques used in our center included fine needle aspiration (FNA), brush cytology, and aspiration of bile. Sensitivity, specificity, positive and negative predictive values, and likelihood ratios were calculated in comparison to histological confirmation. From a total of 133 medical records, we calculated an overall sensitivity of 76%, specificity of 74%, a negative likelihood ratio of 0.30, and a positive likelihood ratio of 2.9. Cytology was more accurate in diagnosing lesions of the liver (sensitivity 79%, specificity 57%) and biliary tree (sensitivity 100%, specificity 50%) compared to pancreatic (sensitivity 60%, specificity 83%) and gallbladder lesions (sensitivity 50%, specificity 85%). Cytology was more accurate in detecting primary cancers (sensitivity 77%, specificity 73%) when compared to metastatic cancers (sensitivity 73%, specificity 100%). FNA was the most frequently used cytological technique to diagnose HBP lesions (sensitivity 78.8%). Cytological testing is efficient in diagnosing HBP cancers, especially for hepatobiliary tumors. Given its relative simplicity, cost-effectiveness, and paucity of alternative diagnostic methods, cytology should still be considered as a first-line tool for diagnosing HBP malignancies. © 2018 S. Karger AG, Basel.
Inferring relationships between pairs of individuals from locus heterozygosities
Presciuttini, Silvano; Toni, Chiara; Tempestini, Elena; Verdiani, Simonetta; Casarino, Lucia; Spinetti, Isabella; Stefano, Francesco De; Domenici, Ranieri; Bailey-Wilson, Joan E
2002-01-01
Background The traditional exact method for inferring relationships between individuals from genetic data is not easily applicable in all situations that may be encountered in the several fields of applied genetics. This study describes an approach that gives affordable results and is easily applicable; it is based on the probabilities that two individuals share 0, 1 or both alleles at a locus identical by state. Results We show that these probabilities (zi) depend on locus heterozygosity (H) and are scarcely affected by variation in the distribution of allele frequencies. This allows us to obtain empirical curves relating the zi to H for a series of common relationships, so that the likelihood ratio of a pair of relationships between any two individuals, given their genotypes at a locus, is a function of a single parameter, H. Application to large samples of mother-child and full-sib pairs shows that the statistical power of this method to infer the correct relationship is not much lower than that of the exact method. Analysis of a large database of STR data shows that locus heterozygosity does not vary significantly among Caucasian populations, apart from special cases, so that the likelihood ratios of the more common relationships between pairs of individuals may be obtained by looking up tabulated zi values. Conclusions A simple method is provided, which may be used by any scientist with the help of a calculator or a spreadsheet to compute the likelihood ratios of common alternative relationships between pairs of individuals. PMID:12441003
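The mechanics of combining single-locus evidence are simple: with probabilities z0, z1, z2 that a pair under a given relationship shares 0, 1, or 2 alleles identical by state at a locus, the likelihood ratio of two candidate relationships is the product over loci of the ratios of these probabilities. A schematic sketch follows; the zi values below are placeholders, not the paper's heterozygosity-dependent curves.

    from math import prod

    def relationship_lr(shared_counts, z_hyp1, z_hyp2):
        """Likelihood ratio of relationship hypothesis 1 vs hypothesis 2.

        shared_counts: alleles identical by state (0, 1 or 2) at each locus.
        z_hyp1/z_hyp2: per-locus triples (z0, z1, z2) giving the probability
        of sharing 0/1/2 alleles under each hypothesis; in the paper these
        come from empirical curves in locus heterozygosity H."""
        return prod(z1[k] / z2[k]
                    for k, z1, z2 in zip(shared_counts, z_hyp1, z_hyp2))

    # Five loci; e.g., full siblings vs unrelated (placeholder probabilities).
    sharing = [2, 1, 2, 1, 1]
    full_sib = [(0.2, 0.5, 0.3)] * 5
    unrelated = [(0.5, 0.4, 0.1)] * 5
    print(round(relationship_lr(sharing, full_sib, unrelated), 2))  # ~17.58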
Kim, T J; Roesler, N M; von dem Knesebeck, O
2017-06-01
Numerous studies have investigated the association between education and overweight/obesity. Yet less is known about the relative importance of causation (i.e. the influence of education on risks of overweight/obesity) and selection (i.e. the influence of overweight/obesity on the likelihood to attain education) hypotheses. A systematic review was performed to assess the linkage between education and overweight/obesity in prospective studies in general populations. Studies were searched within five databases, and study quality was appraised with the Newcastle-Ottawa scale. In total, 31 studies were considered for meta-analysis. Regarding causation (24 studies), the lower educated had a higher likelihood (odds ratio: 1.33, 1.21-1.47) and greater risk (risk ratio: 1.34, 1.08-1.66) for overweight/obesity, when compared with the higher educated. However, these associations were no longer statistically significant when accounting for publication bias. Concerning selection (seven studies), overweight/obese individuals had a greater likelihood of lower education (odds ratio: 1.57, 1.10-2.25), when contrasted with the non-overweight or non-obese. Subgroup analyses were performed by stratifying meta-analyses upon different factors. Relationships between education and overweight/obesity were affected by study region, age groups, gender and observation period. In conclusion, it is necessary to consider both causation and selection processes in order to tackle educational inequalities in obesity appropriately. © 2017 World Obesity Federation.
Childhood self-regulatory skills predict adolescent smoking behavior.
deBlois, Madeleine E; Kubzansky, Laura D
2016-01-01
Cigarette smoking is the primary preventable cause of premature death. Better self-regulatory capacity is a key psychosocial factor that has been linked with reduced likelihood of tobacco use. Studies point to the importance of multiple forms of self-regulation, in the domains of emotion, attention, behavior, and social regulation, although no work has evaluated all of these domains in a single prospective study. Considering those four self-regulation domains separately and in combination, this study prospectively investigated whether greater self-regulation in childhood is associated with reduced likelihood of either trying cigarettes or becoming a regular smoker. Hypotheses were tested using longitudinal data from a cohort of 1709 US children participating in the Panel Study of Income Dynamics--Child Development Supplement. Self-regulation was assessed at study baseline when children ranged in age from 6 to 14 years, using parent-reported measures derived from the Behavior Problems Index and Positive Behavior Scale. Children ages 12-19 self-reported their cigarette smoking, defined in two ways: (1) trying and (2) regular use. Separate multiple logistic regression models were used to evaluate odds of trying or regularly using cigarettes, taking account of various potential confounders. Over an average of five years of follow-up, 34.5% of children ever tried cigarettes and 10.6% smoked regularly. Higher behavioral self-regulation was the only domain associated with reduced odds of trying cigarettes (odds ratio (OR) = .85, 95% confidence interval (CI) = .73-.99). Effective regulation in each of the domains was associated with reduced likelihood of regular smoking, although the association with social regulation was not statistically significant (ORs range .70-.85). For each additional domain in which a child was able to regulate successfully, the odds of becoming a regular smoker dropped by 18% (95% CI = .70-.97). These findings suggest that effective childhood self-regulatory skills across multiple domains may reduce future health risk behaviors.
Clarke, Shannon M.; Henry, Hannah M.; Dodds, Ken G.; Jowett, Timothy W. D.; Manley, Tim R.; Anderson, Rayna M.; McEwan, John C.
2014-01-01
Accurate pedigree information is critical to animal breeding systems to ensure the highest rate of genetic gain and management of inbreeding. The abundance of available genomic data, together with development of high throughput genotyping platforms, means that single nucleotide polymorphisms (SNPs) are now the DNA marker of choice for genomic selection studies. Furthermore the superior qualities of SNPs compared to microsatellite markers allows for standardization between laboratories; a property that is crucial for developing an international set of markers for traceability studies. The objective of this study was to develop a high throughput SNP assay for use in the New Zealand sheep industry that gives accurate pedigree assignment and will allow a reduction in breeder input over lambing. This required two phases of development--firstly, a method of extracting quality DNA from ear-punch tissue performed in a high throughput cost efficient manner and secondly a SNP assay that has the ability to assign paternity to progeny resulting from mob mating. A likelihood based approach to infer paternity was used where sires with the highest LOD score (log of the ratio of the likelihood given parentage to likelihood given non-parentage) are assigned. An 84 “parentage SNP panel” was developed that assigned, on average, 99% of progeny to a sire in a problem where there were 3,000 progeny from 120 mob mated sires that included numerous half sib sires. In only 6% of those cases was there another sire with at least a 0.02 probability of paternity. Furthermore dam information (either recorded, or by genotyping possible dams) was absent, highlighting the SNP test’s suitability for paternity testing. Utilization of this parentage SNP assay will allow implementation of progeny testing into large commercial farms where the improved accuracy of sire assignment and genetic evaluations will increase genetic gain in the sheep industry. PMID:24740141
Clarke, Shannon M; Henry, Hannah M; Dodds, Ken G; Jowett, Timothy W D; Manley, Tim R; Anderson, Rayna M; McEwan, John C
2014-01-01
Accurate pedigree information is critical to animal breeding systems to ensure the highest rate of genetic gain and management of inbreeding. The abundance of available genomic data, together with development of high throughput genotyping platforms, means that single nucleotide polymorphisms (SNPs) are now the DNA marker of choice for genomic selection studies. Furthermore the superior qualities of SNPs compared to microsatellite markers allows for standardization between laboratories; a property that is crucial for developing an international set of markers for traceability studies. The objective of this study was to develop a high throughput SNP assay for use in the New Zealand sheep industry that gives accurate pedigree assignment and will allow a reduction in breeder input over lambing. This required two phases of development--firstly, a method of extracting quality DNA from ear-punch tissue performed in a high throughput cost efficient manner and secondly a SNP assay that has the ability to assign paternity to progeny resulting from mob mating. A likelihood based approach to infer paternity was used where sires with the highest LOD score (log of the ratio of the likelihood given parentage to likelihood given non-parentage) are assigned. An 84 "parentage SNP panel" was developed that assigned, on average, 99% of progeny to a sire in a problem where there were 3,000 progeny from 120 mob mated sires that included numerous half sib sires. In only 6% of those cases was there another sire with at least a 0.02 probability of paternity. Furthermore dam information (either recorded, or by genotyping possible dams) was absent, highlighting the SNP test's suitability for paternity testing. Utilization of this parentage SNP assay will allow implementation of progeny testing into large commercial farms where the improved accuracy of sire assignment and genetic evaluations will increase genetic gain in the sheep industry.
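The LOD score used for assignment is log10 of the ratio of the offspring-genotype likelihood given the candidate sire to the likelihood given a random sire, accumulated over SNPs. The sketch below works this out for biallelic SNPs with no dam information, drawing the untransmitted allele from population frequencies; a simplified model (constant error blending, no dam genotypes), not the production pipeline.

    import math

    def paternity_lod(offspring, candidate_sire, freqs, err=0.01):
        """LOD score for a candidate sire: log10 of the likelihood of the
        offspring genotypes given the candidate over the likelihood given
        a random sire from the population. Genotypes are 'A'-allele counts
        (0/1/2); freqs are population frequencies of 'A'. The dam is
        unknown, so the untransmitted allele is drawn from the population;
        'err' blends in a small genotyping-error probability so a single
        mismatching SNP is not infinitely damning."""
        lod = 0.0
        for g_o, g_s, p in zip(offspring, candidate_sire, freqs):
            s = g_s / 2.0  # P(candidate sire transmits 'A')
            mendel = {2: s * p,
                      1: s * (1 - p) + (1 - s) * p,
                      0: (1 - s) * (1 - p)}[g_o]
            random_sire = {2: p * p,
                           1: 2 * p * (1 - p),
                           0: (1 - p) ** 2}[g_o]
            lod += math.log10(((1 - err) * mendel + err * random_sire) / random_sire)
        return lod

    # Three illustrative SNPs: offspring AA, Aa, aa; candidate AA, Aa, Aa.
    print(round(paternity_lod([2, 1, 0], [2, 1, 1], [0.4, 0.4, 0.4]), 2))  # ~0.33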
Agrawal, Swati; Cerdeira, Ana Sofia; Redman, Christopher; Vatish, Manu
2018-02-01
Preeclampsia is a major cause of morbidity and mortality worldwide. Numerous candidate biomarkers have been proposed for the diagnosis and prediction of preeclampsia. Measurement of maternal circulating angiogenesis biomarkers as the ratio of sFlt-1 (soluble FMS-like tyrosine kinase-1, an antiangiogenic factor) to PlGF (placental growth factor, an angiogenic factor) reflects the antiangiogenic balance that characterizes incipient or overt preeclampsia. The ratio increases before the onset of the disease and thus may help in predicting preeclampsia. We conducted a meta-analysis to explore the predictive accuracy of the sFlt-1/PlGF ratio in preeclampsia. We included 15 studies with 534 cases with preeclampsia and 19,587 controls. The ratio has a pooled sensitivity of 80% (95% confidence interval, 0.68-0.88), specificity of 92% (95% confidence interval, 0.87-0.96), positive likelihood ratio of 10.5 (95% confidence interval, 6.2-18.0), and negative likelihood ratio of 0.22 (95% confidence interval, 0.13-0.35) for predicting preeclampsia in both high- and low-risk patients. Most of the studies did not distinguish between early- and late-onset disease, and therefore that analysis could not be performed. The ratio can prove to be a valuable screening tool for preeclampsia and may also help in decision-making, treatment stratification, and better resource allocation. © 2017 American Heart Association, Inc.
A spatially explicit capture-recapture estimator for single-catch traps.
Distiller, Greg; Borchers, David L
2015-11-01
Single-catch traps are frequently used in live-trapping studies of small mammals. Thus far, a likelihood for single-catch traps has proven elusive and usually the likelihood for multicatch traps is used for spatially explicit capture-recapture (SECR) analyses of such data. Previous work found the multicatch likelihood to provide a robust estimator of average density. We build on a recently developed continuous-time model for SECR to derive a likelihood for single-catch traps. We use this to develop an estimator based on observed capture times and compare its performance by simulation to that of the multicatch estimator for various scenarios with nonconstant density surfaces. While the multicatch estimator is found to be a surprisingly robust estimator of average density, its performance deteriorates with high trap saturation and increasing density gradients. Moreover, it is found to be a poor estimator of the height of the detection function. By contrast, the single-catch estimators of density, distribution, and detection function parameters are found to be unbiased or nearly unbiased in all scenarios considered. This gain comes at the cost of higher variance. If there is no interest in interpreting the detection function parameters themselves, and if density is expected to be fairly constant over the survey region, then the multicatch estimator performs well with single-catch traps. However if accurate estimation of the detection function is of interest, or if density is expected to vary substantially in space, then there is merit in using the single-catch estimator when trap saturation is above about 60%. The estimator's performance is improved if care is taken to place traps so as to span the range of variables that affect animal distribution. As a single-catch likelihood with unknown capture times remains intractable for now, researchers using single-catch traps should aim to incorporate timing devices with their traps.
Quasar microlensing models with constraints on the Quasar light curves
NASA Astrophysics Data System (ADS)
Tie, S. S.; Kochanek, C. S.
2018-01-01
Quasar microlensing analyses implicitly generate a model of the variability of the source quasar. The implied source variability may be unrealistic, yet its likelihood is generally not evaluated. We used the damped random walk (DRW) model for quasar variability to evaluate the likelihood of the source variability and applied the revised algorithm to a microlensing analysis of the lensed quasar RX J1131-1231. We compared estimates of the size of the quasar disc and the average stellar mass of the lens galaxy with and without applying the DRW likelihoods for the source variability model and found no significant effect on the estimated physical parameters. The most likely explanation is that unrealistic source light-curve models are generally associated with poor microlensing fits that already make a negligible contribution to the probability distributions of the derived parameters.
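A DRW likelihood of the kind used here is a multivariate-Gaussian likelihood with an exponential covariance kernel, sigma^2 * exp(-|dt|/tau). A compact sketch for an irregularly sampled light curve follows (direct O(n^3) evaluation for clarity; real analyses use the O(n) recursive form, and the data below are synthetic):

    import numpy as np

    def drw_log_likelihood(t, mag, sigma2, tau, mean):
        """Log-likelihood of a light curve under a damped random walk:
        a Gaussian process with covariance sigma2 * exp(-|t_i - t_j| / tau).
        Dense evaluation, fine for short light curves."""
        resid = mag - mean
        cov = sigma2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
        cov += 1e-10 * np.eye(len(t))  # jitter for numerical stability
        _, logdet = np.linalg.slogdet(cov)
        alpha = np.linalg.solve(cov, resid)
        return -0.5 * (resid @ alpha + logdet + len(t) * np.log(2 * np.pi))

    # Irregularly sampled synthetic light curve.
    rng = np.random.default_rng(3)
    t = np.sort(rng.uniform(0, 1000, 200))  # days
    true_cov = 0.04 * np.exp(-np.abs(t[:, None] - t[None, :]) / 150.0)
    mag = 19.0 + rng.multivariate_normal(np.zeros(200), true_cov)
    # The likelihood should peak near the generating timescale (tau = 150).
    for tau in (50.0, 150.0, 450.0):
        print(tau, round(drw_log_likelihood(t, mag, 0.04, tau, 19.0), 1))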
Improved measurement of the form factors in the decay Λc+ → Λe+νe.
Hinson, J W; Huang, G S; Lee, J; Miller, D H; Pavlunin, V; Rangarajan, R; Sanghi, B; Shibata, E I; Shipsey, I P J; Cronin-Hennessy, D; Park, C S; Park, W; Thayer, J B; Thorndike, E H; Coan, T E; Gao, Y S; Liu, F; Stroynowski, R; Artuso, M; Boulahouache, C; Blusk, S; Dambasuren, E; Dorjkhaidav, O; Mountain, R; Muramatsu, H; Nandakumar, R; Skwarnicki, T; Stone, S; Wang, J C; Csorna, S E; Danko, I; Bonvicini, G; Cinabro, D; Dubrovin, M; McGee, S; Bornheim, A; Lipeles, E; Pappas, S P; Shapiro, A; Sun, W M; Weinstein, A J; Briere, R A; Chen, G P; Ferguson, T; Tatishvili, G; Vogel, H; Watkins, M E; Adam, N E; Alexander, J P; Berkelman, K; Boisvert, V; Cassel, D G; Duboscq, J E; Ecklund, K M; Ehrlich, R; Galik, R S; Gibbons, L; Gittelman, B; Gray, S W; Hartill, D L; Heltsley, B K; Hsu, L; Jones, C D; Kandaswamy, J; Kreinick, D L; Magerkurth, A; Mahlke-Krüger, H; Meyer, T O; Mistry, N B; Patterson, J R; Peterson, D; Pivarski, J; Richichi, S J; Riley, D; Sadoff, A J; Schwarthoff, H; Shepherd, M R; Thayer, J G; Urner, D; Wilksen, T; Warburton, A; Weinberger, M; Athar, S B; Avery, P; Breva-Newell, L; Potlia, V; Stoeck, H; Yelton, J; Benslama, K; Cawlfield, C; Eisenstein, B I; Gollin, G D; Karliner, I; Lowrey, N; Plager, C; Sedlack, C; Selen, M; Thaler, J J; Williams, J; Edwards, K W; Besson, D; Anderson, S; Frolov, V V; Gong, D T; Kubota, Y; Li, S Z; Poling, R; Smith, A; Stepaniak, C J; Urheim, J; Metreveli, Z; Seth, K K; Tomaradze, A; Zweber, P; Ahmed, S; Alam, M S; Ernst, J; Jian, L; Saleem, M; Wappler, F; Arms, K; Eckhart, E; Gan, K K; Gwon, C; Honscheid, K; Kagan, H; Kass, R; Pedlar, T K; von Toerne, E; Severini, H; Skubic, P; Dytman, S A; Mueller, J A; Nam, S; Savinov, V
2005-05-20
Using the CLEO detector at the Cornell Electron Storage Ring, we have studied the distribution of kinematic variables in the decay Λc+ → Λe+νe. By performing a four-dimensional maximum likelihood fit, we determine the form factor ratio, R = f2/f1 = -0.31 ± 0.05(stat) ± 0.04(syst), the pole mass, Mpole = [2.21 ± 0.08(stat) ± 0.14(syst)] GeV/c², and the decay asymmetry parameter of the Λc+, α(Λc) = -0.86 ± 0.03(stat) ± 0.02(syst), for q² = 0.67 (GeV/c²)². We compare the angular distributions of the Λc+ and Λ̄c− and find no evidence for CP violation: A(Λc) = (α(Λc) + α(Λ̄c))/(α(Λc) − α(Λ̄c)) = 0.00 ± 0.03(stat) ± 0.01(syst) ± 0.02, where the third error is from the uncertainty in the world average of the CP-violating parameter, A(Λ), for Λ → pπ−.
NASA Technical Reports Server (NTRS)
Quir, Kevin J.; Gin, Jonathan W.; Nguyen, Danh H.; Nguyen, Huy; Nakashima, Michael A.; Moision, Bruce E.
2012-01-01
A decoder was developed that decodes a serially concatenated pulse position modulation (SCPPM) encoded information sequence. The decoder takes as input a sequence of four-bit log-likelihood ratios (LLRs) for each PPM slot in a codeword via a XAUI 10-Gb/s quad optical fiber interface. If the decoder is unavailable, it passes the LLRs on to the next decoder via a XAUI 10-Gb/s quad optical fiber interface. Otherwise, it decodes the sequence and outputs the information bits through a 1-Gb/s Ethernet UDP/IP (User Datagram Protocol/Internet Protocol) interface. The throughput for a single decoder unit is 150 Mb/s at an average of four decoding iterations; by connecting a number of decoder units in series, a decoding rate equal to the aggregate rate is achieved. The unit is controlled through a 1-Gb/s Ethernet UDP/IP interface. This ground station decoder was developed to demonstrate a deep space optical communication link capability, and is unique in its scalable design, which achieves real-time SCPPM decoding at the aggregate data rate.
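The decoder's four-bit LLR input implies a quantization step at the receiver: clip each soft value to a symmetric range and map it onto 16 levels. A small sketch of one common way to do this (uniform quantizer with an assumed clip level; the flight system's actual mapping is not specified here):

    import numpy as np

    def quantize_llr_4bit(llr, clip=8.0):
        """Map real-valued log-likelihood ratios onto 16 signed levels
        (-8..7), i.e., four-bit decoder inputs. Uniform quantization after
        clipping; step size = clip / 8. The clip level is an assumption."""
        step = clip / 8.0
        q = np.floor(llr / step)
        return np.clip(q, -8, 7).astype(np.int8)

    llrs = np.array([-12.3, -3.1, -0.2, 0.0, 0.4, 2.7, 9.9])
    print(quantize_llr_4bit(llrs))  # [-8 -4 -1  0  0  2  7]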
ML Frame Synchronization for OFDM Systems Using a Known Pilot and Cyclic Prefixes
NASA Astrophysics Data System (ADS)
Huh, Heon
Orthogonal frequency-division multiplexing (OFDM) is a popular air interface technology that is adopted as a standard modulation scheme for 4G communication systems owing to its excellent spectral efficiency. For OFDM systems, synchronization problems have received much attention along with peak-to-average power ratio (PAPR) reduction. In addition to frequency offset estimation, frame synchronization is a challenging problem that must be solved to achieve optimal system performance. In this paper, we present a maximum likelihood (ML) frame synchronizer for OFDM systems. The synchronizer exploits a synchronization word and cyclic prefixes together to improve the synchronization performance. Numerical results show that the performance of the proposed frame synchronizer is better than that of conventional schemes. The proposed synchronizer can be used as a reference for evaluating the performance of other suboptimal frame synchronizers. We also modify the proposed frame synchronizer to reduce the implementation complexity and propose a near-ML synchronizer for time-varying fading channels.
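The classic ML timing metric that such a synchronizer builds on correlates each sample with its copy one FFT length later, summed over the cyclic-prefix window; the known synchronization word adds a second, data-independent correlation. The sketch below shows just the cyclic-prefix part (a simplified van de Beek-style metric: correlation magnitude only, no frequency offset, AWGN channel), not the paper's full synchronizer:

    import numpy as np

    rng = np.random.default_rng(5)

    def cp_timing_metric(r: np.ndarray, n_fft: int, n_cp: int) -> np.ndarray:
        """Cyclic-prefix correlation metric: for each candidate start theta,
        correlate the CP window with its copy one FFT length later. The true
        symbol start maximizes the metric."""
        n = len(r) - n_fft - n_cp + 1
        metric = np.empty(n)
        for theta in range(n):
            seg = r[theta:theta + n_cp]
            rep = r[theta + n_fft:theta + n_fft + n_cp]
            metric[theta] = np.abs(np.sum(seg * np.conj(rep)))
        return metric

    # One OFDM symbol (QPSK subcarriers) with cyclic prefix after a noise lead-in.
    n_fft, n_cp, lead = 64, 16, 40
    data = (rng.choice([-1, 1], n_fft) + 1j * rng.choice([-1, 1], n_fft)) / np.sqrt(2)
    sym = np.fft.ifft(data) * np.sqrt(n_fft)
    tx = np.concatenate([rng.standard_normal(lead) * 0.1, sym[-n_cp:], sym])
    rx = tx + 0.05 * (rng.standard_normal(len(tx)) + 1j * rng.standard_normal(len(tx)))
    print("estimated start:", cp_timing_metric(rx, n_fft, n_cp).argmax(), "(true:", lead, ")")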
Constructing STR multiplexes for individual identification of Hungarian red deer.
Szabolcsi, Zoltan; Egyed, Balazs; Zenke, Petra; Padar, Zsolt; Borsy, Adrienn; Steger, Viktor; Pasztor, Erzsebet; Csanyi, Sandor; Buzas, Zsuzsanna; Orosz, Laszlo
2014-07-01
Red deer is the most valuable game species of the fauna in Hungary, and there is a strong need for genetic identification of individuals. For this purpose, 10 tetranucleotide STR markers were developed and amplified in two 5-plex systems. The study presented here includes the flanking region sequence analysis and the allele nomenclature of the 10 loci, as well as the PCR optimization of DeerPlex I and II. LD pairwise tests and cross-species similarity analyses showed the 10 loci to be independently inherited. Considerable levels of genetic difference between two subpopulations were recorded, with an F(ST) of 0.034 using AMOVA. The average probability of identity (PI(ave)) was 2.6736 × 10^-15. This low value of PI(ave) nearly eliminates false identification. An illegal hunting case solved by DeerPlex is described herein. The calculated likelihood ratio (LR) illustrates the potential of the 10 red deer microsatellite markers for forensic investigations. © 2014 American Academy of Forensic Sciences.
A maximum likelihood convolutional decoder model vs experimental data comparison
NASA Technical Reports Server (NTRS)
Chen, R. Y.
1979-01-01
This article compares a maximum likelihood convolutional decoder (MCD) prediction model with the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model was used to develop a subroutine, utilized by the Telemetry Analysis Program (TAP), that computes the MCD bit error rate for a given signal-to-noise ratio. The results indicate that TAP predictions agree quite well with the experimental measurements. An optimal modulation index can also be found with TAP.
Shen, Yongchun; Pang, Caishuang; Wu, Yanqiu; Li, Diandian; Wan, Chun; Liao, Zenglin; Yang, Ting; Chen, Lei; Wen, Fuqiang
2016-06-01
The usefulness of bronchoalveolar lavage fluid (BALF) CD4/CD8 ratio for diagnosing sarcoidosis has been reported in many studies with variable results. Therefore, we performed a meta-analysis to estimate the overall diagnostic accuracy of BALF CD4/CD8 ratio based on the bulk of published evidence. Studies published prior to June 2015 and indexed in PubMed, OVID, Web of Science, Scopus and other databases were evaluated for inclusion. Data on sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR) were pooled from included studies. Summary receiver operating characteristic (SROC) curves were used to summarize overall test performance. Deeks's funnel plot was used to detect publication bias. Sixteen publications with 1885 subjects met our inclusion criteria and were included in this meta-analysis. Summary estimates of the diagnostic performance of the BALF CD4/CD8 ratio were as follows: sensitivity, 0.70 (95%CI 0.64-0.75); specificity, 0.83 (95%CI 0.78-0.86); PLR, 4.04 (95%CI 3.13-5.20); NLR, 0.36 (95%CI 0.30-0.44); and DOR, 11.17 (95%CI 7.31-17.07). The area under the SROC curve was 0.84 (95%CI 0.81-0.87). There was no evidence of publication bias. Measuring the BALF CD4/CD8 ratio may assist in the diagnosis of sarcoidosis when interpreted in parallel with other diagnostic factors. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
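The pooled indices above are linked by simple identities: PLR = sensitivity/(1 − specificity), NLR = (1 − sensitivity)/specificity, and DOR = PLR/NLR. A short Python check using the pooled point estimates (the reported values come from a random-effects model, so they differ slightly):

```python
def diagnostic_summary(sens, spec):
    """Derive likelihood ratios and the diagnostic odds ratio from
    sensitivity and specificity (point estimates only; pooled CIs
    require the full meta-analytic model)."""
    plr = sens / (1 - spec)
    nlr = (1 - sens) / spec
    dor = plr / nlr
    return plr, nlr, dor

plr, nlr, dor = diagnostic_summary(0.70, 0.83)
print(f"PLR={plr:.2f}, NLR={nlr:.2f}, DOR={dor:.2f}")
# PLR≈4.12, NLR≈0.36, DOR≈11.39 — close to the pooled estimates above
```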
Correlation of diffusion and perfusion MRI with Ki-67 in high-grade meningiomas.
Ginat, Daniel T; Mangla, Rajiv; Yeaney, Gabrielle; Wang, Henry Z
2010-12-01
Atypical and anaplastic meningiomas have a greater likelihood of recurrence than benign meningiomas. The risk for recurrence is often estimated using the Ki-67 labeling index. The purpose of this study was to determine the correlation between Ki-67 and regional cerebral blood volume (rCBV) and between Ki-67 and apparent diffusion coefficient (ADC) in atypical and anaplastic meningiomas. A retrospective review of the advanced imaging and immunohistochemical characteristics of atypical and anaplastic meningiomas was performed. The relative minimum ADC, relative maximum rCBV, and specimen Ki-67 index were measured. Pearson's correlation was used to compare these parameters. There were 23 cases with available ADC maps and 20 cases with available rCBV maps. The average Ki-67 among the cases with ADC maps and rCBV maps was 17.6% (range, 5-38%) and 16.7% (range, 3-38%), respectively. The mean minimum ADC ratio was 0.91 (SD, 0.26) and the mean maximum rCBV ratio was 22.5 (SD, 7.9). There was a significant positive correlation between maximum rCBV and Ki-67 (Pearson's correlation, 0.69; p = 0.00038). However, there was no significant correlation between minimum ADC and Ki-67 (Pearson's correlation, -0.051; p = 0.70). Maximum rCBV correlated significantly with Ki-67 in high-grade meningiomas.
Use of Midlevel Practitioners to Achieve Labor Cost Savings in the Primary Care Practice of an MCO
Roblin, Douglas W; Howard, David H; Becker, Edmund R; Kathleen Adams, E; Roberts, Melissa H
2004-01-01
Objective To estimate the savings in labor costs per primary care visit that might be realized from increased use of physician assistants (PAs) and nurse practitioners (NPs) in the primary care practices of a managed care organization (MCO). Study Setting/Data Sources Twenty-six capitated primary care practices of a group-model MCO. Data on approximately two million visits provided by 206 practitioners were extracted from computerized visit records for 1997–2000. Computerized payroll ledgers were the source of annual labor costs per practice for 1997–2000. Study Design The likelihood of a visit being attended by a PA/NP versus an MD was modeled using logistic regression, with practice fixed effects, by department (adult medicine, pediatrics) and year. Parameter estimates and practice fixed effects from these regressions were used to predict the proportion of PA/NP visits per practice per year given a standard case mix. Least squares regressions, with practice fixed effects, were used to estimate the association of this standardized predicted proportion of PA/NP visits with average annual practitioner and total labor costs per visit, controlling for other practice characteristics. Results On average, PAs/NPs attended one in three adult medicine visits and one in five pediatric medicine visits. The likelihood of a PA/NP visit was significantly higher than average among patients presenting with minor acute illness (e.g., acute pharyngitis). In adult medicine, the likelihood of a PA/NP visit was lower than average among older patients. Practitioner labor costs per visit and total labor costs per visit were lower (p<.01 and p=.08, respectively) among practices with greater use of PAs/NPs, standardized for case mix. Conclusions Primary care practices that used more PAs/NPs in care delivery realized lower practitioner labor costs per visit than practices that used fewer. Future research should investigate the cost savings and cost-effectiveness potential of delivery designs that change staffing mix and division of labor among clinical disciplines. PMID:15149481
Rodriguez, Carlos J; Jin, Zhezhen; Schwartz, Joseph E; Turner-Lloveras, Daniel; Sacco, Ralph L; Di Tullio, Marco R; Homma, Shunichi
2013-05-01
Little information is available about the relationship of socioeconomic status (SES) to blunted nocturnal ambulatory blood pressure (ABP) dipping among Hispanics and whether this relationship differs by race. We sought to characterize ABP nondipping and its determinants in a sample of Hispanics. We enrolled 180 Hispanic participants not on antihypertensive medications. SES was defined by years of educational attainment. All participants underwent 24-hour ABP monitoring. A decline of <10% from average awake to average asleep systolic BP was considered nondipping. The mean age of the cohort was 67.1 ± 8.7 years, the mean educational level was 9.4 ± 4.4 years, and 58.9% of the cohort was female. The cohort comprised 78.3% Caribbean Hispanics, with the rest from Mexico and Central/South America; 41.4% self-identified as white Hispanic, 34.4% self-identified as black Hispanic, and 24.4% did not racially self-identify. The percentage of nondippers was 57.8%. Educational attainment was significantly higher among dippers than nondippers (10.5 years vs. 8.6 years; P < 0.01). In multivariable analyses, each 1-year increase in education was associated with a 9% reduction in the likelihood of being a nondipper (odds ratio [OR], 0.91; 95% confidence interval [CI], 0.84-0.98; P = 0.01). The odds of being a nondipper were significantly greater for black Hispanics than for white Hispanics (OR, 2.83; 95% CI, 1.29-6.23; P = 0.005). Higher SES was significantly protective against nondipping in white Hispanics but not black Hispanics. These results document a substantial prevalence of nondipping in a cohort of predominantly normotensive Hispanics. Dipping status varied significantly by race. Lower SES is significantly associated with nondipping status, and race potentially affects this relation.
A proposed selection index for feedlot profitability based on estimated breeding values.
van der Westhuizen, R R; van der Westhuizen, J
2009-04-22
It is generally accepted that feed intake and growth (gain) are the most important economic components when calculating profitability in a growth test or feedlot. We developed a single post-weaning growth (feedlot) index based on the economic values of the different components. Variance components, heritabilities and genetic correlations for and between initial weight (IW), final weight (FW), feed intake (FI), and shoulder height (SHD) were estimated by multitrait restricted maximum likelihood procedures. The estimated breeding values (EBVs) and the economic values for IW, FW and FI were used in a selection index to estimate a post-weaning or feedlot profitability value. Heritabilities for IW, FW, FI, and SHD were 0.41, 0.40, 0.33, and 0.51, respectively. The highest genetic correlations were 0.78 (between IW and FW) and 0.70 (between FI and FW). EBVs were used in a selection index to calculate a single economic value for each animal. This economic value indicates the gross profitability value, or gross test value (GTV), of the animal in a post-weaning growth test. GTVs varied between -R192.17 and R231.38, with an average of R9.31 and a standard deviation of R39.96. The Pearson correlations between EBVs (for production and efficiency traits) and GTV ranged from -0.51 to 0.68. The lowest correlation (closest to zero) was 0.26, between the Kleiber ratio and GTV. Correlations of 0.68 and -0.51 were estimated between average daily gain and GTV and between feed conversion ratio and GTV, respectively. These results show that it is possible to select for GTV. The selection index can benefit feedlots by enabling selection of offspring of bulls with high GTVs to maximize profitability.
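The index construction amounts to a weighted sum of EBVs. The Python sketch below shows the arithmetic with made-up economic weights (Rand per kg); the study's actual weights are not reproduced here.

```python
import numpy as np

def gross_test_value(ebvs, economic_values):
    """Selection index: weight each animal's EBVs by per-unit economic
    values and sum them into a single profitability figure (in Rand).
    The weights used below are illustrative, not those of the study."""
    return ebvs @ economic_values

# columns: initial weight, final weight, feed intake (EBVs per animal)
ebvs = np.array([[ 5.0, 12.0,  30.0],
                 [-2.0,  8.0, -15.0]])
econ = np.array([-8.0, 9.0, -1.2])   # R/kg: purchase cost, sale price, feed cost
print(gross_test_value(ebvs, econ))  # one GTV per animal
```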
Haughton, Jannett; Gregorio, David; Pérez-Escamilla, Rafael
2011-01-01
This retrospective study aimed to identify factors associated with breastfeeding duration among women enrolled in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) of Hartford, Connecticut. The authors included mothers whose children were younger than 5 years and had stopped breastfeeding (N = 155). Women who had planned their pregnancies were twice as likely as those who had not to breastfeed for more than 6 months (odds ratio, 2.15; 95% confidence interval, 1.00–4.64). Each additional year of maternal age was associated with a 9% increase in the likelihood of breastfeeding for more than 6 months (odds ratio, 1.09; 95% confidence interval, 1.02–1.17). Time in the United States was inversely associated with the likelihood of breastfeeding for more than 6 months (odds ratio, 0.96; 95% confidence interval, 0.92–0.99). Return to work, sore nipples, lack of access to breast pumps, and free formula provided by WIC were identified as breastfeeding barriers. Findings can help WIC improve its breastfeeding promotion efforts. PMID:20689103
Averaged kick maps: less noise, more signal…and probably less bias
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pražnikar, Jure; Afonine, Pavel V.; Gunčar, Gregor
2009-09-01
Averaged kick maps are the sum of a series of individual kick maps, where each map is calculated from atomic coordinates modified by random shifts. These maps offer the possibility of an improved and less model-biased map interpretation. Use of reliable density maps is crucial for rapid and successful crystal structure determination. Here, the averaged kick (AK) map approach is investigated, its application is generalized and it is compared with other map-calculation methods. AK maps are the sum of a series of kick maps, where each kick map is calculated from atomic coordinates modified by random shifts. As such, they are a numerical analogue of maximum-likelihood maps. AK maps can be unweighted or maximum-likelihood (σ_A) weighted. Analysis shows that they are comparable and correspond better to the final model than σ_A and simulated-annealing maps. The AK maps were challenged by a difficult structure-validation case, in which they were able to clarify the problematic region in the density without the need for model rebuilding. The conclusion is that AK maps can be useful throughout the entire progress of crystal structure determination, offering the possibility of improved map interpretation.
Grosu, Horiana B; Vial-Rodriguez, Macarena; Vakil, Erik; Casal, Roberto F; Eapen, George A; Morice, Rodolfo; Stewart, John; Sarkiss, Mona G; Ost, David E
2017-08-01
During diagnostic thoracoscopy, talc pleurodesis after biopsy is appropriate if the probability of malignancy is sufficiently high. Findings on direct visual assessment of the pleura during thoracoscopy, rapid onsite evaluation (ROSE) of touch preparations (touch preps) of thoracoscopic biopsy specimens, and preoperative imaging may help predict the likelihood of malignancy; however, data on the performance of these methods are limited. To assess the performance of ROSE of touch preps, direct visual assessment of the pleura during thoracoscopy, and preoperative imaging in diagnosing malignancy. Patients who underwent ROSE of touch preps during thoracoscopy for suspected malignancy were retrospectively reviewed. Malignancy was diagnosed on the basis of final pathologic examination of pleural biopsy specimens. ROSE results were categorized as malignant, benign, or atypical cells. Visual assessment results were categorized as tumor studding present or absent. Positron emission tomography (PET) and computed tomography (CT) findings were categorized as abnormal or normal pleura. Likelihood ratios were calculated for each category of test result. The study included 44 patients, 26 (59%) with a final pathologic diagnosis of malignancy. Likelihood ratios were as follows: for ROSE of touch preps: malignant, 1.97 (95% confidence interval [CI], 0.90-4.34); atypical cells, 0.69 (95% CI, 0.21-2.27); benign, 0.11 (95% CI, 0.01-0.93); for direct visual assessment: tumor studding present, 3.63 (95% CI, 1.32-9.99); tumor studding absent, 0.24 (95% CI, 0.09-0.64); for PET: abnormal pleura, 9.39 (95% CI, 1.42-62); normal pleura, 0.24 (95% CI, 0.11-0.52); and for CT: abnormal pleura, 13.15 (95% CI, 1.93-89.63); normal pleura, 0.28 (95% CI, 0.15-0.54). A finding of no malignant cells on ROSE of touch preps during thoracoscopy lowers the likelihood of malignancy significantly, whereas finding of tumor studding on direct visual assessment during thoracoscopy only moderately increases the likelihood of malignancy. A positive finding on PET and/or CT increases the likelihood of malignancy significantly in a moderate-risk patient group and can be used as an adjunct to predict malignancy before pleurodesis.
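Likelihood ratios of this kind, with their confidence intervals, can be computed from the underlying 2x2 counts using the standard log-scale (delta-method) formula. A minimal Python sketch with hypothetical counts, not the study data:

```python
import math

def likelihood_ratio_ci(tp, fn, fp, tn, z=1.96):
    """Positive likelihood ratio with a 95% CI on the log scale.

    Var(ln LR+) ≈ (1 - sens)/TP + spec/FP  (delta-method approximation).
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)
    se_log = math.sqrt((1 - sens) / tp + spec / fp)
    lo = math.exp(math.log(lr_pos) - z * se_log)
    hi = math.exp(math.log(lr_pos) + z * se_log)
    return lr_pos, lo, hi

# illustrative counts in the spirit of the tumor-studding comparison
print(likelihood_ratio_ci(tp=18, fn=8, fp=3, tn=15))
```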
Statistical inference for tumor growth inhibition T/C ratio.
Wu, Jianrong
2010-09-01
The tumor growth inhibition T/C ratio is commonly used to quantify treatment effects in drug-screening tumor xenograft experiments. The T/C ratio is converted to an antitumor activity rating using an arbitrary cutoff point, often without any formal statistical inference. Here, we applied a nonparametric bootstrap method and a small-sample likelihood ratio statistic to make statistical inferences about the T/C ratio, including both hypothesis testing and confidence interval estimation. Furthermore, sample size and power are discussed for the statistical design of tumor xenograft experiments. Tumor xenograft data from an actual experiment were analyzed to illustrate the application.
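A percentile-bootstrap confidence interval for the T/C ratio of mean tumor volumes can be sketched in a few lines of Python. The data below are hypothetical, and the resampling scheme is the generic one, not necessarily the exact procedure of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_tc_ratio(treated, control, n_boot=10000, alpha=0.05):
    """Nonparametric bootstrap percentile CI for the T/C ratio of mean
    tumor volumes: resample each arm with replacement, recompute the
    ratio of means, and take the empirical quantiles."""
    treated, control = np.asarray(treated), np.asarray(control)
    ratios = np.empty(n_boot)
    for i in range(n_boot):
        t = rng.choice(treated, size=len(treated), replace=True)
        c = rng.choice(control, size=len(control), replace=True)
        ratios[i] = t.mean() / c.mean()
    lo, hi = np.quantile(ratios, [alpha / 2, 1 - alpha / 2])
    return treated.mean() / control.mean(), lo, hi

# hypothetical tumor volumes (mm^3)
print(bootstrap_tc_ratio([120, 95, 140, 110], [300, 260, 340, 280]))
```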
Bayesian framework for the evaluation of fiber evidence in a double murder--a case report.
Causin, Valerio; Schiavone, Sergio; Marigo, Antonio; Carresi, Pietro
2004-05-10
Fiber evidence found on a suspect vehicle was the only useful trace available to reconstruct the dynamics of the transportation of two corpses. Optical microscopy, UV-Vis microspectrophotometry and infrared analysis were employed to compare fibers recovered in the trunk of a car with those of the blankets composing the wrapping in which the victims had been hidden. A "pseudo-1:1" taping permitted reconstruction of the spatial distribution of the traces and further strengthened the support for one of the hypotheses. The likelihood ratio (LR) was calculated in order to quantify the support given by the forensic evidence to the explanations proposed. A generalization of the likelihood ratio equation to cases analogous to this one has been derived. Fibers were the only traces that helped corroborate the crime scenario, in the absence of any DNA, fingerprint or ballistic evidence.
Variance change point detection for fractional Brownian motion based on the likelihood ratio test
NASA Astrophysics Data System (ADS)
Kucharczyk, Daniel; Wyłomańska, Agnieszka; Sikora, Grzegorz
2018-01-01
Fractional Brownian motion is one of the main stochastic processes used to describe the long-range dependence phenomenon in self-similar processes. For many real time series, the characteristics of the data change significantly over time. Such behaviour can be observed in many applications, including physical and biological experiments. In this paper, we present a new technique for detecting a critical change point in cases where the data under consideration are driven by fractional Brownian motion with a time-changed diffusion coefficient. The proposed methodology is based on the likelihood ratio approach and extends a similar methodology used for Brownian motion, a process with independent increments. We also propose a statistical test for the significance of the estimated critical point. In addition, an extensive simulation study is provided to assess the performance of the proposed method.
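For the simpler independent-increment (Brownian-type) case that the paper extends, a likelihood-ratio scan for a single variance change in a zero-mean Gaussian sequence looks like this in Python; the change point estimate is where the statistic is maximized:

```python
import numpy as np

def variance_changepoint_lr(x):
    """Likelihood-ratio scan for one variance change in a zero-mean
    Gaussian sequence (independent-increment case; the paper extends
    this idea to fractional Brownian motion).

    Statistic at split k:  n*log(s0) - k*log(s1) - (n-k)*log(s2),
    where s0, s1, s2 are the pooled and segment variance MLEs.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    total = np.sum(x**2)
    cum = np.cumsum(x**2)
    best_k, best_stat = None, -np.inf
    for k in range(2, n - 2):
        s1, s2 = cum[k - 1] / k, (total - cum[k - 1]) / (n - k)
        stat = n * np.log(total / n) - k * np.log(s1) - (n - k) * np.log(s2)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1.0, 300), rng.normal(0, 2.0, 300)])
print(variance_changepoint_lr(x))  # change point detected near 300
```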
Man, Wanrong; Hu, Jianqiang; Zhao, Zhijing; Zhang, Mingming; Wang, Tingting; Lin, Jie; Duan, Yu; Wang, Ling; Wang, Haichang; Sun, Dongdong; Li, Yan
2016-09-01
The instantaneous wave-free ratio (iFR) is a new vasodilator-free index of coronary stenosis severity. The aim of this meta-analysis was to assess the diagnostic performance of iFR for the evaluation of coronary stenosis severity, with fractional flow reserve as the reference standard. We searched PubMed, EMBASE, CENTRAL, ProQuest, Web of Science, and the International Clinical Trials Registry Platform (ICTRP) for publications concerning the diagnostic value of iFR. We used a random-effects model to synthesize the available data on sensitivity, specificity, positive likelihood ratio (LR+), negative likelihood ratio (LR-), and diagnostic odds ratio (DOR). Overall test performance was summarized by the summary receiver operating characteristic (sROC) curve and the area under the curve (AUC). Eight studies with 1611 subjects were included in the meta-analysis. The pooled sensitivity, specificity, LR+, LR-, and DOR for iFR were, respectively, 73.3% (70.1-76.2%), 86.4% (84.3-88.3%), 5.71 (4.43-7.37), 0.29 (0.22-0.38), and 20.54 (16.11-26.20). The area under the sROC curve for iFR was 0.8786. No publication bias was identified. The available evidence suggests that iFR may be a new, simple, and promising technology for the physiological assessment of coronary stenosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
NASA Astrophysics Data System (ADS)
Edwards, James P.; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel
2018-04-01
We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).
Multibaseline gravitational wave radiometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talukder, Dipongkar; Bose, Sukanta; Mitra, Sanjit
2011-03-15
We present a statistic for the detection of stochastic gravitational wave backgrounds (SGWBs) using radiometry with a network of multiple baselines. We also quantitatively compare the sensitivities of existing baselines and their network to SGWBs. We assess how the measurement accuracy of signal parameters, e.g., the sky position of a localized source, can improve when using a network of baselines, as compared to any of the single participating baselines. The search statistic itself is derived from the likelihood ratio of the cross correlation of the data across all possible baselines in a detector network and is optimal in Gaussian noise. Specifically, it is the likelihood ratio maximized over the strength of the SGWB and is called the maximized-likelihood ratio (MLR). One of the main advantages of using the MLR over past search strategies for inferring the presence or absence of a signal is that the former does not require deconvolution of the cross correlation statistic. Therefore, it does not suffer from errors inherent to the deconvolution procedure and is especially useful for detecting weak sources. In the limit of a single baseline, it reduces to the detection statistic studied by Ballmer [Classical Quantum Gravity 23, S179 (2006)] and Mitra et al. [Phys. Rev. D 77, 042002 (2008)]. Unlike past studies, here the MLR statistic enables us to compare quantitatively the performances of a variety of baselines searching for an SGWB signal in (simulated) data. Although we use simulated noise and SGWB signals for making these comparisons, our method can be straightforwardly applied to real data.
Accuracy of Urine Color to Detect Equal to or Greater Than 2% Body Mass Loss in Men.
McKenzie, Amy L; Muñoz, Colleen X; Armstrong, Lawrence E
2015-12-01
Clinicians and athletes can benefit from field-expedient measurement tools, such as urine color, to assess hydration state; however, the diagnostic efficacy of this tool has not been established. To determine the diagnostic accuracy of urine color assessment to distinguish a hypohydrated state (≥2% body mass loss [BML]) from a euhydrated state (<2% BML) after exercise in a hot environment. Controlled laboratory study. Environmental chamber in a laboratory. Twenty-two healthy men (age = 22 ± 3 years, height = 180.4 ± 8.7 cm, mass = 77.9 ± 12.8 kg, body fat = 10.6% ± 4.6%). Participants cycled at 68% ± 6% of their maximal heart rates in a hot environment (36°C ± 1°C) for 5 hours or until 5% BML was achieved. At the point of each 1% BML, we assessed urine color. Diagnostic efficacy of urine color was assessed using receiver operating characteristic curve analysis, sensitivity, specificity, and likelihood ratios. Urine color was useful as a diagnostic tool to identify hypohydration after exercise in the heat (area under the curve = 0.951, standard error = 0.022; P < .001). A urine color of 5 or greater identified BML ≥2% with 88.9% sensitivity and 84.8% specificity (positive likelihood ratio = 5.87, negative likelihood ratio = 0.13). Under the conditions of acute dehydration due to exercise in a hot environment, urine color assessment can be a valid, practical, inexpensive tool for assessing hydration status. Researchers should examine the utility of urine color to identify a hypohydrated state under different BML conditions.
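A likelihood ratio is applied by converting the pretest probability to odds, multiplying by the LR, and converting back. For example, with this study's LR+ of 5.87 and an assumed 30% pretest probability of ≥2% BML:

```python
def posttest_probability(pretest, lr):
    """Convert a pretest probability and a likelihood ratio into a
    posttest probability via odds (the calculation a Fagan-style
    nomogram performs graphically)."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# a urine color of 5 or greater (LR+ = 5.87) at 30% pretest probability
print(f"{posttest_probability(0.30, 5.87):.2f}")   # ≈ 0.72
```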
Wong, W N; Sek, Antonio C H; Lau, Rick F L; Li, K M; Leung, Joe K S; Tse, M L; Ng, Andy H W; Stenstrom, Robert
2003-11-01
To compare the diagnostic accuracy of emergency department (ED) physicians with the World Health Organization (WHO) case definition in a large community-based SARS (severe acute respiratory syndrome) cohort. This was a cohort study of all patients from Hong Kong's Amoy Garden complex who presented to an ED SARS screening clinic during a 2-month outbreak. Clinical findings and WHO case definition criteria were recorded, along with ED diagnoses. Final diagnoses were established independently based on relevant diagnostic tests performed after the ED visit. Emergency physician diagnostic accuracy was compared with that of the WHO SARS case definition. Sensitivity, specificity, predictive values and likelihood ratios were calculated using standard formulae. During the study period, 818 patients presented with SARS-like symptoms, including 205 confirmed SARS, 35 undetermined SARS and 578 non-SARS. Sensitivity, specificity and accuracy were 91%, 96% and 94% for ED clinical diagnosis, versus 42%, 86% and 75% for the WHO case definition. Positive likelihood ratios (LR+) were 21.1 for physician judgement and 3.1 for the WHO criteria. Negative likelihood ratios (LR-) were 0.10 for physician judgement and 0.67 for the WHO criteria, indicating that clinician judgement was a much more powerful predictor than the WHO criteria. Physician clinical judgement was more accurate than the WHO case definition. Reliance on the WHO case definition as a SARS screening tool may lead to an unacceptable rate of misdiagnosis. The SARS case definition must be revised if it is to be used as a screening tool in emergency departments and primary care settings.
Wongwai, Phanthipha; Anupongongarch, Pacharapan; Suwannaraj, Sirinya; Asawaphureekorn, Somkiat
2016-08-01
To evaluate the prevalence of visual impairment among children aged four to six years in Khon Kaen City Municipality, Thailand. The visual acuity test was performed on 1,286 children in kindergarten schools located in Khon Kaen Municipality. The first test of visual acuity was done by trained teachers and the second test by a pediatric ophthalmologist. The prevalence of visual impairment in both tests was recorded, including the sensitivity, specificity, likelihood ratios, and predictive values of the test by teachers. The causes of visual impairment were also recorded. There were 39 children with visual impairment according to the test by the teachers and 12 children according to the test by the ophthalmologist. Myopia was the sole cause of visual impairment. Mean spherical equivalence was 1.375 diopters (SD = 0.53); median spherical equivalence was 1.375 diopters (minimum = 0.5, maximum = 4). The detection of visual impairment by trained teachers had a sensitivity of 1.00 (95% CI 0.76-1.00), specificity of 0.98 (95% CI 0.97-0.99), likelihood ratio for a positive test of 44.58 (95% CI 30.32-65.54), likelihood ratio for a negative test of 0.04 (95% CI 0.003-0.60), positive predictive value of 0.31 (95% CI 0.19-0.47), and negative predictive value of 1.00 (95% CI 0.99-1.00). The prevalence of visual impairment among children aged four to six years was 0.9%. Trained teachers can serve as examiners for screening purposes.
Jindal, Shveta; Dada, Tanuj; Sreenivas, V; Gupta, Viney; Sihota, Ramanjit; Panda, Anita
2010-01-01
Purpose: To compare the diagnostic performance of the Heidelberg retinal tomograph (HRT) glaucoma probability score (GPS) with that of Moorfields regression analysis (MRA). Materials and Methods: The study included 50 eyes of normal subjects and 50 eyes of subjects with early-to-moderate primary open angle glaucoma. Images were obtained using HRT version 3.0. Results: The agreement coefficient (weighted k) for the overall MRA and GPS classification was 0.216 (95% CI: 0.119–0.315). The sensitivity and specificity were evaluated using the most specific (borderline results included as test negatives) and least specific criteria (borderline results included as test positives). The MRA sensitivity and specificity were 30.61% and 98% (most specific) and 57.14% and 98% (least specific). The GPS sensitivity and specificity were 81.63% and 73.47% (most specific) and 95.92% and 34.69% (least specific). The MRA gave a higher positive likelihood ratio (28.57 vs. 3.08) and the GPS gave a higher negative likelihood ratio (0.25 vs. 0.44). The sensitivity increased with increasing disc size for both MRA and GPS. Conclusions: There was poor agreement between the overall MRA and GPS classifications. GPS tended to have higher sensitivities, lower specificities, and lower likelihood ratios than the MRA. Disc size should be taken into consideration when interpreting HRT results, as both the GPS and MRA showed decreased sensitivity for smaller discs and the GPS showed decreased specificity for larger discs. PMID:20952832
A data fusion approach to indications and warnings of terrorist attacks
NASA Astrophysics Data System (ADS)
McDaniel, David; Schaefer, Gregory
2014-05-01
Indications and Warning (I&W) of terrorist attacks, particularly IED attacks, requires detection of networks of agents and patterns of behavior. Social network analysis tries to detect a network; activity analysis tries to detect anomalous activities. This work builds on both to detect elements of an activity model of terrorist attack activity - the agents, resources, networks, and behaviors. The activity model is expressed as RDF triple statements, where the tuple positions are elements or subsets of a formal ontology for activity models. The advantage of a model is that its elements are interdependent, and evidence for or against one will influence the others, so that there is a multiplier effect. The advantage of the formality is that detection can occur hierarchically, that is, at different levels of abstraction. The model matching is expressed as a likelihood ratio between input text and the model triples. The likelihood ratio is designed to be analogous to the track correlation likelihood ratios common in JDL fusion level 1. This required development of a semantic distance metric for positive and null hypotheses, as well as for complex objects. The metric uses the Web 1T (one-terabyte) database of one- to five-gram frequencies for priors. This size requires the use of big data technologies, so a Hadoop cluster is used in conjunction with OpenNLP natural language processing and Mahout clustering software. Distributed data fusion MapReduce jobs distribute parts of the data fusion problem to the Hadoop nodes. For the purposes of this initial testing, open source models and text inputs of similar complexity to terrorist events were used as surrogates for the intended counter-terrorist application.
Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.
Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier
2017-02-01
The data to which the authors refer throughout this article are likelihood ratios (LRs) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio method in forensic evidence evaluation. They present a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can also be used as a baseline for comparing fingermark evidence in the same minutiae configuration as presented in (Meuwly, Ramos, Haraksim) [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports of forensic methods, according to [1]. Alongside the data, a justification and motivation for the methods used is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validating the LR methods, which are used to calculate the LR values from the data and the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared; however, these images do not constitute the core data for the validation, unlike the LRs, which are shared.
Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi
2011-06-01
For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
Greer, Joy A; Zelig, Craig M; Choi, Kenny K; Rankins, Nicole Calloway; Chauhan, Suneet P; Magann, Everett F
2012-08-01
To compare the likelihood of being within weight standards before and after pregnancy between United States Marine Corps (USMC) and Navy (USN) active duty women (ADW). ADW with singleton gestations who delivered at a USMC base were followed for 6 months to determine likelihood of returning to military weight standards. Odds ratio (OR), adjusted odds ratio (AOR) and 95% confidence intervals were calculated; p < 0.05 was considered significant. Similar proportions of USN and USMC ADW were within body weight standards one year prior to pregnancy (79%, 97%) and at first prenatal visit (69%, 96%), respectively. However, USMC ADW were significantly more likely to be within body weight standards at 3 months (AOR 4.30,1.28-14.43) and 6 months after delivery (AOR 9.94, 1.53-64.52) than USN ADW. Weight gained during pregnancy did not differ significantly for the two groups (40.4 lbs vs 44.2 lbs, p = 0.163). The likelihood of spontaneous vaginal delivery was significantly higher (OR 2.52, 1.20-5.27) and the mean birth weight was significantly lower (p = 0.0036) among USMC ADW as compared to USN ADW. Being within weight standards differs significantly for USMC and USN ADW after pregnancy.
Female breast symptoms in patients attended in the family medicine practice.
González-Pérez, Brian; Salas-Flores, Ricardo; Sosa-López, María Lucero; Barrientos-Guerrero, Carlos Eduardo; Hernández-Aguilar, Claudia Magdalena; Gómez-Contreras, Diana Edith; Sánchez-Garza, Jorge Arturo
2013-01-01
There are few studies on breast symptoms (BS) in patients attended at primary care units in Mexico. The aim was to determine the frequency and types of BS overall and by age group and to establish which BS were related to a diagnosis of breast cancer. Data from all female patients with a breast-disease-related diagnosis, attended from 2006 to 2010 at the Family Medicine Unit 38, were collected. The frequencies of BS were determined for four age groups (< 19, 20-49, 50-69, > 70 years), along with likelihood ratios for breast cancer for each breast-related symptom, with 95% confidence intervals (CI). The most frequent BS in the study population were lump/mass (71.7%) and breast pain (67.7%) of all breast complaints, and they were most often noted in women in the 20-49-year age group. Overall, 120 women were diagnosed with breast cancer, with a median age of 53.51 ± 12.7 years. Breast lump/mass had a positive likelihood ratio for breast cancer of 4.53 (95% CI = 2.51-8.17), and breast pain had a negative likelihood ratio of 1.08 (95% CI = 1.05-1.11). Breast lump/mass was the predominant presenting complaint among females with breast symptoms in our primary care unit, and it was associated with an elevated positive likelihood of breast cancer.
Cherven, Brooke; Mertens, Ann; Meacham, Lillian R; Williamson, Rebecca; Boring, Cathy; Wasilewski-Masker, Karen
2014-01-01
Survivors of childhood cancer are at risk for a variety of treatment-related late effects and require lifelong individualized surveillance for early detection of late effects. This study assessed knowledge and perceptions of late-effects risk before and after a survivor clinic visit. Young adult survivors (≥ 16 years) and parents of child survivors (< 16 years) were recruited prior to an initial visit to a cancer survivor program. Sixty-five participants completed a baseline survey and 50 completed both a baseline and a follow-up survey. Participants were found to have a low perceived likelihood of developing a late effect of cancer therapy and many incorrect perceptions of risk for individual late effects. Low knowledge before the clinic visit (odds ratio = 9.6; 95% confidence interval, 1.7-92.8; P = .02) and low perceived likelihood of developing a late effect (odds ratio = 18.7; 95% confidence interval, 2.7-242.3; P = .01) predicted low knowledge of late-effects risk at follow-up. This suggests that perceived likelihood of developing a late effect is an important factor in individuals' ability to learn about their risk and should be addressed before initiating education. © 2014 by Association of Pediatric Hematology/Oncology Nurses.
Artificial intelligence-assisted occupational lung disease diagnosis.
Harber, P; McCoy, J M; Howard, K; Greer, D; Luo, J
1991-08-01
An expert-system-based artificial intelligence tool for facilitating the clinical recognition of occupational and environmental factors in lung disease has been developed in pilot fashion. It utilizes a knowledge representation scheme to capture relevant clinical knowledge in structures about specific objects (jobs, diseases, etc.) and pairwise relations between objects. Quantifiers describe both the closeness of association and risk, as well as the degree of belief in the validity of a fact. An independent inference engine utilizes the knowledge, combining likelihoods and uncertainties to estimate likelihood factors for specific paths from work to illness. The system creates a series of "paths" linking work activities to disease outcomes; one path links a single period of work to a single possible disease outcome. In a preliminary trial, the number of paths from job to possible disease averaged 18 per subject in a general population and 25 per subject in an asthmatic population. Artificial intelligence methods hold promise for facilitating diagnosis in pulmonary and occupational medicine.
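One simple way to combine quantified relations along a work-to-disease path is to multiply belief-discounted likelihood factors. The following Python fragment is purely illustrative of that idea; it is not the system's actual inference engine, and both the combination rule and the numbers are assumptions:

```python
def path_likelihood(relations):
    """Combine pairwise relation strengths along one work-to-disease
    path. Each relation carries a likelihood factor and a degree of
    belief in its validity; discounting each factor toward 1.0 by its
    belief and multiplying is one simple combination rule."""
    score = 1.0
    for likelihood_factor, belief in relations:
        score *= 1.0 + belief * (likelihood_factor - 1.0)
    return score

# job -> exposure (factor 3.0, belief 0.9), exposure -> asthma (factor 2.0, belief 0.7)
print(path_likelihood([(3.0, 0.9), (2.0, 0.7)]))  # combined path score
```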
Staff gender ratio and aggression in a forensic psychiatric hospital.
Daffern, Michael; Mayer, Maggie; Martin, Trish
2006-06-01
Gender balance in acute psychiatric inpatient units remains a contentious issue. In terms of maintaining staff and patient safety, 'balance' is often considered by ensuring there are 'sufficient' male nurses present on each shift. In an ongoing programme of research into aggression, the authors investigated reported incidents of patient aggression and examined the gender ratio on each shift over a 6-month period. Contrary to the popular notion that a particular gender ratio might have some relationship with the likelihood of aggressive incidents, there was no statistically significant difference in the proportion of male staff working on the shifts when there was an aggressive incident compared with the shifts when there was no aggressive incident. Further, when an incident did occur, the severity of the incident bore no relationship with the proportion of male staff working on the shift. Nor did the gender of the shift leader have an impact on the decision to seclude the patient or the likelihood of completing an incident form following an aggressive incident. Staff confidence in managing aggression may be influenced by the presence of male staff. Further, aspects of prevention and management may be influenced by staff gender. However, results suggest there is no evidence that the frequency or severity of aggression is influenced by staff gender ratio.
Lead isotope ratios for bullets, forensic evaluation in a Bayesian paradigm.
Sjåstad, Knut-Endre; Lucy, David; Andersen, Tom
2016-01-01
Forensic science is a discipline concerned with the collection, examination and evaluation of physical evidence related to criminal cases. The results of the forensic scientist's activities may ultimately be presented to the court in such a way that the triers of fact understand the implications of the data. Forensic science has been, and still is, driven by the development of new technology, and in the last two decades evaluation of evidence based on logical reasoning and Bayesian statistics has reached some level of general acceptance within the forensic community. Tracing lead fragments of unknown origin to a given source of ammunition is a task that might be of interest to the court. Data from lead isotope ratio analysis, interpreted within a Bayesian framework, have been shown to be a suitable means of guiding the court to its conclusion in such a task. In this work we used the isotopic composition of lead from small arms projectiles (cal. .22) and developed an approach based on Bayesian statistics and likelihood ratio calculation. The likelihood ratio is a single quantity that provides a measure of the value of evidence that can be used in the deliberations of the court. Copyright © 2015 Elsevier B.V. All rights reserved.
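As a toy illustration of the evidence-value computation, a one-dimensional likelihood ratio for a single isotope ratio measurement is the density of the measurement under the same-source hypothesis divided by its density under the population hypothesis. The values below are hypothetical; real casework uses multivariate isotope ratios and explicit between/within-source variance models:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

def isotope_lr(x, mu_source, sd_within, mu_pop, sd_pop):
    """Evidence value for one isotope ratio measurement x:
    density under 'same source as the questioned ammunition'
    divided by density under 'random source from the population'.
    One-dimensional sketch only."""
    return normal_pdf(x, mu_source, sd_within) / normal_pdf(x, mu_pop, sd_pop)

# hypothetical 208Pb/206Pb values
print(isotope_lr(x=2.085, mu_source=2.084, sd_within=0.002,
                 mu_pop=2.10, sd_pop=0.02))   # LR ≈ 12: supports same source
```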
Dantan, Etienne; Combescure, Christophe; Lorent, Marine; Ashton-Chess, Joanna; Daguin, Pascal; Classe, Jean-Marc; Giral, Magali; Foucher, Yohann
2014-04-01
Predicting chronic disease evolution from a prognostic marker is a key field of research in clinical epidemiology. However, the prognostic capacity of a marker is not systematically evaluated using the appropriate methodology. We propose simple equations to calculate time-dependent sensitivity and specificity from published survival curves, along with other time-dependent indicators such as predictive values, likelihood ratios, and posttest probability ratios, to reappraise prognostic marker accuracy. The methodology is illustrated by back-calculating time-dependent indicators from published articles that present a marker as highly correlated with the time to event, conclude that the marker has high prognostic capacity, and present the Kaplan-Meier survival curves. The tools necessary to run these direct and simple computations are available online at http://www.divat.fr/en/online-calculators/evalbiom. Our examples illustrate that published conclusions about prognostic marker accuracy may be overoptimistic, creating the potential for major mistakes in therapeutic decisions. Our approach should help readers better evaluate clinical articles reporting on prognostic markers. Time-dependent sensitivity and specificity describe the inherent prognostic capacity of a marker for a defined prognostic time. Time-dependent predictive values, likelihood ratios, and posttest probability ratios may additionally help interpret the marker's prognostic capacity. Copyright © 2014 Elsevier Inc. All rights reserved.
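Under cumulative/dynamic definitions, the back-calculation described here needs only the survival curves of the marker-positive and marker-negative groups and the marker-positive prevalence. A minimal Python sketch of these relationships (the exact formulas used by the authors' online calculator may differ):

```python
def time_dependent_accuracy(surv_pos, surv_neg, prev_pos):
    """Back-calculate time-dependent sensitivity and specificity from
    published survival curves: surv_pos / surv_neg are survival
    probabilities at time t in the marker-positive / marker-negative
    groups; prev_pos is the fraction of marker-positive patients.

    sens(t) = P(marker+ | event by t),  spec(t) = P(marker- | no event by t),
    both obtained by Bayes' rule from the mixture survival curve.
    """
    surv_all = prev_pos * surv_pos + (1 - prev_pos) * surv_neg
    sens = prev_pos * (1 - surv_pos) / (1 - surv_all)
    spec = (1 - prev_pos) * surv_neg / surv_all
    return sens, spec

# e.g. 5-year survival 40% in marker-positive, 80% in marker-negative,
# with 30% of patients marker-positive (hypothetical numbers):
sens, spec = time_dependent_accuracy(0.40, 0.80, 0.30)
print(f"sens={sens:.2f}, spec={spec:.2f}")  # sens≈0.56, spec≈0.82
```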
Rodríguez-Escudero, Juan Pablo; López-Jiménez, Francisco; Trejo-Gutiérrez, Jorge F
2011-01-01
This article reviews different characteristics of validity in a clinical diagnostic test. In particular, we emphasize the likelihood ratio as an instrument that facilitates the use of epidemiologic concepts in clinical diagnosis.
Pan, Liping; Jia, Hongyan; Liu, Fei; Gao, Mengqiu; Sun, Huishan; Du, Boping; Sun, Qi; Xing, Aiying; Wei, Rongrong; Zhang, Zongde
2015-12-01
To evaluate the value of the T-SPOT.TB assay in the diagnosis of pulmonary tuberculosis in different age groups. We analyzed 1518 suspected pulmonary tuberculosis (PTB) patients who were admitted to the Beijing Chest Hospital from November 2012 to February 2014 and had valid T-SPOT.TB tests before anti-tuberculosis therapy. The 599 microbiologically and/or histopathologically confirmed PTB patients (16-89 years old, 388 males and 211 females) and 235 non-TB patients (14-85 years old, 144 males and 91 females) were enrolled for the analysis of the diagnostic performance of T-SPOT.TB, while patients with an uncertain diagnosis or a diagnosis based on clinical impression (n=684) were excluded from the analysis. The sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio of the T-SPOT.TB were analyzed according to the final diagnosis. Furthermore, the diagnostic performance of the T-SPOT.TB assay in younger patients (14-59 years old) and elderly patients (60-89 years old) was analyzed separately. Categorical variables were compared by Pearson's chi-square test, while continuous variables were compared by the Mann-Whitney U-test. The sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio of the T-SPOT.TB in the diagnosis of PTB were 90.1% (540/599), 65.5% (154/235), 86.9% (540/621), 72.3% (154/213), 2.61, and 0.15, respectively. The sensitivity and specificity of the T-SPOT.TB assay were 92.6% (375/405) and 75.6% (99/131), respectively, in the younger patients, and 85.0% (165/194) and 52.9% (55/104), respectively, in the elderly patients. The sensitivity and specificity of the T-SPOT.TB assay in the younger patients were significantly higher than those in the elderly patients (P<0.01), and the spot-forming cells in the younger PTB patients were significantly more numerous than in the elderly PTB patients [300 (126, 666)/10⁶ PBMCs vs. 258 (79, 621)/10⁶ PBMCs, P=0.037]. T-SPOT.TB is a promising test in the diagnosis of younger patients (14-59 years old) with suspected PTB, but its diagnostic performance in elderly patients (60-89 years old) is relatively reduced.
Iodine supplementation for women during the preconception, pregnancy and postpartum period.
Harding, Kimberly B; Peña-Rosas, Juan Pablo; Webster, Angela C; Yap, Constance My; Payne, Brian A; Ota, Erika; De-Regil, Luz Maria
2017-03-05
Iodine is an essential nutrient required for the biosynthesis of thyroid hormones, which are responsible for regulating growth, development and metabolism. Iodine requirements increase substantially during pregnancy and breastfeeding. If requirements are not met during these periods, the production of thyroid hormones may decrease and be inadequate for maternal, fetal and infant needs. The provision of iodine supplements may help meet the increased iodine needs during pregnancy and the postpartum period and prevent or correct iodine deficiency and its consequences. To assess the benefits and harms of supplementation with iodine, alone or in combination with other vitamins and minerals, for women in the preconceptional, pregnancy or postpartum period on their and their children's outcomes. We searched the Cochrane Pregnancy and Childbirth Trials Register (14 November 2016) and the WHO International Clinical Trials Registry Platform (ICTRP) (17 November 2016), contacted experts in the field and searched the reference lists of retrieved studies and other relevant papers. Randomised and quasi-randomised controlled trials with randomisation at either the individual or cluster level, comparing injected or oral iodine supplementation (such as tablets, capsules, drops) during preconception, pregnancy or the postpartum period, irrespective of iodine compound, dose, frequency or duration, were eligible. Two review authors independently assessed trial eligibility and risk of bias, extracted data and checked them for accuracy. We used the GRADE approach to assess the quality of the evidence for primary outcomes. We anticipated high heterogeneity among trials; we pooled trial results using random-effects models and were cautious in our interpretation of the pooled results. We included 14 studies and excluded 48 studies. We identified five ongoing or unpublished studies, and two studies are awaiting classification. Eleven trials involving over 2700 women contributed data for the comparisons in this review (in three trials, the primary or secondary outcomes were not reported). Maternal primary outcomes: Iodine supplementation decreased the likelihood of the adverse effect of postpartum hyperthyroidism by 68% (average risk ratio (RR) 0.32; 95% confidence interval (CI) 0.11 to 0.91, three trials in mild to moderate iodine deficiency settings, 543 women, no statistical heterogeneity, low-quality evidence) and increased the likelihood of the adverse effect of digestive intolerance in pregnancy 15-fold (average RR 15.33; 95% CI 2.07 to 113.70, one trial in a mild-deficiency setting, 76 women, very low-quality evidence). There were no clear differences between groups for hypothyroidism in pregnancy or postpartum (pregnancy: average RR 1.90; 95% CI 0.57 to 6.38, one trial, 365 women, low-quality evidence; postpartum: average RR 0.44; 95% CI 0.06 to 3.42, three trials, 540 women, no statistical heterogeneity, low-quality evidence), preterm birth (average RR 0.71; 95% CI 0.30 to 1.66, two trials, 376 women, statistical heterogeneity, low-quality evidence) or the maternal adverse effects of elevated thyroid peroxidase antibodies (TPO-ab) in pregnancy or postpartum (average RR 0.95; 95% CI 0.44 to 2.07, one trial, 359 women, low-quality evidence; average RR 1.01; 95% CI 0.78 to 1.30, three trials, 397 women, no statistical heterogeneity, low-quality evidence), or hyperthyroidism in pregnancy (average RR 1.90; 95% CI 0.57 to 6.38, one trial, 365 women, low-quality evidence).
All of the trials contributing data to these outcomes took place in settings with mild to moderate iodine deficiency. Infant/child primary outcomes: Compared with those who did not receive iodine, those who received iodine supplements had a 34% lower likelihood of perinatal mortality; however, this difference was not statistically significant (average RR 0.66; 95% CI 0.42 to 1.03, two trials, 457 assessments, low-quality evidence). All of the perinatal deaths occurred in one trial conducted in a severely iodine-deficient setting. There were no clear differences between groups for low birthweight (average RR 0.56; 95% CI 0.26 to 1.23, two trials, 377 infants, no statistical heterogeneity, low-quality evidence), neonatal hypothyroidism/elevated thyroid-stimulating hormone (TSH) (average RR 0.58; 95% CI 0.11 to 3.12, two trials, 260 infants, very low-quality evidence) or the adverse effect of elevated neonatal thyroid peroxidase antibodies (TPO-ab) (average RR 0.61; 95% CI 0.07 to 5.70, one trial, 108 infants, very low-quality evidence). All of the trials contributing data to these outcomes took place in areas with mild to moderate iodine deficiency. No trials reported on hypothyroidism/elevated TSH or any adverse effect beyond the neonatal period. There were insufficient data to reach any meaningful conclusions on the benefits and harms of routine iodine supplementation in women before, during or after pregnancy. The available evidence suggested that iodine supplementation decreases the likelihood of postpartum hyperthyroidism and increases the likelihood of the adverse effect of digestive intolerance in pregnancy - both considered potential adverse effects. We considered the evidence for these outcomes low or very low quality, however, because of study design limitations and wide confidence intervals. In addition, due to the small number of trials and included women in our meta-analyses, these findings must be interpreted with caution. There were no clear effects on other important maternal or child outcomes, though these findings must also be interpreted cautiously due to limited data and low-quality trials. Additionally, almost all of the evidence came from settings with mild or moderate iodine deficiency and therefore may not be applicable to settings with severe deficiency. More high-quality randomised controlled trials are needed on iodine supplementation before, during and after pregnancy on maternal and infant/child outcomes. However, it may be unethical to compare iodine to placebo or no treatment in severe deficiency settings. Trials may also be unfeasible in settings where pregnant and lactating women commonly take prenatal supplements with iodine. Information is needed on the optimal timing of initiation as well as the supplementation regimen and dose. Future trials should consider the outcomes in this review and follow children beyond the neonatal period. Future trials should employ adequate sample sizes, assess potential adverse effects (including the nature and extent of digestive intolerance), and be reported in a way that allows assessment of risk of bias, full data extraction and analysis by the subgroups specified in this review.
Moore, Christopher L.; Daniels, Brock; Singh, Dinesh; Luty, Seth; Gunabushanam, Gowthaman; Ghita, Monica; Molinaro, Annette; Gross, Cary P.
2016-01-01
Purpose To determine if a reduced-dose computed tomography (CT) protocol could effectively help to identify patients in the emergency department (ED) with moderate to high likelihood of calculi who would require urologic intervention within 90 days. Materials and Methods The study was approved by the institutional review board and written informed consent with HIPAA authorization was obtained. This was a prospective, single-center study of patients in the ED with moderate to high likelihood of ureteral stone undergoing CT imaging. Objective likelihood of ureteral stone was determined by using the previously derived and validated STONE clinical prediction rule, which includes five elements: sex, timing, origin, nausea, and erythrocytes. All patients with high STONE score (STONE score, 10–13) underwent reduced-dose CT, while those with moderate likelihood of ureteral stone (moderate STONE score, 6–9) underwent reduced-dose CT or standard CT based on clinician discretion. Patients were followed to 90 days after initial imaging for clinical course and for the primary outcome of any intervention. Statistics are primarily descriptive and are reported as percentages, sensitivities, and specificities with 95% confidence intervals. Results There were 264 participants enrolled and 165 reduced-dose CTs performed; of these participants, 108 underwent reduced-dose CT alone with complete follow-up. Overall, 46 of 264 (17.4%) of patients underwent urologic intervention, and 25 of 108 (23.1%) patients who underwent reduced-dose CT underwent a urologic intervention; all were correctly diagnosed on the clinical report of the reduced-dose CT (sensitivity, 100%; 95% confidence interval: 86.7%, 100%). The average dose-length product for all standard-dose CTs was 857 mGy · cm ± 395 compared with 101 mGy · cm ± 39 for all reduced-dose CTs (average dose reduction, 88.2%). There were five interventions for nonurologic causes, three of which were urgent and none of which were missed when reduced-dose CT was performed. Conclusion A CT protocol with over 85% dose reduction can be used in patients with moderate to high likelihood of ureteral stone to safely and effectively identify patients in the ED who will require urologic intervention. PMID:26943230
A parimutuel gambling perspective to compare probabilistic seismicity forecasts
NASA Astrophysics Data System (ADS)
Zechar, J. Douglas; Zhuang, Jiancang
2014-10-01
Using analogies to gaming, we consider the problem of comparing multiple probabilistic seismicity forecasts. To measure relative model performance, we suggest a parimutuel gambling perspective which addresses shortcomings of other methods such as likelihood ratio, information gain and Molchan diagrams. We describe two variants of the parimutuel approach for a set of forecasts: head-to-head, in which forecasts are compared in pairs, and round table, in which all forecasts are compared simultaneously. For illustration, we compare the 5-yr forecasts of the Regional Earthquake Likelihood Models experiment for M4.95+ seismicity in California.
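The head-to-head variant can be sketched as a pot split per bin in proportion to the probability each forecast assigned to what actually occurred. The following Python fragment illustrates the idea with made-up probabilities; see the paper for the exact bookkeeping:

```python
def head_to_head_gain(p_a, p_b, outcomes):
    """Head-to-head parimutuel score: in each bin both forecasts stake
    one unit; the two-unit pot is split in proportion to the probability
    each forecast assigned to the observed outcome. A positive net gain
    favours forecast A. (A sketch of the idea, not the paper's exact rules.)"""
    gain = 0.0
    for pa, pb, occurred in zip(p_a, p_b, outcomes):
        if occurred:
            gain += 2 * pa / (pa + pb) - 1
        else:
            gain += 2 * (1 - pa) / ((1 - pa) + (1 - pb)) - 1
    return gain

# forecast A is sharper than B; events occur in bins 0 and 2
print(head_to_head_gain([0.8, 0.1, 0.6], [0.5, 0.3, 0.5], [True, False, True]))
```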
1993-09-10
[Extraction fragment from a report on a bootstrap generalized likelihood ratio test in discriminant analysis; cited as Baek, J., H. L. Gray, W. A. Woodward and M. D. Fisk (1993), "A bootstrap generalized likelihood ratio test in discriminant analysis," Proc. 15th Annual Seismic Research Symposium, in press. Recoverable content: values of the ratio indicate that the event does not belong to the first class, and the bootstrap technique is also used to set the critical value of the test; the remainder of the extracted text is reference-list residue.]
Correa-Burrows, Paulina; Rodríguez, Yanina; Blanco, Estela; Gahagan, Sheila; Burrows, Raquel
2017-01-01
Although numerous studies have approached the effects of exposure to a Western diet (WD) on academic outcomes, very few have focused on foods consumed during snack times. We explored whether there is a link between nutritious snacking habits and academic achievement in high school (HS) students from Santiago, Chile. We conducted a cross-sectional study with 678 adolescents. The nutritional quality of snacks consumed by 16-year-olds was assessed using a validated food frequency questionnaire. The academic outcomes measured were HS grade point average (GPA), the likelihood of HS completion, and the likelihood of taking college entrance exams. A multivariate analysis was performed to determine the independent associations of nutritious snacking with having completed HS and having taken college entrance exams. An analysis of covariance (ANCOVA) estimated the differences in GPA by the quality of snacks. Compared to students with healthy in-home snacking behaviors, adolescents having unhealthy in-home snacks had significantly lower GPAs (M difference: −40.1 points, 95% confidence interval (CI): −59.2, −16.9, d = 0.41), significantly lower odds of HS completion (adjusted odds ratio (aOR): 0.47; 95% CI: 0.25–0.88), and significantly lower odds of taking college entrance exams (aOR: 0.53; 95% CI: 0.31–0.88). Unhealthy at-school snacking showed similar associations with the outcome variables. Poor nutritional quality snacking at school and at home was associated with poor secondary school academic achievement and a lower likelihood of intending to enroll in higher education. PMID:28448455
NASA Astrophysics Data System (ADS)
Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya
2012-05-01
Extreme share returns in Malaysia are studied. The monthly, quarterly, half-yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test is used to check for the presence of a monotonic trend. Maximum likelihood estimation (MLE) is used to estimate the parameters, with L-moments estimates (LMOM) used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the quality of convergence of these monthly, quarterly, half-yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that the maximum returns for all selection periods are stationary, but the Mann-Kendall test indicates the existence of a trend, so non-stationary models are fitted as well. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maximum is better for convergence to the GEV distribution, especially if longer records are available. Return level estimates - the return level (here, the return amount) that is expected to be exceeded, on average, once every T time periods - start to appear within the confidence interval at T = 50 for the quarterly, half-yearly and yearly maxima.
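A sketch of the core model comparison under stated assumptions: synthetic maxima with a linear trend in the GEV location parameter, stationary and non-stationary fits by maximum likelihood, and a likelihood ratio test between them. Simple moment-based starting values stand in for the paper's L-moments initialization.

```python
import numpy as np
from scipy.stats import genextreme, chi2
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.arange(100)
# Synthetic block maxima with a slowly increasing location parameter
x = genextreme.rvs(c=-0.1, loc=1.0 + 0.01 * t, scale=0.5, random_state=rng)

def negloglik(params, trend):
    c, loc0, slope, scale = params
    if scale <= 0:
        return np.inf
    loc = loc0 + (slope * t if trend else 0.0)
    return -genextreme.logpdf(x, c=c, loc=loc, scale=scale).sum()

# Stationary fit (Model 1) and a linear trend in location (Model 2)
start = [0.0, x.mean(), 0.0, x.std()]
m1 = minimize(negloglik, start, args=(False,), method="Nelder-Mead")
m2 = minimize(negloglik, start, args=(True,), method="Nelder-Mead")

# Likelihood ratio test: Model 2 has one extra parameter (the slope)
lr = 2 * (m1.fun - m2.fun)
print("LR =", lr, "p =", chi2.sf(lr, df=1))
```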
Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)
2016-05-01
[Extraction fragment: modulation classification, as in cognitive radio applications, is part of a broader problem known as blind or uncooperative demodulation; the remainder of the extracted text is table-of-contents residue (Introduction; Modulation Classification; Research Objectives; Modulation Classification Methods).]
Helms Tillery, S I; Taylor, D M; Schwartz, A B
2003-01-01
We have recently developed a closed-loop environment in which we can test the ability of primates to control the motion of a virtual device using ensembles of simultaneously recorded neurons /29/. Here we use a maximum likelihood method to assess the information about task performance contained in the neuronal ensemble. We trained two animals to control the motion of a computer cursor in three dimensions. Initially the animals controlled cursor motion using arm movements, but eventually they learned to drive the cursor directly from cortical activity. Using a population vector (PV) based upon the relation between cortical activity and arm motion, the animals were able to control the cursor directly from the brain in a closed-loop environment, but with difficulty. We added a supervised learning method that modified the parameters of the PV according to task performance (adaptive PV), and found that animals were able to exert much finer control over the cursor motion from brain signals. Here we describe a maximum likelihood method (ML) to assess the information about target contained in neuronal ensemble activity. Using this method, we compared the information about target contained in the ensemble during arm control, during brain control early in the adaptive PV, and during brain control after the adaptive PV had settled and the animal could drive the cursor reliably and with fine gradations. During the arm-control task, the ML was able to determine the target of the movement in as few as 10% of the trials, and as many as 75% of the trials, with an average of 65%. This average dropped when the animals used a population vector to control motion of the cursor. On average we could determine the target in around 35% of the trials. This low percentage was also reflected in poor control of the cursor, so that the animal was unable to reach the target in a large percentage of trials. Supervised adjustment of the population vector parameters produced new weighting coefficients and directional tuning parameters for many neurons. This produced a much better performance of the brain-controlled cursor motion. It was also reflected in the maximum likelihood measure of cell activity, producing the correct target based only on neuronal activity in over 80% of the trials on average. The changes in maximum likelihood estimates of target location based on ensemble firing show that an animal's ability to regulate the motion of a cortically controlled device is not crucially dependent on the experimenter's ability to estimate intention from neuronal activity.
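A minimal sketch of maximum likelihood target decoding under an independent-Poisson firing assumption; the authors' likelihood model may differ in detail, and the tuning matrix below is synthetic.

```python
import numpy as np
from scipy.stats import poisson

def ml_target(counts, tuning):
    """Maximum likelihood target decoding from ensemble spike counts.

    counts : (n_neurons,) observed spike counts on one trial.
    tuning : (n_targets, n_neurons) mean counts per target, e.g. estimated
             from training trials. Assumes independent Poisson firing.
    """
    loglik = poisson.logpmf(counts, tuning).sum(axis=1)
    return np.argmax(loglik)

rng = np.random.default_rng(2)
tuning = rng.uniform(2, 20, size=(8, 30))   # 8 targets, 30 neurons (synthetic)
true_target = 5
trial = rng.poisson(tuning[true_target])    # one simulated trial
print(ml_target(trial, tuning))             # usually recovers target 5
```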
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cekresolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
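For reference, the standard conversion from information criteria to model-averaging weights, which illustrates the phenomenon described: even modest IC differences concentrate nearly all weight on the best model. The IC values below are hypothetical.

```python
import numpy as np

def ic_weights(ic_values):
    """Model-averaging weights from information criteria (AIC/AICc/BIC/KIC).

    w_k is proportional to exp(-delta_k / 2), where delta_k = IC_k - min(IC).
    """
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()
    w = np.exp(-delta / 2)
    return w / w.sum()

# Modest IC differences already produce a near-degenerate weight vector:
print(ic_weights([100.0, 110.0, 125.0]))  # ~[0.993, 0.007, 0.000]
```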
Shi, Jia-Xin; Li, Jia-Shu; Hu, Rong; Li, Chun-Hua; Wen, Yan; Zheng, Hong; Zhang, Feng; Li, Qin
2013-01-01
The serum soluble triggering receptor expressed on myeloid cells-1 (sTREM-1) is a useful biomarker in differentiating bacterial infections from others. However, the diagnostic value of sTREM-1 in bronchoalveolar lavage fluid (BALF) in lung infections has not been well established. We performed a meta-analysis to assess the accuracy of sTREM-1 in BALF for diagnosis of bacterial lung infections in intensive care unit (ICU) patients. We searched PUBMED, EMBASE and Web of Knowledge (from January 1966 to October 2012) databases for relevant studies that reported diagnostic accuracy data of BALF sTREM-1 in the diagnosis of bacterial lung infections in ICU patients. Pooled sensitivity, specificity, and positive and negative likelihood ratios were calculated by a bivariate regression analysis. Measures of accuracy and Q point value (Q*) were calculated using summary receiver operating characteristic (SROC) curve. The potential between-studies heterogeneity was explored by subgroup analysis. Nine studies were included in the present meta-analysis. Overall, the prevalence was 50.6%; the sensitivity was 0.87 (95% confidence interval (CI), 0.72-0.95); the specificity was 0.79 (95% CI, 0.56-0.92); the positive likelihood ratio (PLR) was 4.18 (95% CI, 1.78-9.86); the negative likelihood ratio (NLR) was 0.16 (95% CI, 0.07-0.36), and the diagnostic odds ratio (DOR) was 25.60 (95% CI, 7.28-89.93). The area under the SROC curve was 0.91 (95% CI, 0.88-0.93), with a Q* of 0.83. Subgroup analysis showed that the assay method and cutoff value influenced the diagnostic accuracy of sTREM-1. BALF sTREM-1 is a useful biomarker of bacterial lung infections in ICU patients. Further studies are needed to confirm the optimized cutoff value.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, Thomas J.; Brown, Richard S.; Stephenson, John R.
Each year, millions of fish have telemetry tags (acoustic, radio, inductive) surgically implanted to assess their passage and survival through hydropower facilities. One route of passage of particular concern is through hydro turbines, in which fish may be exposed to a range of potential injuries, including barotraumas from rapid decompression. The change in pressure from acclimation to exposure (nadir) has been found to be an important factor in predicting the likelihood of mortality and injury for juvenile Chinook salmon undergoing rapid decompression associated with simulated turbine passage. The presence of telemetry tags has also been shown to influence the likelihood of injury and mortality for juvenile Chinook salmon. This research investigated the likelihood of mortality and injury for juvenile Chinook salmon carrying telemetry tags and exposed to a range of simulated turbine passage. Several factors were examined as predictors of mortal injury for fish undergoing rapid decompression, and the ratio of pressure change and tag burden were determined to be the most predictive factors. As the ratio of pressure change and tag burden increase, the likelihood of mortal injury also increases. The results of this study suggest that previous survival estimates of juvenile Chinook salmon passing through hydro turbines may have been biased due to the presence of telemetry tags, and this has direct implications to the management of hydroelectric facilities. Realistic examples indicate how the bias in turbine passage survival estimates could be 20% or higher, depending on the mass of the implanted tags and the ratio of acclimation to exposure pressures. Bias would increase as the tag burden and pressure ratio increase, and have direct implications on survival estimates. It is recommended that future survival studies use the smallest telemetry tags possible to minimize the potential bias that may be associated with carrying the tag.
A Maximum Likelihood Approach to Functional Mapping of Longitudinal Binary Traits
Wang, Chenguang; Li, Hongying; Wang, Zhong; Wang, Yaqun; Wang, Ningtao; Wang, Zuoheng; Wu, Rongling
2013-01-01
Despite their importance in biology and biomedicine, genetic mapping of binary traits that change over time has not been well explored. In this article, we develop a statistical model for mapping quantitative trait loci (QTLs) that govern longitudinal responses of binary traits. The model is constructed within the maximum likelihood framework by which the association between binary responses is modeled in terms of conditional log odds-ratios. With this parameterization, the maximum likelihood estimates (MLEs) of marginal mean parameters are robust to the misspecification of time dependence. We implement an iterative procedures to obtain the MLEs of QTL genotype-specific parameters that define longitudinal binary responses. The usefulness of the model was validated by analyzing a real example in rice. Simulation studies were performed to investigate the statistical properties of the model, showing that the model has power to identify and map specific QTLs responsible for the temporal pattern of binary traits. PMID:23183762
Accounting for informatively missing data in logistic regression by means of reassessment sampling.
Lin, Ji; Lyles, Robert H
2015-05-20
We explore the 'reassessment' design in a logistic regression setting, where a second wave of sampling is applied to recover a portion of the missing data on a binary exposure and/or outcome variable. We construct a joint likelihood function based on the original model of interest and a model for the missing data mechanism, with emphasis on non-ignorable missingness. The estimation is carried out by numerical maximization of the joint likelihood function with close approximation of the accompanying Hessian matrix, using sharable programs that take advantage of general optimization routines in standard software. We show how likelihood ratio tests can be used for model selection and how they facilitate direct hypothesis testing for whether missingness is at random. Examples and simulations are presented to demonstrate the performance of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
Al-Radi, Osman O; Harrell, Frank E; Caldarone, Christopher A; McCrindle, Brian W; Jacobs, Jeffrey P; Williams, M Gail; Van Arsdell, Glen S; Williams, William G
2007-04-01
The Aristotle Basic Complexity score and the Risk Adjustment in Congenital Heart Surgery system were developed by consensus to compare outcomes of congenital cardiac surgery. We compared the predictive value of the 2 systems. Of all index congenital cardiac operations at our institution from 1982 to 2004 (n = 13,675), we were able to assign an Aristotle Basic Complexity score, a Risk Adjustment in Congenital Heart Surgery score, and both scores to 13,138 (96%), 11,533 (84%), and 11,438 (84%) operations, respectively. Models of in-hospital mortality and length of stay were generated for Aristotle Basic Complexity and Risk Adjustment in Congenital Heart Surgery using an identical data set in which both Aristotle Basic Complexity and Risk Adjustment in Congenital Heart Surgery scores were assigned. The likelihood ratio test for nested models and paired concordance statistics were used. After adjustment for year of operation, the odds ratios for Aristotle Basic Complexity score 3 versus 6, 9 versus 6, 12 versus 6, and 15 versus 6 were 0.29, 2.22, 7.62, and 26.54 (P < .0001). Similarly, odds ratios for Risk Adjustment in Congenital Heart Surgery categories 1 versus 2, 3 versus 2, 4 versus 2, and 5/6 versus 2 were 0.23, 1.98, 5.80, and 20.71 (P < .0001). Risk Adjustment in Congenital Heart Surgery added significant predictive value over Aristotle Basic Complexity (likelihood ratio chi2 = 162, P < .0001), whereas Aristotle Basic Complexity contributed much less predictive value over Risk Adjustment in Congenital Heart Surgery (likelihood ratio chi2 = 13.4, P = .009). Neither system fully adjusted for the child's age. The Risk Adjustment in Congenital Heart Surgery scores were more concordant with length of stay compared with Aristotle Basic Complexity scores (P < .0001). The predictive value of Risk Adjustment in Congenital Heart Surgery is higher than that of Aristotle Basic Complexity. The use of Aristotle Basic Complexity or Risk Adjustment in Congenital Heart Surgery as risk stratification and trending tools to monitor outcomes over time and to guide risk-adjusted comparisons may be valuable.
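A generic sketch of the nested-model likelihood ratio test used for such comparisons; the maximized log-likelihood values below are hypothetical, chosen only to reproduce the magnitude of the reported chi-square.

```python
from scipy.stats import chi2

def lr_test(loglik_full, loglik_reduced, df_diff):
    """Likelihood ratio test for nested models.

    loglik_full / loglik_reduced : maximized log-likelihoods of the two fits.
    df_diff : difference in the number of free parameters.
    """
    stat = 2 * (loglik_full - loglik_reduced)
    return stat, chi2.sf(stat, df_diff)

# Hypothetical log-likelihoods for a model with and without the added score
stat, p = lr_test(-1200.0, -1281.0, df_diff=4)
print(f"LR chi2 = {stat:.0f}, p = {p:.2e}")  # LR chi2 = 162
```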
Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao
2014-01-01
Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179
Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation
Meyer, Karin
2016-01-01
Multivariate estimates of genetic parameters are subject to substantial sampling variation, especially for smaller data sets and more than a few traits. A simple modification of standard, maximum-likelihood procedures for multivariate analyses to estimate genetic covariances is described, which can improve estimates by substantially reducing their sampling variances. This is achieved by maximizing the likelihood subject to a penalty. Borrowing from Bayesian principles, we propose a mild, default penalty—derived assuming a Beta distribution of scale-free functions of the covariance components to be estimated—rather than laboriously attempting to determine the stringency of penalization from the data. An extensive simulation study is presented, demonstrating that such penalties can yield very worthwhile reductions in loss, i.e., the difference from population values, for a wide range of scenarios and without distorting estimates of phenotypic covariances. Moreover, mild default penalties tend not to increase loss in difficult cases and, on average, achieve reductions in loss of similar magnitude to computationally demanding schemes to optimize the degree of penalization. Pertinent details required for the adaptation of standard algorithms to locate the maximum of the likelihood function are outlined. PMID:27317681
Cheng, Yvonne W; Snowden, Jonathan M; Handler, Stephanie; Tager, Ira B; Hubbard, Alan; Caughey, Aaron B
2014-08-01
Little data exist regarding clinicians' role in the rising annual incidence rate of cesarean delivery in the US. We aimed to examine if clinicians' practice environment is associated with recommending cesarean deliveries. This is a survey study of clinicians who practice obstetrics in the US. This survey included eight clinical vignettes and 27 questions regarding clinicians' practice environment. Chi-square test and multivariable logistic regression were used for statistical comparison. Of 27 675 survey links sent, 3646 clinicians received and opened the survey electronically, and 1555 (43%) participated and 1486 (94%) completed the survey. Clinicians were categorized into three groups based on eight common obstetric vignettes as: more likely (n = 215), average likelihood (n = 1099), and less likely (n = 168) to recommend cesarean. Clinician environment factors associated with a higher likelihood of recommending cesarean included Laborists/Hospitalists practice model (p < 0.001), as-needed anesthesia support (p = 0.003), and rural/suburban practice setting (p < 0.001). We identified factors in clinicians' environment associated with their likelihood of recommending cesarean delivery. The decision to recommend cesarean delivery is a complicated one and is likely not solely based on patient factors.
The Fecal Microbiota Profile and Bronchiolitis in Infants
Linnemann, Rachel W.; Mansbach, Jonathan M.; Ajami, Nadim J.; Espinola, Janice A.; Petrosino, Joseph F.; Piedra, Pedro A.; Stevenson, Michelle D.; Sullivan, Ashley F.; Thompson, Amy D.; Camargo, Carlos A.
2016-01-01
BACKGROUND: Little is known about the association of gut microbiota, a potentially modifiable factor, with bronchiolitis in infants. We aimed to determine the association of fecal microbiota with bronchiolitis in infants. METHODS: We conducted a case–control study. As a part of multicenter prospective study, we collected stool samples from 40 infants hospitalized with bronchiolitis. We concurrently enrolled 115 age-matched healthy controls. By applying 16S rRNA gene sequencing and an unbiased clustering approach to these 155 fecal samples, we identified microbiota profiles and determined the association of microbiota profiles with likelihood of bronchiolitis. RESULTS: Overall, the median age was 3 months, 55% were male, and 54% were non-Hispanic white. Unbiased clustering of fecal microbiota identified 4 distinct profiles: Escherichia-dominant profile (30%), Bifidobacterium-dominant profile (21%), Enterobacter/Veillonella-dominant profile (22%), and Bacteroides-dominant profile (28%). The proportion of bronchiolitis was lowest in infants with the Enterobacter/Veillonella-dominant profile (15%) and highest in the Bacteroides-dominant profile (44%), corresponding to an odds ratio of 4.59 (95% confidence interval, 1.58–15.5; P = .008). In the multivariable model, the significant association between the Bacteroides-dominant profile and a greater likelihood of bronchiolitis persisted (odds ratio for comparison with the Enterobacter/Veillonella-dominant profile, 4.24; 95% confidence interval, 1.56–12.0; P = .005). In contrast, the likelihood of bronchiolitis in infants with the Escherichia-dominant or Bifidobacterium-dominant profile was not significantly different compared with those with the Enterobacter/Veillonella-dominant profile. CONCLUSIONS: In this case–control study, we identified 4 distinct fecal microbiota profiles in infants. The Bacteroides-dominant profile was associated with a higher likelihood of bronchiolitis. PMID:27354456
Wakefield, Melanie; Terry-McElrath, Yvonne; Emery, Sherry; Saffer, Henry; Chaloupka, Frank J; Szczypka, Glen; Flay, Brian; O'Malley, Patrick M; Johnston, Lloyd D
2006-12-01
To relate exposure to televised youth smoking prevention advertising to youths' smoking beliefs, intentions, and behaviors. We obtained commercial television ratings data from 75 US media markets to determine the average youth exposure to tobacco company youth-targeted and parent-targeted smoking prevention advertising. We merged these data with nationally representative school-based survey data (n = 103,172) gathered from 1999 to 2002. Multivariate regression models controlled for individual, geographic, and tobacco policy factors, and other televised antitobacco advertising. There was little relation between exposure to tobacco company-sponsored, youth-targeted advertising and youth smoking outcomes. Among youths in grades 10 and 12, during the 4 months leading up to survey administration, each additional viewing of a tobacco company parent-targeted advertisement was, on average, associated with lower perceived harm of smoking (odds ratio [OR]=0.93; confidence interval [CI]=0.88, 0.98), stronger approval of smoking (OR=1.11; CI=1.03,1.20), stronger intentions to smoke in the future (OR=1.12; CI=1.04,1.21), and greater likelihood of having smoked in the past 30 days (OR=1.12; CI=1.04,1.19). Exposure to tobacco company youth-targeted smoking prevention advertising generally had no beneficial outcomes for youths. Exposure to tobacco company parent-targeted advertising may have harmful effects on youth, especially among youths in grades 10 and 12.
Exploring Reference Group Effects on Teachers' Nominations of Gifted Students
ERIC Educational Resources Information Center
Rothenbusch, Sandra; Zettler, Ingo; Voss, Thamar; Lösch, Thomas; Trautwein, Ulrich
2016-01-01
Teachers are often asked to nominate students for enrichment programs for gifted children, and studies have repeatedly indicated that students' intelligence is related to their likelihood of being nominated as gifted. However, it is unknown whether class-average levels of intelligence influence teachers' nominations as suggested by theory--and…
Importance of Depression in Diabetes.
ERIC Educational Resources Information Center
Lustman, Patrick J.; Clouse, Ray E.; Anderson, Ryan J.
Diabetes doubles the likelihood of comorbid depression, which presents as major depression in 11% and subsyndromal depression in 31% of patients with the medical illness. The course of depression is chronic, and afflicted patients suffer an average of one episode annually. Depression has unique importance in diabetes because of its association…
Cost-Aware Design of a Discrimination Strategy for Unexploded Ordnance Cleanup
2011-02-25
[Extraction fragment: acronym list and table residue from the report. Acronyms include ANN: Artificial Neural Network; AUC: Area Under the Curve; BRAC: Base Realignment And Closure; DLRT: Distance Likelihood Ratio Test; EER: (truncated). The table residue lists classifier types such as Artificial Neural Network (ANN), described as discriminative, aggregate, and nonparametric or parametric.]
NASA Astrophysics Data System (ADS)
Cui, Yong; Cao, Wenzhou; Li, Quan; Shen, Hua; Liu, Chao; Deng, Junpeng; Xu, Jiangfeng; Shao, Qiang
2016-05-01
Previous studies indicate that prostate cancer antigen 3 (PCA3) is highly expressed in prostatic tumors. However, its clinical value has not been characterized. The aim of this study was to investigate the clinical value of the urine PCA3 test in the diagnosis of prostate cancer by pooling the published data. Clinical trials utilizing the urine PCA3 test for diagnosing prostate cancer were retrieved from PubMed and Embase. A total of 46 clinical trials including 12,295 subjects were included in this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio (+LR), negative likelihood ratio (-LR), diagnostic odds ratio (DOR) and area under the curve (AUC) were 0.65 (95% confidence interval [CI]: 0.63-0.66), 0.73 (95% CI: 0.72-0.74), 2.23 (95% CI: 1.91-2.62), 0.48 (95% CI: 0.44-0.52), 5.31 (95% CI: 4.19-6.73) and 0.75 (95% CI: 0.74-0.77), respectively. In conclusion, the urine PCA3 test has acceptable sensitivity and specificity for the diagnosis of prostate cancer and can be used as a non-invasive method for that purpose.
Likelihood ratio meta-analysis: New motivation and approach for an old method.
Dormuth, Colin R; Filion, Kristian B; Platt, Robert W
2016-03-01
A 95% confidence interval (CI) in an updated meta-analysis may not have the expected 95% coverage. If a meta-analysis is simply updated with additional data, then the resulting 95% CI will be wrong because it will not have accounted for the fact that the earlier meta-analysis failed or succeeded to exclude the null. This situation can be avoided by using the likelihood ratio (LR) as a measure of evidence that does not depend on type-1 error. We show how an LR-based approach, first advanced by Goodman, can be used in a meta-analysis to pool data from separate studies to quantitatively assess where the total evidence points. The method works by estimating the log-likelihood ratio (LogLR) function from each study. Those functions are then summed to obtain a combined function, which is then used to retrieve the total effect estimate, and a corresponding 'intrinsic' confidence interval. Using as illustrations the CAPRIE trial of clopidogrel versus aspirin in the prevention of ischemic events, and our own meta-analysis of higher potency statins and the risk of acute kidney injury, we show that the LR-based method yields the same point estimate as the traditional analysis, but with an intrinsic confidence interval that is appropriately wider than the traditional 95% CI. The LR-based method can be used to conduct both fixed effect and random effects meta-analyses, it can be applied to old and new meta-analyses alike, and results can be presented in a format that is familiar to a meta-analytic audience. Copyright © 2016 Elsevier Inc. All rights reserved.
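A toy sketch of the summation step under a normal approximation to each study's likelihood: per-study log-likelihood ratio curves are summed over a grid of effect sizes, and an interval is read off at a likelihood threshold. The per-study estimates, standard errors, and the 1/8 threshold below are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

# Hypothetical per-study log-hazard-ratio estimates and standard errors
b = np.array([0.25, 0.10, 0.40])
se = np.array([0.10, 0.15, 0.20])

theta = np.linspace(-0.5, 1.0, 2001)
# Normal-approximation log-likelihood per study, summed across studies
loglik = -(((theta[:, None] - b) / se) ** 2 / 2).sum(axis=1)
loglr = loglik - loglik.max()            # combined log-LR relative to the MLE

mle = theta[np.argmax(loglr)]
support = theta[loglr >= np.log(1 / 8)]  # 1/8 likelihood ("support") interval
print(mle, support.min(), support.max())
```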
Statistical inference methods for sparse biological time series data.
Ndukum, Juliet; Fonseca, Luís L; Santos, Helena; Voit, Eberhard O; Datta, Susmita
2011-04-25
Comparing metabolic profiles under different biological perturbations has become a powerful approach to investigating the functioning of cells. The profiles can be taken as single snapshots of a system, but more information is gained if they are measured longitudinally over time. The results are short time series consisting of relatively sparse data that cannot be analyzed effectively with standard time series techniques, such as autocorrelation and frequency domain methods. In this work, we study longitudinal time series profiles of glucose consumption in the yeast Saccharomyces cerevisiae under different temperatures and preconditioning regimens, which we obtained with methods of in vivo nuclear magnetic resonance (NMR) spectroscopy. For the statistical analysis we first fit several nonlinear mixed effect regression models to the longitudinal profiles and then used an ANOVA likelihood ratio method in order to test for significant differences between the profiles. The proposed methods are capable of distinguishing metabolic time trends resulting from different treatments and associate significance levels to these differences. Among several nonlinear mixed-effects regression models tested, a three-parameter logistic function represents the data with highest accuracy. ANOVA and likelihood ratio tests suggest that there are significant differences between the glucose consumption rate profiles for cells that had been--or had not been--preconditioned by heat during growth. Furthermore, pair-wise t-tests reveal significant differences in the longitudinal profiles for glucose consumption rates between optimal conditions and heat stress, optimal and recovery conditions, and heat stress and recovery conditions (p-values <0.0001). We have developed a nonlinear mixed effects model that is appropriate for the analysis of sparse metabolic and physiological time profiles. The model permits sound statistical inference procedures, based on ANOVA likelihood ratio tests, for testing the significance of differences between short time course data under different biological perturbations.
Arellano, M; Garcia-Caselles, M P; Pi-Figueras, M; Miralles, R; Torres, R M; Aguilera, A; Cervera, A M
2004-01-01
The aim was to evaluate the clinical usefulness of the mini nutritional assessment (MNA) for identifying malnutrition in elderly patients with cognitive impairment admitted to a geriatric convalescence unit (intermediate care facility). Sixty-three patients with cognitive impairment were studied. Cognitive impairment was considered present when mini mental state examination (MMSE) scores were below 21. The MNA and a nutritional evaluation according to the sequential model of the American Institute of Nutrition (AIN) were performed at admission. According to the AIN criteria, malnutrition was considered present if there were abnormalities in at least one of the following parameters: albumin, cholesterol, body mass index (BMI), and brachial circumference. Based on these criteria, 27 patients (42.8%) proved to be undernourished at admission, whereas taking the original MNA scores, 39 patients (61.9%) were undernourished, 23 (36.5%) were at risk of malnutrition, and 1 (1.5%) was normal. The analyzed population was divided into four categories (quartiles) of MNA score: very low (≤13.5), low (>13.5 and ≤16), intermediate (>16 and ≤18.5) and high (>18.5). Likelihood ratios for each MNA quartile were obtained by dividing the percentage of patients in a given MNA category who were undernourished (according to AIN) by the percentage of patients in the same MNA category who were not undernourished. In the very low MNA quartile, this likelihood ratio was 2.79, and for the low MNA quartile it was 0.49. For the intermediate and high MNA categories, likelihood ratios were 1.0 and 0.07, respectively. In the present study, the MNA identified undernourished patients with a high clinical diagnostic impact only when very low scores (≤13) were obtained.
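The quartile-specific ratios can be computed as multilevel (category-specific) likelihood ratios; a minimal sketch with hypothetical per-quartile counts (only the group totals of 27 undernourished and 36 not undernourished are taken from the abstract).

```python
def category_lr(diseased_counts, healthy_counts):
    """Category-specific (multilevel) likelihood ratios.

    LR for category k = P(result in k | disease) / P(result in k | no disease).
    """
    nd, nh = sum(diseased_counts), sum(healthy_counts)
    return [(d / nd) / (h / nh) for d, h in zip(diseased_counts, healthy_counts)]

# Hypothetical counts per MNA quartile (very low, low, intermediate, high)
print(category_lr([15, 5, 5, 2], [7, 13, 7, 9]))
```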
Walsworth, Matthew K; Doukas, William C; Murphy, Kevin P; Mielcarek, Billie J; Michener, Lori A
2008-01-01
Glenoid labral tears provide a diagnostic challenge. Combinations of items in the patient history and physical examination will provide stronger diagnostic accuracy to suggest the presence or absence of glenoid labral tear than will individual items. Cohort study (diagnosis); Level of evidence, 1. History and examination findings in patients with shoulder pain (N = 55) were compared with arthroscopic findings to determine diagnostic accuracy and intertester reliability. The intertester reliability of the crank, anterior slide, and active compression tests was 0.20 to 0.24. A combined history of popping or catching and positive crank or anterior slide results yielded specificities of 0.91 and 1.00 and positive likelihood ratios of 3.0 and infinity, respectively. A positive anterior slide result combined with either a positive active compression or crank result yielded specificities of 0.91 and positive likelihood ratio of 2.75 and 3.75, respectively. Requiring only a single positive finding in the combination of popping or catching and the anterior slide or crank yielded sensitivities of 0.82 and 0.89 and negative likelihood ratios of 0.31 and 0.33, respectively. The diagnostic accuracy of individual tests in previous studies is quite variable, which may be explained in part by the modest reliability of these tests. The combination of popping or catching with a positive crank or anterior slide result or a positive anterior slide result with a positive active compression or crank test result suggests the presence of a labral tear. The combined absence of popping or catching and a negative anterior slide or crank result suggests the absence of a labral tear.
Tailly, Thomas; Larish, Yaniv; Nadeau, Brandon; Violette, Philippe; Glickman, Leonard; Olvera-Posada, Daniel; Alenezi, Husain; Amann, Justin; Denstedt, John; Razvi, Hassan
2016-04-01
The mineral composition of a urinary stone may influence its surgical and medical treatment. Previous attempts at identifying stone composition based on mean Hounsfield units (HUm) have had varied success. We aimed to evaluate the additional use of the standard deviation of HU (HUsd) to more accurately predict stone composition. We identified patients from two centers who had undergone urinary stone treatment between 2006 and 2013 and had mineral stone analysis and a computed tomography (CT) available. HUm and HUsd of the stones were compared with ANOVA. Receiver operating characteristic analysis with area under the curve (AUC), Youden index, and likelihood ratio calculations were performed. Data were available for 466 patients. The major components were calcium oxalate monohydrate (COM), uric acid, hydroxyapatite, struvite, brushite, cystine, and calcium oxalate dihydrate (COD) in 41.4%, 19.3%, 12.4%, 7.5%, 5.8%, 5.4%, and 4.7% of patients, respectively. The HUm of uric acid stones was significantly lower, and the HUm of brushite stones significantly higher, than that of any other stone type. HUm and HUsd were most accurate in predicting uric acid, with an AUC of 0.969 and 0.851, respectively. The combined use of HUm and HUsd resulted in increased positive predictive value and higher likelihood ratios for identifying a stone's mineral composition for all stone types but COM. To the best of our knowledge, this is the first report of CT data aiding in the prediction of brushite stone composition. Both HUm and HUsd can help predict stone composition, and their combined use results in higher likelihood ratios influencing probability.
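A sketch of the ROC/Youden analysis using synthetic HU distributions; the means and spreads below are invented stand-ins, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)
# Hypothetical mean HU values: uric acid stones tend to be less dense
hu_uric = rng.normal(450, 120, 90)
hu_other = rng.normal(900, 250, 370)

y = np.r_[np.ones_like(hu_uric), np.zeros_like(hu_other)]
score = -np.r_[hu_uric, hu_other]      # lower HU -> more likely uric acid

print("AUC:", roc_auc_score(y, score))
fpr, tpr, thr = roc_curve(y, score)
best = np.argmax(tpr - fpr)            # Youden index J = sens + spec - 1
print("cutoff (HU):", -thr[best], "sens:", tpr[best], "spec:", 1 - fpr[best])
```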
Aragón-Sánchez, J; Lipsky, Benjamin A; Lázaro-Martínez, J L
2011-02-01
To investigate the accuracy of the sequential combination of the probe-to-bone test and plain X-rays for diagnosing osteomyelitis in the foot of patients with diabetes. We prospectively compiled data on a series of 338 patients with diabetes with 356 episodes of foot infection who were hospitalized in the Diabetic Foot Unit of La Paloma Hospital from 1 October 2002 to 30 April 2010. For each patient we did a probe-to-bone test at the time of the initial evaluation and then obtained plain X-rays of the involved foot. All patients with positive results on either the probe-to-bone test or plain X-ray underwent an appropriate surgical procedure, which included obtaining a bone specimen that was processed for histology and culture. We calculated the sensitivity, specificity, predictive values and likelihood ratios of the procedures, using the histopathological diagnosis of osteomyelitis as the criterion standard. Overall, 72.4% of patients had histologically proven osteomyelitis, 85.2% of whom had positive bone culture. The performance characteristics of both the probe-to-bone test and plain X-rays were excellent. The sequential diagnostic approach had a sensitivity of 0.97, specificity of 0.92, positive predictive value of 0.97, negative predictive value of 0.93, positive likelihood ratio of 12.8 and negative likelihood ratio of 0.02. Only 6.6% of patients with negative results on both diagnostic studies had osteomyelitis. Clinicians seeing patients in a setting similar to ours (specialized diabetic foot unit with a high prevalence of osteomyelitis) can confidently diagnose diabetic foot osteomyelitis when either the probe-to-bone test or a plain X-ray, or especially both, are positive. © 2011 The Authors. Diabetic Medicine © 2011 Diabetes UK.
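The reported predictive values can be approximately reproduced by chaining likelihood ratios onto the pretest probability in odds form; a minimal sketch, noting that the paper reports a single LR for the sequential strategy, whereas multiplying LRs from separate tests would additionally assume conditional independence.

```python
def posttest_probability(pretest_p, *likelihood_ratios):
    """Bayes' theorem in odds form: chain one or more LRs onto a prior.

    Multiplying LRs from several tests assumes they are conditionally
    independent given disease status.
    """
    odds = pretest_p / (1 - pretest_p)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Pretest prevalence 72.4%; positive sequential strategy, LR+ = 12.8
print(posttest_probability(0.724, 12.8))   # ~0.97, matching the reported PPV
# Negative sequential strategy, LR- = 0.02
print(posttest_probability(0.724, 0.02))   # ~0.05 residual probability
```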
Ablordeppey, Enyo A; Drewry, Anne M; Beyer, Alexander B; Theodoro, Daniel L; Fowler, Susan A; Fuller, Brian M; Carpenter, Christopher R
2017-04-01
We performed a systematic review and meta-analysis to examine the accuracy of bedside ultrasound for confirmation of central venous catheter position and exclusion of pneumothorax compared with chest radiography. PubMed, Embase, Cochrane Central Register of Controlled Trials, reference lists, conference proceedings and ClinicalTrials.gov. Articles and abstracts describing the diagnostic accuracy of bedside ultrasound compared with chest radiography for confirmation of central venous catheters in sufficient detail to reconstruct 2 × 2 contingency tables were reviewed. Primary outcomes included the accuracy of confirming catheter positioning and detecting a pneumothorax. Secondary outcomes included feasibility, interrater reliability, and efficiency to complete bedside ultrasound confirmation of central venous catheter position. Investigators abstracted study details including research design and sonographic imaging technique to detect catheter malposition and procedure-related pneumothorax. Diagnostic accuracy measures included pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Fifteen studies with 1,553 central venous catheter placements were identified with a pooled sensitivity and specificity of catheter malposition by ultrasound of 0.82 (0.77-0.86) and 0.98 (0.97-0.99), respectively. The pooled positive and negative likelihood ratios of catheter malposition by ultrasound were 31.12 (14.72-65.78) and 0.25 (0.13-0.47). The sensitivity and specificity of ultrasound for pneumothorax detection was nearly 100% in the participating studies. Bedside ultrasound reduced mean central venous catheter confirmation time by 58.3 minutes. Risk of bias and clinical heterogeneity in the studies were high. Bedside ultrasound is faster than radiography at identifying pneumothorax after central venous catheter insertion. When a central venous catheter malposition exists, bedside ultrasound will identify four out of every five earlier than chest radiography.
Mohammadi, Seyed-Farzad; Sabbaghi, Mostafa; Z-Mehrjardi, Hadi; Hashemi, Hassan; Alizadeh, Somayeh; Majdi, Mercede; Taee, Farough
2012-03-01
To apply artificial intelligence models to predict the occurrence of posterior capsule opacification (PCO) after phacoemulsification. Farabi Eye Hospital, Tehran, Iran. Clinical-based cross-sectional study. The posterior capsule status of eyes operated on for age-related cataract and the need for laser capsulotomy were determined. After a literature review, data polishing, and expert consultation, 10 input variables were selected. The QUEST algorithm was used to develop a decision tree. Three back-propagation artificial neural networks were constructed with 4, 20, and 40 neurons in 2 hidden layers and trained with the same transfer functions (log-sigmoid and linear transfer) and training protocol with randomly selected eyes. They were then tested on the remaining eyes and the networks compared for their performance. Performance indices were used to compare resultant models with the results of logistic regression analysis. The models were trained using 282 randomly selected eyes and then tested using 70 eyes. Laser capsulotomy for clinically significant PCO was indicated or had been performed 2 years postoperatively in 40 eyes. A sample decision tree was produced with accuracy of 50% (likelihood ratio 0.8). The best artificial neural network, which showed 87% accuracy and a positive likelihood ratio of 8, was achieved with 40 neurons. The area under the receiver-operating-characteristic curve was 0.71. In comparison, logistic regression reached accuracy of 80%; however, the likelihood ratio was not measurable because the sensitivity was zero. A prototype artificial neural network was developed that predicted posterior capsule status (requiring capsulotomy) with reasonable accuracy. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
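A rough sketch of such a classifier using scikit-learn on synthetic stand-in data; the paper's best network (40 neurons across 2 hidden layers, log-sigmoid and linear transfer functions) and its clinical inputs differ in detail, though the 282/70 train/test split below matches the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(352, 10))   # 10 preoperative input variables (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=352) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=282, random_state=0)

# Two hidden layers with logistic (log-sigmoid) activations
net = MLPClassifier(hidden_layer_sizes=(20, 20), activation="logistic",
                    max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", net.score(X_te, y_te))
```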
Simental-Mendía, Luis E; Simental-Mendía, Esteban; Rodríguez-Hernández, Heriberto; Rodríguez-Morán, Martha; Guerrero-Romero, Fernando
2016-01-01
Introduction and aim. Given that early identification of non-alcoholic fatty liver disease (NAFLD) is an important issue for the primary prevention of hepatic disease, the objectives of this study were to evaluate the efficacy of the product of triglyceride and glucose levels (TyG) for screening for simple steatosis and non-alcoholic steatohepatitis (NASH) in asymptomatic women, and to compare its efficacy vs. other biomarkers for recognizing NAFLD. Asymptomatic women aged 20 to 65 years were enrolled into a cross-sectional study. The optimal values of TyG for screening for simple steatosis and NASH were established on a receiver operating characteristic scatter plot; the sensitivity, specificity, and likelihood ratios of the TyG index were estimated versus liver biopsy. Based on sensitivity and specificity, the efficacy of TyG was compared versus well-known clinical biomarkers for recognizing NAFLD. A total of 50 asymptomatic women were enrolled. The best cutoff point of TyG for screening simple steatosis was 4.58 (sensitivity 0.94, specificity 0.69); in addition, the best cutoff point of the TyG index for screening NASH was 4.59 (sensitivity 0.87, specificity 0.69). The positive and negative likelihood ratios were 3.03 and 0.08 for simple steatosis, and 2.80 and 0.18 for NASH. Compared with the SteatoTest, NashTest, Fatty Liver Index, and Algorithm, the TyG was shown to be the best screening test. TyG has high sensitivity and a low negative likelihood ratio; compared with other clinical biomarkers, the TyG was shown to be the best test for screening for simple steatosis and NASH.
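One common formulation of the index, sketched below, is TyG = ln(fasting triglycerides × fasting glucose)/2 with both in mg/dL, which yields values on the scale of the cutoffs reported here; formulations vary across reports, so treat the exact expression as an assumption rather than this study's definition.

```python
import math

def tyg_index(triglycerides_mg_dl, glucose_mg_dl):
    """TyG index, one common formulation: ln(fasting TG x fasting glucose)/2.

    Units are mg/dL; the exact formulation and cutoffs (e.g. 4.58/4.59 in
    this study) should be treated as study-specific.
    """
    return math.log(triglycerides_mg_dl * glucose_mg_dl) / 2

print(tyg_index(150, 100))  # ~4.81, above the 4.58 screening cutoff used here
```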
A guideline for the validation of likelihood ratio methods used for forensic evidence evaluation.
Meuwly, Didier; Ramos, Daniel; Haraksim, Rudolf
2017-07-01
This Guideline proposes a protocol for the validation of forensic evaluation methods at the source level, using the Likelihood Ratio framework as defined within the Bayes' inference model. In the context of the inference of identity of source, the Likelihood Ratio is used to evaluate the strength of the evidence for a trace specimen, e.g. a fingermark, and a reference specimen, e.g. a fingerprint, to originate from common or different sources. Some theoretical aspects of probabilities necessary for this Guideline were discussed prior to its elaboration, which started after a workshop of forensic researchers and practitioners involved in this topic. In the workshop, the following questions were addressed: "which aspects of a forensic evaluation scenario need to be validated?", "what is the role of the LR as part of a decision process?" and "how to deal with uncertainty in the LR calculation?". The questions: "what to validate?" focuses on the validation methods and criteria and "how to validate?" deals with the implementation of the validation protocol. Answers to these questions were deemed necessary with several objectives. First, concepts typical for validation standards [1], such as performance characteristics, performance metrics and validation criteria, will be adapted or applied by analogy to the LR framework. Second, a validation strategy will be defined. Third, validation methods will be described. Finally, a validation protocol and an example of validation report will be proposed, which can be applied to the forensic fields developing and validating LR methods for the evaluation of the strength of evidence at source level under the following propositions. Copyright © 2016. Published by Elsevier B.V.
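One performance metric widely used when validating likelihood ratio methods in this field is the log-likelihood-ratio cost (Cllr); a minimal sketch with made-up LR values (the Guideline's full set of performance characteristics, metrics and criteria is broader than this single measure).

```python
import numpy as np

def cllr(lr_same_source, lr_diff_source):
    """Log-likelihood-ratio cost, an accuracy metric for sets of LRs.

    lr_same_source : LRs computed for pairs known to share a source.
    lr_diff_source : LRs computed for pairs known to have different sources.
    A method that always outputs the neutral LR = 1 scores Cllr = 1;
    lower values indicate better-calibrated, more informative LRs.
    """
    ss = np.asarray(lr_same_source, dtype=float)
    ds = np.asarray(lr_diff_source, dtype=float)
    return 0.5 * (np.mean(np.log2(1 + 1 / ss)) + np.mean(np.log2(1 + ds)))

print(cllr([20, 8, 3.5], [0.1, 0.4, 0.02]))  # well below 1 for these values
```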
Diagnostic Accuracy of the Slump Test for Identifying Neuropathic Pain in the Lower Limb.
Urban, Lawrence M; MacNeil, Brian J
2015-08-01
Diagnostic accuracy study with nonconsecutive enrollment. To assess the diagnostic accuracy of the slump test for neuropathic pain (NeP) in those with low to moderate levels of chronic low back pain (LBP), and to determine whether accuracy of the slump test improves by adding anatomical or qualitative pain descriptors. Neuropathic pain has been linked with poor outcomes, likely due to inadequate diagnosis, which precludes treatment specific for NeP. Current diagnostic approaches are time consuming or lack accuracy. A convenience sample of 21 individuals with LBP, with or without radiating leg pain, was recruited. A standardized neurosensory examination was used to determine the reference diagnosis for NeP. Afterward, the slump test was administered to all participants. Reports of pain location and quality produced during the slump test were recorded. The neurosensory examination designated 11 of the 21 participants with LBP/sciatica as having NeP. The slump test displayed high sensitivity (0.91), moderate specificity (0.70), a positive likelihood ratio of 3.03, and a negative likelihood ratio of 0.13. Adding the criterion of pain below the knee significantly increased specificity to 1.00 (positive likelihood ratio = 11.9). Pain-quality descriptors did not improve diagnostic accuracy. The slump test was highly sensitive in identifying NeP within the study sample. Adding a pain-location criterion improved specificity. Combining the diagnostic outcomes was very effective in identifying all those without NeP and half of those with NeP. Limitations arising from the small and narrow spectrum of participants with LBP/sciatica sampled within the study prevent application of the findings to a wider population. Diagnosis, level 4-.
Scalable gamma-ray camera for wide-area search based on silicon photomultipliers array
NASA Astrophysics Data System (ADS)
Jeong, Manhee; Van, Benjamin; Wells, Byron T.; D'Aries, Lawrence J.; Hammig, Mark D.
2018-03-01
Portable coded-aperture imaging systems based on scintillators and semiconductors have found use in a variety of radiological applications. For stand-off detection of weakly emitting materials, large volume detectors can facilitate the rapid localization of emitting materials. We describe a scalable coded-aperture imaging system based on 5.02 × 5.02 cm2 CsI(Tl) scintillator modules, each partitioned into 4 × 4 × 20 mm3 pixels that are optically coupled to 12 × 12 pixel silicon photo-multiplier (SiPM) arrays. The 144 pixels per module are read-out with a resistor-based charge-division circuit that reduces the readout outputs from 144 to four signals per module, from which the interaction position and total deposited energy can be extracted. All 144 CsI(Tl) pixels are readily distinguishable with an average energy resolution, at 662 keV, of 13.7% FWHM, a peak-to-valley ratio of 8.2, and a peak-to-Compton ratio of 2.9. The detector module is composed of a SiPM array coupled with a 2 cm thick scintillator and modified uniformly redundant array mask. For the image reconstruction, cross correlation and maximum likelihood expectation maximization methods are used. The system shows a field of view of 45° and an angular resolution of 4.7° FWHM.
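A sketch of the position/energy arithmetic typical of a four-signal charge-division readout (Anger-style logic); the actual circuit, scaling, and calibration of the system described here may differ.

```python
def charge_division_position(a, b, c, d):
    """Interaction position and energy from a four-signal charge-division readout.

    a, b, c, d are the four corner signals of one module (Anger-style logic;
    a real system would add per-module calibration and lookup corrections).
    """
    total = a + b + c + d                  # proportional to deposited energy
    x = ((b + d) - (a + c)) / total        # normalized x in [-1, 1]
    y = ((a + b) - (c + d)) / total        # normalized y in [-1, 1]
    return x, y, total

print(charge_division_position(0.2, 0.4, 0.1, 0.3))  # -> (0.4, 0.2, 1.0)
```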
Effect of a laboratory result pager on provider behavior in a neonatal intensive care unit.
Samal, L; Stavroudis, Ta; Miller, Re; Lehmann, Hp; Lehmann, Cu
2011-01-01
A computerized laboratory result paging system (LRPS) that alerts providers about abnormal results ("push") may improve upon active laboratory result review ("pull"). However, implementing such a system in the intensive care setting may be hindered by a low signal-to-noise ratio, which may lead to alert fatigue. To evaluate the impact of an LRPS in a Neonatal Intensive Care Unit. Utilizing paper chart review, we tallied provider orders following an abnormal laboratory result before and after implementation of an LRPS. Orders were compared with a predefined set of appropriate orders for such an abnormal result. The likelihood of a provider response in the post-implementation period as compared to the pre-implementation period was analyzed using logistic regression, controlling for potential confounders. The likelihood of a provider response to an abnormal laboratory result did not change significantly after implementation of the LRPS (odds ratio 0.90, 95% CI 0.63-1.30, p-value 0.58). However, when providers did respond to an alert, the type of response was different: the proportion of repeat laboratory tests increased (26/378 vs. 7/278, p-value = 0.02). Although the laboratory result pager altered healthcare provider behavior in the Neonatal Intensive Care Unit, it did not increase the overall likelihood of provider response.
Phase History Decomposition for efficient Scatterer Classification in SAR Imagery
2011-09-15
[Extraction fragment from the report's front matter: Professor Rick Martin provided key advice on frequency parameter estimation and on the relationship between likelihood ratio testing and least-squares estimation. The remainder of the extracted text is table-of-contents and symbol-list residue (e.g., 6.1.1 Imaging Error Due to Interpolation; 6.2 Subwindow Design and Weighting; MF: matched filter).]
NASA Technical Reports Server (NTRS)
Bueno, R.; Chow, E.; Gershwin, S. B.; Willsky, A. S.
1975-01-01
Research is reported on the problems of failure detection and reliable system design for digital aircraft control systems. Failure modes, cross-detection probability, wrong-time detection, application of performance tools, and the GLR (generalized likelihood ratio) computer package are discussed.
Capturing and Displaying Uncertainty in the Common Tactical/Environmental Picture
2003-09-30
[Extraction fragment: the authors characterized uncertainty for multistatic active detection and incorporated this characterization into a Bayesian track-before-detect system, the Likelihood Ratio Tracker (LRT), to account for prediction uncertainty in multistatic active sonar. The approach has worked well on limited simulation data.]
Effects of Methamphetamine on Vigilance and Tracking during Extended Wakefulness.
1993-09-01
[Extraction fragment: a response-bias measure based on the log likelihood ratio (Green & Swets, 1966; Macmillan & Creelman, 1990) was also derived from hit and false-alarm probabilities in the vigilance task. The remainder of the extracted text is reference-list residue (e.g., Canadian Journal of Psychology; Macmillan, N. E., & Creelman, C. D. (1990), "Response bias: Characteristics of detection...").]
1981-08-01
[Extraction fragment from a technical report: "... Ratio Test Statistic for Sphericity of Complex Multivariate Normal Distribution," C. Fang, P. R. Krishnaiah, and B. N. Nagarsenker, August 1981. Recoverable content: for these tests and their applications in time series, the reader is referred to Krishnaiah (1976); Krishnaiah, Lee and Chang (1976) approximated the null distribution of a certain power of the likelihood ratio statistic.]
Localizing multiple X chromosome-linked retinitis pigmentosa loci using multilocus homogeneity tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ott, J.; Terwilliger, J.D.; Bhattacharya, S.
1990-01-01
Multilocus linkage analysis of 62 family pedigrees with X chromosome-linked retinitis pigmentosa (XLRP) was undertaken to determine the presence of possible multiple disease loci and to reliably estimate their map location. Multilocus homogeneity tests furnish convincing evidence for the presence of two XLRP loci, the likelihood ratio being 6.4 × 10^9:1 in favor of two versus a single XLRP locus, and gave accurate estimates of their map location. In 60-75% of the families, the location of an XLRP gene was estimated at 1 centimorgan distal to OTC, and in 25-40% of the families, an XLRP locus was located halfway between DXS14 (p58-1) and DXZ1 (Xcen), with an estimated recombination fraction of 25% between the two XLRP loci. There is also good evidence for a third XLRP locus, midway between DXS28 (C7) and DXS164 (pERT87), supported by a likelihood ratio of 293:1 for three versus two XLRP loci.
De March, I; Sironi, E; Taroni, F
2016-09-01
Analysis of marks recovered from different crime scenes can be useful to detect a linkage between criminal cases, even though a putative source for the recovered traces is not available. This particular circumstance is often encountered in the early stage of investigations and thus, the evaluation of evidence association may provide useful information for the investigators. This association is evaluated here from a probabilistic point of view: a likelihood ratio based approach is suggested in order to quantify the strength of the evidence of trace association in the light of two mutually exclusive propositions, namely that the n traces come from a common source or from an unspecified number of sources. To deal with this kind of problem, probabilistic graphical models are used, in form of Bayesian networks and object-oriented Bayesian networks, allowing users to intuitively handle with uncertainty related to the inferential problem. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
2014-01-01
Objective To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Method Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Results Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. Conclusions This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses. PMID:23965298
NASA Astrophysics Data System (ADS)
Coelho, Carlos A.; Marques, Filipe J.
2013-09-01
In this paper the authors combine the equicorrelation and equivariance test introduced by Wilks [13] with the likelihood ratio test (l.r.t.) for independence of groups of variables to obtain the l.r.t. of block equicorrelation and equivariance. This test, or its single-block version, may find applications in many areas, such as psychology, education, medicine, and genetics, and such tests are important "in many tests of multivariate analysis, e.g. in MANOVA, Profile Analysis, Growth Curve analysis, etc" [12, 9]. By decomposing the overall hypothesis into the hypothesis of independence of groups of variables and the hypothesis of equicorrelation and equivariance, we are able to obtain the expressions for the overall l.r.t. statistic and its moments. From these we obtain a suitable factorization of the characteristic function (c.f.) of the logarithm of the l.r.t. statistic, which enables us to develop highly manageable and precise near-exact distributions for the test statistic.
Youngstrom, Eric A
2014-03-01
To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses.
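The diagnostic likelihood ratios quoted above follow from a 2x2 cross-classification of scores against diagnoses at a given cutoff. A minimal Python sketch of the calculation (the counts below are invented for illustration and are not the study's data):

    def diagnostic_lrs(tp, fn, fp, tn):
        """Positive and negative diagnostic likelihood ratios from a 2x2 table."""
        sens = tp / (tp + fn)          # P(test positive | disorder)
        spec = tn / (tn + fp)          # P(test negative | no disorder)
        return sens / (1 - spec), (1 - sens) / spec

    # Hypothetical counts for a high cutoff such as Internalizing raw score > 30
    lr_pos, lr_neg = diagnostic_lrs(tp=60, fn=140, fp=16, tn=373)
    print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")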
Avoiding overstating the strength of forensic evidence: Shrunk likelihood ratios/Bayes factors.
Morrison, Geoffrey Stewart; Poh, Norman
2018-05-01
When strength of forensic evidence is quantified using sample data and statistical models, a concern may be raised as to whether the output of a model overestimates the strength of evidence. This is particularly the case when the amount of sample data is small, and hence sampling variability is high. This concern is related to concern about precision. This paper describes, explores, and tests three procedures which shrink the value of the likelihood ratio or Bayes factor toward the neutral value of one. The procedures are: (1) a Bayesian procedure with uninformative priors, (2) use of empirical lower and upper bounds (ELUB), and (3) a novel form of regularized logistic regression. As a benchmark, they are compared with linear discriminant analysis, and in some instances with non-regularized logistic regression. The behaviours of the procedures are explored using Monte Carlo simulated data, and tested on real data from comparisons of voice recordings, face images, and glass fragments. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition
Islam, Md. Rabiul
2014-01-01
The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique is applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676
Detection and Estimation of an Optical Image by Photon-Counting Techniques. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Wang, Lily Lee
1973-01-01
A statistical description of a photoelectric detector is given. The photosensitive surface of the detector is divided into many small areas, and the moment generating function of the photon-counting statistic is derived for a large time-bandwidth product. The detection of a specified optical image in the presence of background light by means of a hypothesis test is discussed. The ideal detector, based on the likelihood ratio of the numbers of photoelectrons ejected from many small areas of the photosensitive surface, is studied and compared with the threshold detector and with a simple detector based on the likelihood ratio of the total number of photoelectrons counted over a finite area of the surface. The intensity of the image is assumed to be spatially Gaussian against a uniformly distributed background light. The numerical approximation by the method of steepest descent is used, and the calculations of the reliabilities of the detectors are carried out by a digital computer.
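For Poisson-distributed photocounts, the log-likelihood ratio between an image-present hypothesis (cell means lam1) and a background-only hypothesis (cell means lam0) reduces to a weighted sum of the counts. A hedged numpy sketch, with invented intensities and a Gaussian-shaped image profile standing in for the spatial model:

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented cell means: a Gaussian-shaped image on a uniform background
    x = np.linspace(-3.0, 3.0, 64)
    lam0 = np.full_like(x, 5.0)               # background-only mean counts per cell
    lam1 = lam0 + 8.0 * np.exp(-x**2 / 2.0)   # background plus image

    def log_likelihood_ratio(counts):
        """Poisson log-LR of 'image present' versus 'background only'."""
        return np.sum(counts * np.log(lam1 / lam0) - (lam1 - lam0))

    counts = rng.poisson(lam1)                # simulate one 'image present' exposure
    print("declare image present:", log_likelihood_ratio(counts) > 0.0)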
Ab initio solution of macromolecular crystal structures without direct methods.
McCoy, Airlie J; Oeffner, Robert D; Wrobel, Antoni G; Ojala, Juha R M; Tryggvason, Karl; Lohkamp, Bernhard; Read, Randy J
2017-04-04
The majority of macromolecular crystal structures are determined using the method of molecular replacement, in which known related structures are rotated and translated to provide an initial atomic model for the new structure. A theoretical understanding of the signal-to-noise ratio in likelihood-based molecular replacement searches has been developed to account for the influence of model quality and completeness, as well as the resolution of the diffraction data. Here we show that, contrary to current belief, molecular replacement need not be restricted to the use of models comprising a substantial fraction of the unknown structure. Instead, likelihood-based methods allow a continuum of applications depending predictably on the quality of the model and the resolution of the data. Unexpectedly, our understanding of the signal-to-noise ratio in molecular replacement leads to the finding that, with data to sufficiently high resolution, fragments as small as single atoms of elements usually found in proteins can yield ab initio solutions of macromolecular structures, including some that elude traditional direct methods.
Micheyl, Christophe; Dai, Huanping
2010-01-01
The equal-variance Gaussian signal-detection-theory (SDT) decision model for the dual-pair change-detection (or “4IAX”) paradigm has been described in earlier publications. In this note, we consider the equal-variance Gaussian SDT model for the related dual-pair AB vs BA identification paradigm. The likelihood ratios, optimal decision rules, receiver operating characteristics (ROCs), and relationships between d' and proportion-correct (PC) are analyzed for two special cases: that of statistically independent observations, which is likely to apply in constant-stimuli experiments, and that of highly correlated observations, which is likely to apply in experiments where stimuli are roved widely across trials or pairs. A surprising outcome of this analysis is that although these two situations lead to different optimal decision rules, the predicted ROCs and proportions of correct responses (PCs) for these two cases are not substantially different, and are either identical or similar to those observed in the basic Yes-No paradigm. PMID:19633356
Feature and score fusion based multiple classifier selection for iris recognition.
Islam, Md Rabiul
2014-01-01
The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique is applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al.
NASA Astrophysics Data System (ADS)
Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.
2017-06-01
In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, fault data are detected by a classification method that separates fault data from normal data. A deep belief network (DBN), a deep learning technique, is applied to classify the fault data. The DBN has a multilayer feature extraction scheme, which is highly sensitive to small variations in the data. Since the classification method cannot identify which sensor is faulty, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated with field data obtained from thermocouple sensors of the fast breeder test reactor.
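The generalized likelihood ratio test replaces the unknown fault magnitude with its maximum-likelihood estimate before forming the ratio. For the textbook case of a mean shift in Gaussian noise the statistic has a closed form; the sketch below uses that case, with placeholder window length, noise level, and threshold (not the paper's reactor settings):

    import numpy as np

    def glr_mean_shift(x, sigma):
        """GLR statistic for a mean shift of unknown size in N(0, sigma^2) noise.

        Twice the log-GLR equals n * xbar^2 / sigma^2 and is chi-square(1) under H0.
        """
        return len(x) * np.mean(x) ** 2 / sigma ** 2

    rng = np.random.default_rng(1)
    residuals = rng.normal(loc=0.4, scale=0.2, size=50)  # simulated faulty-sensor residuals
    print("fault declared:", glr_mean_shift(residuals, sigma=0.2) > 3.84)  # 5% chi2(1) cutoff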
Sousa, Carlos Augusto Moreira de; Bahia, Camila Alves; Constantino, Patrícia
2016-12-01
Brazil has the sixth largest bicycle fleet in the world, and the bicycle is the most used individual transport vehicle in the country. Few studies address the issue of cyclists' accidents and the factors that contribute to or prevent this event. VIVA is a cross-sectional survey and is part of the Violence and Accidents Surveillance System of the Brazilian Ministry of Health. We used a complex sampling design and analyzed the data through multivariate logistic regression, calculating the respective odds ratios. Odds ratios showed a greater likelihood of cyclists' accidents among males, people with less schooling, and residents of urban and periurban areas. People who were not using the bike to commute to work were more likely to suffer an accident. The profile found in this study corroborates the findings of other studies, which argue that the coexistence of cyclists and other means of transportation in the same urban space increases the likelihood of accidents. The construction of bicycle-exclusive spaces and educational campaigns are required.
Pan, Hui; Ba-Thein, William
2018-01-01
Global Pharma Health Fund (GPHF) Minilab™, a semi-quantitative thin-layer chromatography (TLC)-based commercially available test kit, is widely used in drug quality surveillance globally, but its diagnostic accuracy is unclear. We investigated the diagnostic accuracy of the Minilab system for antimicrobials, using high-performance liquid chromatography (HPLC) as the reference standard. Following the Minilab protocols and the Pharmacopoeia of the People's Republic of China protocols, Minilab-TLC and HPLC were used to test five common antimicrobials (506 batches) for relative concentration of active pharmaceutical ingredients. The prevalence of poor-quality antimicrobials determined by Minilab TLC versus HPLC was amoxicillin (0% versus 14.9%), azithromycin (0% versus 17.4%), cefuroxime axetil (14.3% versus 0%), levofloxacin (0% versus 3.0%), and metronidazole (0% versus 38.0%). The Minilab TLC had false-positive and false-negative detection rates of 2.6% (13/506) and 15.2% (77/506), respectively, resulting in the following test characteristics: sensitivity 0%, specificity 97.0%, positive predictive value 0, negative predictive value 0.8, positive likelihood ratio 0, negative likelihood ratio 1.0, diagnostic odds ratio 0, and adjusted diagnostic odds ratio 0.2. This study demonstrates unsatisfactory diagnostic accuracy of the Minilab system in screening poor-quality antimicrobials of common use. Using the Minilab as a stand-alone system for monitoring drug quality should be reconsidered.
The Hypothesis-Driven Physical Examination.
Garibaldi, Brian T; Olson, Andrew P J
2018-05-01
The physical examination remains a vital part of the clinical encounter. However, physical examination skills have declined in recent years, in part because of decreased time at the bedside. Many clinicians question the relevance of the physical examination in the age of technology. A hypothesis-driven approach to teaching and practicing the physical examination emphasizes the performance of maneuvers that can alter the likelihood of disease. Likelihood ratios are diagnostic weights that allow clinicians to estimate the posttest probability of disease. This hypothesis-driven approach to the physical examination increases its value and efficiency, while preserving its cultural role in the patient-physician relationship. Copyright © 2017 Elsevier Inc. All rights reserved.
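The arithmetic behind such likelihood-ratio reasoning is Bayes' theorem in odds form: posttest odds equal pretest odds times the likelihood ratio, which is exactly what a Fagan nomogram reads off graphically. A small Python helper (the example numbers are purely illustrative):

    def posttest_probability(pretest_prob, likelihood_ratio):
        """Bayes' theorem in odds form, as read off a Fagan nomogram."""
        pretest_odds = pretest_prob / (1.0 - pretest_prob)
        posttest_odds = pretest_odds * likelihood_ratio
        return posttest_odds / (1.0 + posttest_odds)

    # A positive exam maneuver with LR = 6 applied to a 30% pretest probability
    print(f"{posttest_probability(0.30, 6.0):.2f}")   # about 0.72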
An ERTS-1 investigation for Lake Ontario and its basin
NASA Technical Reports Server (NTRS)
Polcyn, F. C.; Falconer, A. (Principal Investigator); Wagner, T. W.; Rebel, D. L.
1975-01-01
The author has identified the following significant results. Methods of manual, semi-automatic, and automatic (computer) data processing were evaluated, as were the requirements for spatial physiographic and limnological information. The coupling of specially processed ERTS data with simulation models of the watershed precipitation/runoff process provides potential for water resources management. Optimal and full use of the data requires a mix of data processing and analysis techniques, including single band editing, two band ratios, and multiband combinations. A combination of maximum likelihood ratio and near-IR/red band ratio processing was found to be particularly useful.
Use and interpretation of logistic regression in habitat-selection studies
Keating, Kim A.; Cherry, Steve
2004-01-01
Logistic regression is an important tool for wildlife habitat-selection studies, but the method frequently has been misapplied due to an inadequate understanding of the logistic model, its interpretation, and the influence of sampling design. To promote better use of this method, we review its application and interpretation under 3 sampling designs: random, case-control, and use-availability. Logistic regression is appropriate for habitat use-nonuse studies employing random sampling and can be used to directly model the conditional probability of use in such cases. Logistic regression also is appropriate for studies employing case-control sampling designs, but careful attention is required to interpret results correctly. Unless bias can be estimated or probability of use is small for all habitats, results of case-control studies should be interpreted as odds ratios, rather than probability of use or relative probability of use. When data are gathered under a use-availability design, logistic regression can be used to estimate approximate odds ratios if probability of use is small, at least on average. More generally, however, logistic regression is inappropriate for modeling habitat selection in use-availability studies. In particular, using logistic regression to fit the exponential model of Manly et al. (2002:100) does not guarantee maximum-likelihood estimates, valid probabilities, or valid likelihoods. We show that the resource selection function (RSF) commonly used for the exponential model is proportional to a logistic discriminant function. Thus, it may be used to rank habitats with respect to probability of use and to identify important habitat characteristics or their surrogates, but it is not guaranteed to be proportional to probability of use. Other problems associated with the exponential model also are discussed. We describe an alternative model based on Lancaster and Imbens (1996) that offers a method for estimating conditional probability of use in use-availability studies. Although promising, this model fails to converge to a unique solution in some important situations. Further work is needed to obtain a robust method that is broadly applicable to use-availability studies.
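One point above is worth making concrete: under case-control sampling the slope coefficients of the logistic model remain consistent, so exponentiated slopes are interpretable as odds ratios even though the intercept (and hence any probability of use) is biased by the design. A minimal numpy sketch of maximum-likelihood logistic regression fitted by iteratively reweighted least squares, on simulated use/nonuse data (all values invented):

    import numpy as np

    def logistic_irls(X, y, n_iter=25):
        """Maximum-likelihood logistic regression via iteratively reweighted least squares."""
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            w = p * (1.0 - p)
            # Newton step: beta += (X' W X)^{-1} X' (y - p)
            beta += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - p))
        return beta

    rng = np.random.default_rng(2)
    cover = rng.normal(size=400)                      # one habitat covariate
    p_use = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * cover)))
    y = rng.binomial(1, p_use)                        # 1 = used site, 0 = unused site
    X = np.column_stack([np.ones_like(cover), cover])
    beta = logistic_irls(X, y)
    print("odds ratio per unit of cover:", np.exp(beta[1]))  # slope is interpretable; intercept is design-dependent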
Kamerman, Peter R.; Veliotes, Demetri G. A.; Phillips, Tudor J.; Asboe, David; Boffito, Marta; Rice, Andrew S. C.
2016-01-01
HIV-associated sensory peripheral neuropathy (HIV-SN) afflicts approximately 50% of patients on antiretroviral therapy and is associated with significant neuropathic pain. Simple, accurate diagnostic instruments are required for clinical research and daily practice in both high- and low-resource settings. A 4-item clinical tool (CHANT: Clinical HIV-associated Neuropathy Tool) assessing symptoms (pain and numbness) and signs (ankle reflexes and vibration sense) was developed by selecting and combining the most accurate measurands from a deep phenotyping study of HIV-positive people (Pain In Neuropathy Study–HIV-PINS). CHANT was alpha-tested in silico against the HIV-PINS dataset and then clinically validated and field-tested in HIV-positive cohorts in London, UK and Johannesburg, South Africa. The Utah Early Neuropathy Score (UENS) was used as the reference standard in both settings. In a second step, neuropathic pain in the presence of HIV-SN was assessed using the Douleur Neuropathique en 4 Questions (DN4)-interview and a body map. CHANT achieved high accuracy on alpha-testing, with sensitivity and specificity of 82% and 90%, respectively. In 30 patients in London, CHANT diagnosed 43.3% (13/30) HIV-SN (66.7% with neuropathic pain); sensitivity = 100%, specificity = 85%, and likelihood ratio = 6.7 versus UENS; internal consistency = 0.88 (Cronbach alpha), average item-total correlation = 0.73 (Spearman's Rho), and inter-tester concordance > 0.93 (Spearman's Rho). In 50 patients in Johannesburg, CHANT diagnosed 66% (33/50) HIV-SN (78.8% with neuropathic pain); sensitivity = 74.4%, specificity = 85.7%, and likelihood ratio = 5.29 versus UENS. A positive CHANT score markedly increased pre- to posttest clinical certainty of HIV-SN, from 43% to 83% in London and from 66% to 92% in Johannesburg. In conclusion, a combination of four easily and quickly assessed clinical items can be used to accurately diagnose HIV-SN. The DN4-interview used in the context of bilateral foot pain can be used to identify those with neuropathic pain. PMID:27764177
Reyes-Valdés, M H; Stelly, D M
1995-01-01
Frequencies of meiotic configurations in cytogenetic stocks are dependent on chiasma frequencies in segments defined by centromeres, breakpoints, and telomeres. The expectation maximization algorithm is proposed as a general method to perform maximum likelihood estimations of the chiasma frequencies in the intervals between such locations. The estimates can be translated via mapping functions into genetic maps of cytogenetic landmarks. One set of observational data was analyzed to exemplify application of these methods, results of which were largely concordant with other comparable data. The method was also tested by Monte Carlo simulation of frequencies of meiotic configurations from a monotelodisomic translocation heterozygote, assuming six different sample sizes. The estimate averages were always close to the values given initially to the parameters. The maximum likelihood estimation procedures can be extended readily to other kinds of cytogenetic stocks and allow the pooling of diverse cytogenetic data to collectively estimate lengths of segments, arms, and chromosomes. PMID:7568226
A method for estimating fall adult sex ratios from production and survival data
Wight, H.M.; Heath, R.G.; Geis, A.D.
1965-01-01
This paper presents a method of utilizing data relating to the production and survival of a bird population to estimate a basic fall adult sex ratio. This basic adult sex ratio is an average value derived from average production and survival rates. It is an estimate of the average sex ratio about which the fall adult ratios will fluctuate according to annual variations in production and survival. The basic fall adult sex ratio has been calculated as an asymptotic value which is the limit of an infinite series wherein average population characteristics are used as constants. Graphs are provided that allow the determination of basic sex ratios from production and survival data of a population. Where the respective asymptote has been determined, it may be possible to estimate various production and survival rates by use of variations of the formula for estimating the asymptote.
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2016-12-01
Bayesian multimodel inference is increasingly being used in hydrology. Estimating Bayesian model evidence (BME) is of central importance in many Bayesian multimodel analyses such as Bayesian model averaging and model selection. BME is the overall probability of the model in reproducing the data, accounting for the trade-off between goodness-of-fit and model complexity. Yet estimating BME is challenging, especially for high-dimensional problems with complex sampling spaces. Estimating BME using Monte Carlo numerical methods is preferred, as these methods yield higher accuracy than semi-analytical solutions (e.g., Laplace approximations, BIC, KIC, etc.). However, numerical methods are prone to numerical problems arising from underflow and round-off errors. Although a few studies have alluded to this issue, to our knowledge this is the first study that illustrates these numerical demons. We show that finite-precision arithmetic can impose a threshold on likelihood values and the Metropolis acceptance ratio, which results in trimming parameter regions (when the likelihood function falls below the smallest floating-point number a computer can represent) and in corrupting the empirical measures of the random states of the MCMC sampler (when using the log-likelihood function). We consider two of the most powerful numerical estimators of BME: the path sampling method of thermodynamic integration (TI) and the importance sampling method of steppingstone sampling (SS). We also consider the two most widely used numerical estimators, the prior-sampling arithmetic mean (AM) and the posterior-sampling harmonic mean (HM). We investigate the vulnerability of these four estimators to the numerical demons. Interestingly, the most biased estimator, namely the HM, turned out to be the least vulnerable. While it is generally assumed that AM is a bias-free estimator that will always approximate the true BME given enough computational effort, we show that arithmetic underflow can hamper AM, resulting in severe underestimation of BME. TI turned out to be the most vulnerable, resulting in BME overestimation. Finally, we show how SS can be made largely invariant to rounding errors, yielding the most accurate and computationally efficient results. These results are useful for Monte Carlo estimation of Bayesian model evidence.
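The standard defense against the underflow described here is to keep every quantity in log space and combine samples with the log-sum-exp identity. A sketch of a numerically stable harmonic-mean BME estimator operating on posterior log-likelihoods (the sample values are synthetic):

    import numpy as np
    from scipy.special import logsumexp

    def log_bme_harmonic_mean(log_liks):
        """Harmonic-mean estimator of log(BME), computed entirely in log space.

        1/BME ~= mean(1/L_i) over posterior samples, so
        log(BME) ~= log(n) - logsumexp(-log L_i); no likelihood is exponentiated directly.
        """
        log_liks = np.asarray(log_liks)
        return np.log(len(log_liks)) - logsumexp(-log_liks)

    # Log-likelihoods far too small for exp() to represent in double precision
    log_liks = np.random.default_rng(3).normal(loc=-5000.0, scale=5.0, size=10_000)
    print(log_bme_harmonic_mean(log_liks))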
Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.
2015-01-01
We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
Climate change and the detection of trends in annual runoff
McCabe, G.J.; Wolock, D.M.
1997-01-01
This study examines the statistical likelihood of detecting a trend in annual runoff given an assumed change in mean annual runoff, the underlying year-to-year variability in runoff, and serial correlation of annual runoff. Means, standard deviations, and lag-1 serial correlations of annual runoff were computed for 585 stream gages in the conterminous United States, and these statistics were used to compute the probability of detecting a prescribed trend in annual runoff. Assuming a linear 20% change in mean annual runoff over a 100 yr period and a significance level of 95%, the average probability of detecting a significant trend was 28% among the 585 stream gages. The largest probability of detecting a trend was in the northwestern U.S., the Great Lakes region, the northeastern U.S., the Appalachian Mountains, and parts of the northern Rocky Mountains. The smallest probability of trend detection was in the central and southwestern U.S., and in Florida. Low probabilities of trend detection were associated with low ratios of mean annual runoff to the standard deviation of annual runoff and with high lag-1 serial correlation in the data.
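Detection probabilities of this kind are straightforward to approximate by Monte Carlo: superimpose a linear trend on lag-1 autocorrelated noise and count how often a slope test rejects at the chosen significance level. A sketch (record length, variability, and autocorrelation are placeholders, not the study's station statistics):

    import numpy as np
    from scipy import stats

    def trend_detection_power(n_years=100, change=0.20, cv=0.5, rho=0.2,
                              n_sim=2000, alpha=0.05, seed=4):
        """Monte Carlo probability of detecting a linear change in mean annual runoff."""
        rng = np.random.default_rng(seed)
        t = np.arange(n_years)
        mean = 1.0 + change * t / (n_years - 1)          # linear 20% change in the mean
        hits = 0
        for _ in range(n_sim):
            e = rng.normal(size=n_years)
            for i in range(1, n_years):                  # impose lag-1 serial correlation
                e[i] = rho * e[i - 1] + np.sqrt(1.0 - rho**2) * e[i]
            runoff = mean + cv * e
            hits += stats.linregress(t, runoff).pvalue < alpha
        return hits / n_sim

    print(trend_detection_power())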
NASA Astrophysics Data System (ADS)
Rochette, P.
1994-12-01
In their letter, Lorio et al. (1993) recently explored the likelihood that the deflection, with respect to present-day magnetic North, of dipolar lower crustal magnetic anomalies is caused by an induced magnetization deflected by strong anisotropy of magnetic susceptibility (AMS), rather than by the usual explanation of an ancient natural remanent magnetization of a rotated body. Such an alternative would solve the theoretical problems raised by the stability of natural remanent magnetization (NRM) at high temperature in the usually coarse-grained magnetite-bearing source rocks necessary to create large magnetic anomalies (Shive, 1989). They present a case study of two deep anomalies in southern Italy where the deflection is 30 to 40 deg. From a model of an anisotropic cubic source and an AMS dataset from representative deep crustal rocks from various parts of the world, they conclude that no significant deflection of the anomaly axis can be due to the average anisotropy ratio P' = 1.5 observed in the dataset.
Makeyev, Oleksandr; Liu, Xiang; Koka, Kanthaiah; Kay, Steven M; Besio, Walter G
2011-01-01
As epilepsy affects approximately one percent of the world population, electrical stimulation of the brain has recently shown potential as an additive seizure control therapy. In this study we applied noninvasive transcranial focal stimulation (TFS) via concentric ring electrodes on the scalp of rats after inducing seizures with pentylenetetrazole (PTZ) to assess the effect of TFS on electrographic activity. Grand average power spectral densities were calculated to compare different stages of seizure development. They showed a significant difference between the TFS-treated group and the control group. In the TFS-treated group, after TFS, the power spectral density was reduced further towards a pre-seizure "baseline" than it was for the control group. The difference is most drastic in the delta, theta, and alpha frequency bands. Application of a generalized likelihood ratio test showed that TFS significantly (p<0.001) reduced the power of electrographic seizure activity in the TFS-treated group compared to controls in more than 86% of the cases. These results suggest that TFS may have an anticonvulsant effect.
Variations in the OM/OC ratio of urban organic aerosol next to a major roadway.
Brown, Steven G; Lee, Taehyoung; Roberts, Paul T; Collett, Jeffrey L
2013-12-01
Understanding the organic matter/organic carbon (OM/OC) ratio in ambient particulate matter (PM) is critical to achieve mass closure in routine PM measurements, to assess the sources of and the degree of chemical processing organic aerosol particles have undergone, and to relate ambient pollutant concentrations to health effects. Of particular interest is how the OM/OC ratio varies in the urban environment, where strong spatial and temporal gradients in source emissions are common. We provide results of near-roadway high-time-resolution PM1 OM concentration and OM/OC ratio observations during January 2008 at Fyfe Elementary School in Las Vegas, NV, 18 m from the U.S. 95 freeway soundwall, measured with an Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-AMS). The average OM/OC ratio was 1.54 (+/- 0.20 standard deviation), typical of environments with a low amount of secondary aerosol formation. The 2-min average OM/OC ratios varied between 1.17 and 2.67, and daily average OM/OC ratios varied between 1.44 and 1.73. The ratios were highest during periods of low OM concentrations and generally low during periods of high OM concentrations. OM/OC ratios were low (1.52 +/- 0.14, on average) during the morning rush hour (average OM = 2.4 microg/m3), when vehicular emissions dominate this near-road measurement site. The ratios were slightly lower (1.46 +/- 0.10) in the evening (average OM = 6.3 microg/m3), when a combination of vehicular and fresh residential biomass burning emissions was typically present during times with temperature inversions. The hourly averaged OM/OC ratio peaked at 1.66 at midday. OM concentrations were similar regardless of whether the monitoring site was downwind or upwind of the adjacent freeway throughout the day, though they were higher during stagnant conditions (wind speed < 0.5 m/sec). The OM/OC ratio generally varied more with time of day than with wind direction and speed.
NASA Technical Reports Server (NTRS)
Tranter, W. H.; Turner, M. D.
1977-01-01
Techniques are developed to estimate power gain, delay, signal-to-noise ratio, and mean square error in digital computer simulations of lowpass and bandpass systems. The techniques are applied to analog and digital communications. The signal-to-noise ratio estimates are shown to be maximum likelihood estimates in additive white Gaussian noise. The methods are seen to be especially useful for digital communication systems where the mapping from the signal-to-noise ratio to the error probability can be obtained. Simulation results show the techniques developed to be accurate and quite versatile in evaluating the performance of many systems through digital computer simulation.
Two-Year Impacts of Opportunity NYC by Families' Likelihood of Earning Rewards
ERIC Educational Resources Information Center
Berg, Juliette; Morris, Pamela; Aber, J. Lawrence
2011-01-01
Experimental approaches can help disentangle the impacts of policies from the effects of individual characteristics, but the heterogeneity of implementation inherent in studies with complex program designs may mask average treatment impacts (Morris & Hendra, 2009). In the case of the Opportunity NYC-Family Rewards (ONYC-Family Rewards),…
Undergraduate Financial Aid and Subsequent Giving Behavior. Discussion Paper.
ERIC Educational Resources Information Center
Dugan, Kelly; Mullin, Charles H.; Siegfried, John J.
Data on 2,822 Vanderbilt University graduates were used to investigate alumni giving behavior during the 8 years after graduation. A two-stage model accounting for individual truncation was used first to estimate the likelihood of making a contribution and second to estimate the average gift size conditional on contributing. The type of financial…
Online Course-Taking and Student Outcomes in California Community Colleges
ERIC Educational Resources Information Center
Hart, Cassandra M. D.; Friedmann, Elizabeth; Hill, Michael
2018-01-01
This paper uses fixed effects analyses to estimate differences in student performance under online versus face-to-face course delivery formats in the California Community College system. On average, students have poorer outcomes in online courses in terms of the likelihood of course completion, course completion with a passing grade, and receiving…
PREDICTING ACADEMIC SUCCESS BEYOND HIGH SCHOOL.
ERIC Educational Resources Information Center
JEX, FRANK B.
These tables are intended to predict which Utah college curriculum gives a student the most likelihood of success. They use high school average (HSA) and academic achievement or aptitude tests. The study is designed on conclusions from earlier work: (1) the main hurdle for the freshman is the required general education core, (2) GPAs are…
Empirical likelihood method for non-ignorable missing data problems.
Guan, Zhong; Qin, Jing
2017-01-01
The missing-response problem is ubiquitous in survey sampling, medical, social science, and epidemiology studies. It is well known that non-ignorable missingness, in which the missingness of a response depends on its own value, is the most difficult missing-data problem. In the statistical literature, unlike for the ignorable missing-data problem, not many papers on non-ignorable missing data are available, apart from fully parametric model based approaches. In this paper we study a semiparametric model for non-ignorable missing data in which the missing probability is known up to some parameters, but the underlying distributions are not specified. By employing Owen's (1988) empirical likelihood method, we obtain constrained maximum empirical likelihood estimators of the parameters in the missing probability and of the mean response, which are shown to be asymptotically normal. Moreover, the likelihood ratio statistic can be used to test whether the missingness of the responses is non-ignorable or completely at random. The theoretical results are confirmed by a simulation study. As an illustration, analysis of data from a real AIDS trial shows that the missingness of CD4 counts around two years is non-ignorable and that the sample mean based on the observed data only is biased.
Tree-Based Global Model Tests for Polytomous Rasch Models
ERIC Educational Resources Information Center
Komboz, Basil; Strobl, Carolin; Zeileis, Achim
2018-01-01
Psychometric measurement models are only valid if measurement invariance holds between test takers of different groups. Global model tests, such as the well-established likelihood ratio (LR) test, are sensitive to violations of measurement invariance, such as differential item functioning and differential step functioning. However, these…
Medicaid capital reimbursement policy and environmental artifacts of nursing home culture change.
Miller, Susan C; Cohen, Neal; Lima, Julie C; Mor, Vincent
2014-02-01
To examine how Medicaid capital reimbursement policy is associated with nursing homes (NHs) having high proportions of private rooms and small households. Through a 2009/2010 NH national survey, we identified NHs having small households and high proportions of private rooms (≥76%). A survey of state Medicaid officials and a policy document review provided 2009 policy data. Facility- and county-level covariates were from the Online Survey, Certification and Reporting system, the Area Resource File, and aggregated resident assessment data (minimum data set). The policy of interest was the presence of traditional versus fair-rental capital reimbursement policy. Average Medicaid per diem rates and the presence of NH pay-for-performance (p4p) reimbursement were also examined. A total of 1,665 NHs in 40 states were included. Multivariate logistic regression analyses (with clustering on states) were used. In multivariate models, Medicaid capital reimbursement policy was not significantly associated with either outcome. However, there was a significantly greater likelihood of NHs having many private rooms when states had higher Medicaid rates (per $10 increment; adjusted odds ratio [AOR] 1.13; 95% CI 1.049, 1.228) and in states with versus without p4p (AOR 1.78; 95% CI 1.045, 3.036). Also, in states with p4p, NHs had a greater likelihood of having small households (AOR 1.78; 95% CI 1.045, 3.0636). Higher NH Medicaid rates and reimbursement incentives may contribute to a higher presence of two important environmental artifacts of culture change: an abundance of private rooms and small households. However, longitudinal research examining policy change is needed to establish the cause and effect of the associations observed.
Fan, L; Liu, S-Y; Li, Q-C; Yu, H; Xiao, X-S
2012-01-01
Objective To evaluate different features between benign and malignant pulmonary focal ground-glass opacity (fGGO) on multidetector CT (MDCT). Methods 82 pathologically or clinically confirmed fGGOs were retrospectively analysed with regard to demographic data, lesion size and location, attenuation value and MDCT features including shape, margin, interface, internal characteristics and adjacent structure. Differences between benign and malignant fGGOs were analysed using a χ2 test, Fisher's exact test or Mann–Whitney U-test. Morphological characteristics were analysed by binary logistic regression analysis to estimate the likelihood of malignancy. Results There were 21 benign and 61 malignant lesions. No statistical differences were found between benign and malignant fGGOs in terms of demographic data, size, location and attenuation value. The frequency of lobulation (p=0.000), spiculation (p=0.008), spine-like process (p=0.004), well-defined but coarse interface (p=0.000), bronchus cut-off (p=0.003), other air-containing space (p=0.000), pleural indentation (p=0.000) and vascular convergence (p=0.006) was significantly higher in malignant fGGOs than that in benign fGGOs. Binary logistic regression analysis showed that lobulation, interface and pleural indentation were important indicators for malignant diagnosis of fGGO, with the corresponding odds ratios of 8.122, 3.139 and 9.076, respectively. In addition, a well-defined but coarse interface was the most important indicator of malignancy among all interface types. With all three important indicators considered, the diagnostic sensitivity, specificity and accuracy were 93.4%, 66.7% and 86.6%, respectively. Conclusion An fGGO with lobulation, a well-defined but coarse interface and pleural indentation gives a greater than average likelihood of being malignant. PMID:22128130
Firearm Ownership and Acquisition Among Parents With Risk Factors for Self-Harm or Other Violence.
Ladapo, Joseph A; Elliott, Marc N; Kanouse, David E; Schwebel, David C; Toomey, Sara L; Mrug, Sylvie; Cuccaro, Paula M; Tortolero, Susan R; Schuster, Mark A
Recent policy initiatives aiming to reduce firearm morbidity focus on mental health and illness. However, few studies have simultaneously examined mental health and behavioral predictors within families, or their longitudinal association with newly acquiring a firearm. Population-based, longitudinal survey of 4251 parents of fifth-grade students in 3 US metropolitan areas; 2004 to 2011. Multivariate logistic models were used to assess associations between owning or acquiring a firearm and parent mental illness and substance use. Ninety-three percent of parents interviewed were women. Overall, 19.6% of families reported keeping a firearm in the home. After adjustment for confounders, history of depression (adjusted odds ratio [aOR], 1.36; 95% confidence interval [CI], 1.04-1.77), binge drinking (aOR 1.75; 95% CI, 1.14-2.68), and illicit drug use (aOR 1.75; 95% CI, 1.12-2.76) were associated with a higher likelihood of keeping a firearm in the home. After a mean of 3.1 years, 6.1% of parents who did not keep a firearm in the home at baseline acquired one by follow-up and kept it in the home (average annual likelihood = 2.1%). No risk factors for self-harm or other violence were associated with newly acquiring a gun in the home. Families with risk factors for self-harm or other violence have a modestly greater probability of having a firearm in the home compared with families without risk factors, and similar probability of newly acquiring a firearm. Treatment interventions for many of these risk factors might reduce firearm-related morbidity. Copyright © 2016 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
2004-03-01
Allison, Logistic Regression: Using the SAS System (Cary, NC: SAS Institute, Inc, 2001), 57. 23 using the likelihood ratio that SAS generates...21, respectively. 33 Jesse M. Rothstein, College Performance Predictions and the SAT (Berkeley, CA: UC
On the use of the likelihood ratio for forensic evaluation: response to Fenton et al.
Biedermann, Alex; Hicks, Tacha; Taroni, Franco; Champod, Christophe; Aitken, Colin
2014-07-01
This letter to the Editor comments on the article "When 'neutral' evidence still has probative value (with implications from the Barry George case)" by N. Fenton et al. [1] (2014). Copyright © 2014 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
Ramsay-Curve Differential Item Functioning
ERIC Educational Resources Information Center
Woods, Carol M.
2011-01-01
Differential item functioning (DIF) occurs when an item on a test, questionnaire, or interview has different measurement properties for one group of people versus another, irrespective of true group-mean differences on the constructs being measured. This article is focused on item response theory based likelihood ratio testing for DIF (IRT-LR or…
Diffuse Prior Monotonic Likelihood Ratio Test for Evaluation of Fused Image Quality Measures
2011-02-01
852–864. [25] W. Mendenhall, R. L. Scheaffer, and D. D. Wackerly, Mathematical Statistics With Applications, 3rd ed. Boston, MA: Duxbury Press, 1986...Professor and holds the Robert W. Wieseman Chaired Research Professorship in Electrical Engineering. His research interests include signal
Stochastic Ordering Using the Latent Trait and the Sum Score in Polytomous IRT Models.
ERIC Educational Resources Information Center
Hemker, Bas T.; Sijtsma, Klaas; Molenaar, Ivo W.; Junker, Brian W.
1997-01-01
Stochastic ordering properties are investigated for a broad class of item response theory (IRT) models for which the monotone likelihood ratio does not hold. A taxonomy is given for nonparametric and parametric models for polytomous models based on the hierarchical relationship between the models. (SLD)
Human Behavior Drift Detection in a Smart Home Environment.
Masciadri, Andrea; Trofimova, Anna A; Matteucci, Matteo; Salice, Fabio
2017-01-01
The proposed system supports independent living for elderly people by providing an early indicator of habit changes that might be relevant to the diagnosis of disease. It relies on a Hidden Markov Model to describe behavior from observed sensor data, while a likelihood ratio test detects variation between different time periods.
Sensitivity of Fit Indices to Misspecification in Growth Curve Models
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.
2010-01-01
This study investigated the sensitivity of fit indices to model misspecification in within-individual covariance structure, between-individual covariance structure, and marginal mean structure in growth curve models. Five commonly used fit indices were examined, including the likelihood ratio test statistic, root mean square error of…
Logistic Approximation to the Normal: The KL Rationale
ERIC Educational Resources Information Center
Savalei, Victoria
2006-01-01
A rationale is proposed for approximating the normal distribution with a logistic distribution using a scaling constant based on minimizing the Kullback-Leibler (KL) information, that is, the expected amount of information available in a sample to distinguish between two competing distributions using a likelihood ratio (LR) test, assuming one of…
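The KL-minimizing scaling constant described above can be recovered numerically: integrate the KL divergence from the standard normal to a scaled logistic on a wide grid and minimize over the scale. A sketch (the grid, bounds, and quadrature are arbitrary choices, and the printed constant should be checked against the article):

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import logistic, norm

    def kl_normal_to_scaled_logistic(c):
        """KL(N(0,1) || logistic with scale 1/c), by trapezoidal quadrature."""
        x = np.linspace(-12.0, 12.0, 24001)
        log_p = norm.logpdf(x)
        log_q = np.log(c) + logistic.logpdf(c * x)   # density of a logistic rescaled by 1/c
        return np.trapz(np.exp(log_p) * (log_p - log_q), x)

    res = minimize_scalar(kl_normal_to_scaled_logistic, bounds=(1.0, 2.5), method="bounded")
    print("KL-minimizing scaling constant:", round(res.x, 3))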
RELATIONSHIP FORMATION AND STABILITY IN EMERGING ADULTHOOD: DO SEX RATIOS MATTER?
Warner, Tara D.; Manning, Wendy D.; Giordano, Peggy C.; Longmore, Monica A.
2013-01-01
Research links sex ratios with the likelihood of marriage and divorce. However, whether sex ratios similarly influence precursors to marriage—transitions in and out of dating or cohabiting relationships—is unknown. Utilizing data from the Toledo Adolescent Relationships Study (TARS) and the 2000 census, this study assesses whether sex ratios influence the formation and stability of emerging adults’ romantic relationships. Findings show that relationship formation is unaffected by partner availability, yet the presence of partners increases women’s odds of cohabiting, decreases men’s odds of cohabiting, and increases number of dating partners and cheating among men. It appears that sex ratios influence not only transitions in and out of marriage, but also the process through which individuals search for and evaluate partners prior to marriage. PMID:24265510
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, related work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
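A widely used construction consistent with this description takes the weighted average quaternion as the dominant eigenvector of the weighted outer-product matrix M = sum_i w_i q_i q_i^T, which is insensitive to the sign ambiguity q ~ -q. A numpy sketch (the weights and quaternions are invented):

    import numpy as np

    def average_quaternion(quats, weights):
        """Weighted quaternion average via the dominant eigenvector of sum w_i q_i q_i^T."""
        M = sum(w * np.outer(q, q) for w, q in zip(weights, quats))
        eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
        return eigvecs[:, -1]                  # unit eigenvector of the largest eigenvalue

    q1 = np.array([1.0, 0.0, 0.0, 0.0])                       # identity attitude
    q2 = np.array([np.cos(0.05), np.sin(0.05), 0.0, 0.0])     # small rotation about x
    print(average_quaternion([q1, -q2], weights=[1.0, 1.0]))  # sign flip on q2 is harmless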
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2001-01-01
Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI)>=4) per decade spans 2-11, with 96 percent located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the volcanic time series has higher values since the 1860's than before, being 8.00 in the 1910's (the highest value) and 6.50 in the 1980's, the highest since the 1910's peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990's will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI>=5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confuse our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI>=4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI>=5, and 18 percent for an event with a VEI>=6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
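The quoted probabilities are direct consequences of the Poisson law P(at least one event) = 1 - exp(-lambda). A quick check in Python, with decadal rates chosen only to be consistent with the stated percentages (they are assumptions, not catalog values):

    import math

    # Assumed events-per-decade rates for eruptions of VEI >= 4, 5, and 6
    for vei, rate in [(4, 6.5), (5, 0.67), (6, 0.20)]:
        p_at_least_one = 1.0 - math.exp(-rate)
        print(f"VEI>={vei}: P(>=1 eruption per decade) = {p_at_least_one:.1%}")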
A readers' guide to the interpretation of diagnostic test properties: clinical example of sepsis.
Fischer, Joachim E; Bachmann, Lucas M; Jaeschke, Roman
2003-07-01
One of the most challenging practical and daily problems in intensive care medicine is the interpretation of the results of diagnostic tests. In neonatology and pediatric intensive care the early diagnosis of potentially life-threatening infections is a particularly important issue. A plethora of tests have been suggested to improve diagnostic decision making in the clinical setting of infection, which is the clinical example used in this article. Several criteria that are critical to the evidence-based appraisal of published data are often not adhered to during the study or in its reporting. To enhance the critical appraisal of articles on diagnostic tests, we discuss various measures of test accuracy: sensitivity, specificity, receiver operating characteristic curves, positive and negative predictive values, likelihood ratios, pretest probability, posttest probability, and the diagnostic odds ratio. We suggest the following minimal requirements for reporting on the diagnostic accuracy of tests: a plot of the raw data, multilevel likelihood ratios, the area under the receiver operating characteristic curve, and the cutoff yielding the highest discriminative ability. For critical appraisal it is mandatory to report confidence intervals for each of these measures. Moreover, to allow comparison with the readers' patient population, authors should provide data on study population characteristics, in particular on the spectrum of diseases and illness severity.
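Most of the measures listed derive from a single 2x2 table, and confidence intervals for ratio measures are conventionally computed on the log scale. A hedged sketch for the positive likelihood ratio (the counts are invented, not from a sepsis study):

    import math

    def lr_pos_with_ci(tp, fn, fp, tn, z=1.96):
        """Positive likelihood ratio with an approximate 95% CI on the log scale."""
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        lr_pos = sens / (1.0 - spec)
        # Delta-method SE of the log of a ratio of two proportions
        se = math.sqrt(1/tp - 1/(tp + fn) + 1/fp - 1/(fp + tn))
        return lr_pos, lr_pos * math.exp(-z * se), lr_pos * math.exp(z * se)

    print(lr_pos_with_ci(tp=45, fn=5, fp=20, tn=130))   # LR+ with lower and upper bounds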
Henry, Brandon Michael; Roy, Joyeeta; Ramakrishnan, Piravin Kumar; Vikse, Jens; Tomaszewski, Krzysztof A; Walocha, Jerzy A
2016-07-01
Several studies have explored the use of serum procalcitonin (PCT) in differentiating between bacterial and viral etiologies in children with suspected meningitis. We pooled these studies into a meta-analysis to determine the diagnostic accuracy of PCT. All major databases were searched through March 2015. No date or language restrictions were applied. Eight studies (n = 616 pediatric patients) were included. The serum PCT assay was found to be very accurate for differentiating the etiology of pediatric meningitis, with pooled sensitivity and specificity of 0.96 (95% CI = 0.92-0.98) and 0.89 (95% CI = 0.86-0.92), respectively. The pooled positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio (DOR), and area under the curve (AUC) for PCT were 7.5 (95% CI = 5.6-10.1), 0.08 (95% CI = 0.04-0.14), 142.3 (95% CI = 59.5-340.4), and 0.97 (SE = 0.01), respectively. In 6 studies, PCT was found to be superior to CRP, whose DOR was only 16.7 (95% CI = 8.8-31.7). Our meta-analysis demonstrates that the serum PCT assay is a highly accurate and powerful test for rapidly differentiating between bacterial and viral meningitis in children. © The Author(s) 2015.
The Maximum Likelihood Solution for Inclination-only Data
NASA Astrophysics Data System (ADS)
Arason, P.; Levi, S.
2006-12-01
The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function toward systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling the exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e., at inclinations of +/- 90 degrees, but with relatively well defined dispersion. Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and its mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
Model averaging in linkage analysis.
Matthysse, Steven
2006-06-05
Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc.
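The detailed-balance trick in the abstract above can be sketched concretely: draw the chain's first state exactly from the target by rejection sampling, then evolve it with a Metropolis kernel whose equilibrium is that same target, so every subsequent state also has the target distribution. A toy one-dimensional illustration (the target and proposal are invented, not the genetic-linkage model):

    import numpy as np

    rng = np.random.default_rng(5)

    def target(x):
        """Unnormalized target density; positive everywhere since 1 + 0.5*sin >= 0.5."""
        return np.exp(-0.5 * x**2) * (1.0 + 0.5 * np.sin(3.0 * x))

    def rejection_sample(m=2.0):
        """Exact draw from target, using an N(0,1) proposal with envelope m * exp(-x^2/2)."""
        while True:
            x = rng.normal()
            if rng.uniform() < target(x) / (m * np.exp(-0.5 * x**2)):
                return x

    def metropolis(x0, n_steps=1000, step=1.0):
        """Random-walk Metropolis chain; started at equilibrium, it stays at equilibrium."""
        xs = [x0]
        for _ in range(n_steps):
            proposal = xs[-1] + step * rng.normal()
            if rng.uniform() < target(proposal) / target(xs[-1]):
                xs.append(proposal)
            else:
                xs.append(xs[-1])
        return np.array(xs)

    chain = metropolis(rejection_sample())   # every state is target-distributed
    print(chain.mean())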
NASA Astrophysics Data System (ADS)
Trifonov, A. P.; Korchagin, Yu. E.; Korol'kov, S. V.
2018-05-01
We synthesize the quasi-likelihood, maximum-likelihood, and quasioptimal algorithms for estimating the arrival time and duration of a radio signal with unknown amplitude and initial phase. The discrepancies between the hardware and software realizations of the estimation algorithm are shown. The performance characteristics of the synthesized algorithms are obtained. Asymptotic expressions for the biases, variances, and the correlation coefficient of the arrival-time and duration estimates, which hold true for large signal-to-noise ratios, are derived. The losses in accuracy of the radio-signal arrival-time and duration estimates caused by a priori ignorance of the amplitude and initial phase are determined.
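For a pulse of unknown amplitude and initial phase, a standard route to the arrival-time estimate is to maximize the envelope of the matched-filter output, since the envelope is invariant to the unknown phase. The sketch below illustrates only that general idea (arrival time for a known template; duration estimation is omitted) and is not the paper's synthesized algorithms.

```python
import numpy as np
from scipy.signal import hilbert

def ml_arrival_time(received, template, fs):
    """Arrival-time estimate for a pulse of unknown amplitude and
    initial phase: maximize the envelope of the matched-filter output,
    which does not depend on the unknown phase."""
    analytic = hilbert(template)            # complex analytic template
    # numpy.correlate conjugates its second argument, so |corr| is the
    # envelope of the complex correlation.
    corr = np.abs(np.correlate(received, analytic, mode="valid"))
    return np.argmax(corr) / fs             # delay in seconds
```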
Generalized likelihood ratios for quantitative diagnostic test scores.
Tandberg, D; Deely, J J; O'Malley, A J
1997-11-01
The reduction of quantitative diagnostic test scores to the dichotomous case is a wasteful and unnecessary simplification in the era of high-speed computing. Physicians could make better use of the information embedded in quantitative test results if modern generalized curve estimation techniques were applied to the likelihood functions of Bayes' theorem. Hand calculations could be completely avoided and computed graphical summaries provided instead. Graphs showing posttest probability of disease as a function of pretest probability with confidence intervals (POD plots) would enhance acceptance of these techniques if they were immediately available at the computer terminal when test results were retrieved. Such constructs would also provide immediate feedback to physicians when a valueless test had been ordered.
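The calculation the authors want automated is Bayes' theorem in odds form: posttest odds equal pretest odds times the likelihood ratio. A minimal sketch with hypothetical numbers:

```python
def posttest_probability(pretest_prob, likelihood_ratio):
    """Bayes' theorem in odds form: posttest odds = pretest odds x LR."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# Hypothetical example: 30% pretest probability, test result with LR = 5.2
print(round(posttest_probability(0.30, 5.2), 3))   # 0.69
```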
An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.
ERIC Educational Resources Information Center
De Ayala, R. J.; And Others
Expected a posteriori has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
An Evaluation of the Effects of Variable Sampling on Component, Image, and Factor Analysis.
ERIC Educational Resources Information Center
Velicer, Wayne F.; Fava, Joseph L.
1987-01-01
Principal component analysis, image component analysis, and maximum likelihood factor analysis were compared to assess the effects of variable sampling. Results with respect to degree of saturation and average number of variables per factor were clear and dramatic. Differential effects on boundary cases and nonconvergence problems were also found.…
Land use surveys by means of automatic interpretation of LANDSAT system data
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Lombardo, M. A.; Novo, E. M. L. D.; Niero, M.; Foresti, C.
1981-01-01
Analyses for seven land-use classes are presented: urban area, industrial area, bare soil, cultivated area, pastureland, reforestation, and natural vegetation. The automatic classification of LANDSAT MSS data using a maximum likelihood algorithm shows a 39% average error of omission and a 3.45% error of commission for the seven classes.
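A minimal sketch of the conventional per-pixel Gaussian maximum likelihood classifier for multispectral data, assuming equal class priors; the training-data interface is hypothetical:

```python
import numpy as np

def train_ml_classifier(samples_by_class):
    """Fit per-class mean and covariance from labelled training pixels;
    samples_by_class maps a class name to an (n_pixels, n_bands) array."""
    stats = {}
    for label, X in samples_by_class.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        stats[label] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return stats

def classify(pixel, stats):
    """Assign the class with the highest Gaussian log-likelihood."""
    def loglik(params):
        mu, cov_inv, logdet = params
        d = pixel - mu
        return -0.5 * (logdet + d @ cov_inv @ d)
    return max(stats, key=lambda label: loglik(stats[label]))
```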
Mehler, W Tyler; Keough, Michael J; Pettigrove, Vincent
2018-04-01
Three common false-negative scenarios have been encountered with amendment addition in whole-sediment toxicity identification evaluations (TIEs): dilution of toxicity by amendment addition (i.e., not toxic enough), not enough amendment present to reduce toxicity (i.e., too toxic), and the amendment itself eliciting a toxic response (i.e., secondary amendment effect). One amendment with which all 3 types of false-negatives have been observed is the nonpolar organic amendment (activated carbon or powdered coconut charcoal). The objective of the present study was to reduce the likelihood of encountering false-negatives with this amendment and to increase the value of the whole-sediment TIE bioassay. To do this, the present study evaluated the effects of various activated carbon additions on the survival, growth, emergence, and mean development rate of Chironomus tepperi. Using this information, an alternative method for this amendment was developed, which utilized a combination of multiple amendment addition ratios based on wet weight (1%, lower likelihood of the secondary amendment effect; 5%, higher reduction of contaminant) and nonconventional endpoints (emergence, mean development rate). This alternative method was then validated in the laboratory (using spiked sediments) and with contaminated field sediments. Using these multiple activated carbon ratios in combination with additional endpoints (namely, emergence) reduced the likelihood of all 3 types of false-negatives and provided a more sensitive evaluation of risk. Environ Toxicol Chem 2018;37:1219-1230. © 2017 SETAC.
He, Qiqi; Wang, Hanzhang; Kenyon, Jonathan; Liu, Guiming; Yang, Li; Tian, Junqiang; Yue, Zhongjin; Wang, Zhiping
2015-01-01
To use meta-analysis to determine the accuracy of percutaneous core needle biopsy in the diagnosis of small renal masses (SRMs, ≤4.0 cm). Studies were identified by searching PubMed, Embase, and the Cochrane Library database up to March 2013. Two of the authors independently assessed study quality using the QUADAS-2 tool and extracted data that met the inclusion criteria. The sensitivity, specificity, likelihood ratios, and diagnostic odds ratio (DOR) were investigated, and the summary receiver operating characteristic (SROC) curve was drawn. Deeks' funnel plot was used to evaluate publication bias. A total of 9 studies with 788 patients (803 biopsies) were included. Failed biopsies that were not repeated or could not be resolved from follow-up/surgery results were excluded (232 patients and 353 biopsies). For all cases, the pooled sensitivity was 94.0% (95% CI: 91.0%-95.0%), the pooled positive likelihood ratio was 22.57 (95% CI: 9.20-55.34), the pooled negative likelihood ratio was 0.09 (95% CI: 0.06-0.13), and the pooled DOR was 296.52 (95% CI: 99.42-884.38). The area under the curve of the SROC analysis was 0.959 ± 0.0254. Imaging-guided percutaneous core needle biopsy of small renal masses (SRMs ≤4.0 cm) is highly accurate for the diagnosis of malignant tumors of unknown metastatic status and could be offered to selected patients, after clinical judgment, before surgical intervention is considered.
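The reported quantities are tied together by simple identities: LR+ = sensitivity/(1 - specificity), LR- = (1 - sensitivity)/specificity, and DOR = LR+/LR-. The sketch below uses illustrative values; note that meta-analytic estimates are pooled per quantity, so pooled results such as those above need not satisfy the identities exactly.

```python
def diagnostic_summary(sensitivity, specificity):
    """Likelihood ratios and diagnostic odds ratio from sens/spec."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg, lr_pos / lr_neg   # DOR = LR+ / LR-

lr_pos, lr_neg, dor = diagnostic_summary(0.94, 0.96)  # illustrative values
print(round(lr_pos, 2), round(lr_neg, 3), round(dor, 1))  # 23.5 0.062 376.0
```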
Predicting In-State Workforce Retention After Graduate Medical Education Training.
Koehler, Tracy J; Goodfellow, Jaclyn; Davis, Alan T; Spybrook, Jessaca; vanSchagen, John E; Schuh, Lori
2017-02-01
There is a paucity of literature when it comes to identifying predictors of in-state retention of graduate medical education (GME) graduates, such as the demographic and educational characteristics of these physicians. The purpose was to use demographic and educational predictors to identify graduates from a single Michigan GME sponsoring institution, who are also likely to practice medicine in Michigan post-GME training. We included all residents and fellows who graduated between 2000 and 2014 from 1 of 18 GME programs at a Michigan-based sponsoring institution. Predictor variables identified by logistic regression with cross-validation were used to create a scoring tool to determine the likelihood of a GME graduate to practice medicine in the same state post-GME training. A 6-variable model, which included 714 observations, was identified. The predictor variables were birth state, program type (primary care versus non-primary care), undergraduate degree location, medical school location, state in which GME training was completed, and marital status. The positive likelihood ratio (+LR) for the scoring tool was 5.31, while the negative likelihood ratio (-LR) was 0.46, with an accuracy of 74%. The +LR indicates that the scoring tool was useful in predicting whether graduates who trained in a Michigan-based GME sponsoring institution were likely to practice medicine in Michigan following training. Other institutions could use these techniques to identify key information that could help pinpoint matriculating residents/fellows likely to practice medicine within the state in which they completed their training.
Vaz, Sharmila; Cordier, Reinie; Boyes, Mark; Parsons, Richard; Joosten, Annette; Ciccarelli, Marina; Falkmer, Marita; Falkmer, Torbjorn
2016-01-01
An important characteristic of a screening tool is its discriminant ability, or the measure's accuracy in distinguishing between those with and without mental health problems. The current study examined the inter-rater agreement and screening concordance of the parent and teacher versions of the SDQ at scale, subscale and item levels, with the view of identifying the items that have the most informant discrepancies, and determining whether the concordance between parent and teacher reports on some items has the potential to influence decision making. Cross-sectional data from parent and teacher reports of the mental health functioning of a community sample of 299 students with and without disabilities from 75 different primary schools in Perth, Western Australia were analysed. The study found that: a) intraclass correlations between parent and teacher ratings of children's mental health using the SDQ were fair at the individual child level; b) the SDQ only demonstrated clinical utility when there was agreement between teacher and parent reports using the possible or 90% dichotomisation system; and c) three individual items had positive likelihood ratio scores indicating clinical utility. Of note was the finding that the negative likelihood ratio, or the likelihood of disregarding the absence of a condition when both parents and teachers rate the item as absent, was not significant. Taken together, these findings suggest that the SDQ is not optimised for use in community samples and that further psychometric evaluation of the SDQ in this context is clearly warranted.
Nelson, Winnie W; Desai, Sunita; Damaraju, Chandrasekharrao V; Lu, Lang; Fields, Larry E; Wildgoose, Peter; Schein, Jeffery R
2015-06-01
Maintaining stable levels of anticoagulation using warfarin therapy is challenging. Few studies have examined the stability of the international normalized ratio (INR) in patients with nonvalvular atrial fibrillation (NVAF) who have had ≥6 months' exposure to warfarin anticoagulation for stroke prevention. Our objective was to describe INR control in NVAF patients who had been receiving warfarin for at least 6 months. Using retrospective patient data from the CoagClinic™ database, we analyzed data from NVAF patients treated with warfarin to assess the quality of INR control and possible predictors of poor INR control. Time within, above, and below the recommended INR range (2.0-3.0) was calculated for patients who had received warfarin for ≥6 months and had three or more INR values. The analysis also assessed INR patterns and resource utilization of patients with an INR >4.0. Logistic regression models were used to determine factors associated with poor INR control. Patients (n = 9433) had an average of 1.6 measurements per 30 days. Mean follow-up time was 544 days. Approximately 39% of INR values were out of range, with 23% of INR values being <2.0 and 16% being >3.0. Mean percent time with INR in therapeutic range was 67%; INR <2.0 was 19% and INR >3.0 was 14%. Patients with more than one reading of INR >4.0 (~39%) required an average of one more visit and took 3 weeks to return to an in-range INR. Male sex and age >75 years were predictive of better INR control, whereas a history of heart failure or diabetes was predictive of out-of-range INR values. However, patient characteristics did not predict the likelihood of INR >4.0. Out-of-range INR values remain frequent in patients with NVAF treated with warfarin. Exposure to high INR values was common, resulting in increased resource utilization.
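The abstract does not state how percent time in therapeutic range was computed; the widely used Rosendaal method linearly interpolates INR between visits. A minimal sketch under that assumption, with hypothetical readings:

```python
import numpy as np

def percent_time_in_range(days, inr, low=2.0, high=3.0, steps_per_day=24):
    """Rosendaal-style estimate: linearly interpolate INR between
    measurement days and report the fraction of time in [low, high]."""
    days, inr = np.asarray(days, float), np.asarray(inr, float)
    grid = np.arange(days[0], days[-1], 1.0 / steps_per_day)
    interpolated = np.interp(grid, days, inr)
    return 100.0 * np.mean((interpolated >= low) & (interpolated <= high))

# Hypothetical INR readings on days 0, 14 and 30:
print(round(percent_time_in_range([0, 14, 30], [1.8, 2.6, 3.4]), 1))
```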
Occupational characteristics of cases with asbestos-related diseases in The Netherlands.
Burdorf, Alex; Dahhan, Mohssine; Swuste, Paul
2003-08-01
To describe the occupational background of cases with an asbestos-related disease and to present overall mesothelioma risks across industries with historical exposure to asbestos. For the period 1990-2000, cases were collected from records held by two law firms. Information on jobs held, previous employers, activities performed and specific products used were obtained from patients themselves or next of kin. Branches of industry and occupations were coded and the likelihood of asbestos exposure was assessed. For each branch of industry, the overall risk of mesothelioma was calculated from the ratio of the observed number of mesothelioma cases and the cumulative population-at-risk in the period 1947-1960. In order to compare mesothelioma risks across different industries, risk ratios were calculated for the primary asbestos industry and asbestos user industries relative to all other branches of industry. In total, 710 mesotheliomas and 86 asbestosis cases were available. The average latency period was approximately 40 yr and the average duration of exposure was 22 yr. Ship building and maintenance contributed the largest number of cases (27%), followed by the construction industry (14%), the insulation industry (12%), and the navy and army, primarily related to ship building and maintenance (5%). In the insulation industry, the overall risk of mesothelioma was 5 out of 100 workers, and in the ship building industry, 1 out of 100 workers. The construction industry had an overall risk comparable with many other asbestos-using industries (7 per 10,000 workers), but due to its size claimed many mesothelioma cases. The majority of cases with asbestos-related diseases had experienced their first asbestos exposure prior to 1960. For cases with first asbestos exposure after 1960, a shift was observed from the primary asbestos industry towards asbestos-using industries, such as construction, petroleum refining, and train building and maintenance. Due to the long latency period, asbestos exposure from 1960 to 1980 will cause a considerable number of mesothelioma cases in the next two decades.
Association between childhood sexual abuse and transactional sex in youth aging out of foster care.
Ahrens, Kym R; Katon, Wayne; McCarty, Carolyn; Richardson, Laura P; Courtney, Mark E
2012-01-01
To evaluate the association between history of childhood sexual abuse (CSA) and having transactional sex among adolescents who have been in foster care. We used an existing dataset of youth transitioning out of foster care. Independent CSA variables included self report of history of sexual molestation and rape when participants were, on average, 17 years of age. Our outcome variables were self-report of having transactional sex ever and in the past year, when participants were an average age of 19 years. Separate multiple logistic regression analyses were conducted to assess the associations between CSA variables and transactional sex variables. Initial analyses were performed on both genders; exploratory analyses were then performed evaluating each gender separately. Total N=732; 574 were included in the main analyses. History of sexual molestation was significantly associated with increased odds of having transactional sex, both ever and in the past year (OR [95% CI]: 3.21 [1.26-8.18] and 4.07 [1.33, 12.52], respectively). History of rape was also significantly associated with increased odds of having had transactional sex ever and in the past year (ORs [95% CI]: 3.62 [1.38-9.52] and 3.78 [1.19, 12.01], respectively). Odds ratios in female-only analyses remained significant and were larger in magnitude compared with the main, non-stratified analyses; odds ratios in male-only analyses were non-significant and smaller in magnitude when compared with the main analyses. Both CSA variables were associated with increased likelihood of transactional sex. This association appears to vary by gender. Our results suggest that policymakers for youth in foster care should consider the unique needs of young women with histories of CSA when developing programs to support healthy relationships. Health care providers should also consider adapting screening and counseling practices to reflect the increased risk of transactional sex for female youth in foster care with a history of CSA. Copyright © 2012 Elsevier Ltd. All rights reserved.
Anal signs of child sexual abuse: a case-control study.
Hobbs, Christopher J; Wright, Charlotte M
2014-05-27
There is uncertainty about the nature and specificity of physical signs following anal child sexual abuse. The study investigates the extent to which physical findings discriminate between children with and without a history of anal abuse. Retrospective case note review in a paediatric forensic unit. Cases: all eligible cases from 1990 to 2007 alleging anal abuse. Controls: all children examined anally from 1998 to 2007 with possible physical abuse or neglect with no identified concern regarding sexual abuse. Fisher's exact test (two-tailed) was performed to ascertain the significance of differences for individual signs between cases and controls. To explore the potential role of confounding, logistic regression was used to produce odds ratios adjusted for age and gender. A total of 184 cases (105 boys, 79 girls), average age 98.5 months (range 26 to 179), were compared with 179 controls (94 boys, 85 girls), average age 83.7 months (range 35-193). Of the cases, 136 (74%) had one or more signs described in anal abuse, compared to 29 (16%) controls; 79 (43%) cases and 2 (1.1%) controls had >1 sign. Reflex anal dilatation (RAD) and venous congestion were seen in 22% and 36% of cases but <1% of controls (likelihood ratios (LR) 40 and 60, respectively), anal fissure in 14% of cases and 1.1% of controls (LR 13), and anal laxity in 27% of cases and 3% of controls (LR 10). Novel signs seen significantly more commonly in cases were anal fold changes, swelling and twitching. Erythema, swelling and fold changes were seen most commonly within 7 days of last reported contact; RAD, laxity, venous congestion, fissure and twitching were observed up to 6 months after the alleged assault. Anal findings are more common in children alleging anal abuse than in those presenting with physical abuse or neglect with no concern about sexual abuse. Multiple signs are rare in controls and support disclosed anal abuse.
Antoniou, K M; Margaritopoulos, G A; Goh, N S; Karagiannis, K; Desai, S R; Nicholson, A G; Siafakas, N M; Coghlan, J G; Denton, C P; Hansell, D M; Wells, A U
2016-04-01
To assess the prevalence of combined pulmonary fibrosis and emphysema (CPFE) in systemic sclerosis (SSc) patients with interstitial lung disease (ILD) and the effect of CPFE on the pulmonary function tests used to evaluate the severity of SSc-related ILD and the likelihood of pulmonary hypertension (PH). High-resolution computed tomography (HRCT) scans were obtained in 333 patients with SSc-related ILD and were evaluated for the presence of emphysema and the extent of ILD. The effects of emphysema on the associations between pulmonary function variables and the extent of SSc-related ILD as visualized on HRCT and echocardiographic evidence of PH were quantified. Emphysema was present in 41 (12.3%) of the 333 patients with SSc-related ILD, in 26 (19.7%) of 132 smokers, and in 15 (7.5%) of 201 lifelong nonsmokers. When the extent of fibrosis was taken into account, emphysema was associated with significant additional differences from the expected values for diffusing capacity for carbon monoxide (DLco) (average reduction of 24.1%; P < 0.0005), and the forced vital capacity (FVC)/DLco ratio (average increase of 34.8%; P < 0.0005) but not FVC. These effects were identical in smokers and nonsmokers. Multivariate analysis showed that the presence of emphysema had a greater effect than echocardiographically determined PH on the FVC/DLco ratio, regardless of whether it was analyzed as a continuous variable or using a threshold value of 1.6 or 2.0. Among patients with SSc-related ILD, emphysema is sporadically present in nonsmokers and is associated with a low pack-year history in smokers. The confounding effect of CPFE on measures of gas exchange has major implications for the construction of screening algorithms for PH in patients with SSc-related ILD. © 2016, American College of Rheumatology.
Weiss, Shennan A; Orosz, Iren; Salamon, Noriko; Moy, Stephanie; Wei, Linqing; Van ’t Klooster, Maryse A; Knight, Robert T; Harper, Ronald M; Bragin, Anatol; Fried, Itzhak; Engel, Jerome; Staba, Richard J
2016-01-01
Objective Ripples (80–150 Hz) recorded from clinical macroelectrodes have been shown to be an accurate biomarker of epileptogenic brain tissue. We investigated coupling between epileptiform spike phase and ripple amplitude to better understand the mechanisms that generate this type of pathological ripple (pRipple) event. Methods We quantified phase amplitude coupling (PAC) between epileptiform EEG spike phase and ripple amplitude recorded from intracranial depth macroelectrodes during episodes of sleep in 12 patients with mesial temporal lobe epilepsy. PAC was determined by 1) a phasor transform that corresponds to the strength and rate of ripples coupled with spikes, and 2) a ripple-triggered average to measure the strength, morphology, and spectral frequency of the modulating and modulated signals. Coupling strength was evaluated in relation to recording sites within and outside the seizure onset zone (SOZ). Results Both the phasor transform and ripple-triggered averaging methods showed ripple amplitude was often robustly coupled with epileptiform EEG spike phase. Coupling was more regularly found inside than outside the SOZ, and coupling strength correlated with the likelihood that a macroelectrode's location was within the SOZ (p<0.01). The ratio of the rate of ripples coupled with EEG spikes inside the SOZ to rates of coupled ripples in non-SOZ was greater than the ratio of rates of ripples on spikes detected irrespective of coupling (p<0.05). Coupling strength correlated with an increase in mean normalized ripple amplitude (p<0.01), and a decrease in mean ripple spectral frequency (p<0.05). Significance Generation of low-frequency (80–150 Hz) pRipples in the SOZ involves coupling between epileptiform spike phase and ripple amplitude. The changes in excitability reflected as epileptiform spikes may also cause clusters of pathologically interconnected bursting neurons to grow and synchronize into aberrantly large neuronal assemblies. PMID:27723936
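A common generic measure of the coupling studied here is the mean vector length: band-pass the signal for the phase-giving and amplitude-giving components, Hilbert-transform both, and average the amplitude-weighted phasors. This sketch uses that standard measure rather than the paper's specific phasor transform; only the ripple band edges are taken from the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mean_vector_length(eeg, fs, phase_band=(1.0, 25.0), amp_band=(80.0, 150.0)):
    """Mean vector length of ripple-amplitude-weighted spike-band phasors;
    values near 0 mean no coupling, larger values mean stronger coupling."""
    phase = np.angle(hilbert(bandpass(eeg, *phase_band, fs)))
    amplitude = np.abs(hilbert(bandpass(eeg, *amp_band, fs)))
    return np.abs(np.mean(amplitude * np.exp(1j * phase))) / np.mean(amplitude)
```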
2013-01-01
BACKGROUND Little information is available about the relationship of socioeconomic status (SES) to blunted nocturnal ambulatory blood pressure (ABP) dipping among Hispanics and whether this relationship differs by race. We sought to characterize ABP nondipping and its determinants in a sample of Hispanics. METHODS We enrolled 180 Hispanic participants not on antihypertensive medications. SES was defined by years of educational attainment. All participants underwent 24-hour ABP monitoring. A decline of <10% from average awake to average asleep systolic BP was considered nondipping. RESULTS The mean age of the cohort was 67.1 ± 8.7 years, mean educational level was 9.4 ± 4.4 years, and 58.9% of the cohort was female. The cohort was comprised of 78.3% Caribbean Hispanics, with the rest from Mexico and Central/South America; 41.4% self-identified as white Hispanic, 34.4% self-identified as black Hispanic, and 24.4% did not racially self-identify. The percentage of nondippers was 57.8%. Educational attainment (10.5 years vs. 8.6 years; P <0.01) was significantly higher among dippers than nondippers. In multivariable analyses, each 1-year increase in education was associated with a 9% reduction in the likelihood of being a nondipper (odds ratio [OR], 0.91; 95% confidence interval [CI], 0.84–0.98; P = 0.01). There were significantly greater odds of being a nondipper for black Hispanics than for white Hispanics (OR, 2.83; 95% CI, 1.29–6.23; P = 0.005). Higher SES was significantly protective of nondipping in white Hispanics but not black Hispanics. CONCLUSIONS These results document a substantial prevalence of nondipping in a cohort of predominantly normotensive Hispanics. Dipping status varied significantly by race. Lower SES is significantly associated with nondipping status, and race potentially impacts this relation. PMID:23547037
Sabour, Siamak
2018-03-08
The purpose of this letter, in response to Hall, Mehta, and Fackrell (2017), is to provide important knowledge about methodological and statistical issues in assessing the reliability and validity of an audiologist-administered tinnitus loudness matching test and a patient-reported tinnitus loudness rating. The author uses reference textbooks and published articles regarding scientific assessment of the validity and reliability of a clinical test to discuss the statistical tests and the methodological approach in assessing validity and reliability in clinical research. Depending on the type of the variable (qualitative or quantitative), well-known statistical tests can be applied to assess reliability and validity. For qualitative variables, sensitivity, specificity, positive predictive value, negative predictive value, false positive and false negative rates, positive and negative likelihood ratios, and the odds ratio (i.e., the ratio of true to false results) are the most appropriate estimates for evaluating the validity of a test against a gold standard. For quantitative variables, depending on the distribution of the variable, the Pearson r or Spearman rho can be applied. Diagnostic accuracy (validity) and diagnostic precision (reliability or agreement) are two completely different methodological issues.
Magnani, Robert; Riono, Pandu; Nurhayati; Saputro, Eko; Mustikawati, Dyah; Anartati, Atiek; Prabawanti, Ciptasari; Majid, Nurholis; Morineau, Guy
2010-10-01
To assess the HIV/AIDS epidemic situation among female sex workers (FSW) in Indonesia using data from the 2007 Integrated Biological-Behavioural Surveillance (IBBS). Behavioural data were collected from time-location samples of 5947 FSW in 10 cities in late 2007. HIV, syphilis, gonorrhoea and chlamydia test results were obtained for 4396, 4324, 3291 and 3316 FSW, respectively. Trends in HIV prevalence were assessed via linkage with sentinel surveillance data. Factors associated with HIV, gonorrhoea and chlamydia infection were assessed using multivariable logistic regression. HIV prevalence averaged 10.5% among direct and 4.9% among indirect FSW, and had increased steadily among direct FSW from 2002 to 2007. Prevalence of chlamydia, gonorrhoea and active syphilis averaged 35.6%, 31.8% and 7.3%, respectively, among direct FSW, and 28.7%, 14.3% and 3.5% among indirect FSW. Being a direct FSW, younger age and having current infection with syphilis and gonorrhoea and/or chlamydia were associated with a higher likelihood of HIV infection. Number of clients in the past week and consumption of alcohol before having sex were associated with a higher likelihood of gonorrhoea and/or chlamydia infection, while having received a STI clinic check-up in the previous 3 months and/or periodic presumptive treatment for sexually transmitted infections (STIs) in the past 6 months were associated with reduced likelihood of infection. The HIV/AIDS epidemic among FSW in Indonesia appears to be expanding, albeit unevenly across provinces and types of FSW. High STI prevalence is conducive to further expansion, but recent efforts to strengthen STI control appear promising.
Contributing factors to vehicle to vehicle crash frequency and severity under rainfall.
Jung, Soyoung; Jang, Kitae; Yoon, Yoonjin; Kang, Sanghyeok
2014-09-01
This study combined vehicle to vehicle crash frequency and severity estimations to examine factor impacts on Wisconsin highway safety in rainy weather. Because of data deficiency, the real-time water film depth, the car-following distance, and the vertical curve grade were estimated with available data sources and a GIS analysis to capture rainy weather conditions at the crash location and time. Using a negative binomial regression for crash frequency estimation, the average annual daily traffic per lane, the interaction between the posted speed limit change and the existence of an off-ramp, and the interaction between the travel lane number change and the pavement surface material change were found to increase the likelihood of vehicle to vehicle crashes under rainfall. However, more average daily rainfall per month and a wider left shoulder were identified as factors that decrease the likelihood of vehicle to vehicle crashes. In the crash severity estimation using the multinomial logit model that outperformed the ordered logit model, the travel lane number, the interaction between the travel lane number and the slow grade, the deep water film, and the rear-end collision type were more likely to increase the likelihood of injury crashes under rainfall compared with crashes involving only property damage. As an exploratory data analysis, this study provides insight into potential strategies for rainy weather highway safety improvement, specifically, the following weather-sensitive strategies: road design and ITS implementation for drivers' safety awareness under rainfall. Copyright © 2014 National Safety Council and Elsevier Ltd. All rights reserved.
Validation of rapid suicidality screening in epilepsy using the NDDIE.
Mula, Marco; McGonigal, Aileen; Micoulaud-Franchi, Jean-Arthur; May, Theodor W; Labudda, Kirsten; Brandt, Christian
2016-06-01
The standardized mortality ratio for suicide in patients with epilepsy is three times higher than in the general population, and this risk remains high even after adjusting for clinical and socioeconomic factors. It is thus important to have suitable screening instruments and to implement care pathways for suicide prevention in every epilepsy center. The aim of this study is to validate the use of the Neurological Disorders Depression Inventory for Epilepsy (NDDIE) as a suicidality-screening instrument. The study sample included adult patients with epilepsy assessed with the Mini International Neuropsychiatric Interview (MINI) and the NDDIE. A high suicidality risk according to the Suicidality Module of the MINI was considered the gold standard. Receiver operating characteristic analyses for NDDIE total and individual item scores were computed and subsequently compared using a nonparametric approach. The best possible cutoff was identified with the highest Youden index (J). Likelihood ratios were then computed, and specificity, sensitivity, and positive and negative predictive values calculated. The study sample consisted of 380 adult patients with epilepsy: 46.3% male; mean age 39.4 ± 14.6 years; 76.7% had a diagnosis of focal epilepsy; mean age at onset of the epilepsy was 23.3 ± 17.5 years. According to the MINI, 74 patients (19.5%) fulfilled criteria for a major depressive episode and 19 (5%) presented a high suicidality risk. A score >2 (J = 0.751) for item 4 "I'd be better off dead" of the NDDIE displayed excellent psychometric properties, with good to excellent validity (area under the curve [AUC] 0.906; 95% confidence interval [CI] 0.820-0.992; p < 0.001), sensitivity 84.21% (95% CI 60.4-96.6), specificity 90.86% (95% CI 87.4-93.6), likelihood ratio+ 9.21 (95% CI 6.3-13.5), and likelihood ratio- 0.17 (95% CI 0.06-0.50). Item 4 of the NDDIE has shown to be an excellent suicidality screening instrument, allowing the development of further care pathways for suicide prevention in epilepsy centers. Wiley Periodicals, Inc. © 2016 International League Against Epilepsy.
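Cutoff selection by the highest Youden index, together with the likelihood ratios at that cutoff, reduces to a scan over candidate scores. A minimal sketch with a hypothetical interface (a degenerate winning cutoff with specificity of exactly 1 would need guarding in practice):

```python
import numpy as np

def best_cutoff(scores, labels):
    """Scan cutoffs, maximize Youden's J = sens + spec - 1, and report
    LR+ and LR- at the winning cutoff. labels is a 0/1 array; a screen
    counts as positive when the score exceeds the cutoff."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    best = (-np.inf, None, None, None)
    for c in np.unique(scores)[:-1]:   # the top value would leave no positives
        pred = scores > c
        sens = np.mean(pred[labels == 1])
        spec = np.mean(~pred[labels == 0])
        j = sens + spec - 1.0
        if j > best[0]:
            best = (j, c, sens, spec)
    j, cutoff, sens, spec = best
    return cutoff, j, sens / (1.0 - spec), (1.0 - sens) / spec
```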
The Association between Peritoneal Dialysis Modality and Peritonitis
Johnson, David W.; McDonald, Stephen P.; Boudville, Neil; Borlace, Monique; Badve, Sunil V.; Sud, Kamal; Clayton, Philip A.
2014-01-01
Background and objectives There is conflicting evidence comparing peritonitis rates among patients treated with continuous ambulatory peritoneal dialysis (CAPD) or automated peritoneal dialysis (APD). This study aims to clarify the relationship between peritoneal dialysis (PD) modality (APD versus CAPD) and the risk of developing PD-associated peritonitis. Design, setting, participants, & measurements This study examined the association between PD modality (APD versus CAPD) and the risks, microbiology, and clinical outcomes of PD-associated peritonitis in 6959 incident Australian PD patients between October 1, 2003, and December 31, 2011, using data from the Australia and New Zealand Dialysis and Transplant Registry. Median follow-up time was 1.9 years. Results Patients receiving APD were younger (60 versus 64 years) and had fewer comorbidities. There was no association between PD modality and time to first peritonitis episode (adjusted hazard ratio [HR] for APD versus CAPD, 0.98; 95% confidence interval [95% CI], 0.91 to 1.07; P=0.71). However, there was a lower hazard of developing Gram-positive peritonitis with APD than CAPD, which reached borderline significance (HR, 0.90; 95% CI, 0.80 to 1.00; P=0.05). No statistically significant difference was found in the risk of hospitalizations (odds ratio, 1.12; 95% CI, 0.93 to 1.35; P=0.22), but there was a nonsignificant higher likelihood of 30-day mortality (odds ratio, 1.33; 95% CI, 0.93 to 1.88; P=0.11) at the time of the first episode of peritonitis for patients receiving APD. For all peritonitis episodes (including subsequent episodes of peritonitis), APD was associated with lower rates of culture-negative peritonitis (incidence rate ratio [IRR], 0.81; 95% CI, 0.69 to 0.94; P=0.002) and higher rates of gram-negative peritonitis (IRR, 1.28; 95% CI, 1.13 to 1.46; P=0.01). Conclusions PD modality was not associated with a higher likelihood of developing peritonitis. However, APD was associated with a borderline reduction in the likelihood of a first episode of Gram-positive peritonitis compared with CAPD, and with lower rates of culture-negative peritonitis and higher rates of Gram-negative peritonitis. Peritonitis outcomes were comparable between both modalities. PMID:24626434
Effect of caffeine on cycling time-trial performance in the heat.
Pitchford, Nathan W; Fell, James W; Leveritt, Michael D; Desbrow, Ben; Shing, Cecilia M
2014-07-01
The purpose of this investigation was to determine whether a moderate dose of caffeine would improve a laboratory simulated cycling time-trial in the heat. Nine well-trained male subjects (VO2max 64.4 ± 6.8 mL·min⁻¹·kg⁻¹, peak power output 378 ± 40 W) completed one familiarisation and two experimental laboratory simulated cycling time-trials in environmental conditions of 35°C and 25% RH, 90 min after consuming either caffeine (3 mg·kg⁻¹ body weight) or placebo, in a double-blind, cross-over study. Time-trial performance was faster in the caffeine trial compared with the placebo trial (mean ± SD, 3806 ± 359 s versus 4079 ± 333 s, p=0.06, 90% CI 42-500 s, 86% likelihood of benefit, d=-0.79). Caffeine ingestion was associated with small to moderate increases in average heart rate (p=0.178, d=0.39), VO2 (p=0.154, d=0.45), respiratory exchange ratio (p=0.292, d=0.35) and core temperature (p=0.616, d=0.22) when compared to placebo; however, these were not statistically significant. Average RPE during the caffeine-supplemented time-trial was not significantly different from placebo (p=0.41, d=-0.13). Caffeine supplementation at 3 mg·kg⁻¹ body weight resulted in a worthwhile improvement in cycling time-trial performance in the heat. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Wakefield, Melanie; Terry-McElrath, Yvonne; Emery, Sherry; Saffer, Henry; Chaloupka, Frank J.; Szczypka, Glen; Flay, Brian; O’Malley, Patrick M.; Johnston, Lloyd D.
2006-01-01
Objective. To relate exposure to televised youth smoking prevention advertising to youths’ smoking beliefs, intentions, and behaviors. Methods. We obtained commercial television ratings data from 75 US media markets to determine the average youth exposure to tobacco company youth-targeted and parent-targeted smoking prevention advertising. We merged these data with nationally representative school-based survey data (n = 103 172) gathered from 1999 to 2002. Multivariate regression models controlled for individual, geographic, and tobacco policy factors, and other televised antitobacco advertising. Results. There was little relation between exposure to tobacco company–sponsored, youth-targeted advertising and youth smoking outcomes. Among youths in grades 10 and 12, during the 4 months leading up to survey administration, each additional viewing of a tobacco company parent-targeted advertisement was, on average, associated with lower perceived harm of smoking (odds ratio [OR]=0.93; confidence interval [CI]=0.88, 0.98), stronger approval of smoking (OR=1.11; CI=1.03,1.20), stronger intentions to smoke in the future (OR=1.12; CI=1.04,1.21), and greater likelihood of having smoked in the past 30 days (OR=1.12; CI=1.04,1.19). Conclusions. Exposure to tobacco company youth-targeted smoking prevention advertising generally had no beneficial outcomes for youths. Exposure to tobacco company parent-targeted advertising may have harmful effects on youth, especially among youths in grades 10 and 12. PMID:17077405
Misra, Sudip; Oommen, B John; Yanamandra, Sreekeerthy; Obaidat, Mohammad S
2010-02-01
In this paper, we present a learning-automata-like (LAL) mechanism for congestion avoidance in wired networks. (The reason why the mechanism is not a pure LA, but rather why it yet mimics one, is clarified in the body of the paper.) Our algorithm, named LAL Random Early Detection (LALRED), is founded on the principles of the operations of existing RED congestion-avoidance mechanisms, augmented with a LAL philosophy. The primary objective of LALRED is to optimize the value of the average size of the queue used for congestion avoidance and to consequently reduce the total loss of packets at the queue. We attempt to achieve this by stationing a LAL algorithm at the gateways and by discretizing the probabilities of the corresponding actions of the congestion-avoidance algorithm. At every time instant, the LAL scheme, in turn, chooses the action that possesses the maximal ratio between the number of times the chosen action is rewarded and the number of times that it has been chosen. In LALRED, we simultaneously increase the likelihood of the scheme converging to the action which minimizes the number of packet drops at the gateway. Our approach helps to improve the performance of congestion avoidance by adaptively minimizing the queue-loss rate and the average queue size. Simulation results obtained using NS2 establish the improved performance of LALRED over the traditional RED methods, which were chosen as the benchmarks for performance comparison purposes.
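A minimal reading of the stated selection rule, choose the action with the maximal ratio of times rewarded to times chosen, might look as follows. The class is hypothetical and omits LALRED's probability discretization and queue model.

```python
import random

class RewardRatioSelector:
    """Hypothetical minimal version of the LAL action-selection rule:
    pick the action with the highest (times rewarded / times chosen)."""

    def __init__(self, n_actions):
        self.chosen = [1] * n_actions     # start at 1 to avoid 0/0;
        self.rewarded = [0] * n_actions   # a real scheme would force exploration

    def select(self):
        ratios = [r / c for r, c in zip(self.rewarded, self.chosen)]
        best = max(ratios)
        return random.choice([i for i, v in enumerate(ratios) if v == best])

    def update(self, action, was_rewarded):
        self.chosen[action] += 1
        if was_rewarded:
            self.rewarded[action] += 1
```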
Rossi, Maria C E; Nicolucci, Antonio; Pellegrini, Fabio; Comaschi, Marco; Ceriello, Antonio; Cucinotta, Domenico; Giorda, Carlo; Valentini, Umberto; Vespasiani, Giacomo; De Cosmo, Salvatore
2008-04-01
We evaluated to what extent the presence of risk factors and their interactions increased the likelihood of microalbuminuria (MAU) among individuals with type 2 diabetes. Fifty-five Italian diabetes outpatient clinics enrolled a sample of patients with type 2 diabetes, without urinary infections and overt diabetic nephropathy. A morning spot urine sample was collected to centrally determine the urinary albumin/creatinine ratio (ACR). A tree-based regression technique (RECPAM) and multivariate analyses were performed to investigate interactions between correlates of MAU. Of the 1841 patients recruited, 228 (12.4%) were excluded due to the presence of urinary infections and 56 (3.5%) for the presence of macroalbuminuria. Overall, the prevalence of MAU (ACR 30-299 mg/g) was 19.1%. The RECPAM algorithm led to the identification of seven classes showing a marked difference in the likelihood of MAU. Non-smoker patients with HbA1c <7% and waist circumference ≤102 cm showed the lowest prevalence of MAU (7.5%), and represented the reference class. Patients with retinopathy, waist circumference >98 cm and HbA1c >8% showed the highest likelihood of MAU (odds ratio = 13.7; 95% confidence intervals 6.8-27.6). In the other classes identified, the odds ratio for MAU ranged between 3 and 5. Age, systolic blood pressure, HDL cholesterol levels and diabetes treatment represented additional, global correlates of MAU. The likelihood of MAU is strongly related to the interaction between diabetes severity, smoking habits and several components of the metabolic syndrome. In particular, abdominal obesity, elevated blood pressure levels and low HDL cholesterol levels substantially increase the risk of MAU. It is of primary importance to monitor MAU in high-risk individuals and aggressively intervene on modifiable risk factors.
Model-Free CUSUM Methods for Person Fit
ERIC Educational Resources Information Center
Armstrong, Ronald D.; Shi, Min
2009-01-01
This article demonstrates the use of a new class of model-free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model-free person-fit statistics…
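A generic CUSUM of item-level log-likelihood ratios (not the article's specific model-free statistics) accumulates the fundamental statistic named here and resets at zero. A sketch assuming known null and alternative response probabilities per item:

```python
import numpy as np

def cusum_person_fit(responses, p_null, p_alt):
    """One-sided CUSUM of item-level log-likelihood ratios: the statistic
    accumulates evidence for the alternative and resets at zero; a large
    running value flags aberrant responding."""
    stat, path = 0.0, []
    for x, p0, p1 in zip(responses, p_null, p_alt):
        llr = np.log((p1 if x else 1.0 - p1) / (p0 if x else 1.0 - p0))
        stat = max(0.0, stat + llr)
        path.append(stat)
    return path
```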
Comparison of image deconvolution algorithms on simulated and laboratory infrared images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Proctor, D.
1994-11-15
We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.
Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods
ERIC Educational Resources Information Center
Merkle, Edgar C.; Zeileis, Achim
2013-01-01
The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…
Normal versus Noncentral Chi-Square Asymptotics of Misspecified Models
ERIC Educational Resources Information Center
Chun, So Yeon; Shapiro, Alexander
2009-01-01
The noncentral chi-square approximation of the distribution of the likelihood ratio (LR) test statistic is a critical part of the methodology in structural equation modeling. Recently, it was argued by some authors that in certain situations normal distributions may give a better approximation of the distribution of the LR test statistic. The main…
Sharing the Diagnostic Process in the Clinical Teaching Environment: A Case Study
ERIC Educational Resources Information Center
Cuello-Garcia, Carlos
2005-01-01
Revealing or visualizing the thinking involved in making clinical decisions is a challenge. A case study is presented with a visual implement for sharing the diagnostic process. This technique adapts the Bayesian approach to the case presentation. Pretest probabilities and likelihood ratios are gathered to obtain post-test probabilities of every…
2008-04-23
In general, the positive predictive value of screening questionnaires is quite poor when disease prevalence is modest or rare.
Testing Measurement Invariance Using MIMIC: Likelihood Ratio Test with a Critical Value Adjustment
ERIC Educational Resources Information Center
Kim, Eun Sook; Yoon, Myeongsun; Lee, Taehun
2012-01-01
Multiple-indicators multiple-causes (MIMIC) modeling is often used to test a latent group mean difference while assuming the equivalence of factor loadings and intercepts over groups. However, this study demonstrated that MIMIC was insensitive to the presence of factor loading noninvariance, which implies that factor loading invariance should be…
Power and Precision in Confirmatory Factor Analytic Tests of Measurement Invariance
ERIC Educational Resources Information Center
Meade, Adam W.; Bauer, Daniel J.
2007-01-01
This study investigates the effects of sample size, factor overdetermination, and communality on the precision of factor loading estimates and the power of the likelihood ratio test of factorial invariance in multigroup confirmatory factor analysis. Although sample sizes are typically thought to be the primary determinant of precision and power,…
ERIC Educational Resources Information Center
Moses, Tim; Holland, Paul W.
2010-01-01
In this study, eight statistical strategies were evaluated for selecting the parameterizations of loglinear models for smoothing the bivariate test score distributions used in nonequivalent groups with anchor test (NEAT) equating. Four of the strategies were based on significance tests of chi-square statistics (Likelihood Ratio, Pearson,…
IRT Model Selection Methods for Dichotomous Items
ERIC Educational Resources Information Center
Kang, Taehoon; Cohen, Allan S.
2007-01-01
Fit of the model to the data is important if the benefits of item response theory (IRT) are to be obtained. In this study, the authors compared model selection results using the likelihood ratio test, two information-based criteria, and two Bayesian methods. An example illustrated the potential for inconsistency in model selection depending on…
Detection of Item Preknowledge Using Likelihood Ratio Test and Score Test
ERIC Educational Resources Information Center
Sinharay, Sandip
2017-01-01
An increasing concern of producers of educational assessments is fraudulent behavior during the assessment (van der Linden, 2009). Benefiting from item preknowledge (e.g., Eckerly, 2017; McLeod, Lewis, & Thissen, 2003) is one type of fraudulent behavior. This article suggests two new test statistics for detecting individuals who may have…
Koo, Hoon Jung; Han, Doug Hyun; Park, Sung-Yong
2017-01-01
Objective This study aimed to develop and validate a Structured Clinical Interview for Internet Gaming Disorder (SCI-IGD) in adolescents. Methods First, we generated preliminary items of the SCI-IGD based on the information from the DSM-5 literature reviews and expert consultations. Next, a total of 236 adolescents, from both community and clinical settings, were recruited to evaluate the psychometric properties of the SCI-IGD. Results First, the SCI-IGD was found to be consistent over the time period of about one month. Second, diagnostic concordances between the SCI-IGD and clinician's diagnostic impression were good to excellent. The Likelihood Ratio Positive and the Likelihood Ratio Negative estimates for the diagnosis of SCI-IGD were 10.93 and 0.35, respectively, indicating that SCI-IGD was ‘very useful test’ for identifying the presence of IGD and ‘useful test’ for identifying the absence of IGD. Third, SCI-IGD could identify disordered gamers from non-disordered gamers. Conclusion The implications and limitations of the study are also discussed. PMID:28096871
Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed
2017-01-01
Electroencephalogram (EEG)-based decoding human brain activity is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time-series, which is also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with current recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular currently used feature extraction and prediction method. This method showed an accuracy of 65.7%. However, the proposed method predicts the novel data with improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction method.
Falszewska, Anna; Dziechciarz, Piotr; Szajewska, Hania
2014-10-01
To systematically update diagnostic accuracy of the Clinical Dehydration Scale (CDS) in clinical recognition of dehydration in children with acute gastroenteritis. Six databases were searched for diagnostic accuracy studies in which population were children aged 1 to 36 months with acute gastroenteritis; index test was the CDS; and reference test was post-illness weight gain. Three studies involving 360 children were included. Limited evidence showed that in high-income countries the CDS provides strong diagnostic accuracy for ruling in moderate and severe (>6%) dehydration (positive likelihood ratio 5.2-6.6), but has limited value for ruling it out (negative likelihood ratio 0.4-0.55). In low-income countries, the CDS has limited value either for ruling moderate or severe dehydration in or out. In both settings, the CDS had limited value for ruling in or out dehydration <3% or dehydration 3% to 6%. The CDS can help assess moderate to severe dehydration in high-income settings. Given the limited data, the evidence should be viewed with caution. © The Author(s) 2014.
Ou, Lu; Chow, Sy-Miin; Ji, Linying; Molenaar, Peter C M
2017-01-01
The autoregressive latent trajectory (ALT) model synthesizes the autoregressive model and the latent growth curve model. The ALT model is flexible enough to produce a variety of discrepant model-implied change trajectories. While some researchers consider this a virtue, others have cautioned that this may confound interpretations of the model's parameters. In this article, we show that some, but not all, of these interpretational difficulties may be clarified mathematically and tested explicitly via likelihood ratio tests (LRTs) imposed on the initial conditions of the model. We show analytically the nested relations among three variants of the ALT model and the constraints needed to establish equivalences. A Monte Carlo simulation study indicated that LRTs, particularly when used in combination with information criterion measures, can allow researchers to test targeted hypotheses about the functional forms of the change process under study. We further demonstrate when and how such tests may justifiably be used to facilitate our understanding of the underlying process of change using a subsample (N = 3,995) of longitudinal family income data from the National Longitudinal Survey of Youth.
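The LRTs used to compare nested ALT variants reduce to the standard chi-square comparison of maximized log-likelihoods. A minimal sketch with hypothetical values; note that constraints on initial conditions can put parameters on the boundary of the parameter space, which this simple reference distribution ignores.

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_restricted, loglik_full, df_diff):
    """Standard LRT for nested models: 2*(llf - llr) is referred to a
    chi-square with df equal to the number of freed parameters."""
    stat = 2.0 * (loglik_full - loglik_restricted)
    return stat, chi2.sf(stat, df_diff)

# Hypothetical fits of two nested ALT variants differing by 2 parameters:
stat, p = likelihood_ratio_test(-1520.4, -1515.1, 2)
print(round(stat, 1), round(p, 4))   # 10.6 0.005
```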
Calvert, Eric; Chambers, Gordon Keith; Regan, William; Hawkins, Robert H; Leith, Jordan M
2009-05-01
The diagnosis of a superior labrum anterior posterior (SLAP) lesion through physical examination has been widely reported in the literature. Most of these studies report high sensitivities and specificities, and claim to be accurate, valid, and reliable. The purpose of this study was to critically evaluate these studies to determine if there was sufficient evidence to support the use of the SLAP physical examination tests as valid and reliable diagnostic test procedures. Strict epidemiologic methodology was used to obtain and collate all relevant articles. Sackett's guidelines were applied to all articles. Confidence intervals and likelihood ratios were determined. Fifteen of 29 relevant studies met the criteria for inclusion. Only one article met all of Sackett's critical appraisal criteria. Confidence intervals for both the positive and negative likelihood ratios contained the value 1. The current literature being used as a resource for teaching in medical schools and continuing education lacks the validity necessary to be useful. There are no good physical examination tests that exist for effectively diagnosing a SLAP lesion.
Revisiting informal payments in 29 transitional countries: The scale and socio-economic correlates.
Habibov, Nazim; Cheung, Alex
2017-04-01
This study assesses informal payments (IPs) in 29 transitional countries using a fully comparable household survey. The countries of the former Soviet Union, especially those in the Caucasus and Central Asia, exhibit the highest scale of IPs, followed by Southern Europe, and then Eastern Europe. The lowest and the highest scale of IPs were in Slovenia (2.7%) and Azerbaijan (73.9%) respectively. We found that being from a wealthier household, experiencing lower quality of healthcare in the form of long waiting times, lack of medicines, absence of personnel, and disrespectful treatment, and having relatives to help when needed, are associated with a higher odds ratio of IPs. Conversely, working for the government is associated with a lower odds ratio of IPs. Living in the countries of the former Soviet Union and in Mongolia is associated with the highest likelihood of IPs, and this is followed by the countries of the Southern Europe. In contrast, living in the countries of Eastern Europe is associated with the lowest likelihood of IPs. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Functional decline in the elderly with MCI: Cultural adaptation of the ADCS-ADL scale.
Cintra, Fabiana Carla Matos da Cunha; Cintra, Marco Túlio Gualberto; Nicolato, Rodrigo; Bertola, Laiss; Ávila, Rafaela Teixeira; Malloy-Diniz, Leandro Fernandes; Moraes, Edgar Nunes; Bicalho, Maria Aparecida Camargos
2017-07-01
To translate, transculturally adapt, and apply the Brazilian Portuguese version of the Alzheimer's Disease Cooperative Study - Activities of Daily Living (ADCS-ADL) scale as a cognitive screening instrument. We applied the back-translation method, supplemented by pretest and bilingual methods. The sample was composed of 95 elderly individuals and their caregivers. Thirty-two (32) participants were diagnosed as mild cognitive impairment (MCI) patients, 33 as Alzheimer's disease (AD) patients, and 30 were considered cognitively normal individuals. There were only minor changes to the scale. The Cronbach alpha coefficient was 0.89. The scores were 72.9 for the control group, followed by MCI (65.1) and AD (55.9), with a p-value < 0.001. The area under the ROC curve was 0.89. We considered a cut point of 72 and observed a sensitivity of 86.2%, specificity of 70%, positive predictive value of 86.2%, negative predictive value of 70%, positive likelihood ratio of 2.9, and negative likelihood ratio of 0.2. The ADCS-ADL scale presents satisfactory psychometric properties to discriminate between MCI, AD, and normal cognition.
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
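A minimal sketch of test (i), the number test, assuming the forecast specifies a Poisson-distributed count over the test period; the observed and predicted values are illustrative:

    from scipy.stats import poisson

    def number_test(n_observed, n_predicted):
        # two-sided consistency test: is the observed earthquake count
        # plausible under the forecast's Poisson expectation?
        p_low = poisson.cdf(n_observed, n_predicted)      # too few events?
        p_high = poisson.sf(n_observed - 1, n_predicted)  # too many events?
        return min(1.0, 2 * min(p_low, p_high))

    print(number_test(n_observed=12, n_predicted=6.5))

Tests (ii) and (iii) replace the raw count with the joint log-likelihood of the observed catalog under the forecast, scored against its simulated distribution or against the null hypothesis's likelihood.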
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-01-01
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions. PMID:11607663
Thakur, Jyoti; Pahuja, Sharvan Kumar; Pahuja, Roop
2017-01-01
In 2005, an international pediatric sepsis consensus conference defined systemic inflammatory response syndrome (SIRS) for children <18 years of age, but excluded premature infants. In 2012, Hofer et al. investigated the predictive power of SIRS for term neonates. In this paper, we examined the accuracy of SIRS in predicting sepsis in neonates, irrespective of their gestational age (i.e., pre-term, term, and post-term). We also created two prediction models, named Model A and Model B, using binary logistic regression. Both models performed better than SIRS. We also developed an Android application so that physicians can easily use Model A and Model B in real-world scenarios. The sensitivity, specificity, positive likelihood ratio (PLR) and negative likelihood ratio (NLR) in cases of SIRS were 16.15%, 95.53%, 3.61, and 0.88, respectively, whereas they were 29.17%, 97.82%, 13.36, and 0.72, respectively, in the case of Model A, and 31.25%, 97.30%, 11.56, and 0.71, respectively, in the case of Model B. All models were significant with p < 0.001. PMID:29257099
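A sketch of the general recipe, a binary logistic regression followed by sensitivity, specificity, and likelihood ratios against the reference diagnosis, on synthetic data; the features, coefficients, and decision threshold are placeholders, not the variables of Model A or Model B:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))                  # hypothetical vital-sign features
    logit = X @ np.array([0.8, 0.5, 0.3]) - 2.0
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # synthetic sepsis labels

    model = LogisticRegression().fit(X, y)
    pred = model.predict_proba(X)[:, 1] > 0.2      # illustrative cutoff
    tp = np.sum(pred & (y == 1)); fn = np.sum(~pred & (y == 1))
    tn = np.sum(~pred & (y == 0)); fp = np.sum(pred & (y == 0))
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    print(sens / (1 - spec), (1 - sens) / spec)    # PLR, NLR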
Jacob, Laurent; Combes, Florence; Burger, Thomas
2018-06-18
We propose a new hypothesis test for the differential abundance of proteins in mass-spectrometry-based relative quantification. An important feature of this type of high-throughput analysis is that it involves an enzymatic digestion of the sample proteins into peptides prior to identification and quantification. Due to numerous homology sequences, different proteins can lead to peptides with identical amino acid chains, so that their parent protein is ambiguous. These so-called shared peptides make the protein-level statistical analysis a challenge and are often not accounted for. In this article, we use a linear model describing peptide-protein relationships to build a likelihood ratio test of differential abundance for proteins. We show that the likelihood ratio statistic can be computed in linear time with the number of peptides. We also provide the asymptotic null distribution of a regularized version of our statistic. Experiments on both real and simulated datasets show that our procedure outperforms state-of-the-art methods. The procedures are available via the pepa.test function of the DAPAR Bioconductor R package.
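For orientation, the classical machinery behind such a test is Wilks' theorem: twice the log-likelihood gap between nested models is asymptotically chi-square. A generic sketch (a toy two-condition comparison with known variance, not the paper's linear-time peptide-level computation):

    import numpy as np
    from scipy import stats

    def lr_test(ll_null, ll_alt, df_diff):
        # Wilks: 2*(l1 - l0) ~ chi2(df_diff) under the null hypothesis
        stat = 2.0 * (ll_alt - ll_null)
        return stat, stats.chi2.sf(stat, df_diff)

    rng = np.random.default_rng(1)
    a = rng.normal(10.0, 1.0, 30)          # log-intensities, condition A
    b = rng.normal(10.5, 1.0, 30)          # log-intensities, condition B
    pooled = np.concatenate([a, b])
    ll0 = stats.norm.logpdf(pooled, pooled.mean(), 1.0).sum()   # common mean
    ll1 = (stats.norm.logpdf(a, a.mean(), 1.0).sum()
           + stats.norm.logpdf(b, b.mean(), 1.0).sum())         # separate means
    print(lr_test(ll0, ll1, df_diff=1))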
Zeng, Chan; Newcomer, Sophia R; Glanz, Jason M; Shoup, Jo Ann; Daley, Matthew F; Hambidge, Simon J; Xu, Stanley
2013-12-15
The self-controlled case series (SCCS) method is often used to examine the temporal association between vaccination and adverse events using only data from patients who experienced such events. Conditional Poisson regression models are used to estimate incidence rate ratios, and these models perform well with large or medium-sized case samples. However, in some vaccine safety studies, the adverse events studied are rare and the maximum likelihood estimates may be biased. Several bias correction methods have been examined in case-control studies using conditional logistic regression, but none of these methods have been evaluated in studies using the SCCS design. In this study, we used simulations to evaluate two bias correction approaches, the Firth penalized maximum likelihood method and Cordeiro and McCullagh's bias reduction after maximum likelihood estimation, with small sample sizes in studies using the SCCS design. The simulations showed that the bias under the SCCS design with a small number of cases can be large and is also sensitive to a short risk period. The Firth correction method provides finite and less biased estimates than the maximum likelihood method and Cordeiro and McCullagh's method. However, limitations still exist when the risk period in the SCCS design is short relative to the entire observation period.
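The Firth correction penalizes the likelihood by half the log-determinant of the Fisher information, which amounts to adjusting the score with hat-matrix leverages. A sketch for ordinary logistic regression under that standard adjustment (the conditional Poisson SCCS version follows the same idea but is not reproduced here):

    import numpy as np

    def firth_logistic(X, y, n_iter=50, tol=1e-8):
        # Newton iterations on the Firth-adjusted score; the leverage
        # term removes leading-order bias and keeps estimates finite
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            W = p * (1.0 - p)
            XtWX_inv = np.linalg.inv(X.T @ (X * W[:, None]))
            Z = X * np.sqrt(W)[:, None]
            h = np.einsum('ij,jk,ik->i', Z, XtWX_inv, Z)  # leverages
            step = XtWX_inv @ (X.T @ (y - p + h * (0.5 - p)))
            beta += step
            if np.max(np.abs(step)) < tol:
                break
        return beta

    rng = np.random.default_rng(2)
    X = np.column_stack([np.ones(20), rng.normal(size=20)])
    y = (X[:, 1] + rng.normal(size=20) > 0).astype(float)
    print(firth_logistic(X, y))    # finite even in tiny samples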
Comparison of two weighted integration models for the cueing task: linear and likelihood
NASA Technical Reports Server (NTRS)
Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.
2003-01-01
In a task in which the observer must detect a signal at two locations, presenting a precue that predicts the location of a signal leads to improved performance with a valid cue (signal location matches the cue), compared to an invalid cue (signal location does not match the cue). The cue validity effect has often been explained with a limited capacity attentional mechanism improving the perceptual quality at the cued location. Alternatively, the cueing effect can also be explained by unlimited capacity models that assume a weighted combination of noisy responses across the two locations. We compare two weighted integration models, a linear model and a sum of weighted likelihoods model based on a Bayesian observer. While qualitatively these models are similar, quantitatively they predict different cue validity effects as the signal-to-noise ratio (SNR) increases. To test these models, 3 observers performed a cued discrimination task of Gaussian targets with an 80% valid precue across a broad range of SNRs. Analysis of a limited capacity attentional switching model was also included and rejected. The sum of weighted likelihoods model best described the psychophysical results, suggesting that human observers approximate a weighted combination of likelihoods, and not a weighted linear combination.
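A Monte Carlo sketch of the two unlimited-capacity observers under stated assumptions: unit-variance Gaussian responses, an 80% valid cue, weight w on the cued location, and yes/no detection with an unbiased criterion. All parameter values are illustrative:

    import numpy as np

    rng = np.random.default_rng(3)
    d, w, n = 1.5, 0.8, 200_000          # SNR, cue weight, trials

    def hit_rates(model):
        present = rng.random(n) < 0.5    # signal on half the trials
        valid = rng.random(n) < 0.8      # cue matches signal location
        x_c = rng.normal(size=n) + d * (present & valid)
        x_u = rng.normal(size=n) + d * (present & ~valid)
        if model == 'linear':
            dv = w * x_c + (1 - w) * x_u
        else:                            # sum of weighted likelihood ratios
            dv = (w * np.exp(d * x_c - d * d / 2)
                  + (1 - w) * np.exp(d * x_u - d * d / 2))
        say_yes = dv > np.median(dv)     # unbiased criterion
        return (say_yes[present & valid].mean(),    # valid-cue hit rate
                say_yes[present & ~valid].mean())   # invalid-cue hit rate

    for m in ('linear', 'likelihood'):
        print(m, hit_rates(m))

Sweeping d reproduces the qualitative point of the paper: both models show a cue validity effect, but the effect grows with SNR in different ways.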
Prevalence, Risk Factors and Consequent Effect of Dystocia in Holstein Dairy Cows in Iran
Atashi, Hadi; Abdolmohammadi, Alireza; Dadpasand, Mohammad; Asaadi, Anise
2012-01-01
The objective of this research was to determine the prevalence, risk factors and consequent effect of dystocia on lactation performance in Holstein dairy cows in Iran. The data set consisted of 55,577 calving records on 30,879 Holstein cows in 30 dairy herds for the period March 2000 to April 2009. Factors affecting dystocia were analyzed using multivariable logistic regression models through the maximum likelihood method in the GENMOD procedure. The effect of dystocia on lactation performance and factors affecting calf birth weight were analyzed using a mixed linear model in the MIXED procedure. The average incidence of dystocia was 10.8% and the mean (SD) calf birth weight was 42.13 (5.42) kg. Primiparous cows had calves with lower body weight and were more likely to require assistance at parturition (p<0.05). Female calves had lower body weight, and had a lower odds ratio for dystocia than male calves (p<0.05). Twins had lower birth weight, and had a higher odds ratio for dystocia than singletons (p<0.05). Cows which gave birth to a calf with higher weight at birth experienced more calving difficulty (OR (95% CI) = 1.1 (1.08–1.11)). Total 305-d milk, fat and protein yields were 135 (23), 3.16 (0.80) and 6.52 (1.01) kg lower, respectively, in cows that experienced dystocia at calving compared with those that did not (p<0.05). PMID:25049584
Compensation for use of monthly-averaged winds in numerical modeling
NASA Technical Reports Server (NTRS)
Parkinson, C. L.
1981-01-01
Ratios R of the monthly averaged wind speeds to the magnitudes of the monthly averaged wind vectors are presented over a 41 x 41 grid covering the Southern Ocean and the Antarctic continent. The ratio is found to vary from 1 to over 1000, with an average value of 1.86. These ratios R are relevant for converting from sensible and latent heats calculated with mean monthly data to those calculated with 12 hourly data. The corresponding ratios alpha for wind stress, along with the angle deviations involved, are also presented over the same 41 x 41 grid. The values of alpha generally exceed those for R and average 2.66. Regions in zones of variable wind directions have larger R and alpha ratios, over the ice-covered portions of the Southern Ocean averaging 2.74 and 4.35 for R and alpha respectively. Thus adjustments to compensate for the use of mean monthly wind velocities should be stronger for wind stress than for turbulent heats and stronger over ice-covered regions than over regions with more persistent wind directions, e.g., those in the belt of mid-latitude westerlies.
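In code, the two adjustment ratios for a single grid cell, assuming 12-hourly wind components and a quadratic drag law for stress (the wind samples are synthetic, and the stress formula is an assumption about the paper's definition of alpha):

    import numpy as np

    rng = np.random.default_rng(4)
    u = rng.normal(2.0, 4.0, 60)   # hypothetical 12-hourly zonal wind, m/s
    v = rng.normal(0.5, 4.0, 60)   # hypothetical 12-hourly meridional wind, m/s

    speed = np.hypot(u, v)
    mean_vec = np.array([u.mean(), v.mean()])
    R = speed.mean() / np.linalg.norm(mean_vec)   # ratio for turbulent heats

    tau = np.array([(speed * u).mean(), (speed * v).mean()])   # stress ~ |v| v
    alpha = np.linalg.norm(tau) / np.linalg.norm(mean_vec)**2  # stress ratio
    cosang = tau @ mean_vec / (np.linalg.norm(tau) * np.linalg.norm(mean_vec))
    print(R, alpha, np.degrees(np.arccos(cosang)))             # plus angle deviation

The more variable the wind direction, the more the vector mean cancels, and the larger both ratios become, matching the pattern reported over the ice-covered Southern Ocean.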
Odds Ratio Product of Sleep EEG as a Continuous Measure of Sleep State
Younes, Magdy; Ostrowski, Michele; Soiferman, Marc; Younes, Henry; Younes, Mark; Raneri, Jill; Hanly, Patrick
2015-01-01
Study Objectives: To develop and validate an algorithm that provides a continuous estimate of sleep depth from the electroencephalogram (EEG). Design: Retrospective analysis of polysomnograms. Setting: Research laboratory. Participants: 114 patients who underwent clinical polysomnography in sleep centers at the University of Manitoba (n = 58) and the University of Calgary (n = 56). Interventions: None. Measurements and Results: Power spectrum of EEG was determined in 3-second epochs and divided into delta, theta, alpha-sigma, and beta frequency bands. The range of powers in each band was divided into 10 aliquots. EEG patterns were assigned a 4-digit number that reflects the relative power in the 4 frequency ranges (10,000 possible patterns). Probability of each pattern occurring in 30-s epochs staged awake was determined, resulting in a continuous probability value from 0% to 100%. This was divided by 40 (% of epochs staged awake) producing the odds ratio product (ORP), with a range of 0–2.5. In validation testing, average ORP decreased progressively as EEG progressed from wakefulness (2.19 ± 0.29) to stage N3 (0.13 ± 0.05). ORP < 1.0 predicted sleep and ORP > 2.0 predicted wakefulness in > 95% of 30-s epochs. Epochs with intermediate ORP occurred in unstable sleep with a high arousal index (> 70/h) and were subject to much interrater scoring variability. There was an excellent correlation (r2 = 0.98) between ORP in current 30-s epochs and the likelihood of arousal or awakening occurring in the next 30-s epoch. Conclusions: Our results support the use of the odds ratio product (ORP) as a continuous measure of sleep depth. Citation: Younes M, Ostrowski M, Soiferman M, Younes H, Younes M, Raneri J, Hanly P. Odds ratio product of sleep EEG as a continuous measure of sleep state. SLEEP 2015;38(4):641–654. PMID:25348125
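A condensed sketch of the ORP construction; the sampling rate, band edges, and Welch settings below are assumptions that only approximate the published algorithm:

    import numpy as np
    from scipy.signal import welch

    FS = 256                                        # Hz, assumed
    BANDS = [(0.5, 4), (4, 8), (8, 14), (14, 35)]   # delta, theta, alpha-sigma, beta

    def band_powers(epoch_3s):
        f, pxx = welch(epoch_3s, fs=FS, nperseg=FS)
        return np.array([pxx[(f >= lo) & (f < hi)].sum() for lo, hi in BANDS])

    def pattern_codes(epochs_3s, decile_edges):
        # 4-digit code per 3-s epoch: the decile (0-9) of each band power
        powers = np.array([band_powers(e) for e in epochs_3s])
        digits = np.stack([np.searchsorted(decile_edges[b], powers[:, b])
                           for b in range(4)], axis=1)
        return digits @ np.array([1000, 100, 10, 1])

    # training (not shown): decile_edges[b] holds the 9 inner decile cuts of
    # band b's power; for each of the 10,000 codes,
    # orp[code] = P(code falls in an epoch staged awake) / 0.40  -> 0 to 2.5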
Methods for fitting a parametric probability distribution to most probable number data.
Williams, Michael S; Ebel, Eric D
2012-07-02
Every year hundreds of thousands, if not millions, of samples are collected and analyzed to assess microbial contamination in food and water. The concentration of pathogenic organisms at the end of the production process is low for most commodities, so a highly sensitive screening test is used to determine whether the organism of interest is present in a sample. In some applications, samples that test positive are subjected to quantitation. The most probable number (MPN) technique is a common method to quantify the level of contamination in a sample because it is able to provide estimates at low concentrations. This technique uses a series of dilution count experiments to derive estimates of the concentration of the microorganism of interest. An application for these data is food-safety risk assessment, where the MPN concentration estimates can be fitted to a parametric distribution to summarize the range of potential exposures to the contaminant. Many different methods (e.g., substitution methods, maximum likelihood and regression on order statistics) have been proposed to fit microbial contamination data to a distribution, but the development of these methods rarely considers how the MPN technique influences the choice of distribution function and fitting method. An often overlooked aspect when applying these methods is whether the data represent actual measurements of the average concentration of microorganism per milliliter or the data are real-valued estimates of the average concentration, as is the case with MPN data. In this study, we propose two methods for fitting MPN data to a probability distribution. The first method uses a maximum likelihood estimator that takes average concentration values as the data inputs. The second is a Bayesian latent variable method that uses the counts of the number of positive tubes at each dilution to estimate the parameters of the contamination distribution. The performance of the two fitting methods is compared for two data sets that represent Salmonella and Campylobacter concentrations on chicken carcasses. The results demonstrate a bias in the maximum likelihood estimator that increases with reductions in average concentration. The Bayesian method provided unbiased estimates of the concentration distribution parameters for all data sets. We provide computer code for the Bayesian fitting method. Published by Elsevier B.V.
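For context, the likelihood machinery that both fitting methods build on: a tube inoculated with volume v is positive with probability 1 - exp(-c v) when organisms are randomly dispersed at concentration c, so c can be estimated by maximum likelihood from the positive-tube counts. A minimal sketch (the dilution design and counts are illustrative):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def mpn_mle(volumes, n_tubes, n_positive):
        # MLE of concentration (organisms/mL) from a dilution series
        v, n, p = map(np.asarray, (volumes, n_tubes, n_positive))
        def neg_loglik(log_c):
            c = np.exp(log_c)
            prob = np.clip(1 - np.exp(-c * v), 1e-12, 1 - 1e-12)
            return -np.sum(p * np.log(prob) - (n - p) * c * v)
        res = minimize_scalar(neg_loglik, bounds=(-10, 10), method='bounded')
        return np.exp(res.x)

    # classic design: 10, 1, and 0.1 mL inocula, 5 tubes each, 5-3-0 positive
    print(mpn_mle([10, 1, 0.1], [5, 5, 5], [5, 3, 0]))

The point of the abstract is that downstream distribution fitting should treat such estimates as what they are, real-valued summaries of censored count data, rather than as direct concentration measurements.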
Top pair production in the dilepton decay channel with a tau lepton
DOE Office of Scientific and Technical Information (OSTI.GOV)
Corbo, Matteo
2012-09-19
The top quark pair production and decay into leptons with at least one being a τ lepton is studied in the framework of the CDF experiment at the Tevatron proton-antiproton collider at Fermilab (USA). The selection requires an electron or a muon produced either by the τ lepton decay or by a W decay. The analysis uses the complete Run II data set, i.e., 9.0 fb⁻¹, selected by one trigger based on a low transverse momentum electron or muon plus one isolated charged track. The top quark pair production cross section at 1.96 TeV is measured to be 8.2 ± 1.7 +1.2 −1.1 ± 0.5 pb, and the top branching ratio into a τ lepton is measured to be 0.120 ± 0.027 +0.022 −0.019 ± 0.007, where the uncertainties are statistical, systematic, and luminosity-related, respectively. These are to date the most accurate results in this top decay channel and are in good agreement with the results obtained using other decay channels of the top at the Tevatron. The branching ratio is also measured separating the single-lepton from the two-lepton events with a log-likelihood method. This is the first time these two signatures are separately identified. With a fit to data along the log-likelihood variable, an alternative measurement of the branching ratio is made: 0.098 ± 0.022 (stat.) ± 0.014 (syst.); it is in good agreement with the expectations of the Standard Model (with lepton universality) within the experimental uncertainties. The branching ratio is constrained to be less than 0.159 at 95% confidence level. This limit translates into a limit on the top branching ratio into a potential charged Higgs boson.
Beattie, Karen A; Macintyre, Norma J; Pierobon, Jessica; Coombs, Jennifer; Horobetz, Diana; Petric, Alexis; Pimm, Mara; Kean, Walter; Larché, Maggie J; Cividino, Alfred
2011-09-01
To evaluate the sensitivity, specificity and reliability of the gait, arms, legs and spine (GALS) examination to detect signs and symptoms of rheumatoid arthritis when used by physiotherapy students and physiotherapists. Two physiotherapy students and two physiotherapists were trained to perform the GALS examination by viewing an instructional DVD and attending a workshop. Two rheumatologists familiar with the GALS examination also participated in the workshop. All healthcare professionals performed the GALS examination on 25 participants with rheumatoid arthritis recruited through a rheumatology practice and 23 participants without any arthritides recruited from a primary care centre. Each participant was assessed by one rheumatologist, one physiotherapist and one physiotherapy student. Abnormalities of gait, arms, legs and spine, including their location and description, were recorded, along with whether or not a diagnosis of rheumatoid arthritis was suspected. Healthcare professionals understood the study's objective to be their agreement on GALS findings and were unaware that half of the participants had rheumatoid arthritis. Sensitivity, specificity and likelihood ratios were calculated to determine the ability of the GALS examination to screen for rheumatoid arthritis. Using rheumatologists' findings on the study day as the standard for comparison, sensitivity and specificity were 71 to 86% and 69 to 93%, respectively. Positive likelihood ratios ranged from 2.74 to 10.18, while negative likelihood ratios ranged from 0.21 to 0.38. The GALS examination may be a useful tool for physiotherapists to rule out rheumatoid arthritis in a direct access setting. Differences in duration and type of experience of each healthcare professional may contribute to the variation in results. The merits of introducing the GALS examination into physiotherapy curricula and practice should be explored. Copyright © 2010 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
De Yébenes, María Jesús García; Otero, Angel; Zunzunegui, María Victoria; Rodríguez-Laso, Angel; Sánchez-Sánchez, Fernando; Del Ser, Teodoro
2003-10-01
To validate the 'Prueba Cognitiva de Leganés' (PCL) as a screening tool for cognitive impairment in elderly people with little formal education. The PCL is a simple cognitive test with 32 items that includes two scores of orientation and memory and a global score of 0-32 points. It was applied to a population sample of 527 elderly people over 70 with low educational level, who were independently diagnosed by consensus between two neurologists as having normal cognitive function, age associated cognitive decline (AACD, IPA-OMS criteria) or dementia (DSM-IV criteria). Individuals with severe visual or hearing defects and those who rejected the exam were excluded from the study. The PCL was validated in a sample of 375 individuals: 300 normal, 42 with AACD and 33 with dementia. The sensitivity, specificity, accuracy and likelihood ratios, as well as the ROC curves for dementia and for AACD-dementia, were calculated. The confounding effect of sociodemographic variables was assessed by logistic regression analysis and convergent validity by partial correlations of the PCL with other cognitive tests. Inter-rater reliability was evaluated with the intraclass correlation coefficient. The PCL identified dementia (cut-off ≤22) and AACD-dementia (cut-off ≤26), with the following diagnostic parameters, respectively: sensitivity 93.9%-80%, specificity 94.7%-84.3%, positive likelihood ratio 17.8-5.1, negative likelihood ratio 0.06-0.24, and accuracy 94.6%-83.4%. The areas under the ROC curve were 0.985 (95% confidence interval (CI): 0.967-0.995) and 0.904 (95% CI: 0.870-0.932), respectively. The intraclass correlation coefficient was 0.79 (0.74-0.83). The PCL is a simple instrument, which is both valid and reliable, for the screening of dementia in population samples of individuals with low educational level. This instrument could be useful in primary health care. Copyright 2003 John Wiley & Sons, Ltd.
Ablordeppey, Enyo A.; Drewry, Anne M.; Beyer, Alexander B.; Theodoro, Daniel L.; Fowler, Susan A.; Fuller, Brian M.; Carpenter, Christopher R.
2016-01-01
Objective We performed a systematic review and meta-analysis to examine the accuracy of bedside ultrasound for confirmation of central venous catheter position and exclusion of pneumothorax compared to chest radiography. Data Sources PubMed, EMBASE, Cochrane Central Register of Controlled Trials, reference lists, conference proceedings and ClinicalTrials.gov. Study Selection Articles and abstracts describing the diagnostic accuracy of bedside ultrasound compared with chest radiography for confirmation of central venous catheters in sufficient detail to reconstruct 2×2 contingency tables were reviewed. Primary outcomes included the accuracy of confirming catheter positioning and detecting a pneumothorax. Secondary outcomes included feasibility, inter-rater reliability, and efficiency to complete bedside ultrasound confirmation of central venous catheter position. Data Extraction Investigators abstracted study details including research design and sonographic imaging technique to detect catheter malposition and procedure-related pneumothorax. Diagnostic accuracy measures included pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Data Synthesis Fifteen studies with 1553 central venous catheter placements were identified, with a pooled sensitivity and specificity of catheter malposition by ultrasound of 0.82 [0.77, 0.86] and 0.98 [0.97, 0.99], respectively. The pooled positive and negative likelihood ratios of catheter malposition by ultrasound were 31.12 [14.72, 65.78] and 0.25 [0.13, 0.47]. The sensitivity and specificity of ultrasound for pneumothorax detection was nearly 100% in the participating studies. Bedside ultrasound reduced mean central venous catheter confirmation time by 58.3 minutes. Risk of bias and clinical heterogeneity in the studies were high. Conclusions Bedside ultrasound is faster than radiography at identifying pneumothorax after central venous catheter insertion. When a central venous catheter malposition exists, bedside ultrasound will identify four out of every five malpositions, and will do so earlier than chest radiography. PMID:27922877
NASA Astrophysics Data System (ADS)
Rahardjo, K. D.; Dharmaeizar; Nainggolan, G.; Harimurti, K.
2017-08-01
Research has shown that hemodialysis patients with lung congestion have high morbidity and mortality. Patients were assumed to be free of lung congestion if they had reached their post-dialysis dry weight. Most often, physical examination was used to determine if the patient was free of lung congestion. However, the accuracy of physical examination in detecting lung congestion has not been established. To compare the capabilities of physical examination and lung ultrasound in the detection of lung congestion, cross-sectional data collection was conducted on hemodialysis patients. Analysis was done to obtain proportion, sensitivity, specificity, positive predictive value, negative predictive value, and positive likelihood ratio. Sixty patients participated in this study. Interobserver variation, assessed in 20 patients, revealed a kappa value of 0.828. When all 60 patients were taken into account, we found that 36 patients (57.1%) had lung congestion. Mild lung congestion was found in 24 (38.1%), and 12 (19%) had a moderate degree of congestion. In the analysis comparing jugular venous pressure to lung ultrasound, we found that sensitivity was 0.47 (0.31-0.63), specificity was 0.73 (0.54-0.86), positive predictive value (PPV) was 0.51 (0.36-0.67), negative predictive value (NPV) was 0.70 (0.49-0.84), positive likelihood ratio (PLR) was 1.75 (0.88-3.47), and the negative likelihood ratio (NLR) was 0.72 (0.47-1.12). In terms of lung auscultation, we found that sensitivity was 0.56 (0.39-0.71), specificity was 0.54 (0.35-0.71), PPV was 0.61 (0.44-0.76), NPV was 0.48 (0.31-0.66), PLR was 1.21 (0.73-2.0), and NLR was 0.82 (0.49-1.38). The results of our study showed that jugular venous distention and lung auscultation examination are not reliable means of detecting lung congestion.
Fereshtehnejad, Seyed-Mohammad; Montplaisir, Jacques Y; Pelletier, Amelie; Gagnon, Jean-François; Berg, Daniela; Postuma, Ronald B
2017-06-01
Recently, the International Parkinson and Movement Disorder Society introduced the prodromal criteria for PD. Our study aimed to examine the diagnostic accuracy of the criteria as well as the independence of prodromal markers to predict conversion to PD or dementia with Lewy bodies. This prospective cohort study was performed on 121 individuals with rapid eye movement sleep behavior disorder who were followed annually for 1 to 12 years. Using data from a comprehensive panel of prodromal markers, likelihood ratio and post-test probability of the criteria were calculated at baseline and during each follow-up visit. Forty-eight (39.7%) individuals with rapid eye movement sleep behavior disorder converted to PD/dementia with Lewy bodies. The prodromal criteria had 81.3% sensitivity and 67.9% specificity for conversion to PD/dementia with Lewy bodies at 4-year follow-up. One year before conversion, sensitivity was 100%. The criteria predicted dementia with Lewy bodies with even higher accuracy than PD without dementia at onset. Those who met the threshold of the prodromal criteria at baseline converted to a neurodegenerative state significantly more rapidly (4.8 vs. 9.1 years; P < 0.001). Pair-wise combinations of different prodromal markers showed that markers were independent of one another. The prodromal criteria are a promising tool for predicting incidence of PD/dementia with Lewy bodies and conversion time in a rapid eye movement sleep behavior disorder cohort, with high sensitivity and specificity over long follow-up. Prodromal markers influence the overall likelihood ratio independently, allowing them to be reliably multiplied. Defining additional markers with high likelihood ratios, further studies with longitudinal assessment, and testing thresholds in different target populations will improve the criteria. © 2017 International Parkinson and Movement Disorder Society.
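The independence finding is what licenses the multiplication step in the criteria: convert the pretest probability to odds, multiply by each marker's likelihood ratio, and convert back. A sketch with placeholder values (the pretest probability and LRs below are hypothetical, not the calibrated values of the MDS criteria):

    def posttest_probability(pretest_prob, *likelihood_ratios):
        # combine independent markers on the odds scale
        odds = pretest_prob / (1 - pretest_prob)
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1 + odds)

    # hypothetical pretest probability and three marker LRs
    print(posttest_probability(0.02, 130, 2.3, 0.45))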
Décary, Simon; Frémont, Pierre; Pelletier, Bruno; Fallaha, Michel; Belzile, Sylvain; Martel-Pelletier, Johanne; Pelletier, Jean-Pierre; Feldman, Debbie; Sylvestre, Marie-Pierre; Vendittoli, Pascal-André; Desmeules, François
2018-04-01
To assess the validity of diagnostic clusters combining history elements and physical examination tests to diagnose or exclude patellofemoral pain (PFP). Prospective diagnostic study. Orthopedic outpatient clinics, family medicine clinics, and community settings. Consecutive patients (N=279) consulting one of the participating orthopedic surgeons (n=3) or sport medicine physicians (n=2) for any knee complaint. Not applicable. History elements and physical examination tests were obtained by a trained physiotherapist blinded to the reference standard: a composite diagnosis including both physical examination tests and imaging results interpretation performed by an expert physician. Penalized logistic regression (least absolute shrinkage and selection operator) was used to identify history elements and physical examination tests associated with the diagnosis of PFP, and recursive partitioning was used to develop diagnostic clusters. Diagnostic accuracy measures including sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios with associated 95% confidence intervals (CIs) were calculated. Two hundred seventy-nine participants were evaluated, and 75 had a diagnosis of PFP (26.9%). Different combinations of history elements and physical examination tests including the age of participants, knee pain location, difficulty descending stairs, patellar facet palpation, and passive knee extension range of motion were associated with a diagnosis of PFP and used in clusters to accurately discriminate between individuals with PFP and individuals without PFP. Two diagnostic clusters developed to confirm the presence of PFP yielded a positive likelihood ratio of 8.7 (95% CI, 5.2-14.6) and 3 clusters to exclude PFP yielded a negative likelihood ratio of .12 (95% CI, .06-.27). Diagnostic clusters combining common history elements and physical examination tests that can accurately diagnose or exclude PFP compared to various knee disorders were developed. External validation is required before clinical use. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Reliability and validity of the new Tanaka B Intelligence Scale scores: a group intelligence test.
Uno, Yota; Mizukami, Hitomi; Ando, Masahiko; Yukihiro, Ryoji; Iwasaki, Yoko; Ozaki, Norio
2014-01-01
The present study evaluated the reliability and concurrent validity of the new Tanaka B Intelligence Scale, which is an intelligence test that can be administered to groups within a short period of time. The new Tanaka B Intelligence Scale and Wechsler Intelligence Scale for Children-Third Edition were administered to 81 subjects (mean age ± SD 15.2 ± 0.7 years) residing in a juvenile detention home; reliability was assessed using Cronbach's alpha coefficient, and concurrent validity was assessed using the one-way analysis of variance intraclass correlation coefficient. Moreover, receiver operating characteristic analysis for screening for individuals who have a deficit in intellectual function (an FIQ<70) was performed. In addition, stratum-specific likelihood ratios for detection of intellectual disability were calculated. The Cronbach's alpha for the new Tanaka B Intelligence Scale IQ (BIQ) was 0.86, and the intraclass correlation coefficient with FIQ was 0.83. Receiver operating characteristic analysis demonstrated an area under the curve of 0.89 (95% CI: 0.85-0.96). In addition, the stratum-specific likelihood ratio for the BIQ≤65 stratum was 13.8 (95% CI: 3.9-48.9), and the stratum-specific likelihood ratio for the BIQ≥76 stratum was 0.1 (95% CI: 0.03-0.4). Thus, intellectual disability could be ruled out or determined. The present results demonstrated that the new Tanaka B Intelligence Scale score had high reliability and concurrent validity with the Wechsler Intelligence Scale for Children-Third Edition score. Moreover, the post-test probability for the BIQ could be calculated when screening for individuals who have a deficit in intellectual function. The new Tanaka B Intelligence Test is convenient and can be administered within a variety of settings. This enables evaluation of intellectual development even in settings where performing intelligence tests has previously been difficult.
Reliability and Validity of the New Tanaka B Intelligence Scale Scores: A Group Intelligence Test
Uno, Yota; Mizukami, Hitomi; Ando, Masahiko; Yukihiro, Ryoji; Iwasaki, Yoko; Ozaki, Norio
2014-01-01
Objective The present study evaluated the reliability and concurrent validity of the new Tanaka B Intelligence Scale, which is an intelligence test that can be administered to groups within a short period of time. Methods The new Tanaka B Intelligence Scale and Wechsler Intelligence Scale for Children-Third Edition were administered to 81 subjects (mean age ± SD 15.2±0.7 years) residing in a juvenile detention home; reliability was assessed using Cronbach's alpha coefficient, and concurrent validity was assessed using the one-way analysis of variance intraclass correlation coefficient. Moreover, receiver operating characteristic analysis for screening for individuals who have a deficit in intellectual function (an FIQ<70) was performed. In addition, stratum-specific likelihood ratios for detection of intellectual disability were calculated. Results The Cronbach's alpha for the new Tanaka B Intelligence Scale IQ (BIQ) was 0.86, and the intraclass correlation coefficient with FIQ was 0.83. Receiver operating characteristic analysis demonstrated an area under the curve of 0.89 (95% CI: 0.85–0.96). In addition, the stratum-specific likelihood ratio for the BIQ≤65 stratum was 13.8 (95% CI: 3.9–48.9), and the stratum-specific likelihood ratio for the BIQ≥76 stratum was 0.1 (95% CI: 0.03–0.4). Thus, intellectual disability could be ruled out or determined. Conclusion The present results demonstrated that the new Tanaka B Intelligence Scale score had high reliability and concurrent validity with the Wechsler Intelligence Scale for Children-Third Edition score. Moreover, the post-test probability for the BIQ could be calculated when screening for individuals who have a deficit in intellectual function. The new Tanaka B Intelligence Test is convenient and can be administered within a variety of settings. This enables evaluation of intellectual development even in settings where performing intelligence tests has previously been difficult. PMID:24940880
Trainor, Kate; Pinnington, Mark A
2011-03-01
It has been proposed that neurodynamic examination can assist differential diagnosis of upper/mid lumbar nerve root compression; however, the diagnostic validity of many of these tests has yet to be established. This pilot study aimed to establish the diagnostic validity of the slump knee bend neurodynamic test for upper/mid lumbar nerve root compression in subjects with suspected lumbosacral radicular pain. Two independent examiners performed the slump knee bend test on subjects with radicular leg pain. Inter-tester reliability was calculated using the kappa coefficient. Slump knee bend test results were compared with magnetic resonance imaging findings, and diagnostic accuracy measures were calculated including sensitivity, specificity, predictive values and likelihood ratios. Orthopaedic spinal clinic, secondary care. Sixteen patients with radicular leg pain. All four subjects with mid lumbar nerve root compression on magnetic resonance imaging were correctly identified with the slump knee bend test; however, it was falsely positive in two individuals without the condition. Inter-tester reliability for the slump knee bend test using the kappa coefficient was 0.71 (95% confidence interval 0.33 to 1.0). Diagnostic validity calculations for the slump knee bend test (95% confidence intervals) were: sensitivity, 100% (40 to 100%); specificity, 83% (52 to 98%); positive predictive value, 67% (22 to 96%); negative predictive value, 100% (69 to 100%); positive likelihood ratio, 6.0 (1.58 to 19.4); and negative likelihood ratio, 0 (0 to 0.6). Results indicate good inter-tester reliability and suggest that the slump knee bend test has potential to be a useful clinical test for identifying patients with mid lumbar nerve root compression. Further investigation is needed on larger numbers of patients to confirm these findings. Copyright © 2010 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
Effect of fuel-air-ratio nonuniformity on emissions of nitrogen oxides
NASA Technical Reports Server (NTRS)
Lyons, V. J.
1981-01-01
The inlet fuel-air ratio nonuniformity is studied to determine how nitrogen oxide (NOx) emissions are affected. An increase in NOx emissions with increased fuel-air ratio nonuniformity is predicted for average equivalence ratios less than 0.7, and a decrease in NOx emissions for average equivalence ratios near stoichiometric. The degree of uniformity of fuel-air ratio profiles that is necessary to achieve NOx emissions goals for actual engines that use lean, premixed, prevaporized combustion systems is determined.
Herbst, Meghan K.; Rosenberg, Graeme; Daniels, Brock; Gross, Cary P.; Singh, Dinesh; Molinaro, Annette M.; Luty, Seth; Moore, Christopher L.
2016-01-01
Study objective Hydronephrosis is readily visible on ultrasonography and is a strong predictor of ureteral stones, but ultrasonography is a user-dependent technology and the test characteristics of clinician-performed ultrasonography for hydronephrosis are incompletely characterized, as is the effect of ultrasound fellowship training on predictive accuracy. We seek to determine the test characteristics of ultrasonography for detecting hydronephrosis when performed by clinicians with a wide range of experience under conditions of direct patient care. Methods This was a prospective study of patients presenting to an academic medical center emergency department with suspected renal colic. Before computed tomography (CT) results, an emergency clinician performed bedside ultrasonography, recording the presence and degree of hydronephrosis. CT data were abstracted from the dictated radiology report by an investigator blinded to the bedside ultrasonographic results. Test characteristics of bedside ultrasonography for hydronephrosis were calculated with the CT scan as the reference standard, with test characteristics compared by clinician experience stratified into 4 levels: attending physicians with emergency ultrasound fellowship training, attending physicians without emergency ultrasound fellowship training, ultrasound experienced non–attending physician clinicians (at least 2 weeks of ultrasound training), and ultrasound inexperienced non–attending physician clinicians (physician assistants, nurse practitioners, off-service rotators, and first-year emergency medicine residents with fewer than 2 weeks of ultrasound training). Results There were 670 interpretable bedside ultrasonographic tests performed by 144 unique clinicians, 80.9% of which were performed by clinicians directly involved in the care of the patient. On CT, 47.5% of all subjects had hydronephrosis and 47.0% had a ureteral stone. Among all clinicians, ultrasonography had a sensitivity of 72.6% (95% confidence interval [CI] 65.4% to 78.9%), specificity of 73.3% (95% CI 66.1% to 79.4%), positive likelihood ratio of 2.72 (95% CI 2.25 to 3.27), and negative likelihood ratio of 0.37 (95% CI 0.31 to 0.44) for hydronephrosis, using hydronephrosis on CT as the criterion standard. Among attending physicians with fellowship training, ultrasonography had sensitivity of 92.7% (95% CI 83.8% to 96.9%), positive likelihood ratio of 4.97 (95% CI 2.90 to 8.51), and negative likelihood ratio of 0.08 (95% CI 0.03 to 0.23). Conclusion Overall, ultrasonography performed by emergency clinicians was moderately sensitive and specific for detection of hydronephrosis as seen on CT in patients with suspected renal colic. However, presence or absence of hydronephrosis as determined by emergency physicians with fellowship training in ultrasonography yielded more definitive test results. For clinicians without fellowship training, there was no significant difference between groups in the predictive accuracy of the application according to experience level. PMID:24630203
Herbst, Meghan K; Rosenberg, Graeme; Daniels, Brock; Gross, Cary P; Singh, Dinesh; Molinaro, Annette M; Luty, Seth; Moore, Christopher L
2014-09-01
Hydronephrosis is readily visible on ultrasonography and is a strong predictor of ureteral stones, but ultrasonography is a user-dependent technology and the test characteristics of clinician-performed ultrasonography for hydronephrosis are incompletely characterized, as is the effect of ultrasound fellowship training on predictive accuracy. We seek to determine the test characteristics of ultrasonography for detecting hydronephrosis when performed by clinicians with a wide range of experience under conditions of direct patient care. This was a prospective study of patients presenting to an academic medical center emergency department with suspected renal colic. Before computed tomography (CT) results, an emergency clinician performed bedside ultrasonography, recording the presence and degree of hydronephrosis. CT data were abstracted from the dictated radiology report by an investigator blinded to the bedside ultrasonographic results. Test characteristics of bedside ultrasonography for hydronephrosis were calculated with the CT scan as the reference standard, with test characteristics compared by clinician experience stratified into 4 levels: attending physicians with emergency ultrasound fellowship training, attending physicians without emergency ultrasound fellowship training, ultrasound experienced non-attending physician clinicians (at least 2 weeks of ultrasound training), and ultrasound inexperienced non-attending physician clinicians (physician assistants, nurse practitioners, off-service rotators, and first-year emergency medicine residents with fewer than 2 weeks of ultrasound training). There were 670 interpretable bedside ultrasonographic tests performed by 144 unique clinicians, 80.9% of which were performed by clinicians directly involved in the care of the patient. On CT, 47.5% of all subjects had hydronephrosis and 47.0% had a ureteral stone. Among all clinicians, ultrasonography had a sensitivity of 72.6% (95% confidence interval [CI] 65.4% to 78.9%), specificity of 73.3% (95% CI 66.1% to 79.4%), positive likelihood ratio of 2.72 (95% CI 2.25 to 3.27), and negative likelihood ratio of 0.37 (95% CI 0.31 to 0.44) for hydronephrosis, using hydronephrosis on CT as the criterion standard. Among attending physicians with fellowship training, ultrasonography had sensitivity of 92.7% (95% CI 83.8% to 96.9%), positive likelihood ratio of 4.97 (95% CI 2.90 to 8.51), and negative likelihood ratio of 0.08 (95% CI 0.03 to 0.23). Overall, ultrasonography performed by emergency clinicians was moderately sensitive and specific for detection of hydronephrosis as seen on CT in patients with suspected renal colic. However, presence or absence of hydronephrosis as determined by emergency physicians with fellowship training in ultrasonography yielded more definitive test results. For clinicians without fellowship training, there was no significant difference between groups in the predictive accuracy of the application according to experience level. Copyright © 2014 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
Approximated mutual information training for speech recognition using myoelectric signals.
Guo, Hua J; Chan, A D C
2006-01-01
A new training algorithm called the approximated maximum mutual information (AMMI) is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.
Coelho, Luís M; Salluh, Jorge I F; Soares, Márcio; Bozza, Fernando A; Verdeal, Juan Carlos R; Castro-Faria-Neto, Hugo C; Lapa e Silva, José Roberto; Bozza, Patrícia T; Póvoa, Pedro
2012-12-12
Community-acquired pneumonia (CAP) requiring intensive care unit (ICU) admission remains a severe medical condition, with ICU mortality rates reaching 30%. The aim of this study was to assess the value of different patterns of C-reactive protein (CRP)-ratio response to antibiotic therapy in patients with severe CAP requiring ICU admission as an early marker of outcome. In total, 191 patients with severe CAP were prospectively included and CRP was sampled every other day from D1 to D7 of antibiotic prescription. CRP-ratio was calculated in relation to D1 CRP concentration. Patients were classified according to an individual pattern of CRP-ratio response with the following criteria: fast response - when D5 CRP was less than or equal to 0.4 of D1 CRP concentration; slow response - when D5 CRP was > 0.4 and D7 less than or equal to 0.8 of D1 CRP concentration; nonresponse - when D7 CRP was > 0.8 of D1 CRP concentration. Comparison between ICU survivors and non-survivors was performed. CRP-ratio from D1 to D7 decreased faster in survivors than in non-survivors (p = 0.01). The ability of CRP-ratio by D5 to predict ICU outcome assessed by the area under the ROC curve was 0.73 (95% Confidence Interval, 0.64 - 0.82). By D5, a CRP concentration above 0.5 of the initial level was a marker of poor outcome (sensitivity 0.81, specificity 0.58, positive likelihood ratio 1.93, negative likelihood ratio 0.33). The time-dependent analysis of CRP-ratio of the three patterns (fast response n = 66; slow response n = 81; nonresponse n = 44) was significantly different between groups (p < 0.001). The ICU mortality rate was considerably different according to the patterns of CRP-ratio response: fast response 4.8%, slow response 17.3% and nonresponse 36.4% (p < 0.001). In severe CAP, sequential evaluation of CRP-ratio was useful in the early identification of patients with poor outcome. The evaluation of CRP-ratio pattern of response to antibiotics during the first week of therapy was useful in the recognition of the individual clinical evolution.
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heartbeat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
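A minimal sketch of the generalized likelihood ratio for one piecewise-constant template against a homogeneous null, using the standard inhomogeneous-Poisson log-likelihood; the multiscale template search and multiple-testing scheme of the paper are not reproduced:

    import numpy as np
    from scipy.stats import chi2

    def glr_piecewise_poisson(event_times, T, edges):
        # GLR of a piecewise-constant rate (bins given by edges) vs a
        # constant rate; ll = sum_k c_k*log(rate_k) - rate_k*width_k
        counts, _ = np.histogram(event_times, bins=edges)
        widths = np.diff(edges)
        rate_hat = counts / widths
        ll_alt = np.sum(counts * np.log(np.where(counts > 0, rate_hat, 1.0))
                        - rate_hat * widths)
        n = len(event_times)
        ll_null = n * np.log(n / T) - n          # constant-rate MLE
        stat = 2 * (ll_alt - ll_null)
        return stat, chi2.sf(stat, df=len(widths) - 1)

    rng = np.random.default_rng(5)
    t = np.sort(rng.uniform(0, 100, 300))        # homogeneous example
    print(glr_piecewise_poisson(t, 100, np.linspace(0, 100, 6)))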
Branching-ratio approximation for the self-exciting Hawkes process
NASA Astrophysics Data System (ADS)
Hardiman, Stephen J.; Bouchaud, Jean-Philippe
2014-12-01
We introduce a model-independent approximation for the branching ratio of Hawkes self-exciting point processes. Our estimator requires knowing only the mean and variance of the event count in a sufficiently large time window, statistics that are readily obtained from empirical data. The method we propose greatly simplifies the estimation of the Hawkes branching ratio, recently proposed as a proxy for market endogeneity and formerly estimated using numerical likelihood maximization. We employ our method to support recent theoretical and experimental results indicating that the best fitting Hawkes model to describe S&P futures price changes is in fact critical (now and in the recent past) in light of the long memory of financial market activity.
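A sketch of the estimator, assuming the large-window relation Var[N]/E[N] ≈ 1/(1 - n)^2 for a stationary Hawkes process with branching ratio n, which inverts to n ≈ 1 - sqrt(E[N]/Var[N]):

    import numpy as np

    def branching_ratio(event_times, window, t_max):
        # model-independent estimate from count statistics in fixed windows
        edges = np.arange(0.0, t_max + window, window)
        counts, _ = np.histogram(event_times, bins=edges)
        m, v = counts.mean(), counts.var(ddof=1)
        return 1.0 - np.sqrt(m / v)

    # sanity check: a homogeneous Poisson stream should give roughly 0
    rng = np.random.default_rng(6)
    t = np.sort(rng.uniform(0, 10_000, 10_000))
    print(branching_ratio(t, window=100, t_max=10_000))

On real data the window must be long relative to the kernel's memory; too short a window tends to bias the estimate downward.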
Mattos, Jose L; Schlosser, Rodney J; Mace, Jess C; Smith, Timothy L; Soler, Zachary M
2018-05-02
Olfactory-specific quality of life (QOL) can be measured using the Questionnaire of Olfactory Disorders Negative Statements (QOD-NS). Changes in the QOD-NS after treatment can be difficult to interpret since there is no standardized definition of clinically meaningful improvement. Patients with chronic rhinosinusitis (CRS) completed the QOD-NS. Four distribution-based methods were used to calculate the minimal clinically important difference (MCID): (1) one-half standard deviation (SD); (2) standard error of measurement (SEM); (3) Cohen's effect size (d) of the smallest unit of change; and (4) minimal detectable change (MDC). We also averaged all 4 of the scores together. Finally, the likelihood of achieving a MCID after sinus surgery using these methods, as well as average QOD-NS scores, was stratified by normal vs abnormal baseline QOD-NS scores. Outcomes were examined on 128 patients. The mean ± SD improvement in QOD-NS score after surgery was 4.3 ± 11.0 for the entire cohort and 9.6 ± 12.9 for those with abnormal baseline scores (p < 0.001). The MCID values using the different techniques were: (1) SD = 6.5; (2) SEM = 3.1; (3) d = 2.6; and (4) MDC = 8.6. The MCID score was 5.2 on average. For the total cohort analysis, the likelihood of reporting a MCID ranged from 26% to 51%, and from 49% to 70% for patients reporting preoperative abnormal olfaction. Distribution-based MCID values of the QOD-NS range between 2.6 and 8.6 points, with an average of 5.2. When stratified by preoperative QOD-NS scores, the majority of patients reporting abnormal preoperative QOD-NS scores achieved a MCID. © 2018 ARS-AAOA, LLC.
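The four distribution-based formulas are straightforward to reproduce. In the sketch below the baseline scores are synthetic and the reliability coefficient is an assumed input; it is needed for the SEM and MDC but is not restated in the abstract, and the d = 0.2 reading of method (3) is an assumption consistent with the reported value. With an SD near 13 and reliability near 0.94, the outputs land close to the reported 6.5, 3.1, 2.6, and 8.6:

    import numpy as np

    def distribution_based_mcids(baseline_scores, reliability):
        sd = np.std(baseline_scores, ddof=1)
        half_sd = 0.5 * sd                      # (1) one-half SD
        sem = sd * np.sqrt(1 - reliability)     # (2) standard error of measurement
        cohen = 0.2 * sd                        # (3) small effect size, d = 0.2
        mdc = 1.96 * np.sqrt(2) * sem           # (4) minimal detectable change
        return half_sd, sem, cohen, mdc

    rng = np.random.default_rng(7)
    scores = rng.normal(20.0, 13.0, 128)        # hypothetical QOD-NS baselines
    print(distribution_based_mcids(scores, reliability=0.94))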
Safety modeling of urban arterials in Shanghai, China.
Wang, Xuesong; Fan, Tianxiang; Chen, Ming; Deng, Bing; Wu, Bing; Tremont, Paul
2015-10-01
Traffic safety on urban arterials is influenced by several key variables including geometric design features, land use, traffic volume, and travel speeds. This paper is an exploratory study of the relationship of these variables to safety. It uses a comparatively new method of measuring speeds by extracting GPS data from taxis operating on Shanghai's urban network. This GPS derived speed data, hereafter called Floating Car Data (FCD) was used to calculate average speeds during peak and off-peak hours, and was acquired from samples of 15,000+ taxis traveling on 176 segments over 18 major arterials in central Shanghai. Geometric design features of these arterials and surrounding land use characteristics were obtained by field investigation, and crash data was obtained from police reports. Bayesian inference using four different models, Poisson-lognormal (PLN), PLN with Maximum Likelihood priors (PLN-ML), hierarchical PLN (HPLN), and HPLN with Maximum Likelihood priors (HPLN-ML), was used to estimate crash frequencies. Results showed the HPLN-ML models had the best goodness-of-fit and efficiency, and models with ML priors yielded estimates with the lowest standard errors. Crash frequencies increased with increases in traffic volume. Higher average speeds were associated with higher crash frequencies during peak periods, but not during off-peak periods. Several geometric design features including average segment length of arterial, number of lanes, presence of non-motorized lanes, number of access points, and commercial land use, were positively related to crash frequencies. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Franklin, M. Rose (Technical Monitor)
2000-01-01
Since 1750, the number of cataclysmic volcanic eruptions (i.e., those having a volcanic explosivity index, or VEI, equal to 4 or larger) per decade is found to span 2-11, with 96% located in the tropics and extra-tropical Northern Hemisphere. A two-point moving average of the time series has higher values since the 1860s than before, measuring 8.00 in the 1910s (the highest value) and measuring 6.50 in the 1980s, the highest since the 1810s' peak. On the basis of the usual behavior of the first difference of the two-point moving averages, one infers that the two-point moving average for the 1990s will measure about 6.50 +/- 1.00, implying that about 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI equal to 5 or larger) nearly always have been associated with episodes of short-term global cooling, the occurrence of even one could ameliorate the effects of global warming. Poisson probability distributions reveal that the probability of one or more VEI equal to 4 or larger events occurring within the next ten years is >99%, while it is about 49% for VEI equal to 5 or larger events and 18% for VEI equal to 6 or larger events. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next 10 years appears reasonably high.
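The quoted probabilities follow from the Poisson formula P(at least one) = 1 - exp(-lambda). The per-decade rate for VEI >= 4 is the 6.5 given above; the rates for VEI >= 5 and >= 6 are assumed values chosen to reproduce the stated 49% and 18%:

    from scipy.stats import poisson

    for vei, rate in ((4, 6.5), (5, 0.67), (6, 0.2)):   # events per decade
        p = poisson.sf(0, rate)                         # 1 - P(no events)
        print(f"VEI>={vei}: P(>=1 in a decade) = {p:.2f}")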
Li, Li; Chen, Shi; Wang, Ke; Huang, Jiao; Liu, Li; Wei, Sheng; Gao, Hong-Yu
2015-01-01
Nodal invasion by colorectal cancer is a critical determinant in estimating patient survival and in choosing appropriate preoperative treatment. The present meta-analysis was designed to evaluate the diagnostic value of endorectal ultrasound (EUS) in preoperative assessment of lymph node involvement in colorectal cancer. We systematically searched PubMed, Web of Science, Embase, and China National Knowledge Infrastructure (CNKI) databases for relevant studies published on or before December 10th, 2014. The sensitivity, specificity, likelihood ratios, diagnostic odds ratio (DOR) and area under the summary receiver operating characteristics curve (AUC) were assessed to estimate the diagnostic value of EUS. Subgroup analysis and meta-regression were performed to explore heterogeneity across studies. Thirty-three studies covering 3,016 subjects were included. The pooled sensitivity and specificity were 0.69 (95%CI: 0.63-0.75) and 0.77 (95%CI: 0.73-0.82), respectively. The positive and negative likelihood ratios were 3.09 (95%CI: 2.52-3.78) and 0.39 (95%CI: 0.32-0.48), respectively. The DOR was 7.84 (95%CI: 5.56-11.08), and AUC was 0.80 (95%CI: 0.77-0.84). This meta-analysis indicated that EUS has moderate diagnostic value in preoperative assessment of lymph node involvement in colorectal cancer. Further refinements in technology and diagnostic criteria are necessary to improve the diagnostic accuracy of EUS.
Testing the non-unity of rate ratio under inverse sampling.
Tang, Man-Lai; Liao, Yi Jie; Ng, Hong Keung Tony; Chan, Ping Shing
2007-08-01
Inverse sampling is considered to be a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio and conditional score statistics. Three methods (the asymptotic, conditional exact, and Mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, the asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is expected and fluctuation of sizes around the pre-chosen nominal level is allowed, then the Mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
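A Monte Carlo sketch of how such size evaluations are set up: under inverse sampling each group is observed until r cases accrue, so the number of non-cases is negative binomial, and a delta-method Wald statistic tests a unity rate ratio. This is one plausible Wald construction for illustration, not the exact statistics studied in the article:

    import numpy as np
    from scipy.stats import norm

    def empirical_size(p0, p1, r, n_sim=100_000, alpha=0.05, seed=8):
        rng = np.random.default_rng(seed)
        # group size = r cases plus a negative binomial number of non-cases
        n0 = r + rng.negative_binomial(r, p0, n_sim)
        n1 = r + rng.negative_binomial(r, p1, n_sim)
        log_rr = np.log((r / n1) / (r / n0))               # log rate-ratio estimate
        se = np.sqrt((1 - r / n1) / r + (1 - r / n0) / r)  # delta method
        return np.mean(np.abs(log_rr / se) > norm.ppf(1 - alpha / 2))

    print(empirical_size(0.10, 0.10, r=30))   # should sit near 0.05 under H0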