Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F. Landis
2014-01-01
This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.
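The decision rule described in this abstract is the classic Wald SPRT. As an illustration only (the variable names and the standard Wald threshold approximations are ours, not taken from the paper), a minimal sketch:

```python
import math

def wald_sprt(log_lrs, alpha=0.01, beta=0.01):
    """Accumulate log likelihood ratios and compare against Wald's
    thresholds, set by the targeted false alarm rate (alpha) and
    missed detection rate (beta).

    Returns ('H1' | 'H0' | 'continue', number_of_samples_used)."""
    upper = math.log((1 - beta) / alpha)  # cross above: accept H1
    lower = math.log(beta / (1 - alpha))  # cross below: accept H0
    total = 0.0
    for n, llr in enumerate(log_lrs, start=1):
        total += llr
        if total >= upper:
            return "H1", n
        if total <= lower:
            return "H0", n
    return "continue", len(log_lrs)
```

With alpha = beta = 0.05 the thresholds are ±log(19) ≈ ±2.94, so a single strongly informative sample can already terminate the test.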
The Sequential Probability Ratio Test and Binary Item Response Models
ERIC Educational Resources Information Center
Nydick, Steven W.
2014-01-01
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
Distributed Immune Systems for Wireless Network Information Assurance
2010-04-26
ratio test (SPRT), where the goal is to optimize a hypothesis testing problem given a trade-off between the probability of errors and the...using cumulative sum (CUSUM) and Girshik-Rubin-Shiryaev (GRSh) statistics. In sequential versions of the problem the sequential probability ratio ...the more complicated problems, in particular those where no clear mean can be established. We developed algorithms based on the sequential probability
Safeguarding a Lunar Rover with Wald's Sequential Probability Ratio Test
NASA Technical Reports Server (NTRS)
Furlong, Michael; Dille, Michael; Wong, Uland; Nefian, Ara
2016-01-01
The virtual bumper is a safeguarding mechanism for autonomous and remotely operated robots. In this paper we take a new approach to the virtual bumper system by using an old statistical test. By using a modified version of Wald's sequential probability ratio test we demonstrate that we can reduce the number of false positives reported by the virtual bumper, thereby saving valuable mission time. We use the concept of sequential probability ratio to control vehicle speed in the presence of possible obstacles in order to increase certainty about whether or not obstacles are present. Our new algorithm reduces the chances of collision by approximately 98% relative to traditional virtual bumper safeguarding without speed control.
A detailed description of the sequential probability ratio test for 2-IMU FDI
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
The sequential probability ratio test (SPRT) for 2-IMU FDI (inertial measuring unit failure detection/isolation) is described. The SPRT is a statistical technique for detecting and isolating soft IMU failures originally developed for the strapdown inertial reference unit. The flowchart of a subroutine incorporating the 2-IMU SPRT is included.
Two-IMU FDI performance of the sequential probability ratio test during shuttle entry
NASA Technical Reports Server (NTRS)
Rich, T. M.
1976-01-01
Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B from entry through landing trajectory. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2010-01-01
When facing a conjunction between space objects, decision makers must choose whether or not to maneuver for collision avoidance. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method, and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.
Interpretation of diagnostic data: 5. How to do it with simple maths.
1983-11-01
The use of simple maths with the likelihood ratio strategy fits in nicely with our clinical views. By making the most out of the entire range of diagnostic test results (i.e., several levels, each with its own likelihood ratio, rather than a single cut-off point and a single ratio) and by permitting us to keep track of the likelihood that a patient has the target disorder at each point along the diagnostic sequence, this strategy allows us to place patients at an extremely high or an extremely low likelihood of disease. Thus, the numbers of patients with ultimately false-positive results (who suffer the slings of labelling and the arrows of needless therapy) and of those with ultimately false-negative results (who therefore miss their chance for diagnosis and, possibly, efficacious therapy) will be dramatically reduced. The following guidelines will be useful in interpreting signs, symptoms and laboratory tests with the likelihood ratio strategy: Seek out, and demand from the clinical or laboratory experts who ought to know, the likelihood ratios for key symptoms and signs, and several levels (rather than just the positive and negative results) of diagnostic test results. Identify, when feasible, the logical sequence of diagnostic tests. Estimate the pretest probability of disease for the patient, and, using either the nomogram or the conversion formulas, apply the likelihood ratio that corresponds to the first diagnostic test result. While remembering that the resulting post-test probability or odds from the first test becomes the pretest probability or odds for the next diagnostic test, repeat the process for all the pertinent symptoms, signs and laboratory studies that pertain to the target disorder. However, these combinations may not be independent, and convergent diagnostic tests, if treated as independent, will combine to overestimate the final post-test probability of disease. 
You are now far more sophisticated in interpreting diagnostic tests than most of your teachers. In the last part of our series we will show you some rather complex strategies that combine diagnosis and therapy, quantify our as yet nonquantified ideas about use, and require the use of at least a hand calculator.
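The strategy described above — convert the pretest probability to odds, multiply by each test's likelihood ratio in sequence, and convert back — can be sketched as follows (an illustration only; the function name is ours, and it assumes independent tests, which the article cautions can overestimate the final probability):

```python
def posttest_probability(pretest_prob, likelihood_ratios):
    """Apply likelihood ratios sequentially on the odds scale: each
    test's post-test odds become the pretest odds for the next test.
    Assumes independent tests; convergent tests treated as independent
    will overestimate the final post-test probability."""
    odds = pretest_prob / (1.0 - pretest_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)
```

For example, a 10% pretest probability followed by two tests with likelihood ratios 2 and 3 gives post-test odds of (1/9) x 6 = 2/3, i.e., a 40% post-test probability.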
Statistics provide guidance for indigenous organic carbon detection on Mars missions.
Sephton, Mark A; Carter, Jonathan N
2014-08-01
Data from the Viking and Mars Science Laboratory missions indicate the presence of organic compounds that are not definitively martian in origin. Both contamination and confounding mineralogies have been suggested as alternatives to indigenous organic carbon. Intuitive thought suggests that we are repeatedly obtaining data that confirms the same level of uncertainty. Bayesian statistics may suggest otherwise. If an organic detection method has a true positive to false positive ratio greater than one, then repeated organic matter detection progressively increases the probability of indigeneity. Bayesian statistics also reveal that methods with higher ratios of true positives to false positives give higher overall probabilities and that detection of organic matter in a sample with a higher prior probability of indigenous organic carbon produces greater confidence. Bayesian statistics, therefore, provide guidance for the planning and operation of organic carbon detection activities on Mars. Suggestions for future organic carbon detection missions and instruments are as follows: (i) On Earth, instruments should be tested with analog samples of known organic content to determine their true positive to false positive ratios. (ii) On the mission, for an instrument with a true positive to false positive ratio above one, it should be recognized that each positive detection of organic carbon will result in a progressive increase in the probability of indigenous organic carbon being present; repeated measurements, therefore, can overcome some of the deficiencies of a less-than-definitive test. (iii) For a fixed number of analyses, the highest true positive to false positive ratio method or instrument will provide the greatest probability that indigenous organic carbon is present. 
(iv) On Mars, analyses should concentrate on samples with highest prior probability of indigenous organic carbon; intuitive desires to contrast samples of high prior probability and low prior probability of indigenous organic carbon should be resisted.
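The repeated-update argument in this abstract can be sketched with a simple Bayes-factor calculation (an illustrative sketch with names of our choosing, assuming each positive detection contributes an independent Bayes factor equal to the instrument's true-positive to false-positive ratio):

```python
def indigeneity_probability(prior, tp_fp_ratio, n_detections):
    """Posterior probability of indigenous organic carbon after n
    positive detections, treating each detection as an independent
    update whose Bayes factor is the instrument's true-positive to
    false-positive ratio (assumed independent for illustration)."""
    odds = prior / (1.0 - prior)
    odds *= tp_fp_ratio ** n_detections
    return odds / (1.0 + odds)
```

A ratio of exactly one leaves the prior unchanged no matter how many detections accumulate, matching the abstract's requirement that the ratio exceed one for repeated detections to increase confidence.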
Krishnamoorthy, K; Oral, Evrim
2017-12-01
Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
Diagnostic accuracy of FEV1/forced vital capacity ratio z scores in asthmatic patients.
Lambert, Allison; Drummond, M Bradley; Wei, Christine; Irvin, Charles; Kaminsky, David; McCormack, Meredith; Wise, Robert
2015-09-01
The FEV1/forced vital capacity (FVC) ratio is used as a criterion for airflow obstruction; however, the test characteristics of spirometry in the diagnosis of asthma are not well established. The accuracy of a test depends on the pretest probability of disease. We wanted to estimate the FEV1/FVC ratio z score threshold with optimal accuracy for the diagnosis of asthma for different pretest probabilities. Asthmatic patients enrolled in 4 trials from the Asthma Clinical Research Centers were included in this analysis. Measured and predicted FEV1/FVC ratios were obtained, with calculation of z scores for each participant. Across a range of asthma prevalences and z score thresholds, the overall diagnostic accuracy was calculated. One thousand six hundred eight participants were included (mean age, 39 years; 71% female; 61% white). The mean FEV1 percent predicted value was 83% (SD, 15%). In a symptomatic population with 50% pretest probability of asthma, optimal accuracy (68%) is achieved with a z score threshold of -1.0 (16th percentile), corresponding to a 6 percentage point reduction from the predicted ratio. However, in a screening population with a 5% pretest probability of asthma, the optimum z score is -2.0 (second percentile), corresponding to a 12 percentage point reduction from the predicted ratio. These findings were not altered by markers of disease control. Reduction of the FEV1/FVC ratio can support the diagnosis of asthma; however, the ratio is neither sensitive nor specific enough for diagnostic accuracy. When interpreting spirometric results, consideration of the pretest probability is important in the diagnosis of asthma based on airflow limitation. Copyright © 2015 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
Using effort information with change-in-ratio data for population estimation
Udevitz, Mark S.; Pollock, Kenneth H.
1995-01-01
Most change-in-ratio (CIR) methods for estimating fish and wildlife population sizes have been based only on assumptions about how encounter probabilities vary among population subclasses. When information on sampling effort is available, it is also possible to derive CIR estimators based on assumptions about how encounter probabilities vary over time. This paper presents a generalization of previous CIR models that allows explicit consideration of a range of assumptions about the variation of encounter probabilities among subclasses and over time. Explicit estimators are derived under this model for specific sets of assumptions about the encounter probabilities. Numerical methods are presented for obtaining estimators under the full range of possible assumptions. Likelihood ratio tests for these assumptions are described. Emphasis is on obtaining estimators based on assumptions about variation of encounter probabilities over time.
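For context, the classic two-subclass change-in-ratio estimator (a simple special case, not the generalized model the paper develops) is N = (Rx - p2*R) / (p1 - p2), where p1 and p2 are the subclass proportions before and after a removal of R animals, Rx of them from the subclass of interest. A minimal sketch:

```python
def cir_estimate(p1, p2, removed_subclass, removed_total):
    """Classic two-subclass change-in-ratio population estimate:
    N = (Rx - p2 * R) / (p1 - p2), where p1 and p2 are the subclass
    proportions observed before and after the removal."""
    return (removed_subclass - p2 * removed_total) / (p1 - p2)
```

For example, a population of 1000 with 50% males, from which 200 males are removed, leaves 300 males among 800 animals (p2 = 0.375), and the estimator recovers N = 1000.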
Pérez, Omar D; Aitken, Michael R F; Zhukovsky, Peter; Soto, Fabián A; Urcelay, Gonzalo P; Dickinson, Anthony
2016-12-15
Associative learning theories regard the probability of reinforcement as the critical factor determining responding. However, the role of this factor in instrumental conditioning is not completely clear. In fact, free-operant experiments show that participants respond at a higher rate on variable ratio than on variable interval schedules even though the reinforcement probability is matched between the schedules. This difference has been attributed to the differential reinforcement of long inter-response times (IRTs) by interval schedules, which acts to slow responding. In the present study, we used a novel experimental design to investigate human responding under random ratio (RR) and regulated probability interval (RPI) schedules, a type of interval schedule that sets a reinforcement probability independently of the IRT duration. Participants responded on each type of schedule before a final choice test in which they distributed responding between two schedules similar to those experienced during training. Although response rates did not differ during training, the participants responded at a lower rate on the RPI schedule than on the matched RR schedule during the choice test. This preference cannot be attributed to a higher probability of reinforcement for long IRTs and questions the idea that similar associative processes underlie classical and instrumental conditioning.
Inverse sequential detection of parameter changes in developing time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1992-01-01
Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second probability is that of erroneously accepting their estimates for the enlarged set ('type 2 error'). A more stable combined 'no change' probability which always falls between 0.5 and 0 is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT, Wald 1945) in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the compound probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.
NASA Technical Reports Server (NTRS)
Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.
2011-01-01
Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2013-01-01
A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypotheses that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les
2008-01-01
To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models; and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency of test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
Meta-analysis: accuracy of rapid tests for malaria in travelers returning from endemic areas.
Marx, Arthur; Pewsner, Daniel; Egger, Matthias; Nüesch, Reto; Bucher, Heiner C; Genton, Blaise; Hatz, Christoph; Jüni, Peter
2005-05-17
Microscopic diagnosis of malaria is unreliable outside specialized centers. Rapid tests have become available in recent years, but their accuracy has not been assessed systematically. To determine the accuracy of rapid diagnostic tests for ruling out malaria in nonimmune travelers returning from malaria-endemic areas. The authors searched MEDLINE, EMBASE, CAB Health, and CINAHL (1988 to September 2004); hand-searched conference proceedings; checked reference lists; and contacted experts and manufacturers. Diagnostic accuracy studies in nonimmune individuals with suspected malaria were included if they compared rapid tests with expert microscopic examination or polymerase chain reaction tests. Data on study and patient characteristics and results were extracted in duplicate. The main outcome was the likelihood ratio for a negative test result (negative likelihood ratio) for Plasmodium falciparum malaria. Likelihood ratios were combined by using random-effects meta-analysis, stratified by the antigen targeted (histidine-rich protein-2 [HRP-2] or parasite lactate dehydrogenase [LDH]) and by test generation. Nomograms of post-test probabilities were constructed. The authors included 21 studies and 5747 individuals. For P. falciparum, HRP-2-based tests were more accurate than parasite LDH-based tests: Negative likelihood ratios were 0.08 and 0.13, respectively (P = 0.019 for difference). Three-band HRP-2 tests had similar negative likelihood ratios but higher positive likelihood ratios compared with 2-band tests (34.7 vs. 98.5; P = 0.003). For P. vivax, negative likelihood ratios tended to be closer to 1.0 for HRP-2-based tests than for parasite LDH-based tests (0.24 vs. 0.13; P = 0.22), but analyses were based on a few heterogeneous studies. Negative likelihood ratios for the diagnosis of P. malariae or P. ovale were close to 1.0 for both types of tests. In febrile travelers returning from sub-Saharan Africa, the typical probability of P. falciparum malaria is estimated at 1.1% (95% CI, 0.6% to 1.9%) after a negative 3-band HRP-2 test result and 97% (CI, 92% to 99%) after a positive test result. Few studies evaluated 3-band HRP-2 tests. The evidence is also limited for species other than P. falciparum because of the few available studies and their more heterogeneous results. Further studies are needed to determine whether the use of rapid diagnostic tests improves outcomes in returning travelers with suspected malaria. Rapid malaria tests may be a useful diagnostic adjunct to microscopy in centers without major expertise in tropical medicine. Initial decisions on treatment initiation and choice of antimalarial drugs can be based on travel history and post-test probabilities after rapid testing. Expert microscopy is still required for species identification and confirmation.
The statistics of Pearce element diagrams and the Chayes closure problem
NASA Astrophysics Data System (ADS)
Nicholls, J.
1988-05-01
Pearce element ratios are defined as having a constituent in their denominator that is conserved in a system undergoing change. The presence of a conserved element in the denominator simplifies the statistics of such ratios and renders them subject to statistical tests, especially tests of significance of the correlation coefficient between Pearce element ratios. Pearce element ratio diagrams provide unambiguous tests of petrologic hypotheses because they are based on the stoichiometry of rock-forming minerals. There are three ways to recognize a conserved element: 1. The petrologic behavior of the element can be used to select conserved ones. They are usually the incompatible elements. 2. The ratio of two conserved elements will be constant in a comagmatic suite. 3. An element ratio diagram that is not constructed with a conserved element in the denominator will have a trend with a near zero intercept. The last two criteria can be tested statistically. The significance of the slope, intercept and correlation coefficient can be tested by estimating the probability of obtaining the observed values from a random population of arrays. This population of arrays must satisfy two criteria: 1. The population must contain at least one array that has the means and variances of the array of analytical data for the rock suite. 2. Arrays with the means and variances of the data must not be so abundant in the population that nearly every array selected at random has the properties of the data. The population of random closed arrays can be obtained from a population of open arrays whose elements are randomly selected from probability distributions. The means and variances of these probability distributions are themselves selected from probability distributions which have means and variances equal to a hypothetical open array that would give the means and variances of the data on closure. This hypothetical open array is called the Chayes array.
Alternatively, the population of random closed arrays can be drawn from the compositional space available to rock-forming processes. The minerals comprising the available space can be described with one additive component per mineral phase and a small number of exchange components. This space is called Thompson space. Statistics based on either space lead to the conclusion that Pearce element ratios are statistically valid and that Pearce element diagrams depict the processes that create chemical inhomogeneities in igneous rock suites.
Accuracy of diagnostic tests to detect asymptomatic bacteriuria during pregnancy.
Mignini, Luciano; Carroli, Guillermo; Abalos, Edgardo; Widmer, Mariana; Amigot, Susana; Nardin, Juan Manuel; Giordano, Daniel; Merialdi, Mario; Arciero, Graciela; Del Carmen Hourquescos, Maria
2009-02-01
A dipslide is a plastic paddle coated with agar that is attached to a plastic cap that screws onto a sterile plastic vial. Our objective was to estimate the diagnostic accuracy of the dipslide culture technique to detect asymptomatic bacteriuria during pregnancy and to evaluate the accuracy of nitrite and leukocyte esterase dipsticks for screening. This was an ancillary study within a trial comparing single-day with 7-day therapy in treating asymptomatic bacteriuria. Clean-catch midstream samples were collected from pregnant women seeking routine care. Positive and negative likelihood ratios and sensitivity and specificity for the culture-based dipslide to detect and chemical dipsticks (nitrites, leukocyte esterase, or both) to screen were estimated using traditional urine culture as the "gold standard." A total of 3,048 eligible pregnant women were screened. The prevalence of asymptomatic bacteriuria was 15%, with Escherichia coli the most prevalent organism. The likelihood ratio for detecting asymptomatic bacteriuria with a positive dipslide test was 225 (95% confidence interval [CI] 113-449), increasing the probability of asymptomatic bacteriuria to 98%; the likelihood ratio for a negative dipslide test was 0.02 (95% CI 0.01-0.05), reducing the probability of bacteriuria to less than 1%. The positive likelihood ratio of leukocyte esterase and nitrite dipsticks (when both or either one was positive) was 6.95 (95% CI 5.80-8.33), increasing the probability of bacteriuria to only 54%; the negative likelihood ratio was 0.50 (95% CI 0.45-0.57), reducing the probability to 8%. A pregnant woman with a positive dipslide test is very likely to have a definitive diagnosis of asymptomatic bacteriuria, whereas a negative result effectively rules out the presence of bacteriuria. Dipsticks that measure nitrites and leukocyte esterase have low sensitivity for use in screening for asymptomatic bacteriuria during gestation. ISRCTN, isrctn.org, 1196608 II.
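The post-test probabilities quoted above follow from the standard odds-likelihood form of Bayes' theorem; a short sketch reproducing the reported numbers (the function name is ours):

```python
def posttest_prob(pretest, lr):
    """Odds-likelihood form of Bayes' theorem: convert the pretest
    probability to odds, multiply by the likelihood ratio, convert back."""
    odds = pretest / (1.0 - pretest) * lr
    return odds / (1.0 + odds)

# With the reported 15% prevalence of asymptomatic bacteriuria:
#   positive dipslide (LR+ = 225)  -> about 0.98
#   negative dipslide (LR- = 0.02) -> below 0.01
#   negative dipstick (LR- = 0.50) -> about 0.08
```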
Radiation detection method and system using the sequential probability ratio test
Nelson, Karl E [Livermore, CA]; Valentine, John D [Redwood City, CA]; Beauchamp, Brock R [San Ramon, CA]
2007-07-17
A method and system using the Sequential Probability Ratio Test (SPRT) to enhance the detection of an elevated level of radiation, by determining whether a set of observations is consistent with a specified model within given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
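The decision logic described above follows Wald's classical SPRT. The sketch below is a minimal illustration of that test for Poisson count data, not the patented system's implementation; the rates `lam0`/`lam1` and the error tolerances `alpha`/`beta` are illustrative assumptions:

```python
import math

def sprt_poisson(counts, lam0, lam1, alpha=0.05, beta=0.05):
    """Wald SPRT deciding between a background rate lam0 (H0) and an
    elevated rate lam1 (H1) from a stream of Poisson counts.
    Returns ('H0' | 'H1' | 'continue', number_of_observations_used)."""
    upper = math.log((1.0 - beta) / alpha)   # crossing up -> accept H1
    lower = math.log(beta / (1.0 - alpha))   # crossing down -> accept H0
    llr = 0.0
    for n, k in enumerate(counts, start=1):
        # log-likelihood-ratio increment for one Poisson observation
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(counts)
```

With alpha = beta = 0.05 the cumulative log-likelihood ratio is compared against ±ln(19) ≈ ±2.94, and sampling continues until one boundary is crossed, which is what gives the SPRT its range-maximizing, data-efficient character.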
NASA Astrophysics Data System (ADS)
Zhang, H.; Guan, Z. W.; Wang, Q. Y.; Liu, Y. J.; Li, J. K.
2018-05-01
The effects of microstructure and stress ratio on the high-cycle fatigue of the nickel superalloy Nimonic 80A were investigated. Fatigue tests were performed at stress ratios of 0.1, 0.5, and 0.8 at a frequency of 110 Hz. Cleavage failure was observed, and scanning electron microscopy revealed three competing crack initiation modes, classified as surface without facets, surface with facets, and subsurface with facets. As the stress ratio increased from 0.1 to 0.8, the occurrence probability of surface and subsurface initiation with facets increased, reaching its maximum at R = 0.5, while the probability of surface initiation without facets decreased. The effect of microstructure on the fatigue fracture behavior at different stress ratios was also observed and discussed. Based on the Goodman diagram, it was concluded that the fatigue strength at 50% probability of failure at R = 0.1, 0.5, and 0.8 lies below the modified Goodman line.
Computerized Classification Testing with the Rasch Model
ERIC Educational Resources Information Center
Eggen, Theo J. H. M.
2011-01-01
If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the Sequential Probability Ratio Test (SPRT) (Wald,…
Goldstein, Cathy A; Karnib, Hala; Williams, Katherine; Virk, Zunaira; Shamim-Uzzaman, Afifa
2017-11-22
Home sleep apnea tests (HSATs) are an alternative to attended polysomnograms (PSGs) when the pre-test probability of moderate to severe OSA is high. However, insurers often mandate their use whenever OSA is suspected, regardless of the pre-test probability. Our objective was to determine the ability of HSATs to rule in OSA when the pre-test probability of an apnea-hypopnea index (AHI) in the moderate to severe range is low. Patients who underwent HSATs were characterized as having low or high pre-test probability based on the presence of two symptoms of the STOP instrument plus either BMI > 35 or male gender. The odds of an HSAT diagnostic for OSA, dependent on pre-test probability, were calculated. Stepwise selection determined predictors of a non-diagnostic HSAT. Because PSG is performed after HSATs that do not confirm OSA, false-negative results were assessed. Among 196 individuals, pre-test probability was low in 74 (38%) and high in 122 (62%). A lower percentage of individuals with low versus high pre-test probability for moderate to severe OSA had HSAT results that confirmed OSA (61% versus 84%, p = 0.0002), resulting in an odds ratio (OR) of 0.29 for a confirmatory HSAT in the low pre-test probability group (95% CI [0.146, 0.563]). Multivariate logistic regression demonstrated that age ≤ 50 (OR 3.10 [1.24-7.73]), female gender (OR 3.58 [1.50-8.66]), non-enlarged neck circumference (OR 11.50 [2.50-52.93]), and the absence of loud snoring (OR 3.47 [1.30-9.25]) best predicted a non-diagnostic HSAT. OSA was diagnosed by PSG in 54% of individuals with a negative HSAT, which was similar in both pre-test probability groups. HSATs should be reserved for individuals with a high pre-test probability of moderate to severe disease, as opposed to any individual with suspected OSA.
Sharing the Diagnostic Process in the Clinical Teaching Environment: A Case Study
ERIC Educational Resources Information Center
Cuello-Garcia, Carlos
2005-01-01
Revealing or visualizing the thinking involved in making clinical decisions is a challenge. A case study is presented with a visual implement for sharing the diagnostic process. This technique adapts the Bayesian approach to the case presentation. Pretest probabilities and likelihood ratios are gathered to obtain post-test probabilities of every…
Spiegelhalter, David; Grigg, Olivia; Kinsman, Robin; Treasure, Tom
2003-02-01
To investigate the use of the risk-adjusted sequential probability ratio test in monitoring the cumulative occurrence of adverse clinical outcomes. Retrospective analysis of three longitudinal datasets: patients aged 65 years and over under the care of Harold Shipman between 1979 and 1997; patients under 1 year of age undergoing paediatric heart surgery at the Bristol Royal Infirmary between 1984 and 1995; and adult patients receiving cardiac surgery from a team of cardiac surgeons in London, UK. Outcome measures were annual and 30-day mortality rates. Using reasonable boundaries, the procedure could have indicated an 'alarm' in Bristol after publication of the 1991 Cardiac Surgical Register, and in 1985 or 1997 for Harold Shipman, depending on the data source and the comparator. The cardiac surgeons showed no significant deviation from expected performance. The risk-adjusted sequential probability ratio test is simple to implement, can be applied in a variety of contexts, and might have been useful for detecting specific instances of past divergent performance. The use of this and related techniques deserves further attention in the context of prospectively monitoring adverse clinical outcomes.
Statistical analyses of the relative risk.
Gart, J J
1979-01-01
Let P1 be the probability of a disease in one population and P2 be the probability of the disease in a second population. The ratio of these quantities, R = P1/P2, is termed the relative risk. We consider first the analysis of the relative risk from retrospective studies. The relation between the relative risk and the odds ratio (or cross-product ratio) is developed. The odds ratio can be considered a parameter of an exponential model possessing sufficient statistics. This permits the development of exact significance tests and confidence intervals in the conditional space. Unconditional tests and intervals are also considered briefly. The consequences of misclassification errors and of ignoring matching or stratifying are also considered. The various methods are extended to the combination of results over strata. Examples of case-control studies testing the association between HL-A frequencies and cancer illustrate the techniques. The parallel analyses of prospective studies are given. If P1 and P2 are small, with large sample sizes, the appropriate model is a Poisson distribution. This yields an exponential model with sufficient statistics. Exact conditional tests and confidence intervals can then be developed. Here we consider the case where two populations are compared adjusting for sex differences as well as for strata (or covariate) differences such as age. The methods are applied to two examples: (1) testing in the two sexes the ratio of relative risks of skin cancer in people living at different latitudes, and (2) testing over time the ratio of the relative risks of cancer in two cities, one of which fluoridated its drinking water and one of which did not. PMID:540589
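The exact conditional methods developed in the paper above are beyond a short sketch, but the unconditional analysis it mentions briefly can be illustrated. The following is a minimal sketch of the odds ratio from a 2x2 table with the standard Wald interval on the log scale; the cell layout and the 95% multiplier are illustrative assumptions, not taken from the paper:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio (a*d)/(b*c) from a 2x2 table laid out as
    [[a = exposed cases,   b = exposed controls],
     [c = unexposed cases, d = unexposed controls]],
    with an approximate 95% Wald confidence interval on the log scale."""
    or_hat = (a * d) / (b * c)
    # Standard error of log(OR) via the delta method
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_hat) - z * se)
    hi = math.exp(math.log(or_hat) + z * se)
    return or_hat, (lo, hi)
```

When the disease is rare, this odds ratio approximates the relative risk R = P1/P2, which is the link between retrospective (case-control) data and the relative risk that the abstract develops.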
Karahan Şen, Nazlı Pınar; Bekiş, Recep; Ceylan, Ali; Derebek, Erkan
2016-07-01
Myocardial perfusion scintigraphy (MPS) is a diagnostic test that is frequently used in the diagnosis of coronary heart disease (CHD). MPS is generally interpreted as ischemia present or absent; however, it has power in predicting the disease, similar to other diagnostic tests. In this study, we aimed to assist in directing high-risk patients to undergo coronary angiography (CA) primarily, by evaluating patients without a prior CHD history with pre-test and post-test probabilities. The study was designed as a retrospective study. Between January 2008 and July 2011, 139 patients with positive MPS results who were followed by CA within 6 months were evaluated from patient files. Patients' pre-test probabilities based on the Diamond and Forrester method and likelihood ratios obtained from the literature were used to calculate the patients' post-exercise and post-MPS probabilities. Patients were evaluated in risk groups as low, intermediate, and high, and an ROC curve analysis was performed for the post-MPS probabilities. Coronary artery stenosis (CAS) was determined in 59 patients (42.4%). A significant difference was determined between the risk groups according to CAS, both for the pre-test and post-test probabilities (p<0.001, p=0.024). The ROC analysis provided a cut-off value of 80.4% for post-MPS probability in predicting CAS, with 67.9% sensitivity and 77.8% specificity. When the post-MPS probability is ≥80% in patients who have reversible perfusion defects on MPS, we suggest interpreting the MPS as "high probability positive" to improve the selection of true-positive patients to undergo CA, and these patients should be primarily recommended for CA.
Mutual Information Item Selection in Adaptive Classification Testing
ERIC Educational Resources Information Center
Weissman, Alexander
2007-01-01
A general approach for item selection in adaptive multiple-category classification tests is provided. The approach uses mutual information (MI), a special case of the Kullback-Leibler distance, or relative entropy. MI works efficiently with the sequential probability ratio test and alleviates the difficulties encountered with using other local-…
Thouand, Gérald; Durand, Marie-José; Maul, Armand; Gancet, Christian; Blok, Han
2011-01-01
The European REACH Regulation (Registration, Evaluation, Authorization of CHemical substances) implies, among other things, the evaluation of the biodegradability of chemical substances produced by industry. A large set of test methods is available, including detailed information on the appropriate conditions for testing. However, the inoculum used for these tests constitutes a "black box." If biodegradation is achievable from the growth of a small group of specific microbial species with the substance as the only carbon source, the result of the test depends largely on the cell density of this group at "time zero." If these species are relatively rare in an inoculum that is normally used, the likelihood of inoculating a test with sufficient specific cells becomes a matter of probability. Normally this probability increases with total cell density and with the diversity of species in the inoculum. Furthermore, the history of the inoculum, e.g., possible pre-exposure to the test substance or similar substances, will have a significant influence on the probability. A high probability can be expected for substances that are widely used and regularly released into the environment, whereas a low probability can be expected for new xenobiotic substances that have not yet been released into the environment. Be that as it may, once the inoculum sample contains sufficient specific degraders, the biodegradation will follow a typical S-shaped growth curve, which depends on the specific growth rate under laboratory conditions, the so-called F/M ratio (ratio between food and biomass), and the possible formation of more or less toxic or recalcitrant metabolites. Normally regulators require the evaluation of the growth curve using a simple approach such as half-time. Unfortunately, probability and biodegradation half-time are very often confused.
As the half-time values reflect laboratory conditions which are quite different from environmental conditions (after a substance is released), these values should not be used to quantify and predict environmental behavior. The probability value could be of much greater benefit for predictions under realistic conditions. The main issue in the evaluation of probability is that the result is not based on a single inoculum from an environmental sample, but on a variety of samples. These samples can be representative of regional or local areas, climate regions, water types, and history, e.g., pristine or polluted. The above concept has provided us with a new approach, namely “Probabio.” With this approach, persistence is not only regarded as a simple intrinsic property of a substance, but also as the capability of various environmental samples to degrade a substance under realistic exposure conditions and F/M ratio. PMID:21863143
Krill, Michael K; Rosas, Samuel; Kwon, KiHyun; Dakkak, Andrew; Nwachukwu, Benedict U; McCormick, Frank
2018-02-01
The clinical examination of the shoulder joint is an undervalued diagnostic tool for evaluating acromioclavicular (AC) joint pathology. Applying evidence-based clinical tests enables providers to make an accurate diagnosis and to minimize costly imaging procedures and potential delays in care. The purpose of this study was to create a decision tree analysis enabling simple and accurate diagnosis of AC joint pathology. A systematic review of the Medline, Ovid, and Cochrane Review databases was performed to identify level one and level two diagnostic studies evaluating clinical tests for AC joint pathology. Individual test characteristics were combined in series and in parallel to improve sensitivities and specificities. A secondary analysis utilized subjective pre-test probabilities to create a clinical decision tree algorithm with post-test probabilities. The optimal special-test combination to screen for and confirm AC joint pathology combined Paxinos sign and O'Brien's Test, with a specificity of 95.8% when performed in series, whereas Paxinos sign and Hawkins-Kennedy Test demonstrated a sensitivity of 93.7% when performed in parallel. Paxinos sign and O'Brien's Test demonstrated the greatest positive likelihood ratio (2.71), whereas Paxinos sign and Hawkins-Kennedy Test reported the lowest negative likelihood ratio (0.35). No combination of special tests performed in series or in parallel has more than a small impact on post-test probabilities to screen for or confirm AC joint pathology. Paxinos sign and O'Brien's Test is the only special-test combination that has a small and sometimes important impact when used both in series and in parallel. Physical examination testing is not beneficial for diagnosis of AC joint pathology when the pretest probability is unequivocal. In these instances, it is of benefit to proceed with procedural tests to evaluate AC joint pathology. Ultrasound-guided corticosteroid injections are diagnostic and therapeutic.
An ultrasound-guided AC joint corticosteroid injection may be an appropriate new standard for treatment and surgical decision-making. Level of Evidence: II (systematic review).
A readers' guide to the interpretation of diagnostic test properties: clinical example of sepsis.
Fischer, Joachim E; Bachmann, Lucas M; Jaeschke, Roman
2003-07-01
One of the most challenging practical and daily problems in intensive care medicine is the interpretation of results from diagnostic tests. In neonatology and pediatric intensive care, the early diagnosis of potentially life-threatening infections is a particularly important issue. A plethora of tests have been suggested to improve diagnostic decision making in the clinical setting of infection, which is the clinical example used in this article. Several criteria that are critical to the evidence-based appraisal of published data are often not adhered to during the study or in reporting. To enhance critical appraisal of articles on diagnostic tests, we discuss various measures of test accuracy: sensitivity, specificity, receiver operating characteristic curves, positive and negative predictive values, likelihood ratios, pretest probability, posttest probability, and the diagnostic odds ratio. We suggest the following minimal requirements for reporting on the diagnostic accuracy of tests: a plot of the raw data, multilevel likelihood ratios, the area under the receiver operating characteristic curve, and the cutoff yielding the highest discriminative ability. For critical appraisal it is mandatory to report confidence intervals for each of these measures. Moreover, to allow comparison with the readers' patient populations, authors should provide data on study population characteristics, in particular on the spectrum of diseases and illness severity.
Avilés-Santa, M Larissa; Schneiderman, Neil; Savage, Peter J; Kaplan, Robert C; Teng, Yanping; Pérez, Cynthia M; Suárez, Erick L; Cai, Jianwen; Giachello, Aida L; Talavera, Gregory A; Cowie, Catherine C
2016-10-01
The aim of this study was to compare the ability of American Diabetes Association (ADA) diagnostic criteria to identify U.S. Hispanics/Latinos from diverse heritage groups with probable diabetes mellitus and to assess cardiovascular risk factor correlates of those criteria. Cross-sectional analysis of data from 15,507 adults from 6 Hispanic/Latino heritage groups, enrolled in the Hispanic Community Health Study/Study of Latinos. The prevalence of probable diabetes mellitus was estimated using individual or combinations of ADA-defined cut points. The sensitivity and specificity of these criteria at identifying diabetes mellitus from ADA-defined prediabetes and normoglycemia were evaluated. Prevalence ratios of hypertension, abnormal lipids, and elevated urinary albumin-creatinine ratio for unrecognized diabetes mellitus (versus prediabetes and normoglycemia) were calculated. Among Hispanics/Latinos (mean age, 43 years) with diabetes mellitus, 39.4% met laboratory test criteria for probable diabetes, and the prevalence varied by heritage group. Using the oral glucose tolerance test as the gold standard, the sensitivity of fasting plasma glucose (FPG) and hemoglobin A1c, alone or in combination, was low (18%, 23%, and 33%, respectively) at identifying probable diabetes mellitus. Individuals who met any criterion for probable diabetes mellitus had significantly higher (P<.05) prevalence of most cardiovascular risk factors than those with normoglycemia or prediabetes, and this association was not modified by Hispanic/Latino heritage group. FPG and hemoglobin A1c are not sensitive (but are highly specific) at detecting probable diabetes mellitus among Hispanics/Latinos, independent of heritage group. Assessing cardiovascular risk factors at diagnosis might prompt multitarget interventions and reduce health complications in this young population.
2hPG = 2-hour post-glucose load plasma glucose; ADA = American Diabetes Association; BMI = body mass index; CV = cardiovascular; FPG = fasting plasma glucose; HbA1c = hemoglobin A1c; HCHS/SOL = Hispanic Community Health Study/Study of Latinos; HDL-C = high-density-lipoprotein cholesterol; NGT = normal glucose tolerance; NHANES = National Health and Nutrition Examination Survey; OGTT = oral glucose tolerance test; TG = triglyceride; UACR = urine albumin-creatinine ratio.
Hara, Masayasu; Kanemitsu, Yukihide; Hirai, Takashi; Komori, Koji; Kato, Tomoyuki
2008-11-01
This study was designed to determine the efficacy of carcinoembryonic antigen (CEA) monitoring for screening patients with colorectal cancer by using the posttest probability of recurrence. For this study, 348 patients (preoperative serum CEA level elevated, CEA+, n = 119; or normal, CEA-, n = 229) who had undergone potentially curative surgery for colorectal cancer were enrolled. After five-year follow-up with measurements of serum CEA levels and imaging workup, posttest probabilities of recurrence were calculated. Recurrence was observed in 39 percent of CEA+ patients and in 30 percent of CEA- patients, and CEA levels were elevated in 33.3 percent of CEA+ patients and 17.5 percent of CEA- patients. Using the obtained sensitivity (68.4 percent, CEA+; 41 percent, CEA-), specificity (83 percent, CEA+; 91 percent, CEA-), and likelihood ratios (test positive: 4.0, CEA+; 4.4, CEA-; test negative: 0.38, CEA+; 0.66, CEA-), the posttest probability given the presence of CEA elevation was 72.2 percent in the CEA+ group and 65.5 percent in the CEA- group, and that given the absence of CEA elevation was 20 and 22.2 percent, respectively. Whereas postoperative CEA elevation indicates recurrence with high probability, a normal postoperative CEA is not useful for excluding the probability of recurrence.
EXSPRT: An Expert Systems Approach to Computer-Based Adaptive Testing.
ERIC Educational Resources Information Center
Frick, Theodore W.; And Others
Expert systems can be used to aid decision making. A computerized adaptive test (CAT) is one kind of expert system, although it is not commonly recognized as such. A new approach, termed EXSPRT, was devised that combines expert systems reasoning and sequential probability ratio test stopping rules. EXSPRT-R uses random selection of test items,…
Optimum detection of tones transmitted by a spacecraft
NASA Technical Reports Server (NTRS)
Simon, M. K.; Shihabi, M. M.; Moon, T.
1995-01-01
The performance of a scheme proposed for automated routine monitoring of deep-space missions is presented. The scheme uses four different tones (sinusoids) transmitted from the spacecraft (S/C) to a ground station with the positive identification of each of them used to indicate different states of the S/C. Performance is measured in terms of detection probability versus false alarm probability with detection signal-to-noise ratio as a parameter. The cases where the phase of the received tone is unknown and where both the phase and frequency of the received tone are unknown are treated separately. The decision rules proposed for detecting the tones are formulated from average-likelihood ratio and maximum-likelihood ratio tests, the former resulting in optimum receiver structures.
Generalized likelihood ratios for quantitative diagnostic test scores.
Tandberg, D; Deely, J J; O'Malley, A J
1997-11-01
The reduction of quantitative diagnostic test scores to the dichotomous case is a wasteful and unnecessary simplification in the era of high-speed computing. Physicians could make better use of the information embedded in quantitative test results if modern generalized curve estimation techniques were applied to the likelihood functions of Bayes' theorem. Hand calculations could be completely avoided and computed graphical summaries provided instead. Graphs showing posttest probability of disease as a function of pretest probability with confidence intervals (POD plots) would enhance acceptance of these techniques if they were immediately available at the computer terminal when test results were retrieved. Such constructs would also provide immediate feedback to physicians when a valueless test had been ordered.
Likelihood Ratios for Glaucoma Diagnosis Using Spectral Domain Optical Coherence Tomography
Lisboa, Renato; Mansouri, Kaweh; Zangwill, Linda M.; Weinreb, Robert N.; Medeiros, Felipe A.
2014-01-01
Purpose: To present a methodology for calculating likelihood ratios for glaucoma diagnosis for continuous retinal nerve fiber layer (RNFL) thickness measurements from spectral-domain optical coherence tomography (spectral-domain OCT). Design: Observational cohort study. Methods: 262 eyes of 187 patients with glaucoma and 190 eyes of 100 control subjects were included in the study. Subjects were recruited from the Diagnostic Innovations Glaucoma Study. Eyes with preperimetric and perimetric glaucomatous damage were included in the glaucoma group. The control group was composed of healthy eyes with normal visual fields from subjects recruited from the general population. All eyes underwent RNFL imaging with Spectralis spectral-domain OCT. Likelihood ratios for glaucoma diagnosis were estimated for specific global RNFL thickness measurements using a methodology based on estimating the tangents to the receiver operating characteristic (ROC) curve. Results: Likelihood ratios could be determined for continuous values of average RNFL thickness. Average RNFL thickness values lower than 86 μm were associated with positive LRs, i.e., LRs greater than 1, whereas RNFL thickness values higher than 86 μm were associated with negative LRs, i.e., LRs smaller than 1. A modified Fagan nomogram was provided to assist calculation of post-test probability of disease from the calculated likelihood ratios and pretest probability of disease. Conclusion: The methodology allowed calculation of likelihood ratios for specific RNFL thickness values. By avoiding arbitrary categorization of test results, it potentially allows for an improved integration of test results into diagnostic clinical decision-making. PMID:23972303
Akobeng, Anthony K
2018-04-29
The faecal calprotectin (FC) test is increasingly being used in clinical practice to help select children with gastrointestinal symptoms who might have inflammatory bowel disease (IBD) and benefit from endoscopies. We provide an overview of the advantages and limitations of the FC test. PubMed was searched for meta-analyses that had investigated the diagnostic accuracy of the FC test, and the pooled sensitivity and specificity for distinguishing IBD from non-IBD patients were used to calculate likelihood ratios (LRs). These were applied to practical examples to explain how easily clinicians can use the results to modify pre-test probabilities of IBD and generate post-test probabilities for IBD. The positive LR and negative LR of the FC test were 2.8 and 0.015, respectively. The usefulness of the FC test depended on the pre-test probability of IBD. When the pre-test probability of IBD was low, a positive FC test did not necessarily indicate IBD. However, because of the very small negative LR, a negative FC result virtually ruled out IBD in most cases. The FC test should not be used indiscriminately in children with gastrointestinal symptoms but should be targeted at those who are likely to have IBD.
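Using the pooled likelihood ratios reported above (positive LR 2.8, negative LR 0.015), the odds form of Bayes' theorem shows how an FC result shifts the probability of IBD. A minimal sketch; the 10% pre-test probability below is an illustrative assumption, not a figure from the study:

```python
def posttest_probability(pretest, lr):
    """Convert a pre-test probability and a likelihood ratio into a
    post-test probability via odds: post_odds = pre_odds * LR."""
    pre_odds = pretest / (1.0 - pretest)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Assumed 10% pre-test probability of IBD:
p_pos = posttest_probability(0.10, 2.8)    # positive FC test -> ~0.24
p_neg = posttest_probability(0.10, 0.015)  # negative FC test -> ~0.002
```

At this low pre-test probability, a positive result raises the probability of IBD only to about 24%, while a negative result drops it below 0.2%, which is the abstract's point that a positive FC does not confirm IBD but a negative FC virtually rules it out.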
Interpreting DNA mixtures with the presence of relatives.
Hu, Yue-Qing; Fung, Wing K
2003-02-01
The assessment of DNA mixtures with the presence of relatives is discussed in this paper. The kinship coefficients are incorporated into the evaluation of the likelihood ratio and we first derive a unified expression of joint genotypic probabilities. A general formula and seven types of detailed expressions for calculating likelihood ratios are then developed for the case that a relative of the tested suspect is an unknown contributor to the mixed stain. These results can also be applied to the case of a non-tested suspect with one tested relative. Moreover, the formula for calculating the likelihood ratio when there are two related unknown contributors is given. Data for a real situation are given for illustration, and the effect of kinship on the likelihood ratio is shown therein. Some interesting findings are obtained.
ERIC Educational Resources Information Center
Lau, C. Allen; Wang, Tianyou
The purposes of this study were to: (1) extend the sequential probability ratio testing (SPRT) procedure to polytomous item response theory (IRT) models in computerized classification testing (CCT); (2) compare polytomous items with dichotomous items using the SPRT procedure for their accuracy and efficiency; (3) study a direct approach in…
Some tests on small-scale rectangular throat ejector. [thrust augmentation for V/STOL aircraft
NASA Technical Reports Server (NTRS)
Dean, W. N., Jr.; Franke, M. E.
1979-01-01
A small scale rectangular throat ejector with plane slot nozzles and a fixed throat area was tested to determine the effects of diffuser sidewall length, diffuser area ratio, and sidewall nozzle position on thrust and mass augmentation. The thrust augmentation ratio varied from approximately 0.9 to 1.1. Although the ejector did not have good thrust augmentation performance, the effects of the parameters studied are believed to indicate probable trends in thrust augmenting ejectors.
Haber, M; An, Q; Foppa, I M; Shay, D K; Ferdinands, J M; Orenstein, W A
2015-05-01
As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies of patients seeking medical care for acute respiratory illnesses (ARIs) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing the bias and precision of estimates of VE against symptomatic influenza from two commonly used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI, VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care for ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that, in general, estimates from the test-negative design have smaller bias than estimates from the traditional case-control design, as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs.
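The unbiasedness claim for the test-negative design can be illustrated with a toy simulation (an illustrative sketch, not the authors' probability model; all parameter values are assumed for demonstration):

```python
import random

def simulate_tnd(n=500_000, ve=0.6, p_flu=0.01, p_other=0.05,
                 p_vax=0.5, seed=1):
    """Toy test-negative design: among people seeking care for ARI,
    influenza-positives are cases and influenza-negatives are controls.
    Vaccination reduces influenza risk by `ve` and, per the paper's key
    assumption, leaves non-influenza ARI risk unchanged."""
    rng = random.Random(seed)
    a = b = c = d = 0   # a/c: vaccinated/unvaccinated cases; b/d: controls
    for _ in range(n):
        vaccinated = rng.random() < p_vax
        risk_flu = p_flu * (1 - ve) if vaccinated else p_flu
        if rng.random() < risk_flu:          # influenza ARI -> case
            if vaccinated: a += 1
            else:          c += 1
        elif rng.random() < p_other:         # non-influenza ARI -> control
            if vaccinated: b += 1
            else:          d += 1
    return 1 - (a * d) / (b * c)             # VE estimate = 1 - odds ratio
```

With equal non-influenza ARI risk in both groups, the estimate lands close to the true VE of 0.6, consistent with the unbiasedness result.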
Change-in-ratio estimators for populations with more than two subclasses
Udevitz, Mark S.; Pollock, Kenneth H.
1991-01-01
Change-in-ratio methods have been developed to estimate the size of populations with two or three population subclasses. Most of these methods require the often unreasonable assumption of equal sampling probabilities for individuals in all subclasses. This paper presents new models based on the weaker assumption that ratios of sampling probabilities are constant over time for populations with three or more subclasses. Estimation under these models requires that a value be assumed for one of these ratios when there are two samples. Explicit expressions are given for the maximum likelihood estimators under models for two samples with three or more subclasses and for three samples with two subclasses. A numerical method using readily available statistical software is described for obtaining the estimators and their standard errors under all of the models. Likelihood ratio tests that can be used in model selection are discussed. Emphasis is on the two-sample, three-subclass models for which Monte-Carlo simulation results and an illustrative example are presented.
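For context, the classic two-sample, two-subclass change-in-ratio estimator (under the equal-sampling-probability assumption that this paper relaxes) can be sketched as follows; the function name and example numbers are illustrative:

```python
def cir_two_subclass(p1, p2, removed_x, removed_total):
    """Classic two-sample, two-subclass change-in-ratio estimator.
    p1, p2: observed proportions of subclass x before and after a known
    removal of `removed_x` subclass-x individuals out of `removed_total`.
    Assumes equal sampling probabilities across subclasses."""
    if p1 == p2:
        raise ValueError("subclass proportions must change between samples")
    n1 = (removed_x - p2 * removed_total) / (p1 - p2)  # initial population size
    x1 = p1 * n1                                       # initial subclass-x count
    return n1, x1

# Example: a population of 1000 with 40% subclass x; removing 200
# x-individuals shifts the observed proportion to 0.25, and the
# estimator recovers the initial population size.
n1_hat, x1_hat = cir_two_subclass(0.4, 0.25, 200, 200)
```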
Garway-Heath, David F
2008-01-01
This chapter reviews the evidence for the clinical application of vision function tests and imaging devices to identify early glaucoma, and sets out a scheme for the appropriate use and interpretation of test results in screening/case-finding and clinic settings. In early glaucoma, signs may be equivocal and the diagnosis is often uncertain. Either structural damage or vision function loss may be the first sign of glaucoma; neither one is consistently apparent before the other. Quantitative tests of visual function and measurements of optic-nerve head and retinal nerve fiber layer anatomy are useful to either raise or lower the probability that glaucoma is present. The posttest probability for glaucoma may be calculated from the pretest probability and the likelihood ratio of the diagnostic criterion, and the output of several diagnostic devices may be combined to achieve a final probability. However, clinicians need to understand how these diagnostic devices make their measurements, so that the validity of each test result can be adequately assessed. Only then should the result be used, together with the patient history and clinical examination, to derive a diagnosis.
The meaning of diagnostic test results: a spreadsheet for swift data analysis.
Maceneaney, P M; Malone, D E
2000-03-01
To design a spreadsheet program to: (a) rapidly analyse diagnostic test result data produced in local research or reported in the literature; (b) correct reported predictive values for disease prevalence in any population; (c) estimate the post-test probability of disease in individual patients. Microsoft Excel™ was used. Section A: a contingency (2 x 2) table was incorporated into the spreadsheet. Formulae for standard calculations [sample size, disease prevalence, sensitivity and specificity with 95% confidence intervals, predictive values and likelihood ratios (LRs)] were linked to this table. The results change automatically when the data in the true or false negative and positive cells are changed. Section B: this estimates predictive values in any population, compensating for altered disease prevalence. Sections C-F: Bayes' theorem was incorporated to generate individual post-test probabilities. The spreadsheet generates 95% confidence intervals, LRs, and a table and graph of conditional probabilities once the sensitivity and specificity of the test are entered. The latter shows the expected post-test probability of disease for any pre-test probability when a test of known sensitivity and specificity is positive or negative. This spreadsheet can be used on desktop and palmtop computers. The MS Excel™ version can be downloaded via the Internet from the URL ftp://radiography.com/pub/Rad-data99.xls. A spreadsheet is useful for contingency table data analysis and assessment of the clinical meaning of diagnostic test results. Copyright 2000 The Royal College of Radiologists.
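The spreadsheet's core calculations follow directly from the standard definitions; a minimal sketch of the same formulas (an illustration, not the actual workbook):

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Contingency-table summaries: sensitivity, specificity, and the
    positive and negative likelihood ratios derived from them."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)          # LR+ = sens / (1 - spec)
    lr_neg = (1 - sens) / spec          # LR- = (1 - sens) / spec
    return sens, spec, lr_pos, lr_neg

def posttest_probability(pretest, lr):
    """Bayes' theorem in odds form: posttest odds = pretest odds x LR."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)
```

For example, a test with sensitivity and specificity both 0.9 gives LR+ = 9, and a positive result moves a 50% pre-test probability to a 90% post-test probability.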
Anusavice, Kenneth J; Jadaan, Osama M; Esquivel-Upshaw, Josephine F
2013-11-01
Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. The aims of this study were to test the hypotheses that: (1) an increase in the core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6 mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2) and ceramic veneer (Empress 2 Veneer Ceramic). Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8 mm/0.8 mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4 mm/1.2 mm). CARES/Life results support the proposed crown design and load orientation hypotheses. The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represents an effective approach for optimizing fixed dental prosthesis designs produced from dental ceramics and for predicting time-dependent fracture probabilities of ceramic-based fixed dental prostheses, which can minimize the risk of clinical failures. Copyright © 2013 Academy of Dental Materials. All rights reserved.
Anusavice, Kenneth J.; Jadaan, Osama M.; Esquivel–Upshaw, Josephine
2013-01-01
Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. Objective: The aims of this study were to test the hypotheses that: (1) an increase in the core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6 mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Materials and methods: Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2) and ceramic veneer (Empress 2 Veneer Ceramic). Results: Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8 mm/0.8 mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4 mm/1.2 mm). Conclusion: CARES/Life results support the proposed crown design and load orientation hypotheses. Significance: The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represents an effective approach for optimizing fixed dental prosthesis designs produced from dental ceramics and for predicting time-dependent fracture probabilities of ceramic-based fixed dental prostheses, which can minimize the risk of clinical failures. PMID:24060349
Li, Min; Du, Xiang-Min; Jin, Zhi-Tao; Peng, Zhao-Hui; Ding, Juan; Li, Li
2014-01-01
To comprehensively investigate the diagnostic performance of coronary artery angiography with 64-MDCT and post-64-MDCT scanners. PubMed was searched for all published studies that evaluated coronary arteries with 64-MDCT and post-64-MDCT. The clinical diagnostic role was evaluated by applying the likelihood ratios (LRs) to calculate the post-test probability based on Bayes' theorem. 91 studies that met our inclusion criteria were ultimately included in the analysis. The pooled positive and negative LRs at the patient level were 8.91 (95% CI, 7.53, 10.54) and 0.02 (CI, 0.01, 0.03), respectively. For studies that did not claim that non-evaluable segments were included, the pooled positive and negative LRs were 11.16 (CI, 8.90, 14.00) and 0.01 (CI, 0.01, 0.03), respectively. For studies including uninterpretable results, the diagnostic performance decreased, with a pooled positive LR of 7.40 (CI, 6.00, 9.13) and negative LR of 0.02 (CI, 0.01, 0.03). The areas under the summary ROC curve were 0.98 (CI, 0.97 to 0.99) for 64-MDCT and 0.96 (CI, 0.94 to 0.98) for post-64-MDCT, respectively. For studies explicitly stating that non-assessable segments were included in the analysis, patients with a pre-test probability of coronary artery disease (CAD) below 73% obtained a post-test probability of disease absence >95% after a negative result, while a positive result yielded a post-test probability of CAD <95%. Conversely, when the pre-test probability of CAD was >73%, the diagnostic role was reversed: a positive result yielded a post-test probability of CAD >95%, while a negative result left the post-test probability of disease absence <95%. The diagnostic performance of post-64-MDCT does not increase compared with 64-MDCT. Overall, CTA is a test of exclusion for patients with a pre-test probability of CAD <73%, while for patients with a pre-test probability of CAD >73%, CTA is a test used to confirm the presence of CAD.
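The ~73% crossover in pre-test probability can be sanity-checked by pushing the pooled patient-level LRs through Bayes' theorem in odds form (a rough numeric check, not part of the original analysis):

```python
def posttest(pretest, lr):
    """Post-test probability from a pre-test probability and a likelihood
    ratio, via the odds form of Bayes' theorem."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# Pooled patient-level likelihood ratios reported in the meta-analysis.
LR_POS, LR_NEG = 8.91, 0.02

# Just above the crossover, a positive CTA rules CAD in (post-test > 0.95);
# just below it, a negative CTA rules CAD out (post-test < 0.05).
rule_in = posttest(0.75, LR_POS)
rule_out = posttest(0.70, LR_NEG)
```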
Is intestinal biopsy always needed for diagnosis of celiac disease?
Scoglio, Riccardo; Di Pasquale, Giuseppe; Pagano, Giuseppe; Lucanto, Maria Cristina; Magazzù, Giuseppe; Sferlazzas, Concetta
2003-06-01
Intestinal biopsy is required for a diagnosis of celiac disease (CD). The aim of this study was to assess the diagnostic accuracy of transglutaminase antibodies (TGA) in comparison and in association with that of antiendomysial antibodies (AEA), calculating the post-test odds of having the disease, to verify whether some patients might avoid undergoing intestinal biopsy for a diagnosis of CD. A total of 181 consecutive patients (131 aged <18 yr) were referred to our celiac clinic by primary care physicians for suspected CD. Overall diagnostic accuracy, negative predictive value, and likelihood ratio (LR) were calculated both for each serological test and for serial testing (TGA followed by AEA, taking the post-test probability of TGA as the pretest probability of AEA). Both serological determination and histological evaluation were performed blindly. Histology of duodenal mucosa was considered the gold standard. The overall accuracy of TGA and of AEA was 92.8% (89.1-96.6) and 93.4% (89.7-97.0), respectively. The negative predictive values of TGA and AEA were 97.2% (91.9-102.6) and 87.2% (77.7-96.8), respectively. Positive likelihood ratios for TGA and AEA were 3.89 (3.40-4.38) and 7.48 (6.73-8.23), respectively. Serial testing, in groups of patients with an estimated prevalence of CD higher than 75%, such as those with classic symptoms of CD, would provide a post-test probability of more than 99%. Our results suggest that serial testing with TGA and AEA might allow, in some cases, the avoidance of intestinal biopsy to confirm the diagnosis of CD.
Diagnostic testing for coagulopathies in patients with ischemic stroke.
Bushnell, C D; Goldstein, L B
2000-12-01
Hypercoagulable states are a recognized, albeit uncommon, etiology of ischemic stroke. It is unclear how often the results of specialized coagulation tests affect management. Using data compiled from a systematic review of available studies, we employed quantitative methodology to assess the diagnostic yield of coagulation tests for identification of coagulopathies in ischemic stroke patients. We performed a MEDLINE search to identify controlled studies published during 1966-1999 that reported the prevalence of deficiencies of protein C, protein S, antithrombin III, plasminogen, activated protein C resistance (APCR)/factor V Leiden mutation (FVL), anticardiolipin antibodies (ACL), or lupus anticoagulant (LA) in patients with ischemic stroke. The cumulative prevalence rates (pretest probabilities) and positive likelihood ratios for all studies, and for those including only patients aged ≤50 years, were used to calculate posttest probabilities for each coagulopathy, reflecting diagnostic yield. The cumulative pretest probabilities of coagulation defects in ischemic stroke patients are as follows: LA, 3% (8% for those aged ≤50 years); ACL, 17% (21% for those aged ≤50 years); APCR/FVL, 7% (11% for those aged ≤50 years); and prothrombin mutation, 4.5% (5.7% for those aged ≤50 years). The posttest probabilities of ACL, LA, and APCR increased with increasing pretest probability, the specificity of the tests, and features of the patients' history and clinical presentation. The pretest probabilities of coagulation defects in ischemic stroke patients are low. The diagnostic yield of coagulation tests may be increased by using tests with the highest specificities and by targeting patients with clinical or historical features that increase pretest probability. Consideration of these data might lead to more rational ordering of tests and an associated cost savings.
1984-06-01
SEQUENTIAL TESTING session (Bldg. A, Room C), including the contributed paper "A Truncated Sequential Probability Ratio Test." [OCR-garbled program fragment; the remainder of the scanned report form is illegible. Report keywords: operational testing, reliability, bootstrap methods, missing data, sequential testing, carcinogenesis studies.]
NASA Astrophysics Data System (ADS)
Barkley, Brett E.
A cooperative detection and tracking algorithm for multiple targets constrained to a road network is presented for fixed-wing Unmanned Air Vehicles (UAVs) with a finite field of view. Road networks of interest are formed into graphs with nodes that carry a target likelihood ratio (before detection) or a position probability (after detection). A Bayesian likelihood ratio tracker recursively assimilates target observations until the cumulative observations at a particular location pass a detection criterion. At that point, a target is considered detected and a position probability is generated for the target on the graph. Data association is subsequently used to route future measurements either to update the likelihood ratio tracker (for undetected targets) or to update a position probability (for previously detected targets). Three strategies for UAV motion planning are proposed to balance searching for new targets against tracking known targets across a variety of scenarios. Performance was tested in Monte Carlo simulations for a variety of mission parameters, including tracking on road networks of varying complexity and using UAVs at various altitudes.
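The recursive likelihood-ratio accumulation at a single node can be sketched as follows (an illustrative simplification with assumed detection probability `p_d` and false-alarm probability `p_fa`; the thesis's graph bookkeeping and data association are omitted):

```python
import math

def update_llr(llr, hit, p_d=0.9, p_fa=0.1):
    """One recursive Bayesian log-likelihood-ratio update for a road-network
    node, given one UAV sensor look (hit = sensor reported a target)."""
    if hit:
        return llr + math.log(p_d / p_fa)
    return llr + math.log((1 - p_d) / (1 - p_fa))

def detected(llr, threshold=math.log(100.0)):
    """Detection criterion: cumulative evidence exceeds a 100:1 LR threshold."""
    return llr >= threshold

# Three consecutive hits at one node accumulate 3*log(9) of evidence,
# which crosses the 100:1 threshold and triggers a detection.
llr = 0.0
for _ in range(3):
    llr = update_llr(llr, hit=True)
```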
Predicting non-square 2D dice probabilities
NASA Astrophysics Data System (ADS)
Pender, G. A. T.; Uhrin, M.
2014-07-01
The prediction of the final state probabilities of a general cuboid randomly thrown onto a surface is a problem that naturally arises in the minds of men and women familiar with regular cubic dice and the basic concepts of probability. Indeed, it was considered by Newton in 1664 (Newton 1967 The Mathematical Papers of Isaac Newton vol I (Cambridge: Cambridge University Press) pp 60-1). In this paper we make progress on the 2D problem (which can be realized in 3D by considering a long cuboid, or alternatively a rectangular cross-sectioned dreidel). For the two-dimensional case we suggest that the ratio of the probabilities of landing on each of the two sides is given by $\frac{\sqrt{k^{2}+l^{2}}-k}{\sqrt{k^{2}+l^{2}}-l}\cdot\frac{\arctan(l/k)}{\arctan(k/l)}$, where k and l are the lengths of the two sides. We test this theory both experimentally and computationally, and find good agreement between our theory, experimental and computational results. Our theory is known, from its derivation, to be an approximation for particularly bouncy or 'grippy' surfaces where the die rolls through many revolutions before settling. On real surfaces we would expect (and we observe) that the true probability ratio for a 2D die is somewhat closer to unity than predicted by our theory. This problem may also have wider relevance in the testing of physics engines.
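The proposed ratio is straightforward to evaluate numerically (a sketch; the function name is illustrative):

```python
import math

def side_probability_ratio(k, l):
    """Predicted ratio of the landing probabilities of the two sides of a
    2D rectangular die with side lengths k and l, per the paper's
    bouncy-surface approximation."""
    h = math.hypot(k, l)                     # sqrt(k^2 + l^2)
    return (h - k) / (h - l) * math.atan2(l, k) / math.atan2(k, l)

# A square die (k == l) must land on either side with equal probability,
# so the ratio is exactly 1; swapping k and l inverts the ratio.
```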
Descatha, A; Dale, A-M; Franzblau, A; Coomes, J; Evanoff, B
2010-02-01
We evaluated the utility of physical examination manoeuvres in the prediction of carpal tunnel syndrome (CTS) in a population-based research study. We studied a cohort of 1108 newly employed workers in several industries. Each worker completed a symptom questionnaire, a structured physical examination and nerve conduction study. For each hand, our CTS case definition required both median nerve conduction abnormality and symptoms classified as "classic" or "probable" on a hand diagram. We calculated the positive predictive values and likelihood ratios for physical examination manoeuvres in subjects with and without symptoms. The prevalence of CTS in our cohort was 1.2% for the right hand and 1.0% for the left hand. The likelihood ratios of a positive test for physical provocative tests ranged from 2.0 to 3.3, and those of a negative test from 0.3 to 0.9. The post-test probability of positive testing was <50% for all strategies tested. Our study found that physical examination, alone or in combination with symptoms, was not predictive of CTS in a working population. We suggest using specific symptoms as a first-level screening tool, and nerve conduction study as a confirmatory test, as a case definition strategy in research settings.
Assessment of different models for computing the probability of a clear line of sight
NASA Astrophysics Data System (ADS)
Bojin, Sorin; Paulescu, Marius; Badescu, Viorel
2017-12-01
This paper is focused on modeling the morphological properties of cloud fields in terms of the probability of a clear line of sight (PCLOS). PCLOS is defined as the probability that a line of sight between an observer and a given point of the celestial vault passes freely without intersecting a cloud. A variety of PCLOS models assuming hemispheric, semi-ellipsoidal, and ellipsoidal cloud shapes are tested. The effective parameters (cloud aspect ratio and absolute cloud fraction) are extracted from high-resolution series of sunshine number measurements. The performance of the PCLOS models is evaluated from the perspective of their ability to retrieve the point cloudiness. The advantages and disadvantages of the tested models are discussed, aiming at a simplified parameterization of the PCLOS models.
ERIC Educational Resources Information Center
Instructional Objectives Exchange, Los Angeles, CA.
To help classroom teachers in grades K-9 construct mathematics tests, fifteen general objectives, corresponding sub-objectives, sample test items, and answers are presented. In general, sub-objectives are arranged in increasing order of difficulty. The objectives were written to comprehensively cover three categories. The first, graphs, covers the…
Prediction of hamstring injury in professional soccer players by isokinetic measurements
Dauty, Marc; Menu, Pierre; Fouasson-Chailloux, Alban; Ferréol, Sophie; Dubois, Charles
2016-01-01
Summary. Objectives: Previous studies investigating the ability of isokinetic strength ratios to predict hamstring injuries in soccer players have reported conflicting results. Hypothesis: To determine if isokinetic ratios are able to predict hamstring injury occurring during the season in professional soccer players. Study design: Case-control study; Level of evidence: 3. Methods: From 2001 to 2011, 350 isokinetic tests were performed in 136 professional soccer players at the beginning of the soccer season. Fifty-seven players suffered hamstring injury during the season that followed the isokinetic tests. These players were compared with the 79 uninjured players. The bilateral concentric ratio (hamstring-to-hamstring), ipsilateral concentric ratio (hamstring-to-quadriceps), and mixed ratio (eccentric/concentric hamstring-to-quadriceps) were studied. The predictive ability of each ratio was established based on the likelihood ratio and post-test probability. Results: The mixed ratio (30 eccentric/240 concentric hamstring-to-quadriceps) <0.8, ipsilateral ratio (180 concentric hamstring-to-quadriceps) <0.47, and bilateral ratio (60 concentric hamstring-to-hamstring) <0.85 were the most predictive of hamstring injury. The ipsilateral ratio <0.47 allowed prediction of the severity of the hamstring injury, and was also influenced by the length of time since administration of the isokinetic tests. Conclusion: Isokinetic ratios are useful for predicting the likelihood of hamstring injury in professional soccer players during the competitive season. PMID:27331039
Rational clinical evaluation of suspected acute coronary syndromes: The value of more information.
Hancock, David G; Chuang, Ming-Yu Anthony; Bystrom, Rebecca; Halabi, Amera; Jones, Rachel; Horsfall, Matthew; Cullen, Louise; Parsonage, William A; Chew, Derek P
2017-12-01
Many meta-analyses have provided synthesised likelihood ratio data to aid clinical decision-making. However, much less has been published on how to safely combine clinical information in practice. We aimed to explore the benefits and risks of pooling clinical information during the ED assessment of suspected acute coronary syndrome. Clinical information on 1776 patients was collected within a randomised trial conducted across five South Australian EDs between July 2011 and March 2013. Bayes theorem was used to calculate patient-specific post-test probabilities using age- and gender-specific pre-test probabilities and likelihood ratios corresponding to the presence or absence of 18 clinical factors. Model performance was assessed as the presence of adverse cardiac outcomes among patients theoretically discharged at a post-test probability less than 1%. Bayes theorem-based models containing high-sensitivity troponin T (hs-troponin) outperformed models excluding hs-troponin, as well as models utilising TIMI and GRACE scores. In models containing hs-troponin, a plateau in improving discharge safety was observed after the inclusion of four clinical factors. Models with fewer clinical factors better approximated the true event rate, tended to be safer and resulted in a smaller standard deviation in post-test probability estimates. We showed that there is a definable point where additional information becomes uninformative and may actually lead to less certainty. This evidence supports the concept that clinical decision-making in the assessment of suspected acute coronary syndrome should be focused on obtaining the least amount of information that provides the highest benefit for informing the decisions of admission or discharge. © 2017 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
2001-01-01
A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or data source) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the data source includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals that are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second, and third methods to accumulate sensor signals and determine the operating state of the system.
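The single-sensor SPRT branch can be illustrated with a standard Wald test for a mean shift in Gaussian sensor residuals (a generic sketch under assumed H0/H1 means and error tolerances, not the patented procedure):

```python
import math

def sprt_step(llr, residual, mu0=0.0, mu1=1.0, sigma=1.0):
    """One SPRT update on a sensor residual: H0 (normal operation, mean mu0)
    vs H1 (degraded operation, mean mu1), Gaussian noise with known sigma."""
    llr += ((residual - mu0) ** 2 - (residual - mu1) ** 2) / (2 * sigma ** 2)
    return llr

def sprt_decide(llr, alpha=0.01, beta=0.01):
    """Wald thresholds for false-alarm rate alpha and missed-alarm rate beta:
    alarm above log((1-beta)/alpha), declare normal below log(beta/(1-alpha))."""
    if llr >= math.log((1 - beta) / alpha):
        return "alarm"
    if llr <= math.log(beta / (1 - alpha)):
        return "normal"
    return "continue"

# Ten consecutive unit residuals each add 0.5 to the log-likelihood ratio,
# accumulating llr = 5.0 and crossing Wald's upper threshold (~4.6).
llr = 0.0
for _ in range(10):
    llr = sprt_step(llr, 1.0)
```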
Ultrasensitive surveillance of sensors and processes
Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.
1999-01-01
A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or data source) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the data source includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals that are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second, and third methods to accumulate sensor signals and determine the operating state of the system.
NASA Astrophysics Data System (ADS)
Nitz, D. E.; Curry, J. J.; Buuck, M.; DeMann, A.; Mitchell, N.; Shull, W.
2018-02-01
We report radiative transition probabilities for 5029 emission lines of neutral cerium within the wavelength range 417-1110 nm. Transition probabilities for only 4% of these lines have been previously measured. These results are obtained from a Boltzmann analysis of two high resolution Fourier transform emission spectra used in previous studies of cerium, obtained from the digital archives of the National Solar Observatory at Kitt Peak. The set of transition probabilities used for the Boltzmann analysis are those published by Lawler et al (2010 J. Phys. B: At. Mol. Opt. Phys. 43 085701). Comparisons of branching ratios and transition probabilities for lines common to the two spectra provide important self-consistency checks and test for the presence of self-absorption effects. Estimated 1σ uncertainties for our transition probability results range from 10% to 18%.
Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry
2017-05-01
Section 303(d) of the United States' Clean Water Act stipulates that states must identify impaired water bodies, for which total maximum daily loads (TMDLs) of pollution inputs are then developed. Decision-making procedures about how to list, or delist, water bodies as impaired per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to achieve Type I and Type II error rates comparable to those of the current fixed-sample binomial test. Policymakers might consider efficient alternatives such as the SPRT to the current procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
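A minimal Bernoulli SPRT of the kind being compared against the fixed-sample exact binomial test might look like this (the hypothesized exceedance rates `p0`, `p1` and error tolerances `alpha`, `beta` are illustrative assumptions, not California's actual listing parameters):

```python
import math

def bernoulli_sprt(samples, p0=0.05, p1=0.25, alpha=0.05, beta=0.20):
    """Wald SPRT on a stream of binary water-quality samples:
    H0 (unimpaired, exceedance rate p0) vs H1 (impaired, rate p1).
    Returns the decision and the number of samples used."""
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0
    for n, exceeds in enumerate(samples, start=1):
        if exceeds:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "impaired", n
        if llr <= lower:
            return "unimpaired", n
    return "undecided", len(samples)
```

Because evidence is evaluated as each sample arrives, extreme streams terminate early: a run of exceedances triggers a listing decision after only a couple of samples, while a clean run is dismissed after a handful.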
Diagnosing pulmonary embolisms: the clinician's point of view.
Carrillo Alcaraz, A; Martínez, A López; Solano, F J Sotos
Pulmonary thromboembolism is common and potentially severe. To ensure the correct approach to the diagnostic workup of pulmonary thromboembolism, it is essential to know the basic concepts governing the use of the different tests available. The diagnostic approach to pulmonary thromboembolism is an example of the application of the conditional probabilities of Bayes' theorem in daily practice. To interpret the available diagnostic tests correctly, it is necessary to analyze different concepts that are fundamental for decision making. Thus, it is necessary to know what the likelihood ratios, 95% confidence intervals, and decision thresholds mean. Whether to determine the D-dimer concentration or to do CT angiography or other imaging tests depends on their capacity to modify the pretest probability of having the disease to a posttest probability that is higher or lower than the thresholds for action. This review aims to clarify the diagnostic sequence of thromboembolic pulmonary disease, analyzing the main diagnostic tools (clinical examination, laboratory tests, and imaging tests), placing special emphasis on the principles that govern evidence-based medicine. Copyright © 2016 SERAM. Publicado por Elsevier España, S.L.U. All rights reserved.
Influence of pore structure on compressive strength of cement mortar.
Zhao, Haitao; Xiao, Qi; Huang, Donghui; Zhang, Shiping
2014-01-01
This paper describes an experimental investigation into the pore structure of cement mortar using a mercury porosimeter. Ordinary Portland cement, manufactured sand, and natural sand were used. The porosity of the manufactured sand mortar is higher than that of natural sand mortar at the same mix proportion; however, the probable pore size and threshold radius of manufactured sand mortar are finer. Both the probable pore size and threshold radius increased with increasing water-to-cement ratio and sand-to-cement ratio. In addition, existing models of the pore size distribution of cement-based materials are reviewed and compared with the test results. Finally, an extended Bhattacharjee model was built to examine the relationship between compressive strength and pore structure.
Influence of Pore Structure on Compressive Strength of Cement Mortar
Zhao, Haitao; Xiao, Qi; Huang, Donghui
2014-01-01
This paper describes an experimental investigation into the pore structure of cement mortar using a mercury porosimeter. Ordinary Portland cement, manufactured sand, and natural sand were used. The porosity of the manufactured sand mortar is higher than that of natural sand mortar at the same mix proportion; however, the probable pore size and threshold radius of manufactured sand mortar are finer. Both the probable pore size and threshold radius increased with increasing water-to-cement ratio and sand-to-cement ratio. In addition, existing models of the pore size distribution of cement-based materials are reviewed and compared with the test results. Finally, an extended Bhattacharjee model was built to examine the relationship between compressive strength and pore structure. PMID:24757414
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2005-04-01
We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×10^19 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
Physician Bayesian updating from personal beliefs about the base rate and likelihood ratio.
Rottman, Benjamin Margolin
2017-02-01
Whether humans can accurately make decisions in line with Bayes' rule has been one of the most important yet contentious topics in cognitive psychology. Though a number of paradigms have been used for studying Bayesian updating, rarely have subjects been allowed to use their own preexisting beliefs about the prior and the likelihood. A study is reported in which physicians judged the posttest probability of a diagnosis for a patient vignette after receiving a test result, and the physicians' posttest judgments were compared with the normative posttest probability calculated from their own beliefs about the sensitivity and false positive rate of the test (the likelihood ratio) and the prior probability of the diagnosis. On the one hand, the posttest judgments were strongly related to the physicians' beliefs about both the prior probability and the likelihood ratio, and the priors were used considerably more strongly than in previous research. On the other hand, both the prior and the likelihoods were still not used quite as much as they should have been, and there was evidence of other nonnormative aspects of the updating, such as updating independently of the likelihood beliefs. By focusing on how physicians use their own prior beliefs for Bayesian updating, this study provides insight into how well experts perform probabilistic inference in settings in which they rely upon their own prior beliefs rather than experimenter-provided cues. It suggests that there is reason to be optimistic about experts' abilities, but that there is still considerable need for improvement.
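The normative posttest probability against which the physicians' judgments were compared can be computed directly from a prior, a sensitivity, and a false positive rate; the numbers below are hypothetical beliefs, not values elicited in the study.

```python
def normative_posttest(prior, sensitivity, false_positive_rate):
    """P(diagnosis | positive test) computed from a physician's own beliefs."""
    p_positive = sensitivity * prior + false_positive_rate * (1.0 - prior)
    return sensitivity * prior / p_positive

# Hypothetical beliefs (assumptions): 20% prior probability of the diagnosis,
# test sensitivity 0.90, false positive rate 0.15.
print(round(normative_posttest(0.20, 0.90, 0.15), 2))  # -> 0.6
```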
NASA Astrophysics Data System (ADS)
Cocco, M.
2001-12-01
Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with the seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress trigger or shadow model. The probability model, which incorporates stress transfer, has the main advantage of including the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate, and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unknown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability may therefore be the most suitable tool to test the interaction between large-magnitude earthquakes. Despite these important implications and stimulating perspectives, there remain problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict how, and whether, the induced stress perturbations modify the ratio of small to large magnitude earthquakes. In other words, we cannot distinguish between a change in this ratio in favor of small events or of large-magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history.
The interaction probability model is based on the response to a static stress step; however, other processes also contribute to the stressing history perturbing the faults, such as dynamic stress changes and post-seismic stress changes caused by viscoelastic relaxation or fluid flow. If, for instance, we believe that dynamic stress changes can trigger aftershocks or earthquakes years after the seismic waves have passed through the fault, the prospect of calculating interaction probability is untenable. It is therefore clear that we have learned a great deal about earthquake interaction by incorporating fault constitutive properties, resolving existing controversies but leaving open questions for future research.
Corson-Knowles, Daniel; Russell, Frances M
2018-05-01
Clinical ultrasound (CUS) is highly specific for the diagnosis of acute appendicitis but is operator-dependent. The goal of this study was to determine whether a heterogeneous group of emergency physicians (EPs) could diagnose acute appendicitis on CUS in patients with a moderate to high pre-test probability. This was a prospective, observational study of a convenience sample of adult and pediatric patients with suspected appendicitis. Sonographers received a structured, 20-minute CUS training session on appendicitis prior to patient enrollment. The presence of a dilated (>6 mm diameter), non-compressible, blind-ending tubular structure was considered a positive study; non-visualization or indeterminate studies were considered negative. We recorded pre-test probability of acute appendicitis on a 10-point visual analog scale (moderate to high defined as >3) and confidence in CUS interpretation. The primary objective was measured by comparing CUS findings to surgical pathology and one-week follow-up. We enrolled 105 patients; 76 had moderate to high pre-test probability, of whom 24 were children. The rate of appendicitis was 36.8% in those with moderate to high pre-test probability. CUS examinations were performed by 33 different EPs. The sensitivity, specificity, and positive and negative likelihood ratios of EP-performed CUS in patients with moderate to high pre-test probability were 42.8% (95% confidence interval [CI] [25-62.5%]), 97.9% (95% CI [87.5-99.8%]), 20.7 (95% CI [2.8-149.9]), and 0.58 (95% CI [0.42-0.8]), respectively. The 16 false-negative scans were all interpreted as indeterminate. There was one false-positive CUS diagnosis; however, the sonographer reported low confidence of 2/10. A heterogeneous group of EP sonographers can safely identify acute appendicitis with high specificity in patients with moderate to high pre-test probability. These data add support for surgical consultation without further imaging beyond CUS in the appropriate clinical setting.
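The reported likelihood ratios follow directly from sensitivity and specificity; the 2x2 counts below are an approximate reconstruction from the reported rates (28 appendicitis cases among the 76 moderate-to-high-probability patients), not the study's raw data.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, LR+ and LR- from a 2x2 diagnostic table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return sensitivity, specificity, lr_pos, lr_neg

# Approximate reconstruction (assumption): 28 cases (12 true positives,
# 16 indeterminate scans counted as false negatives) and 48 non-cases
# (1 false positive).
sens, spec, lr_pos, lr_neg = diagnostic_metrics(tp=12, fn=16, fp=1, tn=47)
print(round(sens, 3), round(spec, 3), round(lr_pos, 1), round(lr_neg, 2))
# -> 0.429 0.979 20.6 0.58, close to the reported 42.8%, 97.9%, 20.7 and 0.58
```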
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cikota, Aleksandar; Deustua, Susana; Marleau, Francine, E-mail: acikota@eso.org
We investigate limits on the extinction values of Type Ia supernovae (SNe Ia) to statistically determine the most probable color excess, E(B – V), with galactocentric distance, and use these statistics to determine the absorption-to-reddening ratio, R{sub V}, for dust in the host galaxies. We determined pixel-based dust mass surface density maps for 59 galaxies from the Key Insight on Nearby Galaxies: a Far-infrared Survey with Herschel (KINGFISH). We use SN Ia spectral templates to develop a Monte Carlo simulation of color excess E(B – V) with R{sub V} = 3.1 and investigate the color excess probabilities E(B – V) with projected radial galaxy-center distance. Additionally, we tested our model using observed spectra of SN 1989B, SN 2002bo, and SN 2006X, which occurred in three KINGFISH galaxies. Finally, we determined the most probable reddening for Sa–Sap, Sab–Sbp, Sbc–Scp, Scd–Sdm, S0, and irregular galaxy classes as a function of R/R{sub 25}. We find that the largest expected reddening probabilities are in Sab–Sb and Sbc–Sc galaxies, while S0 and irregular galaxies are very dust poor. We present a new approach for determining the absorption-to-reddening ratio R{sub V} using color excess probability functions and find values of R{sub V} = 2.71 ± 1.58 for 21 SNe Ia observed in Sab–Sbp galaxies, and R{sub V} = 1.70 ± 0.38 for 34 SNe Ia observed in Sbc–Scp galaxies.
Order-restricted inference for means with missing values.
Wang, Heng; Zhong, Ping-Shou
2017-09-01
Missing values appear very often in many applications, but the problem of missing values has not received much attention in testing order-restricted alternatives. Under the missing at random (MAR) assumption, we impute the missing values nonparametrically using kernel regression. For data with imputation, the classical likelihood ratio test designed for testing order-restricted means is no longer applicable since the likelihood does not exist. This article proposes a novel method for constructing test statistics for assessing means with an increasing or a decreasing order based on a jackknife empirical likelihood (JEL) ratio. It is shown that the JEL ratio statistic evaluated under the null hypothesis converges to a chi-bar-square distribution, whose weights depend on the missing probabilities and the nonparametric imputation. A simulation study shows that the proposed test performs well under various missing-data scenarios and is robust for normally and nonnormally distributed data. The proposed method is applied to an Alzheimer's Disease Neuroimaging Initiative data set to find a biomarker for the diagnosis of Alzheimer's disease. © 2017, The International Biometric Society.
Physiological condition of autumn-banded mallards and its relationship to hunting vulnerability
Hepp, G.R.; Blohm, R.J.; Reynolds, R.E.; Hines, J.E.; Nichols, J.D.
1986-01-01
An important topic of waterfowl ecology concerns the relationship between the physiological condition of ducks during the nonbreeding season and fitness, i.e., survival and future reproductive success. We investigated this subject using direct band recovery records of mallards (Anas platyrhynchos) banded in autumn (1 Oct-15 Dec) 1981-83 in the Mississippi Alluvial Valley (MAV) [USA]. A condition index, weight (g)/wing length (mm), was calculated for each duck, and we tested whether condition of mallards at time of banding was related to their probability of recovery during the hunting season. In 3 years, 5,610 mallards were banded and there were 234 direct recoveries. A binary regression model was used to test the relationship between recovery probability and condition, and likelihood-ratio tests were conducted to determine the most suitable model. For mallards banded in autumn there was a negative relationship between physical condition and the probability of recovery: mallards in poor condition at the time of banding had a greater probability of being recovered during the hunting season. In general, this was true for all age and sex classes; however, the strongest relationship occurred for adult males.
Diagnostic Accuracy of Tests for Polyuria in Lithium-Treated Patients.
Kinahan, James Conor; NiChorcorain, Aoife; Cunningham, Sean; Freyne, Aideen; Cooney, Colm; Barry, Siobhan; Kelly, Brendan D
2015-08-01
In lithium-treated patients, polyuria increases the risk of dehydration and lithium toxicity. If detected early, it is reversible. Despite its prevalence and associated morbidity in clinical practice, it remains underrecognized and therefore undertreated. The 24-hour urine collection is limited by its inconvenience and impracticality. This study explores the diagnostic accuracy of alternative tests such as questionnaires on subjective polyuria, polydipsia, and nocturia (dichotomous and ordinal responses), early morning urine sample osmolality (EMUO), and a fluid intake record (FIR). This is a cross-sectional study of 179 lithium-treated patients attending a general adult and an old age psychiatry service. Participants completed the tests after completing an accurate 24-hour urine collection, and the diagnostic accuracy of the individual tests was explored using the appropriate statistical techniques. Seventy-nine participants completed all of the tests. Polydipsia severity, EMUO, and FIR significantly differentiated the participants with polyuria (areas under the receiver operating characteristic curve of 0.646, 0.760, and 0.846, respectively). Of the tests investigated, the FIR made the largest significant change in the probability that a patient experiences polyuria (<2000 mL/24 h, interval likelihood ratio 0.18; >3500 mL/24 h, interval likelihood ratio 14). Symptomatic questioning, EMUO, and an FIR could be used in clinical practice to inform the prescriber of the probability that a lithium-treated patient is experiencing polyuria.
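An interval (stratum-specific) likelihood ratio compares how often a test result falls in a given range among patients with and without the condition; the counts below are hypothetical, chosen only to illustrate how an interval LR near 14 could arise.

```python
def interval_likelihood_ratio(cases_in_stratum, total_cases,
                              controls_in_stratum, total_controls):
    """Stratum-specific LR: P(result in stratum | polyuria) /
    P(result in stratum | no polyuria)."""
    return ((cases_in_stratum / total_cases) /
            (controls_in_stratum / total_controls))

# Hypothetical counts (assumptions): of 20 patients with polyuria, 14 recorded
# an intake >3500 mL/24 h; of 59 patients without polyuria, 3 did.
print(round(interval_likelihood_ratio(14, 20, 3, 59), 1))  # -> 13.8
```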
Pimentel, Mark; Purdy, Chris; Magar, Raf; Rezaie, Ali
2016-07-01
The high incidence of irritable bowel syndrome (IBS) is associated with significant medical costs. Diarrhea-predominant IBS (IBS-D) is diagnosed on the basis of clinical presentation and diagnostic test results and procedures that exclude other conditions. This study was conducted to estimate the potential cost savings of a novel IBS diagnostic blood panel that tests for the presence of antibodies to cytolethal distending toxin B and anti-vinculin antibodies associated with IBS-D. A cost-minimization (CM) decision tree model was used to compare the costs of the novel IBS diagnostic blood panel pathway versus an exclusionary diagnostic pathway (ie, standard of care). The probability that patients proceed to treatment was modeled as a function of the sensitivity, specificity, and likelihood ratios of the individual biomarker tests. One-way sensitivity analyses were performed for key variables, and a break-even analysis was performed for the pretest probability of IBS-D. A budget impact analysis of the CM model was extrapolated to a health plan with 1 million covered lives. The CM model (base case) predicted $509 in cost savings for the novel IBS diagnostic blood panel versus the exclusionary diagnostic pathway because of the avoidance of downstream testing (eg, colonoscopy, computed tomography scans). Sensitivity analysis indicated that an increase in both positive likelihood ratios modestly increased cost savings. Break-even analysis estimated that the pretest probability of disease would need to be 0.451 to attain cost neutrality. The budget impact analysis predicted cost savings of $3,634,006 ($0.30 per member per month). The novel IBS diagnostic blood panel may yield significant cost savings by allowing patients to proceed to treatment earlier, thereby avoiding unnecessary testing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
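A break-even pretest probability can be sketched with a much-simplified version of such a cost-minimization tree; the model and the dollar figures below are assumptions for illustration, not the paper's decision tree or its inputs.

```python
def breakeven_pretest(cost_panel, cost_workup, sensitivity):
    """Pretest probability of IBS-D at which a panel-first pathway is cost
    neutral, in a toy model (assumption) where every patient pays for the
    panel and true positives skip the exclusionary workup entirely."""
    return cost_panel / (sensitivity * cost_workup)

# Hypothetical figures (assumptions): $200 panel, $1,000 downstream workup,
# 44% panel sensitivity.
print(round(breakeven_pretest(200, 1000, 0.44), 3))  # -> 0.455
```

Above the break-even pretest probability the panel pathway saves money on average; below it, the exclusionary workup is cheaper.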
Perlis, Roy H.; Patrick, Amanda; Smoller, Jordan W.; Wang, Philip S.
2009-01-01
The potential of personalized medicine to transform the treatment of mood disorders has been widely touted in psychiatry, but has not been quantified. We estimated the costs and benefits of a putative pharmacogenetic test for antidepressant response in the treatment of major depressive disorder (MDD) from the societal perspective. Specifically, we performed cost-effectiveness analyses using state-transition probability models incorporating probabilities from the multicenter STAR*D effectiveness study of MDD. Costs and quality-adjusted life years were compared for sequential antidepressant trials, with or without guidance from a pharmacogenetic test for differential response to selective serotonin reuptake inhibitors (SSRIs). Likely SSRI responders received an SSRI, while likely nonresponders received the norepinephrine/dopamine reuptake inhibitor bupropion. For a 40-year-old with major depressive disorder, applying the pharmacogenetic test and using the non-SSRI bupropion for those at higher risk for nonresponse cost $93,520 per additional quality-adjusted life-year (QALY) compared with treating all patients with an SSRI first and switching sequentially in the case of nonremission. Cost/QALY dropped below $50,000 for tests with remission rate ratios as low as 1.5, corresponding to odds ratios ~1.8–2.0. Tests for differential antidepressant response could thus become cost-effective under certain circumstances. These circumstances, particularly availability of alternative treatment strategies and test effect sizes, can be estimated and should be considered before these tests are broadly applied in clinical settings. PMID:19494805
Predicting Rotator Cuff Tears Using Data Mining and Bayesian Likelihood Ratios
Lu, Hsueh-Yi; Huang, Chen-Yuan; Su, Chwen-Tzeng; Lin, Chen-Chiang
2014-01-01
Objectives Rotator cuff tear is a common cause of shoulder disease. Correct diagnosis of rotator cuff tears can save patients from further invasive, costly and painful tests. This study used predictive data mining and Bayesian theory to improve the accuracy of diagnosing rotator cuff tears by clinical examination alone. Methods In this retrospective study, 169 patients who had a preliminary diagnosis of rotator cuff tear on the basis of clinical evaluation followed by confirmatory MRI between 2007 and 2011 were identified. MRI was used as a reference standard to classify rotator cuff tears. The predictor variable was the clinical assessment results, which consisted of 16 attributes. This study employed two data mining methods (an artificial neural network [ANN] and a decision tree) and a statistical method (logistic regression) to classify the rotator cuff diagnosis into "tear" and "no tear" groups. Likelihood ratios and Bayesian theory were applied to estimate the probability of rotator cuff tears based on the results of the prediction models. Results Our proposed data mining procedures outperformed the classic statistical method: the correct classification rate, sensitivity, specificity and area under the ROC curve for predicting a rotator cuff tear were statistically better in the ANN and decision tree models than in logistic regression. Based on likelihood ratios derived from our prediction models, Fagan's nomogram could be constructed to assess the probability that a patient has a rotator cuff tear using a pretest probability and a prediction result (tear or no tear). Conclusions Our predictive data mining models, combined with likelihood ratios and Bayesian theory, appear to be good tools to classify rotator cuff tears as well as to determine the probability of the presence of the disease, enhancing diagnostic decision making for rotator cuff tears. PMID:24733553
Reibnegger, Gilbert; Caluba, Hans-Christian; Ithaler, Daniel; Manhal, Simone; Neges, Heide Maria; Smolle, Josef
2011-08-01
Admission to medical studies in Austria has been regulated by admission tests since academic year 2005-2006. At the Medical University of Graz, an admission test focusing on secondary-school-level knowledge in the natural sciences has been used for this purpose. The impact of this important change on the dropout rates of female versus male students and of older versus younger students is reported. All 2,860 students admitted to the human medicine diploma program at the Medical University of Graz from academic years 2002-2003 to 2008-2009 were included. Nonparametric and semiparametric survival analysis techniques were employed to compare the cumulative probability of dropout between demographic groups. The cumulative probability of dropout was significantly lower in students selected by the active admission procedure than in those admitted openly (P < .0001); the relative hazard ratio of selected versus openly admitted students was only 0.145 (95% CI, 0.106-0.198). Among openly admitted students, but not among selected ones, the cumulative probabilities of dropout were higher for females (P < .0001) and for older students (P < .0001). Generally, the dropout hazard is highest during the second year of study. The introduction of admission testing significantly decreased the cumulative probability of dropout. Among openly admitted students a significantly higher risk of dropout was found in female and older students, whereas no such effects were detected after admission testing. Future research should focus on the sex dependence, with the aim of improving success rates among female applicants on the admission tests.
Kealey, S M; Dodd, J D; MacEneaney, P M; Gibney, R G; Malone, D E
2004-01-01
To evaluate the efficacy of minimal preparation computed tomography (MPCT) in diagnosing clinically significant colonic tumours in frail, elderly patients. A prospective study was performed in a group of consecutively referred, frail, elderly patients with symptoms or signs of anaemia, pain, rectal bleeding or weight loss. The MPCT protocol consisted of 1.5 l Gastrografin 1% diluted with sterile water administered during the 48 h before the procedure with no bowel preparation or administration of intravenous contrast medium. Eight millimetre contiguous scans through the abdomen and pelvis were performed. The scans were double-reported by two gastrointestinal radiologists as showing definite (>90% certain), probable (50-90% certain), possible (<50% certain) neoplasm or normal. Where observers disagreed the more pessimistic of the two reports was accepted. The gold standard was clinical outcome at 1 year with positive end-points defined as (1) histological confirmation of CRC, (2) clinical presentation consistent with CRC without histological confirmation if the patient was too unwell for biopsy/surgery, and (3) death directly attributable to colorectal carcinoma (CRC) with/without post-mortem confirmation. Negative end-points were defined as patients with no clinical, radiological or post-mortem findings of CRC. Patients were followed for 1 year or until one of the above end-points were met. Seventy-two patients were included (mean age 81; range 62-93). One-year follow-up was completed in 94.4% (n=68). Mortality from all causes was 33% (n=24). Five histologically proven tumours were diagnosed with CT and there were two probable false-negatives. 
Results were analysed twice: first assuming that all CT-detected lesions were test positive, and second considering "possible" lesions as test negative (bracketed values; 95% confidence intervals in parentheses): sensitivity 0.88 (0.47-1.0) [0.75 (0.35-0.97)], specificity 0.47 (0.34-0.6) [0.87 (0.75-0.94)], positive predictive value 0.18 [0.43], negative predictive value 0.97 [0.96], positive likelihood ratio 1.6 [5.63], negative likelihood ratio 0.27 [0.29], kappa 0.31 [0.43]. Tumour prevalence was 12%. A graph of conditional probabilities was generated and analysed. A variety of unsuspected pathology was also found in this series of patients. MPCT should be double-reported, at least initially, and "possible" lesions should be ignored. Analysis of the graph of conditional probability applied to a group of frail, elderly patients with a high mortality from all causes (33% in our study) suggests: (1) if MPCT suggests definite or probable carcinoma, regardless of the pre-test probability, the post-test probability is high enough to warrant further action; (2) frail, elderly patients with a low pre-test probability of CRC and a negative MPCT should not have further investigation; (3) frail, elderly patients with a higher pre-test probability of CRC (such as those presenting with rectal bleeding) and a negative MPCT should have either double-contrast barium enema (DCBE) or colonoscopy as further investigation, or be followed clinically for 3-6 months. MPCT was acceptable to patients and clinicians and may reveal significant extra-colonic pathology.
Brand, Matthias; Schiebener, Johannes; Pertl, Marie-Theres; Delazer, Margarete
2014-01-01
Recent models on decision making under risk conditions have suggested that numerical abilities are important ingredients of advantageous decision-making performance, but empirical evidence is still limited. The results of our first study show that logical reasoning and basic mental calculation capacities predict ratio processing and that ratio processing predicts decision making under risk. In the second study, logical reasoning together with executive functions predicted probability processing (numeracy and probability knowledge), and probability processing predicted decision making under risk. These findings suggest that increasing an individual's understanding of ratios and probabilities should lead to more advantageous decisions under risk conditions.
From reading numbers to seeing ratios: a benefit of icons for risk comprehension.
Tubau, Elisabet; Rodríguez-Ferreiro, Javier; Barberia, Itxaso; Colomé, Àngels
2018-06-21
Promoting a better understanding of statistical data is becoming increasingly important for improving risk comprehension and decision-making. In this regard, previous studies on Bayesian problem solving have shown that iconic representations help infer frequencies in sets and subsets. Nevertheless, the mechanisms by which icons enhance performance remain unclear. Here, we tested the hypothesis that the benefit offered by icon arrays lies in a better alignment between presented and requested relationships, which should facilitate comprehension of the requested ratio beyond the represented quantities. To this end, we analyzed individual risk estimates based on data presented either in standard verbal formats (percentages and natural frequencies) or as icon arrays. Compared with the other formats, icons led to estimates that were more accurate and, importantly, promoted the use of equivalent expressions for the requested probability. Furthermore, whereas the accuracy of the estimates based on verbal formats depended on their alignment with the text, all the estimates based on icons were equally accurate. These results therefore support the proposal that icons enhance comprehension of the ratio and its mapping onto the requested probability, and they point to relational misalignment as a potential source of interference in text-based Bayesian reasoning. The present findings also argue against an intrinsic difficulty with understanding single-event probabilities.
Naess, Are; Nilssen, Siri Saervold; Mo, Reidun; Eide, Geir Egil; Sjursen, Haakon
2017-06-01
To study the role of the neutrophil:lymphocyte ratio (NLR) and monocyte:lymphocyte ratio (MLR) in discriminating between different patient groups hospitalized for fever due to infection and those without infection. For 299 patients admitted to hospital for fever of unknown cause, a number of characteristics including NLR and MLR were recorded. These characteristics were used in a multiple multinomial regression analysis to estimate the probability of a final diagnostic group of bacterial, viral, clinically confirmed, or no infection. Both NLR and MLR significantly predicted the final diagnostic group; because the two variables were highly correlated, however, they could not be retained in the same model. Both variables also interacted significantly with duration of fever. Generally, higher values of NLR and MLR indicated larger probabilities of bacterial infection and lower probabilities of viral infection. Patients with septicemia had significantly higher NLR than patients with other bacterial infections with fever for less than one week, whereas white blood cell counts, neutrophil counts, and C-reactive protein did not differ significantly between septicemia and the other bacterial infection groups. NLR is thus a more useful diagnostic tool for identifying patients with septicemia than other more commonly used diagnostic blood tests, and NLR and MLR may be useful in the diagnosis of bacterial infection among patients hospitalized for fever.
Tables of Stark level transition probabilities and branching ratios in hydrogen-like atoms
NASA Technical Reports Server (NTRS)
Omidvar, K.
1980-01-01
The transition probabilities, given in terms of n′, k′ and n, k, are tabulated; no additional summing or averaging is necessary. The electric quantum number k plays the role of the angular momentum quantum number l in the presence of an electric field. The branching ratios between Stark levels are also tabulated, and the necessary formulas for the transition probabilities and branching ratios are given. Symmetries are discussed and selection rules are given. For some branching ratios, disagreements are found between the present calculation and the measurement of Mark and Wierl. The transition probability multiplied by the statistical weight of the initial state is called the static intensity J_S, while the branching ratios are called the dynamic intensity J_D.
Bogaert, Anthony F
2005-02-01
One line of research on the etiology of sexual orientation has examined sibling sex ratio, the ratio of brothers to sisters collectively reported by a group of individuals, but this research has only used clinical and/or convenience samples. In the present study, homosexual men and women's sibling sex ratio was examined in two national probability samples. Results indicated that homosexual men had a sex ratio of 129.54 male live births to 100 female live births. This ratio was within the range of elevated sex ratios found in some previous studies of homosexual men, although it was only marginally significant (p = .09) relative to the known human sex ratio with regard to live births. Additional analyses indicated that this effect was likely the result of a high fraternal birth order (i.e., an elevated number of older brothers) in homosexual men. The sibling sex ratio for lesbians was 122.58 male live births to 100 female live births, which did not significantly differ from the known human sex ratio with regard to live births. The results for lesbians, however, should be interpreted with caution because the sample size (and resulting power) was low. The results in men add to research suggesting that homosexual men, unselected for gender identity or gender role behavior, do not have elevated sibling sex ratios. These results also suggest that research should concentrate on finding the cause(s) of the fraternal birth order effect, the consistent finding that homosexual men have an elevated number of older brothers.
Model-Free CUSUM Methods for Person Fit
ERIC Educational Resources Information Center
Armstrong, Ronald D.; Shi, Min
2009-01-01
This article demonstrates the use of a new class of model-free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model-free person-fit statistics…
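A minimal sketch of a CUSUM that accumulates the log-likelihood ratio of two probabilities, with the usual reset at zero; the probability streams and threshold below are illustrative, not the article's person-fit statistics.

```python
import math

def cusum_flags(probs_h0, probs_h1, threshold):
    """Accumulate log-likelihood ratios of two probabilities with the usual
    CUSUM reset at zero; flag each step where the sum exceeds the threshold."""
    s, flags = 0.0, []
    for p0, p1 in zip(probs_h0, probs_h1):
        s = max(0.0, s + math.log(p1 / p0))
        flags.append(s > threshold)
    return flags

# Illustrative streams (assumptions): each response is twice as probable
# under the alternative model, so the sum drifts upward by log(2) per step.
print(cusum_flags([0.4] * 4, [0.8] * 4, threshold=2.0))
# -> [False, False, True, True]
```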
Evaluating impacts using a BACI design, ratios, and a Bayesian approach with a focus on restoration.
Conner, Mary M; Saunders, W Carl; Bouwes, Nicolaas; Jordan, Chris
2015-10-01
Before-after-control-impact (BACI) designs are an effective method to evaluate natural and human-induced perturbations on ecological variables when treatment sites cannot be randomly chosen. While effect sizes of interest can be tested with frequentist methods, Bayesian Markov chain Monte Carlo (MCMC) sampling methods allow probabilities of effect sizes, such as a ≥20% increase in density after restoration, to be estimated directly. Although BACI and Bayesian methods are widely used for assessing natural and human-induced impacts in field experiments, the application of hierarchical Bayesian modeling with MCMC sampling to BACI designs is less common. Here, we combine these approaches and extend the typical presentation of results with an easy-to-interpret ratio, which provides an answer to the main study question: "How much impact did a management action or natural perturbation have?" As an example of this approach, we evaluate the impact of a restoration project, which implemented beaver dam analogs, on survival and density of juvenile steelhead. Results indicated that the probabilities of a ≥30% increase after the dams were installed were high for survival and density, 0.88 and 0.99, respectively, while the probabilities of a larger, ≥50% increase were more variable, 0.17 and 0.82, respectively. This approach demonstrates a useful extension of Bayesian methods that can easily be generalized to other study designs, from simple (e.g., single-factor ANOVA, paired t test) to more complicated block designs (e.g., crossover, split-plot), and is valuable for estimating the probabilities of impacts of restoration or other management actions.
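Estimating the probability of, say, a ≥30% increase is a one-liner once paired posterior draws are available; the draws below are simulated stand-ins for MCMC chains (an assumption), not the study's output.

```python
import random

def prob_ratio_at_least(draws_before, draws_after, min_ratio):
    """P(after/before >= min_ratio), estimated from paired posterior draws."""
    hits = sum(a / b >= min_ratio for a, b in zip(draws_after, draws_before))
    return hits / len(draws_before)

# Simulated stand-ins for posterior chains of juvenile steelhead density
# (hypothetical values, not the study's MCMC output).
random.seed(0)
before = [random.gauss(10.0, 1.0) for _ in range(10000)]
after = [random.gauss(14.0, 1.5) for _ in range(10000)]
print(round(prob_ratio_at_least(before, after, 1.3), 2))  # P(>=30% increase)
```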
Expert system for online surveillance of nuclear reactor coolant pumps
Gross, Kenny C.; Singer, Ralph M.; Humenik, Keith E.
1993-01-01
An expert system is described for online surveillance of nuclear reactor coolant pumps. The system provides a means for early detection of pump or sensor degradation. Degradation is determined through the use of a statistical analysis technique, the sequential probability ratio test, applied to information from several sensors that are responsive to differing physical parameters. The results of sequential testing of the data provide the operator with an early warning of possible sensor or pump failure.
Bivariate categorical data analysis using normal linear conditional multinomial probability model.
Sun, Bingrui; Sutradhar, Brajendra
2015-02-10
Bivariate multinomial data, such as the left and right eyes retinopathy status data, are analyzed either by using a joint bivariate probability model or by exploiting certain odds ratio-based association models. However, the joint bivariate probability model yields marginal probabilities that are complicated functions of marginal and association parameters for both variables, and the odds ratio-based association model treats the odds ratios involved in the joint probabilities as 'working' parameters, which are consequently estimated through certain arbitrary 'working' regression models. Also, this latter odds ratio-based model does not provide any easy interpretation of the correlations between two categorical variables. On the basis of pre-specified marginal probabilities, in this paper, we develop a bivariate normal-type linear conditional multinomial probability model to understand the correlations between two categorical variables. The parameters involved in the model are consistently estimated using the optimal likelihood and generalized quasi-likelihood approaches. The proposed model and the inferences are illustrated through an intensive simulation study as well as an analysis of the well-known Wisconsin Diabetic Retinopathy status data. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Baker, L. R., Jr.; Tevepaugh, J. A.; Penny, M. M.
1973-01-01
Variations of nozzle performance characteristics of the model nozzles used in the Space Shuttle IA12B, IA12C, IA36 power-on launch vehicle test series are shown by comparison between experimental and analytical data. The experimental data are nozzle wall pressure distributions and schlieren photographs of the exhaust plume shapes. The exhaust plume shapes were simulated experimentally with cold flow, while the analytical data were generated using a method-of-characteristics solution. Exhaust plume boundaries, boundary shockwave locations, and nozzle wall pressure measurements calculated analytically agree favorably with the experimental data from the IA12C and IA36 test series. For the IA12B test series, condensation was suspected in the exhaust plumes at the higher pressure ratios required to simulate the prototype plume shapes. Nozzle calibration tests for the series were conducted at pressure ratios where condensation either did not occur or, if present, did not produce a noticeable effect on the plume shapes. However, at the pressure ratios required in the power-on launch vehicle tests, condensation probably occurs and could significantly affect the exhaust plume shapes.
Cost-Effectiveness of Opt-Out Chlamydia Testing for High-Risk Young Women in the U.S.
Owusu-Edusei, Kwame; Hoover, Karen W; Gift, Thomas L
2016-08-01
In spite of chlamydia screening recommendations, U.S. testing coverage continues to be low. This study explored the cost-effectiveness of a patient-directed, universal, opportunistic Opt-Out Testing strategy (based on insurance coverage, healthcare utilization, and test acceptance probabilities) for all women aged 15-24 years compared with current Risk-Based Screening (30% coverage) from a societal perspective. Based on insurance coverage (80%), healthcare utilization (83%), and test acceptance (75%), the proposed Opt-Out Testing strategy would have an expected annual testing coverage of approximately 50% for sexually active women aged 15-24 years. A basic compartmental heterosexual transmission model was developed to account for population-level transmission dynamics. Two groups were assumed based on self-reported sexual activity. All model parameters were obtained from the literature. Costs and benefits were tracked over a 50-year period. The relative sensitivity of the estimated incremental cost-effectiveness ratios to the variables/parameters was determined. This study was conducted in 2014-2015. Based on the model, the Opt-Out Testing strategy decreased the overall chlamydia prevalence by >55% (2.7% to 1.2%). The Opt-Out Testing strategy was cost saving compared with the current Risk-Based Screening strategy. The estimated incremental cost-effectiveness ratio was most sensitive to the female pre-opt-out prevalence, followed by the probability of female sequelae and the discount rate. The proposed Opt-Out Testing strategy was cost saving, improving health outcomes at a lower net cost than current testing. However, testing gaps would remain because many women might not have health insurance coverage, or might not utilize health care. Published by Elsevier Inc.
A Computer-Aided Diagnosis System for Breast Cancer Combining Mammography and Proteomics
2007-05-01
findings in both Data sets C and M. The likelihood ratio is the probability of the features under the malignant case divided by the probability of...likelihood ratio value as a classification decision variable, the probabilities of detection and false alarm are calculated as follows: Pdfusion...lowered the fused classifier's performance to near chance levels. A genetic algorithm searched over the likelihood-ratio threshold values for each
Timmermans, Luc; Falez, Freddy; Mélot, Christian; Wespes, Eric
2013-09-01
A urinary incontinence impairment rating must be a highly accurate, non-invasive exploration of the condition using International Classification of Functioning (ICF)-based assessment tools. The objective of this study was to identify the best evaluation test and to determine an impairment rating model of urinary incontinence. In performing a cross-sectional study comparing successive urodynamic tests using both the International Consultation on Incontinence Questionnaire-Urinary Incontinence-Short Form (ICIQ-UI-SF) and the 1-hr pad-weighing test in 120 patients, we performed statistical likelihood ratio analysis and used logistic regression to calculate the probability of urodynamic incontinence using the most significant independent predictors. Subsequently, we created a template that was based on the significant predictors and the probability of urodynamic incontinence. The mean ICIQ-UI-SF score was 13.5 ± 4.6, and the median pad test value was 8 g. The discrimination statistic (receiver operating characteristic) described how well the urodynamic observations matched the ICIQ-UI-SF scores (area under the curve (UDA): 0.689) and the pad test data (UDA: 0.693). Using logistic regression analysis, we demonstrated that the best independent predictors of urodynamic incontinence were the patient's age and the ICIQ-UI-SF score. The logistic regression model permitted us to construct an equation to determine the probability of urodynamic incontinence. Using these tools, we created a template to generate a probability index of urodynamic urinary incontinence. Using this probability index, relative to the patient and to the maximum impairment of the whole person (MIWP) relative to urinary incontinence, we were able to calculate a patient's permanent impairment. Copyright © 2012 Wiley Periodicals, Inc.
Xu, Maoqi; Chen, Liang
2018-01-01
The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomics studies of cancers and other complex diseases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
ERIC Educational Resources Information Center
Santi, Terri
This book contains a classroom-tested approach to the teaching of problem solving to all students in Grades 4-6, regardless of ability. Information on problem solving in general is provided, then mathematical problems on logic, whole numbers, number theory, fractions, decimals, geometry, ratio, proportion, percent, probability, sets, and…
Quantifying radionuclide signatures from a γ-γ coincidence system.
Britton, Richard; Jackson, Mark J; Davies, Ashley V
2015-11-01
A method for quantifying gamma coincidence signatures has been developed, and tested in conjunction with a high-efficiency multi-detector system to quickly identify trace amounts of radioactive material. The γ-γ system utilises fully digital electronics and list-mode acquisition to time-stamp each event, allowing coincidence matrices to be easily produced alongside typical 'singles' spectra. To quantify the coincidence signatures a software package has been developed to calculate efficiency and cascade summing corrected branching ratios. This utilises ENSDF records as an input, and can be fully automated, allowing the user to quickly and easily create/update a coincidence library that contains all possible γ and conversion electron cascades, associated cascade emission probabilities, and true-coincidence summing corrected γ cascade detection probabilities. It is also fully searchable by energy, nuclide, coincidence pair, γ multiplicity, cascade probability and half-life of the cascade. The probabilities calculated were tested using measurements performed on the γ-γ system, and found to provide accurate results for the nuclides investigated. Given the flexibility of the method, (it only relies on evaluated nuclear data, and accurate efficiency characterisations), the software can now be utilised for a variety of systems, quickly and easily calculating coincidence signature probabilities. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Intervals for posttest probabilities: a comparison of 5 methods.
Mossman, D; Berger, J O
2001-01-01
Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
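The point estimate around which those intervals are constructed follows directly from Bayes' theorem; a minimal sketch of that step only (the interval-construction methods the article compares, including the objective Bayesian simulation, are not reproduced here).

```python
def posttest_probability(pretest, sensitivity, specificity, positive=True):
    """Post-test probability of disease from pre-test probability,
    sensitivity, and specificity via Bayes' theorem. For a positive
    result this is the positive predictive value; for a negative
    result, one minus the negative predictive value."""
    if positive:
        num = pretest * sensitivity
        den = num + (1 - pretest) * (1 - specificity)
    else:
        num = pretest * (1 - sensitivity)
        den = num + (1 - pretest) * specificity
    return num / den
```

For example, with a pre-test probability of 0.5, sensitivity 0.8, and specificity 0.9, a positive result yields a post-test probability of about 0.89 and a negative result about 0.18.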
1994-03-01
levels of α, which are called significance levels. The hypothesis tests are done based on the α levels. The maximum probabilities of making a type II error...critical values at specific α levels. This procedure is done for each of the 50,000 samples. The number of samples passing each test at those specific... α levels is counted. The ratio of the number of accepted samples to 50,000 gives the percentage point. Then, subtracting that value from one would
Cronin, Paul; Dwamena, Ben A
2018-05-01
This study aimed to calculate the multiple-level likelihood ratios (LRs) and posttest probabilities for a positive, indeterminate, or negative test result for multidetector computed tomography pulmonary angiography (MDCTPA) ± computed tomography venography (CTV) and magnetic resonance pulmonary angiography (MRPA) ± magnetic resonance venography (MRV) for each clinical probability level (two-, three-, and four-level) for the nine most commonly used clinical prediction rules (CPRs) (Wells, Geneva, Miniati, and Charlotte). The study design is a review of observational studies with critical review of multiple cohort studies. The settings are acute care, emergency room care, and ambulatory care (inpatients and outpatients). Data were used to estimate pulmonary embolism (PE) pretest probability for each of the most commonly used CPRs at each probability level. Multiple-level LRs (positive, indeterminate, negative test) were generated and used to calculate posttest probabilities for MDCTPA, MDCTPA + CTV, MRPA, and MRPA + MRV from sensitivity and specificity results from Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED) II and PIOPED III for each clinical probability level for each CPR. Nomograms were also created. The LRs for a positive test result were higher for MRPA compared to MDCTPA without venography (76 vs 20) and with venography (42 vs 18). LRs for a negative test result were lower for MDCTPA compared to MRPA without venography (0.18 vs 0.22) and with venography (0.12 vs 0.15). In the three-level Wells score, the pretest clinical probability of PE for a low, moderate, and high clinical probability score is 5.7, 23, and 49. 
The posttest probability for an initially low clinical probability PE for a positive, indeterminate, and negative test result, respectively, for MDCTPA is 54, 5 and 1; for MDCTPA + CTV is 52, 2, and 0.7; for MRPA is 82, 6, and 1; and for MRPA + MRV is 72, 3, and 1; for an initially moderate clinical probability PE for MDCTPA is 86, 22, and 5; for MDCTPA + CTV is 85, 10, and 4; for MRPA is 96, 25, and 6; and for MRPA + MRV is 93, 14, and 4; and for an initially high clinical probability of PE for MDCTPA is 95, 47, and 15; for MDCTPA + CTV is 95, 27, and 10; for MRPA is 99, 52, and 17; and for MRPA + MRV is 98, 34, and 13. For a positive test result, LRs were considerably higher for MRPA compared to MDCTPA. However, both a positive MRPA and MDCTPA have LRs >10 and therefore can confirm the presence of PE. Performing venography reduced the LR for a positive and negative test for both MDCTPA and MRPA. The nomograms give posttest probabilities for a positive, indeterminate, or negative test result for MDCTPA and MRPA (with and without venography) for each clinical probability level for each of the CPR. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
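The nomogram arithmetic behind these posttest probabilities is the standard odds-times-likelihood-ratio update. A sketch that reproduces, for instance, the moderate-probability positive-MDCTPA figure quoted above (pretest 23%, LR+ 20 → posttest ≈ 86%); the function name is illustrative.

```python
def posttest_from_lr(pretest_prob, likelihood_ratio):
    """Convert pre-test probability to odds, multiply by the test's
    likelihood ratio, and convert back to a post-test probability."""
    odds = pretest_prob / (1 - pretest_prob) * likelihood_ratio
    return odds / (1 + odds)
```

The same update with a negative-test likelihood ratio drives the probability down: a low-probability Wells patient (5.7%) with a negative MDCTPA (LR- 0.18) lands at roughly the 1% figure reported above.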
Triglycerides and glucose index: a useful indicator of insulin resistance.
Unger, Gisela; Benozzi, Silvia Fabiana; Perruzza, Fernando; Pennacchiotti, Graciela Laura
2014-12-01
Insulin resistance assessment requires sophisticated methodology of difficult application. Therefore, different estimators for this condition have been suggested. The aim of this study was to evaluate the triglycerides and glucose (TyG) index as a marker of insulin resistance and to compare it to the triglycerides/HDL cholesterol ratio (TG/HDL-C), in subjects with and without metabolic syndrome (MS). An observational, cross-sectional study was conducted on 525 adults of a population from Bahia Blanca, Argentina, who were divided into two groups: with MS (n=89) and without MS (n=436). The discriminating capacities for MS of the TyG index, calculated as Ln(TG [mg/dL] × glucose [mg/dL]/2), and the TG/HDL-C ratio were evaluated. Pre-test probability for MS was 30%. The mean value of the TyG index was higher in the group with MS as compared to the group without MS, and its correlation with the TG/HDL-C ratio was good. The cut-off values for MS in the overall population were 8.8 for the TyG index (sensitivity=79%, specificity=86%) and 2.4 for the TG/HDL-C ratio (sensitivity=88%, specificity=72%). The positive likelihood ratios and post-test probabilities for these parameters were 5.8 vs 3.1 and 72% vs 58%, respectively. The cut-off point for the TyG index was 8.8 in men and 8.7 in women; the respective values for TG/HDL-C were 3.1 in men and 2.2 in women. The TyG index was a good discriminant of MS. Its simple calculation warrants its further study as an alternative marker of insulin resistance. Copyright © 2014 SEEN. Published by Elsevier Espana. All rights reserved.
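The index itself is a one-line calculation, using exactly the formula given in the abstract:

```python
import math

def tyg_index(triglycerides_mg_dl, glucose_mg_dl):
    """TyG index as defined in the abstract:
    Ln(TG [mg/dL] x glucose [mg/dL] / 2)."""
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2)
```

For example, TG = 150 mg/dL and glucose = 100 mg/dL give a TyG index of about 8.92, which exceeds the study's overall cut-off of 8.8.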
Code of Federal Regulations, 2010 CFR
2010-01-01
... that the facts that caused the deficient share-asset ratio no longer exist; and (ii) The likelihood of further depreciation of the share-asset ratio is not probable; and (iii) The return of the share-asset ratio to its normal limits within a reasonable time for the credit union concerned is probable; and (iv...
Levesque, Barrett G; Cipriano, Lauren E; Chang, Steven L; Lee, Keane K; Owens, Douglas K; Garber, Alan M
2010-03-01
The cost effectiveness of alternative approaches to the diagnosis of small-bowel Crohn's disease is unknown. This study evaluates whether computed tomographic enterography (CTE) is a cost-effective alternative to small-bowel follow-through (SBFT) and whether capsule endoscopy is a cost-effective third test in patients in whom a high suspicion of disease remains after 2 previous negative tests. A decision-analytic model was developed to compare the lifetime costs and benefits of each diagnostic strategy. Patients were considered with low (20%) and high (75%) pretest probability of small-bowel Crohn's disease. Effectiveness was measured in quality-adjusted life-years (QALYs) gained. Parameter assumptions were tested with sensitivity analyses. With a moderate to high pretest probability of small-bowel Crohn's disease, and a higher likelihood of isolated jejunal disease, follow-up evaluation with CTE has an incremental cost-effectiveness ratio of less than $54,000/QALY-gained compared with SBFT. The addition of capsule endoscopy after ileocolonoscopy and negative CTE or SBFT costs greater than $500,000 per QALY-gained in all scenarios. Results were not sensitive to costs of tests or complications but were sensitive to test accuracies. The cost effectiveness of strategies depends critically on the pretest probability of Crohn's disease and if the terminal ileum is examined at ileocolonoscopy. CTE is a cost-effective alternative to SBFT in patients with moderate to high suspicion of small-bowel Crohn's disease. The addition of capsule endoscopy as a third test is not a cost-effective third test, even in patients with high pretest probability of disease. Copyright 2010 AGA Institute. Published by Elsevier Inc. All rights reserved.
Xu, Lan; Verdoodt, Freija; Wentzensen, Nicolas; Bergeron, Christine; Arbyn, Marc
2015-01-01
Background: Women with a cytological diagnosis of atypical squamous cells, cannot exclude high-grade squamous intraepithelial lesion (ASC-H) are usually immediately referred to colposcopy. However, triage may reduce the burden of diagnostic work-up and avoid over-treatment. Methods: A meta-analysis was conducted to assess the accuracy of hrHPV testing, and testing for other molecular markers, to detect CIN of grade II or III or worse (CIN2+ or CIN3+) in women with ASC-H. An additional question assessed was whether triage is useful given the relatively high pre-triage probability of underlying precancer. Results: The pooled absolute sensitivity and specificity for CIN2+ of HC2 (derived from 19 studies) was 93% (95% CI: 89-95%) and 45% (95% CI: 41-50%), respectively. The p16INK4a staining (only 3 studies) has similar sensitivity (93%, 95% CI: 75-100%) but superior specificity (specificity ratio: 1.69) to HC2 for CIN2+. Testing for PAX1 gene methylation (only 1 study) showed a superior specificity of 95% (specificity ratio: 2.08). The average pre-test risk was 34% for CIN2+ and 20% for CIN3+. A negative HC2 result decreased this to 8% and 5%, whereas a positive result upgraded the risk to 47% and 28%. Conclusions: Due to the high probability of precancer in ASC-H, the utility of triage is limited. The usual recommendation to refer women with ASC-H to colposcopy is not altered by a positive triage test, whatever the test used. A negative hrHPV DNA or p16INK4a test may allow for repeat testing, but this recommendation will depend on local decision thresholds for referral. PMID:26618614
Jarmolowicz, David P; Sofis, Michael J; Darden, Alexandria C
2016-07-01
Although progressive ratio (PR) schedules have been used to explore effects of a range of reinforcer parameters (e.g., magnitude, delay), effects of reinforcer probability remain underexplored. The present project used independently progressing concurrent PR PR schedules to examine effects of reinforcer probability on PR breakpoint (highest completed ratio prior to a session-terminating 300-s pause) and response allocation. The probability of reinforcement on one lever remained at 100% across all conditions while the probability of reinforcement on the other lever was systematically manipulated (i.e., 100%, 50%, 25%, 12.5%, and a replication of 25%). Breakpoints systematically decreased with decreasing reinforcer probabilities while breakpoints on the control lever remained unchanged. Patterns of switching between the two levers were well described by a choice-by-choice unit price model that accounted for the hyperbolic discounting of the value of probabilistic reinforcers. Copyright © 2016 Elsevier B.V. All rights reserved.
Warkentin, Theodore E; Sheppard, Jo-Ann I; Linkins, Lori-Ann; Arnold, Donald M; Nazy, Ishac
2017-05-01
Heparin-induced thrombocytopenia (HIT) is a prothrombotic drug reaction caused by platelet-activating anti-PF4/heparin antibodies. Given time-sensitive treatment considerations, a rapid and accurate laboratory test for HIT antibodies is needed. To determine operating characteristics for the HemosIL® HIT-Ab(PF4/H), a rapid, on-demand, fully-automated, latex immunoturbidimetric assay (LIA), for diagnosis of HIT. We evaluated LIA sensitivity, specificity, negative (NPV) and positive predictive value (PPV), and negative (LR-) and positive likelihood ratio (LR+), using citrated plasma from 429 patients (prospective cohort study of 4Ts scoring; HIT, n=31) and from consecutive HIT patients (n=125), against the reference standard serotonin-release assay (SRA). Comparators included two PF4-dependent enzyme-immunoassays (EIAs). We used stratum-specific likelihood ratios (SSLRs) to determine how differing magnitudes of LIA-positivity influenced post-test probability of HIT. LIA operating characteristics were: sensitivity=97.4% (152/156); specificity=94.0% (374/398); PPV=55.6% (30/54); and NPV=99.7% (374/375). At manufacturers' cutoffs, LIA specificity and PPV were superior to the EIAs. Although a negative LIA pointed strongly against HIT (LR-, 0.034), the post-test probability was ~2% with a high 4Ts score. The LIA's LR+ was high (16.0), with SSLRs rising substantially with greater LIA-positivity: 5.7 (1.0-4.9 U/mL), 31 (5.0-15.9 U/mL), and 128 (≥16 U/mL). A LIA-positive result (at the 1.0 U/mL cutoff) indicated at least 24% HIT probability (low 4Ts score), rising to 90% with a high 4Ts score. Although approximately 1 in 40 SRA-positive patients tested LIA-negative, the LIA's high NPV and PPV indicate that this rapid assay is useful for the diagnostic evaluation of HIT, including in low pre-test situations. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Papantonopoulos, Georgios; Takahashi, Keiso; Bountis, Tasos; Loos, Bruno G
2014-01-01
There is no single clinical, microbiological, histopathological or genetic test, nor any combination of them, that discriminates aggressive periodontitis (AgP) from chronic periodontitis (CP) patients. We aimed to estimate probability density functions of clinical and immunologic datasets derived from periodontitis patients and construct artificial neural networks (ANNs) to correctly classify patients into the AgP or CP class. The fit of probability distributions on the datasets was tested by the Akaike information criterion (AIC). ANNs were trained by cross entropy (CE) values estimated between probabilities of showing certain levels of immunologic parameters and a reference mode probability proposed by kernel density estimation (KDE). The weight decay regularization parameter of the ANNs was determined by 10-fold cross-validation. Possible evidence for 2 clusters of patients on cross-sectional and longitudinal bone loss measurements was revealed by KDE. Two to 7 clusters were shown on datasets of CD4/CD8 ratio, CD3, monocyte, eosinophil, neutrophil and lymphocyte counts, IL-1, IL-2, IL-4, INF-γ and TNF-α level from monocytes, and antibody levels against A. actinomycetemcomitans (A.a.) and P. gingivalis (P.g.). ANNs gave 90%-98% accuracy in classifying patients into either AgP or CP. The best overall prediction was given by an ANN with CE of monocyte, eosinophil, neutrophil counts and CD4/CD8 ratio as inputs. ANNs can be powerful in classifying periodontitis patients into AgP or CP, when fed by CE values based on KDE. Therefore ANNs can be employed for accurate diagnosis of AgP or CP by using relatively simple and conveniently obtained parameters, like leukocyte counts in peripheral blood. This will allow clinicians to better adapt specific treatment protocols for their AgP and CP patients.
A novel approach for small sample size family-based association studies: sequential tests.
Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan
2011-08-01
In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
Guyot, Patricia; Ades, A E; Ouwens, Mario J N M; Welton, Nicky J
2012-02-01
The results of randomized controlled trials (RCTs) on time-to-event outcomes are usually reported as the median time to event and the Cox hazard ratio. These do not constitute the sufficient statistics required for meta-analysis or cost-effectiveness analysis, and their use in secondary analyses requires strong assumptions that may not have been adequately tested. In order to enhance the quality of secondary data analyses, we propose a method which derives from published Kaplan-Meier (KM) survival curves a close approximation to the original individual patient time-to-event data from which they were generated. We develop an algorithm that maps from digitised curves back to KM data by finding numerical solutions to the inverted KM equations, using where available information on number of events and numbers at risk. The reproducibility and accuracy of survival probabilities, median survival times and hazard ratios based on reconstructed KM data were assessed by comparing published statistics (survival probabilities, medians and hazard ratios) with statistics based on repeated reconstructions by multiple observers. The validation exercise established there was no material systematic error and that there was a high degree of reproducibility for all statistics. Accuracy was excellent for survival probabilities and medians; for hazard ratios, reasonable accuracy can only be obtained if at least numbers at risk or the total number of events are reported. The algorithm is a reliable tool for meta-analysis and cost-effectiveness analyses of RCTs reporting time-to-event data. It is recommended that all RCTs report information on numbers at risk and total number of events alongside KM curves.
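The core identity being inverted is the KM product-limit step S_i = S_{i-1} × (1 - d_i / n_i). A sketch of that single step only, solving for the implied number of events; the published algorithm additionally reconciles interval totals, censoring, and reported event counts, none of which is attempted here.

```python
def invert_km_step(n_at_risk, s_prev, s_curr):
    """Recover the implied number of events in one Kaplan-Meier step
    from the number at risk and two consecutive survival probabilities
    read off a digitised curve, using S_i = S_{i-1} * (1 - d_i / n_i)."""
    return round(n_at_risk * (1 - s_curr / s_prev))
```

For example, if 80 patients are at risk and the curve drops from 1.0 to 0.75, the step implies 20 events; chaining such steps interval by interval is the starting point of the reconstruction.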
St. Clair, Caryn; Norwitz, Errol R.; Woensdregt, Karlijn; Cackovic, Michael; Shaw, Julia A.; Malkus, Herbert; Ehrenkranz, Richard A.; Illuzzi, Jessica L.
2011-01-01
We sought to define the risk of neonatal respiratory distress syndrome (RDS) as a function of both lecithin/sphingomyelin (L/S) ratio and gestational age. Amniotic fluid L/S ratio data were collected from consecutive women undergoing amniocentesis for fetal lung maturity at Yale-New Haven Hospital from January 1998 to December 2004. Women were included in the study if they delivered a live-born, singleton, nonanomalous infant within 72 hours of amniocentesis. The probability of RDS was modeled using multivariate logistic regression with L/S ratio and gestational age as predictors. A total of 210 mother-neonate pairs (8 RDS, 202 non-RDS) met criteria for analysis. Both gestational age and L/S ratio were independent predictors of RDS. A probability of RDS of 3% or less was noted at an L/S ratio cutoff of ≥3.4 at 34 weeks, ≥2.6 at 36 weeks, ≥1.6 at 38 weeks, and ≥1.2 at term. Under 34 weeks of gestation, the prevalence of RDS was so high that a probability of 3% or less was not observed by this model. These data describe a means of stratifying the probability of neonatal RDS using both gestational age and the L/S ratio and may aid in clinical decision making concerning the timing of delivery. PMID:18773379
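The two-predictor logistic model described above has a simple functional form. The sketch below illustrates only that form: the coefficients are hypothetical placeholders chosen so that risk falls with both L/S ratio and gestational age, since the abstract does not report the fitted values.

```python
import math

def rds_probability(ls_ratio, gestational_age_weeks,
                    b0=20.0, b_ls=-3.0, b_ga=-0.4):
    """Logistic model of RDS probability with L/S ratio and gestational
    age as predictors. Coefficients are HYPOTHETICAL illustrations, not
    the paper's fitted values; only the model shape is shown."""
    z = b0 + b_ls * ls_ratio + b_ga * gestational_age_weeks
    return 1 / (1 + math.exp(-z))
```

Under any such fit with negative coefficients, predicted risk declines as either the L/S ratio or gestational age increases, consistent with the declining L/S cutoffs the study reports from 34 weeks to term.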
2014-01-01
Background: The objective of this study was to perform a systematic review and a meta-analysis in order to estimate the diagnostic accuracy of diffusion weighted imaging (DWI) in the preoperative assessment of deep myometrial invasion in patients with endometrial carcinoma. Methods: Studies evaluating DWI for the detection of deep myometrial invasion in patients with endometrial carcinoma were systematically searched for in MEDLINE, EMBASE, and the Cochrane Library from January 1995 to January 2014. Methodologic quality was assessed by using the Quality Assessment of Diagnostic Accuracy Studies tool. Bivariate random-effects meta-analytic methods were used to obtain pooled estimates of sensitivity, specificity, diagnostic odds ratio (DOR) and receiver operating characteristic (ROC) curves. The study also evaluated the clinical utility of DWI in preoperative assessment of deep myometrial invasion. Results: Seven studies enrolling a total of 320 individuals met the study inclusion criteria. The summary area under the ROC curve was 0.91. There was no evidence of publication bias (P = 0.90, bias coefficient analysis). Sensitivity and specificity of DWI for detection of deep myometrial invasion across all studies were 0.90 and 0.89, respectively. Positive and negative likelihood ratios with DWI were 8 and 0.11, respectively. In patients with high pre-test probabilities, DWI enabled confirmation of deep myometrial invasion; in patients with low pre-test probabilities, DWI enabled exclusion of deep myometrial invasion. The worst case scenario (pre-test probability, 50%) post-test probabilities were 89% and 10% for positive and negative DWI results, respectively. Conclusion: DWI has high sensitivity and specificity for detecting deep myometrial invasion and, more importantly, can reliably rule out deep myometrial invasion.
Therefore, it would be worthwhile to add a DWI sequence to the standard MRI protocols in preoperative evaluation of endometrial cancer in order to detect deep myometrial invasion, which, along with other poor prognostic factors such as age, tumor grade, and LVSI, would be useful in stratifying high-risk groups, thereby helping to tailor the surgical approach in patients with low-risk endometrial carcinoma. PMID:25608571
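The pre-test/post-test reasoning in this abstract is an odds-form Bayes update with the pooled likelihood ratios reported above (LR+ = 8, LR− = 0.11); a minimal sketch:

```python
def post_test_probability(pre_test_p, likelihood_ratio):
    """Convert pre-test probability to post-test probability via odds."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Worst-case 50% pre-test probability with the pooled LRs from the abstract:
p_pos = post_test_probability(0.50, 8.0)    # positive DWI, LR+ = 8
p_neg = post_test_probability(0.50, 0.11)   # negative DWI, LR- = 0.11
assert round(p_pos * 100) == 89
assert round(p_neg * 100) == 10
```

The 50% worst-case pre-test probability reproduces the 89% and 10% post-test probabilities quoted in the abstract.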
Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.
O'Connor, B P
1999-11-01
This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
NASA Astrophysics Data System (ADS)
Jiang, Quan; Zhong, Shan; Cui, Jie; Feng, Xia-Ting; Song, Leibo
2016-12-01
We investigated the statistical characteristics and probability distribution of the mechanical parameters of natural rock using triaxial compression tests. Twenty cores of Jinping marble were tested at each of five levels of confining stress (i.e., 5, 10, 20, 30, and 40 MPa). From these full stress-strain data, we summarized the numerical characteristics and determined the probability distribution form of several important mechanical parameters, including deformational parameters, characteristic strength, characteristic strains, and failure angle. The statistical proofs relating to the mechanical parameters of rock presented new information about the marble's probabilistic distribution characteristics. The normal and log-normal distributions were appropriate for describing random strengths of rock; the coefficients of variation of the peak strengths had no relationship to the confining stress; the only acceptable random distribution for both Young's elastic modulus and Poisson's ratio was the log-normal function; and the cohesive strength had a different probability distribution pattern than the frictional angle. The triaxial tests and statistical analysis also provided experimental evidence for deciding the minimum reliable number of experimental samples and for picking appropriate parameter distributions to use in reliability calculations for rock engineering.
ERIC Educational Resources Information Center
Santi, Terri
This book contains a classroom-tested approach to the teaching of problem solving to all students in Grades 6-8, regardless of ability. Information on problem solving in general is provided, then mathematical problems on logic, exponents, fractions, pre-algebra, algebra, geometry, number theory, set theory, ratio, proportion, percent, probability,…
Lake bed classification using acoustic data
Yin, Karen K.; Li, Xing; Bonde, John; Richards, Carl; Cholwek, Gary
1998-01-01
As part of our effort to identify the lake bed surficial substrates using remote sensing data, this work designs pattern classifiers by multivariate statistical methods. Probability distribution of the preprocessed acoustic signal is analyzed first. A confidence region approach is then adopted to improve the design of the existing classifier. A technique for further isolation is proposed which minimizes the expected loss from misclassification. The devices constructed are applicable for real-time lake bed categorization. A minimax approach is suggested to treat more general cases where the a priori probability distribution of the substrate types is unknown. Comparison of the suggested methods with the traditional likelihood ratio tests is discussed.
NASA Astrophysics Data System (ADS)
Sasaki, K.; Kikuchi, S.
2014-10-01
In this work, we compared the sticking probabilities of Cu, Zn, and Sn atoms in magnetron sputtering deposition of CZTS films. The evaluations of the sticking probabilities were based on the temporal decays of the Cu, Zn, and Sn densities in the afterglow, which were measured by laser-induced fluorescence spectroscopy. Linear relationships were found between the discharge pressure and the lifetimes of the atom densities. According to Chantry, the sticking probability is evaluated from the extrapolated lifetime at zero pressure, which is given by 2 l0 (2 - α) / (v α), with α, l0, and v being the sticking probability, the ratio between the volume and the surface area of the chamber, and the mean velocity, respectively. The ratio of the extrapolated lifetimes observed experimentally was τCu : τSn : τZn = 1 : 1.3 : 1. This ratio coincides well with the ratio of the reciprocals of their mean velocities (1/vCu : 1/vSn : 1/vZn = 1.00 : 1.37 : 1.01). Therefore, the present experimental result suggests that the sticking probabilities of Cu, Sn, and Zn are roughly the same.
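The sticking probability can be recovered by algebraically inverting the quoted Chantry expression for the zero-pressure lifetime, τ0 = 2 l0 (2 − α)/(v α). The numerical values below are illustrative placeholders, not measured quantities:

```python
def sticking_probability(tau0, l0, v):
    """Invert tau0 = 2*l0*(2 - alpha) / (v*alpha) for the sticking
    probability alpha; l0 is the chamber volume-to-surface ratio and
    v the mean atom velocity."""
    return 4.0 * l0 / (tau0 * v + 2.0 * l0)

# Round-trip check with illustrative (not measured) values:
alpha, l0, v = 0.3, 0.05, 500.0
tau0 = 2.0 * l0 * (2.0 - alpha) / (v * alpha)
assert abs(sticking_probability(tau0, l0, v) - alpha) < 1e-12
```

Note that if α is the same for all three species, τ0 is proportional to 1/v, which is exactly the proportionality the measured lifetime ratio supports.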
Hall, Peter S; McCabe, Christopher; Stein, Robert C; Cameron, David
2012-01-04
Multi-parameter genomic tests identify patients with early-stage breast cancer who are likely to derive little benefit from adjuvant chemotherapy. These tests can potentially spare patients the morbidity from unnecessary chemotherapy and reduce costs. However, the costs of the test must be balanced against the health benefits and cost savings produced. This economic evaluation compared genomic test-directed chemotherapy using the Oncotype DX 21-gene assay with chemotherapy for all eligible patients with lymph node-positive, estrogen receptor-positive early-stage breast cancer. We performed a cost-utility analysis using a state transition model to calculate expected costs and benefits over the lifetime of a cohort of women with estrogen receptor-positive lymph node-positive breast cancer from a UK perspective. Recurrence rates for Oncotype DX-selected risk groups were derived from parametric survival models fitted to data from the Southwest Oncology Group 8814 trial. The primary outcome was the incremental cost-effectiveness ratio, expressed as the cost (in 2011 GBP) per quality-adjusted life-year (QALY). Confidence in the incremental cost-effectiveness ratio was expressed as a probability of cost-effectiveness and was calculated using Monte Carlo simulation. Model parameters were varied deterministically and probabilistically in sensitivity analysis. Value of information analysis was used to rank priorities for further research. The incremental cost-effectiveness ratio for Oncotype DX-directed chemotherapy using a recurrence score cutoff of 18 was £5529 (US $8852) per QALY. The probability that test-directed chemotherapy is cost-effective was 0.61 at a willingness-to-pay threshold of £30 000 per QALY. Results were sensitive to the recurrence rate, long-term anthracycline-related cardiac toxicity, quality of life, test cost, and the time horizon. 
The highest priority for further research identified by value of information analysis is the recurrence rate in test-selected subgroups. There is substantial uncertainty regarding the cost-effectiveness of Oncotype DX-directed chemotherapy. It is particularly important that future research studies to inform cost-effectiveness-based decisions collect long-term outcome data.
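A toy sketch of the probabilistic sensitivity analysis described here: the probability of cost-effectiveness is the fraction of Monte Carlo draws with positive net monetary benefit at the willingness-to-pay threshold. The distributions below are invented stand-ins, not the study's state-transition model:

```python
import random

random.seed(0)
wtp = 30_000   # willingness-to-pay threshold, GBP per QALY
n = 10_000
ce_count = 0
for _ in range(n):
    # HYPOTHETICAL uncertainty distributions for incremental cost and QALYs:
    d_cost = random.gauss(1_000, 2_000)
    d_qaly = random.gauss(0.18, 0.30)
    # Cost-effective when net monetary benefit is positive:
    if wtp * d_qaly - d_cost > 0:
        ce_count += 1
prob_ce = ce_count / n
assert 0.0 < prob_ce < 1.0
```

The study's reported 0.61 probability of cost-effectiveness is exactly this kind of quantity, computed from its own model rather than these placeholder distributions.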
Health Professionals Prefer to Communicate Risk-Related Numerical Information Using "1-in-X" Ratios.
Sirota, Miroslav; Juanchich, Marie; Petrova, Dafina; Garcia-Retamero, Rocio; Walasek, Lukasz; Bhatia, Sudeep
2018-04-01
Previous research has shown that format effects, such as the "1-in-X" effect (whereby "1-in-X" ratios lead to a higher perceived probability than "N-in-N*X" ratios), alter perceptions of medical probabilities. We do not know, however, how prevalent this effect is in practice; i.e., how often health professionals use the "1-in-X" ratio. We assembled 4 different sources of evidence, involving experimental work and corpus studies, to examine the use of "1-in-X" and other numerical formats quantifying probability. Our results revealed that the use of the "1-in-X" ratio is prevalent and that health professionals prefer this format over other numerical formats (i.e., the "N-in-N*X", %, and decimal formats). In Study 1, UK family physicians preferred to communicate prenatal risk using a "1-in-X" ratio (80.4%, n = 131) across different risk levels and regardless of patients' numeracy levels. In Study 2, a sample from the UK adult population (n = 203) reported that most GPs (60.6%) preferred to use "1-in-X" ratios compared with other formats. In Study 3, "1-in-X" ratios were the most commonly used format in a set of randomly sampled drug leaflets describing the risk of side effects (100%, n = 94). In Study 4, the "1-in-X" format was the most commonly used numerical expression of medical probabilities or frequencies on the UK's NHS website (45.7%, n = 2,469 sentences). The prevalent use of "1-in-X" ratios magnifies the chances of increased subjective probability. Further research should establish the clinical significance of the "1-in-X" effect.
Current-State Constrained Filter Bank for Wald Testing of Spacecraft Conjunctions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2012-01-01
We propose a filter bank consisting of an ordinary current-state extended Kalman filter, and two similar but constrained filters: one is constrained by a null hypothesis that the miss distance between two conjuncting spacecraft is inside their combined hard body radius at the predicted time of closest approach, and one is constrained by an alternative complementary hypothesis. The unconstrained filter is the basis of an initial screening for close approaches of interest. Once the initial screening detects a possibly risky conjunction, the unconstrained filter also governs measurement editing for all three filters, and predicts the time of closest approach. The constrained filters operate only when conjunctions of interest occur. The computed likelihoods of the innovations of the two constrained filters form a ratio for a Wald sequential probability ratio test. The Wald test guides risk mitigation maneuver decisions based on explicit false alarm and missed detection criteria. Since only current-state Kalman filtering is required to compute the innovations for the likelihood ratio, the present approach does not require the mapping of probability density forward to the time of closest approach. Instead, the hard-body constraint manifold is mapped to the filter update time by applying a sigma-point transformation to a projection function. Although many projectors are available, we choose one based on Lambert-style differential correction of the current-state velocity. We have tested our method using a scenario based on the Magnetospheric Multi-Scale mission, scheduled for launch in late 2014. This mission involves formation flight in highly elliptical orbits of four spinning spacecraft equipped with antennas extending 120 meters tip-to-tip. Eccentricities range from 0.82 to 0.91, and close approaches generally occur in the vicinity of perigee, where rapid changes in geometry may occur. 
Testing the method using two 12,000-case Monte Carlo simulations, we found the method achieved a missed detection rate of 0.1%, and a false alarm rate of 2%.
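The explicit false-alarm and missed-detection criteria enter the Wald test through its classical threshold approximations; a minimal sketch (the decision labels and numeric inputs are illustrative, not the mission's configuration):

```python
import math

def wald_thresholds(alpha, beta):
    """Wald's approximate SPRT thresholds in log-likelihood-ratio space,
    for false-alarm rate alpha and missed-detection rate beta."""
    upper = math.log((1.0 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1.0 - alpha))   # cross below: accept H0
    return lower, upper

def sprt_step(cum_llr, llr_increment, lower, upper):
    """One sequential update; labels are illustrative for a conjunction test."""
    cum_llr += llr_increment
    if cum_llr >= upper:
        return cum_llr, "maneuver"      # collision hypothesis accepted
    if cum_llr <= lower:
        return cum_llr, "no maneuver"   # safe-miss hypothesis accepted
    return cum_llr, "continue"          # keep collecting tracking data

low, up = wald_thresholds(alpha=0.02, beta=0.001)
assert low < 0 < up
cum, decision = sprt_step(0.0, 1.0, low, up)
assert decision == "continue"
```

The targets alpha = 2% and beta = 0.1% mirror the false alarm and missed detection rates reported for the Monte Carlo simulations above.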
[How to screen for pheochromocytoma, primary aldosteronism and Cushing's syndrome].
Meyer, Patrick
2009-01-07
Pheochromocytoma, primary aldosteronism and Cushing's syndrome are uncommon disorders and are difficult to diagnose because laboratory tests lack validation and specificity. Despite these limitations, practice guidelines are proposed to standardize the screening procedure. The most reliable method to diagnose pheochromocytoma is the measurement of plasma and/or urinary metanephrines and normetanephrines, depending on the pre-test probability of the disease. The approach for detection of primary aldosteronism is based on the aldosterone-renin ratio under standard conditions. Finally, three tests are available to establish the diagnosis of Cushing's syndrome: 24-h urinary free cortisol excretion, the low-dose dexamethasone suppression test, and the recent and promising late-evening salivary cortisol test.
Establishing a sample-to-cut-off ratio for lab-diagnosis of hepatitis C virus in Indian context.
Tiwari, Aseem K; Pandey, Prashant K; Negi, Avinash; Bagga, Ruchika; Shanker, Ajay; Baveja, Usha; Vimarsh, Raina; Bhargava, Richa; Dara, Ravi C; Rawat, Ganesh
2015-01-01
Lab-diagnosis of hepatitis C virus (HCV) is based on detecting specific antibodies by enzyme immuno-assay (EIA) or chemiluminescence immuno-assay (CIA). The Centers for Disease Control and Prevention reported that signal-to-cut-off (s/co) ratios in anti-HCV antibody tests like EIA/CIA can be used to predict the probable result of a supplemental test; above a certain s/co value the result is most likely a true HCV-positive, and below that s/co value it is most likely a false positive. A prospective study was undertaken in patients in a tertiary care setting to establish this "certain" s/co value. The study was carried out in consecutive patients requiring HCV testing for screening/diagnosis and medical management. These samples were tested for anti-HCV on CIA (VITROS(®) Anti-HCV assay, Ortho-Clinical Diagnostics, New Jersey) for calculating the s/co value. The supplemental nucleic acid test used was polymerase chain reaction (PCR) (Abbott). PCR test results were used to define true negatives, false negatives, true positives, and false positives. Performance of different putative s/co ratios versus PCR was measured using sensitivity, specificity, positive predictive value and negative predictive value, and the most appropriate s/co was chosen on the basis of the highest specificity at a sensitivity of at least 95%. An s/co ratio of ≥6 worked out to be over 95% sensitive and almost 92% specific in 438 consecutive patient samples tested. The s/co ratio of six can be used for lab-diagnosis of HCV infection; those with s/co higher than six can be diagnosed to have HCV infection without any need for supplemental assays.
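The cutoff search described here amounts to sweeping candidate s/co values and scoring each against the PCR reference; a sketch on invented toy data (not the study's 438 samples):

```python
def cutoff_performance(sco_values, pcr_positive, cutoff):
    """Sensitivity and specificity of calling a sample HCV-positive when
    its s/co ratio meets the cutoff, with PCR as the reference standard."""
    tp = fn = tn = fp = 0
    for sco, positive in zip(sco_values, pcr_positive):
        called_pos = sco >= cutoff
        if positive and called_pos:
            tp += 1
        elif positive:
            fn += 1
        elif called_pos:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: PCR-confirmed positives cluster at high s/co values.
sco = [0.2, 0.5, 1.0, 3.0, 7.0, 9.0, 12.0, 15.0]
pcr = [False, False, False, False, True, True, True, True]
sens, spec = cutoff_performance(sco, pcr, cutoff=6.0)
assert sens == 1.0 and spec == 1.0
```

Repeating this over a grid of cutoffs and keeping the highest specificity subject to sensitivity ≥ 95% reproduces the selection rule the abstract describes.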
2013-01-01
Background Falls among the elderly are a major public health concern. Therefore, the possibility of a modeling technique which could better estimate fall probability is both timely and needed. Using biomedical, pharmacological and demographic variables as predictors, latent class analysis (LCA) is demonstrated as a tool for the prediction of falls among community dwelling elderly. Methods Using a retrospective data-set, a two-step LCA modeling approach was employed. First, we looked for the optimal number of latent classes for the seven medical indicators, along with the patients’ prescription medication and three covariates (age, gender, and number of medications). Second, the appropriate latent class structure, with the covariates, was modeled on the distal outcome (fall/no fall). The default estimator was maximum likelihood with robust standard errors. The Pearson chi-square, likelihood ratio chi-square, BIC, Lo-Mendell-Rubin Adjusted Likelihood Ratio test and the bootstrap likelihood ratio test were used for model comparisons. Results A review of the model fit indices with covariates shows that a six-class solution was preferred. The predictive probability for latent classes ranged from 84% to 97%. Entropy, a measure of classification accuracy, was good at 90%. Specific prescription medications were found to strongly influence group membership. Conclusions The LCA method was effective at finding relevant subgroups within a heterogeneous at-risk population for falling. This study demonstrated that LCA offers researchers a valuable tool to model medical data. PMID:23705639
Chan, Cheng Leng; Rudrappa, Sowmya; Ang, Pei San; Li, Shu Chuen; Evans, Stephen J W
2017-08-01
The ability to detect safety concerns from spontaneous adverse drug reaction reports in a timely and efficient manner remains important in public health. This paper explores the behaviour of the Sequential Probability Ratio Test (SPRT) and ability to detect signals of disproportionate reporting (SDRs) in the Singapore context. We used SPRT with a combination of two hypothesised relative risks (hRRs) of 2 and 4.1 to detect signals of both common and rare adverse events in our small database. We compared SPRT with other methods in terms of number of signals detected and whether labelled adverse drug reactions were detected or the reaction terms were considered serious. The other methods used were reporting odds ratio (ROR), Bayesian Confidence Propagation Neural Network (BCPNN) and Gamma Poisson Shrinker (GPS). The SPRT produced 2187 signals in common with all methods, 268 unique signals, and 70 signals in common with at least one other method, and did not produce signals in 178 cases where two other methods detected them, and there were 403 signals unique to one of the other methods. In terms of sensitivity, ROR performed better than other methods, but the SPRT method found more new signals. The performances of the methods were similar for negative predictive value and specificity. Using a combination of hRRs for SPRT could be a useful screening tool for regulatory agencies, and more detailed investigation of the medical utility of the system is merited.
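For Poisson-distributed report counts, one common form of the SPRT log-likelihood ratio compares a hypothesised relative risk against relative risk 1; the sketch below assumes that Poisson form (the counts are invented, not Singapore data):

```python
import math

def poisson_sprt_llr(observed, expected, h_rr):
    """Log-likelihood ratio for an SPRT on Poisson report counts,
    testing a hypothesised relative risk h_rr against relative risk 1:
    LLR = n*ln(h_rr) - E*(h_rr - 1)."""
    return observed * math.log(h_rr) - expected * (h_rr - 1.0)

# Hypothetical drug-event pair: 12 reports observed where 3 were expected.
llr_common = poisson_sprt_llr(12, 3.0, 2.0)   # hRR = 2 (common events)
llr_rare = poisson_sprt_llr(12, 3.0, 4.1)     # hRR = 4.1 (rare events)
assert llr_common > 0 and llr_rare > 0
```

Running both hRR values in parallel, as the abstract describes, lets a single screen remain sensitive to both modest and large disproportionalities.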
Nakagawa, Yoshihide; Amino, Mari; Inokuchi, Sadaki; Hayashi, Satoshi; Wakabayashi, Tsutomu; Noda, Tatsuya
2017-04-01
Amplitude spectral area (AMSA), an index for analysing ventricular fibrillation (VF) waveforms, is thought to predict the return of spontaneous circulation (ROSC) after electric shocks, but its validity is unconfirmed. We developed an equation to predict ROSC, where the change in AMSA (ΔAMSA) is added to AMSA measured immediately before the first shock (AMSA1). We examine the validity of this equation by comparing it with the conventional AMSA1-only equation. We retrospectively investigated 285 VF patients given prehospital electric shocks by emergency medical services. ΔAMSA was calculated by subtracting AMSA1 from last AMSA immediately before the last prehospital electric shock. Multivariate logistic regression analysis was performed using post-shock ROSC as a dependent variable. Analysis data were subjected to receiver operating characteristic curve analysis, goodness-of-fit testing using a likelihood ratio test, and the bootstrap method. AMSA1 (odds ratio (OR) 1.151, 95% confidence interval (CI) 1.086-1.220) and ΔAMSA (OR 1.289, 95% CI 1.156-1.438) were independent factors influencing ROSC induction by electric shock. Area under the curve (AUC) for predicting ROSC was 0.851 for AMSA1-only and 0.891 for AMSA1+ΔAMSA. Compared with the AMSA1-only equation, the AMSA1+ΔAMSA equation had significantly better goodness-of-fit (likelihood ratio test P<0.001) and showed good fit in the bootstrap method. Post-shock ROSC was accurately predicted by adding ΔAMSA to AMSA1. AMSA-based ROSC prediction enables application of electric shock to only those patients with high probability of ROSC, instead of interrupting chest compressions and delivering unnecessary shocks to patients with low probability of ROSC. Copyright © 2017 Elsevier B.V. All rights reserved.
Rosas, Samuel; Krill, Michael K; Amoo-Achampong, Kelms; Kwon, KiHyun; Nwachukwu, Benedict U; McCormick, Frank
2017-08-01
Clinical examination of the shoulder joint has gained attention as clinicians aim to use an evidence-based examination of the biceps tendon, with the desire for a proper diagnosis while minimizing costly imaging procedures. The purpose of this study is to create a decision tree analysis that enables the development of a clinical algorithm for diagnosing long head of biceps (LHB) pathology. A literature review of Level I and II diagnostic studies was conducted to extract characteristics of clinical tests for LHB pathology through a systematic review of PubMed, Medline, Ovid, and Cochrane Review databases. Tests were combined in series and parallel to determine sensitivities and specificities, and positive and negative likelihood ratios were determined for each combination using a subjective pretest probability. The "gold standard" for diagnosis in all included studies was arthroscopy or arthrotomy. The optimal testing modality was use of the uppercut test combined with the tenderness to palpation of the biceps tendon test. This combination achieved a sensitivity of 88.4% when performed in parallel and a specificity of 93.8% when performed in series. These tests used in combination yield greater post-test probability accuracy than any single test. Performing the uppercut test and biceps groove tenderness to palpation test together has the highest sensitivity and specificity of known physical examination maneuvers to aid in the diagnosis of LHB pathology compared with diagnostic arthroscopy (practical, evidence-based, comprehensive examination). A decision tree analysis aids the practical, evidence-based, comprehensive examination by refining post-test diagnostic accuracy from the ordinal-scale pretest probability. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
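Under a conditional-independence assumption, combining two tests in parallel and in series follows the standard formulas sketched below; the input sensitivities and specificities are illustrative, not the values extracted by the review:

```python
def combine_parallel(sens1, spec1, sens2, spec2):
    """Parallel: positive if EITHER test is positive (raises sensitivity,
    lowers specificity). Assumes the tests are conditionally independent."""
    sens = 1.0 - (1.0 - sens1) * (1.0 - sens2)
    spec = spec1 * spec2
    return sens, spec

def combine_series(sens1, spec1, sens2, spec2):
    """Series: positive only if BOTH tests are positive (raises specificity,
    lowers sensitivity)."""
    sens = sens1 * sens2
    spec = 1.0 - (1.0 - spec1) * (1.0 - spec2)
    return sens, spec

# Illustrative single-test characteristics for the two maneuvers:
s_par, sp_par = combine_parallel(0.70, 0.80, 0.60, 0.70)
s_ser, sp_ser = combine_series(0.70, 0.80, 0.60, 0.70)
assert s_par > 0.70     # parallel use boosts sensitivity
assert sp_ser > 0.80    # series use boosts specificity
```

This is why the abstract reports the sensitivity figure for the parallel combination and the specificity figure for the series combination.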
Can target-to-pons ratio be used as a reliable method for the analysis of [11C]PIB brain scans?
Edison, P; Hinz, R; Ramlackhansingh, A; Thomas, J; Gelosa, G; Archer, H A; Turkheimer, F E; Brooks, D J
2012-04-15
[(11)C]PIB is the most widely used PET imaging marker for amyloid in dementia studies. In the majority of studies the cerebellum has been used as a reference region. However, cerebellar amyloid may be present in genetic Alzheimer's (AD), cerebral amyloid angiopathy and prion diseases. Therefore, we investigated whether the pons could be used as an alternative reference region for the analysis of [(11)C]PIB binding in AD. The aims of the study were to: 1) Evaluate the pons as a reference region using arterial plasma input function and Logan graphical analysis of binding. 2) Assess the power of target-to-pons ratios to discriminate controls from AD subjects. 3) Determine the test-retest reliability in AD subjects. 4) Demonstrate the application of target-to-pons ratio in subjects with elevated cerebellar [(11)C]PIB binding. 12 sporadic AD subjects aged 65 ± 4.5 yrs with a mean MMSE 21.4 ± 4 and 10 age-matched control subjects had [(11)C]PIB PET with arterial blood sampling. Three additional subjects (two pre-symptomatic presenilin-1 mutation carriers and one with probable familial AD) were also studied. Object maps were created by segmenting individual MRIs and spatially transforming the gray matter images into standard stereotaxic MNI space and then superimposing a probabilistic atlas. Cortical [(11)C]PIB binding was assessed with an ROI (region of interest) analysis. Parametric maps of the volume of distribution (V(T)) were generated with Logan analysis. Additionally, parametric maps of the 60-90 min target-to-cerebellar ratio (RATIO(CER)) and the 60-90 min target-to-pons ratio (RATIO(PONS)) were computed. All three approaches were able to differentiate AD from controls (p<0.0001, nonparametric Wilcoxon rank sum test) in the target regions, with RATIO(CER) and RATIO(PONS) differences higher than V(T) with use of an arterial input function. 
All methods had good reproducibility (intraclass correlation coefficient > 0.83); RATIO(CER) performed best, closely followed by RATIO(PONS). The two subjects with presenilin-1 mutations and the probable familial AD case showed no significant differences in cortical binding using RATIO(CER), but the RATIO(PONS) approach revealed higher [(11)C]PIB binding in cortex and cerebellum. This study established the 60-90 min target-to-pons ratio as a reliable method of analysis in [(11)C]PIB PET studies where the cerebellum is not an appropriate reference region. Copyright © 2012 Elsevier Inc. All rights reserved.
Maximum Likelihood Analysis in the PEN Experiment
NASA Astrophysics Data System (ADS)
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped-beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ν, π+ → μ+ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo-verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
NASA Technical Reports Server (NTRS)
Wing, David J.
1994-01-01
A static investigation was conducted in the static test facility of the Langley 16-Foot Transonic Tunnel of two thrust-vectoring concepts which utilize fluidic mechanisms for deflecting the jet of a two-dimensional convergent-divergent nozzle. One concept involved using the Coanda effect to turn a sheet of injected secondary air along a curved sidewall flap and, through entrainment, draw the primary jet in the same direction to produce yaw thrust vectoring. The other concept involved deflecting the primary jet to produce pitch thrust vectoring by injecting secondary air through a transverse slot in the divergent flap, creating an oblique shock in the divergent channel. Utilizing the Coanda effect to produce yaw thrust vectoring was largely unsuccessful. Small vector angles were produced at low primary nozzle pressure ratios, probably because the momentum of the primary jet was low. Significant pitch thrust vector angles were produced by injecting secondary flow through a slot in the divergent flap. Thrust vector angle decreased with increasing nozzle pressure ratio but moderate levels were maintained at the highest nozzle pressure ratio tested. Thrust performance generally increased at low nozzle pressure ratios and decreased near the design pressure ratio with the addition of secondary flow.
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
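The diagnostic-accuracy quantities this review covers all derive from a single 2×2 table; a compact sketch with invented counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Core diagnostic-accuracy statistics from a 2x2 contingency table."""
    sens = tp / (tp + fn)                       # true positive rate
    spec = tn / (tn + fp)                       # true negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    lr_pos = sens / (1.0 - spec)                # positive likelihood ratio
    lr_neg = (1.0 - sens) / spec                # negative likelihood ratio
    return {"sensitivity": sens, "specificity": spec,
            "accuracy": accuracy, "LR+": lr_pos, "LR-": lr_neg}

# Invented counts for illustration:
m = diagnostic_metrics(tp=90, fp=10, fn=10, tn=90)
assert m["sensitivity"] == 0.9 and m["specificity"] == 0.9
assert round(m["LR+"], 1) == 9.0
```

The likelihood ratios computed this way are the bridge between test characteristics and the pre-test/post-test probability reasoning discussed elsewhere in this review.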
Prudhomme O'Meara, Wendy; Mohanan, Manoj; Laktabai, Jeremiah; Lesser, Adriane; Platt, Alyssa; Maffioli, Elisa; Turner, Elizabeth L; Menya, Diana
2016-01-01
Objectives There is an urgent need to understand how to improve targeting of artemisinin combination therapy (ACT) to patients with confirmed malaria infection, including subsidised ACTs sold over-the-counter. We hypothesised that offering an antimalarial subsidy conditional on a positive malaria rapid diagnostic test (RDT) would increase uptake of testing and improve rational use of ACTs. Methods We designed a 2×2 factorial randomised experiment evaluating 2 levels of subsidy for RDTs and ACTs. Between July 2014 and June 2015, 444 individuals with a malaria-like illness who had not sought treatment were recruited from their homes. We used scratch cards to allocate participants into 4 groups in a ratio of 1:1:1:1. Participants were eligible for an unsubsidised or fully subsidised RDT and 1 of 2 levels of ACT subsidy (current retail price or an additional subsidy conditional on a positive RDT). Treatment decisions were documented 1 week later. Our primary outcome was uptake of malaria testing. Secondary outcomes evaluated ACT consumption among those with a negative test, a positive test or no test. Results Offering a free RDT increased the probability of testing by 18.6 percentage points (adjusted probability difference (APD), 95% CI 5.9 to 31.3). An offer of a conditional ACT subsidy did not have an additional effect on the probability of malaria testing when the RDT was free (APD=2.7; 95% CI −8.6 to 14.1). However, receiving the conditional ACT subsidy increased the probability of taking an ACT following a positive RDT by 19.5 percentage points (APD, 95% CI 2.2 to 36.8). Overall, the proportion who took ACT following a negative test was lower than the proportion who took ACT without being tested, indicating improved targeting among those who were tested. Conclusions Both subsidies improved appropriate fever management, demonstrating the impact of these costs on decision making. However, the conditional ACT subsidy did not increase testing.
We conclude that each of the subsidies primarily impacts the most immediate decision. Trial registration number NCT02199977. PMID:28588946
Space debris detection in optical image sequences.
Xi, Jiangbo; Wen, Desheng; Ersoy, Okan K; Yi, Hongwei; Yao, Dalei; Song, Zongxi; Xi, Shaobo
2016-10-01
We present a high-accuracy, low false-alarm rate, and low computational-cost methodology for removing stars and noise and detecting space debris with low signal-to-noise ratio (SNR) in optical image sequences. First, time-index filtering and bright star intensity enhancement are implemented to remove stars and noise effectively. Then, a multistage quasi-hypothesis-testing method is proposed to detect the pieces of space debris with continuous and discontinuous trajectories. For this purpose, a time-index image is defined and generated. Experimental results show that the proposed method can detect space debris effectively without any false alarms. When the SNR is higher than or equal to 1.5, the detection probability can reach 100%, and when the SNR is as low as 1.3, 1.2, and 1, it can still achieve 99%, 97%, and 85% detection probabilities, respectively. Additionally, two large sets of image sequences are tested to show that the proposed method performs stably and effectively.
Brick tunnel randomization and the momentum of the probability mass.
Kuznetsova, Olga M
2015-12-30
The allocation space of an unequal-allocation permuted block randomization can be quite wide. The development of unequal-allocation procedures with a narrower allocation space, however, is complicated by the need to preserve the unconditional allocation ratio at every step (the allocation ratio preserving (ARP) property). When the allocation paths are depicted on the K-dimensional unitary grid, where allocation to the l-th treatment is represented by a step along the l-th axis, l = 1 to K, the ARP property can be expressed in terms of the center of the probability mass after i allocations. Specifically, for an ARP allocation procedure that randomizes subjects to K treatment groups in w1 : ⋯ : wK ratio, w1 + ⋯ + wK = 1, the coordinates of the center of the mass are (w1·i, …, wK·i). In this paper, the momentum with respect to the center of the probability mass (expected imbalance in treatment assignments) is used to compare ARP procedures in how closely they approximate the target allocation ratio. It is shown that the two-arm and three-arm brick tunnel randomizations (BTR) are the ARP allocation procedures with the tightest allocation space among all allocation procedures with the same allocation ratio; the two-arm BTR is the minimum-momentum two-arm ARP allocation procedure. Resident probabilities of two-arm and three-arm BTR are analytically derived from the coordinates of the center of the probability mass; the existence of the respective transition probabilities is proven. The probability of deterministic assignments with BTR is found to be generally acceptable. Copyright © 2015 John Wiley & Sons, Ltd.
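The ARP property is easy to verify by enumeration for the simplest unequal-allocation case, a 1:2 permuted block of size 3. This sketch only illustrates the center-of-mass condition E[count on arm l after i allocations] = wl·i; BTR itself is a different procedure with a narrower allocation space:

```python
from itertools import permutations

# All equally likely orderings of a 1:2 permuted block (one 'A', two 'B'):
blocks = set(permutations("ABB"))   # 3 distinct orderings
assert len(blocks) == 3

# ARP check: after i allocations, the expected 'A' count equals (1/3)*i.
for i in range(1, 4):
    expected_a = sum(seq[:i].count("A") for seq in blocks) / len(blocks)
    assert abs(expected_a - i / 3.0) < 1e-12
```

The same enumeration applied to a candidate narrower-space procedure would reveal whether it preserves the unconditional 1:2 ratio at every step, which is the constraint the paper works under.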
Evaluation of advanced multiplex short tandem repeat systems in pairwise kinship analysis.
Tamura, Tomonori; Osawa, Motoki; Ochiai, Eriko; Suzuki, Takanori; Nakamura, Takashi
2015-09-01
The AmpFLSTR Identifiler Kit, comprising 15 autosomal short tandem repeat (STR) loci, is commonly employed in forensic practice for calculating match probabilities and parentage testing. The conventional system exhibits insufficient estimation for kinship analysis such as sibship testing because of shortness of examined loci. This study evaluated the power of the PowerPlex Fusion System, GlobalFiler Kit, and PowerPlex 21 System, which comprise more than 20 autosomal STR loci, to estimate pairwise blood relatedness (i.e., parent-child, full siblings, second-degree relatives, and first cousins). The genotypes of all 24 STR loci in 10,000 putative pedigrees were constructed by simulation. The likelihood ratio for each locus was calculated from joint probabilities for relatives and non-relatives. The combined likelihood ratio was calculated according to the product rule. The addition of STR loci improved separation between relatives and non-relatives. However, these systems were less effectively extended to the inference for first cousins. In conclusion, these advanced systems will be useful in forensic personal identification, especially in the evaluation of full siblings and second-degree relatives. Moreover, the additional loci may give rise to two major issues of more frequent mutational events and several pairs of linked loci on the same chromosome. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
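The product-rule combination described above is a one-line computation once the per-locus likelihood ratios are in hand. In the sketch below the per-locus LR values are made-up illustrative numbers, and the uniform 0.5 prior is an assumption:

```python
import math

def combined_lr(locus_lrs):
    """Product rule: with independent (unlinked) loci, the combined
    likelihood ratio is the product of the per-locus ratios."""
    return math.prod(locus_lrs)

def posterior_relatedness(lr, prior=0.5):
    """Convert a combined LR into a posterior probability of the
    claimed relationship via the odds form of Bayes' theorem."""
    odds = lr * prior / (1 - prior)
    return odds / (1 + odds)

# hypothetical per-locus LRs for a sibship test
clr = combined_lr([2.0, 0.8, 1.5, 3.0])   # 7.2
```

Note that linked loci on the same chromosome, one of the issues the abstract raises, violate the independence assumption behind `math.prod` here.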
Liu, Rong
2017-01-01
Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. Then we applied a decision-making model, the sequential probability ratio test (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than those with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, as compared with 82.3% accuracy for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
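The accumulate-and-threshold logic of the SPRT is compact. This sketch uses Wald's standard stopping boundaries for target error rates α and β; the per-trial log-likelihood ratios are left abstract (a constant stream here, purely for illustration, rather than values derived from EEG features):

```python
import math

def sprt(llr_stream, alpha=0.05, beta=0.05):
    """Wald SPRT: accumulate per-observation log-likelihood ratios and
    stop as soon as either boundary is crossed."""
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    total, n = 0.0, 0
    for llr in llr_stream:
        n += 1
        total += llr
        if total >= upper:
            return "H1", n
        if total <= lower:
            return "H0", n
    return "undecided", n

# constant evidence of 0.5 nats per trial in favor of H1
decision, stop_time = sprt([0.5] * 20)
```

Loosening α and β lowers the boundaries and shortens the stopping time, which is exactly the time-accuracy trade-off the abstract describes.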
Sako, Wataru; Abe, Takashi; Izumi, Yuishin; Harada, Masafumi; Kaji, Ryuji
2016-05-01
Glutamate (Glu)-induced excitotoxicity has been implicated in the neuronal loss of amyotrophic lateral sclerosis. To test the hypothesis that Glu in the primary motor cortex contributes to disease severity and/or duration, the Glu level was investigated using MR spectroscopy. Seventeen patients with amyotrophic lateral sclerosis were diagnosed according to the El Escorial criteria for suspected, possible, probable or definite amyotrophic lateral sclerosis, and enrolled in this cross-sectional study. We measured metabolite concentrations, including N-acetyl aspartate (NAA), creatine, choline, inositol, Glu and glutamine, and performed partial correlation between each metabolite concentration or NAA/Glu ratio and disease severity or duration using age as a covariate. Considering our hypothesis that Glu is associated with neuronal cell death in amyotrophic lateral sclerosis, we investigated the ratio of NAA to Glu, and found a significant correlation between NAA/Glu and disease duration (r=-0.574, p=0.02). The "suspected" amyotrophic lateral sclerosis patients showed the same tendency as possible, probable and definite amyotrophic lateral sclerosis patients in regard to correlation of NAA/Glu ratio with disease duration. The other metabolites showed no significant correlation. Our findings suggested that glutamatergic neurons are less vulnerable compared to other neurons and this may be because inhibitory receptors are mainly located presynaptically, which supports the notion of Glu-induced excitotoxicity. Copyright © 2015 Elsevier Ltd. All rights reserved.
Recent Results with CVD Diamond Trackers
NASA Astrophysics Data System (ADS)
Adam, W.; Bauer, C.; Berdermann, E.; Bergonzo, P.; Bogani, F.; Borchi, E.; Brambilla, A.; Bruzzi, M.; Colledani, C.; Conway, J.; Dabrowski, W.; Delpierre, P.; Deneuville, A.; Dulinski, W.; van Eijk, B.; Fallou, A.; Fizzotti, F.; Foulon, F.; Friedl, M.; Gan, K. K.; Gheeraert, E.; Grigoriev, E.; Hallewell, G.; Hall-Wilton, R.; Han, S.; Hartjes, F.; Hrubec, J.; Husson, D.; Kagan, H.; Kania, D.; Kaplon, J.; Karl, C.; Kass, R.; Knöpfle, K. T.; Krammer, M.; Logiudice, A.; Lu, R.; Manfredi, P. F.; Manfredotti, C.; Marshall, R. D.; Meier, D.; Mishina, M.; Oh, A.; Pan, L. S.; Palmieri, V. G.; Pernicka, M.; Peitz, A.; Pirollo, S.; Polesello, P.; Pretzl, K.; Procario, M.; Re, V.; Riester, J. L.; Roe, S.; Roff, D.; Rudge, A.; Runolfsson, O.; Russ, J.; Schnetzer, S.; Sciortino, S.; Speziali, V.; Stelzer, H.; Stone, R.; Suter, B.; Tapper, R. J.; Tesarek, R.; Trawick, M.; Trischuk, W.; Vittone, E.; Walsh, A. M.; Wedenig, R.; Weilhammer, P.; White, C.; Ziock, H.; Zoeller, M.; RD42 Collaboration
1999-08-01
We present recent results on the use of Chemical Vapor Deposition (CVD) diamond microstrip detectors for charged particle tracking. A series of detectors was fabricated using 1 × 1 cm² diamonds. Good signal-to-noise ratios were observed using both slow and fast readout electronics. For slow readout electronics, with 2 μs shaping time, the most probable signal-to-noise ratio was 50 to 1. For fast readout electronics, with 25 ns peaking time, the most probable signal-to-noise ratio was 7 to 1. Using the first 2 × 4 cm² diamond from a production CVD reactor with slow readout electronics, the most probable signal-to-noise ratio was 23 to 1. The spatial resolution achieved for the detectors was consistent with the digital resolution expected from the detector pitch.
A scoring algorithm for predicting the presence of adult asthma: a prospective derivation study.
Tomita, Katsuyuki; Sano, Hiroyuki; Chiba, Yasutaka; Sato, Ryuji; Sano, Akiko; Nishiyama, Osamu; Iwanaga, Takashi; Higashimoto, Yuji; Haraguchi, Ryuta; Tohda, Yuji
2013-03-01
To predict the presence of asthma in adult patients with respiratory symptoms, we developed a scoring algorithm using clinical parameters. We prospectively analysed 566 adult outpatients who visited Kinki University Hospital for the first time with complaints of nonspecific respiratory symptoms. Asthma was comprehensively diagnosed by specialists using symptoms, signs, and objective tools including bronchodilator reversibility and/or the assessment of bronchial hyperresponsiveness (BHR). Multiple logistic regression analysis was performed to categorise patients and determine the accuracy of diagnosing asthma. A scoring algorithm using the symptom-sign score was developed, based on diurnal variation of symptoms (1 point), recurrent episodes (2 points), medical history of allergic diseases (1 point), and wheeze sound (2 points). A score of ≥3 had 35% sensitivity and 97% specificity for discriminating between patients with and without asthma and assigned a high probability of having asthma (accuracy 90%). A score of 1 or 2 points assigned intermediate probability (accuracy 68%). After providing additional data of a forced expiratory volume in 1 second/forced vital capacity (FEV1/FVC) ratio <0.7, the post-test probability of having asthma increased to 93%. A score of 0 points assigned low probability (accuracy 31%). After providing additional data of positive reversibility, the post-test probability of having asthma increased to 88%. This pragmatic diagnostic algorithm is useful for predicting the presence of adult asthma and for determining the appropriate time for consultation with a pulmonologist.
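The scoring rule above maps directly to code. The point weights and probability bands are taken from the abstract; the function and variable names are ours:

```python
def symptom_sign_score(diurnal_variation, recurrent_episodes,
                       allergic_history, wheeze):
    """Point weights from the derivation study: diurnal variation of
    symptoms (1), recurrent episodes (2), history of allergic
    diseases (1), wheeze sound (2)."""
    return (1 * bool(diurnal_variation) + 2 * bool(recurrent_episodes)
            + 1 * bool(allergic_history) + 2 * bool(wheeze))

def pretest_band(score):
    """Map the symptom-sign score to the study's probability bands."""
    if score >= 3:
        return "high"          # ~90% accuracy in the study
    if score >= 1:
        return "intermediate"  # refined by FEV1/FVC < 0.7
    return "low"               # refined by bronchodilator reversibility
```

A patient with recurrent episodes, diurnal variation, and wheeze scores 5 points and lands in the high-probability band.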
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Paul S.; Keenan, Russell E.; Swartout, Jeffrey C.
For most chemicals, the Reference Dose (RfD) is based on data from animal testing. The uncertainty introduced by the use of animal models has been termed interspecies uncertainty. The magnitude of the differences between the toxicity of a chemical in humans and test animals and its uncertainty can be investigated by evaluating the inter-chemical variation in the ratios of the doses associated with similar toxicological endpoints in test animals and humans. This study performs such an evaluation on a data set of 64 anti-neoplastic drugs. The data set provides matched responses in humans and four species of test animals: mice, rats, monkeys, and dogs. While the data have a number of limitations, the data show that when the drugs are evaluated on a body weight basis: 1) toxicity generally increases with a species' body weight; however, humans are not always more sensitive than test animals; 2) the animal to human dose ratios were less than 10 for most, but not all, drugs; 3) the current practice of using data from multiple species when setting RfDs lowers the probability of having a large value for the ratio. These findings provide insight into inter-chemical variation in animal to human extrapolations and suggest the need for additional collection and analysis of matched toxicity data in humans and test animals.
Definition and Measurement of Selection Bias: From Constant Ratio to Constant Difference
ERIC Educational Resources Information Center
Cahan, Sorel; Gamliel, Eyal
2006-01-01
Despite its intuitive appeal and popularity, Thorndike's constant ratio (CR) model for unbiased selection is inherently inconsistent in "n"-free selection. Satisfaction of the condition for unbiased selection, when formulated in terms of success/acceptance probabilities, usually precludes satisfaction by the converse probabilities of…
Mental health difficulties in children with developmental coordination disorder.
Lingam, Raghu; Jongmans, Marian J; Ellis, Matthew; Hunt, Linda P; Golding, Jean; Emond, Alan
2012-04-01
To explore the associations between probable developmental coordination disorder (DCD) defined at age 7 years and mental health difficulties at age 9 to 10 years, we analyzed prospectively collected data (N = 6902) from the Avon Longitudinal Study of Parents and Children. "Probable" DCD was defined by using Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision criteria as those children below the 15th centile of the Avon Longitudinal Study of Parents and Children Coordination Test, with functional limitations in activities of daily living or handwriting, excluding children with neurologic difficulties or an IQ <70. Mental health was measured by using the child-reported Short Moods and Feelings Questionnaire and the parent-reported Strengths and Difficulties Questionnaire. Multiple logistic regression models, with the use of multiple imputation to account for missing data, assessed the associations between probable DCD and mental health difficulties. Adjustments were made for environmental confounding factors, and potential mediating factors such as verbal IQ, associated developmental traits, bullying, self-esteem, and friendships. Children with probable DCD (N = 346) had increased odds of self-reported depression (odds ratio: 2.08; 95% confidence interval: 1.36-3.19) and parent-reported mental health difficulties (odds ratio: 4.23; 95% confidence interval: 3.10-5.77). The odds of mental health difficulties significantly decreased after accounting for verbal IQ, social communication, bullying, and self-esteem. Children with probable DCD had an increased risk of mental health difficulties that, in part, were mediated through associated developmental difficulties, low verbal IQ, poor self-esteem, and bullying. Prevention and treatment of mental health difficulties should be a key element of intervention for children with DCD.
Frei, Christopher R; Burgess, David S
2005-09-01
To evaluate the pharmacodynamics of four intravenous antimicrobial regimens (ceftriaxone 1 g, gatifloxacin 400 mg, levofloxacin 500 mg, and levofloxacin 750 mg, each every 24 hours) against recent Streptococcus pneumoniae isolates. Pharmacodynamic analysis using Monte Carlo simulation. The Surveillance Network (TSN) 2002 database. Streptococcus pneumoniae isolates (7866 isolates) were stratified according to penicillin susceptibilities as follows: susceptible (4593), intermediate (1986), and resistant (1287). Risk analysis software was used to simulate 10,000 patients by integrating published pharmacokinetic parameters, their variability, and minimum inhibitory concentration (MIC) distributions from the TSN database. Probability of target attainment was determined for percentage of time above the MIC (%T > MIC) from 0-100% for ceftriaxone and area under the concentration-time curve (AUC):MIC ratio from 0-150 for the fluoroquinolones. For ceftriaxone, probability of target attainment remained 90% or greater against the three isolate groups until a %T > MIC of 70% or greater, and it remained 90% or greater against susceptible and intermediate isolates over the entire interval (%T > MIC 0-100%). For levofloxacin 500 mg, probability of target attainment was 90% at an AUC:MIC < or = 30, but the curve declined sharply with further increases in pharmacodynamic target. Levofloxacin 750 mg achieved a probability of target attainment of 99% at an AUC:MIC ratio < or = 30; the probability remained approximately 90% until a target of 70 or greater, when it declined steeply. Gatifloxacin demonstrated a high probability (99%) of target attainment at an AUC:MIC ratio < or = 30, and it remained above 90% until a target of 70. Ceftriaxone maintained high probability of target attainment over a broad range of pharmacodynamic targets regardless of penicillin susceptibility (%T > MIC 0-60%).
Levofloxacin 500 mg maintained high probability of target attainment for AUC:MIC ratios 0-30; whereas, levofloxacin 750 mg and gatifloxacin maintained high probability of target attainment for AUC:MIC ratios 0-60. Rate of decline in the pharmacodynamic curve was most pronounced for the two levofloxacin regimens and more gradual for gatifloxacin and ceftriaxone.
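The probability-of-target-attainment computation described above is a straightforward Monte Carlo loop. The sketch below assumes a lognormal AUC distribution and a small discrete MIC distribution; the mu/sigma values and MIC table are hypothetical placeholders, not the TSN data or the published pharmacokinetic parameters:

```python
import math
import random

def pta(target, auc_mu, auc_sigma, mic_dist, n=10_000, seed=1):
    """Probability of target attainment by Monte Carlo: draw a patient
    AUC from a lognormal distribution (hypothetical mu/sigma on the
    log scale), draw an MIC from a discrete (mic, probability)
    distribution, and count how often AUC:MIC meets the target."""
    rng = random.Random(seed)
    mics = [m for m, _ in mic_dist]
    weights = [w for _, w in mic_dist]
    hits = sum(
        rng.lognormvariate(auc_mu, auc_sigma) / rng.choices(mics, weights)[0]
        >= target
        for _ in range(n)
    )
    return hits / n

# hypothetical MIC distribution (mg/L : probability); AUC ~ lognormal
mic_dist = [(0.25, 0.5), (0.5, 0.3), (1.0, 0.2)]
curve = [pta(t, auc_mu=math.log(40.0), auc_sigma=0.3, mic_dist=mic_dist)
         for t in (0, 30, 150)]
```

Sweeping the target from 0 to 150 traces out the attainment curve whose decline rate the abstract compares across regimens.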
Cunningham, Shala
2013-01-01
Objective: The purpose of this study was to estimate the diagnostic accuracy of the ScreenAssist Lumbar Questionnaire (SALQ) to determine the presence of non-musculoskeletal pain or emergent musculoskeletal pain, in terms of its sensitivity and specificity, when compared with the assessment and diagnosis made by primary care providers. Methods: Subjects were patients presenting to a primary care physician’s office with the main complaint of low back pain. SALQ data were collected within 24 hours of the appointment. A 2-month post-visit chart review was performed in order to compare scores and recommendations made by the questionnaire with the assessment and diagnosis made by the physician. Results: The SALQ demonstrated a sensitivity of 100% (95% CI = 0.445–1.0) and specificity of 92% (95% CI = 0.831–0.920). The negative likelihood ratio was 0.11 (95% CI = 0.01–1.54) and the positive likelihood ratio was 9.36 (95% CI = 2.78–32). If the SALQ was positive, the post-test probability was 0.60. If the SALQ was negative, the post-test probability was 0.017. Discussion: Results from this study suggest that the SALQ can be used as an adjunct to the subjective history taking in a physical therapy evaluation to assist in the recognition of non-musculoskeletal or emergent musculoskeletal conditions requiring referral. PMID:24421613
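The post-test probabilities quoted above follow from the odds form of Bayes' theorem. This sketch shows the generic computation; the sensitivity/specificity pair below is a round illustrative value, not the SALQ data:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity and
    specificity: +LR = sens/(1-spec), -LR = (1-sens)/spec."""
    pos_lr = sensitivity / (1 - specificity)
    neg_lr = (1 - sensitivity) / specificity
    return pos_lr, neg_lr

def post_test_probability(pretest, lr):
    """Post-test odds = pre-test odds x LR, converted back to a
    probability."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

pos_lr, neg_lr = likelihood_ratios(0.90, 0.80)   # illustrative test
```

A positive result multiplies the pre-test odds by +LR and a negative result by -LR, which is how the study's 0.60 and 0.017 post-test probabilities arise from its 9.36 and 0.11 ratios.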
Salia, Shemsedin Musefa; Mersha, Hagos Biluts; Aklilu, Abenezer Tirsit; Baleh, Abat Sahlu; Lund-Johansen, Morten
2018-06-01
Compound depressed skull fracture (DSF) is a neurosurgical emergency. Preoperative knowledge of dural status is indispensable for treatment decision making. This study aimed to determine predictors of dural tear from clinical and imaging characteristics in patients with compound DSF. This prospective, multicenter correlational study in neurosurgical hospitals in Addis Ababa, Ethiopia, included 128 patients operated on from January 1, 2016, to October 31, 2016. Clinical, imaging, and intraoperative findings were evaluated. Univariate and multivariate analyses were used to establish predictors of dural tear. A logistic regression model was developed to predict probability of dural tear. Model validation was done using the receiver operating characteristic curve. Dural tear was seen in 55.5% of 128 patients. Demographics, injury mechanism, clinical presentation, and site of DSF had no significant correlation with dural tear. In univariate and multivariate analyses, depth of fracture depression (odds ratio 1.3, P < 0.001), pneumocephalus (odds ratio 2.8, P = 0.005), and brain contusions/intracerebral hematoma (odds ratio 5.5, P < 0.001) were significantly correlated with dural tear. We developed a logistic regression model (diagnostic test) to calculate probability of dural tear. Using the receiver operating characteristic curve, we determined the cutoff value for a positive test giving the highest accuracy to be 30% with a corresponding sensitivity of 93.0% and specificity of 43.9%. Dural tear in compound DSF can be predicted with 93.0% sensitivity using preoperative findings and may guide treatment decision making in resource-limited settings where risk of extensive cranial surgery outweighs the benefit. Copyright © 2018 Elsevier Inc. All rights reserved.
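A logistic model of the kind described above can be assembled from the reported odds ratios. The intercept below is a hypothetical placeholder (the fitted intercept is not given in the abstract), so the probabilities are only illustrative of the mechanics:

```python
import math

def dural_tear_probability(depth_mm, pneumocephalus, contusion_or_ich,
                           intercept=-3.0):
    """Logistic model built from the reported odds ratios: depth of
    depression OR 1.3 (per mm, assumed), pneumocephalus OR 2.8,
    contusion/intracerebral hematoma OR 5.5.  The intercept is a
    hypothetical placeholder, not the paper's fitted value."""
    logit = (intercept
             + depth_mm * math.log(1.3)
             + (1 if pneumocephalus else 0) * math.log(2.8)
             + (1 if contusion_or_ich else 0) * math.log(5.5))
    return 1.0 / (1.0 + math.exp(-logit))

def predict_dural_tear(depth_mm, pneumocephalus, contusion_or_ich,
                       cutoff=0.30):
    """Positive test when the predicted probability exceeds the 30%
    cutoff chosen from the ROC curve in the study."""
    return dural_tear_probability(depth_mm, pneumocephalus,
                                  contusion_or_ich) >= cutoff
```

Converting each odds ratio to a coefficient via its logarithm is the standard inverse of how logistic regression outputs are reported.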
Jindal, Shveta; Dada, Tanuj; Sreenivas, V; Gupta, Viney; Sihota, Ramanjit; Panda, Anita
2010-01-01
Purpose: To compare the diagnostic performance of the Heidelberg retinal tomograph (HRT) glaucoma probability score (GPS) with that of Moorfields regression analysis (MRA). Materials and Methods: The study included 50 eyes of normal subjects and 50 eyes of subjects with early-to-moderate primary open angle glaucoma. Images were obtained by using HRT version 3.0. Results: The agreement coefficient (weighted κ) for the overall MRA and GPS classification was 0.216 (95% CI: 0.119–0.315). The sensitivity and specificity were evaluated using the most specific (borderline results included as test negatives) and least specific criteria (borderline results included as test positives). The MRA sensitivity and specificity were 30.61 and 98% (most specific) and 57.14 and 98% (least specific). The GPS sensitivity and specificity were 81.63 and 73.47% (most specific) and 95.92 and 34.69% (least specific). The MRA gave a higher positive likelihood ratio (28.57 vs. 3.08) and the GPS gave a higher negative likelihood ratio (0.25 vs. 0.44). The sensitivity increased with increasing disc size for both MRA and GPS. Conclusions: There was a poor agreement between the overall MRA and GPS classifications. GPS tended to have higher sensitivities, lower specificities, and lower likelihood ratios than the MRA. The disc size should be taken into consideration when interpreting the results of HRT, as both the GPS and MRA showed decreased sensitivity for smaller discs and the GPS showed decreased specificity for larger discs. PMID:20952832
Evidence-based Diagnostics: Adult Septic Arthritis
Carpenter, Christopher R.; Schuur, Jeremiah D.; Everett, Worth W.; Pines, Jesse M.
2011-01-01
Background Acutely swollen or painful joints are common complaints in the emergency department (ED). Septic arthritis in adults is a challenging diagnosis, but prompt differentiation of a bacterial etiology is crucial to minimize morbidity and mortality. Objectives The objective was to perform a systematic review describing the diagnostic characteristics of history, physical examination, and bedside laboratory tests for nongonococcal septic arthritis. A secondary objective was to quantify test and treatment thresholds using derived estimates of sensitivity and specificity, as well as best-evidence diagnostic and treatment risks and anticipated benefits from appropriate therapy. Methods Two electronic search engines (PUBMED and EMBASE) were used in conjunction with a selected bibliography and scientific abstract hand search. Inclusion criteria included adult trials of patients presenting with monoarticular complaints if they reported sufficient detail to reconstruct partial or complete 2 × 2 contingency tables for experimental diagnostic test characteristics using an acceptable criterion standard. Evidence was rated by two investigators using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS). When more than one similarly designed trial existed for a diagnostic test, meta-analysis was conducted using a random effects model. Interval likelihood ratios (LRs) were computed when possible. To illustrate one method to quantify theoretical points in the probability of disease whereby clinicians might cease testing altogether and either withhold treatment (test threshold) or initiate definitive therapy in lieu of further diagnostics (treatment threshold), an interactive spreadsheet was designed and sample calculations were provided based on research estimates of diagnostic accuracy, diagnostic risk, and therapeutic risk/benefits. 
Results The prevalence of nongonococcal septic arthritis in ED patients with a single acutely painful joint is approximately 27% (95% confidence interval [CI] = 17% to 38%). With the exception of joint surgery (positive likelihood ratio [+LR] = 6.9) or skin infection overlying a prosthetic joint (+LR = 15.0), history, physical examination, and serum tests do not significantly alter posttest probability. Serum inflammatory markers such as white blood cell (WBC) counts, erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) are not useful acutely. The interval LR for synovial white blood cell (sWBC) counts of 0–25 × 10⁹/L was 0.33; for 25–50 × 10⁹/L, 1.06; for 50–100 × 10⁹/L, 3.59; and exceeding 100 × 10⁹/L, infinity. Synovial lactate may be useful to rule in or rule out the diagnosis of septic arthritis with a +LR ranging from 2.4 to infinity, and negative likelihood ratio (−LR) ranging from 0 to 0.46. Rapid polymerase chain reaction (PCR) of synovial fluid may identify the causative organism within 3 hours. Based on 56% sensitivity and 90% specificity for sWBC counts of >50 × 10⁹/L in conjunction with best-evidence estimates for diagnosis-related risk and treatment-related risk/benefit, the arthrocentesis test threshold is 5%, with a treatment threshold of 39%. Conclusions Recent joint surgery or cellulitis overlying a prosthetic hip or knee were the only findings on history or physical examination that significantly alter the probability of nongonococcal septic arthritis. Extreme values of sWBC (>50 × 10⁹/L) can increase, but not decrease, the probability of septic arthritis. Future ED-based diagnostic trials are needed to evaluate the role of clinical gestalt and the efficacy of nontraditional synovial markers such as lactate. PMID:21843213
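The test and treatment thresholds quantified above can be computed with one common formulation of the Pauker-Kassirer threshold formulas. The sensitivity/specificity below come from the sWBC > 50 × 10⁹/L criterion in the abstract, but the benefit, harm, and test-risk magnitudes are hypothetical placeholders, so the outputs will not exactly reproduce the 5%/39% figures:

```python
def thresholds(sens, spec, benefit, harm, test_risk):
    """Pauker-Kassirer style thresholds (one common formulation):
    below test_thr, neither test nor treat; between the two, test and
    act on the result; above treat_thr, treat without testing.
    benefit = net benefit of treating the diseased, harm = net harm of
    treating the non-diseased, test_risk = risk of the test itself."""
    test_thr = ((1 - spec) * harm + test_risk) / \
               ((1 - spec) * harm + sens * benefit)
    treat_thr = (spec * harm - test_risk) / \
                (spec * harm + (1 - sens) * benefit)
    return test_thr, treat_thr

# sWBC criterion from the review; risk/benefit values are hypothetical
test_thr, treat_thr = thresholds(sens=0.56, spec=0.90,
                                 benefit=0.50, harm=0.10, test_risk=0.005)
```

With these assumed inputs the two thresholds bracket an intermediate zone in which arthrocentesis changes management, which is the structure the review's interactive spreadsheet encodes.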
NASA Astrophysics Data System (ADS)
Anand, L. F. M.; Gudennavar, S. B.; Bubbly, S. G.; Kerur, B. R.
2015-12-01
The K to L shell total vacancy transfer probabilities of low-Z elements Co, Ni, Cu, and Zn are estimated by measuring the Kβ to Kα intensity ratio adopting the 2π-geometry. The target elements were excited by 32.86 keV barium K-shell X-rays from a weak 137Cs γ-ray source. The emitted K-shell X-rays were detected using a low-energy HPGe X-ray detector coupled to a 16k MCA. The measured intensity ratios and the total vacancy transfer probabilities are compared with theoretical results and others' work, establishing good agreement.
An Assessment of Early Competitive Prototyping for Major Defense Acquisition Programs
2016-04-30
with 20/80 share ratio for EMD; CPFF for test execution. o Percent change in PAUC from development baseline: -2.3%. 3. FAB-T FET. The Air Force's ... Family of Advanced Beyond Line-of-Sight Terminals (FAB-T) provides for survivable terminals for communicating strategic nuclear execution orders via ... jam-resistant, low probability of intercept waveforms through the Milstar and Advanced Extremely High Frequency (AEHF) satellite constellations. FAB
Application of Bayes' theorem for pulse shape discrimination
NASA Astrophysics Data System (ADS)
Monterial, Mateusz; Marleau, Peter; Clarke, Shaun; Pozzi, Sara
2015-09-01
A Bayesian approach is proposed for pulse shape discrimination of photons and neutrons in liquid organic scintillators. Instead of drawing a decision boundary, each pulse is assigned a photon or neutron confidence probability. This allows for photon and neutron classification on an event-by-event basis. The sum of those confidence probabilities is used to estimate the number of photon and neutron instances in the data. An iterative scheme, similar to an expectation-maximization algorithm for Gaussian mixtures, is used to infer the ratio of photons-to-neutrons in each measurement. Therefore, the probability space adapts to data with varying photon-to-neutron ratios. A time-correlated measurement of Am-Be and separate measurements of 137Cs, 60Co and 232Th photon sources were used to construct libraries of neutrons and photons. These libraries were then used to produce synthetic data sets with varying ratios of photons-to-neutrons. The probability-weighted method that we implemented was found to maintain a neutron acceptance rate of up to 90% for photon-to-neutron ratios of up to 2000, and performed 9% better than the decision-boundary approach. Furthermore, the iterative approach appropriately changed the probability space with an increasing number of photons, which kept the neutron population estimate from unrealistically increasing.
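The iterative confidence-weighting scheme resembles EM for a two-class mixture on a scalar pulse-shape feature. The sketch below uses a tail-to-total-ratio-like feature with fixed, equal class widths, which is a simplification of the paper's method; the means, width, and synthetic data are all assumptions:

```python
import math
import random

def em_class_fraction(xs, mu_photon, mu_neutron, sigma=0.02, iters=50):
    """EM for the neutron mixing fraction: the E-step assigns each
    pulse a neutron confidence probability, the M-step sets the
    fraction to the mean confidence.  With equal class widths the
    Gaussian normalization constants cancel, so they are omitted."""
    def kernel(x, mu):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2)
    frac = 0.5
    conf = []
    for _ in range(iters):
        conf = [frac * kernel(x, mu_neutron)
                / (frac * kernel(x, mu_neutron)
                   + (1 - frac) * kernel(x, mu_photon))
                for x in xs]
        frac = sum(conf) / len(conf)   # expected neutron count / total
    return conf, frac

# synthetic data: 800 "photon" and 200 "neutron" pulses
rng = random.Random(0)
xs = ([rng.gauss(0.10, 0.02) for _ in range(800)]
      + [rng.gauss(0.25, 0.02) for _ in range(200)])
conf, frac = em_class_fraction(xs, mu_photon=0.10, mu_neutron=0.25)
```

The sum of the per-pulse confidences estimates the neutron count, mirroring how the paper counts instances without a hard decision boundary.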
Variation of fan tone steadiness for several inflow conditions
NASA Technical Reports Server (NTRS)
Balombin, J. R.
1978-01-01
An amplitude probability density function analysis technique for quantifying the degree of fan noise tone steadiness has been applied to data from a fan tested under a variety of inflow conditions. The test conditions included typical static operation, inflow control by a honeycomb/screen device and forward velocity in a wind tunnel simulating flight. The ratio of mean square sinusoidal-to-random signal content in the fundamental and second harmonic tones was found to vary by more than an order-of-magnitude. Some implications of these results concerning the nature of fan noise generation mechanisms are discussed.
Airborne radar technology for windshear detection
NASA Technical Reports Server (NTRS)
Hibey, Joseph L.; Khalaf, Camille S.
1988-01-01
The objectives and accomplishments of the two-and-a-half-year effort to describe how returns from on-board Doppler radar are to be used to detect the presence of wind shear are reported. The problem is modeled as one of first passage in terms of state variables, the state estimates are generated by a bank of extended Kalman filters working in parallel, and the decision strategy involves the use of a voting algorithm for a series of likelihood ratio tests. The performance issue for filtering is addressed in terms of error-covariance reduction and filter divergence, and the performance issue for detection is addressed in terms of using a probability measure transformation to derive theoretical expressions for the error probabilities of a false alarm and a miss.
A cost-benefit analysis of demand for food.
Hursh, S R; Raslear, T G; Shurtleff, D; Bauman, R; Simmons, L
1988-01-01
Laboratory studies of consumer demand theory require assumptions regarding the definition of price in the absence of a medium of exchange (money). In this study we test the proposition that the fundamental dimension of price is a cost-benefit ratio expressed as the effort expended per unit of food value consumed. Using rats as subjects, we tested the generality of this "unit price" concept by varying four dimensions of price: fixed-ratio schedule, number of food pellets per fixed-ratio completion, probability of reinforcement, and response lever weight or effort. Two levels of the last three factors were combined in a 2 x 2 x 2 design giving eight groups. Each group was studied under a series of six FR schedules. Using the nominal values of all factors to determine unit price, we found that grams of food consumed plotted as a function of unit price followed a single demand curve. Similarly, total work output (responses x effort) conformed to a single function when plotted in terms of unit price. These observations provided a template for interpreting the effects of biological factors, such as brain lesions or drugs, that might alter the cost-benefit ratio. PMID:3209958
Bertoldi, Eduardo G; Stella, Steffen F; Rohde, Luis Eduardo P; Polanczyk, Carisi A
2017-05-04
The aim of this research is to evaluate the relative cost-effectiveness of functional and anatomical strategies for diagnosing stable coronary artery disease (CAD), using exercise (Ex)-ECG, stress echocardiogram (ECHO), single-photon emission CT (SPECT), coronary CT angiography (CTA) or stress cardiac magnetic resonance (C-MRI). Decision-analytical model, comparing strategies of sequential tests for evaluating patients with possible stable angina at low, intermediate and high pretest probability of CAD, from the perspective of a developing nation's public healthcare system. Hypothetical cohort of patients with pretest probability of CAD between 20% and 70%. The primary outcome is cost per correct diagnosis of CAD. Proportion of false-positive or false-negative tests and number of unnecessary tests performed were also evaluated. Strategies using Ex-ECG as the initial test were the least costly alternatives but generated more frequent false-positive initial tests and false-negative final diagnoses. Strategies based on CTA or ECHO as the initial test were the most attractive and resulted in similar cost-effectiveness ratios (I$ 286 and I$ 305 per correct diagnosis, respectively). A strategy based on C-MRI was highly effective for diagnosing stable CAD, but its high cost resulted in unfavourable incremental cost-effectiveness ratios (ICERs) in moderate-risk and high-risk scenarios. Non-invasive strategies based on SPECT were dominated. An anatomical diagnostic strategy based on CTA is a cost-effective option for CAD diagnosis. Functional strategies performed equally well when based on ECHO. C-MRI yielded an acceptable ICER only at low pretest probability, and SPECT was not cost-effective in our analysis. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
A predictive model to estimate the pretest probability of metastasis in patients with osteosarcoma.
Wang, Sisheng; Zheng, Shaoluan; Hu, Kongzu; Sun, Heyan; Zhang, Jinling; Rong, Genxiang; Gao, Jie; Ding, Nan; Gui, Binjie
2017-01-01
Osteosarcomas (OSs) present a major challenge for improving overall survival, especially in metastatic patients. Increasing evidence indicates that both tumor-associated and host-associated elements, especially the systemic inflammatory response, have a remarkable effect on the prognosis of cancer patients. By analyzing a series of prognostic factors, including age, gender, primary tumor size, tumor location, tumor grade, histological classification, monocyte ratio, and NLR ratio, a clinical predictive model involving circulating leukocytes was established using stepwise logistic regression to compute the estimated probability of metastases for OS patients. The clinical predictive model was described by the following equations: probability of developing metastases = e^x/(1 + e^x), x = -2.150 + (1.680 × monocyte ratio) + (1.533 × NLR ratio), where e is the base of the natural logarithm and each of the 2 variables is assigned 1 if the corresponding ratio is >1 (otherwise 0). The calculated AUC of the receiver-operating characteristic curve of 0.793 revealed good accuracy of this model (95% CI, 0.740-0.845). The predicted probabilities that we generated with the cross-validation procedure had a similar AUC (0.743; 95% CI, 0.684-0.803). The present model, which considers the influence of circulating leukocytes, could be used to estimate the pretest probability of developing metastases in patients with OS and thereby improve the management of metastatic disease.
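The published equations can be evaluated directly. A minimal Python sketch using the coefficients quoted in the abstract (the function name and the >1 indicator coding are illustrative assumptions; this is not a validated clinical tool):

```python
import math

def metastasis_probability(monocyte_ratio, nlr_ratio):
    """Estimated pretest probability of metastasis for an OS patient.

    Each predictor is coded 1 if the corresponding ratio exceeds 1,
    otherwise 0, as specified in the abstract.
    """
    m = 1 if monocyte_ratio > 1 else 0
    n = 1 if nlr_ratio > 1 else 0
    x = -2.150 + 1.680 * m + 1.533 * n
    return math.exp(x) / (1 + math.exp(x))
```

With both ratios above 1, x = 1.063 and the estimated probability is about 0.74; with both at or below 1 it falls to about 0.10.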
Joore, Manuela; Brunenberg, Danielle; Nelemans, Patricia; Wouters, Emiel; Kuijpers, Petra; Honig, Adriaan; Willems, Danielle; de Leeuw, Peter; Severens, Johan; Boonen, Annelies
2010-01-01
This article investigates whether differences in utility scores based on the EQ-5D and the SF-6D have an impact on the incremental cost-utility ratios in five distinct patient groups. We used five empirical data sets of trial-based cost-utility studies that included patients with different disease conditions and severity (musculoskeletal disease, cardiovascular pulmonary disease, and psychological disorders) to calculate differences in quality-adjusted life-years (QALYs) based on EQ-5D and SF-6D utility scores. We compared incremental QALYs, incremental cost-utility ratios, and the probability that the incremental cost-utility ratio was acceptable within and across the data sets. We observed small differences in incremental QALYs, but large differences in the incremental cost-utility ratios and in the probability that these ratios were acceptable at a given threshold, in the majority of the presented cost-utility analyses. More specifically, in the patient groups with relatively mild health conditions the probability of acceptance of the incremental cost-utility ratio was considerably larger when using the EQ-5D to estimate utility, while in the patient groups with worse health conditions it was considerably larger when using the SF-6D. Much of the appeal of using QALYs as a measure of effectiveness in economic evaluations lies in their comparability across conditions and interventions. The incomparability of the results of cost-utility analyses using different instruments to estimate a single index value for health severely undermines this aspect and reduces the credibility of the use of incremental cost-utility ratios for decision-making.
Exclusion probabilities and likelihood ratios with applications to kinship problems.
Slooten, Klaas-Jan; Egeland, Thore
2014-05-01
In forensic genetics, DNA profiles are compared in order to make inferences, paternity cases being a standard example. The statistical evidence can be summarized and reported in several ways. For example, in a paternity case, the likelihood ratio (LR) and the probability of not excluding a random man as father (RMNE) are two common summary statistics. There has been a long debate on the merits of the two statistics, also in the context of DNA mixture interpretation, and no general consensus has been reached. In this paper, we show that the RMNE is a certain weighted average of inverse likelihood ratios. This is true in any forensic context. We show that the likelihood ratio in favor of the correct hypothesis is, in expectation, bigger than the reciprocal of the RMNE probability. However, with the exception of pathological cases, it is also possible to obtain smaller likelihood ratios. We illustrate this result for paternity cases. Moreover, some theoretical properties of the likelihood ratio for a large class of general pairwise kinship cases, including expected value and variance, are derived. The practical implications of the findings are discussed and exemplified.
Probability Elicitation Under Severe Time Pressure: A Rank-Based Method.
Jaspersen, Johannes G; Montibeller, Gilberto
2015-07-01
Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio-scale estimates, even if highly numerate. This may occur, for instance, when there is lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low-probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real-world risk analysis recently conducted for DEFRA (the U.K. Department for the Environment, Farming and Rural Affairs): the prioritization of animal health threats. © 2015 Society for Risk Analysis.
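For intuition about converting a ranking into numbers, one common rank-based approximation from the MCDA literature is the rank-order centroid (ROC) weighting, sketched below. This is an illustrative surrogate, not necessarily the maximum-entropy algorithm the authors describe:

```python
def rank_order_centroid(n):
    """Approximate probabilities for n mutually exclusive events ranked by
    likelihood (rank 1 = most likely): w_i = (1/n) * sum_{j=i..n} 1/j."""
    return [sum(1.0 / j for j in range(i, n + 1)) / n for i in range(1, n + 1)]
```

For three ranked events this yields roughly 0.61, 0.28 and 0.11; the weights always sum to 1 and respect the elicited order.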
Likelihood ratios for the prediction of preterm delivery with biomarkers.
Hee, Lene
2011-11-01
To conduct a literature search for selected biomarkers of preterm delivery and estimate their likelihood ratios (LR). Structured review. Low- and high-risk populations and women with symptoms of preterm delivery. Publications were identified in PubMed. LR for selected biomarkers of preterm delivery. In asymptomatic women with low risk of preterm delivery, the following biomarkers gave major shifts in probability (LR above 5): twins (LR+ 10), Ureaplasma urealyticum in amniotic fluid (LR+ 10), cervical length <25 mm (LR+ 6), salivary estriol (LR+ 5) and various combined tests. In asymptomatic women with high risk of preterm delivery, short cervical length (LR+ 11, LR- 0.7) and high serum tumor necrosis factor-alpha (LR+ 10, LR- 0.6) gave major shifts in probability. In women with symptoms of preterm delivery, major shifts in probability can be obtained from the following amniotic fluid biomarkers: high matrix metalloproteinase-8 (LR+ 23, LR- 0.6), Ureaplasma urealyticum (LR+ 19, LR- 0.8), high interleukin (IL)-6 (LR+ 9, LR- 0.2), IL-8 (LR+ 10, LR- 0.2) and tumor necrosis factor-alpha (LR+ 8, LR- 0.4); and, in serum, IL-6 (LR+ 12, LR- 0.2), Cluster of Differentiation 163 (LR+ 9, LR- 0.8) and various combined tests. Vaginal fetal fibronectin (LR+ 3, LR- 0.5) and short cervical length (LR+ 2, LR- 0.3) gave LRs of some importance (LR below 5). Several biomarkers have been identified for assessment of the risk of preterm delivery. Their clinical relevance depends on the efficacy of the interventions that can be offered to these patients. © 2011 The Author. Acta Obstetricia et Gynecologica Scandinavica © 2011 Nordic Federation of Societies of Obstetrics and Gynecology.
Marrazzo, Antonio; Boscaino, Giovanni; Marrazzo, Emilia; Taormina, Pietra; Toesca, Antonio
2015-09-01
The need for performing axillary lymph-node dissection in early breast cancer when the sentinel lymph node (SLN) is positive has been questioned in recent years. The purpose of this study was to identify a low-risk subgroup of early breast cancer patients in whom surgical axillary staging could be avoided, and to assess the probability of having a positive lymph node (LN). We evaluated a cohort of 612 consecutive women affected by early breast cancer. We considered age, tumor size, histological grade, vascular invasion, lymphatic invasion and cancer subtype (Luminal A, Luminal B HER-2+, Luminal B HER-2-, HER-2+, and Triple Negative) as variables for univariate and multivariate analyses to assess the probability of a positive SLN or nonsentinel lymph node (NSLN). Chi-square, Fisher's exact test and Student's t tests were used to investigate the relationships between variables, whereas logit models were used to estimate and quantify the strength of the relationship between some covariates and SLN status or the number of metastases. A significant positive effect of vascular invasion and lymphatic invasion (odds ratios 4 and 6, respectively), and a negative effect of the triple-negative subtype (odds ratio 10) were noted. With respect to positive NSLN, size alone had a significant (positive) effect on tumor presence, but focusing on the number of metastases, age also had a (negative) significant effect. This work shows a correlation between subtypes and the probability of having a positive SLN. Patients not expressing vascular invasion or lymphatic invasion and, moreover, with a triple-negative tumor subtype may be good candidates for breast conservative surgery without axillary surgical staging. Copyright © 2015 IJS Publishing Group Limited. Published by Elsevier Ltd. All rights reserved.
Allen, Victoria B; Gurusamy, Kurinchi Selvan; Takwoingi, Yemisi; Kalia, Amun; Davidson, Brian R
2016-07-06
Surgical resection is the only potentially curative treatment for pancreatic and periampullary cancer. A considerable proportion of patients undergo unnecessary laparotomy because of underestimation of the extent of the cancer on computed tomography (CT) scanning. Laparoscopy can detect metastases not visualised on CT scanning, enabling better assessment of the spread of cancer (staging of cancer). This is an update to a previous Cochrane Review published in 2013 evaluating the role of diagnostic laparoscopy in assessing resectability with curative intent in people with pancreatic and periampullary cancer. To determine the diagnostic accuracy of diagnostic laparoscopy performed as an add-on test to CT scanning in the assessment of curative resectability in pancreatic and periampullary cancer. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE via PubMed, EMBASE via OvidSP (from inception to 15 May 2016), and Science Citation Index Expanded (from 1980 to 15 May 2016). We included diagnostic accuracy studies of diagnostic laparoscopy in people with potentially resectable pancreatic and periampullary cancer on CT scan, where confirmation of liver or peritoneal involvement was by histopathological examination of suspicious (liver or peritoneal) lesions obtained at diagnostic laparoscopy or laparotomy. We accepted any criteria of resectability used in the studies. We included studies irrespective of language, publication status, or study design (prospective or retrospective). We excluded case-control studies. Two review authors independently performed data extraction and quality assessment using the QUADAS-2 tool. The specificity of diagnostic laparoscopy in all studies was 1 because there were no false positives: laparoscopy and the reference standard are one and the same when histological examination after diagnostic laparoscopy is positive.
The sensitivities were therefore meta-analysed using a univariate random-effects logistic regression model. The probability of unresectability in people who had a negative laparoscopy (post-test probability for people with a negative test result) was calculated using the median probability of unresectability (pre-test probability) from the included studies, and the negative likelihood ratio derived from the model (specificity of 1 assumed). The difference between the pre-test and post-test probabilities gave the overall added value of diagnostic laparoscopy compared to the standard practice of CT scan staging alone. We included 16 studies with a total of 1146 participants in the meta-analysis. Only one study including 52 participants had a low risk of bias and low applicability concern in the patient selection domain. The median pre-test probability of unresectable disease after CT scanning across studies was 41.4% (that is, 41 out of 100 participants who had resectable cancer after CT scan were found to have unresectable disease on laparotomy). The summary sensitivity of diagnostic laparoscopy was 64.4% (95% confidence interval (CI) 50.1% to 76.6%). Assuming a pre-test probability of 41.4%, the post-test probability of unresectable disease for participants with a negative test result was 0.20 (95% CI 0.15 to 0.27). This indicates that if a person is said to have resectable disease after diagnostic laparoscopy and CT scan, there is a 20% probability that their cancer will be unresectable compared to a 41% probability for those receiving CT alone. A subgroup analysis of people with pancreatic cancer gave a summary sensitivity of 67.9% (95% CI 41.1% to 86.5%). The post-test probability of unresectable disease after being considered resectable on both CT and diagnostic laparoscopy was 18% compared to 40.0% for those receiving CT alone.
Diagnostic laparoscopy may decrease the rate of unnecessary laparotomy in people with pancreatic and periampullary cancer found to have resectable disease on CT scan. On average, using diagnostic laparoscopy with biopsy and histopathological confirmation of suspicious lesions prior to laparotomy would avoid 21 unnecessary laparotomies in 100 people in whom resection of cancer with curative intent is planned.
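The review's pre-test to post-test calculation can be reproduced with Bayes' rule on the odds scale. A minimal sketch, assuming LR- = (1 - sensitivity)/specificity with specificity fixed at 1 as in the review:

```python
def post_test_probability(pre_test, sensitivity, specificity=1.0):
    """Post-test probability of unresectable disease after a negative test."""
    lr_negative = (1.0 - sensitivity) / specificity
    pre_odds = pre_test / (1.0 - pre_test)
    post_odds = pre_odds * lr_negative
    return post_odds / (1.0 + post_odds)
```

With the reported pre-test probability of 41.4% and summary sensitivity of 64.4%, this returns about 0.20, matching the figure quoted in the review.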
Screening for Learning and Memory Mutations: A New Approach.
Gallistel, C R; King, A P; Daniel, A M; Freestone, D; Papachristos, E B; Balci, F; Kheifets, A; Zhang, J; Su, X; Schiff, G; Kourtev, H
2010-01-30
We describe a fully automated, live-in 24/7 test environment, with experimental protocols that measure the accuracy and precision with which mice match the ratio of their expected visit durations to the ratio of the incomes obtained from two hoppers, the progress of instrumental and classical conditioning (trials-to-acquisition), the accuracy and precision of interval timing, the effect of relative probability on the choice of a timed departure target, and the accuracy and precision of memory for the times of day at which food is available. The system is compact; it obviates the handling of the mice during testing; it requires negligible amounts of experimenter/technician time; and it delivers clear and extensive results from 3 protocols within a total of 7-9 days after the mice are placed in the test environment. Only a single 24-hour period is required for the completion of the first protocol (the matching protocol), which is a strong test of temporal and spatial estimation and memory mechanisms. Thus, the system permits the extensive screening of many mice in a short period of time and in limited space. The software is publicly available.
The drag characteristics of several airships determined by deceleration tests
NASA Technical Reports Server (NTRS)
Thompson, F L; Kirschbaum, H W
1932-01-01
This report presents the results of deceleration tests conducted for the purpose of determining the drag characteristics of six airships. The tests were made with airships of various shapes and sizes belonging to the Army, the Navy, and the Goodyear-Zeppelin Corporation. Drag coefficients for the following airships are shown: Army TC-6, TC-10, and TE-2; Navy Los Angeles and ZMC-2; Goodyear Puritan. The coefficients vary from about 0.045 for the small blunt airships to 0.023 for the relatively large slender Los Angeles. This variation may be due to a combination of effects, but the most important of these is probably the effect of length-diameter ratio.
ERIC Educational Resources Information Center
Hong, Guanglei; Deutsch, Jonah; Hill, Heather D.
2015-01-01
Conventional methods for mediation analysis generate biased results when the mediator-outcome relationship depends on the treatment condition. This article shows how the ratio-of-mediator-probability weighting (RMPW) method can be used to decompose total effects into natural direct and indirect effects in the presence of treatment-by-mediator…
NASA Technical Reports Server (NTRS)
Carreno, Victor
2006-01-01
This document describes a method to demonstrate that a UAS, operating in the NAS, can avoid collisions with an equivalent level of safety compared to a manned aircraft. The method is based on the calculation of a collision probability for a UAS, the calculation of a collision probability for a baseline manned aircraft, and the calculation of a risk ratio given by: Risk Ratio = P(collision_UAS)/P(collision_manned). A UAS will achieve an equivalent level of safety for collision risk if the Risk Ratio is less than or equal to one. Calculation of the probability of collision for UAS and manned aircraft is accomplished through event/fault trees.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anand, L. F. M.; Gudennavar, S. B., E-mail: shivappa.b.gudennavar@christuniversity.in; Bubbly, S. G.
The K to L shell total vacancy transfer probabilities of the low-Z elements Co, Ni, Cu, and Zn are estimated by measuring the Kβ to Kα intensity ratio adopting the 2π geometry. The target elements were excited by 32.86 keV barium K-shell X-rays from a weak 137Cs γ-ray source. The emitted K-shell X-rays were detected using a low-energy HPGe X-ray detector coupled to a 16k MCA. The measured intensity ratios and the total vacancy transfer probabilities are compared with theoretical results and others' work, establishing a good agreement.
A method for developing design diagrams for ceramic and glass materials using fatigue data
NASA Technical Reports Server (NTRS)
Heslin, T. M.; Magida, M. B.; Forrest, K. A.
1986-01-01
The service lifetime of glass and ceramic materials can be expressed as a plot of time-to-failure versus applied stress that is parametric in percent probability of failure. This type of plot is called a design diagram. Confidence interval estimates for such plots depend on the type of test used to generate the data, on assumptions made concerning the statistical distribution of the test results, and on the type of analysis used. This report outlines the development of design diagrams for glass and ceramic materials in engineering terms using static or dynamic fatigue tests, assuming either no particular statistical distribution of test results or a Weibull distribution, and using either median-value or homologous-ratio analysis of the test results.
Surveillance of industrial processes with correlated parameters
White, Andrew M.; Gross, Kenny C.; Kubic, William L.; Wigeland, Roald A.
1996-01-01
A system and method for surveillance of an industrial process. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions.
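For readers unfamiliar with probability ratio tests of this kind, a minimal Wald sequential probability ratio test for Gaussian observations is sketched below. This is an illustration of the general technique, not the patented system's algorithm:

```python
import math

def wald_sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Sequentially decide H1 (mean mu1) vs H0 (mean mu0) for Gaussian data.

    alpha and beta are the targeted false alarm and missed detection rates.
    Returns the decision and the number of samples consumed.
    """
    upper = math.log((1.0 - beta) / alpha)   # crossing it accepts H1
    lower = math.log(beta / (1.0 - alpha))   # crossing it accepts H0
    llr = 0.0
    for i, x in enumerate(samples, 1):
        # Log-likelihood ratio increment for one Gaussian observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2
        if llr >= upper:
            return "H1", i
        if llr <= lower:
            return "H0", i
    return "undecided", len(samples)
```

Data far from the null mean trigger an alarm after only a couple of observations; data near it are dismissed just as quickly, which is what makes the sequential test attractive for surveillance.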
Sugihara, Masahiro
2010-01-01
In survival analysis, treatment effects are commonly evaluated based on survival curves and hazard ratios as causal treatment effects. In observational studies, these estimates may be biased due to confounding factors. The inverse probability of treatment weighted (IPTW) method based on the propensity score is one of the approaches utilized to adjust for confounding factors between binary treatment groups. As a generalization of this methodology, we developed an exact formula for an IPTW log-rank test based on the generalized propensity score for survival data. This makes it possible to compare the group differences of IPTW Kaplan-Meier estimators of survival curves using an IPTW log-rank test for multi-valued treatments. As causal treatment effects, the hazard ratio can be estimated using the IPTW approach. If the treatments correspond to ordered levels of a treatment, the proposed method can be easily extended to the analysis of treatment effect patterns with contrast statistics. In this paper, the proposed method is illustrated with data from the Kyushu Lipid Intervention Study (KLIS), which investigated the primary preventive effects of pravastatin on coronary heart disease (CHD). The results of the proposed method suggested that pravastatin treatment reduces the risk of CHD and that compliance to pravastatin treatment is important for the prevention of CHD. (c) 2009 John Wiley & Sons, Ltd.
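As a concrete illustration of the weighting step, binary-treatment IPTW weights take the form 1/e(X) for treated and 1/(1 - e(X)) for control subjects, where e(X) is the propensity score; the generalized propensity score extends this to 1/Pr(T = t | X) for multi-valued treatments. A minimal sketch (a hypothetical helper, not the authors' code):

```python
def iptw_weights(treatment, propensity):
    """Inverse probability of treatment weights for a binary treatment.

    treatment: 1 if treated, 0 otherwise.
    propensity: estimated Pr(treated | covariates) for each subject.
    """
    return [1.0 / p if t == 1 else 1.0 / (1.0 - p)
            for t, p in zip(treatment, propensity)]
```

These weights would then enter a weighted Kaplan-Meier estimator or weighted log-rank statistic to compare groups as if confounders were balanced.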
Choi, Sungim; Jung, Kyung Hwa; Son, Hyo-Ju; Lee, Seung Hyun; Hong, Jung Min; Kim, Min Chul; Kim, Min Jae; Chong, Yong Pil; Sung, Heungsup; Lee, Sang-Oh; Choi, Sang-Ho; Kim, Yang Soo; Woo, Jun Hee; Kim, Sung-Han
2018-05-01
The interferon (IFN)-γ release assay for diagnosing tuberculosis (TB) has shown promise; however, there are only a few reports on the usefulness of the QuantiFERON-TB Gold In-Tube test (QFT-GIT) for diagnosing TB vertebral osteomyelitis. All patients presenting at a tertiary hospital between January 2010 and July 2016 with suspected TB vertebral osteomyelitis were retrospectively enrolled to evaluate the diagnostic performance of QFT-GIT. We used QFT-GIT to measure the IFN-γ response to ESAT-6, CFP-10 and TB7.7. A total of 141 patients were enrolled; 32 (23%) were categorized as having confirmed TB, 2 (1%) as probable TB, 14 (10%) as possible TB and 93 (66%) as not TB. Of these, the 16 patients with probable and possible TB were excluded from the final analysis. Chronic granulomas with/without necrosis, acid-fast bacilli stain, M. tuberculosis polymerase chain reaction and cultures for M. tuberculosis were positive in 14 (44%), 12 (38%), 22 (69%) and 28 (88%) patients, respectively, among the 32 patients with confirmed TB. The overall sensitivity, specificity, positive predictive value, negative predictive value, likelihood ratio for a positive result, and likelihood ratio for a negative result of the QFT-GIT for TB vertebral osteomyelitis were 91% (95% confidence interval [CI], 75-98%), 65% (95% CI, 54-75%), 50% (95% CI, 42-58%), 95% (95% CI, 86-98%), 2.59 (95% CI, 1.89-3.55) and 0.14 (95% CI, 0.05-0.43), respectively. The QFT-GIT appears to be a useful adjunct test for diagnosing TB vertebral osteomyelitis because negative test results may be useful for excluding a diagnosis of active TB vertebral osteomyelitis.
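The reported likelihood ratios follow directly from sensitivity and specificity; a quick check in Python (values from the abstract, small discrepancies reflect rounding):

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg
```

For the QFT-GIT figures above (sensitivity 0.91, specificity 0.65) this gives LR+ ≈ 2.6 and LR- ≈ 0.14, consistent with the reported 2.59 and 0.14.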
Gismervik, Sigmund Ø; Drogset, Jon O; Granviken, Fredrik; Rø, Magne; Leivseth, Gunnar
2017-01-25
Physical examination tests of the shoulder (PETS) are clinical examination maneuvers designed to aid the assessment of shoulder complaints. Despite more than 180 PETS described in the literature, evidence of their validity and usefulness in diagnosing the shoulder is questioned. This meta-analysis aims to use diagnostic odds ratio (DOR) to evaluate how much PETS shift overall probability and to rank the test performance of single PETS in order to aid the clinician's choice of which tests to use. This study adheres to the principles outlined in the Cochrane guidelines and the PRISMA statement. A fixed effect model was used to assess the overall diagnostic validity of PETS by pooling DOR for different PETS with similar biomechanical rationale when possible. Single PETS were assessed and ranked by DOR. Clinical performance was assessed by sensitivity, specificity, accuracy and likelihood ratio. Six thousand nine-hundred abstracts and 202 full-text articles were assessed for eligibility; 20 articles were eligible and data from 11 articles could be included in the meta-analysis. All PETS for SLAP (superior labral anterior posterior) lesions pooled gave a DOR of 1.38 [1.13, 1.69]. The Supraspinatus test for any full thickness rotator cuff tear obtained the highest DOR of 9.24 (sensitivity was 0.74, specificity 0.77). Compression-Rotation test obtained the highest DOR (6.36) among single PETS for SLAP lesions (sensitivity 0.43, specificity 0.89) and Hawkins test obtained the highest DOR (2.86) for impingement syndrome (sensitivity 0.58, specificity 0.67). No single PETS showed superior clinical test performance. The clinical performance of single PETS is limited. However, when the different PETS for SLAP lesions were pooled, we found a statistical significant change in post-test probability indicating an overall statistical validity. 
We suggest that clinicians choose their PETS among those with the highest pooled DOR and, to assess validity in their own specific clinical settings, review the inclusion criteria of the included primary studies. We further propose that future studies on the validity of PETS use randomized research designs, which rely less on well-established gold-standard reference tests and efficient treatment options, rather than the accuracy design.
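A DOR combines sensitivity and specificity into a single figure of merit; a minimal sketch (small differences from the pooled values in the abstract reflect rounding and the pooling model):

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (sens * spec) / ((1 - sens) * (1 - spec)),
    i.e. the odds of a positive test in the diseased divided by
    the odds of a positive test in the non-diseased."""
    return (sensitivity * specificity) / ((1.0 - sensitivity) * (1.0 - specificity))
```

For the Supraspinatus test (sensitivity 0.74, specificity 0.77) this gives a DOR of about 9.5, close to the pooled 9.24 reported above.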
Hypothesis tests for the detection of constant speed radiation moving sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir
2015-07-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single- and multichannel detection algorithms, which are inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated means and variances of the signals delivered by the different channels have shown significant gains in terms of the tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter regardless of signal-to-noise ratio variations between 2 and 0.8. (authors)
NASA Astrophysics Data System (ADS)
Hermans, Julie; André, Luc; Navez, Jacques; Pernet, Philippe; Dubois, Philippe
2011-03-01
Biogenic calcites may contain considerable magnesium concentrations, significantly higher than those observed in inorganic calcites. Control of ion concentrations in the calcifying space by transport systems and properties of the organic matrix of mineralization are probably involved in the incorporation of high magnesium quantities in biogenic calcites, but their relative effects have never been quantified. In vitro precipitation experiments performed at different Mg/Ca ratios in the solution and in the presence of soluble organic matrix macromolecules (SOM) extracted from sea urchin tests and spines showed that, at a constant temperature, magnesium incorporation in the precipitated minerals was mainly dependent on the Mg/Ca ratio of the solution. However, a significant increase in magnesium incorporation was observed in the presence of SOM compared with control experiments. Furthermore, this effect was more pronounced with SOM extracted from the test, which was richer in magnesium than the spines. According to SEM observations, amorphous calcium carbonate was precipitated at high solution Mg/Ca ratios. The observed predominant effect of the solution Mg/Ca ratio, probably mediated in vivo by ion transport to and from the calcifying space, was suggested to induce and stabilize a transient magnesium-rich amorphous phase essential to the formation of high magnesium calcites. Aspartic acid rich proteins, shown to be more abundant in the test than in the spine matrix, further stabilize this amorphous phase. The involvement of the organic matrix in this process can explain the observation that sympatric organisms or even different skeletal elements of the same individual present different skeletal magnesium concentrations.
Accuracy of gestalt perception of acute chest pain in predicting coronary artery disease
das Virgens, Cláudio Marcelo Bittencourt; Lemos Jr, Laudenor; Noya-Rabelo, Márcia; Carvalhal, Manuela Campelo; Cerqueira Junior, Antônio Maurício dos Santos; Lopes, Fernanda Oliveira de Andrade; de Sá, Nicole Cruz; Suerdieck, Jéssica Gonzalez; de Souza, Thiago Menezes Barbosa; Correia, Vitor Calixto de Almeida; Sodré, Gabriella Sant'Ana; da Silva, André Barcelos; Alexandre, Felipe Kalil Beirão; Ferreira, Felipe Rodrigues Marques; Correia, Luís Cláudio Lemos
2017-01-01
AIM To test the accuracy and reproducibility of gestalt in predicting obstructive coronary artery disease (CAD) in patients with acute chest pain. METHODS We studied individuals who were consecutively admitted to our Chest Pain Unit. At admission, investigators performed a standardized interview and recorded 14 chest pain features. Based on these features, a cardiologist who was blind to other clinical characteristics made an unstructured judgment of CAD probability, both numerically and categorically. As the reference standard for testing the accuracy of gestalt, angiography was required to rule in CAD, while either angiography or a non-invasive test could be used to rule it out. In order to assess reproducibility, a second cardiologist performed the same procedure. RESULTS In a sample of 330 patients, the prevalence of obstructive CAD was 48%. Gestalt’s numerical probability was associated with CAD, but the area under the curve of 0.61 (95%CI: 0.55-0.67) indicated a low level of accuracy. Accordingly, the categorical definition of typical chest pain had a sensitivity of 48% (95%CI: 40%-55%) and specificity of 66% (95%CI: 59%-73%), yielding a negligible positive likelihood ratio of 1.4 (95%CI: 0.65-2.0) and negative likelihood ratio of 0.79 (95%CI: 0.62-1.02). Agreement between the two cardiologists was poor in the numerical classification (95% limits of agreement = -71% to 51%) and in the categorical definition of typical pain (Kappa = 0.29; 95%CI: 0.21-0.37). CONCLUSION Clinical judgment based on a combination of chest pain features is neither accurate nor reproducible in predicting obstructive CAD in the acute setting. PMID:28400920
The Quantitative Science of Evaluating Imaging Evidence.
Genders, Tessa S S; Ferket, Bart S; Hunink, M G Myriam
2017-03-01
Cardiovascular diagnostic imaging tests are increasingly used in everyday clinical practice, but are often imperfect, just like any other diagnostic test. The performance of a cardiovascular diagnostic imaging test is usually expressed in terms of sensitivity and specificity compared with the reference standard (gold standard) for diagnosing the disease. However, evidence-based application of a diagnostic test also requires knowledge about the pre-test probability of disease, the benefit of making a correct diagnosis, the harm caused by false-positive imaging test results, and potential adverse effects of performing the test itself. To assist in clinical decision making regarding appropriate use of cardiovascular diagnostic imaging tests, we reviewed quantitative concepts related to diagnostic performance (e.g., sensitivity, specificity, predictive values, likelihood ratios), as well as possible biases and solutions in diagnostic performance studies, Bayesian principles, and the threshold approach to decision making. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
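The Bayesian principle this review describes, combining a pre-test probability with a test's likelihood ratio to obtain a post-test probability, is most cleanly stated in odds form. A minimal sketch (the 20% pre-test probability and LR+ of 5 below are illustrative numbers, not values from the paper):

```python
# Sketch: Bayes' theorem in odds form for diagnostic testing.
# post-test odds = pre-test odds * likelihood ratio

def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

def posttest_probability(pretest_prob, likelihood_ratio):
    """Update a pre-test probability with a test result's likelihood ratio."""
    return odds_to_prob(prob_to_odds(pretest_prob) * likelihood_ratio)

# e.g. 20% pre-test probability, positive result on a test with LR+ = 5
print(round(posttest_probability(0.20, 5.0), 3))  # prints 0.556
```

The same update applies for a negative result by substituting the negative likelihood ratio, which is why tests with LR near 1 (in either direction) barely move the probability.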
Jeon, Soyoung; Paciorek, Christopher J.; Wehner, Michael F.
2016-02-16
Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational datasets and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinity. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
Dynamical study of low Earth orbit debris collision avoidance using ground based laser
NASA Astrophysics Data System (ADS)
Khalifa, N. S.
2015-06-01
The objective of this paper was to investigate the orbital velocity changes due to the effect of a ground-based laser force. The resulting perturbations of the semi-major axis, miss distance and collision probability of two approaching objects are studied. The analytical model is applied to low Earth orbit debris of different eccentricities and area-to-mass ratios, and the numerical test shows that a laser of medium power (∼5 kW) can produce a small velocity change Δv̄ of average magnitude 0.2 cm/s, which can be accumulated over time to about 3 cm/day. Moreover, it is confirmed that applying the laser Δv̄ decreases collision probability and increases miss distance, helping to avoid collision.
Santos-Ciminera, Patricia D; Acheé, Nicole L; Quinnan, Gerald V; Roberts, Donald R
2004-09-01
We evaluated polymerase chain reaction (PCR) to confirm immunoassays for malaria parasites in mosquito pools after a failure to detect malaria with PCR during an outbreak in which pools tested positive using VecTest and enzyme-linked immunosorbent assay (ELISA). We combined VecTest, ELISA, and PCR to detect Plasmodium falciparum and Plasmodium vivax VK 210. Each mosquito pool, prepared in triplicate, consisted of 1 exposed Anopheles stephensi and up to 9 unfed mosquitoes. The results of VecTest and ELISA were concordant. DNA from a subset of the pools, 1 representative of each ratio of infected to uninfected mosquitoes, was extracted and used as template in PCR. All P. vivax pools were PCR positive but some needed additional processing for removal of apparent inhibitors before positive results were obtained. One of the pools selected for P. falciparum was negative by PCR, probably because of losses or contamination during DNA extraction; 2 remaining pools at this ratio were PCR positive. Testing pools by VecTest, ELISA, and PCR is feasible, and PCR is useful for confirmation of immunoassays. An additional step might be needed to remove potential inhibitors from pools prior to PCR.
Paternity testing that involves a DNA mixture.
Mortera, Julia; Vecchiotti, Carla; Zoppis, Silvia; Merigioli, Sara
2016-07-01
Here we analyse a complex disputed paternity case, where the DNA of the putative father was extracted from his corpse that had been inhumed for over 20 years. This DNA was contaminated and appears to be a mixture of at least two individuals. Furthermore, the mother's DNA was not available. The DNA mixture was analysed so as to predict the most probable genotypes of each contributor. The major contributor's profile was then used to compute the likelihood ratio for paternity. We also show how to take into account a dropout allele and the possibility of mutation in paternity testing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Chang, W H; Lin, S K; Jann, M W; Lam, Y W; Chen, T Y; Chen, C T; Hu, W H; Yeh, E K
1989-07-01
Twelve male chronic schizophrenic inpatients, neuroleptic-free for at least 4 weeks, were given an oral test dose of 10 mg haloperidol (HAL) and reduced HAL (RHAL) in a random order, with a 2-week interval. Two weeks after the last test dose, the patients were given HAL, 5 mg orally twice daily for 7 days. Blood samples were drawn at baseline and between 0.5 and 24 hr after the test doses, and during HAL treatment as well. Plasma drug concentrations and homovanillic acid (HVA) levels were measured with high-performance liquid chromatography using electrochemical detection. HAL, but not RHAL, produced increments in plasma HVA (pHVA) levels at 24 hr after a test dose. pHVA levels remained higher than baseline during HAL treatment. Detectable interconversion between HAL and RHAL was observed in eight patients. The capacity of the reductive drug-metabolizing enzyme system, however, was greater than that of the oxidative processes. The plasma RHAL:HAL ratios on days 6 and 7 were higher than and positively correlated with those at Tmax after a single dose of HAL and were negatively correlated with the HAL:RHAL ratios at Tmax after a single dose of RHAL. Thus, both reductive and oxidative drug-metabolizing systems probably contribute to individual differences in plasma RHAL:HAL ratios in HAL-treated schizophrenic patients.
Internal Medicine residents use heuristics to estimate disease probability.
Phang, Sen Han; Ravani, Pietro; Schaefer, Jeffrey; Wright, Bruce; McLaughlin, Kevin
2015-01-01
Training in Bayesian reasoning may have limited impact on the accuracy of probability estimates. In this study, our goal was to explore whether residents previously exposed to Bayesian reasoning use heuristics rather than Bayesian reasoning to estimate disease probabilities. We predicted that if residents use heuristics, then post-test probability estimates would be increased by non-discriminating clinical features or a high anchor for a target condition. We randomized 55 Internal Medicine residents to different versions of four clinical vignettes and asked them to estimate probabilities of target conditions. We manipulated the clinical data for each vignette to be consistent with either 1) use of the representative heuristic, by adding non-discriminating prototypical clinical features of the target condition, or 2) use of the anchoring-with-adjustment heuristic, by providing a high or low anchor for the target condition. When presented with additional non-discriminating data, the odds of diagnosing the target condition were increased (odds ratio (OR) 2.83, 95% confidence interval [1.30, 6.15], p = 0.009). Similarly, the odds of diagnosing the target condition were increased when a high anchor preceded the vignette (OR 2.04, [1.09, 3.81], p = 0.025). Our findings suggest that despite previous exposure to the use of Bayesian reasoning, residents use heuristics, such as the representative heuristic and anchoring with adjustment, to estimate probabilities. Potential reasons for attribute substitution include the relative cognitive ease of heuristics vs. Bayesian reasoning, or the possibility that residents in their clinical practice use gist traces rather than precise probability estimates when diagnosing.
Suspected pulmonary embolism and lung scan interpretation: Trial of a Bayesian reporting method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, D.M.; Philbrick, J.T.; Schoonover, F.W.
The objective of this research is to determine whether a Bayesian method of lung scan (LS) reporting could influence the management of patients with suspected pulmonary embolism (PE). The study is performed by the following: (1) A descriptive study of the diagnostic process for suspected PE using the new reporting method; (2) a non-experimental evaluation of the reporting method comparing prospective patients and historical controls; and (3) a survey of physicians' reactions to the reporting innovation. Of 148 consecutive patients enrolled at the time of LS, 129 were completely evaluated; 75 patients scanned the previous year served as controls. The LS results of patients with suspected PE were reported as posttest probabilities of PE calculated from physician-provided pretest probabilities and the likelihood ratios for PE of LS interpretations. Despite the Bayesian intervention, the confirmation or exclusion of PE was often based on inconclusive evidence. PE was considered by the clinician to be ruled out in 98% of patients with posttest probabilities less than 25% and ruled in for 95% of patients with posttest probabilities greater than 75%. Prospective patients and historical controls were similar in terms of tests ordered after the LS (e.g., pulmonary angiography). Patients with intermediate or indeterminate lung scan results had the highest proportion of subsequent testing. Most physicians (80%) found the reporting innovation to be helpful, either because it confirmed clinical judgement (94 cases) or because it led to additional testing (7 cases). Despite the probabilistic guidance provided by the study, the diagnosis of PE was often neither clearly established nor excluded. While physicians appreciated the innovation and were not confused by the terminology, their clinical decision making was not clearly enhanced.
Elmer, Jonathan; Scutella, Michael; Pullalarevu, Raghevesh; Wang, Bo; Vaghasia, Nishit; Trzeciak, Stephen; Rosario-Rivera, Bedda L.; Guyette, Francis X.; Rittenberger, Jon C.; Dezfulian, Cameron
2014-01-01
Purpose Previous observational studies have inconsistently associated early hyperoxia with worse outcomes after cardiac arrest and have methodological limitations. We tested this association using a high-resolution database controlling for multiple disease-specific markers of severity of illness and care processes. Methods This was a retrospective analysis of a single-center, prospective registry of consecutive cardiac arrest patients. We included patients who survived and were mechanically ventilated ≥24h after arrest. Our main exposure was arterial oxygen tension (PaO2), which we categorized hourly for 24 hours as severe hyperoxia (>300mmHg), moderate or probable hyperoxia (101-299mmHg), normoxia (60-100mmHg) or hypoxia (<60mmHg). We controlled for Utstein-style covariates, markers of disease severity and markers of care responsiveness. We performed unadjusted and multiple logistic regression to test the association between oxygen exposure and survival to discharge, and used ordered logistic regression to test the association of oxygen exposure with neurological outcome and Sequential Organ Failure Assessment (SOFA) score at 24h. Results Of 184 patients, 36% were exposed to severe hyperoxia and overall mortality was 54%. Severe hyperoxia, but not moderate or probable hyperoxia, was associated with decreased survival in both unadjusted and adjusted analysis (adjusted odds ratio (OR) for survival 0.83 per hour exposure, P=0.04). Moderate or probable hyperoxia was not associated with survival but was associated with improved SOFA score at 24h (OR 0.92, P<0.01). Conclusion Severe hyperoxia was independently associated with decreased survival to hospital discharge. Moderate or probable hyperoxia was not associated with decreased survival and was associated with improved organ function at 24h. PMID:25472570
Wade, Darryl; Varker, Tracey; Forbes, David; O'Donnell, Meaghan
2014-01-01
The Alcohol Use Disorders Identification Test-Consumption (AUDIT-C) is a brief alcohol screening test and a candidate for inclusion in recommended screening and brief intervention protocols for acute injury patients. The objective of the current study was to examine the performance of the AUDIT-C to risk stratify injury patients with regard to their probability of having an alcohol use disorder. Participants (n = 1,004) were from a multisite Australian acute injury study. Stratum-specific likelihood ratio (SSLR) analysis was used to examine the performance of previously recommended AUDIT-C risk zones based on a dichotomous cut-point (0 to 3, 4 to 12) and risk zones derived from SSLR analysis to estimate the probability of a current alcohol use disorder. Almost a quarter (23%) of patients met criteria for a current alcohol use disorder. SSLR analysis identified multiple AUDIT-C risk zones (0 to 3, 4 to 5, 6, 7 to 8, 9 to 12) with a wide range of posttest probabilities of alcohol use disorder, from 5 to 68%. The area under receiver operating characteristic curve (AUROC) score was 0.82 for the derived AUDIT-C zones and 0.70 for the recommended AUDIT-C zones. A comparison between AUROCs revealed that overall the derived zones performed significantly better than the recommended zones in being able to discriminate between patients with and without alcohol use disorder. The findings of SSLR analysis can be used to improve estimates of the probability of alcohol use disorder in acute injury patients based on AUDIT-C scores. In turn, this information can inform clinical interventions and the development of screening and intervention protocols in a range of settings. Copyright © 2013 by the Research Society on Alcoholism.
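The stratum-specific likelihood ratio (SSLR) analysis described above compares how often cases and non-cases fall into each score band, then converts that ratio into a post-test probability. A sketch with invented counts (these are not the study's data; only the 23% prevalence is taken from the abstract):

```python
# Sketch: stratum-specific likelihood ratio (SSLR) for a score band,
# and the resulting post-test probability of the disorder.

def sslr(cases_in_stratum, total_cases, controls_in_stratum, total_controls):
    """SSLR = P(score in stratum | disorder) / P(score in stratum | no disorder)."""
    return (cases_in_stratum / total_cases) / (controls_in_stratum / total_controls)

def posttest_prob(pretest_prob, lr):
    """Bayes' update in odds form."""
    odds = pretest_prob / (1 - pretest_prob) * lr
    return odds / (1 + odds)

# Hypothetical example: 40 of 230 cases vs 31 of 774 controls score in one band
lr = sslr(40, 230, 31, 774)
print(round(posttest_prob(0.23, lr), 2))  # prints 0.56
```

This is why multiple derived zones outperform a single cut-point: each band carries its own likelihood ratio, so post-test probabilities can range widely (5% to 68% in the study) rather than collapsing to two values.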
Surveillance of industrial processes with correlated parameters
White, A.M.; Gross, K.C.; Kubic, W.L.; Wigeland, R.A.
1996-12-17
A system and method for surveillance of an industrial process are disclosed. The system and method includes a plurality of sensors monitoring industrial process parameters, devices to convert the sensed data to computer compatible information and a computer which executes computer software directed to analyzing the sensor data to discern statistically reliable alarm conditions. The computer software is executed to remove serial correlation information and then calculate Mahalanobis distribution data to carry out a probability ratio test to determine alarm conditions. 10 figs.
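The pipeline this patent describes removes serial correlation, computes a Mahalanobis-type distance of the multivariate sensor vector from its baseline distribution, and feeds that statistic to a probability ratio test. The distance step can be sketched in miniature for a two-dimensional sensor vector; this pure-Python illustration is ours, not the patented implementation:

```python
# Sketch: squared Mahalanobis distance of a 2-D sensor observation
# from a baseline mean, given the baseline covariance matrix.

def mahalanobis_sq_2d(x, mean, cov):
    """Squared Mahalanobis distance for a two-dimensional observation."""
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # apply the inverse of the 2x2 covariance to the deviation vector
    ix = (d * dx - b * dy) / det
    iy = (-c * dx + a * dy) / det
    return dx * ix + dy * iy

cov = [[2.0, 0.5], [0.5, 1.0]]  # correlated sensor noise
print(round(mahalanobis_sq_2d([1.0, 1.0], [0.0, 0.0], cov), 4))  # prints 1.1429
```

Because the distance accounts for correlation between sensors, a jointly unusual reading can trigger an alarm even when each channel individually looks normal.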
Yuan, Chao; Wang, Xue-Min; Galzote, Carlos; Tan, Yi-Mei; Bhagat, Kamlesh V; Yuan, Zhi-Kang; Du, Jian-Fei; Tan, Yuan
2013-06-01
Human repeated insult patch test (HRIPT) is regarded as one of the confirmatory tests in determining the safety of skin sensitizers. A number of important factors should be considered when conducting and interpreting the results of the HRIPT. To investigate probable critical factors that influence the results of HRIPT performed with the same protocol in Shanghai and Mumbai, two HRIPTs were carried out in Shanghai and Mumbai in 2011. Six identical products and 1% sodium lauryl sulfate were tested. Two Chinese dermatologists performed the grading in the two cities. Climate conditions of Shanghai and Mumbai were also recorded. For the four lower-reaction-ratio products, cumulative irritation scores in the induction phase were higher in individuals whose ethnicity was Indian rather than Chinese. The reaction ratio of the same four products was highly correlated with the climatic parameters. The other two higher-reaction-ratio products and the positive control showed no difference between the two ethnicities. Greater attention ought to be paid to the impact of climate on the results of HRIPT, especially for mildly irritating cosmetics, when interpreting the results. Greater emphasis also needs to be placed on the ethnicity of the subjects. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
Dietz, Pavel; Quermann, Anne; van Poppel, Mireille Nicoline Maria; Striegel, Heiko; Schröter, Hannes; Ulrich, Rolf; Simon, Perikles
2018-01-01
In order to increase the value of randomized response techniques (RRTs) as tools for studying sensitive issues, the present study investigated whether the prevalence estimate for a sensitive item assessed with the unrelated questionnaire method (UQM) is influenced by changing the probability of receiving the sensitive question p. A short paper-and-pencil questionnaire was distributed to 1,243 university students assessing the 12-month prevalence of physical and cognitive doping using two versions of the UQM with different probabilities of receiving the sensitive question (p ≈ 1/3 and p ≈ 2/3). Likelihood ratio tests were used to assess whether the prevalence estimates for physical and cognitive doping differed significantly between p ≈ 1/3 and p ≈ 2/3. The order of questions (physical doping and cognitive doping) as well as the probability of receiving the sensitive question (p ≈ 1/3 or p ≈ 2/3) were counterbalanced across participants. Statistical power analyses were performed to determine sample size. The prevalence estimate for physical doping with p ≈ 1/3 was 22.5% (95% CI: 10.8-34.1), and 12.8% (95% CI: 7.6-18.0) with p ≈ 2/3. For cognitive doping with p ≈ 1/3, the estimated prevalence was 22.5% (95% CI: 11.0-34.1), whereas it was 18.0% (95% CI: 12.5-23.5) with p ≈ 2/3. Likelihood-ratio tests revealed that prevalence estimates for both physical and cognitive doping, respectively, did not differ significantly under p ≈ 1/3 and p ≈ 2/3 (physical doping: χ2 = 2.25, df = 1, p = 0.13; cognitive doping: χ2 = 0.49, df = 1, p = 0.48). Bayes factors computed with the Savage-Dickey method favored the null ("the prevalence estimates are identical under p ≈ 1/3 and p ≈ 2/3") over the alternative ("the prevalence estimates differ under p ≈ 1/3 and p ≈ 2/3") hypothesis for both physical doping (BF = 2.3) and cognitive doping (BF = 5.3). The present results suggest that prevalence estimates for physical and cognitive doping assessed by the UQM are largely unaffected by the probability of receiving the sensitive question p.
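The UQM point estimate behind this design can be sketched as follows. Each respondent answers the sensitive question with probability p and an innocuous unrelated question (with known prevalence) otherwise, so the sensitive prevalence is recovered by inverting the mixture. The numbers below are illustrative, not the study's data:

```python
# Sketch: moment estimator for the unrelated questionnaire method (UQM).
# Observed "yes" proportion: lam = p * pi_s + (1 - p) * pi_u,
# so the sensitive-item prevalence is pi_s = (lam - (1 - p) * pi_u) / p.

def uqm_prevalence(lam, p, pi_u):
    """Estimate the sensitive-item prevalence from the observed yes-rate."""
    return (lam - (1 - p) * pi_u) / p

# e.g. 40% observed "yes", p = 1/3, unrelated question with 50% prevalence
print(round(uqm_prevalence(0.40, 1 / 3, 0.50), 3))  # prints 0.2
```

Because no individual answer reveals which question was answered, respondents gain plausible deniability, which is the point of the technique; the cost is a larger variance, which is why the study's power analysis matters.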
Michener, Lori A.; Doukas, William C.; Murphy, Kevin P.; Walsworth, Matthew K.
2011-01-01
Context: Type I superior labrum anterior-posterior (SLAP) lesions involve degenerative fraying and probably are not the cause of shoulder pain. Type II to IV SLAP lesions are tears of the labrum. Objective: To determine the diagnostic accuracy of patient history and the active compression, anterior slide, and crank tests for type I and type II to IV SLAP lesions. Design: Cohort study. Setting: Clinic. Patients or Other Participants: Fifty-five patients (47 men, 8 women; age = 40.6 ± 15.1 years) presenting with shoulder pain. Intervention(s): For each patient, an orthopaedic surgeon conducted a clinical examination of history of trauma; sudden onset of symptoms; history of popping, clicking, or catching; age; and active compression, crank, and anterior slide tests. The reference standard was the intraoperative diagnosis. The operating surgeon was blinded to the results of the clinical examination. Main Outcome Measure(s): Diagnostic utility was calculated using the receiver operating characteristic curve and area under the curve (AUC), sensitivity, specificity, positive likelihood ratio (+LR), and negative likelihood ratio (−LR). Forward stepwise binary regression was used to determine a combination of tests for diagnosis. Results: No history item or physical examination test had diagnostic accuracy for type I SLAP lesions (n = 13). The anterior slide test had utility (AUC = 0.70, +LR = 2.25, −LR = 0.44) to confirm and exclude type II to IV SLAP lesions (n = 10). The combination of a history of popping, clicking, or catching and the anterior slide test demonstrated diagnostic utility for confirming type II to IV SLAP lesions (+LR = 6.00). Conclusions: The anterior slide test had limited diagnostic utility for confirming and excluding type II to IV SLAP lesions; diagnostic values indicated only small shifts in probability. 
However, the combination of the anterior slide test with a history of popping, clicking, or catching had moderate diagnostic utility for confirming type II to IV SLAP lesions. No single item or combination of history items and physical examination tests had diagnostic utility for type I SLAP lesions. PMID:21944065
Blyton, Michaela D J; Banks, Sam C; Peakall, Rod; Lindenmayer, David B
2012-02-01
The formal testing of mating system theories with empirical data is important for evaluating the relative importance of different processes in shaping mating systems in wild populations. Here, we present a generally applicable probability modelling framework to test the role of local mate availability in determining a population's level of genetic monogamy. We provide a significance test for detecting departures in observed mating patterns from model expectations based on mate availability alone, allowing the presence and direction of behavioural effects to be inferred. The assessment of mate availability can be flexible and in this study it was based on population density, sex ratio and spatial arrangement. This approach provides a useful tool for (1) isolating the effect of mate availability in variable mating systems and (2) in combination with genetic parentage analyses, gaining insights into the nature of mating behaviours in elusive species. To illustrate this modelling approach, we have applied it to investigate the variable mating system of the mountain brushtail possum (Trichosurus cunninghami) and compared the model expectations with the outcomes of genetic parentage analysis over an 18-year study. The observed level of monogamy was higher than predicted under the model. Thus, behavioural traits, such as mate guarding or selective mate choice, may increase the population level of monogamy. We show that combining genetic parentage data with probability modelling can facilitate an improved understanding of the complex interactions between behavioural adaptations and demographic dynamics in driving mating system variation. © 2011 Blackwell Publishing Ltd.
Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi
2011-06-01
For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
Detection performance in clutter with variable resolution
NASA Astrophysics Data System (ADS)
Schmieder, D. E.; Weathersby, M. R.
1983-07-01
Experiments were conducted to determine the influence of background clutter on target detection criteria. The experiment consisted of placing observers in front of displayed images on a TV monitor. Observer ability to detect military targets embedded in simulated natural and manmade background clutter was measured when there was unlimited viewing time. Results were described in terms of detection probability versus target resolution for various signal to clutter ratios (SCR). The experiments were preceded by a search for a meaningful clutter definition. The selected definition was a statistical measure computed by averaging the standard deviation of contiguous scene cells over the whole scene. The cell size was comparable to the target size. Observer test results confirmed the expectation that the resolution required for a given detection probability was a continuum function of the clutter level. At the lower SCRs the resolution required for a high probability of detection was near 6 line pairs per target (LP/TGT), while at the higher SCRs it was found that a resolution of less than 0.25 LP/TGT would yield a high probability of detection. These results are expected to aid in target acquisition performance modeling and to lead to improved specifications for imaging automatic target screeners.
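The clutter definition used in these experiments, averaging the standard deviation of contiguous scene cells of roughly target size, is simple enough to sketch directly. The scene values and cell size below are invented for illustration:

```python
# Sketch: clutter metric as the mean standard deviation over
# contiguous cell x cell blocks of a scene (cell ~ target size).
import statistics

def clutter_metric(scene, cell):
    """Average per-cell standard deviation over the whole scene."""
    rows, cols = len(scene), len(scene[0])
    stds = []
    for r in range(0, rows - rows % cell, cell):
        for c in range(0, cols - cols % cell, cell):
            block = [scene[i][j]
                     for i in range(r, r + cell)
                     for j in range(c, c + cell)]
            stds.append(statistics.pstdev(block))
    return sum(stds) / len(stds)

scene = [[0, 10, 0, 10],
         [10, 0, 10, 0],
         [0, 10, 0, 10],
         [10, 0, 10, 0]]
print(clutter_metric(scene, 2))  # prints 5.0
```

The signal-to-clutter ratio then divides target contrast by this metric, which is why the same target needs far more resolution to be found in a busy scene than in a bland one.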
An Experiment Quantifying The Effect Of Clutter On Target Detection
NASA Astrophysics Data System (ADS)
Weathersby, Marshall R.; Schmieder, David E.
1985-01-01
Experiments were conducted to determine the influence of background clutter on target detection criteria. The experiment consisted of placing observers in front of displayed images on a TV monitor. Observer ability to detect military targets embedded in simulated natural and manmade background clutter was measured when there was unlimited viewing time. Results were described in terms of detection probability versus target resolution for various signal to clutter ratios (SCR). The experiments were preceded by a search for a meaningful clutter definition. The selected definition was a statistical measure computed by averaging the standard deviation of contiguous scene cells over the whole scene. The cell size was comparable to the target size. Observer test results confirmed the expectation that the resolution required for a given detection probability was a continuum function of the clutter level. At the lower SCRs the resolution required for a high probability of detection was near 6 line pairs per target (LP/TGT), while at the higher SCRs it was found that a resolution of less than 0.25 LP/TGT would yield a high probability of detection. These results are expected to aid in target acquisition performance modeling and to lead to improved specifications for imaging automatic target screeners.
Zhang, Hui-Jie; Han, Peng; Sun, Su-Yun; Wang, Li-Ying; Yan, Bing; Zhang, Jin-Hua; Zhang, Wei; Yang, Shu-Yu; Li, Xue-Jun
2013-01-01
Obesity is related to hyperlipidemia and risk of cardiovascular disease. Health benefits of vegetarian diets have been well documented in Western countries, where both obesity and hyperlipidemia are prevalent. We studied the association between BMI and various lipid/lipoprotein measures, as well as between BMI and predicted coronary heart disease probability, in lean, low-risk populations in Southern China. The study included 170 Buddhist monks (vegetarians) and 126 omnivore men. Interaction between BMI and vegetarian status was tested in the multivariable regression analysis adjusting for age, education, smoking, alcohol drinking, and physical activity. Compared with omnivores, vegetarians had significantly lower mean BMI, blood pressures, total cholesterol, low density lipoprotein cholesterol, high density lipoprotein cholesterol, total cholesterol to high density lipoprotein ratio, triglycerides, apolipoprotein B and A-I, as well as lower predicted probability of coronary heart disease. Higher BMI was associated with an unfavorable lipid/lipoprotein profile and predicted probability of coronary heart disease in both vegetarians and omnivores. However, the associations were significantly diminished in Buddhist vegetarians. Vegetarian diets not only lower BMI, but also attenuate the BMI-related increases in atherogenic lipid/lipoprotein measures and in the predicted probability of coronary heart disease.
The FERRUM Project: Experimental Transition Probabilities of [Fe II] and Astrophysical Applications
NASA Technical Reports Server (NTRS)
Hartman, H.; Derkatch, A.; Donnelly, M. P.; Gull, T.; Hibbert, A.; Johannsson, S.; Lundberg, H.; Mannervik, S.; Norlin, L. -O.; Rostohar, D.
2002-01-01
We report on experimental transition probabilities for thirteen forbidden [Fe II] lines originating from three different metastable Fe II levels. Radiative lifetimes of two metastable states have been measured by applying a laser probing technique on a stored ion beam. Branching ratios for the radiative decay channels, i.e. M1 and E2 transitions, are derived from observed intensity ratios of forbidden lines in astrophysical spectra and compared with theoretical data. The lifetimes and branching ratios are combined to derive absolute transition probabilities, A-values. We present the first experimental lifetime values for the two Fe II levels a(sup 4)G(sub 9/2) and b(sup 2)H(sub 11/2) and A-values for 13 forbidden transitions from a(sup 6)S(sub 5/2), a(sup 4)G(sub 9/2) and b(sup 4)D(sub 7/2) in the optical region. A discrepancy between the measured and calculated values of the lifetime for the b(sup 2)H(sub 11/2) level is discussed in terms of level mixing. We have used the code CIV3 to calculate transition probabilities of the a(sup 6)D-a(sup 6)S transitions. We have also studied observational branching ratios for lines from 5 other metastable Fe II levels and compared them to calculated values. The consistency of the deviation between calibrated observational intensity ratios and theoretical branching ratios for lines over a wider wavelength region supports the use of [Fe II] lines for determination of reddening.
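The way lifetimes and branching ratios combine into A-values can be illustrated with a minimal sketch: the measured lifetime fixes the level's total decay rate, and the branching ratios partition it among the individual lines. The numbers below are hypothetical, not the measured Fe II values:

```python
def a_values(lifetime_s, branching_ratios):
    """Absolute transition probabilities (A-values, in s^-1) for the
    decay channels of a metastable level: the radiative lifetime gives
    the total decay rate 1/tau, which the branching ratios split among
    the individual forbidden lines."""
    total_rate = 1.0 / lifetime_s
    return [br * total_rate for br in branching_ratios]

# Hypothetical example: a 0.5 s lifetime with two channels at 70% / 30%
print(a_values(0.5, [0.7, 0.3]))
```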
Aalbers, Jolien; O'Brien, Kirsty K; Chan, Wai-Sun; Falk, Gavin A; Teljeur, Conor; Dimitrov, Borislav D; Fahey, Tom
2011-06-01
Stratifying patients with a sore throat into the probability of having an underlying bacterial or viral cause may be helpful in targeting antibiotic treatment. We sought to assess the diagnostic accuracy of signs and symptoms and validate a clinical prediction rule (CPR), the Centor score, for predicting group A β-haemolytic streptococcal (GABHS) pharyngitis in adults (> 14 years of age) presenting with sore throat symptoms. A systematic literature search was performed up to July 2010. Studies that assessed the diagnostic accuracy of signs and symptoms and/or validated the Centor score were included. For the analysis of the diagnostic accuracy of signs and symptoms and the Centor score, studies were combined using a bivariate random effects model, while for the calibration analysis of the Centor score, a random effects model was used. A total of 21 studies incorporating 4,839 patients were included in the meta-analysis on diagnostic accuracy of signs and symptoms. The results were heterogeneous and suggest that individual signs and symptoms generate only small shifts in post-test probability (range positive likelihood ratio (+LR) 1.45-2.33, -LR 0.54-0.72). As a decision rule for considering antibiotic prescribing (score ≥ 3), the Centor score has reasonable specificity (0.82, 95% CI 0.72 to 0.88) and a post-test probability of 12% to 40% based on a prior prevalence of 5% to 20%. Pooled calibration shows no significant difference between the numbers of patients predicted and observed to have GABHS pharyngitis across strata of Centor score (0-1 risk ratio (RR) 0.72, 95% CI 0.49 to 1.06; 2-3 RR 0.93, 95% CI 0.73 to 1.17; 4 RR 1.14, 95% CI 0.95 to 1.37). Individual signs and symptoms are not powerful enough to discriminate GABHS pharyngitis from other types of sore throat. The Centor score is a well calibrated CPR for estimating the probability of GABHS pharyngitis. 
The Centor score can enhance appropriate prescribing of antibiotics, but should be used with caution in low prevalence settings of GABHS pharyngitis such as primary care.
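The post-test probabilities quoted above follow from Bayes' rule in odds form. A minimal sketch, using an illustrative +LR of 2.6 (chosen to be roughly consistent with the 12% to 40% range quoted for a 5% to 20% prior; it is not a value reported by the review):

```python
def post_test_probability(pre_test_p, likelihood_ratio):
    """Bayes' rule in odds form: post-test odds = pre-test odds x LR."""
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# A 5-20% prior prevalence maps to roughly a 12-39% post-test probability
for prevalence in (0.05, 0.20):
    print(round(post_test_probability(prevalence, 2.6), 2))
```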
Chen, Chunyi; Yang, Huamin
2016-08-22
The changes in the radial content of orbital-angular-momentum (OAM) photonic states described by Laguerre-Gaussian (LG) modes with a radial index of zero, suffering from turbulence-induced distortions, are explored by numerical simulations. For a single-photon field with a given LG mode propagating through weak-to-strong atmospheric turbulence, both the average LG and OAM mode densities depend only on two nondimensional parameters, i.e., the Fresnel ratio and the coherence-width-to-beam-radius (CWBR) ratio. It is found that atmospheric turbulence causes radially-adjacent-mode mixing, besides azimuthally-adjacent-mode mixing, in the propagated photonic states; the former is weaker than the latter. With the same Fresnel ratio, the probabilities that a photon can be found in the zero-index radial mode of intended OAM states behave very similarly as functions of the relative turbulence strength; a smaller Fresnel ratio leads to a slower decrease in the probabilities as the relative turbulence strength increases. A photon can be found in various radial modes with approximately equal probability when the relative turbulence strength becomes great enough. The use of a single-mode fiber in OAM measurements can result in photon loss and hence alter the observed transition probability between various OAM states. The bit error probability in OAM-based free-space optical communication systems that transmit photonic modes belonging to the same orthogonal LG basis may depend on which digit is sent.
A robust hypothesis test for the sensitive detection of constant speed radiation moving sources
NASA Astrophysics Data System (ADS)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence
2015-09-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Test hypothesis methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm. It also guarantees that the optimal coverage factor for this compromise remains stable regardless of signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with the sole prior knowledge of background amplitude.
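A single-channel version of a Poisson-based detection test can be sketched as follows. This is a simplified stand-in for the temporal-correlation test the paper describes: it merely flags counts that are improbably high under a Poisson background, with the background rate and `alpha` as assumed inputs:

```python
import math

def poisson_tail(n, lam):
    """P(N >= n) for N ~ Poisson(lam): complement of the CDF up to n-1."""
    cdf = sum(math.exp(-lam) * lam ** k / math.factorial(k) for k in range(n))
    return 1.0 - cdf

def alarm(counts, background_rate, dwell_s, alpha=1e-3):
    """Flag each channel whose registered count has a tail probability
    below `alpha` under the known Poisson background (counts/s)."""
    lam = background_rate * dwell_s
    return [poisson_tail(n, lam) < alpha for n in counts]
```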
Choosing Fighting Competitors Among Men: Testosterone, Personality, and Motivations.
Borráz-León, Javier I; Cerda-Molina, Ana Lilia; Rantala, Markus J; Mayagoitia-Novales, Lilian
2018-01-01
Higher testosterone levels have been positively related to a variety of social behaviors and personality traits associated with intrasexual competition. The aim of this study was to evaluate the role of testosterone levels and personality traits such as aggressiveness, competitiveness, and self-esteem on the task of choosing a fighting competitor (a rival) with or without a motivation to fight. In Study 1, a group of 119 men participated in a task for choosing a rival through pictures of men with high-dominant masculinity versus low-dominant masculinity. Participants completed three personality questionnaires and donated two saliva samples (pre-test and post-test sample) to quantify their testosterone levels. We found that the probability of choosing high-dominant masculine men as rivals increased with higher aggressiveness scores. In Study 2, the task of choosing rivals was accompanied by motivations to fight (pictures of women with high or low waist-to-hip ratio [WHR]). In this context, we observed that the probability of choosing dominant masculine men as rivals depended on the WHR of the women. Overall, average levels of post-test testosterone, aggressiveness, and high self-esteem increased the probability to fight for women with low WHR independently of the dominance masculinity of the rivals. Our results indicate that human decisions, in the context of intrasexual competition and mate choice, are regulated by physiological and psychological mechanisms allowing men to increase their biological fitness. We discuss our results in the light of the plasticity of human behavior according to biological and environmental forces.
Cumulative probability of neodymium: YAG laser posterior capsulotomy after phacoemulsification.
Ando, Hiroshi; Ando, Nobuyo; Oshika, Tetsuro
2003-11-01
To retrospectively analyze the cumulative probability of neodymium:YAG (Nd:YAG) laser posterior capsulotomy after phacoemulsification and to evaluate the risk factors. Ando Eye Clinic, Kanagawa, Japan. In 3997 eyes that had phacoemulsification with an intact continuous curvilinear capsulorhexis, the cumulative probability of posterior capsulotomy was computed by Kaplan-Meier survival analysis and risk factors were analyzed using the Cox proportional hazards regression model. The variables tested were sex; age; type of cataract; preoperative best corrected visual acuity (BCVA); presence of diabetes mellitus, diabetic retinopathy, or retinitis pigmentosa; type of intraocular lens (IOL); and the year the operation was performed. The IOLs were categorized as 3-piece poly(methyl methacrylate) (PMMA), 1-piece PMMA, 3-piece silicone, and acrylic foldable. The cumulative probability of capsulotomy after cataract surgery was 1.95%, 18.50%, and 32.70% at 1, 3, and 5 years, respectively. Positive risk factors included a better preoperative BCVA (P =.0005; risk ratio [RR], 1.7; 95% confidence interval [CI], 1.3-2.5) and the presence of retinitis pigmentosa (P<.0001; RR, 6.6; 95% CI, 3.7-11.6). Women had a significantly greater probability of Nd:YAG laser posterior capsulotomy (P =.016; RR, 1.4; 95% CI, 1.1-1.8). The type of IOL was significantly related to the probability of Nd:YAG laser capsulotomy, with the foldable acrylic IOL having a significantly lower probability of capsulotomy. The 1-piece PMMA IOL had a significantly higher risk than 3-piece PMMA and 3-piece silicone IOLs. The probability of Nd:YAG laser capsulotomy was higher in women, in eyes with a better preoperative BCVA, and in patients with retinitis pigmentosa. The foldable acrylic IOL had a significantly lower probability of capsulotomy.
NASA Astrophysics Data System (ADS)
Gatzsche, Kathrin; Babel, Wolfgang; Falge, Eva; Pyles, Rex David; Tha Paw U, Kyaw; Raabe, Armin; Foken, Thomas
2018-05-01
The ACASA (Advanced Canopy-Atmosphere-Soil Algorithm) model, with a higher-order closure for tall vegetation, has already been successfully tested and validated for homogeneous spruce forests. The aim of this paper is to test the model using a footprint-weighted tile approach for a clearing with a heterogeneous structure of the underlying surface. The comparison with flux data shows good agreement for a footprint-aggregated tile approach of the model. However, the results of a comparison with a tile approach based on the mean land use classification of the clearing are not significantly different. It is assumed that the footprint model is not accurate enough to separate small-scale heterogeneities. All measured fluxes are corrected by forcing the energy balance closure of the test data, either by maintaining the measured Bowen ratio or by attributing the residual to the buoyancy flux in proportion to the fractions of sensible and latent heat flux. The comparison with the model, in which the energy balance is closed, shows that the buoyancy correction better fits the measured data for Bowen ratios > 1.5. For lower Bowen ratios, the correction probably lies between the two methods, but the amount of available data was too small to draw a conclusion. Under an assumption of similarity between water and carbon dioxide fluxes, no correction of the net ecosystem exchange is necessary for Bowen ratios > 1.5.
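The Bowen-ratio closure method mentioned above can be sketched as follows: the energy-balance residual (Rn - G - H - LE) is redistributed between the sensible and latent heat fluxes so that their measured ratio H/LE is preserved. A minimal sketch of that one method, not the paper's implementation:

```python
def close_energy_balance_bowen(h, le, rn, g):
    """Distribute the energy-balance residual between sensible (h) and
    latent (le) heat flux while preserving the measured Bowen ratio
    h/le. rn is net radiation, g the ground heat flux (all in W/m^2)."""
    residual = (rn - g) - (h + le)
    bowen = h / le
    dle = residual / (1.0 + bowen)   # share assigned to latent heat
    dh = residual - dle              # remainder to sensible heat
    return h + dh, le + dle
```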
Cost-effectiveness of one-time genetic testing to minimize lifetime adverse drug reactions.
Alagoz, O; Durham, D; Kasirajan, K
2016-04-01
We evaluated the cost-effectiveness of one-time pharmacogenomic testing for preventing adverse drug reactions (ADRs) over a patient's lifetime. We developed a Markov-based Monte Carlo microsimulation model to represent the ADR events in the lifetime of each patient. The base-case considered a 40-year-old patient. We measured health outcomes in life years (LYs) and quality-adjusted LYs (QALYs) and estimated costs using 2013 US$. In the base-case, one-time genetic testing had an incremental cost-effectiveness ratio (ICER) of $43,165 (95% confidence interval (CI) is ($42,769,$43,561)) per additional LY and $53,680 per additional QALY (95% CI is ($53,182,$54,179)), hence under the base-case one-time genetic testing is cost-effective. The ICER values were most sensitive to the average probability of death due to ADR, reduction in ADR rate due to genetic testing, mean ADR rate and cost of genetic testing.
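The ICER quoted above is simply the incremental cost divided by the incremental effect of testing versus no testing. A minimal sketch, with hypothetical inputs chosen only to reproduce the order of magnitude of the base-case result (they are not the study's figures):

```python
def icer(cost_new, effect_new, cost_base, effect_base):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (life years or QALYs) for the new strategy versus the
    comparator."""
    return (cost_new - cost_base) / (effect_new - effect_base)

# Illustrative: testing adds $4,300 of lifetime cost and 0.1 LY,
# i.e. about $43,000 per additional life year.
print(icer(104_300.0, 20.1, 100_000.0, 20.0))
```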
Wilks, Scott E; Croom, Beth
2008-05-01
The study examined whether social support functioned as a protective, resilience factor among Alzheimer's disease (AD) caregivers. Moderation and mediation models were used to test social support amid stress and resilience. A cross-sectional analysis of self-reported data was conducted. Measures of demographics, perceived stress, family support, friend support, overall social support, and resilience were administered to caregiver attendees (N=229) of two AD caregiver conferences. Hierarchical regression analysis showed the compounded impact of predictors on resilience. Odds ratios yielded the probability of high resilience given high stress and social support. Social support moderation and mediation were tested via distinct series of regression equations. Path analyses illustrated effects on the models for significant moderation and/or mediation. Stress negatively influenced and accounted for most variation in resilience. Social support positively influenced resilience, and caregivers with high family support had the highest probability of elevated resilience. Moderation was observed among all support factors. No social support factor fulfilled the complete mediation criteria. Evidence of social support as a protective, moderating factor yields implications for health care practitioners who deliver services to assist AD caregivers, particularly the promotion of identification and utilization of supportive familial and peer relations.
Screening for Learning and Memory Mutations: A New Approach
Gallistel, C. R.; King, A. P.; Daniel, A. M.; Freestone, D.; Papachristos, E. B.; Balci, F.; Kheifets, A.; Zhang, J.; Su, X.; Schiff, G.; Kourtev, H.
2010-01-01
We describe a fully automated, live-in 24/7 test environment, with experimental protocols that measure the accuracy and precision with which mice match the ratio of their expected visit durations to the ratio of the incomes obtained from two hoppers, the progress of instrumental and classical conditioning (trials-to-acquisition), the accuracy and precision of interval timing, the effect of relative probability on the choice of a timed departure target, and the accuracy and precision of memory for the times of day at which food is available. The system is compact; it obviates the handling of the mice during testing; it requires negligible amounts of experimenter/technician time; and it delivers clear and extensive results from 3 protocols within a total of 7–9 days after the mice are placed in the test environment. Only a single 24-hour period is required for the completion of the first protocol (the matching protocol), which is a strong test of temporal and spatial estimation and memory mechanisms. Thus, the system permits the extensive screening of many mice in a short period of time and in limited space. The software is publicly available. PMID:20352069
Houben, Paul H H; van der Weijden, Trudy; Winkens, Bjorn; Winkens, Ron A G; Grol, Richard P T M
2010-02-16
Abnormal results of diagnostic laboratory tests can be difficult to interpret when disease probability is very low. Although most physicians generally do not use Bayesian calculations to interpret abnormal results, their estimates of pretest disease probability and reasons for ordering diagnostic tests may--in a more implicit manner--influence test interpretation and further management. A better understanding of this influence may help to improve test interpretation and management. Therefore, the objective of this study was to examine the influence of physicians' pretest disease probability estimates, and their reasons for ordering diagnostic tests, on test result interpretation, posttest probability estimates and further management. Prospective study among 87 primary care physicians in the Netherlands who each ordered laboratory tests for 25 patients. They recorded their reasons for ordering the tests (to exclude or confirm disease or to reassure patients) and their pretest disease probability estimates. Upon receiving the results they recorded how they interpreted the tests, their posttest probability estimates and further management. Logistic regression was used to analyse whether the pretest probability and the reasons for ordering tests influenced the interpretation, the posttest probability estimates and the decisions on further management. The physicians ordered tests for diagnostic purposes for 1253 patients; 742 patients had an abnormal result (64%). Physicians' pretest probability estimates and their reasons for ordering diagnostic tests influenced test interpretation, posttest probability estimates and further management. Abnormal results of tests ordered for reasons of reassurance were significantly more likely to be interpreted as normal (65.8%) compared to tests ordered to confirm a diagnosis or exclude a disease (27.7% and 50.9%, respectively). 
The odds for abnormal results to be interpreted as normal were much lower when the physician estimated a high pretest disease probability, compared to a low pretest probability estimate (OR = 0.18, 95% CI = 0.07-0.52, p < 0.001). Interpretation and management of abnormal test results were strongly influenced by physicians' estimation of pretest disease probability and by the reason for ordering the test. By relating abnormal laboratory results to their pretest expectations, physicians may seek a balance between over- and under-reacting to laboratory test results.
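The reported OR of 0.18 can be translated into probabilities via the odds form of the relationship. The sketch below uses a hypothetical 50% reference probability purely for illustration; the study reports odds ratios, not these probabilities:

```python
def shift_probability(p_reference, odds_ratio):
    """Probability implied for the comparison group, given the
    reference group's probability and the odds ratio between them."""
    odds = p_reference / (1.0 - p_reference) * odds_ratio
    return odds / (1.0 + odds)

# Illustrative: if abnormal results were read as "normal" 50% of the
# time at low pretest probability, an OR of 0.18 would imply about 15%
# at high pretest probability (reference value is hypothetical).
print(round(shift_probability(0.50, 0.18), 2))
```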
NASA Astrophysics Data System (ADS)
Piatyszek, E.; Voignier, P.; Graillot, D.
2000-05-01
One of the aims of sewer networks is the protection of the population against floods and the reduction of pollution discharged to the receiving water during rainy events. To meet these goals, managers have to equip sewer networks with real-time control systems. Unfortunately, a component fault (leading to intolerable behaviour of the system) or sensor fault (deteriorating the process view and disturbing the local automatism) makes sewer network supervision delicate. In order to ensure adequate flow management during rainy events it is essential to set up procedures capable of detecting and diagnosing these anomalies. This article introduces a real-time fault detection method, applicable to sewer networks, for the follow-up of rainy events. The method consists of comparing the sensor response with a forecast of that response. The forecast is provided by a model, more precisely by a state estimator: a Kalman filter. This Kalman filter provides not only a flow estimate but also an entity called the 'innovation'. In order to detect abnormal operations within the network, this innovation is analysed with the binary sequential probability ratio test of Wald. Moreover, by crossing available information at several nodes of the network, a diagnosis of the detected anomalies is carried out. This method provided encouraging results during the analysis of several rain events on the sewer network of Seine-Saint-Denis County, France.
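The innovation-monitoring idea can be sketched generically: feed the Kalman-filter innovations into Wald's SPRT for a Gaussian mean shift, with thresholds set from target false-alarm and missed-detection rates. A simplified sketch, not the paper's exact formulation:

```python
import math

def sprt_on_innovations(innovations, sigma, shift, alpha=0.01, beta=0.01):
    """Wald's SPRT on Kalman-filter innovations. Under H0 the
    innovations are zero-mean Gaussian with std `sigma` (nominal
    operation); under H1 their mean is shifted by `shift` (a fault).
    alpha/beta are the target false-alarm and missed-detection rates.
    Returns 'H0', 'H1', or 'continue'."""
    upper = math.log((1.0 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1.0 - alpha))   # cross below -> accept H0
    llr = 0.0
    for nu in innovations:
        # log-likelihood ratio increment for a Gaussian mean shift
        llr += (shift / sigma ** 2) * (nu - shift / 2.0)
        if llr >= upper:
            return 'H1'
        if llr <= lower:
            return 'H0'
    return 'continue'
```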
Effects of poststroke pyrexia on stroke outcome: a meta-analysis of studies in patients.
Hajat, C; Hajat, S; Sharma, P
2000-02-01
The effect of pyrexia on cerebral ischemia has been extensively studied in animals. In humans, however, such studies are small and the results conflicting. We undertook a meta-analysis using all published studies on the effect of hyperthermia on stroke outcome. Three databases were searched for all published studies that examined the relationship between raised temperature after stroke onset and eventual outcome. Combined probability values and odds ratios were obtained. A heterogeneity test was performed to ensure that the data were suitable for such an analysis. Morbidity and mortality were used as outcome measures. Nine studies were identified totaling 3790 patients, providing our study with 99% power to detect a 9% increase in morbidity and 84% power to detect a 1% increase in mortality for the pyrexial group. The combined odds ratio for mortality was 1.19 (95% CI, 0.99 to 1.43). A heterogeneity test was nonsignificant (P>0.05) for mortality, suggesting that the data were sufficiently similar to be meta-analyzed. Combined probability values were highly significant for both morbidity (P<0.0001) and mortality (P<0.00000001). The results from this meta-analysis suggest that pyrexia after stroke onset is associated with a marked increase in morbidity and mortality. Measures should be taken to combat fever in the clinical setting to prevent stroke progression. The possible benefit of therapeutic hypothermia in the management of acute stroke should be further investigated.
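The abstract does not state how the probability values were combined across studies; Fisher's method is one common choice and illustrates the idea (for k p-values, -2*sum(ln p) follows a chi-square distribution with 2k degrees of freedom under the null):

```python
import math

def fisher_combined_p(p_values):
    """Fisher's method for pooling p-values (one common approach; the
    study above does not specify its combination method). The chi-square
    survival function has a closed form for even df = 2k."""
    k = len(p_values)
    x = -2.0 * sum(math.log(p) for p in p_values)
    term, sf = 1.0, 0.0
    for i in range(k):
        sf += term                    # accumulate (x/2)^i / i!
        term *= (x / 2.0) / (i + 1)
    return math.exp(-x / 2.0) * sf
```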
Bonofiglio, Federico; Beyersmann, Jan; Schumacher, Martin; Koller, Michael; Schwarzer, Guido
2016-09-01
Meta-analysis of a survival endpoint is typically based on the pooling of hazard ratios (HRs). If competing risks occur, the HRs may lose translation into changes of survival probability. The cumulative incidence functions (CIFs), the expected proportion of cause-specific events over time, re-connect the cause-specific hazards (CSHs) to the probability of each event type. We use CIF ratios to measure treatment effect on each event type. To retrieve information on aggregated, typically poorly reported, competing risks data, we assume constant CSHs. Next, we develop methods to pool CIF ratios across studies. The procedure computes pooled HRs alongside and checks the influence of follow-up time on the analysis. We apply the method to a medical example, showing that follow-up duration is relevant both for pooled cause-specific HRs and CIF ratios. Moreover, if all-cause hazard and follow-up time are large enough, CIF ratios may reveal additional information about the effect of treatment on the cumulative probability of each event type. Finally, to improve the usefulness of such analysis, better reporting of competing risks data is needed. Copyright © 2015 John Wiley & Sons, Ltd.
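Under the constant cause-specific hazard assumption used above, each CIF has a closed form, which makes the CIF ratio easy to compute; a minimal sketch:

```python
import math

def cif(t, h_event, h_all):
    """Cumulative incidence of one event type under constant
    cause-specific hazards: CIF_k(t) = (h_k / h_all) * (1 - exp(-h_all * t)),
    where h_all is the all-cause hazard (sum of all CSHs)."""
    return (h_event / h_all) * (1.0 - math.exp(-h_all * t))

def cif_ratio(t, h_event_trt, h_all_trt, h_event_ctl, h_all_ctl):
    """Treatment effect on one event type as a ratio of CIFs; unlike the
    cause-specific HR, this depends on the all-cause hazards and on t."""
    return cif(t, h_event_trt, h_all_trt) / cif(t, h_event_ctl, h_all_ctl)
```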
Selection of a cardiac surgery provider in the managed care era.
Shahian, D M; Yip, W; Westcott, G; Jacobson, J
2000-11-01
Many health planners promote the use of competition to contain cost and improve quality of care. Using a standard econometric model, we examined the evidence for "value-based" cardiac surgery provider selection in eastern Massachusetts, where there is significant competition and managed care penetration. McFadden's conditional logit model was used to study cardiac surgery provider selection among 6952 patients and eight metropolitan Boston hospitals in 1997. Hospital predictor variables included beds, cardiac surgery case volume, objective clinical and financial performance, reputation (percent out-of-state referrals, cardiac residency program), distance from patient's home to hospital, and historical referral patterns. Subgroup analyses were performed for each major payer category. Distance from patient's home to hospital (odds ratio 0.90; P =.000) and the historical referral pattern from each patient's hometown (z = 45.305; P =.000) were important predictors in all models. A cardiac surgery residency enhanced the probability of selection (odds ratio 5.25; P =.000), as did percent out-of-state referrals (odds ratio 1.10; P =.001). Higher mortality rates were associated with decreased probability of selection (odds ratio 0.51; P =.027), but higher length of stay was paradoxically associated with greater probability (odds ratio 1.72; P =.000). Total hospital costs were irrelevant (odds ratio 1.00; P =.179). When analyzed by payer subgroup, Medicare patients appeared to select hospitals with both low mortality (odds ratio 0.43; P =.176) and short length of stay (odds ratio 0.76; P =.213), although the results did not achieve statistical significance. The commercial managed care subgroup exhibited the least "value-based" behavior. The odds ratio for length of stay was the highest of any group (odds ratio = 2.589; P =.000) and there was a subset of hospitals for which higher mortality was actually associated with greater likelihood of selection. 
The observable determinants of cardiac surgery provider selection are related to hospital reputation, historical referral patterns, and patient proximity, not objective clinical or cost performance. The paradoxical behavior of commercial managed care probably results from unobserved choice factors that are not primarily based on objective provider performance.
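McFadden's conditional logit assigns each hospital a choice probability via a softmax over systematic utilities; a minimal sketch with illustrative attribute weights (not the paper's fitted coefficients):

```python
import math

def choice_probabilities(utilities):
    """McFadden conditional logit: the probability of choosing
    alternative i is exp(V_i) / sum_j exp(V_j), where V_i is that
    hospital's systematic utility (distance, mortality, reputation, ...)."""
    m = max(utilities)                       # stabilize the exponentials
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def utility(attributes, weights):
    # linear utility index over hospital attributes (illustrative)
    return sum(x * b for x, b in zip(attributes, weights))
```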
Method for estimating low-flow characteristics of ungaged streams in Indiana
Arihood, Leslie D.; Glatfelter, Dale R.
1991-01-01
Equations for estimating the 7-day, 2-year and 7-day, 10-year low flows at sites on ungaged streams are presented. Regression analysis was used to develop equations relating basin characteristics and low-flow characteristics at 82 gaging stations. Significant basin characteristics in the equations are contributing drainage area and flow-duration ratio, which is the 20-percent flow duration divided by the 90-percent flow duration. Flow-duration ratio has been regionalized for Indiana on a plate. Ratios for use in the equations are obtained from the plate. Drainage areas are determined from maps or are obtained from reports. The predictive capability of the method was determined by tests of the equations and of the flow-duration ratios on the plate. The accuracy of the equations alone was tested by estimating the low-flow characteristics at 82 gaging stations where flow-duration ratio is already known. In this case, the standard errors of estimate for 7-day, 2-year and 7-day, 10-year low flows are 19 and 28 percent. When flow-duration ratios for the 82 gaging stations are obtained from the map, the standard errors are 46 and 61 percent. However, when stations having drainage areas of less than 10 square miles are excluded from the test, the standard errors decrease to 38 and 49 percent. Standard errors increase when stations with small basins are included, probably because some of the flow-duration ratios obtained for these small basins are incorrect. Local geology and its effect on the ratio are not adequately reflected on the plate, which shows the regional variation in flow-duration ratio. In all the tests, no bias is apparent areally, with increasing drainage area or with increasing ratio. Guidelines and limitations should be considered when using the method. The method can be applied only at sites in the northern and central physiographic zones of the State.
Low-flow characteristics cannot be estimated for regulated streams unless the amount of regulation is known so that the estimated low-flow characteristic can be adjusted. The method is most accurate for sites having drainage areas ranging from 10 to 1,000 square miles and for predictions of 7-day, 10-year low flows ranging from 0.5 to 340 cubic feet per second.
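The flow-duration ratio used as a basin characteristic above (the 20-percent flow duration over the 90-percent flow duration) can be computed from a daily-flow record with a simple empirical quantile. The ranking-based estimator below is an illustrative choice, not necessarily the report's procedure:

```python
def flow_duration(daily_flows, exceedance_pct):
    """Flow equaled or exceeded `exceedance_pct` percent of the time,
    estimated by ranking the daily mean flows in descending order."""
    flows = sorted(daily_flows, reverse=True)
    idx = round(exceedance_pct / 100.0 * len(flows)) - 1
    idx = max(0, min(len(flows) - 1, idx))
    return flows[idx]

def flow_duration_ratio(daily_flows):
    """The basin characteristic used in the regression equations:
    Q20 / Q90 (20-percent over 90-percent flow duration)."""
    return flow_duration(daily_flows, 20) / flow_duration(daily_flows, 90)
```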
Method for estimating low-flow characteristics of ungaged streams in Indiana
Arihood, L.D.; Glatfelter, D.R.
1986-01-01
Equations for estimating the 7-day, 2-yr and 7-day, 10-yr low flows at sites on ungaged streams are presented. Regression analysis was used to develop equations relating basin characteristics and low flow characteristics at 82 gaging stations. Significant basin characteristics in the equations are contributing drainage area and flow duration ratio, which is the 20% flow duration divided by the 90% flow duration. Flow duration ratio has been regionalized for Indiana on a plate. Ratios for use in the equations are obtained from this plate. Drainage areas are determined from maps or are obtained from reports. The predictive capability of the method was determined by tests of the equations and of the flow duration ratios on the plate. The accuracy of the equations alone was tested by estimating the low flow characteristics at 82 gaging stations where flow duration ratio is already known. In this case, the standard errors of estimate for 7-day, 2-yr and 7-day, 10-yr low flows are 19% and 28%. When flow duration ratios for the 82 gaging stations are obtained from the map, the standard errors are 46% and 61%. However, when stations with drainage areas < 10 sq mi are excluded from the test, the standard errors reduce to 38% and 49%. Standard errors increase when stations with small basins are included, probably because some of the flow duration ratios obtained for these small basins are incorrect. Local geology and its effect on the ratio are not adequately reflected on the plate, which shows the regional variation in flow duration ratio. In all the tests, no bias is apparent areally, with increasing drainage area or with increasing ratio. Guidelines and limitations should be considered when using the method. The method can be applied only at sites in the northern and the central physiographic zones of the state. 
Low flow characteristics cannot be estimated for regulated streams unless the amount of regulation is known so that the estimated low flow characteristic can be adjusted. The method is most accurate for sites with drainage areas ranging from 10 to 1,000 sq mi and for predictions of 7-day, 10-yr low flows ranging from 0.5 to 340 cu ft/sec. (Author's abstract)
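The report's method can be sketched in a few lines. The coefficients below are placeholders, not the published Indiana values; only the functional form (a power law in drainage area and the 20%/90% flow-duration ratio) follows the report:

```python
def flow_duration_ratio(q20: float, q90: float) -> float:
    """Flow-duration ratio as defined in the report: the 20% duration
    flow divided by the 90% duration flow (both in cu ft/sec)."""
    return q20 / q90

def estimate_7q10(drainage_area_sq_mi: float, fdr: float,
                  a: float = 0.05, b: float = 1.0, c: float = -1.5) -> float:
    """Low-flow regression of the form 7Q10 = a * A^b * FDR^c.
    The coefficients a, b, c are hypothetical, NOT the published values;
    a power law in basin characteristics is typical of such regressions."""
    return a * drainage_area_sq_mi ** b * fdr ** c

# Example: a 100 sq mi basin with Q20 = 40 and Q90 = 10 cu ft/sec.
fdr = flow_duration_ratio(40.0, 10.0)          # 4.0
q7_10 = estimate_7q10(100.0, fdr)              # 0.625 with placeholder coefficients
```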
Extended target recognition in cognitive radar networks.
Wei, Yimin; Meng, Huadong; Liu, Yimin; Wang, Xiqin
2010-01-01
We address the problem of adaptive waveform design for extended target recognition in cognitive radar networks. A closed-loop active target recognition radar system is extended to the case of a centralized cognitive radar network, in which a generalized likelihood ratio (GLR) based sequential hypothesis testing (SHT) framework is employed. Using Doppler velocities measured by multiple radars, the target aspect angle for each radar is calculated. The joint probability of each target hypothesis is then updated using observations from different radar line of sights (LOS). Based on these probabilities, a minimum correlation algorithm is proposed to adaptively design the transmit waveform for each radar in an amplitude fluctuation situation. Simulation results demonstrate performance improvements due to the cognitive radar network and adaptive waveform design. Our minimum correlation algorithm outperforms the eigen-waveform solution and other non-cognitive waveform design approaches.
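The sequential hypothesis testing at the core of this framework (and of the Wald SPRT papers above) can be illustrated with a minimal two-hypothesis sketch. The thresholds follow Wald's classical approximations; the per-observation log-likelihood ratios are assumed to be supplied by the application (e.g. the GLR statistics computed from radar returns):

```python
import math

def sprt(log_lrs, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test: accumulate per-observation
    log-likelihood ratios and stop at the first threshold crossing.
    alpha/beta are the targeted false-alarm and missed-detection rates."""
    upper = math.log((1 - beta) / alpha)   # crossing above accepts H1
    lower = math.log(beta / (1 - alpha))   # crossing below accepts H0
    s = 0.0
    for n, llr in enumerate(log_lrs, start=1):
        s += llr
        if s >= upper:
            return 'H1', n
        if s <= lower:
            return 'H0', n
    return 'continue', len(log_lrs)
```

With alpha = beta = 0.01 the thresholds are ±log(99) ≈ ±4.6, so a steady stream of evidence of strength 1.0 per observation terminates after five observations.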
Gul, Naheed; Quadri, Mujtaba
2011-09-01
To evaluate the clinical diagnostic reasoning process as a tool to decrease the number of unnecessary endoscopies for diagnosing peptic ulcer disease. Study design: Cross-sectional KAP study. Shifa College of Medicine, Islamabad, from April to August 2010. Two hundred doctors were assessed with three common clinical scenarios of low, intermediate and high pre-test probability for peptic ulcer disease using a questionnaire. The differences between the reference estimates and the respondents' estimates of pre-test and post-test probability were used for assessing the ability of estimating the pre-test probability and the post-test probability of the disease. Doctors were also asked about the cost-effectiveness and safety of endoscopy. A consecutive sampling technique was used and the data were analyzed using SPSS version 16. In the low pre-test probability settings, overestimation of the disease probability suggested the doctors' inability to rule out the disease. The post-test probabilities were similarly overestimated. In the intermediate pre-test probability settings, both over- and underestimation of probabilities were noticed. In the high pre-test probability setting, there was no significant difference between the reference and the responders' intuitive estimates of post-test probability. Doctors were more likely to consider ordering the test as the disease probability increased. Most respondents were of the opinion that endoscopy is not a cost-effective procedure and may be associated with potential harm. Improvement is needed in doctors' diagnostic ability through more emphasis on clinical decision-making and application of Bayesian probabilistic thinking to real clinical situations.
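The pre-test/post-test updating the doctors were asked to perform is Bayes' theorem in odds form. A minimal sketch; the 20% pre-test probability and likelihood ratio of 6 below are illustrative values, not taken from the study:

```python
def post_test_probability(pretest: float, lr: float) -> float:
    """Odds form of Bayes' theorem: post-test odds = pre-test odds * LR."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# Illustrative: a 20% pre-test probability and a positive test with LR = 6
# yields a 60% post-test probability (odds 0.25 -> 1.5).
p = post_test_probability(0.20, 6.0)
```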
Disc displacement without reduction: a retrospective study of a clinical diagnostic sign.
Giraudeau, Anne; Jeany, Marion; Ehrmann, Elodie; Déjou, Jacques; Ouni, Imed; Orthlieb, Jean-Daniel
2017-03-01
The purpose of this retrospective study is to evaluate a clinical diagnostic sign for disc displacement without reduction (DDWR), the absence of additional condylar translation during opening compared with protrusion. Thirty-eight electronic axiographic and magnetic resonance imaging (MRI) examinations of the TMJ were analyzed in order to compare the opening/protrusion ratio of condylar translation between non-painful DDWR and non-DDWR. According to the Mann-Whitney U test, the opening/protrusion ratio in non-painful DDWR differs significantly from non-DDWR (p < 0.0001). Among non-painful DDWR, there is no additional condylar translation during opening in comparison with protrusion, and this is probably also the case for DDWR without limited opening, which is a subtype that has not been validated by the Diagnostic Criteria for Temporomandibular Disorders (DC/TMD). Comparative condylar palpation can analyze this sign, and therefore, further comparative investigations between MRI and clinical examination are needed to validate the corresponding clinical test.
Kim, Yong-Hyun; Shim, Wan-Joo; Kim, Myung-A; Hong, Kyung-Soon; Shin, Mi-Seung; Park, Seong-Mi; Cho, Kyoung Im; Kim, Mina; Kim, Sihun; Kim, Hak-Lyoung; Yoon, Hyun-Ju; Na, Jin-Oh; Kim, Sung-Eun
2016-06-01
Pretest probability (PTP) and an exercise treadmill test (ETT) are recommended for the initial evaluation of possible coronary artery disease (CAD), but the applicability of these tests in Korean women has not been evaluated. Korean women with PTP, ETT, and invasive coronary angiography results were enrolled. Across all PTP levels, PTP and ETT statistics were evaluated and independent CAD predictors obtained. Of the 335 patients (mean age 58.0 ± 10.2 years), 99 and 236 were in the low (LPTP) and intermediate PTP (IPTP) groups, respectively. The observed prevalence of CAD was significantly lower than the PTP (7.1% vs. 9.1 ± 4.9% in LPTP, p < 0.001; 23.3% vs. 33.0 ± 15.1% in IPTP, p < 0.001). The ETT's sensitivity and positive predictive values (PPVs) appeared lower than previously reported (LPTP: 42.9% and 16.7%; IPTP: 61.8% and 37.0%), whereas the negative predictive values (NPVs) were higher (LPTP: 95.1%; IPTP: 85.4%). After multivariate adjustment, a positive ETT (odds ratio 3.276, 95% confidence interval 1.643-6.532, p = 0.001) independently predicted the presence of CAD, but the PTP showed only marginal predictability (odds ratio 1.019, 95% confidence interval 0.998-1.041, p = 0.069). In Korean women, the observed prevalence of CAD was lower than the PTP, and PTP showed only marginal CAD predictability. Although a positive ETT independently predicted CAD, the ETT showed lower sensitivity and PPVs than previously reported. Despite the limited value of PTP and ETT, the high NPVs of ETT appear useful for sparing patients from unnecessary further examinations.
Plumbaum, K; Volk, G F; Boeger, D; Buentzel, J; Esser, D; Steinbrecher, A; Hoffmann, K; Jecker, P; Mueller, A; Radtke, G; Witte, O W; Guntinas-Lichius, O
2017-12-01
To determine the inpatient management of patients with acute idiopathic facial palsy (IFP) in Thuringia, Germany. Population-based study. All inpatients with IFP in all hospitals with departments of otolaryngology and neurology in 2012 in the German federal state of Thuringia. Patients' characteristics and treatment were compared between departments, and the probability of recovery was tested. A total of 291 patients were mainly treated in departments of otolaryngology (55%) and neurology (36%). Corticosteroid treatment was the predominant therapy (84.5%). The probability of receiving a facial nerve grading (odds ratio [OR] = 12.939; 95% confidence interval [CI] = 3.599 to 46.516), gustatory testing (OR = 6.878; CI = 1.064 to 44.474) and audiometry (OR = 32.505; CI = 1.485 to 711.257) was significantly higher in otolaryngology departments, but lower for cranial CT (OR = 0.192; CI = 0.061 to 0.602) and cerebrospinal fluid examination (OR = 0.024; CI = 0.006 to 0.102). A total of 131 patients (45%) showed a recovery to House-Brackmann grade ≤ II. A pathological stapedial reflex test (hazard ratio [HR] = 0.416; CI = 0.180 to 0.959) was the only independent diagnostic predictor of worse outcome. Prednisolone dose > 500 mg (HR = 0.579; CI = 0.400 to 0.838) and no adjuvant physiotherapy (HR = 0.568; CI = 0.407 to 0.794) were treatment-related predictors of worse outcome. Inpatient treatment of IFP seems to be highly variable in daily practice, partly depending on the treating discipline and despite the availability of evidence-based guidelines. The population-based recovery rate was worse than reported in clinical trials. © 2017 John Wiley & Sons Ltd.
Some ideas and opportunities concerning three-dimensional wind-tunnel wall corrections
NASA Technical Reports Server (NTRS)
Rubbert, P. E.
1982-01-01
Opportunities for improving the accuracy and reliability of wall corrections in conventional ventilated test sections are presented. The approach encompasses state-of-the-art technology in transonic computational methods combined with the measurement of tunnel-wall pressures. The objective is to arrive at correction procedures of known, verifiable accuracy that are practical within a production testing environment. It is concluded that: accurate and reliable correction procedures can be developed for cruise-type aerodynamic testing for any wall configuration; passive walls can be optimized for minimal interference for cruise-type aerodynamic testing (tailored slots, variable open area ratio, etc.); monitoring and assessment of noncorrectable interference (buoyancy and curvature in a transonic stream) can be an integral part of a correction procedure; and reasonably good correction procedures can probably be developed for complex flows involving extensive separation and other unpredictable phenomena.
Internal Medicine residents use heuristics to estimate disease probability
Phang, Sen Han; Ravani, Pietro; Schaefer, Jeffrey; Wright, Bruce; McLaughlin, Kevin
2015-01-01
Background: Training in Bayesian reasoning may have limited impact on the accuracy of probability estimates. In this study, our goal was to explore whether residents previously exposed to Bayesian reasoning use heuristics rather than Bayesian reasoning to estimate disease probabilities. We predicted that if residents use heuristics then post-test probability estimates would be increased by non-discriminating clinical features or a high anchor for a target condition. Method: We randomized 55 Internal Medicine residents to different versions of four clinical vignettes and asked them to estimate probabilities of target conditions. We manipulated the clinical data for each vignette to be consistent with either 1) use of the representativeness heuristic, by adding non-discriminating prototypical clinical features of the target condition, or 2) use of the anchoring-with-adjustment heuristic, by providing a high or low anchor for the target condition. Results: When presented with additional non-discriminating data, the odds of diagnosing the target condition were increased (odds ratio (OR) 2.83, 95% confidence interval [1.30, 6.15], p = 0.009). Similarly, the odds of diagnosing the target condition were increased when a high anchor preceded the vignette (OR 2.04, [1.09, 3.81], p = 0.025). Conclusions: Our findings suggest that despite previous exposure to Bayesian reasoning, residents use heuristics, such as the representativeness heuristic and anchoring with adjustment, to estimate probabilities. Potential reasons for attribute substitution include the relative cognitive ease of heuristics vs. Bayesian reasoning, or perhaps residents in their clinical practice use gist traces rather than precise probability estimates when diagnosing. PMID:27004080
Olsen, Morten; Hjortdal, Vibeke E; Mortensen, Laust H; Christensen, Thomas D; Sørensen, Henrik T; Pedersen, Lars
2011-04-01
Congenital heart defect patients may experience neurodevelopmental impairment. We investigated their educational attainments from basic schooling to higher education. Using administrative databases, we identified all Danish patients with a cardiac defect diagnosis born from 1 January, 1977 to 1 January, 1991 and alive at age 13 years. As a comparison cohort, we randomly sampled 10 persons per patient. We obtained information on educational attainment from Denmark's Database for Labour Market Research. The study population was followed until achievement of educational levels, death, emigration, or 1 January, 2006. We estimated the hazard ratio of attaining given educational levels, conditional on completing preceding levels, using discrete-time Cox regression and adjusting for socio-economic factors. Analyses were repeated for a sub-cohort of patients and controls born at term and without extracardiac defects or chromosomal anomalies. We identified 2986 patients. Their probability of completing compulsory basic schooling was approximately 10% lower than that of control individuals (adjusted hazard ratio = 0.79; 95% confidence interval: 0.75-0.82). Their subsequent probability of completing secondary school was lower than that of the controls, both for all patients (adjusted hazard ratio = 0.74; 95% confidence interval: 0.69-0.80) and for the sub-cohort (adjusted hazard ratio = 0.80; 95% confidence interval: 0.73-0.86). The probability of attaining a higher degree, conditional on completion of youth education, was affected both for all patients (adjusted hazard ratio = 0.88; 95% confidence interval: 0.76-1.01) and for the sub-cohort (adjusted hazard ratio = 0.92; 95% confidence interval: 0.79-1.07). The probability of educational attainment was reduced among long-term congenital heart defect survivors.
Mars Flyer Rocket Propulsion Risk Assessment: ARC Testing
NASA Technical Reports Server (NTRS)
2001-01-01
This report describes the investigation of a 10-N bipropellant thruster operating at -40 C with monomethylhydrazine (MMH) and 25% nitric oxide in nitrogen tetroxide (MON-25). The thruster testing was conducted as part of a risk reduction activity for the Mars Flyer, a proposed mission to fly a miniature airplane in the Martian atmosphere. Testing was conducted using an existing thruster designed for MMH and MON-3 propellants. MON-25 oxidizer was successfully manufactured from MON-3 by the addition of nitric oxide. The thruster was operated successfully over a range of propellant temperatures (-40 to 21 C) and feed pressures (6.9 to 20.7 kPa). The thruster hardware temperature was always equal to or lower than the propellant temperature. Most tests were of 30- and 60-second durations, with 600- and 1200-second durations and pulse testing also conducted. When operating at -40 C, the mixture ratio of the thruster shifted from the nominal value of 1.65 to about 1.85, probably caused by an increase in MMH viscosity, with a corresponding reduction in MMH flow rate. Specific impulse at -40 C (at nominal feed pressures) was 267 sec, while performance was 277 sec at 21 C. This difference in performance was due, in part, to the mixture ratio shift.
Yamamoto, M; Oikawa, S; Sakaguchi, A; Tomita, J; Hoshi, M; Apsalikov, K N
2008-09-01
Information on the 240Pu/239Pu isotope ratios in human tissues for people living around the Semipalatinsk Nuclear Test Site (SNTS) was deduced from 9 sets of soft tissues and bones, and 23 other bone samples obtained by autopsy. Plutonium was radiochemically separated and purified, and plutonium isotopes (239Pu and 240Pu) were determined by sector-field high resolution inductively coupled plasma-mass spectrometry. For most of the tissue samples from the former nine subjects, low 240Pu/239Pu isotope ratios were determined: bone, 0.125 +/- 0.018 (0.113-0.145, n = 4); lungs, 0.063 +/- 0.010 (0.051-0.078, n = 5); and liver, 0.148 +/- 0.026 (0.104-0.189, n = 9). Only 239Pu was detected in the kidney samples; the amount of 240Pu was too small to be measured, probably due to the small size of the samples analyzed. The mean 240Pu/239Pu isotope ratio for bone samples from the latter 23 subjects was 0.152 +/- 0.034, ranging from 0.088 to 0.207. A significant difference (two-tailed Student's t test; 95% significance level, alpha = 0.05) between the mean 240Pu/239Pu isotope ratios for the tissue samples and the global fallout value (0.178 +/- 0.014) indicated that weapons-grade plutonium from the atomic bombs has been incorporated into human tissues, especially the lungs, of residents living around the SNTS. The present 239,240Pu concentrations in bone, lung, and liver samples were, however, not much different from the ranges found for human tissues from other countries that were due solely to global fallout during the 1970s-1980s.
Neely, J H; Keefe, D E; Ross, K L
1989-11-01
In semantic priming paradigms for lexical decisions, the probability that a word target is semantically related to its prime (the relatedness proportion) has been confounded with the probability that a target is a nonword, given that it is unrelated to its prime (the nonword ratio). This study unconfounded these two probabilities in a lexical decision task with category names as primes and with high- and low-dominance exemplars as targets. Semantic priming for high-dominance exemplars was modulated by the relatedness proportion and, to a lesser degree, by the nonword ratio. However, the nonword ratio exerted a stronger influence than did the relatedness proportion on semantic priming for low-dominance exemplars and on the nonword facilitation effect (i.e., the superiority in performance for nonword targets that follow a category name rather than a neutral XXX prime). These results suggest that semantic priming for lexical decisions is affected by both a prospective prime-generated expectancy, modulated by the relatedness proportion, and a retrospective target/prime semantic matching process, modulated by the nonword ratio.
Fuzzy-logic detection and probability of hail exploiting short-range X-band weather radar
NASA Astrophysics Data System (ADS)
Capozzi, Vincenzo; Picciotti, Errico; Mazzarella, Vincenzo; Marzano, Frank Silvio; Budillon, Giorgio
2018-03-01
This work proposes a new method for hail precipitation detection and probability, based on single-polarization X-band radar measurements. Using a dataset consisting of reflectivity volumes, ground truth observations and atmospheric sounding data, a probability of hail index, which provides a simple estimate of the hail potential, has been trained and adapted within Naples metropolitan environment study area. The probability of hail has been calculated starting by four different hail detection methods. The first two, based on (1) reflectivity data and temperature measurements and (2) on vertically-integrated liquid density product, respectively, have been selected from the available literature. The other two techniques are based on combined criteria of the above mentioned methods: the first one (3) is based on the linear discriminant analysis, whereas the other one (4) relies on the fuzzy-logic approach. The latter is an innovative criterion based on a fuzzyfication step performed through ramp membership functions. The performances of the four methods have been tested using an independent dataset: the results highlight that the fuzzy-oriented combined method performs slightly better in terms of false alarm ratio, critical success index and area under the relative operating characteristic. An example of application of the proposed hail detection and probability products is also presented for a relevant hail event, occurred on 21 July 2014.
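A ramp membership function of the kind used in the fuzzyfication step can be sketched as follows. The reflectivity and VIL-density thresholds and the weights below are hypothetical illustrations, not the values trained for the Naples study area:

```python
def ramp(x: float, lo: float, hi: float) -> float:
    """Ramp membership function: 0 at or below lo, 1 at or above hi,
    linear in between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def hail_probability(reflectivity_dbz: float, vil_density: float,
                     w_z: float = 0.5, w_v: float = 0.5) -> float:
    """Fuzzy combination of two hail indicators as a weighted mean of
    ramp memberships. Thresholds (45-60 dBZ, 2-4 g/m^3) and weights
    are illustrative only."""
    mu_z = ramp(reflectivity_dbz, 45.0, 60.0)
    mu_v = ramp(vil_density, 2.0, 4.0)
    return w_z * mu_z + w_v * mu_v
```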
Kill ratio calculation for in-line yield prediction
NASA Astrophysics Data System (ADS)
Lorenzo, Alfonso; Oter, David; Cruceta, Sergio; Valtuena, Juan F.; Gonzalez, Gerardo; Mata, Carlos
1999-04-01
The search for better yields in IC manufacturing calls for a smarter use of the vast amount of data that can be generated by a world-class production line. In this scenario, in-line inspection processes produce thousands of wafer maps, defect counts, defect types and pictures every day. A step forward is to correlate these with the other big data-generating area: test. In this paper, we present how these data can be put together and correlated to obtain a very useful yield-predicting tool. This correlation first allows us to calculate the kill ratio, i.e. the probability that a defect of a certain size in a certain layer kills the die. We then use that number to estimate the cosmetic yield of a wafer.
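A minimal sketch of the kill-ratio computation and the resulting yield estimate, assuming defects act independently (the bin names and counts are illustrative, not from the paper):

```python
def kill_ratio(dies_with_defect_failed: int, dies_with_defect_total: int) -> float:
    """Kill ratio: fraction of dies carrying a defect of a given layer
    and size bin that go on to fail at test."""
    return dies_with_defect_failed / dies_with_defect_total

def predicted_cosmetic_yield(defect_counts: dict, kill_ratios: dict) -> float:
    """Assuming each defect independently kills its die with probability
    KR, the defect-limited yield is the product of (1 - KR) over all
    observed defects."""
    y = 1.0
    for bin_id, n in defect_counts.items():
        y *= (1.0 - kill_ratios[bin_id]) ** n
    return y

# Illustrative: 20 of 80 dies with a small metal-1 defect failed -> KR = 0.25;
# a die hit by two such defects survives with probability 0.75^2.
kr = kill_ratio(20, 80)
y = predicted_cosmetic_yield({'metal1_small': 2}, {'metal1_small': kr})
```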
Psychopathology among New York city public school children 6 months after September 11.
Hoven, Christina W; Duarte, Cristiane S; Lucas, Christopher P; Wu, Ping; Mandell, Donald J; Goodwin, Renee D; Cohen, Michael; Balaban, Victor; Woodruff, Bradley A; Bin, Fan; Musa, George J; Mei, Lori; Cantor, Pamela A; Aber, J Lawrence; Cohen, Patricia; Susser, Ezra
2005-05-01
Children exposed to a traumatic event may be at higher risk for developing mental disorders. The prevalence of child psychopathology, however, has not been assessed in a population-based sample exposed to different levels of mass trauma or across a range of disorders. To determine prevalence and correlates of probable mental disorders among New York City, NY, public school students 6 months following the September 11, 2001, World Trade Center attack. Survey. New York City public schools. A citywide, random, representative sample of 8236 students in grades 4 through 12, including oversampling in closest proximity to the World Trade Center site (ground zero) and other high-risk areas. Children were screened for probable mental disorders with the Diagnostic Interview Schedule for Children Predictive Scales. One or more of 6 probable anxiety/depressive disorders were identified in 28.6% of all children. The most prevalent were probable agoraphobia (14.8%), probable separation anxiety (12.3%), and probable posttraumatic stress disorder (10.6%). Higher levels of exposure correspond to higher prevalence for all probable anxiety/depressive disorders. Girls and children in grades 4 and 5 were the most affected. In logistic regression analyses, child's exposure (adjusted odds ratio, 1.62), exposure of a child's family member (adjusted odds ratio, 1.80), and the child's prior trauma (adjusted odds ratio, 2.01) were related to increased likelihood of probable anxiety/depressive disorders. Results were adjusted for different types of exposure, sociodemographic characteristics, and child mental health service use. A high proportion of New York City public school children had a probable mental disorder 6 months after September 11, 2001. The data suggest that there is a relationship between level of exposure to trauma and likelihood of child anxiety/depressive disorders in the community. 
The results support the need to apply wide-area epidemiological approaches to mental health assessment after any large-scale disaster.
Statistical Performance Evaluation Of Soft Seat Pressure Relief Valves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Stephen P.; Gross, Robert E.
2013-03-26
Risk-based inspection methods enable estimation of the probability of failure on demand for spring-operated pressure relief valves at the United States Department of Energy's Savannah River Site in Aiken, South Carolina. This paper presents a statistical performance evaluation of soft seat spring-operated pressure relief valves. These pressure relief valves are typically smaller and of lower cost than hard seat (metal to metal) pressure relief valves and can provide substantial cost savings in fluid service applications (air, gas, liquid, and steam), provided that the probability of failure on demand (the probability that the pressure relief valve fails to perform its intended safety function during a potentially dangerous overpressurization) is at least as good as that for hard seat valves. The research in this paper shows that the proportion of soft seat spring-operated pressure relief valves failing is the same as or less than that of hard seat valves, and that for failed valves, soft seat valves typically have failure ratios of proof test pressure to set pressure less than those of hard seat valves.
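Comparing the failing proportions of soft seat and hard seat valves can be done with a standard pooled two-proportion z test. The counts below are illustrative, not Savannah River Site data:

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Pooled two-proportion z statistic for comparing failure rates,
    e.g. soft-seat (x1/n1) vs hard-seat (x2/n2) relief valves."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative: 5/100 soft-seat failures vs 10/100 hard-seat failures.
z = two_proportion_z(5, 100, 10, 100)
```

A negative z favors the soft seat valves; the value would be compared with the normal critical value for the chosen significance level.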
Measurement of Two- and Three-Nucleon Short-Range Correlation Probabilities in Nuclei
NASA Astrophysics Data System (ADS)
Egiyan, K. S.; Dashyan, N. B.; Sargsian, M. M.; Strikman, M. I.; Weinstein, L. B.; Adams, G.; Ambrozewicz, P.; Anghinolfi, M.; Asavapibhop, B.; Asryan, G.; Avakian, H.; Baghdasaryan, H.; Baillie, N.; Ball, J. P.; Baltzell, N. A.; Batourine, V.; Battaglieri, M.; Bedlinskiy, I.; Bektasoglu, M.; Bellis, M.; Benmouna, N.; Biselli, A. S.; Bonner, B. E.; Bouchigny, S.; Boiarinov, S.; Bradford, R.; Branford, D.; Brooks, W. K.; Bültmann, S.; Burkert, V. D.; Bultuceanu, C.; Calarco, J. R.; Careccia, S. L.; Carman, D. S.; Carnahan, B.; Chen, S.; Cole, P. L.; Coltharp, P.; Corvisiero, P.; Crabb, D.; Crannell, H.; Cummings, J. P.; Sanctis, E. De; Devita, R.; Degtyarenko, P. V.; Denizli, H.; Dennis, L.; Dharmawardane, K. V.; Djalali, C.; Dodge, G. E.; Donnelly, J.; Doughty, D.; Dragovitsch, P.; Dugger, M.; Dytman, S.; Dzyubak, O. P.; Egiyan, H.; Elouadrhiri, L.; Empl, A.; Eugenio, P.; Fatemi, R.; Fedotov, G.; Feuerbach, R. J.; Forest, T. A.; Funsten, H.; Gavalian, G.; Gevorgyan, N. G.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Goetz, J. T.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guillo, M.; Guler, N.; Guo, L.; Gyurjyan, V.; Hadjidakis, C.; Hardie, J.; Hersman, F. W.; Hicks, K.; Hleiqawi, I.; Holtrop, M.; Hu, J.; Huertas, M.; Hyde-Wright, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Ito, M. M.; Jenkins, D.; Jo, H. S.; Joo, K.; Juengst, H. G.; Kellie, J. D.; Khandaker, M.; Kim, K. Y.; Kim, K.; Kim, W.; Klein, A.; Klein, F. J.; Klimenko, A.; Klusman, M.; Kramer, L. H.; Kubarovsky, V.; Kuhn, J.; Kuhn, S. E.; Kuleshov, S.; Lachniet, J.; Laget, J. M.; Langheinrich, J.; Lawrence, D.; Lee, T.; Livingston, K.; Maximon, L. C.; McAleer, S.; McKinnon, B.; McNabb, J. W.; Mecking, B. A.; Mestayer, M. D.; Meyer, C. A.; Mibe, T.; Mikhailov, K.; Minehart, R.; Mirazita, M.; Miskimen, R.; Mokeev, V.; Morrow, S. A.; Mueller, J.; Mutchler, G. S.; Nadel-Turonski, P.; Napolitano, J.; Nasseripour, R.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Niczyporuk, B. 
B.; Niyazov, R. A.; O'Rielly, G. V.; Osipenko, M.; Ostrovidov, A. I.; Park, K.; Pasyuk, E.; Peterson, C.; Pierce, J.; Pivnyuk, N.; Pocanic, D.; Pogorelko, O.; Polli, E.; Pozdniakov, S.; Preedom, B. M.; Price, J. W.; Prok, Y.; Protopopescu, D.; Qin, L. M.; Raue, B. A.; Riccardi, G.; Ricco, G.; Ripani, M.; Ritchie, B. G.; Ronchetti, F.; Rosner, G.; Rossi, P.; Rowntree, D.; Rubin, P. D.; Sabatié, F.; Salgado, C.; Santoro, J. P.; Sapunenko, V.; Schumacher, R. A.; Serov, V. S.; Sharabian, Y. G.; Shaw, J.; Smith, E. S.; Smith, L. C.; Sober, D. I.; Stavinsky, A.; Stepanyan, S.; Stokes, B. E.; Stoler, P.; Strauch, S.; Suleiman, R.; Taiuti, M.; Taylor, S.; Tedeschi, D. J.; Thompson, R.; Tkabladze, A.; Tkachenko, S.; Todor, L.; Tur, C.; Ungaro, M.; Vineyard, M. F.; Vlassov, A. V.; Weygand, D. P.; Williams, M.; Wolin, E.; Wood, M. H.; Yegneswaran, A.; Yun, J.; Zana, L.; Zhang, J.
2006-03-01
The ratios of inclusive electron scattering cross sections of 4He, 12C, and 56Fe to 3He have been measured at 1
Ye, Meng; Huang, Tao; Ying, Ying; Li, Jinyun; Yang, Ping; Ni, Chao; Zhou, Chongchang; Chen, Si
2017-01-01
As a tumor suppressor gene, 14-3-3 σ has been reported to be frequently methylated in breast cancer. However, the clinical effect of 14-3-3 σ promoter methylation remains to be verified. This study was performed to assess the clinicopathological significance and diagnostic value of 14-3-3 σ promoter methylation in breast cancer. 14-3-3 σ promoter methylation was found to be notably higher in breast cancer than in benign lesions and normal breast tissue samples. We did not observe that 14-3-3 σ promoter methylation was linked to the age status, tumor grade, clinic stage, lymph node status, histological subtype, ER status, PR status, HER2 status, or overall survival of patients with breast cancer. The combined sensitivity, specificity, AUC (area under the curve), positive likelihood ratios (PLR), negative likelihood ratios (NLR), diagnostic odds ratio (DOR), and post-test probability values (if the pretest probability was 30%) of 14-3-3 σ promoter methylation in blood samples of breast cancer patients vs. healthy subjects were 0.69, 0.99, 0.86, 95, 0.31, 302, and 98%, respectively. Our findings suggest that 14-3-3 σ promoter methylation may be associated with the carcinogenesis of breast cancer and that the use of 14-3-3 σ promoter methylation might represent a useful blood-based biomarker for the clinical diagnosis of breast cancer. PMID:27999208
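The abstract's 98% post-test probability can be reproduced from its pooled likelihood ratios via the odds form of Bayes' theorem (the 30% pretest probability and PLR/NLR values are taken directly from the abstract):

```python
def post_test_probability(pretest: float, lr: float) -> float:
    """Odds form of Bayes' theorem: post-test odds = pre-test odds * LR."""
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# Reported pooled values: PLR = 95, NLR = 0.31, pretest probability 30%.
p_positive = post_test_probability(0.30, 95.0)   # ~0.98, matching the abstract
p_negative = post_test_probability(0.30, 0.31)   # ~0.12 residual probability
```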
Kim, Chong H; Simmons, Sierra C; Williams, Lance A; Staley, Elizabeth M; Zheng, X Long; Pham, Huy P
2017-11-01
The ADAMTS13 test distinguishes thrombotic thrombocytopenic purpura (TTP) from other thrombotic microangiopathies (TMAs). The PLASMIC score helps determine the pretest probability of ADAMTS13 deficiency. Due to inherent limitations of both tests, and potential adverse effects and cost of unnecessary treatments, we performed a cost-effectiveness analysis (CEA) investigating the benefits of incorporating an in-hospital ADAMTS13 test and/or PLASMIC score into our clinical practice. A CEA model was created to compare four scenarios for patients with TMAs, utilizing either an in-house or a send-out ADAMTS13 assay with or without prior risk stratification using PLASMIC scoring. Model variables, including probabilities and costs, were gathered from the medical literature, except for the ADAMTS13 send-out and in-house tests, which were obtained from our institutional data. If only the cost is considered, in-house ADAMTS13 test for patients with intermediate- to high-risk PLASMIC score is the least expensive option ($4,732/patient). If effectiveness is assessed as measured by the number of averted deaths, send-out ADAMTS13 test is the most effective. Considering the cost/effectiveness ratio, the in-house ADAMTS13 test in patients with intermediate- to high-risk PLASMIC score is the best option, followed by the in-house ADAMTS13 test without the PLASMIC score. In patients with clinical presentations of TMAs, having an in-hospital ADAMTS13 test to promptly establish the diagnosis of TTP appears to be cost-effective. Utilizing the PLASMIC score further increases the cost-effectiveness of the in-house ADAMTS13 test. Our findings indicate the benefit of having a rapid and reliable in-house ADAMTS13 test, especially in the tertiary medical center. © 2017 AABB.
NASA Astrophysics Data System (ADS)
Ricotti, Leonardo; das Neves, Ricardo Pires; Ciofani, Gianni; Canale, Claudio; Nitti, Simone; Mattoli, Virgilio; Mazzolai, Barbara; Ferreira, Lino; Menciassi, Arianna
2014-02-01
F/G-actin ratio modulation is known to have an important role in many cell functions and in the regulation of specific cell behaviors. Several attempts have been made in the latest decades to finely control actin production and polymerization, in order to promote certain cell responses. In this paper we demonstrate the possibility of modulating F/G-actin ratio and mechanical properties of normal human dermal fibroblasts by using boron nitride nanotubes dispersed in the culture medium and by stimulating them with ultrasound transducers. Increasing concentrations of nanotubes were tested with the cells, without any evidence of cytotoxicity up to 10 μg/ml concentration of nanoparticles. Cells treated with nanoparticles and ultrasound stimulation showed a significantly higher F/G-actin ratio in comparison with the controls, as well as a higher Young's modulus. Assessment of Cdc42 activity revealed that actin nucleation/polymerization pathways, involving Rho GTPases, are probably influenced by nanotube-mediated stimulation, but they do not play a primary role in the significant increase of F/G-actin ratio of treated cells, such effect being mainly due to actin overexpression.
Zoratto, Francesca; Laviola, Giovanni; Adriani, Walter
2016-03-23
Interest is rising for animal modelling of Gambling disorder (GD), which is rapidly emerging as a mental health concern. In the present study, we assessed gambling proneness in male Wistar-Han rats using the "Probabilistic Delivery Task" (PDT). This operant protocol is based on choice between either certain, small amounts of food (SS) or larger amounts of food (LLL) delivered (or not) depending on a given (and progressively decreasing) probability. Here, we manipulated the ratio between large and small reward size to assess the impact of different magnitudes on rats' performance. Specifically, we drew a comparison between threefold (2 vs 6 pellets) and fivefold (1 vs 5 pellets) sizes. As a consequence, the "indifferent point" (IP, at which either choice is mathematically equivalent in terms of total foraging) was at 33% and 20% probability of delivery, respectively. Animals tested with the sharper contrast (i.e. fivefold ratio) exhibited sustained preference for LLL far beyond the IP, despite high uncertainty and low payoff, which rendered LLL a sub-optimal option. By contrast, animals facing a slighter contrast (i.e. threefold ratio) were increasingly disturbed by progressive rarefaction of rewards, as expressed by enhanced inadequate nose-poking: this was in accordance with their prompt shift in preference to SS, already shown around the IP. In conclusion, a fivefold LLL-to-SS ratio was not only more attractive but also less frustrating than a threefold one. Thus, a profile of gambling proneness in the PDT is more effectively induced by marked contrast between alternative options. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
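The quoted indifference points follow from equating expected values: the delivery probability p* at which the risky large option matches the certain small one satisfies p* x L = S. A one-line check against the abstract's 33% and 20% figures:

```python
def indifference_probability(small_reward: float, large_reward: float) -> float:
    """Delivery probability at which choosing the large uncertain reward
    equals the certain small reward in expected pellets: p* = S / L."""
    return small_reward / large_reward

ip_threefold = indifference_probability(2, 6)   # 1/3, i.e. the 33% IP
ip_fivefold = indifference_probability(1, 5)    # 0.2, i.e. the 20% IP
```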
Clinical decision-making: predictors of patient participation in nursing care.
Florin, Jan; Ehrenberg, Anna; Ehnfors, Margareta
2008-11-01
To investigate predictors of patients' preferences for participation in clinical decision-making in inpatient nursing care. Patient participation in decision-making in nursing care is regarded as a prerequisite for good clinical practice, respecting the person's autonomy and integrity. A cross-sectional survey of 428 persons, newly discharged from inpatient care, was conducted using the Control Preference Scale. Multiple logistic regression analysis was used to test the association of patient characteristics with preferences for participation. Patients, in general, preferred adopting a passive role. However, predictors of adopting an active participatory role were the patient's gender (odds ratio = 1.8), education (odds ratio = 2.2), living condition (odds ratio = 1.8) and occupational status (odds ratio = 2.0). An estimated probability of 53% was found that female senior citizens with at least a high school degree who lived alone would prefer an active role in clinical decision-making. At the same time, a working, cohabiting male with less than a high school degree had a probability of 8% of active participation in clinical decision-making in nursing care. Patient preferences for participation differed considerably and are best elicited by assessment of the individual patient. Relevance to clinical practice: nurses have a professional responsibility to act in such a way that patients can participate and make decisions according to their own values from an informed position. Access to knowledge of patients' basic assumptions and preferences for participation is of great value to nurses in the care process. There is a need for nurses to use structured methods and tools for eliciting individual patient preferences regarding participation in clinical decision-making.
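Under a logistic model, independent odds ratios multiply on the odds scale; converting back to a probability approximately reproduces the quoted figures (the result differs slightly from the reported 53% because the published odds ratios are rounded). A hedged sketch:

```python
def apply_odds_ratios(base_prob, odds_ratios):
    # Convert probability to odds, multiply by each predictor's odds
    # ratio, then convert back to a probability.
    odds = base_prob / (1 - base_prob)
    for odds_ratio in odds_ratios:
        odds *= odds_ratio
    return odds / (1 + odds)

# Starting from the 8% baseline profile (working, cohabiting male, lower
# education), applying the four reported ORs approximates the active-role
# probability for the contrasting profile.
p_active = apply_odds_ratios(0.08, [1.8, 2.2, 1.8, 2.0])  # ≈ 0.55
```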
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyland, J.L.; Dolah, R.F. van; Snoots, T.R.
1999-11-01
Matching data on sediment contaminants (metals, PAHs, PCBs) and macroinfaunal community structure from 231 subtidal stations in southeastern US estuaries were used to develop a framework for evaluating risks of benthic impacts from multiple-contaminant exposure. Sediment contamination was expressed as the mean ratio of individual contaminant concentrations relative to corresponding sediment quality guidelines, that is, to effects range-median (ERM) values, probable effects level (PEL) values, or an aggregate of the two. The probability of a degraded benthos was relatively low in samples with mean ERM quotients ≤0.020, PEL quotients ≤0.035, or combined ERM/PEL quotients ≤0.024. Only 5% of stations within these ranges had degraded benthic assemblages, while 95% had healthy assemblages. A higher probability of benthic impacts was observed in samples with mean ERM quotients >0.058, PEL quotients >0.096, or ERM/PEL quotients >0.077. Seventy-three to 78% of stations with values in these upper ranges had degraded benthic assemblages, while 22 to 27% had healthy assemblages. Only four stations (three with degraded, one with healthy assemblages) had mean ERM or PEL quotients >1.0, which is the beginning of the range associated with a high probability of mortality in short-term laboratory toxicity tests using amphipods.
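The mean ERM/PEL quotient described above is simply the average of each contaminant's concentration divided by its guideline value. A sketch with made-up concentrations and guideline values (not data from the study):

```python
def mean_quotient(concentrations, guidelines):
    # Mean ratio of each contaminant concentration to its sediment
    # quality guideline (ERM or PEL value).
    ratios = [c / g for c, g in zip(concentrations, guidelines)]
    return sum(ratios) / len(ratios)

# Hypothetical station: three contaminants against their ERM values.
q = mean_quotient([0.5, 10.0, 2.0], [50.0, 500.0, 100.0])
low_risk = q <= 0.020  # below the low-probability ERM threshold quoted above
```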
Spyropoulos, Evangelos; Kotsiris, Dimitrios; Spyropoulos, Katherine; Panagopoulos, Aggelos; Galanakis, Ioannis; Mavrikos, Stamatios
2017-02-01
We developed a mathematical "prostate cancer (PCa) conditions simulating" predictive model (PCP-SMART), from which we derived a novel PCa predictor (prostate cancer risk determinator [PCRD] index) and a PCa risk equation. We used these to estimate the probability of finding PCa on prostate biopsy, on an individual basis. A total of 371 men who had undergone transrectal ultrasound-guided prostate biopsy were enrolled in the present study. Given that PCa risk relates to the total prostate-specific antigen (tPSA) level, age, prostate volume, free PSA (fPSA), fPSA/tPSA ratio, and PSA density, and that tPSA ≥ 50 ng/mL has a 98.5% positive predictive value for a PCa diagnosis, we hypothesized that correlating 2 variables composed of 3 ratios (1, tPSA/age; 2, tPSA/prostate volume; and 3, fPSA/tPSA; 1 variable including the patient's tPSA and the other, a tPSA value of 50 ng/mL) could operate as a PCa conditions imitating/simulating model. Linear regression analysis was used to derive the coefficient of determination (R²), termed the PCRD index. To estimate the PCRD index's predictive validity, we used the χ² test, multiple logistic regression analysis with PCa risk equation formation, calculation of test performance characteristics, and area under the receiver operating characteristic curve analysis using SPSS, version 22 (P < .05). The biopsy findings were positive for PCa in 167 patients (45.1%) and negative in 164 (44.2%). The PCRD index was positively signed in 89.82% of the positive PCa cases and negatively signed in 91.46% of the negative PCa cases (χ² test, P < .001; relative risk, 8.98). The sensitivity was 89.8%, specificity was 91.5%, positive predictive value was 91.5%, negative predictive value was 89.8%, positive likelihood ratio was 10.5, negative likelihood ratio was 0.11, and accuracy was 90.6%. Multiple logistic regression revealed the PCRD index as an independent PCa predictor, and the formulated risk equation was 91% accurate in predicting the probability of finding PCa.
On the receiver operating characteristic analysis, the PCRD index (area under the curve, 0.926) significantly (P < .001) outperformed other, established PCa predictors. The PCRD index effectively predicted the prostate biopsy outcome, correctly identifying 9 of 10 men who were eventually diagnosed with PCa and correctly ruling out PCa for 9 of 10 men who did not have PCa. Its predictive power significantly outperformed established PCa predictors, and the formulated risk equation accurately calculated the probability of finding cancer on biopsy, on an individual patient basis. Copyright © 2016 Elsevier Inc. All rights reserved.
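The performance figures reported above all derive from a 2×2 confusion matrix. The counts below are reconstructed to be consistent with the published percentages (roughly 150 true positives among 167 cancers and 150 true negatives among 164 non-cancers); they are illustrative, not taken from the paper:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    sens = tp / (tp + fn)              # sensitivity
    spec = tn / (tn + fp)              # specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),         # positive predictive value
        "npv": tn / (tn + fn),         # negative predictive value
        "lr_pos": sens / (1 - spec),   # positive likelihood ratio
        "lr_neg": (1 - sens) / spec,   # negative likelihood ratio
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }

m = diagnostic_metrics(tp=150, fn=17, tn=150, fp=14)
```

With these counts the derived metrics match the abstract's figures (sensitivity ≈ 89.8%, specificity ≈ 91.5%, LR+ ≈ 10.5, accuracy ≈ 90.6%).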
Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time
NASA Astrophysics Data System (ADS)
Smith, James F.
2017-03-01
A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.
Christiansen, P; Schlosser, A; Henriksen, O
1995-01-01
The fully relaxed water signal was used as an internal standard in a STEAM experiment to calculate the concentrations of the metabolites N-acetylaspartate (NAA), creatine + phosphocreatine [Cr + PCr], and choline-containing metabolites (Cho) in the frontal part of the brain in 12 patients with probable Alzheimer's disease. Eight age-matched healthy volunteers served as controls. Furthermore, T1 and T2 relaxation times of the metabolites and the signal ratios NAA/Cho, NAA/[Cr + PCr], and [Cr + PCr]/Cho at four different echo times (TE) and two different repetition times (TR) were calculated. The experiments were carried out using a Siemens Helicon SP 63/84 whole-body MR scanner at 1.5 T. The concentration of NAA was significantly lower in the patients with probable Alzheimer's disease than in the healthy volunteers. No significant difference was found for any other metabolite concentration. Among the signal ratios, the only statistically significant difference was that the NAA/Cho ratio at TE = 92 ms and TR = 1.6 s was lower in the patients with probable Alzheimer's disease than in the control group. A trend towards a longer T2 relaxation time for NAA in the patients with probable Alzheimer's disease than among the healthy volunteers was found, but no significant difference was found for the T1 and T2 relaxation times.
Barnett, L A; Lewis, M; Mallen, C D; Peat, G
2017-12-04
Selection bias is a concern when designing cluster randomised controlled trials (c-RCTs). Despite addressing potential issues at the design stage, bias cannot always be eradicated from a trial design. The application of bias analysis presents an important step forward in evaluating whether trial findings are credible. The aim of this paper is to give an example of the technique to quantify potential selection bias in c-RCTs. This analysis uses data from the Primary care Osteoarthritis Screening Trial (POST). The primary aim of this trial was to test whether screening for anxiety and depression, and providing appropriate care for patients consulting their GP with osteoarthritis, would improve clinical outcomes. Quantitative bias analysis is a seldom-used technique that can quantify the types of bias present in studies. A simple bias analysis was applied to the study and, because information on the selection probability was lacking, a probabilistic bias analysis with a range of triangular distributions was also used, applied at all three follow-up time points: 3, 6, and 12 months post consultation. Worse pain outcomes were observed among intervention participants than control participants (crude odds ratios at 3, 6, and 12 months: 1.30 (95% CI 1.01, 1.67), 1.39 (95% CI 1.07, 1.80), and 1.17 (95% CI 0.90, 1.53), respectively). Probabilistic bias analysis suggested that the observed effect became statistically non-significant if the selection probability ratio was between 1.2 and 1.4. Selection probability ratios of > 1.8 were needed to mask a statistically significant benefit of the intervention. The use of probabilistic bias analysis in this c-RCT suggested that the worse outcomes observed in the intervention arm could plausibly be attributed to selection bias. A very large degree of selection bias would have been needed to mask a beneficial effect of the intervention, making that interpretation less plausible.
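As a rough sketch of the general approach (our simplification, not the authors' actual model): a simple bias analysis divides the observed odds ratio by a single assumed selection probability ratio, while the probabilistic version draws that ratio from a triangular distribution and summarizes the distribution of bias-adjusted estimates. Function names and numbers are illustrative.

```python
import random

def simple_bias_adjusted_or(observed_or, selection_ratio):
    # Simple bias analysis: a single assumed selection probability
    # ratio is divided out of the observed odds ratio.
    return observed_or / selection_ratio

def probabilistic_bias_adjusted_or(observed_or, low, mode, high,
                                   n=10_000, seed=1):
    # Probabilistic bias analysis: draw the selection probability
    # ratio from a triangular(low, mode, high) distribution and
    # report the median bias-adjusted odds ratio.
    rng = random.Random(seed)
    draws = sorted(observed_or / rng.triangular(low, high, mode)
                   for _ in range(n))
    return draws[n // 2]

# Observed 6-month OR of 1.39, selection ratio assumed between 1.2 and 1.4:
adj = probabilistic_bias_adjusted_or(1.39, low=1.2, mode=1.3, high=1.4)
```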
School illness absenteeism during 2009 influenza A (H1N1) pandemic--South Dakota, 2009-2010.
Kightlinger, Lon; Horan, Vickie
2013-05-01
Schools are important amplification settings of influenza virus transmission. We demonstrated correlation of school absenteeism (due to any illness) with other influenza A (H1N1) activity surveillance data during the 2009 pandemic. We collected nonspecific illness student absenteeism data from August 17, 2009 through April 3, 2010 from 187 voluntarily participating South Dakota schools using weekly online surveys. Relative risks (RR) were calculated as the ratio of the probability of absenteeism during elevated weeks versus the probability of absenteeism during the baseline weeks (RR = 1.89). We used Pearson correlation to associate absenteeism with laboratory-confirmed influenza cases, influenza cases diagnosed by rapid tests, influenza-associated hospitalizations and deaths reported in South Dakota during the 2009 H1N1 pandemic period. School-absenteeism data correlated strongly with data from these other influenza surveillance sources.
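The relative risk above is a ratio of two absenteeism probabilities. A sketch with illustrative counts chosen to reproduce the reported RR of 1.89 (not the study's actual data):

```python
def relative_risk(events_exposed, n_exposed, events_baseline, n_baseline):
    # Ratio of the absenteeism probability during elevated-activity
    # weeks to the probability during baseline weeks.
    return (events_exposed / n_exposed) / (events_baseline / n_baseline)

# Hypothetical: 85 of 1000 students absent in an elevated week versus
# 45 of 1000 in a baseline week.
rr = relative_risk(85, 1000, 45, 1000)  # ≈ 1.89
```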
Li, Xia; Kearney, Patricia M; Keane, Eimear; Harrington, Janas M; Fitzgerald, Anthony P
2017-06-01
The aim of this study was to explore levels and sociodemographic correlates of physical activity (PA) over 1 week using accelerometer data. Accelerometer data was collected over 1 week from 1075 8-11-year-old children in the cross-sectional Cork Children's Lifestyle Study. Threshold values were used to categorise activity intensity as sedentary, light, moderate or vigorous. Questionnaires collected data on demographic factors. Smoothed curves were used to display minute by minute variations. Binomial regression was used to identify factors correlated with the probability of meeting WHO 60 min moderate to vigorous PA guidelines. Overall, 830 children (mean (SD) age: 9.9(0.7) years, 56.3% boys) were included. From the binomial multiple regression analysis, boys were found more likely to meet guidelines (probability ratio 1.17, 95% CI 1.06 to 1.28) than girls. Older children were less likely to meet guidelines than younger children (probability ratio 0.91, CI 0.87 to 0.95). Normal weight children were more likely than overweight and obese children to meet guidelines (probability ratio 1.25, CI 1.16 to 1.34). Children in urban areas were more likely to meet guidelines than those in rural areas (probability ratio 1.19, CI 1.07 to 1.33). Longer daylight length days were associated with greater probability of meeting guidelines compared to shorter daylight length days. PA levels differed by individual factors including age, gender and weight status as well as by environmental factors including residence and daylight length. Less than one-quarter of children (26.8% boys, 16.2% girls) meet guidelines. Effective intervention policies are urgently needed to increase PA. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Chen, I-Chun; Lee, Ming-Huei; Lin, Hsuan-Hung; Wu, Shang-Liang; Chang, Kun-Min; Lin, Hsiu-Ying
2017-05-01
Interstitial cystitis/bladder pain syndrome (IC/BPS) has several well-known comorbid psychiatric manifestations, including insomnia, anxiety, and depression. We hypothesized that somatoform disorder, which is a psychosomatic disease, can be used as a sensitive psychiatric phenotype of IC/BPS. We investigated whether somatoform disorder increases the risk of IC/BPS. A nested case-control study and a retrospective cohort study were followed up over a 12-year period (2002-2013) in the Taiwan Health Insurance Reimbursement Database. In the nested case-control study, 1612 patients with IC/BPS were matched in a 1:2 ratio to 3224 controls based on propensity scores. The odds ratio for somatoform disorder was calculated using conditional logistic regression analysis. In the retrospective cohort study, 1436 patients with somatoform disorder were matched in a 1:2 ratio to 2872 patients without somatoform disorder based on propensity scores. Cox regression analysis was used to estimate the hazard ratio associated with the development of IC/BPS in patients with somatoform disorder, and the cumulative survival probability was tested using Kaplan-Meier analysis. We found that the odds ratio for somatoform disorder was 2.46 (95% confidence interval [CI], 1.05-5.76). Although the average time until IC/BPS development in the control subjects was 11.5 ± 1.3 years, this interval was shorter in patients with somatoform disorder (6.3 ± 3.6 years). The hazard ratio for developing IC/BPS was 2.50 (95% CI 1.23-5.58); the adjusted hazard ratio was 2.26 (95% CI 1.002-5.007). The patients and controls also differed significantly in their cumulative survival probability for IC/BPS (log-rank P < .05). Evidence from the nested case-control study and retrospective cohort study consistently indicated that somatoform disorder increases the risk for IC/BPS. Our study suggests that somatoform disorder can be used as a sensitive psychiatric phenotype to predict IC/BPS.
Any past history of somatoform disorder should be documented while examining patients with IC/BPS.
Testing manifest monotonicity using order-constrained statistical inference.
Tijmstra, Jesper; Hessen, David J; van der Heijden, Peter G M; Sijtsma, Klaas
2013-01-01
Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores, such as the restscore, a single item score, and in some cases the total score. In this study, we show that manifest monotonicity can be tested by means of the order-constrained statistical inference framework. We propose a procedure that uses this framework to determine whether manifest monotonicity should be rejected for specific items. This approach provides a likelihood ratio test for which the p-value can be approximated through simulation. A simulation study is presented that evaluates the Type I error rate and power of the test, and the procedure is applied to empirical data.
Wind tunnel tests on a tail-less swept wing span-distributed cargo aircraft configuration
NASA Technical Reports Server (NTRS)
Rao, D. M.; Huffman, J. K.
1978-01-01
The configuration consisted of a 30 deg-swept, untapered, untwisted wing utilizing a low-moment cambered airfoil of 20 percent streamwise thickness designed for low wave drag at M = 0.6, C sub L = 0.4. The tests covered a range of Mach numbers from 0.3 to 0.725, chord Reynolds numbers from 1,100,000 to 2,040,000, angles of attack up to model buffet, and sideslip angles of ±4 deg. Configuration build-up, wing-pod filleting, airfoil modification and trailing edge control deflection effects were briefly investigated. Three wing-tip vertical tail designs were also tested. Wing-body filleting and a simple airfoil modification both produced increments to the maximum lift/drag ratio. Addition of pods eliminated the pitch instability of the basic wing. While the magnitude of these benefits was probably Reynolds number sensitive, they underline the potential for improving the aerodynamics of the present configuration. The cruise parameter (product of Mach number and lift/drag ratio) attained a maximum close to the airfoil design point. The configuration was found to be positively stable, with normal control effectiveness about all three axes, in the Mach number and C sub L range of interest.
Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Pardo-Vazquez, Jose L; Leboran, Victor; Molenberghs, Geert; Faes, Christel; Acuña, Carlos
2011-06-30
It is well established that neural activity is stochastically modulated over time. Therefore, direct comparisons across experimental conditions and determination of change points or maximum firing rates are not straightforward. This study sought to compare temporal firing probability curves that may vary across groups defined by different experimental conditions. Odds-ratio (OR) curves were used as a measure of comparison, and the main goal was to provide a global test to detect significant differences of such curves through the study of their derivatives. An algorithm is proposed that enables ORs based on generalized additive models, including factor-by-curve-type interactions to be flexibly estimated. Bootstrap methods were used to draw inferences from the derivatives curves, and binning techniques were applied to speed up computation in the estimation and testing processes. A simulation study was conducted to assess the validity of these bootstrap-based tests. This methodology was applied to study premotor ventral cortex neural activity associated with decision-making. The proposed statistical procedures proved very useful in revealing the neural activity correlates of decision-making in a visual discrimination task. Copyright © 2011 John Wiley & Sons, Ltd.
Stochastic mechanics of loose boundary particle transport in turbulent flow
NASA Astrophysics Data System (ADS)
Dey, Subhasish; Ali, Sk Zeeshan
2017-05-01
In a turbulent wall shear flow, we explore, for the first time, the stochastic mechanics of loose boundary particle transport, having variable particle protrusions due to various cohesionless particle packing densities. The mean transport probabilities in contact and detachment modes are obtained. The mean transport probabilities in these modes as a function of Shields number (nondimensional fluid induced shear stress at the boundary) for different relative particle sizes (ratio of boundary roughness height to target particle diameter) and shear Reynolds numbers (ratio of fluid inertia to viscous damping) are presented. The transport probability in contact mode increases with an increase in Shields number attaining a peak and then decreases, while that in detachment mode increases monotonically. For the hydraulically transitional and rough flow regimes, the transport probability curves in contact mode for a given relative particle size of greater than or equal to unity attain their peaks corresponding to the averaged critical Shields numbers, from where the transport probability curves in detachment mode initiate. At an inception of particle transport, the mean probabilities in both the modes increase feebly with an increase in shear Reynolds number. Further, for a given particle size, the mean probability in contact mode increases with a decrease in critical Shields number attaining a critical value and then increases. However, the mean probability in detachment mode increases with a decrease in critical Shields number.
A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test
NASA Astrophysics Data System (ADS)
Becker, D.; Cain, S.
Space object detection is of great importance in the highly dependent yet competitive and congested space domain. Detection algorithms employed play a crucial role in fulfilling the detection component in the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image based on the assumption that the data follows a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long exposure images of small and/or dim space objects from ground based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image only contains background noise. The detection algorithm tests each pixel point of the Fourier transformed images to make the determination if an object is present based on the criteria threshold found in the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the current algorithm used in space situational awareness applications to evaluate its value.
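As a sketch of the per-bin decision (our notation, not the paper's): for a known object signature s in a Fourier bin with independent complex circular Gaussian noise of variance σ², the log-likelihood ratio between "object present" (y = s + n) and "noise only" (y = n) reduces to comparing |y|² − |y − s|² against a threshold set for the desired false-alarm rate.

```python
def log_likelihood_ratio(y, s, noise_var):
    # LLR for complex circular Gaussian noise:
    # log p(y | object) - log p(y | noise) = (|y|^2 - |y - s|^2) / noise_var.
    return (abs(y) ** 2 - abs(y - s) ** 2) / noise_var

def detect(y, s, noise_var, threshold):
    # Declare "object present" in this Fourier bin when the LLR
    # exceeds the decision threshold.
    return log_likelihood_ratio(y, s, noise_var) > threshold
```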
Tempest, Elizabeth L; Carter, Ben; Beck, Charles R; Rubin, G James
2017-12-01
The impact of flooding on mental health is exacerbated due to secondary stressors, although the mechanism of action is not understood. We investigated the role of secondary stressors on psychological outcomes through analysis of data collected one-year after flooding, and effect modification by sex. We analysed data from the English National Study on Flooding and Health collected from households flooded, disrupted and unexposed to flooding during 2013-14. Psychological outcomes were probable depression, anxiety and post-traumatic stress disorder (PTSD). Parsimonious multivariable logistic regression models were fitted to determine the effect of secondary stressors on the psychological outcomes. Sex was tested as an effect modifier using subgroup analyses. A total of 2006 people participated (55.5% women, mean age 60 years old). Participants reporting concerns about their personal health and that of their family (concerns about health) had greater odds of probable depression (adjusted odds ratio [aOR] 1.77, 95% CI 1.17-2.65) and PTSD (aOR 2.58, 95% CI 1.82-3.66). Loss of items of sentimental value was associated with probable anxiety (aOR 1.82, 95% CI 1.26-2.62). For women, the strongest associations were between concerns about health and probable PTSD (aOR 2.86, 95% CI 1.79-4.57). For men, the strongest associations were between 'relationship problems' and probable depression (aOR 3.25, 95% CI 1.54-6.85). Concerns about health, problems with relationships and loss of sentimental items were consistently associated with poor psychological outcomes. Interventions to reduce the occurrence of these secondary stressors are needed to mitigate the impact of flooding on probable psychological morbidity. © The Author 2017. Published by Oxford University Press on behalf of the European Public Health Association.
Impact of probability estimation on frequency of urine culture requests in ambulatory settings.
Gul, Naheed; Quadri, Mujtaba
2012-07-01
To determine the perceptions of the medical community about urine culture in diagnosing urinary tract infections. A cross-sectional survey based on consecutive sampling was conducted at Shifa International Hospital, Islamabad, on 200 doctors, including medical students of the Shifa College of Medicine, from April to October 2010. A questionnaire with three common clinical scenarios of low, intermediate and high pre-test probability for urinary tract infection was used to assess the respondents' decisions to order a urine culture test. The differences between the reference estimates and the respondents' estimates of pre- and post-test probability were assessed. The association of estimated probabilities with the number of tests ordered was also evaluated. The respondents were also asked about the cost-effectiveness and safety of urine culture and sensitivity testing. Data were analysed using SPSS version 15. In low pre-test probability settings, the disease probability was over-estimated, suggesting the participants' inability to rule out the disease. The post-test probabilities were, however, under-estimated by the doctors as compared to the students. In intermediate and high pre-test probability settings, both over- and under-estimation of probabilities were noticed. Doctors were more likely to consider ordering the test as the disease probability increased. Most of the respondents were of the opinion that urine culture was a cost-effective test with no associated potential harm. The wide variation in the clinical use of urine culture necessitates the formulation of appropriate guidelines for its diagnostic use, and the application of Bayesian probabilistic thinking to real clinical situations.
Jiang, Long; Situ, Dongrong; Lin, Yongbin; Su, Xiaodong; Zheng, Yan; Zhang, Yigong; Long, Hao
2013-11-01
Effective strategies for managing patients with pulmonary focal ground-glass opacity (fGGO) depend on the pretest probability of malignancy. Estimating a clinical probability of malignancy in patients with fGGOs can facilitate the selection and interpretation of subsequent diagnostic tests. Data from patients with pulmonary fGGO lesions, who were diagnosed at Sun Yat-sen University Cancer Center, were retrospectively collected. Multiple logistic regression analysis was used to identify independent clinical predictors of malignancy and to develop a clinical predictive model to estimate the pretest probability of malignancy in patients with fGGOs. One hundred and sixty-five pulmonary fGGO nodules were detected in 128 patients. Independent predictors of malignant fGGOs included a history of other cancers (odds ratio [OR], 0.264; 95% confidence interval [CI], 0.072 to 0.970), pleural indentation (OR, 8.766; 95% CI, 3.033 to 25.390), vessel-convergence sign (OR, 23.626; 95% CI, 6.200 to 90.027) and air bronchogram (OR, 7.41; 95% CI, 2.037 to 26.961). Model accuracy was satisfactory (area under the receiver operating characteristic curve, 0.934; 95% CI, 0.894 to 0.975), and there was excellent agreement between the predicted probability and the observed frequency of malignant fGGOs. We have developed a predictive model that can be used to generate pretest probabilities of malignant fGGOs, and the equation could be incorporated into a formal decision analysis. © 2013 Tianjin Lung Cancer Institute and Wiley Publishing Asia Pty Ltd.
Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta
2014-06-01
To investigate gaze behavior during communication between children with hearing impairment (HI) and normal-hearing (NH) peers. Ten HI-NH and 10 NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Using verbal event (questions, statements, back channeling, and silence) as the predictor variable, group characteristics in gaze behavior were expressed with Kaplan-Meier survival functions (estimating time to gaze-to-partner) and odds ratios (comparing number of verbal events with and without gaze-to-partner). Analyses compared the listeners in each dyad (HI: n = 10, mean age = 12;6 years, mean better ear pure-tone average = 33.0 dB HL; NH: n = 10, mean age = 13;7 years). Log-rank tests revealed significant group differences in survival distributions for all verbal events, reflecting a higher probability of gaze to the partner's face for participants with HI. Expressed as odds ratios (OR), participants with HI displayed greater odds for gaze-to-partner (ORs ranging between 1.2 and 2.1) during all verbal events. The results show an increased probability for listeners with HI to gaze at the speaker's face in association with verbal events. Several explanations for the finding are possible, and implications for further research are discussed.
Kirschner, Wolf; Dudenhausen, Joachim W; Henrich, Wolfgang
2016-04-01
Iron deficiency is highly prevalent in pregnancy and carries elevated risks of preterm birth and low birth weight. In our recent study, we found that 6% of participants had anemia, whereas between 39% and 47% showed iron deficiency without anemia. In many countries, prenatal care relies solely on hemoglobin (Hb) measurement; to date, gynecologists have had no indication to determine other markers (e.g., serum ferritin). As iron deficiency results from an imbalance between intake and loss of iron, our aim was to find out whether the risk of iron deficiency can be estimated by a diet history protocol together with questionnaires about iron loss. We found that the risk of having iron deficiency at or beyond gestational week 21 increased by a factor of five; thus, additional diagnostics should now be performed in this group. Using the questionnaire as a screening instrument, we further estimated the probability of disease in terms of a positive likelihood ratio (LR+). The positive LR for the group below the 21st week of gestation is 1.9, increasing the post-test probability from 36% to 52%. Further research based on larger sample sizes will show whether the ratios can be increased further.
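The post-test probability quoted above follows from Bayes' theorem on the odds scale. A minimal sketch, using the abstract's own figures (pre-test 36%, LR+ = 1.9):

```python
def post_test_probability(pretest, lr):
    """Bayes on the odds scale: post-odds = pre-odds * likelihood ratio."""
    pre_odds = pretest / (1.0 - pretest)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Figures from the abstract: pre-test 36%, LR+ = 1.9 -> roughly 52%.
p = post_test_probability(0.36, 1.9)
```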
System For Surveillance Of Spectral Signals
Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.
2004-10-12
A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test ("SPRT") methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
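The SPRT step applied to the residuals can be sketched as follows for the simple case of Gaussian residuals with known variance. The mean-shift alternative, error rates, and data below are illustrative assumptions, not the patent's parameters.

```python
import math

def sprt(residuals, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald SPRT: H0 = residual mean 0 vs H1 = mean mu1, known sigma.
    Returns 'H0', 'H1', or 'continue' if neither bound is crossed."""
    lower = math.log(beta / (1.0 - alpha))
    upper = math.log((1.0 - beta) / alpha)
    llr = 0.0
    for x in residuals:
        # Log-likelihood-ratio increment for N(mu1, sigma) vs N(0, sigma).
        llr += (mu1 / sigma**2) * (x - mu1 / 2.0)
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "continue"
```

A degraded signal (residuals drifting toward the alternative mean) drives the cumulative log-likelihood ratio to the upper bound and triggers an alarm; on-spec residuals drive it to the lower bound.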
System For Surveillance Of Spectral Signals
Gross, Kenneth C.; Wegerich, Stephan; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.
2003-04-22
A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
System for surveillance of spectral signals
Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.
2006-02-14
A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test ("SPRT") methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
System for surveillance of spectral signals
Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.
2001-01-01
A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test ("SPRT") methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
Expert system for testing industrial processes and determining sensor status
Gross, K.C.; Singer, R.M.
1998-06-02
A method and system are disclosed for monitoring both an industrial process and a sensor. The method and system include determining a minimum number of sensor pairs needed to test the industrial process as well as the sensor for evaluating the state of operation of both. The technique further includes generating a first and second signal characteristic of an industrial process variable. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the pair of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 24 figs.
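A minimal sketch of the signal-processing chain described above (pairwise difference function, Fourier-mode composite, residual), using synthetic two-sensor data. The signal model and the choice of five retained modes are assumptions for illustration, not the patent's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512, endpoint=False)

# Two sensor signals for the same physical variable (hypothetical data).
s1 = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)
s2 = np.sin(2 * np.pi * 5 * t + 0.05) + 0.1 * rng.standard_normal(t.size)

diff = s1 - s2  # arithmetic difference function over time

# Composite function: keep only the dominant Fourier modes of the difference.
spec = np.fft.rfft(diff)
keep = np.argsort(np.abs(spec))[-5:]  # five largest-magnitude modes
mask = np.zeros_like(spec)
mask[keep] = spec[keep]
composite = np.fft.irfft(mask, n=diff.size)

# Residual: difference minus composite, ideally close to white noise and
# therefore suitable for a statistical probability ratio test.
residual = diff - composite
```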
Expert system for testing industrial processes and determining sensor status
Gross, Kenneth C.; Singer, Ralph M.
1998-01-01
A method and system for monitoring both an industrial process and a sensor. The method and system include determining a minimum number of sensor pairs needed to test the industrial process as well as the sensor for evaluating the state of operation of both. The technique further includes generating a first and second signal characteristic of an industrial process variable. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the pair of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test.
Boundary layer integral matrix procedure: Verification of models
NASA Technical Reports Server (NTRS)
Bonnett, W. S.; Evans, R. M.
1977-01-01
The three turbulence models currently available in the JANNAF version of the Aerotherm Boundary Layer Integral Matrix Procedure (BLIMP-J) code were studied. The BLIMP-J program is the standard prediction method for boundary layer effects in liquid rocket engine thrust chambers. Experimental data from flow fields with large edge-to-wall temperature ratios are compared to the predictions of the three turbulence models contained in BLIMP-J. In addition, test conditions necessary to generate additional data on a flat plate or in a nozzle are given. It is concluded that the Cebeci-Smith turbulence model is the recommended model for the prediction of boundary layer effects in liquid rocket engines. In addition, the effects of homogeneous chemical reaction kinetics were examined for a hydrogen/oxygen system. Results show that for most flows, kinetics are probably only significant at stoichiometric mixture ratios.
Exclusion probabilities and likelihood ratios with applications to mixtures.
Slooten, Klaas-Jan; Egeland, Thore
2016-01-01
The statistical evidence obtained from mixed DNA profiles can be summarised in several ways in forensic casework, including the likelihood ratio (LR) and the Random Man Not Excluded (RMNE) probability. The literature has seen a discussion of the advantages and disadvantages of likelihood ratios and exclusion probabilities, and part of our aim is to bring some clarification to this debate. In a previous paper, we proved that there is a general mathematical relationship between these statistics: RMNE can be expressed as a certain average of the LR, implying that the expected value of the LR, when applied to an actual contributor to the mixture, is at least equal to the inverse of the RMNE. While the mentioned paper presented applications for kinship problems, the current paper demonstrates the relevance for mixture cases, and for this purpose, we prove some new general properties. We also demonstrate how to use the distribution of the likelihood ratio for donors of a mixture, to obtain estimates for exceedance probabilities of the LR for non-donors, of which the RMNE is a special case corresponding to LR > 0. In order to derive these results, we need to view the likelihood ratio as a random variable. In this paper, we describe how such a randomization can be achieved. The RMNE is usually invoked only for mixtures without dropout. In mixtures, artefacts like dropout and drop-in are commonly encountered and we address this situation too, illustrating our results with a basic but widely implemented model, a so-called binary model. The precise definitions, modelling and interpretation of the required concepts of dropout and drop-in are not entirely obvious, and we attempt to clarify them here in a general likelihood framework for a binary model.
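For a single locus without dropout, the RMNE probability has a simple closed form under Hardy-Weinberg equilibrium: a random person is not excluded exactly when both of his alleles lie in the set of mixture alleles. The allele labels and frequencies below are hypothetical.

```python
def rmne(mixture_alleles, allele_freqs):
    """Random Man Not Excluded probability for one locus, no dropout:
    not excluded iff both alleles fall in the mixture's allele set."""
    p = sum(allele_freqs[a] for a in mixture_alleles)
    return p * p  # Hardy-Weinberg: the two alleles are independent draws

# Hypothetical locus with mixture alleles {10, 12, 13}:
freqs = {10: 0.2, 11: 0.3, 12: 0.25, 13: 0.15, 14: 0.1}
p_locus = rmne({10, 12, 13}, freqs)  # (0.2 + 0.25 + 0.15)^2 = 0.36
```

Across independent loci, the overall RMNE is the product of the per-locus values.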
2011-01-01
Background Stratifying patients with a sore throat into the probability of having an underlying bacterial or viral cause may be helpful in targeting antibiotic treatment. We sought to assess the diagnostic accuracy of signs and symptoms and validate a clinical prediction rule (CPR), the Centor score, for predicting group A β-haemolytic streptococcal (GABHS) pharyngitis in adults (> 14 years of age) presenting with sore throat symptoms. Methods A systematic literature search was performed up to July 2010. Studies that assessed the diagnostic accuracy of signs and symptoms and/or validated the Centor score were included. For the analysis of the diagnostic accuracy of signs and symptoms and the Centor score, studies were combined using a bivariate random effects model, while for the calibration analysis of the Centor score, a random effects model was used. Results A total of 21 studies incorporating 4,839 patients were included in the meta-analysis on diagnostic accuracy of signs and symptoms. The results were heterogeneous and suggest that individual signs and symptoms generate only small shifts in post-test probability (range positive likelihood ratio (+LR) 1.45-2.33, -LR 0.54-0.72). As a decision rule for considering antibiotic prescribing (score ≥ 3), the Centor score has reasonable specificity (0.82, 95% CI 0.72 to 0.88) and a post-test probability of 12% to 40% based on a prior prevalence of 5% to 20%. Pooled calibration shows no significant difference between the numbers of patients predicted and observed to have GABHS pharyngitis across strata of Centor score (0-1 risk ratio (RR) 0.72, 95% CI 0.49 to 1.06; 2-3 RR 0.93, 95% CI 0.73 to 1.17; 4 RR 1.14, 95% CI 0.95 to 1.37). Conclusions Individual signs and symptoms are not powerful enough to discriminate GABHS pharyngitis from other types of sore throat. The Centor score is a well calibrated CPR for estimating the probability of GABHS pharyngitis. 
The Centor score can enhance appropriate prescribing of antibiotics, but should be used with caution in low prevalence settings of GABHS pharyngitis such as primary care. PMID:21631919
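The post-test probabilities quoted for a Centor score of 3 or more can be reproduced from the reported specificity via likelihood ratios. The sensitivity of 0.49 used below is a hypothetical value chosen for illustration, not a figure from the review.

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

def post_test(prior, lr):
    """Bayes update on the odds scale."""
    odds = prior / (1.0 - prior) * lr
    return odds / (1.0 + odds)

# Specificity 0.82 from the abstract; sensitivity 0.49 is hypothetical.
lr_pos, lr_neg = likelihood_ratios(0.49, 0.82)
# Prior prevalences of 5% and 20%, as in the abstract:
posts = [post_test(p, lr_pos) for p in (0.05, 0.20)]
```

Under these assumptions the positive result moves a 5% prior to roughly 12% and a 20% prior to roughly 40%, matching the range reported in the abstract.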
Salinas, Maria; López-Garrigós, Maite; Flores, Emilio; Leiva-Salinas, Carlos
2017-12-22
To study the variability in the request of serum uric acid (SUA) in primary care. A cross-sectional study was designed and conducted at a main core laboratory. Spanish laboratories were invited to report their number of serum glucose (SG) and SUA tests requested from primary care during 2014. A survey was sent to every participant in November 2016 regarding the inclusion of SUA in order profiles/panels. The ratio of SUA/SG requests (SUA/SG) was calculated and compared between regions, and between laboratories depending on whether SUA was included in a health check profile. 110 laboratories participated in the study (covering 59.8% of the Spanish population). The median SUA/SG ratio was 0.82 (IQR: 0.25), and 41 laboratories had a ratio over 0.9. There was significant regional variability (P = .008). Laboratories where SUA was not included in the "health check profile" had lower SUA/SG ratios (P = .003). There was significant regional variability in the request of SUA, and an overall over-request. Different regional customs or habits and the inclusion of SUA in the health check profile were probable causes behind the observed over-request. © American Society for Clinical Pathology, 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Teixeira, João L G; Rabaioli, Paola; Savaris, Ricardo F
2015-01-01
To evaluate the performance of a commercial urinary test to screen for abnormal first trimester pregnancies in women presenting to an emergency room. In this prospective observational cohort, women with a confirmed first trimester pregnancy (gestational age <12 weeks) provided a urine sample for diagnosing the viability of their gestation. Pregnancy viability and location were confirmed by ultrasound and/or laparoscopy. Of the 815 patients eligible for the study, 12 were excluded for not having a confirmed pregnancy (n = 6) or were lost to follow-up (n = 6). A total of 803 patients underwent testing and completed follow-up. The pretest probability of an abnormal pregnancy was 44% (9% for ectopic pregnancy and 35% for miscarriage). The test had the following performance characteristics for identifying an abnormal first-trimester pregnancy (sensitivity, 13%; 95% confidence interval [CI], 10-17; specificity, 82%; 95% CI, 78-86; positive predictive value, 36%; 95% CI, 28-46; negative predictive value, 54%; 95% CI, 50-58; accuracy, 47%; positive likelihood ratio, 0.74; 95% CI, 0.53-1.03; negative likelihood ratio, 1.06; 95% CI, 1-1.12). The reproducibility of the test in our study was high (kappa index between readers, 0.89; 95% CI, 0.77-1). In our emergency setting, we were not able to confirm that the commercial test is adequate to detect or exclude an abnormal first-trimester pregnancy. Copyright © 2015 Elsevier Inc. All rights reserved.
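Performance figures like those above all derive from a standard 2x2 table of test results against true status. A sketch with hypothetical counts (not a reconstruction of this study's table):

```python
def diagnostics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy measures from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "lr_pos": sens / (1.0 - spec),
        "lr_neg": (1.0 - sens) / spec,
    }

# Hypothetical 2x2 table: 90 true positives, 10 false positives,
# 10 false negatives, 90 true negatives.
stats = diagnostics(90, 10, 10, 90)
```

Note that a positive likelihood ratio below 1, as reported above, means a positive result actually lowers the probability of disease, which is why the authors could not endorse the test.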
Patient or physician preferences for decision analysis: the prenatal genetic testing decision.
Heckerling, P S; Verp, M S; Albert, N
1999-01-01
The choice between amniocentesis and chorionic villus sampling for prenatal genetic testing involves tradeoffs of the benefits and risks of the tests. Decision analysis is a method of explicitly weighing such tradeoffs. The authors examined the relationship between prenatal test choices made by patients and the choices prescribed by decision-analytic models based on their preferences, and separate models based on the preferences of their physicians. Preferences were assessed using written scenarios describing prenatal testing outcomes, and were recorded on linear rating scales. After adjustment for sociodemographic and obstetric confounders, test choice was significantly associated with the choice of decision models based on patient preferences (odds ratio 4.44; CI, 2.53 to 7.78), but not with the choice of models based on the preferences of the physicians (odds ratio 1.60; CI, 0.79 to 3.26). Agreement between decision analyses based on patient preferences and on physician preferences was little better than chance (kappa = 0.085+/-0.063). These results were robust both to changes in the decision-analytic probabilities and to changes in the model structure itself to simulate non-expected utility decision rules. The authors conclude that patient but not physician preferences, incorporated in decision models, correspond to the choice of amniocentesis or chorionic villus sampling made by the patient. Nevertheless, because patient preferences were assessed after referral for genetic testing, prospective preference-assessment studies will be necessary to confirm this association.
Index/Ring Finger Ratio, Hand and Foot Index: Gender Estimation Tools.
Gupta, Sonia; Gupta, Vineeta; Tyagi, Nutan; Ettishree; Bhagat, Sinthia; Dadu, Mohit; Anthwal, Nishita; Ashraf, Tahira
2017-06-01
Gender estimation from dismembered human body parts and skeletal remains in cases of mass disasters, explosions, and assaults is an imperative element of any medico-legal investigation and has been a major challenge for forensic scientists. The aim of the present study was to estimate gender by using the index and ring finger length ratio and the hand and foot index, along with the correlation between the hand and foot index, to determine the role of all the indices in establishing gender identity. A descriptive cross-sectional study was done on 300 subjects (150 males and 150 females). Various anthropometric measurements like hand length, hand breadth and hand index, Index Finger Length (IFL), Ring Finger Length (RFL) and IFL/RFL ratio as well as foot length, foot breadth and foot index were estimated in millimeters (mm) with the help of a sliding anthropometric caliper. The data were analysed using the independent t-test and Pearson correlation coefficient test. A probability value (p) of ≤ 0.05 was considered statistically significant. The index and ring finger ratio was found to be higher in females as compared to males. The hand and foot index was higher in males than in females. The index and ring finger length ratio and the hand and foot index between males and females were found to be statistically significant for both hands and feet. A statistically significant correlation was determined between the hand index and the foot index. This study can be useful to establish the gender of a dismembered hand or foot when subjected to medicolegal examination.
Sample Size Determination for Rasch Model Tests
ERIC Educational Resources Information Center
Draxler, Clemens
2010-01-01
This paper is concerned with supplementing statistical tests for the Rasch model so that additionally to the probability of the error of the first kind (Type I probability) the probability of the error of the second kind (Type II probability) can be controlled at a predetermined level by basing the test on the appropriate number of observations.…
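A common way to control both error probabilities is to choose the number of observations from a normal approximation to the test statistic. This generic sketch is not the paper's Rasch-specific procedure; it illustrates the Type I/Type II tradeoff the abstract describes.

```python
import math
from statistics import NormalDist

def sample_size(delta, sigma, alpha=0.05, beta=0.05):
    """Normal-approximation sample size so that a one-sided test of a mean
    shift delta has Type I error alpha and Type II error beta."""
    z = NormalDist().inv_cdf
    n = ((z(1.0 - alpha) + z(1.0 - beta)) * sigma / delta) ** 2
    return math.ceil(n)
```

For example, detecting a half-standard-deviation shift with both error rates at 5% requires on the order of forty-some observations; larger shifts need far fewer.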
Statistical Inference in Graphical Models
2008-06-17
fuse probability theory and graph theory in such a way as to permit efficient representation and computation with probability distributions. They...message passing. ... 1. INTRODUCTION: In approaching real-world problems, we often need to deal with uncertainty. Probability and statistics provide a...dynamic programming methods. However, for many sensors of interest, the signal-to-noise ratio does not allow such a treatment. Another source of...
Pham, Huy P; Sireci, Anthony N; Kim, Chong H; Schwartz, Joseph
2014-09-01
Both plasma- and recombinant activated factor VII (rFVIIa)-based algorithms can be used to correct coagulopathy in pre-liver-transplant patients with acute liver failure requiring intracranial pressure monitor (ICPM) placement. A decision model was created to compare the cost-effectiveness of these methods. A 70-kg patient could receive either 1 round of plasma followed by coagulation testing or 2 units of plasma and 40 μg/kg rFVIIa. The ICPM is placed without further coagulation testing after rFVIIa administration. In the plasma algorithm, the probability of ICPM placement was estimated based on the expected international normalized ratio (INR) after plasma administration. Risks of rFVIIa-associated thrombosis and transfusion reactions were also included. The model was run for patients with INRs ranging from 2 to 6, with concomitant adjustments to model parameters. The model supported the initial use of rFVIIa for ICPM placement as a cost-effective treatment when INR ≥2 (with an incremental cost-effectiveness ratio of at most US$7088.02). © The Author(s) 2014.
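The incremental cost-effectiveness ratio (ICER) used in such decision models is a simple quotient of cost and effect differences between strategies; the figures below are hypothetical.

```python
def icer(cost_new, effect_new, cost_std, effect_std):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effect (e.g., dollars per quality-adjusted life-year)."""
    return (cost_new - cost_std) / (effect_new - effect_std)

# Hypothetical figures: the new strategy costs $12,000 for 0.8 QALYs,
# the standard strategy $5,000 for 0.5 QALYs.
ratio = icer(12000.0, 0.8, 5000.0, 0.5)
```

A strategy is typically deemed cost-effective when its ICER falls below a willingness-to-pay threshold, which is how the US$7088.02 figure above should be read.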
Decroo, Tom; Henríquez-Trujillo, Aquiles R; De Weggheleire, Anja; Lynen, Lutgarde
2017-10-11
A recently published Ugandan study on tuberculosis (TB) diagnosis in HIV-positive patients with presumptive smear-negative TB showed that, out of 90 patients who started TB treatment, 20% (18/90) had a positive Xpert MTB/RIF (Xpert) test, 24% (22/90) had a negative Xpert test, and 56% (50/90) were started without Xpert testing. Although Xpert testing was available, clinicians did not use it systematically. Here we aim to make the process of clinical decision-making more explicit. First, we estimated that the pre-test probability of TB, i.e., the prevalence of TB in smear-negative HIV-infected patients with signs of presumptive TB in Uganda, was 17%. Second, we argue that the treatment threshold, the probability of disease at which the utility of treating and not treating is the same, and above which treatment should be started, should be determined. In Uganda, the treatment threshold has not yet been formally established. In Rwanda, the calculated treatment threshold was 12%. Hence, one could argue that the threshold was reached without even considering additional tests. Still, Xpert testing can be useful when the probability of disease is above the treatment threshold, but only when a negative Xpert result can lower the probability of disease enough to cross the treatment threshold. This occurs when the pre-test probability is lower than the test-treat threshold, the probability of disease at which the utility of testing and the utility of treating without testing are the same. We estimated that the test-treat threshold was 28%. Finally, to show the effect of the presence or absence of arguments on the probability of TB, we use confirming and excluding power, and a log10 odds scale to combine arguments. If the pre-test probability is above the test-treat threshold, empirical treatment is justified, because even a negative Xpert will not lower the post-test probability below the treatment threshold.
However, Xpert testing for the diagnosis of TB should be performed in patients for whom the probability of TB was lower than the test-treat threshold. Especially in resource constrained settings clinicians should be encouraged to take clinical decisions and use scarce resources rationally.
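The threshold logic described above can be sketched directly, together with the log10 odds scale the authors use to combine arguments. The decision labels and helper names are illustrative, not the paper's terminology.

```python
import math

def prob_to_log_odds(p):
    """log10 odds, the scale used to add up confirming/excluding arguments."""
    return math.log10(p / (1.0 - p))

def log_odds_to_prob(lo):
    odds = 10.0 ** lo
    return odds / (1.0 + odds)

def decision(pretest, treatment_threshold, test_treat_threshold):
    """Threshold logic: above the test-treat threshold, treat empirically;
    between the two thresholds, test (a negative result could avert
    treatment); below the treatment threshold, withhold treatment."""
    if pretest >= test_treat_threshold:
        return "treat empirically"
    if pretest >= treatment_threshold:
        return "test"
    return "withhold or gather more evidence"

# Figures from the abstract: pre-test 17%, treatment threshold 12%,
# test-treat threshold 28% -> Xpert testing is informative.
choice = decision(0.17, 0.12, 0.28)
```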
1981-06-01
...for a detection probability of PD and an associated false-alarm probability PFA (in dB). ... V. REFERENCE MODEL. A. INTRODUCTION: In order to ... the region of signal space over which H1 is chosen: PFA = integral of p(w|H0) dw = Q(.) (Eq. 26). Similarly, the miss probability = 1 - detection probability is obtained by integrating ... The input signal-to-noise ratio: S/N(input) = a^2 (Eq. 32). The probability of false alarm: PFA = Q[.] (Eq. 33). ...
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Carpenter, J. Russell
2016-01-01
The cadence of proximity operations for the OSIRIS-REx mission may face an additional challenge given the potential detection of a natural satellite orbiting the asteroid Bennu. Current ground-based radar observations searching for objects orbiting Bennu have found none within the bounds of specific sizes and rotation rates. If a natural satellite is detected during approach, a different proximity-operations cadence will need to be implemented, as well as a collision avoidance strategy for mission success. A collision avoidance strategy will be analyzed using the Wald Sequential Probability Ratio Test.
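A Wald SPRT for maneuver decisions uses fixed decision bounds derived from the targeted false-alarm and missed-detection rates. The per-update likelihood-ratio increment below follows the simplified form described in the companion abstracts (a ratio of the current collision-probability estimate to a prior-based estimate); it is a sketch under those assumptions, not the mission algorithm.

```python
import math

def wald_bounds(alpha, beta):
    """Wald's log-likelihood-ratio decision bounds for targeted false-alarm
    rate alpha and missed-detection rate beta."""
    upper = math.log((1.0 - beta) / alpha)   # accept H1 (maneuver) above this
    lower = math.log(beta / (1.0 - alpha))   # accept H0 (no maneuver) below
    return lower, upper

def update_llr(llr, pc_current, pc_prior):
    """Accumulate one sequential update; pc_current is the current collision
    probability estimate, pc_prior the prior-based estimate (sketch form)."""
    return llr + math.log(pc_current / pc_prior)

lower, upper = wald_bounds(alpha=0.01, beta=0.01)
llr = update_llr(0.0, pc_current=1e-4, pc_prior=1e-6)
```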
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Carpenter, Russell
2016-01-01
The cadence of proximity operations for the OSIRIS-REx mission may face an additional challenge given the potential detection of a natural satellite orbiting the asteroid Bennu. Current ground-based radar observations searching for objects orbiting Bennu have found none within the bounds of specific sizes and rotation rates. If a natural satellite is detected during approach, a different proximity-operations cadence will need to be implemented, as well as a collision avoidance strategy for mission success. A collision avoidance strategy will be analyzed using the Wald Sequential Probability Ratio Test.
Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions
Burke, Timothy P.; Kiedrowski, Brian C.
2017-12-11
Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface, which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision-history-based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.
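The kernel density estimation step, estimating a density at a point from samples falling near it, can be sketched in one dimension. The Gaussian kernel, bandwidth, and sample data are illustrative choices, not the paper's estimator.

```python
import math

def kde_at(x0, samples, bandwidth):
    """Gaussian kernel density estimate at x0: a weighted count of samples
    near x0, the same idea used above to estimate an interface source from
    collisions occurring near the interface."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(
        math.exp(-0.5 * ((x0 - x) / bandwidth) ** 2) for x in samples
    )

# Samples spread uniformly on [0, 1] should give a density near 1 at the
# center (hypothetical data).
samples = [i / 100.0 for i in range(101)]
density = kde_at(0.5, samples, bandwidth=0.1)
```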
Tygert, Mark
2010-09-21
We discuss several tests for determining whether a given set of independent and identically distributed (i.i.d.) draws does not come from a specified probability density function. The most commonly used are Kolmogorov-Smirnov tests, particularly Kuiper's variant, which focus on discrepancies between the cumulative distribution function for the specified probability density and the empirical cumulative distribution function for the given set of i.i.d. draws. Unfortunately, variations in the probability density function often get smoothed over in the cumulative distribution function, making it difficult to detect discrepancies in regions where the probability density is small in comparison with its values in surrounding regions. We discuss tests without this deficiency, complementing the classical methods. The tests of the present paper are based on the plain fact that it is unlikely to draw a random number whose probability is small, provided that the draw is taken from the same distribution used in calculating the probability (thus, if we draw a random number whose probability is small, then we can be confident that we did not draw the number from the same distribution used in calculating the probability).
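A one-sample Kolmogorov-Smirnov statistic of the kind this paper complements can be computed directly as the largest gap between the empirical CDF of the draws and the specified CDF:

```python
def ks_statistic(draws, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the maximum deviation
    between the empirical CDF of the draws and the specified CDF."""
    xs = sorted(draws)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # Empirical CDF jumps from i/n to (i+1)/n at x; check both sides.
        d = max(d, (i + 1) / n - f, f - i / n)
    return d

# Evenly spaced draws tested against the Uniform(0, 1) CDF (identity).
draws = [i / 10.0 for i in range(1, 10)]
d = ks_statistic(draws, lambda x: x)
```

As the abstract notes, this statistic compares cumulative functions, so narrow dips in the density can be smoothed over; the paper's tests target exactly that weakness.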
The optimal imaging strategy for patients with stable chest pain: a cost-effectiveness analysis.
Genders, Tessa S S; Petersen, Steffen E; Pugliese, Francesca; Dastidar, Amardeep G; Fleischmann, Kirsten E; Nieman, Koen; Hunink, M G Myriam
2015-04-07
The optimal imaging strategy for patients with stable chest pain is uncertain. To determine the cost-effectiveness of different imaging strategies for patients with stable chest pain. Microsimulation state-transition model. Published literature. 60-year-old patients with a low to intermediate probability of coronary artery disease (CAD). Lifetime. The United States, the United Kingdom, and the Netherlands. Coronary computed tomography (CT) angiography, cardiac stress magnetic resonance imaging, stress single-photon emission CT, and stress echocardiography. Lifetime costs, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios. The strategy that maximized QALYs and was cost-effective in the United States and the Netherlands began with coronary CT angiography, continued with cardiac stress imaging if angiography found at least 50% stenosis in at least 1 coronary artery, and ended with catheter-based coronary angiography if stress imaging induced ischemia of any severity. For U.K. men, the preferred strategy was optimal medical therapy without catheter-based coronary angiography if coronary CT angiography found only moderate CAD or stress imaging induced only mild ischemia. In these strategies, stress echocardiography was consistently more effective and less expensive than other stress imaging tests. For U.K. women, the optimal strategy was stress echocardiography followed by catheter-based coronary angiography if echocardiography induced mild or moderate ischemia. Results were sensitive to changes in the probability of CAD and assumptions about false-positive results. All cardiac stress imaging tests were assumed to be available. Exercise electrocardiography was included only in a sensitivity analysis. Differences in QALYs among strategies were small. Coronary CT angiography is a cost-effective triage test for 60-year-old patients who have nonacute chest pain and a low to intermediate probability of CAD. Erasmus University Medical Center.
Jermacane, Daiga; Waite, Thomas David; Beck, Charles R; Bone, Angie; Amlôt, Richard; Reacher, Mark; Kovats, Sari; Armstrong, Ben; Leonardi, Giovanni; James Rubin, G; Oliver, Isabel
2018-03-07
The longer term impact of flooding on health is poorly understood. In 2015, following widespread flooding in the UK during winter 2013/14, Public Health England launched the English National Study of Flooding and Health. The study identified a higher prevalence of probable psychological morbidity one year after exposure to flooding. We now report findings after two years. In year two (2016), a self-assessment questionnaire including flooding-related exposures and validated instruments to screen for probable anxiety, depression and post-traumatic stress disorder (PTSD) was sent to all participants who consented to further follow-up. Participants' exposure status was categorised according to responses in year one; we assessed for exposure to new episodes of flooding and continuing flood-related problems in respondents' homes. We calculated the prevalence and odds ratio for each outcome by exposure group relative to unaffected participants, adjusting for confounders. We used the McNemar test to assess change in outcomes between year one and year two. In year two, 1064 (70%) people responded. The prevalence of probable psychological morbidity remained elevated amongst flooded participants [n = 339] (depression 10.6%, anxiety 13.6%, PTSD 24.5%) and disrupted participants [n = 512] (depression 4.1%, anxiety 6.4%, PTSD 8.9%), although these rates were reduced compared to year one. A greater reduction was seen in anxiety, 7.6% (95% confidence interval [CI] 4.6-9.9), than in depression, 3.8% (95% CI 1.5-6.1), or PTSD, 6.6% (95% CI 3.9-9.2). Exposure to flooding was associated with higher odds of anxiety (adjusted odds ratio [aOR] 5.2, 95% CI 1.7-16.3) and depression (aOR 8.7, 95% CI 1.9-39.8) but not PTSD. Exposure to disruption caused by flooding was not significantly associated with probable psychological morbidity. Persistent damage in the home as a consequence of the original flooding event was reported by 119 participants (14%).
The odds of probable psychological morbidity amongst flooded participants who reported persistent damage, compared with those who were unaffected, were significantly higher than the same comparison amongst flooded participants who did not report persistent damage. This study shows a continuance of probable psychological morbidity at least two years following exposure to flooding. Commissioners and providers of health and social care services should be aware that the increased need in populations may be prolonged. Efforts to resolve persistent damage to homes may reduce the risk of probable psychological morbidity.
Dosso, Stan E; Wilmut, Michael J; Nielsen, Peter L
2010-07-01
This paper applies Bayesian source tracking in an uncertain environment to Mediterranean Sea data, and investigates the resulting tracks and track uncertainties as a function of data information content (number of data time-segments, number of frequencies, and signal-to-noise ratio) and of prior information (environmental uncertainties and source-velocity constraints). To track low-level sources, acoustic data recorded for multiple time segments (corresponding to multiple source positions along the track) are inverted simultaneously. Environmental uncertainty is addressed by including unknown water-column and seabed properties as nuisance parameters in an augmented inversion. Two approaches are considered: Focalization-tracking maximizes the posterior probability density (PPD) over the unknown source and environmental parameters. Marginalization-tracking integrates the PPD over environmental parameters to obtain a sequence of joint marginal probability distributions over source coordinates, from which the most-probable track and track uncertainties can be extracted. Both approaches apply track constraints on the maximum allowable vertical and radial source velocity. The two approaches are applied to towed-source acoustic data recorded at a vertical line array at a shallow-water test site in the Mediterranean Sea where previous geoacoustic studies have been carried out.
75 FR 66271 - Assessment Dividends, Assessment Rates and Designated Reserve Ratio
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
... has recovered to pre-crisis levels, and the long term, when the reserve ratio is sufficiently large... detail below, concludes that a moderate, long-term average industry assessment rate, combined with an... and earnings of IDIs. Long Term To increase the probability that the fund reserve ratio will reach a...
High spatial resolution Mg/Al maps of the western Crisium and Sulpicius Gallus regions
NASA Technical Reports Server (NTRS)
Schonfeld, E.
1982-01-01
High spatial resolution Mg/Al ratio maps of the western Crisium and Sulpicius Gallus regions of the moon are presented. The data are from the X-ray fluorescence experiment; images were enhanced by the Laplacian subtraction method, using a special least-squares version of the Laplacian to reduce noise amplification. In the highlands region west of Mare Crisium, several relatively small patches of smooth material have high local Mg/Al ratios similar to values found in mare sites, suggesting volcanism in the highlands. In the same highland region there were other smooth areas without high local Mg/Al values; these are probably Cayley Formation material produced by impact mass wasting. The Sulpicius Gallus region has variable Mg/Al ratios. In this region there are several high Mg/Al ratio spots, two of which occur at the highland-mare interface. Another high Mg/Al ratio area corresponds to the Sulpicius Gallus Rima I region. The high Mg/Al ratio material in the Sulpicius Gallus region is probably pyroclastic.
Low Emissions RQL Flametube Combustor Component Test Results
NASA Technical Reports Server (NTRS)
Holdeman, James D.; Chang, Clarence T.
2001-01-01
This report describes and summarizes elements of the High Speed Research (HSR) Low Emissions Rich burn/Quick mix/Lean burn (RQL) flame tube combustor test program. This test program was performed at NASA Glenn Research Center circa 1992. The overall objective of this test program was to demonstrate and evaluate the capability of the RQL combustor concept for High Speed Civil Transport (HSCT) applications, with the goal of achieving NOx emission index levels of 5 g/kg-fuel at representative HSCT supersonic cruise conditions. The specific objectives of the tests reported herein were to investigate component performance of the RQL combustor concept for use in the evolution of ultra-low NOx combustor design tools. Test results indicated that the RQL combustor emissions and performance at simulated supersonic cruise conditions were predominantly sensitive to the quick mixer subcomponent performance and not sensitive to fuel injector performance. Test results also indicated that the mixing section configuration employing a single row of circular holes was the lowest-NOx mixer tested, probably due to its initial fast mixing characteristics. However, other quick mix orifice configurations, such as the slanted slot mixer, produced substantially lower levels of carbon monoxide emissions, most likely due to the enhanced circumferential dispersion of the air addition. Test results also suggested that an optimum momentum-flux ratio exists for a given quick mix configuration; momentum-flux ratios below or above the optimum value would cause undesirable jet under- or over-penetration. Tests conducted to assess the effect of quick mix flow area indicated that reduction in the quick mix flow area produced lower NOx emissions at reduced residence time, but had no effect on NOx emissions measured at similar residence time for the configurations tested.
Lear, Aaron; Huber, Merritt; Canada, Amy; Robertson, Jessica; Bosman, Evan; Zyzanski, Stephen
2018-01-01
To determine whether admission and provocative stress testing of patients who have ruled out for acute coronary syndrome put patients in the low-risk category for coronary artery disease (CAD) at risk of false-positive provocative stress testing and unnecessary coronary angiography/imaging. A retrospective chart review was performed on patients between 30 and 70 years old, with no pre-existing diagnosis of CAD, admitted to observation or inpatient status for chest pain or related complaints. Included patients were categorized, based on the Duke Clinical Score for pretest probability of CAD, into either a low-risk group or a moderate/high-risk group. The inpatient course was compared, including whether provocative stress testing was performed; results of stress testing; whether patients underwent further coronary imaging; and what the results of the further imaging showed. 543 patients were eligible: 305 with low pretest probability and 238 with moderate/high pretest probability. No difference was found in the rate of stress testing (relative risk [RR] = 1.01; 95% CI, 0.852 to 1.192; P = 0) or in the rate of positive or equivocal stress tests between the 2 groups (RR = 0.653; 95% CI, 0.415 to 1.028; P = .07). Low-pretest-probability patients had a lower likelihood of positive coronary imaging after stress testing (RR = 0.061; 95% CI, 0.004 to 0.957; P = .001). Follow-up provocative testing of all patients admitted/observed after emergency department presentation with chest pain is unlikely to find CAD in patients with low pretest probability. Testing all low-probability patients puts them at increased risk of unnecessary invasive confirmatory testing. Further prospective testing is needed to confirm these retrospective results. © Copyright 2018 by the American Board of Family Medicine.
Agoritsas, Thomas; Courvoisier, Delphine S; Combescure, Christophe; Deom, Marie; Perneger, Thomas V
2011-04-01
The probability of a disease following a diagnostic test depends on the sensitivity and specificity of the test, but also on the prevalence of the disease in the population of interest (or pre-test probability). How physicians use this information is not well known. To assess whether physicians correctly estimate post-test probability according to various levels of prevalence and explore this skill across respondent groups. Randomized trial. Population-based sample of 1,361 physicians of all clinical specialties. We described a scenario of a highly accurate screening test (sensitivity 99% and specificity 99%) in which we randomly manipulated the prevalence of the disease (1%, 2%, 10%, 25%, 95%, or no information). We asked physicians to estimate the probability of disease following a positive test (categorized as <60%, 60-79%, 80-94%, 95-99.9%, and >99.9%). Each answer was correct for a different version of the scenario, and no answer was possible in the "no information" scenario. We estimated the proportion of physicians proficient in assessing post-test probability as the proportion of correct answers beyond the distribution of answers attributable to guessing. Most respondents in each of the six groups (67%-82%) selected a post-test probability of 95-99.9%, regardless of the prevalence of disease and even when no information on prevalence was provided. This answer was correct only for a prevalence of 25%. We estimated that 9.1% (95% CI 6.0-14.0) of respondents knew how to assess correctly the post-test probability. This proportion did not vary with clinical experience or practice setting. Most physicians do not take into account the prevalence of disease when interpreting a positive test result. This may cause unnecessary testing and diagnostic errors.
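The post-test probability the physicians were asked to estimate follows directly from Bayes' theorem. A short sketch reproducing the scenario's arithmetic (99% sensitivity and specificity at each stated prevalence):

```python
def post_test_probability(prevalence: float, sensitivity: float,
                          specificity: float) -> float:
    """Probability of disease given a positive test, via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The scenario's highly accurate screening test at each manipulated prevalence.
results = {prev: post_test_probability(prev, 0.99, 0.99)
           for prev in (0.01, 0.02, 0.10, 0.25, 0.95)}
```

At 1% prevalence the post-test probability is only 50%, while the answer most respondents chose (95-99.9%) is correct only at 25% prevalence, illustrating the error the study documents.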
Seaton, Sarah E; Manktelow, Bradley N
2012-07-16
Emphasis is increasingly being placed on the monitoring of clinical outcomes for health care providers. Funnel plots have become an increasingly popular graphical methodology used to identify potential outliers. It is assumed that a provider only displaying expected random variation (i.e. 'in-control') will fall outside a control limit with a known probability. In reality, the discrete count nature of these data, and the differing methods, can lead to true probabilities quite different from the nominal value. This paper investigates the true probability of an 'in control' provider falling outside control limits for the Standardised Mortality Ratio (SMR). The true probabilities of an 'in control' provider falling outside control limits for the SMR were calculated and compared for three commonly used limits: Wald confidence interval; 'exact' confidence interval; probability-based prediction interval. The probability of falling above the upper limit, or below the lower limit, often varied greatly from the nominal value. This was particularly apparent when there were a small number of expected events: for expected events ≤ 50 the median probability of an 'in-control' provider falling above the upper 95% limit was 0.0301 (Wald), 0.0121 ('exact'), 0.0201 (prediction). It is important to understand the properties and probability of being identified as an outlier by each of these different methods to aid the correct identification of poorly performing health care providers. The limits obtained using probability-based prediction limits have the most intuitive interpretation and their properties can be defined a priori. Funnel plot control limits for the SMR should not be based on confidence intervals.
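The gap between nominal and true exceedance probabilities can be illustrated directly, since an 'in-control' provider's observed deaths can be modelled as Poisson with mean equal to the expected count. The sketch below uses a simple Wald-style upper limit for SMR = 1; it is an illustration of the discreteness effect, not the paper's exact limit definitions:

```python
import math

def poisson_sf(k: int, mu: float) -> float:
    """P(X > k) for X ~ Poisson(mu), accumulated from the pmf."""
    term = math.exp(-mu)
    cdf = term
    for i in range(1, k + 1):
        term *= mu / i
        cdf += term
    return 1.0 - cdf

def wald_exceedance(expected: float, z: float = 1.96) -> float:
    """True probability that an in-control provider (true SMR = 1, observed
    deaths ~ Poisson(expected)) falls above the Wald-style upper funnel-plot
    limit 1 + z*sqrt(1/expected)."""
    upper_count = expected * (1 + z / math.sqrt(expected))
    return poisson_sf(math.floor(upper_count), expected)

# For small expected counts, the true tail probability can differ
# noticeably from the nominal 2.5%.
prob = wald_exceedance(20)
```

Because the observed count is discrete, the achieved tail probability jumps as the limit crosses integer counts, which is why the paper recommends probability-based prediction limits whose properties can be defined a priori.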
Amplitude-independent flaw length determination using differential eddy current
NASA Astrophysics Data System (ADS)
Shell, E.
2013-01-01
Military engine component manufacturers typically specify the eddy current (EC) inspection requirements as a crack length or depth with the assumption that the cracks in both the test specimens and inspected component are of a similar fixed aspect ratio. However, differential EC response amplitude is dependent on the area of the crack face, not the length or depth. Additionally, due to complex stresses, in-service cracks do not always grow in the assumed manner. It would be advantageous to use more of the information contained in the EC data to better determine the full profile of cracks independent of the fixed aspect ratio amplitude response curve. A specimen with narrow width notches is used to mimic cracks of varying aspect ratios in a controllable manner. The specimen notches have aspect ratios that vary from 1:1 to 10:1. Analysis routines have been developed that use the shape of the EC response signals to determine the length of a surface flaw of common orientations without using the signal amplitude or any supporting traditional probability-of-detection basis. Combined with the relationship between signal amplitude and crack area, the depth of the flaw can also be calculated.
Stress Reduces Conception Probabilities across the Fertile Window: Evidence in Support of Relaxation
Buck Louis, Germaine M.; Lum, Kirsten J.; Sundaram, Rajeshwari; Chen, Zhen; Kim, Sungduk; Lynch, Courtney D.; Schisterman, Enrique F.; Pyper, Cecilia
2010-01-01
Objective: To assess salivary stress biomarkers (cortisol and alpha-amylase) and female fecundity. Design: Prospective cohort design. Setting: United Kingdom. Patients: 274 women aged 18–40 years attempting pregnancy were followed until pregnant or for six menstrual cycles. Women collected basal saliva samples on day 6 of each cycle, and used fertility monitors to identify ovulation and pregnancy test kits for pregnancy detection. Main Outcome Measures: Exposures included salivary cortisol (μg/dL) and alpha-amylase (U/mL) concentrations. Fecundity was measured by time-to-pregnancy and the probability of pregnancy during the fertile window, as estimated from discrete-time survival and Bayesian modeling techniques, respectively. Results: Alpha-amylase but not cortisol concentrations were negatively associated with fecundity in the first cycle (fecundity odds ratio = 0.85; 95% confidence interval 0.67, 1.09) after adjusting for couples’ ages, intercourse frequency, and alcohol consumption. Significant reductions in the probability of conception across the fertile window during the first cycle attempting pregnancy were observed for women whose salivary concentrations of alpha-amylase were in the upper quartiles in comparison to women in the lower quartiles (HPD −0.284; 95% interval −0.540, −0.029). Conclusions: Stress significantly reduced the probability of conception each day during the fertile window, possibly exerting its effect through the sympathetic medullar pathway. PMID:20688324
An operational system of fire danger rating over Mediterranean Europe
NASA Astrophysics Data System (ADS)
Pinto, Miguel M.; DaCamara, Carlos C.; Trigo, Isabel F.; Trigo, Ricardo M.
2017-04-01
A methodology is presented to assess fire danger based on the probability of exceedance of prescribed thresholds of daily released energy. The procedure is developed and tested over Mediterranean Europe, defined by latitude circles of 35 and 45°N and meridians of 10°W and 27.5°E, for the period 2010-2016. The procedure involves estimating the so-called static and daily probabilities of exceedance. For a given point, the static probability is estimated by the ratio of the number of daily fire occurrences releasing energy above a given threshold to the total number of occurrences inside a cell centred at the point. The daily probability of exceedance, which takes into account meteorological factors by means of the Canadian Fire Weather Index (FWI), is in turn estimated based on a Generalized Pareto distribution with static probability and FWI as covariates of the scale parameter. The rationale of the procedure is that small fires, assessed by the static probability, have a weak dependence on weather, whereas the larger fires strongly depend on concurrent meteorological conditions. It is shown that observed frequencies of exceedance over the study area for the period 2010-2016 match the estimated values of probability based on the developed models for static and daily probabilities of exceedance. Some small variability is, however, found between years, suggesting that refinements can be made in future work by using a larger sample to further increase the robustness of the method. The developed methodology has the advantage of evaluating fire danger with the same criteria across the whole study area, making it well suited to harmonizing fire danger forecasts and forest management studies. Research was performed within the framework of the EUMETSAT Satellite Application Facility for Land Surface Analysis (LSA SAF).
Part of the methods developed and results obtained are based on a platform supported by The Navigator Company, which currently provides fire meteorological danger information for Portugal to a wide range of users.
Using known populations of pronghorn to evaluate sampling plans and estimators
Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.
1995-01-01
Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
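The ratio estimator used in the study scales the sample count-to-area ratio by the total area of the region. A minimal sketch with hypothetical numbers (not the study's data):

```python
def ratio_estimate(counts: list[float], areas: list[float],
                   total_area: float) -> float:
    """Ratio estimator of a population total, using sampling-unit area
    as the auxiliary variable: (sum y / sum x) * X_total."""
    return sum(counts) / sum(areas) * total_area

# Hypothetical sample: pronghorn counts and unit areas (km^2) for 4
# sampled units drawn from a region of 100 km^2 total.
est = ratio_estimate([12, 0, 7, 3], [5.0, 4.0, 6.0, 5.0], 100.0)
```

The estimator gains precision over the simple expansion estimator only when counts are roughly proportional to unit area, which, as the abstract notes, may fail for clumped distributions like pronghorn.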
Change-in-ratio methods for estimating population size
Udevitz, Mark S.; Pollock, Kenneth H.; McCullough, Dale R.; Barrett, Reginald H.
2002-01-01
Change-in-ratio (CIR) methods can provide an effective, low cost approach for estimating the size of wildlife populations. They rely on being able to observe changes in proportions of population subclasses that result from the removal of a known number of individuals from the population. These methods were first introduced in the 1940s to estimate the size of populations with 2 subclasses under the assumption of equal subclass encounter probabilities. Over the next 40 years, closed population CIR models were developed to consider additional subclasses and use additional sampling periods. Models with assumptions about how encounter probabilities vary over time, rather than between subclasses, also received some attention. Recently, all of these CIR models have been shown to be special cases of a more general model. Under the general model, information from additional samples can be used to test assumptions about the encounter probabilities and to provide estimates of subclass sizes under relaxations of these assumptions. These developments have greatly extended the applicability of the methods. CIR methods are attractive because they do not require the marking of individuals, and subclass proportions often can be estimated with relatively simple sampling procedures. However, CIR methods require a carefully monitored removal of individuals from the population, and the estimates will be of poor quality unless the removals induce substantial changes in subclass proportions. In this paper, we review the state of the art for closed population estimation with CIR methods. Our emphasis is on the assumptions of CIR methods and on identifying situations where these methods are likely to be effective. We also identify some important areas for future CIR research.
Does the probability of developing ocular trauma-related visual deficiency differ between genders?
Blanco-Hernández, Dulce Milagros Razo; Valencia-Aguirre, Jessica Daniela; Lima-Gómez, Virgilio
2011-01-01
Ocular trauma affects males more often than females, but the impact of this condition regarding visual prognosis is unknown. We undertook this study to compare the probability of developing ocular trauma-related visual deficiency between genders, as estimated by the ocular trauma score (OTS). We designed an observational, retrospective, comparative, cross-sectional and open-label study. Female patients aged ≥6 years with ocular trauma were included and matched by age and ocular wall status with male patients at a 1:2 female-to-male ratio. Initial trauma features and the probability of developing visual deficiency (best corrected visual acuity <20/40) 6 months after the injury, as estimated by the OTS, were compared between genders. The proportion and 95% confidence intervals (95% CI) of visual deficiency 6 months after the injury were estimated. Ocular trauma features and the probability of developing visual deficiency were compared between genders (χ(2) and Fisher's exact test); p value <0.05 was considered significant. Included were 399 eyes (133 from females and 266 from males). Mean age of patients was 25.7 ± 14.6 years. Statistical differences existed in the proportion of zone III injuries in closed globe trauma (p = 0.01) and of types A (p = 0.04) and B (p = 0.02) in open globe trauma. The distribution of the OTS categories was similar for both genders (category 5: p = 0.9); the probability of developing visual deficiency was 32.6% (95% CI = 24.6 to 40.5) in females and 33.2% (95% CI = 27.6 to 38.9) in males (p = 0.9). The probability of developing ocular trauma-related visual deficiency was similar for both genders. The same standard is required.
Dynamic prediction of patient outcomes during ongoing cardiopulmonary resuscitation.
Kim, Joonghee; Kim, Kyuseok; Callaway, Clifton W; Doh, Kibbeum; Choi, Jungho; Park, Jongdae; Jo, You Hwan; Lee, Jae Hyuk
2017-02-01
The probability of the return of spontaneous circulation (ROSC) and subsequent favourable outcomes changes dynamically during advanced cardiac life support (ACLS). We sought to model these changes using time-to-event analysis in out-of-hospital cardiac arrest (OHCA) patients. Adult (≥18 years old), non-traumatic OHCA patients without prehospital ROSC were included. Utstein variables and initial arterial blood gas measurements were used as predictors. The incidence rate of ROSC during the first 30min of ACLS in the emergency department (ED) was modelled using spline-based parametric survival analysis. Conditional probabilities of subsequent outcomes after ROSC (1-week and 1-month survival and 6-month neurologic recovery) were modelled using multivariable logistic regression. The ROSC and conditional probability models were then combined to estimate the likelihood of achieving ROSC and subsequent outcomes by providing k additional minutes of effort. A total of 727 patients were analyzed. The incidence rate of ROSC increased rapidly until the 10th minute of ED ACLS, and it subsequently decreased. The conditional probabilities of subsequent outcomes after ROSC were also dependent on the duration of resuscitation with odds ratios for 1-week and 1-month survival and neurologic recovery of 0.93 (95% CI: 0.90-0.96, p<0.001), 0.93 (0.88-0.97, p=0.001) and 0.93 (0.87-0.99, p=0.031) per 1-min increase, respectively. Calibration testing of the combined models showed good correlation between mean predicted probability and actual prevalence. The probability of ROSC and favourable subsequent outcomes changed according to a multiphasic pattern over the first 30min of ACLS, and modelling of the dynamic changes was feasible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Chlorine-36 in groundwater of the United States: Empirical data
Davis, S.N.; Moysey, S.; Cecil, L.D.; Zreda, M.
2003-01-01
Natural production of the radionuclide chlorine-36 (36Cl) has provided a valuable tracer for groundwater studies. The nuclear industry, especially the testing of thermonuclear weapons, has also produced large amounts of 36Cl that can be detected in many samples of groundwater. In order to be most useful in hydrologic studies, the natural production prior to 1952 should be distinguished from more recent artificial sources. The object of this study was to reconstruct the probable preanthropogenic levels of 36Cl in groundwater in the United States. Although significant local variations exist, they are superimposed on a broad regional pattern of 36Cl/Cl ratios in the United States. Owing to the influence of atmospherically transported ocean salt, natural ratios of 36Cl/total Cl are lowest near the coast and increase to a maximum in the central Rocky Mountains of the United States.
New trends in gender and mathematics performance: a meta-analysis.
Lindberg, Sara M; Hyde, Janet Shibley; Petersen, Jennifer L; Linn, Marcia C
2010-11-01
In this article, we use meta-analysis to analyze gender differences in recent studies of mathematics performance. First, we meta-analyzed data from 242 studies published between 1990 and 2007, representing the testing of 1,286,350 people. Overall, d = 0.05, indicating no gender difference, and variance ratio = 1.08, indicating nearly equal male and female variances. Second, we analyzed data from large data sets based on probability sampling of U.S. adolescents over the past 20 years: the National Longitudinal Surveys of Youth, the National Education Longitudinal Study of 1988, the Longitudinal Study of American Youth, and the National Assessment of Educational Progress. Effect sizes for the gender difference ranged between -0.15 and +0.22. Variance ratios ranged from 0.88 to 1.34. Taken together, these findings support the view that males and females perform similarly in mathematics.
Polanczyk, C A; Kuntz, K M; Sacks, D B; Johnson, P A; Lee, T H
1999-12-21
Background: Evaluation of acute chest pain is highly variable. Objective: To evaluate the cost-effectiveness of strategies using cardiac markers and noninvasive tests for myocardial ischemia. Design: Cost-effectiveness analysis. Data Sources: Prospective data from 1066 patients with chest pain and from the published literature. Target Population: Patients admitted with acute chest pain. Time Horizon: Lifetime. Perspective: Societal. Interventions: Creatine kinase (CK)-MB mass assay alone; CK-MB mass assay followed by cardiac troponin I assay if the CK-MB value is normal; CK-MB mass assay followed by troponin I assay if the CK-MB value is normal and electrocardiography shows ischemic changes; both CK-MB mass and troponin I assays; and troponin I assay alone. These strategies were evaluated alone or in combination with early exercise testing. Outcome Measures: Lifetime cost, life expectancy (in years), and incremental cost-effectiveness. Results: For patients 55 to 64 years of age, measurement of CK-MB mass followed by exercise testing in appropriate patients was the most competitive strategy ($43,000 per year of life saved). Measurement of CK-MB mass followed by troponin I measurement had an incremental cost-effectiveness ratio of $47,400 per year of life saved for patients 65 to 74 years of age; it was also the most cost-effective strategy when early exercise testing could not be performed, CK-MB values were normal, and ischemic changes were seen on electrocardiography. Results were influenced by age, probability of myocardial infarction, and medical costs. Conclusions: Measurement of CK-MB mass plus early exercise testing is a cost-effective initial strategy for younger patients and those with a low to moderate probability of myocardial infarction. Troponin I measurement can be a cost-effective second test in higher-risk subsets of patients if the CK-MB level is normal and early exercise testing is not an option.
Li, Te-Mao; Yu, Yang-Hao; Tsai, Fuu-Jen; Cheng, Chi-Fung; Wu, Yang-Chang; Ho, Tsung-Jung; Liu, Xiang; Tsang, Hsinyi; Lin, Ting-Hsu; Liao, Chiu-Chu; Huang, Shao-Mei; Li, Ju-Pi; Lin, Jung-Chun; Lin, Chih-Chien; Liang, Wen-Miin; Lin, Ying-Ju
2018-03-01
In Taiwan, lung cancer remains one of the deadliest cancers. Survival of lung cancer patients remains low, ranging from 6% to 18%. Studies have shown that Chinese herbal medicine (CHM) can be used to induce cell apoptosis and exhibits anti-inflammatory activities in cancer cells. This study aimed to investigate the frequencies and patterns of CHM treatment for lung cancer patients and the effect of CHM on their survival probability in Taiwan. We identified 6939 lung cancer patients (ICD-9-CM: 162). We allocated 264 CHM users and 528 non-CHM users, matched for age, gender, duration, and regular treatment. The Chi-square test, conditional multivariable logistic regression, Kaplan-Meier method, and the log-rank test were used in this study. The CHM group was characterized by a longer follow-up time and more cases of hyperlipidemia and liver cirrhosis. This group exhibited a lower mortality hazard ratio (0.48, 95% confidence interval [0.39-0.61], p < 0.001), after adjusting for comorbidities. The cumulative survival probability was also higher in CHM users than in non-CHM users (p < 0.0001, log-rank test). Analysis of their CHM prescription patterns revealed that Bu-Zhong-Yi-Qi-Tang (BZYQT), Xiang-Sha-Liu-Jun-Zi-Tang (XSLJZT), and Bai-He-Gu-Jin-Tang (BHGJT), and Bei-Mu (BM), Xing-Ren (XR), and Ge-Gen (GG) were found to be the top three formulas and herbs, respectively. Among them, BM was the core CHM of the major cluster, and Jie-Geng (JG) and Mai-Men-Dong-Tang (MMDT) were important CHMs by CHM network analysis. The use of CHM as an adjunctive therapy may reduce the mortality hazard ratio of lung cancer patients. The investigation of their comprehensive CHM prescription patterns might be useful in future large-scale, randomized clinical investigations of agent effectiveness, safety, and potential interactions with conventional treatments for lung cancer patients. Copyright © 2017 Elsevier B.V. All rights reserved.
Yeh, Jun-Jun; Neoh, Choo-Aun; Chen, Cheng-Ren; Chou, Christine Yi-Ting; Wu, Ming-Ting
2014-01-01
This study evaluated the use of high-resolution computed tomography (HRCT) to predict the presence of culture-positive pulmonary tuberculosis (PTB) in adult patients with pulmonary lesions in the emergency department (ED). The study included a derivation phase and a validation phase with a total of 8,245 patients with pulmonary disease. There were 132 patients with culture-positive PTB in the derivation phase and 147 patients with culture-positive PTB in the validation phase. Imaging evaluation of pulmonary lesions included morphology and segmental distribution. The post-test probability ratios between both phases in three prevalence areas were analyzed. In the derivation phase, a multivariate analysis model identified cavitation, consolidation, and clusters/nodules in the right or left upper lobe (except anterior segment) and consolidation of the superior segment of the right or left lower lobe as independent positive factors for culture-positive PTB, while consolidation of the right or left lower lobe (except superior segment) was an independent negative factor. An ideal cutoff point based on the receiver operating characteristic (ROC) curve analysis was obtained at a score of 1. The sensitivity, specificity, positive predictive value, and negative predictive value in the derivation phase were 98.5% (130/132), 99.7% (3997/4008), 92.2% (130/141), and 99.9% (3997/3999), respectively. Based on the predicted positive likelihood ratio value of 328.33 in the derivation phase, the post-test probability was observed to be 91.5% in the derivation phase, 92.5% in the validation phase, 94.5% in a high TB prevalence area, 91.0% in a moderate prevalence area, and 76.8% in a moderate-to-low prevalence area. Our model using HRCT, which is feasible to perform in the ED, can promptly diagnose culture-positive PTB in moderate and moderate-to-low prevalence areas.
Imaging markers for Alzheimer disease
Bocchetta, Martina; Chételat, Gael; Rabinovici, Gil D.; de Leon, Mony J.; Kaye, Jeffrey; Reiman, Eric M.; Scheltens, Philip; Barkhof, Frederik; Black, Sandra E.; Brooks, David J.; Carrillo, Maria C.; Fox, Nick C.; Herholz, Karl; Nordberg, Agneta; Jack, Clifford R.; Jagust, William J.; Johnson, Keith A.; Rowe, Christopher C.; Sperling, Reisa A.; Thies, William; Wahlund, Lars-Olof; Weiner, Michael W.; Pasqualetti, Patrizio; DeCarli, Charles
2013-01-01
Revised diagnostic criteria for Alzheimer disease (AD) acknowledge a key role of imaging biomarkers for early diagnosis. Diagnostic accuracy depends on which marker (i.e., amyloid imaging, 18F-fluorodeoxyglucose [FDG]-PET, SPECT, MRI) as well as how it is measured (“metric”: visual, manual, semiautomated, or automated segmentation/computation). We evaluated diagnostic accuracy of marker vs metric in separating AD from healthy and prognostic accuracy to predict progression in mild cognitive impairment. The outcome measure was positive (negative) likelihood ratio, LR+ (LR−), defined as the ratio between the probability of positive (negative) test outcome in patients and the probability of positive (negative) test outcome in healthy controls. Diagnostic LR+ of markers was between 4.4 and 9.4 and LR− between 0.25 and 0.08, whereas prognostic LR+ and LR− were between 1.7 and 7.5, and 0.50 and 0.11, respectively. Within metrics, LRs varied up to 100-fold: LR+ from approximately 1 to 100; LR− from approximately 1.00 to 0.01. Markers accounted for 11% and 18% of diagnostic and prognostic variance of LR+ and 16% and 24% of LR−. Across all markers, metrics accounted for an equal or larger amount of variance than markers: 13% and 62% of diagnostic and prognostic variance of LR+, and 29% and 18% of LR−. Within markers, the largest proportion of diagnostic LR+ and LR− variability was within 18F-FDG-PET and MRI metrics, respectively. Diagnostic and prognostic accuracy of imaging AD biomarkers is at least as dependent on how the biomarker is measured as on the biomarker itself. Standard operating procedures are key to biomarker use in the clinical routine and drug trials. PMID:23897875
Balić, Devleta; Rizvanović, Mirzeta; Cizek-Sajko, Mojca; Balić, Adem
2014-07-01
This study aims to estimate age at onset of natural menopause in domicile and refugee women who lived in Tuzla Canton in Bosnia and Herzegovina during the war (1992-1995) and in the postwar period until the interview. A cross-sectional study was conducted on a sample of 331 postmenopausal women (264 [80%] domicile women and 67 [20%] refugee women) between June 2009 and February 2011. The women had a mean age of 57.0 years (range, 39-75 y). The overall mean age at menopause was 49.1 years. The mean age at menopause was higher in domicile women (49.3 y) than in refugee women (48.0 y; unpaired t test, P = 0.023). After adjustment for age at menarche, education, marital status, living place, body mass index, number of abortions, use of contraceptives, and current smoking, only refugee status and parity remained as significant independent predictors of age at menopause (score test, P = 0.025). Refugee women had an increased probability of earlier onset of menopause compared with nonrefugee women (adjusted hazard ratio, 1.33; 95% CI, 1.02-1.75; P = 0.039), whereas there was a decreased probability of experiencing menopause with increasing number of births (adjusted hazard ratio, 0.92; 95% CI, 0.84-0.996; P = 0.04). The age at onset of menopause in refugee women is lower than that in domicile women, indicating that war, independently of other factors, could influence the age when menopause occurs. On average, women who lived in Bosnia and Herzegovina during the war and postwar period entered menopause earlier than did women from Europe.
Brooks, Billy; McBee, Matthew; Pack, Robert; Alamian, Arsham
2017-05-01
Rates of accidental overdose mortality from substance use disorder (SUD) have risen dramatically in the United States since 1990. Between 1999 and 2004 alone, rates increased 62% nationwide, with rural overdose mortality increasing at a rate 3 times that seen in urban populations. Cultural differences between rural and urban populations (e.g., educational attainment, unemployment rates, social characteristics) affect the nature of SUD, leading to disparate risk of overdose across these communities. Multiple-groups latent class analysis with covariates was applied to data from the 2011 and 2012 National Survey on Drug Use and Health (n = 12,140) to examine potential differences in latent classifications of SUD between rural and urban adult (aged 18 years and older) populations. Nine drug categories were used to identify latent classes of SUD defined by probability of diagnosis within these categories. Once the class structures were established for the rural and urban samples, posterior membership probabilities were entered into a multinomial regression analysis of socio-demographic predictors' association with the likelihood of SUD latent class membership. Latent class structures differed across the sub-groups, with the rural sample fitting a 3-class structure (bootstrap likelihood ratio test p = 0.03) and the urban sample fitting a 6-class model (bootstrap likelihood ratio test p < 0.0001). Overall, the rural class structure exhibited less diversity and lower prevalence of SUD in multiple drug categories (e.g., cocaine, hallucinogens, and stimulants). This result supports the hypothesis that different underlying elements exist in the two populations that affect SUD patterns, and thus can inform the development of surveillance instruments, clinical services, and prevention programming tailored to specific communities. Copyright © 2017 Elsevier Ltd. All rights reserved.
Matheny, Michael E; Normand, Sharon-Lise T; Gross, Thomas P; Marinac-Dabic, Danica; Loyo-Berrios, Nilsa; Vidi, Venkatesan D; Donnelly, Sharon; Resnic, Frederic S
2011-12-14
Automated adverse outcome surveillance tools and methods have potential utility in quality improvement and medical product surveillance activities. Their use for assessing hospital performance on the basis of patient outcomes has received little attention. We compared risk-adjusted sequential probability ratio testing (RA-SPRT) implemented in an automated tool to Massachusetts public reports of 30-day mortality after isolated coronary artery bypass graft surgery. A total of 23,020 isolated adult coronary artery bypass surgery admissions performed in Massachusetts hospitals between January 1, 2002 and September 30, 2007 were retrospectively re-evaluated. The RA-SPRT method was implemented within an automated surveillance tool to identify hospital outliers in yearly increments. We used an overall type I error rate of 0.05, an overall type II error rate of 0.10, and a threshold that signaled if the odds of dying 30 days after surgery were at least twice those expected. Annual hospital outlier status, based on the state-reported classification, was considered the gold standard. An event was defined as at least one occurrence of a higher-than-expected hospital mortality rate during a given year. We examined a total of 83 hospital-year observations. The RA-SPRT method alerted 6 events among three hospitals for 30-day mortality compared with 5 events among two hospitals using the state public reports, yielding a sensitivity of 100% (5/5) and specificity of 98.8% (79/80). The automated RA-SPRT method performed well, detecting all of the true institutional outliers with a small false-positive alerting rate. Such a system could provide confidential automated notification to local institutions in advance of public reporting, providing opportunities for earlier quality improvement interventions.
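A minimal sketch of the kind of risk-adjusted SPRT described above, not the authors' implementation: each patient's risk-adjusted expected mortality probability (assumed to come from some external risk model) feeds a per-patient log-likelihood-ratio update comparing H1 (odds of death doubled) against H0, with Wald boundaries set from the stated error rates (α = 0.05, β = 0.10 give ln((1−β)/α) ≈ 2.89 and ln(β/(1−α)) ≈ −2.25).

```python
from math import log

def ra_sprt(outcomes, expected_probs, odds_ratio=2.0, alpha=0.05, beta=0.10):
    """Risk-adjusted SPRT sketch. `outcomes` are 0/1 death indicators;
    `expected_probs` are risk-adjusted death probabilities under H0.
    H1 inflates each patient's odds of death by `odds_ratio`."""
    upper = log((1 - beta) / alpha)   # crossing up -> signal an outlier
    lower = log(beta / (1 - alpha))   # crossing down -> accept H0
    llr = 0.0
    for died, p0 in zip(outcomes, expected_probs):
        odds1 = odds_ratio * p0 / (1 - p0)
        p1 = odds1 / (1 + odds1)      # death probability under H1
        llr += log(p1 / p0) if died else log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "alert"
        if llr <= lower:
            return "in control"
    return "continue monitoring"
```

For example, with 2% expected mortality, a run of deaths pushes the statistic over the upper boundary after only a handful of cases, while a long run of survivors eventually crosses the lower boundary.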
The striking similarities between standard, distractor-free, and target-free recognition
Dobbins, Ian G.
2012-01-01
It is often assumed that observers seek to maximize correct responding during recognition testing by actively adjusting a decision criterion. However, early research by Wallace (Journal of Experimental Psychology: Human Learning and Memory 4:441–452, 1978) suggested that recognition rates for studied items remained similar, regardless of whether or not the tests contained distractor items. We extended these findings across three experiments, addressing whether detection rates or observer confidence changed when participants were presented standard tests (targets and distractors) versus “pure-list” tests (lists composed entirely of targets or distractors). Even when observers were made aware of the composition of the pure-list test, the endorsement rates and confidence patterns remained largely similar to those observed during standard testing, suggesting that observers are typically not striving to maximize the likelihood of success across the test. We discuss the implications for decision models that assume a likelihood ratio versus a strength decision axis, as well as the implications for prior findings demonstrating large criterion shifts using target probability manipulations. PMID:21476108
Pfleger, C C H; Flachs, E M; Koch-Henriksen, Nils
2010-07-01
There is a need for follow-up studies of the familial situation of multiple sclerosis (MS) patients. We evaluated the probability of MS patients remaining in a marriage or relationship with the same partner after onset of MS, in comparison with the general population. All 2538 Danes with onset of MS in 1980-1989, retrieved from the Danish MS-Registry, and 50,760 matched, randomly drawn control persons were included. Information on family status was retrieved from Statistics Denmark. Cox analyses were used with onset as the starting point. Five years after onset, the cumulative probability of remaining in the same relationship was 86% in patients vs. 89% in controls. The probabilities continued to diverge, and at 24 years the probability was 33% in patients vs. 53% in the control persons (p < 0.001). Among patients with young onset (< 36 years of age), those with no children had a higher risk of divorce than those with children younger than 7 years (hazard ratio 1.51; p < 0.0001), and men had a higher risk of divorce than women (hazard ratio 1.33; p < 0.01). MS significantly affects the probability of remaining in the same relationship compared with the background population.
A historical analysis of Plinian unrest and the key promoters of explosive activity.
NASA Astrophysics Data System (ADS)
Winson, A. E. G.; Newhall, C. G.; Costa, F.
2015-12-01
Plinian eruptions are the largest historically recorded volcanic phenomena, and have the potential to be widely destructive. Yet when a volcano becomes newly restless we are unable to anticipate whether or not a large eruption is imminent. We present the findings from a multi-parametric study of 42 large explosive eruptions (29 Plinian and 13 Sub-plinian) that form the basis for a new Bayesian belief network that addresses this question. We combine the eruptive history of the volcanoes that have produced these large eruptions with petrological studies and reported unrest phenomena to assess the probability of an eruption being plinian. We find that the 'plinian probability' is increased most strongly by the presence of an exsolved volatile phase in the reservoir prior to an eruption. In our survey, 60% of the plinian eruptions had an excess SO2 gas phase more than double that calculated by petrologic studies alone. Probability is also increased by three related and more easily observable parameters: a high Plinian Ratio (the ratio of VEI≥4 eruptions in a volcano's history to the number of all VEI≥2 eruptions in that history), a repose time of more than 1000 years, and a Repose Ratio (the ratio of the average return of VEI≥4 eruptions in the volcanic record to the repose time since the last VEI≥4) of greater than 0.7. We looked for unrest signals that are potentially indicative of future plinian activity and report a few observations from case studies, but cannot say whether these will generally appear. Finally, we present a retrospective analysis of the probabilities of eruptions in our study becoming plinian, using our Bayesian belief network. We find that these probabilities are up to about 4 times greater than those calculated from an a priori assessment of the global eruptive catalogue.
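The two catalogue-derived ratios defined above can be computed directly; the VEI catalogue and eruption years below are invented purely for illustration.

```python
def plinian_ratio(veis):
    """Ratio of VEI>=4 eruptions to all VEI>=2 eruptions in a volcano's record."""
    return sum(1 for v in veis if v >= 4) / sum(1 for v in veis if v >= 2)

def repose_ratio(large_eruption_years, current_year):
    """Average return interval of VEI>=4 eruptions divided by the repose
    time since the most recent VEI>=4 eruption."""
    gaps = [b - a for a, b in zip(large_eruption_years, large_eruption_years[1:])]
    mean_return = sum(gaps) / len(gaps)
    return mean_return / (current_year - large_eruption_years[-1])

veis = [2, 3, 4, 2, 4, 5, 2]      # hypothetical VEI catalogue
years = [1300, 1650, 1900]        # hypothetical VEI>=4 eruption years
print(plinian_ratio(veis), repose_ratio(years, 2015))
```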
Prospect evaluation as a function of numeracy and probability denominator.
Millroth, Philip; Juslin, Peter
2015-05-01
This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants' psychophysical response to probability and value. Standard methods in decision research may thus confound people's genuine risk attitude with their numerical capacities and the probability format used. Copyright © 2015 Elsevier B.V. All rights reserved.
Laser Ignition Microthruster Experiments on KKS-1
NASA Astrophysics Data System (ADS)
Nakano, Masakatsu; Koizumi, Hiroyuki; Watanabe, Masashi; Arakawa, Yoshihiro
A laser ignition microthruster has been developed for microsatellites. Thruster performances such as impulse and ignition probability were measured, using boron potassium nitrate (B/KNO3) solid propellant ignited by a 1 W CW laser diode. The measured impulses were 60 mNs ± 15 mNs with almost 100 % ignition probability. The effect of the mixture ratios of B/KNO3 on thruster performance was also investigated, and it was shown that mixture ratios between B/KNO3/binder = 28/70/2 and 38/60/2 exhibited both high ignition probability and high impulse. Laser ignition thrusters designed and fabricated based on these data became the first non-conventional microthrusters on the Kouku Kousen Satellite No. 1 (KKS-1) microsatellite that was launched by a H2A rocket as one of six piggyback satellites in January 2009.
Students' Understanding of Conditional Probability on Entering University
ERIC Educational Resources Information Center
Reaburn, Robyn
2013-01-01
An understanding of conditional probability is essential for students of inferential statistics as it is used in Null Hypothesis Tests. Conditional probability is also used in Bayes' theorem, in the interpretation of medical screening tests and in quality control procedures. This study examines the understanding of conditional probability of…
Biggs, Holly M.; Galloway, Renee L.; Bui, Duy M.; Morrissey, Annie B.; Maro, Venance P.
2013-01-01
Background Leptospirosis and human immunodeficiency virus (HIV) infection are prevalent in many areas, including northern Tanzania, yet little is known about their interaction. Methods We enrolled febrile inpatients at two hospitals in Moshi, Tanzania, over 1 year and performed HIV antibody testing and the microscopic agglutination test (MAT) for leptospirosis. Confirmed leptospirosis was defined as a ≥four-fold rise in MAT titer between acute and convalescent serum samples, and probable leptospirosis was defined as any reciprocal MAT titer ≥800. Results Confirmed or probable leptospirosis was found in 70 (8.4%) of 831 participants with at least one serum sample tested. A total of 823 (99.0%) of 831 participants had HIV testing performed, and 203 (24.7%) were HIV infected. Among HIV-infected participants, 9 (4.4%) of 203 had confirmed or probable leptospirosis, whereas among HIV-uninfected participants 61 (9.8%) of 620 had leptospirosis. Leptospirosis was less prevalent among HIV-infected as compared to HIV-uninfected participants [odds ratio (OR) 0.43, p=0.019]. Among those with leptospirosis, HIV-infected patients more commonly presented with features of severe sepsis syndrome than HIV-uninfected patients, but differences were not statistically significant. Among HIV-infected patients, severe immunosuppression was not significantly different between those with and without leptospirosis (p=0.476). Among HIV-infected adolescents and adults, median CD4 percent and median CD4 count were higher among those with leptospirosis as compared to those with other etiologies of febrile illness, but differences in CD4 count did not reach statistical significance (p=0.015 and p=0.089, respectively). Conclusions Among febrile inpatients in northern Tanzania, leptospirosis was not more prevalent among HIV-infected patients.
Although some indicators of leptospirosis severity were more common among HIV-infected patients, a statistically significant difference was not demonstrated. Among HIV-infected patients, those with leptospirosis were not more immunosuppressed relative to those with other etiologies of febrile illness. PMID:23663165
NASA Astrophysics Data System (ADS)
Zoriy, Miroslav V.; Ostapczuk, Peter; Halicz, Ludwik; Hille, Ralf; Becker, J. Sabine
2005-04-01
A sensitive analytical method for determining the artificial radionuclides 90Sr, 239Pu and 240Pu at the ultratrace level in groundwater samples from the Semipalatinsk Test Site area in Kazakhstan by double-focusing sector field inductively coupled plasma mass spectrometry (ICP-SFMS) was developed. In order to avoid possible isobaric interferences at m/z 90 for 90Sr determination (e.g. 90Zr+, 40Ar50Cr+, 36Ar54Fe+, 58Ni16O2+, 180Hf2+, etc.), the measurements were performed at medium mass resolution under cold plasma conditions. Pu was separated from uranium by means of extraction chromatography using Eichrom TEVA resin with a recovery of 83%. The limits of detection for 90Sr, 239Pu and 240Pu in water samples were determined as 11, 0.12 and 0.1 fg ml-1, respectively. Concentrations of 90Sr and 239Pu in contaminated groundwater samples ranged from 18 to 32 and from 28 to 856 fg ml-1, respectively. The 240Pu/239Pu isotopic ratio in groundwater samples was measured as 0.17. This isotope ratio indicates that the most probable source of contamination of the investigated groundwater samples was the nuclear weapons tests at the Semipalatinsk Test Site conducted by the USSR in the 1960s.
Tests for senescent decline in annual survival probabilities of common pochards, Aythya ferina
Nichols, J.D.; Hines, J.E.; Blums, P.
1997-01-01
Senescent decline in survival probabilities of animals is a topic about which much has been written but little is known. Here, we present formal tests of senescence hypotheses, using 1373 recaptures from 8877 duckling (age 0) and 504 yearling Common Pochards (Aythya ferina) banded at a Latvian study site, 1975-1992. The tests are based on capture-recapture models that explicitly incorporate sampling probabilities that, themselves, may exhibit time- and age-specific variation. The tests provided no evidence of senescent decline in survival probabilities for this species. Power of the most useful test was low for gradual declines in annual survival probability with age, but good for steeper declines. We recommend use of this type of capture-recapture modeling and analysis for other investigations of senescence in animal survival rates.
ERIC Educational Resources Information Center
van der Linden, Wim J.
2011-01-01
It is shown how the time limit on a test can be set to control the probability of a test taker running out of time before completing it. The probability is derived from the item parameters in the lognormal model for response times. Examples of curves representing the probability of running out of time on a test with given parameters as a function…
Estimation of post-test probabilities by residents: Bayesian reasoning versus heuristics?
Hall, Stacey; Phang, Sen Han; Schaefer, Jeffrey P; Ghali, William; Wright, Bruce; McLaughlin, Kevin
2014-08-01
Although the process of diagnosing invariably begins with a heuristic, we encourage our learners to support their diagnoses by analytical cognitive processes, such as Bayesian reasoning, in an attempt to mitigate the effects of heuristics on diagnosing. There are, however, limited data on the use and impact of Bayesian reasoning on the accuracy of disease probability estimates. In this study our objective was to explore whether Internal Medicine residents use a Bayesian process to estimate disease probabilities by comparing their disease probability estimates to literature-derived Bayesian post-test probabilities. We gave 35 Internal Medicine residents four clinical vignettes in the form of a referral letter and asked them to estimate the post-test probability of the target condition in each case. We then compared these to literature-derived probabilities. For each vignette the estimated probability was significantly different from the literature-derived probability. For the two cases with low literature-derived probability our participants significantly overestimated the probability of these target conditions being the correct diagnosis, whereas for the two cases with high literature-derived probability the estimated probability was significantly lower than the calculated value. Our results suggest that residents generate inaccurate post-test probability estimates. Possible explanations for this include ineffective application of Bayesian reasoning, attribute substitution whereby a complex cognitive task is replaced by an easier one (e.g., a heuristic), or systematic rater bias, such as central tendency bias. Further studies are needed to identify the reasons for inaccuracy of disease probability estimates and to explore ways of improving accuracy.
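The Bayesian calculation the residents' estimates were compared against can be sketched as follows, assuming the literature supplies a pre-test probability and a likelihood ratio for the test result (the function name is ours).

```python
def post_test_probability(pretest, lr):
    """Bayesian update: convert the pre-test probability to odds, multiply
    by the likelihood ratio of the observed test result, convert back."""
    odds = pretest / (1 - pretest)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

# e.g. pre-test probability 10%, positive result with LR+ = 9 -> 50%
print(post_test_probability(0.10, 9))
```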
Taylor, Darlene; Durigon, Monica; Davis, Heather; Archibald, Chris; Konrad, Bernhard; Coombs, Daniel; Gilbert, Mark; Cook, Darrel; Krajden, Mel; Wong, Tom; Ogilvie, Gina
2015-03-01
Failure to understand the risk of false-negative HIV test results during the window period results in anxiety. Patients typically want accurate test results as soon as possible, while clinicians prefer to wait until the probability of a false-negative is virtually nil. This review summarizes the median window periods for third-generation antibody and fourth-generation HIV tests and provides the probability of a false-negative result for various days post-exposure. Data were extracted from published seroconversion panels. A 10-day eclipse period was used to estimate days from infection to first detection of HIV RNA. Median (interquartile range) days to seroconversion were calculated, and probabilities of a false-negative result at various time periods post-exposure are reported. The median (interquartile range) window period for third-generation tests was 22 days (19-25) and 18 days (16-24) for fourth-generation tests. The probability of a false-negative result is 0.01 at 80 days' post-exposure for third-generation tests and at 42 days for fourth-generation tests. The table of probabilities of false-negative HIV test results may be useful during pre- and post-test HIV counselling to inform co-decision making regarding the ideal time to test for HIV. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Significance of stress transfer in time-dependent earthquake probability calculations
Parsons, T.
2005-01-01
A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? Data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant earthquake probability change? To answer that question, effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point-process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress change to stressing rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus revision to earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
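As one illustration of a point-process renewal forecast of the kind described above, the conditional probability of rupture in a coming time window can be computed from an assumed lognormal recurrence distribution parameterized by mean recurrence and aperiodicity (coefficient of variation). This is a sketch under that assumption; the paper's calculations may use a different renewal density and parameterization.

```python
from math import erf, log, sqrt

def lognormal_cdf(t, mu, sigma):
    # CDF of a lognormal distribution with log-space mean mu and std sigma
    return 0.5 * (1.0 + erf((log(t) - mu) / (sigma * sqrt(2.0))))

def conditional_probability(elapsed, window, mean_recurrence, aperiodicity):
    """P(event before elapsed + window | no event by elapsed) under a
    lognormal renewal model; aperiodicity is the coefficient of variation."""
    sigma = sqrt(log(1.0 + aperiodicity ** 2))
    mu = log(mean_recurrence) - 0.5 * sigma ** 2   # so the mean equals mean_recurrence
    window_mass = (lognormal_cdf(elapsed + window, mu, sigma)
                   - lognormal_cdf(elapsed, mu, sigma))
    survived = 1.0 - lognormal_cdf(elapsed, mu, sigma)
    return window_mass / survived
```

With a 200-year mean recurrence and aperiodicity 0.5, the 30-year conditional probability grows as elapsed time since the last event increases, which is the time-dependence a stress perturbation would then modify.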
Ermertcan, Aylin Türel; Oztürk, Ferdi; Gençoğlan, Gülsüm; Eskiizmir, Görkem; Temiz, Peyker; Horasan, Gönül Dinç
2011-03-01
The precision of clinical diagnosis of skin tumors is not commonly measured and, therefore, very little is known about the diagnostic ability of clinicians. This study aimed to compare clinical and histopathologic diagnoses of nonmelanoma skin cancers with regard to sensitivity, predictive values, pretest-posttest probabilities, and likelihood ratios. Two hundred nineteen patients with 241 nonmelanoma skin cancers were enrolled in this study. Of these patients, 49.4% were female and 50.6% were male. The mean age ± standard deviation (SD) was 63.66 ± 16.44 years for the female patients and 64.77 ± 14.88 years for the male patients. The mean duration of the lesions was 20.90 ± 32.95 months. One hundred forty-eight (61.5%) of the lesions were diagnosed as basal cell carcinoma (BCC) and 93 (38.5%) were diagnosed as squamous cell carcinoma (SCC) histopathologically. Sensitivity, positive predictive value, and posttest probability were calculated as 75.96%, 87.77%, and 87.78% for BCC and 70.37%, 37.25%, and 37.20% for SCC, respectively. The correlation between clinical and histopathologic diagnoses was found to be higher in BCC. Knowledge of sensitivity, predictive values, likelihood ratios, and posttest probabilities may have implications for the management of skin cancers. To prevent unnecessary surgeries and achieve high diagnostic accuracies, multidisciplinary approaches are recommended.
VizieR Online Data Catalog: Bayesian method for detecting stellar flares (Pitkin+, 2014)
NASA Astrophysics Data System (ADS)
Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.
2015-05-01
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N. (1 data file).
A Bayesian method for detecting stellar flares
NASA Astrophysics Data System (ADS)
Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.
2014-12-01
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of `quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
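The flare profile described in the abstract (half-Gaussian rise, exponential decay) can be sketched directly; the parameter names and values below are ours, chosen for illustration, not taken from the paper.

```python
from math import exp

def flare_model(t, t_peak, amplitude, sigma_rise, tau_decay):
    """Half-Gaussian rise up to t_peak, then exponential decay after it.
    The two pieces meet at `amplitude` at t = t_peak, so the profile is
    continuous at the peak."""
    if t <= t_peak:
        return amplitude * exp(-0.5 * ((t - t_peak) / sigma_rise) ** 2)
    return amplitude * exp(-(t - t_peak) / tau_decay)

# A toy light curve sampled at unit cadence (all parameters illustrative)
lightcurve = [flare_model(t, t_peak=10.0, amplitude=1.0,
                          sigma_rise=1.5, tau_decay=5.0) for t in range(30)]
```

In the paper's framework this signal model, plus a polynomial background, would be compared against a background-only model via a Bayesian odds ratio; the snippet above only reproduces the flare shape itself.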
Brucellosis among Hospitalized Febrile Patients in Northern Tanzania
Bouley, Andrew J.; Biggs, Holly M.; Stoddard, Robyn A.; Morrissey, Anne B.; Bartlett, John A.; Afwamba, Isaac A.; Maro, Venance P.; Kinabo, Grace D.; Saganda, Wilbrod; Cleaveland, Sarah; Crump, John A.
2012-01-01
Acute and convalescent serum samples were collected from febrile inpatients identified at two hospitals in Moshi, Tanzania. Confirmed brucellosis was defined as a positive blood culture or a ≥ 4-fold increase in microagglutination test titer, and probable brucellosis was defined as a single reciprocal titer ≥ 160. Among 870 participants enrolled in the study, 455 (52.3%) had paired sera available. Of these, 16 (3.5%) met criteria for confirmed brucellosis. Of 830 participants with ≥ 1 serum sample, 4 (0.5%) met criteria for probable brucellosis. Brucellosis was associated with increased median age (P = 0.024), leukopenia (odds ratio [OR] 7.8, P = 0.005), thrombocytopenia (OR 3.9, P = 0.018), and evidence of other zoonoses (OR 3.2, P = 0.026). Brucellosis was never diagnosed clinically, and although all participants with brucellosis received antibacterials or antimalarials in the hospital, no participant received standard brucellosis treatment. Brucellosis is an underdiagnosed and untreated cause of febrile disease among hospitalized adult and pediatric patients in northern Tanzania. PMID:23091197
Leptospirosis among Hospitalized Febrile Patients in Northern Tanzania
Biggs, Holly M.; Bui, Duy M.; Galloway, Renee L.; Stoddard, Robyn A.; Shadomy, Sean V.; Morrissey, Anne B.; Bartlett, John A.; Onyango, Jecinta J.; Maro, Venance P.; Kinabo, Grace D.; Saganda, Wilbrod; Crump, John A.
2011-01-01
We enrolled consecutive febrile admissions to two hospitals in Moshi, Tanzania. Confirmed leptospirosis was defined as a ≥ 4-fold increase in microscopic agglutination test (MAT) titer; probable leptospirosis as reciprocal MAT titer ≥ 800; and exposure to pathogenic leptospires as titer ≥ 100. Among 870 patients enrolled in the study, 453 (52.1%) had paired sera available, and 40 (8.8%) of these met the definition for confirmed leptospirosis. Of 832 patients with ≥ 1 serum sample available, 30 (3.6%) had probable leptospirosis and an additional 277 (33.3%) had evidence of exposure to pathogenic leptospires. Among those with leptospirosis the most common clinical diagnoses were malaria in 31 (44.3%) and pneumonia in 18 (25.7%). Leptospirosis was associated with living in a rural area (odds ratio [OR] 3.4, P < 0.001). Among those with confirmed leptospirosis, the predominant reactive serogroups were Mini and Australis. Leptospirosis is a major yet underdiagnosed cause of febrile illness in northern Tanzania, where it appears to be endemic. PMID:21813847
Measurement of satellite PCS fading using GPS
NASA Technical Reports Server (NTRS)
Vogel, Wolfhard J.; Torrence, Geoffrey W.
1995-01-01
A six-channel commercial GPS receiver with a custom-made 40 deg tilted, rotating antenna has been assembled to make fade measurements for personal satellite communications. The system can measure, up to two times per minute, fades of up to 15 dB in the direction of each tracked satellite from 10 to 90 deg elevation. Photographic fisheye lens images were used to categorize the fade data obtained in several test locations according to fade states of clear, shadowed, or blocked. Multipath effects in the form of annular rings can be observed when most of the sky is clear. Tree fading by a pecan exceeded 3.5 dB and 12 dB at 50 and 10 percent probability, respectively, compared with median fades of 7.5 dB measured earlier; the discrepancy is attributed to the change in ratio when measuring over an area as opposed to along a line. Data acquired inside buildings revealed 'rf-leaky' ceilings. Satellite diversity gain in a shadowed environment exceeded 6 dB at the 10 percent probability level.
Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L
1997-04-25
A statistical approach to analysis of amplitude fluctuations of postsynaptic responses is described. This includes (1) using an L1-metric in the space of distribution functions for minimisation with application of linear programming methods to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; (2) deconvolution of the resulting discrete distribution with determination of the release probabilities and the quantal amplitude for cases with a small number (< 5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios which mimicked those observed in physiological experiments. In computer simulation experiments, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions to overcome 'over-fitting phenomena' and to constrain the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.
Measurement of satellite PCS fading using GPS
NASA Astrophysics Data System (ADS)
Vogel, Wolfhard J.; Torrence, Geoffrey W.
1995-08-01
A six-channel commercial GPS receiver with a custom-made 40 deg tilted, rotating antenna has been assembled to make fade measurements for personal satellite communications. The system can measure, up to two times per minute, fades of up to 15 dB in the direction of each tracked satellite from 10 to 90 deg elevation. Photographic fisheye lens images were used to categorize the fade data obtained in several test locations according to fade states of clear, shadowed, or blocked. Multipath effects in the form of annular rings can be observed when most of the sky is clear. Tree fading by a pecan exceeded 3.5 dB and 12 dB at 50 and 10 percent probability, respectively, compared with median fades of 7.5 dB measured earlier; the discrepancy is attributed to the change in ratio when measuring over an area as opposed to along a line. Data acquired inside buildings revealed 'rf-leaky' ceilings. Satellite diversity gain in a shadowed environment exceeded 6 dB at the 10 percent probability level.
NASA Astrophysics Data System (ADS)
Evans, Denis J.; Searles, Debra J.; Williams, Stephen R.
2010-01-01
We study the statistical mechanics of thermal conduction in a classical many-body system that is in contact with two thermal reservoirs maintained at different temperatures. The ratio of the probabilities that, when observed for a finite time, the time-averaged heat flux flows with and against the direction required by Fourier's law is derived from first principles. This result is obtained using the transient fluctuation theorem. We show that the argument of that theorem, namely the dissipation function, is, close to equilibrium, equal to a microscopic expression for the entropy production. We also prove that if transient time correlation functions of smooth zero-mean variables decay to zero at long times, the system will relax to a unique nonequilibrium steady state, and for this state the thermal conductivity must be positive. Our expressions are tested using nonequilibrium molecular dynamics simulations of heat flow between thermostated walls.
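Written out, the probability ratio from the transient fluctuation theorem takes a closed form; the following is a generic statement of the standard Evans-Searles result, not necessarily the paper's exact notation:

```latex
% Transient (Evans-Searles) fluctuation theorem, generic form.
% \bar{\Omega}_t is the time average of the dissipation function over (0, t).
\[
  \frac{P\left(\bar{\Omega}_t = A\right)}{P\left(\bar{\Omega}_t = -A\right)} = e^{A t}
\]
% Close to equilibrium the dissipation function reduces to the entropy
% production, so a sustained heat flux against the direction required by
% Fourier's law becomes exponentially improbable as the observation time
% t (or the system size) grows.
```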
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
Faria Alves, Miguel; Ferreira, António Miguel; Cardoso, Gonçalo; Saraiva Lopes, Ricardo; Correia, Maria da Graça; Machado Gil, Victor
2013-03-01
The purpose of this study was to assess the change in theoretical probability of coronary artery disease (CAD) in patients with suspected CAD undergoing coronary CT angiography (CCTA) as a first-line test vs. patients who underwent CCTA after an exercise ECG. Pre- and post-test probabilities of CAD were assessed in 158 patients with suspected CAD undergoing dual-source CCTA as the first-line test (Group A) and in 134 in whom CCTA was performed after an exercise ECG (Group B). Pre-test probabilities were calculated based on age, gender and type of chest pain. Post-test probabilities were calculated according to Bayes' theorem. There were no significant differences between the groups regarding pre-test probability (median 23.5% [13.3-37.8] in group A vs. 20.5% [13.4-34.5] in group B; p=0.479). In group A, the percentage of patients with intermediate likelihood of disease (10-90%) was 90% before testing and 15% after CCTA (p<0.001), while in group B, it was 95% before testing, 87% after exercise ECG (p=NS), and 17% after CCTA (p<0.001). Unlike exercise testing, CCTA is able to reclassify the risk in the majority of patients with an intermediate probability of obstructive CAD. The use of CCTA as a first-line diagnostic test for CAD may be beneficial in this setting. Copyright © 2012 Sociedade Portuguesa de Cardiologia. Published by Elsevier España. All rights reserved.
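The Bayes' theorem update used in the study maps a pre-test probability and a test's likelihood ratio to a post-test probability through the odds form. A minimal sketch, in which the likelihood-ratio value is illustrative rather than taken from the paper:

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Bayes' theorem in odds form: post-test odds = pre-test odds * LR."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# A patient at the group A median pre-test probability (23.5%), updated with a
# hypothetical positive-result likelihood ratio of 5:
p = post_test_probability(0.235, 5.0)
```

A negative result with LR < 1 pushes the probability down in the same way, which is how a test can reclassify patients out of the intermediate (10-90%) band in either direction.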
Tips and Tricks for Successful Application of Statistical Methods to Biological Data.
Schlenker, Evelyn
2016-01-01
This chapter discusses experimental design and the use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, and normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing use of parametric tests. Alternatively, with skewed data nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premise and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
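The distinction drawn above between relative risk (cohort designs) and odds ratio (case-control designs) comes down to two different calculations on the same 2 × 2 table; a sketch with made-up counts:

```python
# Hypothetical 2x2 table: rows = exposed/unexposed, columns = event/no event.
a, b = 30, 70   # exposed: events, non-events
c, d = 10, 90   # unexposed: events, non-events

relative_risk = (a / (a + b)) / (c / (c + d))   # ratio of risks: 0.30 / 0.10 = 3.0
odds_ratio = (a * d) / (b * c)                  # cross-product: (30*90)/(70*10) ~ 3.86
```

For rare events the two measures converge, but with common outcomes, as in this table, the odds ratio overstates the relative risk.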
Roux, C Z
2009-05-01
Short phylogenetic distances between taxa occur, for example, in studies on ribosomal RNA-genes with slow substitution rates. For consistently short distances, it is proved that in the completely singular limit of the covariance matrix ordinary least squares (OLS) estimates are minimum variance or best linear unbiased (BLU) estimates of phylogenetic tree branch lengths. Although OLS estimates are in this situation equal to generalized least squares (GLS) estimates, the GLS chi-square likelihood ratio test will be inapplicable as it is associated with zero degrees of freedom. Consequently, an OLS normal distribution test or an analogous bootstrap approach will provide optimal branch length tests of significance for consistently short phylogenetic distances. As the asymptotic covariances between branch lengths will be equal to zero, it follows that the product rule can be used in tree evaluation to calculate an approximate simultaneous confidence probability that all interior branches are positive.
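The product rule mentioned above follows directly from the asymptotically zero covariance between branch-length estimates: with effectively independent tests, the simultaneous probability that all interior branches are positive is the product of the individual probabilities. A minimal sketch with illustrative per-branch confidence values:

```python
from math import prod

# Hypothetical per-branch probabilities that each interior branch length is positive.
branch_confidences = [0.99, 0.98, 0.97, 0.99]

# Approximate simultaneous confidence that ALL interior branches are positive,
# justified because the branch-length estimates are asymptotically uncorrelated.
simultaneous = prod(branch_confidences)  # ~ 0.932
```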
Abdominal pain and nausea in the diagnosis of streptococcal pharyngitis in boys
Igarashi, Hiroshi; Nago, Naoki; Kiyokawa, Hiromichi; Fukushi, Motoharu
2017-01-01
Objectives This study was designed to assess the accuracy of gastrointestinal symptoms, including abdominal pain, nausea, and vomiting, in the diagnosis of Group A streptococcal (GAS) pharyngitis in children and to determine differences in diagnostic accuracy in boys versus girls. Methods This retrospective cross-sectional study included 5,755 consecutive patients aged <15 years with fever in the electronic database at a primary care practice. Gastrointestinal symptoms were recorded in the database according to the International Classification of Primary Care codes, and the data were extracted electronically. The reference standard was GAS pharyngitis diagnosed with a rapid test. Patients with a clinical diagnosis of probable GAS pharyngitis were excluded from the primary analysis. Results Among the 5,755 children with fever, 331 (5.8%) were coded as having GAS pharyngitis, including 218 (65.9%) diagnosed with rapid tests and 113 (34.1%) clinically diagnosed with probable GAS pharyngitis. Among patients with fever and abdominal pain, rapid-test-confirmed GAS pharyngitis was significantly more common in boys (11/120, 9.2%) than in girls (3/128, 2.3%; p=0.026). The positive likelihood ratio of abdominal pain was 1.49 (95% CI =0.88–2.51): 2.41 (95% CI =1.33–4.36) in boys and 0.63 (95% CI =0.20–1.94) in girls. The positive likelihood ratio of nausea was 2.05 (95% CI =1.06–4.00): 2.74 (95% CI =1.28–5.86) in boys and 1.09 (95% CI =0.27–4.42) in girls. The association between abdominal pain and GAS pharyngitis was stronger in boys aged <6 years than in boys aged 6–15 years. Conclusion Abdominal pain and nausea were associated with GAS pharyngitis in boys, but not in girls. Abdominal pain and nausea may help determine the suitability of rapid tests in younger boys with fever and other clinical findings consistent with GAS pharyngitis, even in the absence of sore throat. PMID:28989283
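A positive likelihood ratio like those reported above is the sensitivity divided by the false-positive rate; a sketch using clean hypothetical counts, not the study's raw data:

```python
def positive_likelihood_ratio(tp: int, fn: int, fp: int, tn: int) -> float:
    """LR+ = sensitivity / (1 - specificity)."""
    sensitivity = tp / (tp + fn)
    false_positive_rate = fp / (fp + tn)
    return sensitivity / false_positive_rate

# Hypothetical 2x2 counts for a symptom vs confirmed GAS pharyngitis:
# sensitivity = 20/100 = 0.20, false-positive rate = 100/1000 = 0.10
lr_plus = positive_likelihood_ratio(tp=20, fn=80, fp=100, tn=900)  # -> 2.0
```

An LR+ above 1 (such as the 2.41 for abdominal pain in boys) raises the post-test probability; an LR+ near or below 1 (such as the 0.63 in girls) does not.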
Pedersen, Anette F; Carlsen, Anders H; Vedsted, Peter
2015-01-01
Background Rates of prostate specific antigen (PSA) test ordering vary among GPs. Aim To examine whether GPs’ risk attitude, level of empathy, and burnout status are associated with PSA testing. Design and setting Register and questionnaire study including 129 solo GPs (active in the Central Denmark Region) and 76 672 of their adult male patients with no history of or current prostate cancer diagnosis. Method PSA tests from 2012 were retrieved from a register and classified as incident (that is, the first PSA test within 24 months), repeated normal, or repeated raised tests. This was merged with information on GPs’ risk attitudes, empathy, and burnout status from a 2012 survey. Results Patients registered with a GP with a high score on anxiety caused by uncertainty (odds ratio [OR] 1.03, 95% confidence interval [CI] = 1.00 to 1.06, P = 0.025) or concern about bad outcomes (OR 1.04; 95% CI = 1.00 to 1.08, P = 0.034) were more likely to have an incident PSA test, whereas those registered with a GP with increased tolerance for ambiguity were less likely (OR 0.98, 95% CI = 0.96 to 1.00, P = 0.025). Patients registered with a GP reporting high tolerance for ambiguity (OR 0.96, 95% CI = 0.94 to 0.99, P = 0.009) or high propensity to risk-taking (OR 0.97, 95% CI = 0.93 to 1.00, P = 0.047) were less likely to have a repeated normal PSA test. Conclusion Various aspects of GPs’ risk-taking attitudes were associated with patients’ probability of having an incident and a repeated normal PSA test. The probability of having a repeated raised PSA test was not influenced by any of the psychological factors. Burnout and empathy were not associated with PSA testing. PMID:26541183
Koffi, Ange; Danel, Christine; Ouassa, Timothée; Blehoué, Marcel-Angora; Ouattara, Eric; Assemien, Jeanne-d’Arc; Masumbuko, Jean-Marie; Coffie, Patrick; Cartier, Nathalie; Laurent, Arnaud; Raguin, Gilles; Malvy, Denis; N’Dri-Yoman, Thérèse; Eholié, Serge P.; Domoua, Serge K.
2017-01-01
Background In Côte d’Ivoire, a TB prison program has been developed since 1999. This program includes offering TB screening to prisoners who show up with TB symptoms at the infirmary. Our objective was to estimate the prevalence of pulmonary TB among inmates at the Correctional and Detention Facility of Abidjan, the largest prison of Côte d’Ivoire, 16 years after this TB program was implemented. Methods Between March and September 2015, inmates were screened for pulmonary TB using systematic direct smear microscopy, culture, and chest X-ray. All participants were also offered HIV testing. TB was defined as either confirmed (positive culture), probable (positive microscopy and/or chest X-ray findings suggestive of TB) or possible (signs or symptoms suggestive of TB, no X-ray or microbiological evidence). Factors associated with confirmed tuberculosis were analysed using multivariable logistic regression. Results Among the 943 inmates screened, 88 (9.3%) met the TB case definition, including 19 (2.0%) with confirmed TB, 40 (4.2%) with probable TB and 29 (3.1%) with possible TB. Of the 19 isolated TB strains, 10 (53%) were drug resistant, including 7 (37%) with multi-resistance. Of the 10 patients with a resistant TB strain, only one had a past history of TB treatment. HIV prevalence was 3.1% overall, and 9.6% among TB cases. Factors associated with confirmed TB were age ≥30 years (odds ratio 3.8; 95% CI 1.1–13.3), prolonged cough (odds ratio 3.6; 95% CI 1.3–9.5) and fever (odds ratio 2.7; 95% CI 1.0–7.5). Conclusion In the country's largest prison, pulmonary TB is still 10 (confirmed) to 44 times (confirmed, probable or possible) as frequent as in the Côte d’Ivoire general population, despite a long-running symptom-based program of TB detection. Decreasing TB prevalence and limiting the risk of MDR may require the implementation of annual in-cell TB screening campaigns that systematically target all prison inmates. PMID:28759620
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detrano, R.; Yiannikas, J.; Salcedo, E.E.
One hundred fifty-four patients referred for coronary arteriography were prospectively studied with stress electrocardiography, stress thallium scintigraphy, cine fluoroscopy (for coronary calcifications), and coronary angiography. Pretest probabilities of coronary disease were determined based on age, sex, and type of chest pain. These and pooled literature values for the conditional probabilities of test results based on disease state were used in Bayes theorem to calculate posttest probabilities of disease. The results of the three noninvasive tests were compared for statistical independence, a necessary condition for their simultaneous use in Bayes theorem. The test results were found to demonstrate pairwise independence in patients with and those without disease. Some dependencies that were observed between the test results and the clinical variables of age and sex were not sufficient to invalidate application of the theorem. Sixty-eight of the study patients had at least one major coronary artery obstruction of greater than 50%. When these patients were divided into low-, intermediate-, and high-probability subgroups according to their pretest probabilities, noninvasive test results analyzed by Bayesian probability analysis appropriately advanced 17 of them by at least one probability subgroup while only seven were moved backward. Of the 76 patients without disease, 34 were appropriately moved into a lower probability subgroup while 10 were incorrectly moved up. We conclude that posttest probabilities calculated from Bayes theorem more accurately classified patients with and without disease than did pretest probabilities, thus demonstrating the utility of the theorem in this application.
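Pairwise independence is what licenses chaining the three test results through Bayes theorem one result at a time: updating sequentially is equivalent to a single update with the product of the likelihood ratios. A sketch with purely illustrative likelihood-ratio values:

```python
def bayes_update(prob: float, lr: float) -> float:
    """One Bayes-theorem step in odds form: odds -> odds * LR."""
    odds = prob / (1.0 - prob) * lr
    return odds / (1.0 + odds)

# Hypothetical likelihood ratios for positive stress ECG, thallium scintigraphy,
# and fluoroscopic calcification, applied to a 30% pretest probability.
p = 0.30
for lr in (2.1, 3.5, 2.8):
    p = bayes_update(p, lr)   # chaining is valid only if the tests are independent
```

Because the tests are conditionally independent, the loop above gives the same posterior as one update with LR = 2.1 * 3.5 * 2.8; without independence, the combined likelihood ratio would have to come from the joint distribution of the three results.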
Deviation from Power Law Behavior in Landslide Phenomenon
NASA Astrophysics Data System (ADS)
Li, L.; Lan, H.; Wu, Y.
2013-12-01
Power law distribution of magnitude is widely observed in many natural hazards (e.g., earthquakes, floods, tornadoes, and forest fires). Landslide is unique in that its size distribution is characterized by a power-law decrease with a rollover at the small-size end. Yet the emergence of the rollover, i.e., the deviation from power-law behavior for small landslides, remains a mystery. In this contribution, we grouped the forces applied to landslide bodies into two categories: (1) forces proportional to the volume of the failure mass (gravity and friction), and (2) forces proportional to the area of the failure surface (cohesion). Failure occurs when the forces proportional to volume exceed the forces proportional to surface area. As such, given a certain mechanical configuration, the ratio of failure volume to failure surface area must exceed a corresponding threshold for failure to occur. If all landslides shared a uniform shape, so that the volume-to-surface-area ratio increased regularly with landslide volume, a cutoff of the landslide volume distribution at the small-size end could be defined. In realistic landslide phenomena, however, where heterogeneities of landslide shape and mechanical configuration exist, such a simple cutoff does not exist. The stochasticity of landslide shape introduces a probability distribution of the volume-to-surface-area ratio with respect to landslide volume, from which the probability that the ratio exceeds the threshold can be estimated for each value of landslide volume. An experiment based on empirical data showed that this probability can cause the power-law distribution of landslide volume to roll over at the small-size end.
We therefore propose that the constraints on the failure-volume-to-failure-surface-area ratio, together with the heterogeneity of landslide geometry and mechanical configuration, account for the deviation from power-law behavior in the landslide phenomenon. The figure shows that a rollover of the landslide size distribution at the small-size end is produced when the probability of V/S (the ratio of failure volume to failure surface area) exceeding the mechanical threshold is applied to the power-law distribution of landslide volume.
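The mechanism above can be illustrated with a Monte Carlo sketch (the exponent, threshold, and noise level below are all invented for illustration): sample candidate volumes from a pure power law, blur each candidate's volume-to-surface-area ratio with shape noise, and keep only candidates whose ratio exceeds the threshold. Small volumes are culled more often, bending the observed distribution below the input power law at the small-size end.

```python
import random

random.seed(1)
ALPHA = 2.0        # hypothetical power-law exponent for landslide volume
V_MIN = 1.0        # smallest candidate volume
THRESHOLD = 1.2    # hypothetical volume/surface-area failure threshold

def sample_volume() -> float:
    """Inverse-CDF sample from p(V) ~ V**-ALPHA for V >= V_MIN."""
    return V_MIN * (1.0 - random.random()) ** (-1.0 / (ALPHA - 1.0))

def survives(v: float) -> bool:
    """V/S scales as V**(1/3) for a uniform shape; lognormal noise adds shape scatter."""
    return v ** (1.0 / 3.0) * random.lognormvariate(0.0, 0.4) > THRESHOLD

samples = [(v, survives(v)) for v in (sample_volume() for _ in range(100_000))]

# Retention probability grows with volume: the small-size end is filtered hardest.
frac_small = sum(s for v, s in samples if v < 2.0) / sum(1 for v, s in samples if v < 2.0)
frac_large = sum(s for v, s in samples if v >= 2.0) / sum(1 for v, s in samples if v >= 2.0)
```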
A population-based study of stimulant drug treatment of ADHD and academic progress in children.
Zoëga, Helga; Rothman, Kenneth J; Huybrechts, Krista F; Ólafsson, Örn; Baldursson, Gísli; Almarsdóttir, Anna B; Jónsdóttir, Sólveig; Halldórsson, Matthías; Hernández-Diaz, Sonia; Valdimarsdóttir, Unnur A
2012-07-01
We evaluated the hypothesis that later start of stimulant treatment of attention-deficit/hyperactivity disorder adversely affects academic progress in mathematics and language arts among 9- to 12-year-old children. We linked nationwide data from the Icelandic Medicines Registry and the Database of National Scholastic Examinations. The study population comprised 11,872 children born in 1994-1996 who took standardized tests in both fourth and seventh grade. We estimated the probability of academic decline (drop of ≥ 5.0 percentile points) according to drug exposure and timing of treatment start between examinations. To limit confounding by indication, we concentrated on children who started treatment either early or later, but at some point between fourth-grade and seventh-grade standardized tests. In contrast with nonmedicated children, children starting stimulant treatment between their fourth- and seventh-grade tests were more likely to decline in test performance. The crude probability of academic decline was 72.9% in mathematics and 42.9% in language arts for children with a treatment start 25 to 36 months after the fourth-grade test. Compared with those starting treatment earlier (≤ 12 months after tests), the multivariable adjusted risk ratio (RR) for decline was 1.7 (95% confidence interval [CI]: 1.2-2.4) in mathematics and 1.1 (95% CI: 0.7-1.8) in language arts. The adjusted RR of mathematics decline with later treatment was higher among girls (RR, 2.7; 95% CI: 1.2-6.0) than boys (RR, 1.4; 95% CI: 0.9-2.0). Later start of stimulant drug treatment of attention-deficit/hyperactivity disorder is associated with academic decline in mathematics.
Odds Ratio Product of Sleep EEG as a Continuous Measure of Sleep State
Younes, Magdy; Ostrowski, Michele; Soiferman, Marc; Younes, Henry; Younes, Mark; Raneri, Jill; Hanly, Patrick
2015-01-01
Study Objectives: To develop and validate an algorithm that provides a continuous estimate of sleep depth from the electroencephalogram (EEG). Design: Retrospective analysis of polysomnograms. Setting: Research laboratory. Participants: 114 patients who underwent clinical polysomnography in sleep centers at the University of Manitoba (n = 58) and the University of Calgary (n = 56). Interventions: None. Measurements and Results: Power spectrum of EEG was determined in 3-second epochs and divided into delta, theta, alpha-sigma, and beta frequency bands. The range of powers in each band was divided into 10 aliquots. EEG patterns were assigned a 4-digit number that reflects the relative power in the 4 frequency ranges (10,000 possible patterns). Probability of each pattern occurring in 30-s epochs staged awake was determined, resulting in a continuous probability value from 0% to 100%. This was divided by 40 (% of epochs staged awake) producing the odds ratio product (ORP), with a range of 0–2.5. In validation testing, average ORP decreased progressively as EEG progressed from wakefulness (2.19 ± 0.29) to stage N3 (0.13 ± 0.05). ORP < 1.0 predicted sleep and ORP > 2.0 predicted wakefulness in > 95% of 30-s epochs. Epochs with intermediate ORP occurred in unstable sleep with a high arousal index (> 70/h) and were subject to much interrater scoring variability. There was an excellent correlation (r2 = 0.98) between ORP in current 30-s epochs and the likelihood of arousal or awakening occurring in the next 30-s epoch. Conclusions: Our results support the use of the odds ratio product (ORP) as a continuous measure of sleep depth. Citation: Younes M, Ostrowski M, Soiferman M, Younes H, Younes M, Raneri J, Hanly P. Odds ratio product of sleep EEG as a continuous measure of sleep state. SLEEP 2015;38(4):641–654. PMID:25348125
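The encoding step described above, decile-binning four band powers into a 4-digit pattern and converting the pattern's wake probability into ORP, can be sketched as follows (the decile boundaries and the wake-probability lookup table are placeholders here; the published algorithm derives both from a training set of staged epochs):

```python
from bisect import bisect_right

def band_decile(power: float, boundaries: list[float]) -> int:
    """Map one band power onto a 0-9 aliquot using precomputed decile boundaries."""
    return min(bisect_right(boundaries, power), 9)

def epoch_pattern(delta, theta, alpha_sigma, beta, bounds) -> int:
    """Assemble the 4-digit pattern (10,000 possibilities) for one 3-s epoch."""
    digits = [band_decile(p, b) for p, b in zip((delta, theta, alpha_sigma, beta), bounds)]
    return digits[0] * 1000 + digits[1] * 100 + digits[2] * 10 + digits[3]

def orp(pattern: int, p_wake: dict[int, float]) -> float:
    """ORP = (% probability this pattern occurs in wake-staged epochs) / 40."""
    return p_wake.get(pattern, 0.0) / 40.0
```

With this normalization a pattern seen only in wake (probability 100%) maps to the ORP maximum of 2.5, and a pattern never seen in wake maps to 0, matching the 0-2.5 range described in the abstract.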
Maceral distributions in Illinois coals and their paleoenvironmental implications
Harvey, R.D.; Dillon, J.W.
1985-01-01
For purposes of assessing the maceral distribution of Illinois (U.S.A.) coals, analyses were assembled for 326 face channel and drill core samples from 24 coal members of the Pennsylvanian System. The inertinite content of coals from the Missourian and Virgilian Series averages 16.1% (mineral free), compared to 9.4% for older coals from the Desmoinesian and older Series. This indicates there was generally a higher state of oxidation in the peat that formed the younger coals. This state probably resulted from greater exposure of these peats to weathering as the climate became drier and the water table lower than was the case for the older coals, although oxidation during allochthonous deposition of inertinite components is a genetic factor that needs further study to confirm the importance of the climate. Regional variation of the vitrinite-inertinite ratio (V-I), on a mineral- and micrinite-free basis, was observed in the Springfield (No. 5) and Herrin (No. 6) Coal Members to be related to the geographical position of paleochannel (river) deposits known to have been contemporaneous with the peats that formed these two coal strata. The V-I ratio is highest (generally 12-27) in samples from areas adjacent to the channels, and lower (5-11) some 10-20 km away. We interpret the V-I ratio to be an inverse index of the degree of oxidation to which the original peat was exposed. High V-I ratio coal located near the channels probably formed under more anoxic conditions than did the lower V-I ratio coal some distance away from the channels. The low V-I ratio coal probably formed in areas of the peat swamp where the water table was generally lower than the channel areas. © 1986.
Goerlich-Jansson, Vivian C; Müller, Martina S; Groothuis, Ton G G
2013-12-01
Across various animal taxa not only the secondary sex ratio but also the primary sex ratio (at conception) shows significant deviations from the expected equal proportions of sons and daughters. Birds are especially intriguing for studying this phenomenon, as avian females are the heterogametic sex (ZW); therefore sex determination might be under direct control of the mother. Avian sex ratios vary in relation to environmental or maternal condition, which can also affect the production of maternal steroids that in turn are involved in reproduction and accumulate in the developing follicle before meiosis. As the proximate mechanisms underlying biased primary sex ratio are largely elusive, we explored how, and to what extent, maternal steroid hormones may be involved in affecting primary or secondary sex ratio in clutches of various species of pigeons. First we demonstrated a clear case of seasonal change in sex ratio in first eggs both in the Rock Pigeon (Columba livia) and in a related species, the Wood Pigeon (Columba palumbus), both producing clutches of two eggs. In the Homing Pigeon (Columba livia domestica), domesticated from the Rock Pigeon, testosterone treatment of breeding females induced a clear male bias, while corticosterone induced a female bias in first eggs, and we argue that this is in line with sex allocation theory. We next analyzed treatment effects on follicle formation, yolk mass, and yolk hormones, the latter both pre- and post-ovulatory, in order to test a diversity of potential mechanisms related to both primary and secondary sex ratio manipulation. We conclude that maternal plasma hormone levels may affect several pre-ovulatory mechanisms affecting primary sex ratio, whereas egg hormones are probably involved in secondary sex ratio manipulation only.
Di Stefano, G; Celletti, C; Baron, R; Castori, M; Di Franco, M; La Cesa, S; Leone, C; Pepe, A; Cruccu, G; Truini, A; Camerota, F
2016-09-01
Patients with joint hypermobility syndrome/Ehlers-Danlos syndrome, hypermobility type (JHS/EDS-HT) commonly suffer from pain. How this hereditary connective tissue disorder causes pain remains unclear although previous studies suggested it shares similar mechanisms with neuropathic pain and fibromyalgia. In this prospective study seeking information on the mechanisms underlying pain in patients with JHS/EDS-HT, we enrolled 27 consecutive patients with this connective tissue disorder. Patients underwent a detailed clinical examination, including the neuropathic pain questionnaire DN4 and the fibromyalgia rapid screening tool. As quantitative sensory testing methods, we included thermal-pain perceptive thresholds and the wind-up ratio and recorded a standard nerve conduction study to assess non-nociceptive fibres and laser-evoked potentials, assessing nociceptive fibres. Clinical examination and diagnostic tests disclosed no somatosensory nervous system damage. Conversely, most patients suffered from widespread pain, the fibromyalgia rapid screening tool elicited positive findings, and quantitative sensory testing showed lowered cold and heat pain thresholds and an increased wind-up ratio. While the lack of somatosensory nervous system damage is incompatible with neuropathic pain as the mechanism underlying pain in JHS/EDS-HT, the lowered cold and heat pain thresholds and increased wind-up ratio imply that pain in JHS/EDS-HT might arise through central sensitization. Hence, this connective tissue disorder and fibromyalgia share similar pain mechanisms. WHAT DOES THIS STUDY ADD?: In patients with JHS/EDS-HT, the persistent nociceptive input due to joint abnormalities probably triggers central sensitization in the dorsal horn neurons and causes widespread pain. © 2016 European Pain Federation - EFIC®
Regulation of Motivation to Self-Administer Ethanol by mGluR5 in Alcohol-Preferring (P) Rats
Besheer, Joyce; Faccidomo, Sara; Grondin, Julie J. M.; Hodge, Clyde W.
2008-01-01
Background Emerging evidence indicates that Group I metabotropic glutamate receptors (mGluR1 and mGluR5) differentially regulate ethanol self-administration in several rodent behavioral models. The purpose of this work was to further characterize involvement of Group I mGluRs in the reinforcing effects of ethanol using a progressive ratio schedule of reinforcement. Methods Alcohol-preferring (P) rats were trained to self-administer ethanol (15% v/v) versus water on a concurrent schedule of reinforcement, and the effects of the Group I mGluR antagonists were evaluated on progressive ratio performance. The rats were then trained to self-administer sucrose (0.4% w/v) versus water, and the effects of the antagonists were tested on progressive ratio performance. Results The mGluR1 antagonist, 3,4-dihydro-2H-pyrano[2,3]b quinolin-7-yl (cis-4-methoxy-cyclohexyl) methanone (JNJ 16259685; 0 to 1 mg/kg) and the mGluR5 antagonist, 6-methyl-2-(phenylethynyl) pyridine (MPEP; 0 to 10 mg/kg) dose-dependently reduced ethanol break point. In separate locomotor activity assessments, the lowest effective dose of JNJ 16259685 (0.3 mg/kg) produced a motor impairment, whereas the lowest effective dose of MPEP (3 mg/kg) did not. Thus, the reduction in ethanol break point by mGluR1 antagonism was probably a result of a motor impairment. JNJ 16259685 (0.3 mg/kg) and MPEP (10 mg/kg) reduced sucrose break point and produced motor impairments. Thus, the reductions in sucrose break point produced by both Group I antagonists were probably because of nonspecific effects on motor activity. Conclusions Together, these results suggest that glutamate activity at mGluR5 regulates motivation to self-administer ethanol. PMID:18162077
Culley, Deborah J; Flaherty, Devon; Fahey, Margaret C; Rudolph, James L; Javedan, Houman; Huang, Chuan-Chin; Wright, John; Bader, Angela M; Hyman, Bradley T; Blacker, Deborah; Crosby, Gregory
2017-11-01
The American College of Surgeons and the American Geriatrics Society have suggested that preoperative cognitive screening should be performed in older surgical patients. We hypothesized that unrecognized cognitive impairment in patients without a history of dementia is a risk factor for development of postoperative complications. We enrolled 211 patients 65 yr of age or older without a diagnosis of dementia who were scheduled for an elective hip or knee replacement. Patients were cognitively screened preoperatively using the Mini-Cog and demographic, medical, functional, and emotional/social data were gathered using standard instruments or review of the medical record. Outcomes included discharge to place other than home (primary outcome), delirium, in-hospital medical complications, hospital length-of-stay, 30-day emergency room visits, and mortality. Data were analyzed using univariate and multivariate analyses. Fifty of 211 (24%) patients screened positive for probable cognitive impairment (Mini-Cog less than or equal to 2). On age-adjusted multivariate analysis, patients with a Mini-Cog score less than or equal to 2 were more likely to be discharged to a place other than home (67% vs. 34%; odds ratio = 3.88, 95% CI = 1.58 to 9.55), develop postoperative delirium (21% vs. 7%; odds ratio = 4.52, 95% CI = 1.30 to 15.68), and have a longer hospital length of stay (hazard ratio = 0.63, 95% CI = 0.42 to 0.95) compared to those with a Mini-Cog score greater than 2. Many older elective orthopedic surgical patients have probable cognitive impairment preoperatively. Such impairment is associated with development of delirium postoperatively, a longer hospital stay, and lower likelihood of going home upon hospital discharge.
[Pre-test and post-test probabilities. Who cares?].
Steurer, Johann
2009-01-01
The accuracy of a diagnostic test (e.g., abdominal ultrasound in patients with suspected acute appendicitis) is described in terms of sensitivity and specificity. According to eminent textbooks, physicians should use the sensitivity and specificity of a test in their diagnostic reasoning: after taking the history, they estimate the pretest probability of the suspected illness, order one or more tests, and then calculate the respective posttest probability. In practice, physicians almost never follow this line of thinking. The main reasons are that estimating concrete illness probabilities is difficult, the sensitivity and specificity of a test are most often not known to physicians, and calculations during daily practice are intricate. Helpful for busy physicians are trustworthy expert recommendations on which test to apply in which clinical situation.
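The pretest-to-posttest calculation this abstract describes can be sketched with Bayes' rule. The numbers below are purely illustrative, not taken from the article:

```python
def posttest_probability(pretest, sensitivity, specificity, positive=True):
    """Update a pretest disease probability with a binary test result (Bayes' rule)."""
    if positive:
        # P(D|+) = sens*p / (sens*p + (1 - spec)*(1 - p))
        num = sensitivity * pretest
        den = num + (1 - specificity) * (1 - pretest)
    else:
        # P(D|-) = (1 - sens)*p / ((1 - sens)*p + spec*(1 - p))
        num = (1 - sensitivity) * pretest
        den = num + specificity * (1 - pretest)
    return num / den

# Illustrative numbers only (not from the article): suspected appendicitis with a
# pretest probability of 0.30 and an ultrasound with sensitivity 0.85, specificity 0.90.
p_pos = posttest_probability(0.30, 0.85, 0.90)                  # ~0.785 after a positive scan
p_neg = posttest_probability(0.30, 0.85, 0.90, positive=False)  # ~0.067 after a negative scan
```

The gap between these intermediate calculations and what physicians actually do in practice is precisely the article's point.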
Information theoretic quantification of diagnostic uncertainty.
Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T
2012-01-01
Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
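The information-theoretic framing described above can be made concrete: the expected posttest entropy measures the diagnostic uncertainty remaining after a binary test, and its difference from the pretest entropy is the information the test yields. A minimal sketch with illustrative numbers (not taken from the essay):

```python
from math import log2

def H(p):
    """Binary entropy in bits: uncertainty of a disease probability p."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def expected_posttest_entropy(pretest, sens, spec):
    """Expected remaining diagnostic uncertainty after a binary test,
    averaging the posttest entropy over the two possible results."""
    p_pos = sens * pretest + (1 - spec) * (1 - pretest)   # P(test positive)
    post_pos = sens * pretest / p_pos                     # P(disease | positive)
    post_neg = (1 - sens) * pretest / (1 - p_pos)         # P(disease | negative)
    return p_pos * H(post_pos) + (1 - p_pos) * H(post_neg)

pre = 0.30
h_before = H(pre)                                      # pretest uncertainty (bits)
h_after = expected_posttest_entropy(pre, 0.85, 0.90)   # expected posttest uncertainty
info_gain = h_before - h_after                         # mutual information I(D;T), bits
```

Because entropy is averaged over both possible results, `info_gain` quantifies the test's value before the result is known, which is one of the framework's attractions for test selection.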
Refinement of a Method for Identifying Probable Archaeological Sites from Remotely Sensed Data
NASA Technical Reports Server (NTRS)
Tilton, James C.; Comer, Douglas C.; Priebe, Carey E.; Sussman, Daniel; Chen, Li
2012-01-01
To facilitate locating archaeological sites before they are compromised or destroyed, we are developing approaches for generating maps of probable archaeological sites, through detecting subtle anomalies in vegetative cover, soil chemistry, and soil moisture by analyzing remotely sensed data from multiple sources. We previously reported some success in this effort with a statistical analysis of slope, radar, and Ikonos data (including tasseled cap and NDVI transforms) with Student's t-test. We report here on new developments in our work, performing an analysis of 8-band multispectral Worldview-2 data. The Worldview-2 analysis begins by computing medians and median absolute deviations for the pixels in various annuli around each site of interest on the 28 band-difference ratios. We then use principal components analysis followed by linear discriminant analysis to train a classifier which assigns a posterior probability that a location is an archaeological site. We tested the procedure using leave-one-out cross validation, with a second leave-one-out step to choose parameters, on a 9,859 x 23,000 pixel subset of the WorldView-2 data over the western portion of Ft. Irwin, CA, USA. We used 100 known non-sites and trained one classifier for lithic sites (n=33) and one classifier for habitation sites (n=16). We then analyzed convex combinations of scores from the Archaeological Predictive Model (APM) and our scores. We found that the combined scores had a higher area under the ROC curve than either individual method, indicating that including WorldView-2 data in the analysis improved the predictive power of the provided APM.
Iodine-xenon studies of Allende inclusions - Eggs and the Pink Angel
NASA Technical Reports Server (NTRS)
Swindle, T. D.; Caffee, M. W.; Hohenberg, C. M.
1988-01-01
The I-Xe systems of six Allende inclusions (five Eggs and the Pink Angel) appear to have been altered by nonnebular secondary processes. Evidence for this includes temperature-ordered variations in the initial I isotopic composition within several objects (with older apparent I-Xe ages associated with higher extraction temperatures) and the absence of primitive I-Xe ages. The span of apparent ages seen in Allende objects (10 Myr or more) is probably too long to reflect any nebular process, so at least some alteration probably occurred on the parent body. The range in initial (Pu-244)/(U-238) ratios for the Eggs (0.003-0.014) includes the current best estimates of the bulk solar system value (0.004-0.007). For Egg 3, the Pu/U ratio varies by a factor of two between extractions, probably the result of fractionation of Pu from U among different phases.
Net present value probability distributions from decline curve reserves estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, D.E.; Huffman, C.H.; Thompson, R.S.
1995-12-31
This paper demonstrates how reserves probability distributions can be used to develop net present value (NPV) distributions. NPV probability distributions were developed from the rate and reserves distributions presented in SPE 28333. This real-data study used practicing engineers' evaluations of production histories. Two approaches were examined to quantify portfolio risk. The first approach, the NPV Relative Risk Plot, compares the mean NPV with the NPV relative risk ratio for the portfolio. The relative risk ratio is the NPV standard deviation (σ) divided by the mean (μ) NPV. The second approach, a Risk-Return Plot, is a plot of the mean (μ) discounted cash flow rate of return (DCFROR) versus the standard deviation (σ) of the DCFROR distribution. This plot provides a risk-return relationship for comparing various portfolios. These methods may help evaluate property acquisition and divestiture alternatives and assess the relative risk of a suite of wells or fields for bank loans.
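The first portfolio-risk measure described above reduces to a simple statistic of a sampled NPV distribution: the standard deviation divided by the mean (σ/μ). A minimal sketch using hypothetical NPV samples (not data from the paper):

```python
from statistics import mean, stdev

def relative_risk_ratio(npvs):
    """NPV relative risk ratio: standard deviation divided by mean (sigma/mu)."""
    return stdev(npvs) / mean(npvs)

# Hypothetical NPV samples (e.g., $MM drawn from a decline-curve reserves
# distribution); a lower ratio means less risk per unit of expected value.
portfolio_a = [1.2, 1.5, 0.9, 1.1, 1.3]
portfolio_b = [0.2, 2.6, 0.4, 2.0, 0.8]
```

Both portfolios here have the same mean NPV, so the ratio isolates the dispersion, which is what makes the Relative Risk Plot useful for comparing acquisition alternatives.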
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brame, Ryan S.; Zaider, Marco; Zakian, Kristen L.
2009-05-01
Purpose: To quantify, as a function of average magnetic resonance spectroscopy (MRS) score and tumor volume, the probability that a cancer-suspected lesion has an elevated Gleason grade. Methods and Materials: The data consist of MRS imaging ratios R stratified by patient, lesion (contiguous abnormal voxels), voxels, biopsy and pathologic Gleason grade, and lesion volume. The data were analyzed using a logistic model. Results: For both low and high Gleason score biopsy lesions, the probability of pathologic Gleason score ≥4+3 increases with lesion volume. At low values of R a lesion volume of at least 15-20 voxels is needed to reach a probability of success of 80%; the biopsy result helps reduce the prediction uncertainty. At larger MRS ratios (R > 6) the biopsy result becomes essentially uninformative once the lesion volume is >12 voxels. With the exception of low values of R, for lesions with low Gleason score at biopsy, the MRS ratios serve primarily as a selection tool for assessing lesion volumes. Conclusions: In patients with biopsy Gleason score ≥4+3, high MRS imaging tumor volume and (creatine + choline)/citrate ratio may justify the initiation of voxel-specific dose escalation. This is an example of biologically motivated focal treatment for which intensity-modulated radiotherapy and especially brachytherapy are ideally suited.
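The logistic model reported above maps lesion volume and MRS ratio R to a probability of elevated Gleason grade. A sketch of that functional form, with hypothetical placeholder coefficients (the fitted values are not given in the abstract):

```python
from math import exp

def p_high_grade(volume_voxels, mrs_ratio, b0=-4.0, b1=0.25, b2=0.3):
    """Logistic model of the general form used in the study:
    P(Gleason >= 4+3) = 1 / (1 + exp(-(b0 + b1*volume + b2*R))).
    The coefficients here are hypothetical placeholders, not fitted values."""
    z = b0 + b1 * volume_voxels + b2 * mrs_ratio
    return 1.0 / (1.0 + exp(-z))

# Probability rises with lesion volume at a fixed MRS ratio, as reported.
small = p_high_grade(volume_voxels=5, mrs_ratio=4.0)
large = p_high_grade(volume_voxels=18, mrs_ratio=4.0)
```

With any positive volume coefficient, the model reproduces the abstract's qualitative finding that larger lesions carry a higher probability of pathologic Gleason score ≥4+3.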
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Steven H., E-mail: SHLin@mdanderson.org; Wang Lu; Myles, Bevan
2012-12-01
Purpose: Although 3-dimensional conformal radiotherapy (3D-CRT) is the worldwide standard for the treatment of esophageal cancer, intensity modulated radiotherapy (IMRT) improves dose conformality and reduces the radiation exposure to normal tissues. We hypothesized that the dosimetric advantages of IMRT should translate to substantive benefits in clinical outcomes compared with 3D-CRT. Methods and Materials: An analysis was performed of 676 nonrandomized patients (3D-CRT, n=413; IMRT, n=263) with stage Ib-IVa (American Joint Committee on Cancer 2002) esophageal cancers treated with chemoradiotherapy at a single institution from 1998-2008. An inverse probability of treatment weighting and inclusion of propensity score (treatment probability) as a covariate were used to compare overall survival time, interval to local failure, and interval to distant metastasis, while accounting for the effects of other clinically relevant covariates. The propensity scores were estimated using logistic regression analysis. Results: A fitted multivariate inverse probability weighted-adjusted Cox model showed that the overall survival time was significantly associated with several well-known prognostic factors, along with the treatment modality (IMRT vs 3D-CRT, hazard ratio 0.72, P<.001). Compared with IMRT, 3D-CRT patients had a significantly greater risk of dying (72.6% vs 52.9%, inverse probability of treatment weighting, log-rank test, P<.0001) and of locoregional recurrence (P=.0038). No difference was seen in cancer-specific mortality (Gray's test, P=.86) or distant metastasis (P=.99) between the 2 groups. An increased cumulative incidence of cardiac death was seen in the 3D-CRT group (P=.049), but most deaths were undocumented (5-year estimate, 11.7% in 3D-CRT vs 5.4% in IMRT group, Gray's test, P=.0029). Conclusions: Overall survival, locoregional control, and noncancer-related death were significantly better after IMRT than after 3D-CRT. Although these results need confirmation, IMRT should be considered for the treatment of esophageal cancer.
Thoma, Achilleas; Veltri, Karen; Khuthaila, Dana; Rockwell, Gloria; Duku, Eric
2004-05-01
This study compared the deep inferior epigastric perforator (DIEP) flap and the free transverse rectus abdominis myocutaneous (TRAM) flap in postmastectomy reconstruction using a cost-effectiveness analysis. A decision analytic model was used. Medical costs associated with the two techniques were estimated from the Ontario Ministry of Health Schedule of Benefits for 2002. Hospital costs were obtained from St. Joseph's Healthcare, a university teaching hospital in Hamilton, Ontario, Canada. The utilities of clinically important health states related to breast reconstruction were obtained from 32 "experts" across Canada and converted into quality-adjusted life years. The probabilities of these various clinically important health states being associated with the DIEP and free TRAM flaps were obtained after a thorough review of the literature. The DIEP flap was more costly than the free TRAM flap ($7026.47 versus $6508.29), but it provided more quality-adjusted life years than the free TRAM flap (28.88 years versus 28.53 years). The baseline incremental cost-utility ratio was $1464.30 per quality-adjusted life year, favoring adoption of the DIEP flap. Sensitivity analyses were performed by assuming that the probabilities of occurrence of hernia, abdominal bulging, total flap loss, operating room time, and hospital stay were identical with the DIEP and free TRAM techniques. By assuming that the probability of postoperative hernia for the DIEP flap increased from 0.008 to 0.054 (same as for TRAM flap), the incremental cost-utility ratio changed to $1435.00 per quality-adjusted life year. A sensitivity analysis was performed for the complication of hernia because the DIEP flap allegedly diminishes this complication. Increasing the probability of abdominal bulge from 0.041 to 0.103 for the DIEP flap changed the ratio to $2731.78 per quality-adjusted life year. 
When the probability of total flap failure was increased from 0.014 to 0.016, the ratio changed to $1384.01 per quality-adjusted life year. When the time in the operating room was assumed to be the same for both flaps, the ratio changed to $4026.57 per quality-adjusted life year. If the hospital stay was assumed to be the same for both flaps, the ratio changed to $1944.30 per quality-adjusted life year. On the basis of the baseline calculation and sensitivity analyses, the DIEP flap remained a cost-effective procedure. Thus, adoption of this new technique for postmastectomy reconstruction is warranted in the Canadian health care system.
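The incremental cost-utility ratio used throughout this analysis is simply the cost difference divided by the QALY difference. Recomputing it from the rounded figures quoted above gives roughly $1,480/QALY rather than exactly the reported $1,464.30, because the published cost and QALY values are themselves rounded:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-utility ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Baseline figures from the abstract: DIEP flap vs free TRAM flap.
baseline = icer(7026.47, 28.88, 6508.29, 28.53)
# (7026.47 - 6508.29) / (28.88 - 28.53) = 518.18 / 0.35, roughly $1,480/QALY
```

The sensitivity analyses in the abstract amount to re-running this calculation after perturbing the complication probabilities that feed into the cost and QALY inputs.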
Cost-effectiveness of alternative test strategies for the diagnosis of coronary artery disease.
Garber, A M; Solomon, N A
1999-05-04
Background: The appropriate roles for several diagnostic tests for coronary disease are uncertain. Objective: To evaluate the cost-effectiveness of alternative approaches to diagnosis of coronary disease. Design: Meta-analysis of the accuracy of alternative diagnostic tests plus decision analysis to assess the health outcomes and costs of alternative diagnostic strategies for patients at intermediate pretest risk for coronary disease. Data Sources: Studies of test accuracy that met inclusion criteria; published information on treatment effectiveness and disease prevalence. Target Population: Men and women 45, 55, and 65 years of age with a 25% to 75% pretest risk for coronary disease. Time Horizon: 30 years. Perspective: Societal. Interventions: Diagnostic strategies were initial angiography and initial testing with one of five noninvasive tests--exercise treadmill testing, planar thallium imaging, single-photon emission computed tomography (SPECT), stress echocardiography, and positron emission tomography (PET)--followed by coronary angiography if noninvasive test results were positive. Testing was followed by observation, medical treatment, or revascularization. Outcome Measures: Life-years, quality-adjusted life-years (QALYs), costs, and costs per QALY. Results: Life expectancy varied little with the initial diagnostic test; for a 55-year-old man, the best-performing test increased life expectancy by 7 more days than the worst-performing test. More sensitive tests increased QALYs more. Echocardiography improved health outcomes and reduced costs relative to stress testing and planar thallium imaging. The incremental cost-effectiveness ratio was $75,000/QALY for SPECT relative to echocardiography and was greater than $640,000 for PET relative to SPECT. Compared with SPECT, immediate angiography had an incremental cost-effectiveness ratio of $94,000/QALY. Qualitative findings varied little with age, sex, pretest probability of disease, or the test indeterminacy rate. Results varied most with sensitivity to severe coronary disease.
Echocardiography, SPECT, and immediate angiography are cost-effective alternatives to PET and other diagnostic approaches. Test selection should reflect local variation in test accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, L.A.; Randolph, P.L.
1979-01-01
A paper presented by the Institute of Gas Technology (IGT) at the Third Geopressured-Geothermal Energy Conference hypothesized that the high ratio of produced gas to produced water from the No. 1 sand in the Edna Delcambre No. 1 well was due to free gas trapped in pores by imbibition over geological time. This hypothesis was examined in relation to preliminary test data which reported only average gas-to-water ratios over the roughly 2-day steps in flow rate. Subsequent public release of detailed test data revealed substantial departures from the previously reported computer simulation results. Also, data now in the public domain reveal the existence of a gas cap on the aquifer tested. This paper describes IGT's efforts to match the observed gas/water production with computer simulation. Two models for the occurrence and production of gas in excess of that dissolved in the brine have been used. One model considers the gas to be dispersed in pores by imbibition, and the other model considers the gas as a nearby free gas cap above the aquifer. The studies revealed that the dispersed gas model characteristically gave the wrong shape to plots of gas production on the gas/water ratio plots, such that no reasonable match to the flow data could be achieved. The free gas cap model gave a characteristically better shape to the production plots and could provide an approximate fit to the data if the edge of the free gas cap is only about 400 feet from the well. Because the geological structure maps indicate the free gas cap to be several thousand feet away and the computer simulation results match the distance to the nearby Delcambre Nos. 4 and 4A wells, it appears that the source of the excess free gas in the test of the No. 1 sand may be from these nearby wells. The gas source is probably a separate gas zone and is brought into contact with the No. 1 sand via a conduit around the No. 4 well.
NASA Astrophysics Data System (ADS)
Tibi, R.; Young, C. J.; Koper, K. D.; Pankow, K. L.
2017-12-01
Seismic event discrimination methods exploit the differing characteristics—in terms of amplitude and/or frequency content—of the generated seismic phases among the event types to be classified. Most of the commonly used seismic discrimination methods are designed for regional data recorded at distances of about 200 to 2000 km. Relatively little attention has focused on discriminants for local distances (< 200 km), the range at which the smallest events are recorded. Short-period fundamental mode Rayleigh waves (Rg) are commonly observed on seismograms of man-made seismic events, and shallow, naturally occurring tectonic earthquakes recorded at local distances. We leverage the well-known notion that Rg amplitude decreases dramatically with increasing event depth to propose a new depth discriminant based on Rg-to-Sg spectral amplitude ratios. The approach is successfully used to discriminate shallow events from deeper tectonic earthquakes in the Utah region recorded at local distances (< 150 km) by the University of Utah Seismographic Stations (UUSS) regional seismic network. Using Mood's median test, we obtained probabilities of nearly zero that the median Rg-to-Sg spectral amplitude ratios are the same between shallow events on one side (including both shallow tectonic earthquakes and man-made events), and deeper earthquakes on the other side, suggesting that there is a statistically significant difference in the estimated Rg-to-Sg ratios between the two populations. We also observed consistent disparities between the different types of shallow events (e.g., explosions vs. mining-induced events), implying that it may be possible to separate the sub-populations that make up this group. This suggests that using local distance Rg-to-Sg spectral amplitude ratios one can not only discriminate shallow from deeper events, but may also be able to discriminate different populations of shallow events. 
We also experimented with Pg-to-Sg amplitude ratios in multi-frequency linear discriminant functions to classify man-made events and tectonic earthquakes in Utah. Initial results are very promising, showing probabilities of misclassification of only 2.4-14.3%.
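The Rg-to-Sg spectral amplitude ratio underlying this discriminant can be sketched as a band-limited amplitude ratio between two phase windows. The window picks, the frequency band, and the plain DFT below are illustrative assumptions, not the authors' processing chain:

```python
from cmath import exp as cexp, pi
from math import sin

def band_amplitude(signal, dt, f_lo, f_hi):
    """Mean DFT amplitude (|X_k|/n) over frequency bins in [f_lo, f_hi] Hz.
    A plain DFT keeps this sketch dependency-free; real pipelines use an FFT."""
    n = len(signal)
    amps = []
    for k in range(n // 2):
        f = k / (n * dt)
        if f_lo <= f <= f_hi:
            coeff = sum(s * cexp(-2j * pi * k * i / n) for i, s in enumerate(signal))
            amps.append(abs(coeff) / n)
    return sum(amps) / len(amps)

def rg_sg_ratio(rg_window, sg_window, dt, band=(0.5, 2.0)):
    """Rg-to-Sg spectral amplitude ratio; a low ratio suggests a deeper source."""
    return band_amplitude(rg_window, dt, *band) / band_amplitude(sg_window, dt, *band)

# Synthetic check: an Rg window with twice the amplitude of the Sg window
# at 1 Hz gives a ratio of 2.
dt = 0.01                               # 100 samples per second
t = [i * dt for i in range(400)]        # 4-second windows
rg = [2.0 * sin(2 * pi * 1.0 * ti) for ti in t]
sg = [1.0 * sin(2 * pi * 1.0 * ti) for ti in t]
ratio = rg_sg_ratio(rg, sg, dt)
```

In the discrimination setting, this scalar (or its logarithm) would be the feature separating shallow events, whose seismograms carry strong Rg, from deeper earthquakes, whose Rg is weak relative to Sg.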
Hung, Kristin J; Awtrey, Christopher S; Tsai, Alexander C
2014-04-01
To estimate the association between urinary incontinence (UI) and probable depression, work disability, and workforce exit. The analytic sample consisted of 4,511 women enrolled in the population-based Health and Retirement Study cohort. The analysis baseline was 1996, the year that questions about UI were added to the survey instrument, and at which time study participants were 54-65 years of age. Women were followed-up with biennial interviews until 2010-2011. Outcomes of interest were onset of probable depression, work disability, and workforce exit. Urinary incontinence was specified in different ways based on questions about experience and frequency of urine loss. We fit Cox proportional hazards regression models to the data, adjusting the estimates for baseline sociodemographic and health status variables previously found to confound the association between UI and the outcomes of interest. At baseline, 727 participants (survey-weighted prevalence, 16.6%; 95% confidence interval [CI] 15.4-18.0) reported any UI, of which 212 (survey-weighted prevalence, 29.2%; 95% CI 25.4-33.3) reported urine loss on more than 15 days in the past month; and 1,052 participants were categorized as having probable depression (survey-weighted prevalence, 21.6%; 95% CI 19.8-23.6). Urinary incontinence was associated with increased risks for probable depression (adjusted hazard ratio, 1.43; 95% CI 1.27-1.62) and work disability (adjusted hazard ratio, 1.21; 95% CI 1.01-1.45), but not workforce exit (adjusted hazard ratio, 1.06; 95% CI 0.93-1.21). In a population-based cohort of women between ages 54 and 65 years, UI was associated with increased risks for probable depression and work disability. Improved diagnosis and management of UI may yield significant economic and psychosocial benefits.
Uceda, Mónica; Ziegler, Otto; Lindo, Felipe; Herrera-Pérez, Eder
2013-01-01
Background. Asthma and allergic rhinitis are highly prevalent conditions that cause major illness worldwide. This study aimed to assess the association between allergic rhinitis and asthma control in Peruvian school children. Methods. A cross-sectional study was conducted among 256 children with asthma recruited in 5 schools from Lima and Callao cities. The outcome was asthma control assessed by the asthma control test. A score test for trend of odds was used to evaluate the association between allergic rhinitis severity and the prevalence of inadequate asthma control. A generalized linear regression model was used to estimate the adjusted prevalence ratios of inadequate asthma control. Results. Allergic rhinitis was present in 66.4% of the population with asthma. The trend analysis showed a positive association between allergic rhinitis and the probability of inadequate asthma control (P < 0.001). It was associated with an increased prevalence of inadequate asthma control, with adjusted prevalence ratios of 1.53 (95% confidence interval: 1.19−1.98). Conclusion. This study indicates that allergic rhinitis is associated with an inadequate level of asthma control, giving support to the recommendation of evaluating rhinitis to improve asthma control in children. PMID:23984414
Reske, Kimberly A.; Hink, Tiffany; Dubberke, Erik R.
2016-01-01
The objective of this study was to evaluate the clinical characteristics and outcomes of hospitalized patients tested for Clostridium difficile and determine the correlation between pretest probability for C. difficile infection (CDI) and assay results. Patients with testing ordered for C. difficile were enrolled and assigned a high, medium, or low pretest probability of CDI based on clinical evaluation, laboratory, and imaging results. Stool was tested for C. difficile by toxin enzyme immunoassay (EIA) and toxigenic culture (TC). Chi-square analyses and the log rank test were utilized. Among the 111 patients enrolled, stool samples from nine were TC positive and four were EIA positive. Sixty-one (55%) patients had clinically significant diarrhea, 19 (17%) patients did not, and clinically significant diarrhea could not be determined for 31 (28%) patients. Seventy-two (65%) patients were assessed as having a low pretest probability of having CDI, 34 (31%) as having a medium probability, and 5 (5%) as having a high probability. None of the patients with low pretest probabilities had a positive EIA, but four were TC positive. None of the seven patients with a positive TC but a negative index EIA developed CDI within 30 days after the index test or died within 90 days after the index toxin EIA date. Pretest probability for CDI should be considered prior to ordering C. difficile testing and must be taken into account when interpreting test results. CDI is a clinical diagnosis supported by laboratory data, and the detection of toxigenic C. difficile in stool does not necessarily confirm the diagnosis of CDI. PMID:27927930
Probability, geometry, and dynamics in the toss of a thick coin
NASA Astrophysics Data System (ADS)
Yong, Ee Hou; Mahadevan, L.
2011-12-01
When a thick cylindrical coin is tossed in the air and lands without bouncing on an inelastic substrate, it ends up on its face or its side. We account for the rigid body dynamics of spin and precession and calculate the probability distribution of heads, tails, and sides for a thick coin as a function of its dimensions and the distribution of its initial conditions. Our theory yields a simple expression for the aspect ratio of homogeneous coins with a prescribed frequency of heads or tails compared to sides, which we validate using data from the results of tossing coins of different aspect ratios.
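A useful baseline for the thick-coin problem is the naive solid-angle model, in which the coin settles on whichever surface a uniformly random downward direction pierces; the paper's rigid-body treatment of spin and precession refines this. The sketch below implements only the simple model, not the paper's full theory:

```python
import random
from math import sqrt

def p_side_solid_angle(thickness, diameter):
    """Closed-form side probability in the naive solid-angle model:
    P(side) = h / sqrt(h^2 + d^2) for a cylinder of thickness h, diameter d."""
    return thickness / sqrt(thickness**2 + diameter**2)

def p_side_monte_carlo(thickness, diameter, n=100_000, seed=1):
    """Monte Carlo over uniformly random orientations (Gaussian-vector trick):
    the coin lands on its side when the downward direction lies closer to the
    equator than the face cones, i.e. |cos(theta)| < h / sqrt(h^2 + d^2)."""
    rng = random.Random(seed)
    c = thickness / sqrt(thickness**2 + diameter**2)
    side = 0
    for _ in range(n):
        x, y, z = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        if abs(z) < c * sqrt(x * x + y * y + z * z):
            side += 1
    return side / n

# Mosteller's classic answer drops out of this model:
# P(side) = 1/3 requires h/d = 1/(2*sqrt(2)).
```

Accounting for bounce and for the distribution of angular momentum at release, as the paper does, shifts these probabilities substantially, which is exactly why the dynamical analysis is needed.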
The potential economic value of screening hospital admissions for Clostridium difficile.
Bartsch, S M; Curry, S R; Harrison, L H; Lee, B Y
2012-11-01
Asymptomatic Clostridium difficile carriage has a prevalence reported as high as 51-85 %; with up to 84 % of incident hospital-acquired infections linked to carriers. Accurately identifying carriers may limit the spread of Clostridium difficile. Since new technology adoption depends heavily on its economic value, we developed an analytic simulation model to determine the cost-effectiveness of screening hospital admissions for Clostridium difficile from the hospital and third party payer perspectives. Isolation precautions were applied to patients testing positive, preventing transmission. Sensitivity analyses varied Clostridium difficile colonization rate, infection probability among secondary cases, contact isolation compliance, and screening cost. Screening was cost-effective (i.e., incremental cost-effectiveness ratio [ICER] ≤ $50,000/QALY) for every scenario tested; all ICER values were ≤ $256/QALY. Screening was economically dominant (i.e., saved costs and provided health benefits) with a ≥10.3 % colonization rate and ≥5.88 % infection probability when contact isolation compliance was ≥25 % (hospital perspective). Under some conditions screening led to cost savings per case averted (range, $53-272). Clostridium difficile screening, coupled with isolation precautions, may be a cost-effective intervention to hospitals and third party payers, based on prevalence. Limiting Clostridium difficile transmission can reduce the number of infections, thereby reducing its economic burden to the healthcare system.
Jones, Alvin
2016-05-01
This research examined cutoff scores for the Effort Index (EI), an embedded measure of performance validity, for the Repeatable Battery for the Assessment of Neuropsychological Status. EI cutoffs were explored for an active-duty military sample composed mostly of patients with traumatic brain injury. Four psychometrically defined malingering groups including a definite malingering, probable to definite malingering, probable malingering, and a combined group were formed based on the number of validity tests failed. Excellent specificities (0.97 or greater) were found for all cutoffs examined (EI ≥ 1 to EI ≥ 3). Excellent sensitivities (0.80 to 0.89) were also found for the definite malingering group. Sensitivities were 0.49 or below for the other groups. Positive and negative predictive values and likelihood ratios indicated that the cutoffs for EI were much stronger for ruling-in than ruling-out malingering. Analyses indicated the validity tests used to form the malingering groups were uncorrelated, which serves to enhance the validity of the formation of the malingering groups. Cutoffs were similar to other research using samples composed predominantly of head-injured individuals. Published by Oxford University Press 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.
A potential risk of overestimating apparent diffusion coefficient in parotid glands.
Liu, Yi-Jui; Lee, Yi-Hsiung; Chang, Hing-Chiu; Huang, Teng-Yi; Chiu, Hui-Chu; Wang, Chih-Wei; Chiou, Ta-Wei; Hsu, Kang; Juan, Chun-Jung; Huang, Guo-Shu; Hsu, Hsian-He
2015-01-01
To investigate transient signal loss on diffusion weighted images (DWI) and overestimation of apparent diffusion coefficient (ADC) in parotid glands using single shot echoplanar DWI (EPDWI). This study enrolled 6 healthy subjects and 7 patients receiving radiotherapy. All participants received dynamic EPDWI with a total of 8 repetitions. Imaging quality of DWI was evaluated. Probability of severe overestimation of ADC (soADC), defined by an ADC ratio greater than 1.2, was calculated. Error on T2WI, DWI, and ADC was computed. Statistical analysis included paired Student's t tests and the Mann-Whitney U test. A P value less than 0.05 was considered statistically significant. Transient signal loss was visually detected on some excitations of DWI but not on T2WI or mean DWI. soADC occurred randomly among 8 excitations and 3 directions of diffusion encoding gradients. Probability of soADC was significantly higher in the radiotherapy group (42.86%) than in the healthy group (24.39%). The mean error percentage decreased as the number of excitations increased on all images, and it was smallest on T2WI, followed by DWI and ADC in increasing order. Transient signal loss on DWI was successfully detected by dynamic EPDWI. The signal loss on DWI and overestimation of ADC could be partially remedied by increasing the number of excitations.
The Potential Economic Value of Screening Hospital Admissions for Clostridium difficile
Bartsch, Sarah M.; Curry, Scott R.; Harrison, Lee H.; Lee, Bruce Y.
2012-01-01
Purpose Asymptomatic Clostridium difficile carriage has a reported prevalence as high as 51% to 85%, with up to 84% of incident hospital-acquired infections linked to carriers. Accurately identifying carriers may limit the spread of Clostridium difficile. Methods Since new technology adoption depends heavily on its economic value, we developed an analytic simulation model to determine the cost-effectiveness of screening hospital admissions for Clostridium difficile from the hospital and third-party payer perspectives. Isolation precautions were applied to patients testing positive, preventing transmission. Sensitivity analyses varied the Clostridium difficile colonization rate, infection probability among secondary cases, contact isolation compliance, and screening cost. Results Screening was cost-effective [i.e., incremental cost-effectiveness ratio (ICER) ≤$50,000/QALY] for every scenario tested; all ICER values were ≤$256/QALY. Screening was economically dominant (i.e., saved costs and provided health benefits) with a ≥10.3% colonization rate and ≥5.88% infection probability when contact isolation compliance was ≥25% (hospital perspective). Under some conditions screening led to cost savings per case averted (range: $53 to $272). Conclusion Clostridium difficile screening, coupled with isolation precautions, may be a cost-effective intervention for hospitals and third-party payers, depending on prevalence. Limiting Clostridium difficile transmission can reduce the number of infections, thereby reducing its economic burden on the healthcare system. PMID:22752150
Chan, Brian T; Pradeep, Amrose; Prasad, Lakshmi; Murugesan, Vinothini; Chandrasekaran, Ezhilarasi; Kumarasamy, Nagalingeswaran; Mayer, Kenneth H; Tsai, Alexander C
2017-01-01
Background In India, which has the third largest HIV epidemic in the world, depression and HIV–related stigma may contribute to high rates of poor HIV–related outcomes such as loss to care and lack of virologic suppression. Methods We analyzed data from a large HIV treatment center in southern India to estimate the burden of depressive symptoms and internalized stigma among Indian people living with HIV (PLHIV) entering into HIV care and to test the hypothesis that probable depression was associated with internalized stigma. We fitted modified Poisson regression models, adjusted for sociodemographic variables, with probable depression (PHQ–9 score ≥10 or recent suicidal thoughts) as the outcome variable and the Internalized AIDS–Related Stigma Scale (IARSS) score as the explanatory variable. Findings 521 persons (304 men and 217 women) entering into HIV care between January 2015 and May 2016 were included in the analyses. The prevalence of probable depression was 10% and the mean IARSS score was 2.4 (out of 6), with 82% of participants endorsing at least one item on the IARSS. There was a nearly two times higher risk of probable depression for every additional point on the IARSS score (Adjusted Risk Ratio: 1.83; 95% confidence interval, 1.56–2.14). Conclusions Depression and internalized stigma are highly correlated among PLHIV entering into HIV care in southern India and may provide targets for policymakers seeking to improve HIV–related outcomes in India. PMID:29302315
PGS-FISH in reproductive medicine and perspective directions for improvement: a systematic review.
Zamora, Sandra; Clavero, Ana; Gonzalvo, M Carmen; de Dios Luna Del Castillo, Juan; Roldán-Nofuentes, Jose Antonio; Mozas, Juan; Castilla, Jose Antonio
2011-08-01
Embryo selection can be carried out via morphological criteria or by using genetic studies based on Preimplantation Genetic Screening. In the present study, we evaluate the clinical validity of Preimplantation Genetic Screening with fluorescence in situ hybridization (PGS-FISH) compared with morphological embryo criteria. A systematic review was made of the bibliography, with the following goals: firstly, to determine the prevalence of embryo chromosome alteration in clinical situations in which the PGS-FISH technique has been used; secondly, to calculate the statistics of diagnostic efficiency (negative likelihood ratio, LR−), using 2 × 2 tables, derived from PGS-FISH. The results obtained were compared with those obtained from embryo morphology. We calculated the probability of transferring at least one chromosomally normal embryo when it was selected using either morphological criteria or PGS-FISH, and considered what diagnostic performance should be expected of an embryo selection test for it to achieve greater clinical validity than that obtained from embryo morphology. After an embryo morphology selection that produced a negative result (normal morphology), the likelihood of embryo aneuploidies was found to fall from a pre-test value of 65% (prevalence of embryo chromosome alteration registered in all the study groups) to a post-test value of 55% (confidence interval: 50-61), while after PGS-FISH with a negative result (euploid), the post-test probability was 42% (confidence interval: 35-49) (p < 0.05). The probability of transferring at least one euploid embryo was the same whether 3 embryos were selected according to morphological criteria or 2, selected by PGS-FISH, were transferred. Any embryo selection test, if it is to provide greater clinical validity than embryo morphology, must present an LR− of 0.40 (confidence interval: 0.32-0.51) in single embryo transfer, and 0.06 (CI: 0.05-0.07) in double embryo transfer.
With currently available technology, and taking into account the number of embryos to be transferred, the clinical validity of PGS-FISH, although superior to that of morphological criteria, does not appear to be clinically relevant.
Match probabilities in a finite, subdivided population
Malaspinas, Anna-Sapfo; Slatkin, Montgomery; Song, Yun S.
2011-01-01
We generalize a recently introduced graphical framework to compute the probability that haplotypes or genotypes of two individuals drawn from a finite, subdivided population match. As in the previous work, we assume an infinite-alleles model. We focus on the case of a population divided into two subpopulations, but the underlying framework can be applied to a general model of population subdivision. We examine the effect of population subdivision on the match probabilities and the accuracy of the product rule, which approximates multi-locus match probabilities as a product of one-locus match probabilities. We quantify the deviation from the predictions of the product rule by R, the ratio of the multi-locus match probability to the product of the one-locus match probabilities. We carry out the computation for two loci and find that ignoring subdivision can lead to underestimation of the match probabilities if the population under consideration actually has subdivision structure and the individuals originate from the same subpopulation. On the other hand, under a given model of population subdivision, we find that the ratio R for two loci is only slightly greater than 1 for a large range of symmetric and asymmetric migration rates. Keeping in mind that the infinite-alleles model is not the appropriate mutation model for STR loci, we conclude that, for two loci and biologically reasonable parameter values, population subdivision may lead to results that disfavor innocent suspects because of an increase in identity-by-descent in finite populations. On the other hand, for the same range of parameters, population subdivision does not lead to a substantial increase in linkage disequilibrium between loci. These results are consistent with established practice. PMID:21266180
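The product-rule deviation R defined above can be computed directly once the match probabilities are in hand; a minimal sketch with hypothetical two-locus values (not the paper's results):

```python
def product_rule_ratio(p_multi, p_single):
    """R = multi-locus match probability divided by the product of
    one-locus match probabilities. R > 1 means the product rule
    underestimates the true multi-locus match probability."""
    prod = 1.0
    for p in p_single:
        prod *= p
    return p_multi / prod

# Hypothetical two-locus illustration (values invented for the example):
R = product_rule_ratio(p_multi=0.012, p_single=[0.10, 0.11])
```

An R only slightly above 1, as the paper reports for two loci, means the product rule remains a close approximation despite subdivision.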
REGULATION OF GEOGRAPHIC VARIABILITY IN HAPLOID:DIPLOID RATIOS OF BIPHASIC SEAWEED LIFE CYCLES.
da Silva Vieira, Vasco Manuel Nobre de Carvalho; Santos, Rui Orlando Pimenta
2012-08-01
The relative abundance of haploid and diploid individuals (H:D) in isomorphic marine algal biphasic cycles varies spatially, but only if the vital rates of the haploid and diploid phases vary differently with environmental conditions (i.e., conditional differentiation between phases). Vital rates of isomorphic phases in particular environments may be determined by subtle morphological or physiological differences. Herein, we test numerically how geographic variability in H:D is regulated by conditional differentiation between isomorphic life phases and by the type of life strategy of populations (i.e., life cycles dominated by reproduction, survival or growth). Simulation conditions were selected using available data on H:D spatial variability in seaweeds. Conditional differentiation between ploidy phases had a small effect on H:D variability for species with life strategies that invest either in fertility or in growth. Conversely, species with life strategies that invest mainly in survival exhibited high variability in H:D through conditional differentiation in stasis (the probability of staying in the same size class), breakage (the probability of changing to a smaller size class) or growth (the probability of changing to a bigger size class). These results were consistent with the observed geographic variability in H:D of natural marine algae populations. © 2012 Phycological Society of America.
Calculating Absolute Transition Probabilities for Deformed Nuclei in the Rare-Earth Region
NASA Astrophysics Data System (ADS)
Stratman, Anne; Casarella, Clark; Aprahamian, Ani
2017-09-01
Absolute transition probabilities are the cornerstone of understanding nuclear structure physics in comparison to nuclear models. We have developed a code to calculate absolute transition probabilities from measured lifetimes, using a Python script and a Mathematica notebook. Both of these methods take pertinent quantities such as the lifetime of a given state, the energy and intensity of the emitted gamma ray, and the multipolarities of the transitions to calculate the appropriate B(E1), B(E2), B(M1) or in general, any B(σλ) values. The program allows for the inclusion of mixing ratios of different multipolarities and the electron conversion of gamma-rays to correct for their intensities, and yields results in absolute units or results normalized to Weisskopf units. The code has been tested against available data in a wide range of nuclei from the rare earth region (28 in total), including 146-154Sm, 154-160Gd, 158-164Dy, 162-170Er, 168-176Yb, and 174-182Hf. It will be available from the Notre Dame Nuclear Science Laboratory webpage for use by the community. This work was supported by the University of Notre Dame College of Science, and by the National Science Foundation, under Contract PHY-1419765.
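The abstract notes that the code corrects gamma-ray intensities for internal conversion before forming B(σλ) values. A minimal sketch of that correction step alone, turning a measured mean lifetime into a partial gamma-emission rate (the multipole constants and Eγ^(2λ+1) factors that produce the actual B(σλ) are omitted; the function and its argument names are illustrative, not from the Notre Dame code):

```python
def partial_gamma_rate(tau_ps, branching, alpha):
    """Partial gamma-decay rate (1/s) for one transition:
    the total decay rate 1/tau is multiplied by the transition's
    branching fraction and divided by (1 + alpha), where alpha is
    the internal conversion coefficient, so that only the
    gamma-emission part of the decay remains."""
    total_rate = 1.0 / (tau_ps * 1e-12)  # mean lifetime in ps -> rate in 1/s
    return total_rate * branching / (1.0 + alpha)

# Illustrative numbers only: tau = 50 ps, 60% branch, ICC alpha = 0.1
rate = partial_gamma_rate(tau_ps=50.0, branching=0.6, alpha=0.1)
```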
Schmidtmann, Gunnar; Jennings, Ben J; Bell, Jason; Kingdom, Frederick A A
2015-01-01
Previous studies investigating signal integration in circular Glass patterns have concluded that the information in these patterns is linearly summed across the entire display for detection. Here we test whether an alternative form of summation, probability summation (PS), modeled under the assumptions of Signal Detection Theory (SDT), can be rejected as a model of Glass pattern detection. PS under SDT alone predicts that the exponent β of the Quick- (or Weibull-) fitted psychometric function should decrease with increasing signal area. We measured spatial integration in circular, radial, spiral, and parallel Glass patterns, as well as comparable patterns composed of Gabors instead of dot pairs. We measured the signal-to-noise ratio required for detection as a function of the size of the area containing signal, with the remaining area containing dot-pair or Gabor-orientation noise. Contrary to some previous studies, we found that the strength of summation never reached values close to linear summation for any stimuli. More importantly, the exponent β systematically decreased with signal area, as predicted by PS under SDT. We applied a model for PS under SDT and found that it gave a good account of the data. We conclude that probability summation is the most likely basis for the detection of circular, radial, spiral, and parallel orientation-defined textures.
Cost-effectiveness of external cephalic version for term breech presentation.
Tan, Jonathan M; Macario, Alex; Carvalho, Brendan; Druzin, Maurice L; El-Sayed, Yasser Y
2010-01-21
External cephalic version (ECV) is recommended by the American College of Obstetricians and Gynecologists to convert a breech fetus to vertex position and reduce the need for cesarean delivery. The goal of this study was to determine the incremental cost-effectiveness ratio, from society's perspective, of ECV compared to scheduled cesarean for term breech presentation. A computer-based decision model (TreeAge Pro 2008, TreeAge Software, Inc.) was developed for a hypothetical base-case parturient presenting with a term singleton breech fetus and no contraindications for vaginal delivery. The model incorporated actual hospital costs (e.g., $8,023 for cesarean and $5,581 for vaginal delivery), utilities to quantify health-related quality of life, and probabilities, based on analysis of the published literature, of a successful ECV trial, spontaneous reversion, mode of delivery, and need for unanticipated emergency cesarean delivery. The primary endpoint was the incremental cost-effectiveness ratio in dollars per quality-adjusted life-year gained. A threshold of $50,000 per quality-adjusted life-year (QALY) was used to determine cost-effectiveness. The incremental cost-effectiveness of ECV, assuming a baseline 58% success rate, equaled $7,900/QALY. If the estimated probability of successful ECV is less than 32%, then ECV costs more to society and yields poorer QALYs for the patient. If the probability of successful ECV was between 32% and 63%, ECV cost more than cesarean delivery but with greater associated QALYs, such that the cost-effectiveness ratio was less than $50,000/QALY. If the probability of successful ECV was greater than 63%, the computer modeling indicated that a trial of ECV is less costly and yields better QALYs than a scheduled cesarean.
The cost-effectiveness of a trial of ECV is most sensitive to its probability of success, and not to the probabilities of a cesarean after ECV, spontaneous reversion to breech, successful second ECV trial, or adverse outcome from emergency cesarean. From society's perspective, ECV trial is cost-effective when compared to a scheduled cesarean for breech presentation provided the probability of successful ECV is > 32%. Improved algorithms are needed to more precisely estimate the likelihood that a patient will have a successful ECV.
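The decision logic in the two abstracts above rests on the standard ICER definition, incremental cost divided by incremental effectiveness, compared to a willingness-to-pay threshold; a minimal sketch with hypothetical inputs (the numbers below are illustrative, not the study's model values):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY
    of the new strategy relative to the comparator."""
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_cost <= 0 and d_qaly >= 0:
        return float("-inf")  # dominant: cheaper and at least as effective
    return d_cost / d_qaly

# Hypothetical comparison of two strategies (illustrative only):
ratio = icer(cost_new=8023, qaly_new=29.10, cost_old=7900, qaly_old=29.08)
acceptable = ratio <= 50_000  # the $50,000/QALY threshold used above
```

A strategy whose ratio falls below the threshold is deemed cost-effective; a negative-infinity result flags economic dominance.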
Riley, Richard D; Ahmed, Ikhlaaq; Debray, Thomas P A; Willis, Brian H; Noordzij, J Pieter; Higgins, Julian P T; Deeks, Jonathan J
2015-06-15
Following a meta-analysis of test accuracy studies, the translation of summary results into clinical practice is potentially problematic. The sensitivity, specificity and positive (PPV) and negative (NPV) predictive values of a test may differ substantially from the average meta-analysis findings, because of heterogeneity. Clinicians thus need more guidance: given the meta-analysis, is a test likely to be useful in new populations, and if so, how should test results inform the probability of existing disease (for a diagnostic test) or future adverse outcome (for a prognostic test)? We propose ways to address this. Firstly, following a meta-analysis, we suggest deriving prediction intervals and probability statements about the potential accuracy of a test in a new population. Secondly, we suggest strategies on how clinicians should derive post-test probabilities (PPV and NPV) in a new population based on existing meta-analysis results and propose a cross-validation approach for examining and comparing their calibration performance. Application is made to two clinical examples. In the first example, the joint probability that both sensitivity and specificity will be >80% in a new population is just 0.19, because of a low sensitivity. However, the summary PPV of 0.97 is high and calibrates well in new populations, with a probability of 0.78 that the true PPV will be at least 0.95. In the second example, post-test probabilities calibrate better when tailored to the prevalence in the new population, with cross-validation revealing a probability of 0.97 that the observed NPV will be within 10% of the predicted NPV. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
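The prediction intervals proposed above follow the usual random-effects form, widening the confidence interval by the between-study variance; a hedged sketch on the logit scale (a normal quantile is used here in place of the t-distribution the authors recommend, and all numbers are illustrative, not from the paper):

```python
import math
from statistics import NormalDist  # normal approximation to the t-distribution

def prediction_interval(mu, se_mu, tau2, level=0.95):
    """Approximate prediction interval for the true logit-scale accuracy
    in a new population after a random-effects meta-analysis:
    mu +/- z * sqrt(tau^2 + se_mu^2), where tau^2 is the
    between-study (heterogeneity) variance."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    half = z * math.sqrt(tau2 + se_mu ** 2)
    return mu - half, mu + half

def inv_logit(x):
    """Map a logit back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical logit-sensitivity summary: mu = 1.5, SE = 0.2, tau^2 = 0.3
lo, hi = prediction_interval(1.5, 0.2, 0.3)
sens_lo, sens_hi = inv_logit(lo), inv_logit(hi)
```

With nontrivial heterogeneity the prediction interval is much wider than the confidence interval, which is exactly why a summary sensitivity can mislead in a new population.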
Nöthling, Johan O; Du Toit, Johannes S; Myburgh, Jan G
2014-09-19
This study was done to determine whether blesbok (Damaliscus pygargus phillipsi) from the Krugersdorp Game Reserve (KGR) in Gauteng Province, South Africa have higher concentrations of (238)U and higher (206)Pb/(204)Pb and (207)Pb/(204)Pb ratios in their bone ash than blesbok from a nearby control reserve that is not exposed to mine water and has no outcrops of uraniferous rocks. Eight blesbok females from the KGR and seven from the control site, all killed with a brain shot, were used. A Thermo X-series 2 quadrupole ICPMS was used to measure the concentrations of (238)U and lead and a Nu Instruments NuPlasma HR MC-ICP-MS to measure the lead isotope ratios in the tibial ash from each animal. KGR blesbok had higher mean concentrations of (238)U (P = 0.02) and ratios of (206)Pb/(204)Pb and (207)Pb/(204)Pb (P < 0.00001) than the control blesbok. The probability of rejecting the false null hypothesis of no difference in the (206)Pb/(204)Pb or (207)Pb/(204)Pb ratios between KGR and control reserve animals (the power of the test) was 0.999. The blesbok from the KGR accumulated contaminants from an uraniferous environment. The (206)Pb/(204)Pb and (207)Pb/(204)Pb ratios in tibial ash proved effective in confirming accumulation of contaminants from uraniferous rocks.
Swain, Michael S; Henschke, Nicholas; Kamper, Steven J; Downie, Aron S; Koes, Bart W; Maher, Chris G
2014-01-01
Numerous clinical tests are used in the diagnosis of anterior cruciate ligament (ACL) injury but their accuracy is unclear. The purpose of this study is to evaluate the diagnostic accuracy of clinical tests for the diagnosis of ACL injury. Systematic review. The review protocol was registered through PROSPERO (CRD42012002069). Electronic databases (PubMed, MEDLINE, EMBASE, CINAHL) were searched up to 19 June 2013 to identify diagnostic studies comparing the accuracy of clinical tests for ACL injury to an acceptable reference standard (arthroscopy, arthrotomy, or MRI). Risk of bias was appraised using the QUADAS-2 checklist. Index test accuracy was evaluated using a descriptive analysis of paired likelihood ratios and displayed as forest plots. A total of 285 full-text articles were assessed for eligibility, from which 14 studies were included in this review. Included studies were deemed to be clinically and statistically heterogeneous, so a meta-analysis was not performed. Nine clinical tests from the history (popping sound at time of injury, giving way, effusion, pain, ability to continue activity) and four from physical examination (anterior draw test, Lachman's test, prone Lachman's test and pivot shift test) were investigated for diagnostic accuracy. Inspection of positive and negative likelihood ratios indicated that none of the individual tests provides useful diagnostic information in a clinical setting. Most studies were at risk of bias and reported imprecise estimates of diagnostic accuracy. Despite being widely used and accepted in clinical practice, the results of individual history items or physical tests do not meaningfully change the probability of ACL injury. In contrast, combinations of tests have higher diagnostic accuracy; however, the most accurate combination of clinical tests remains an area for future research. Clinicians should be aware of the limitations associated with the use of clinical tests for diagnosis of ACL injury.
Limited family structure and BRCA gene mutation status in single cases of breast cancer.
Weitzel, Jeffrey N; Lagos, Veronica I; Cullinane, Carey A; Gambol, Patricia J; Culver, Julie O; Blazer, Kathleen R; Palomares, Melanie R; Lowstuter, Katrina J; MacDonald, Deborah J
2007-06-20
An autosomal dominant pattern of hereditary breast cancer may be masked by small family size or transmission through males given sex-limited expression. To determine if BRCA gene mutations are more prevalent among single cases of early onset breast cancer in families with limited vs adequate family structure than would be predicted by currently available probability models. A total of 1543 women seen at US high-risk clinics for genetic cancer risk assessment and BRCA gene testing were enrolled in a prospective registry study between April 1997 and February 2007. Three hundred six of these women had breast cancer before age 50 years and no first- or second-degree relatives with breast or ovarian cancers. The main outcome measure was whether family structure, assessed from multigenerational pedigrees, predicts BRCA gene mutation status. Limited family structure was defined as fewer than 2 first- or second-degree female relatives surviving beyond age 45 years in either lineage. Family structure effect and mutation probability by the Couch, Myriad, and BRCAPRO models were assessed with stepwise multiple logistic regression. Model sensitivity and specificity were determined and receiver operating characteristic curves were generated. Family structure was limited in 153 cases (50%). BRCA gene mutations were detected in 13.7% of participants with limited vs 5.2% with adequate family structure. Family structure was a significant predictor of mutation status (odds ratio, 2.8; 95% confidence interval, 1.19-6.73; P = .02). Although none of the models performed well, receiver operating characteristic analysis indicated that modification of BRCAPRO output by a corrective probability index accounting for family structure was the most accurate BRCA gene mutation status predictor (area under the curve, 0.72; 95% confidence interval, 0.63-0.81; P<.001) for single cases of breast cancer. Family structure can affect the accuracy of mutation probability models. 
Genetic testing guidelines may need to be more inclusive for single cases of breast cancer when the family structure is limited and probability models need to be recreated using limited family history as an actual variable.
Network Algorithms for Detection of Radiation Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Brooks, Richard R; Wu, Qishi
In support of national defense, the Domestic Nuclear Detection Office's (DNDO) Intelligent Radiation Sensor Systems (IRSS) program supported the development of networks of radiation counters for detecting, localizing and identifying low-level, hazardous radiation sources. Industry teams developed the first generation of such networks with tens of counters, and demonstrated several of their capabilities in indoor and outdoor characterization tests. Subsequently, these test measurements have been used in algorithm replays using various sub-networks of counters. Test measurements combined with algorithm outputs are used to extract Key Measurements and Benchmark (KMB) datasets. We present two selective analyses of these datasets: (a) a notional border monitoring scenario that highlights the benefits of a network of counters compared to individual detectors, and (b) new insights into the Sequential Probability Ratio Test (SPRT) detection method, which lead to its adaptations for improved detection. Using KMB datasets from an outdoor test, we construct a notional border monitoring scenario, wherein twelve 2×2 NaI detectors are deployed on the periphery of a 21×21 m square region. A Cs-137 (175 uCi) source is moved across this region, starting several meters outside and finally moving away. The measurements from individual counters and the network were processed using replays of a particle filter algorithm developed under the IRSS program. The algorithm outputs from KMB datasets clearly illustrate the benefits of combining measurements from all networked counters: the source was detected before it entered the region, during its trajectory inside, and until it moved several meters away. When individual counters are used for detection, the source was detected for much shorter durations, and sometimes was missed in the interior region.
The application of SPRT for detecting radiation sources requires choosing the detection threshold, which in turn requires a source strength estimate, typically specified as a multiplier of the background radiation level. A judicious selection of this source multiplier is essential to achieve optimal detection probability at a specified false alarm rate. Typically, this threshold is chosen from the Receiver Operating Characteristic (ROC) by varying the source multiplier estimate. The ROC is expected to have a monotonically increasing profile of detection probability against false alarm rate. We derived ROCs for multiple indoor tests using KMB datasets, which revealed an unexpected loop shape: as the multiplier increases, detection probability and false alarm rate both increase until a limit, and then both contract. Consequently, two detection probabilities correspond to the same false alarm rate, and the higher one is achieved at a lower multiplier, which is the desired operating point. Using Chebyshev's inequality we analytically confirm this shape. We then present two improved network-SPRT methods: (a) using the threshold offset as a weighting factor for the binary decisions from individual detectors in a weighted majority voting fusion rule, and (b) applying a composite SPRT derived using measurements from all counters.
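The SPRT described above, like the Wald test in the conjunction-assessment papers, accumulates a log-likelihood ratio between two thresholds set by the targeted false-alarm rate α and missed-detection rate β. A minimal generic sketch for Poisson count data, where the source hypothesis is specified as a multiplier of background (rates, tolerances, and interval structure below are illustrative assumptions, not the IRSS configuration):

```python
import math

def sprt_poisson(counts, bkg_rate, multiplier, alpha=1e-3, beta=1e-2):
    """Wald SPRT on a stream of Poisson counts.
    H0: background rate b.  H1: rate b * (1 + multiplier), i.e. source
    strength given as a multiple of the background level.
    Returns 'source', 'background', or 'continue' (needs more data)."""
    b = bkg_rate
    s = b * (1.0 + multiplier)
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n in counts:
        # Poisson log-likelihood ratio contribution of one interval
        llr += n * math.log(s / b) - (s - b)
        if llr >= upper:
            return "source"
        if llr <= lower:
            return "background"
    return "continue"
```

Sequential accumulation is what lets a networked or repeated-measurement test decide early when the evidence is strong, rather than waiting for a fixed sample size.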
Contingency bias in probability judgement may arise from ambiguity regarding additional causes.
Mitchell, Chris J; Griffiths, Oren; More, Pranjal; Lovibond, Peter F
2013-09-01
In laboratory contingency learning tasks, people usually give accurate estimates of the degree of contingency between a cue and an outcome. However, if they are asked to estimate the probability of the outcome in the presence of the cue, they tend to be biased by the probability of the outcome in the absence of the cue. This bias is often attributed to an automatic contingency detection mechanism, which is said to act via an excitatory associative link to activate the outcome representation at the time of testing. We conducted 3 experiments to test alternative accounts of contingency bias. Participants were exposed to the same outcome probability in the presence of the cue, but different outcome probabilities in the absence of the cue. Phrasing the test question in terms of frequency rather than probability and clarifying the test instructions reduced but did not eliminate contingency bias. However, removal of ambiguity regarding the presence of additional causes during the test phase did eliminate contingency bias. We conclude that contingency bias may be due to ambiguity in the test question, and therefore it does not require postulation of a separate associative link-based mechanism.
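The cue-outcome contingency manipulated above is conventionally measured by ΔP, the difference between the outcome probability with and without the cue; a minimal sketch:

```python
def delta_p(outcome_given_cue, outcome_given_no_cue):
    """Standard contingency measure: dP = P(O|C) - P(O|~C).
    The bias described above is that judgments of P(O|C) shift with
    P(O|~C) even when P(O|C) itself is held constant."""
    return outcome_given_cue - outcome_given_no_cue

# Same P(O|C) = 0.75 in both conditions; only P(O|~C) differs:
strong = delta_p(0.75, 0.25)  # contingent condition
zero = delta_p(0.75, 0.75)    # non-contingent condition
```

An unbiased judge asked for P(O|C) should report 0.75 in both conditions; the contingency-biased pattern is a lower estimate in the zero-ΔP condition.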
NASA Astrophysics Data System (ADS)
Stuchbery, A. E.; Ryan, C. G.; Bolotin, H. H.; Morrison, I.; Sie, S. H.
1981-07-01
The enhanced transient hyperfine field manifest at the nuclei of swiftly recoiling ions traversing magnetized ferromagnetic materials was utilized to measure the gyromagnetic ratios of the 2 +1, 2 +2 and 4 +1 states in 198Pt by the thin-foil technique. The states of interest were populated by Coulomb excitation using a beam of 220 MeV 58Ni ions. The results obtained were: g(2 +1) = 0.324 ± 0.026; g(2 +2) = 0.34 ± 0.06; g(4 +1) = 0.34 ± 0.06. In addition, these measurements served to discriminate between the otherwise essentially equally probable values previously reported for the E2/M1 ratio of the 2 +2 → 2 +1 transition in 198Pt. We also performed interacting boson approximation (IBA) model-based calculations in the O(6) limit symmetry, with and without inclusion of a small degree of symmetry breaking, and employed the M1 operator in both first- and second-order to obtain M1 selection rules and to calculate gyromagnetic ratios of levels. When O(6) symmetry is broken, there is a predicted departure from constancy of the g-factors which provides a good test of the nuclear wave function. Evaluative comparisons are made between these experimental and predicted g-factors.
Avril, E; Lacroix, S; Vrignaud, B; Moreau-Klein, A; Coste-Burel, M; Launay, E; Gras-Le Guen, C
2016-07-01
We wanted to determine the diagnostic performance of a rapid influenza diagnostic test (RIDT) used bedside in a pediatric emergency department (PED). This was a prospective study over four consecutive winters (2009-2013), comparing the results of a RIDT (QuickVue®) with RT-PCR in children admitted to a PED. Among the 764 children included, we did not observe any significant differences in the diagnostic performance of RIDT except during the H1N1 pandemic. The overall sensitivity of the test was 0.82; the specificity 0.98; the positive and negative likelihood ratios 37.8 and 0.19. The positive and negative post-test probabilities of infection were 98% and 17%. The diagnostic performance was increased for influenza B cases (P = 0.03). RIDTs are suitable for use every winter with few differences in its diagnostic value, except during specific pandemic periods. This test could limit unnecessary complementary exams and guide the prescription of antivirals during influenza epidemic periods in PEDs. Copyright © 2016. Published by Elsevier Inc.
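The post-test probabilities reported above come from the usual odds-times-likelihood-ratio conversion; a minimal sketch using the abstract's likelihood ratios with an assumed pre-test probability (the 50% figure is illustrative, not stated in the paper):

```python
def post_test_probability(pretest_p, likelihood_ratio):
    """Convert a pre-test probability to a post-test probability:
    post-test odds = pre-test odds * LR, then odds back to probability."""
    pre_odds = pretest_p / (1 - pretest_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Abstract's likelihood ratios, assumed 50% pre-test probability:
p_pos = post_test_probability(0.50, 37.8)  # after a positive RIDT
p_neg = post_test_probability(0.50, 0.19)  # after a negative RIDT
```

A large LR+ makes a positive result nearly confirmatory, while an LR− of 0.19 leaves a meaningful residual probability, which is why a negative RIDT alone cannot rule influenza out.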
Is dietary diversity a proxy measurement of nutrient adequacy in Iranian elderly women?
Tavakoli, Sogand; Dorosty-Motlagh, Ahmad Reza; Hoshiar-Rad, Anahita; Eshraghian, Mohamad Reza; Sotoudeh, Gity; Azadbakht, Leila; Karimi, Mehrdad; Jalali-Farahani, Sara
2016-10-01
To investigate whether consumption of more diverse diets would increase the probability of nutrient adequacy among elderly women in Tehran, Iran. This cross-sectional study was conducted on 292 women aged ≥60 years who were randomly selected from 10 public health centers among the 31 centers in the southern area of Tehran; because of practical limitations, these 10 centers were themselves chosen at random. The sample size provided 80% statistical power to test the relationship between the Nutrient Adequacy Ratio (NAR) and Mean Adequacy Ratio (MAR) as dependent variables and the total Dietary Diversity Score (DDS) as an independent variable. Dietary intakes were assessed by two 24-h recall questionnaires. The mean probability of adequacy across 12 nutrients and energy was calculated using the Dietary Reference Intakes (DRI). The Dietary Diversity Score was defined according to the diet quality index-revised (the method of Haines et al.). To investigate the relationship between MAR and DDS, some demographic and socioeconomic variables were examined. The mean ± SD of total dietary diversity was 4.22 ± 1.28 (range 1.07-6.93). The fruit group had the highest diversity score (1.27 ± 0.65, range 0-2.0) and the vegetable group the lowest (0.56 ± 0.36, range 0-1.71). We observed that total DDS had a significant positive correlation with MAR (r = 0.65, P < 0.001). Total DDS was significantly associated with the NAR of all 12 studied nutrients (P < 0.01); the probability of adequacy of vitamin B2 showed the strongest (r = 0.63, P < 0.01) and that of vitamin B12 the weakest (r = 0.28, P < 0.01) relationship with total DDS. When maximizing sensitivity and specificity, the best cut-off point of DDS for achieving MAR ≥ 1 was 4.5. The results of our study showed that DDS is an appropriate indicator of the probability of nutrient adequacy in Tehranian elderly women. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng
2014-05-01
It is very important to select an appropriate probability distribution in statistical hydrology. A goodness-of-fit test is a statistical method that selects an appropriate probability model for given data. The probability plot correlation coefficient (PPCC) test, one of the goodness-of-fit tests, was originally developed for the normal distribution. Since then, this test has been widely applied to other probability models. The PPCC test is known as one of the best goodness-of-fit tests because it shows higher rejection power than the others. In this study, we focus on PPCC tests for the GEV distribution, which is widely used around the world. For the GEV model, several plotting position formulas have been suggested. However, the PPCC statistics are derived only for the plotting position formulas (Goel and De; In-na and Nguyen; Kim et al.) in which the skewness coefficient (or shape parameter) is included. Regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte Carlo simulation. Keywords: Goodness-of-fit test, Probability plot correlation coefficient test, Plotting position, Monte Carlo simulation. ACKNOWLEDGEMENTS This research was supported by a grant, 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57], from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
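The PPCC statistic described here is simply the correlation between the ordered sample and the distribution quantiles evaluated at plotting positions. A simplified sketch for the normal case (the GEV case in the study requires shape-dependent plotting positions); the Weibull plotting position i/(n+1) and the example data are assumptions for illustration:

```python
# Hedged sketch of the PPCC statistic for a normal distribution, using the
# Weibull plotting position p_i = i/(n+1). Values near 1 indicate a good fit.
from statistics import NormalDist, mean

def ppcc_normal(sample):
    """Correlation between ordered data and standard normal quantiles."""
    x = sorted(sample)
    n = len(x)
    # Quantiles of the candidate distribution at the plotting positions:
    q = [NormalDist().inv_cdf(i / (n + 1)) for i in range(1, n + 1)]
    mx, mq = mean(x), mean(q)
    num = sum((a - mx) * (b - mq) for a, b in zip(x, q))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - mq) ** 2 for b in q)) ** 0.5
    return num / den

# Roughly symmetric, bell-shaped data should give a PPCC near 1:
data = [2.1, 3.4, 1.8, 2.9, 3.1, 2.5, 2.2, 3.0, 2.7, 2.4]
r = ppcc_normal(data)
```

The test itself compares r against a critical value that depends on sample size (and, for GEV, the shape parameter), which is what the study's regression equations provide.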
Generating an Empirical Probability Distribution for the Andrews-Pregibon Statistic.
ERIC Educational Resources Information Center
Jarrell, Michele G.
A probability distribution was developed for the Andrews-Pregibon (AP) statistic. The statistic, developed by D. F. Andrews and D. Pregibon (1978), identifies multivariate outliers. It is a ratio of the determinant of the data matrix with an observation deleted to the determinant of the entire data matrix. Although the AP statistic has been used…
Variation in probability of first reproduction of Weddell seals.
Hadley, Gillian L; Rotella, Jay J; Garrott, Robert A; Nichols, James D
2006-09-01
1. For many species, when to begin reproduction is an important life-history decision that varies by individual and can have substantial implications for lifetime reproductive success and fitness. 2. We estimated age-specific probabilities of first-time breeding and modelled variation in these rates to determine age at first reproduction and understand why it varies in a population of Weddell seals in Erebus Bay, Antarctica. We used multistate mark-recapture modelling methods and encounter histories of 4965 known-age female seals to test predictions about age-related variation in probability of first reproduction and the effects of annual variation, cohort and population density. 3. Mean age at first reproduction in this southerly located study population (7.62 years of age, SD=1.71) was greater than age at first reproduction for a Weddell seal population at a more northerly and typical latitude for breeding Weddell seals (mean=4-5 years of age). This difference suggests that age at first reproduction may be influenced by whether a population inhabits the core or periphery of its range. 4. Age at first reproduction varied from 4 to 14 years, but there was no age by which all seals recruited to the breeding population, suggesting that individual heterogeneity exists among females in this population. 5. In the best model, the probability of breeding for the first time varied by age and year, and the amount of annual variation varied with age (average variance ratio for age-specific rates=4.3%). 6. Our results affirmed the predictions of life-history theory that age at first reproduction in long-lived mammals will be sensitive to environmental variation. In terms of life-history evolution, this variability suggests that Weddell seals display flexibility in age at first reproduction in order to maximize reproductive output under varying environmental conditions. 
Future analyses will attempt to test predictions regarding relationships between environmental covariates and annual variation in age at first reproduction and evaluate the relationship between age at first reproduction and lifetime reproductive success.
Diagnosis of adrenal insufficiency.
Dorin, Richard I; Qualls, Clifford R; Crapo, Lawrence M
2003-08-05
The cosyntropin stimulation test is the initial endocrine evaluation of suspected primary or secondary adrenal insufficiency. To critically review the utility of the cosyntropin stimulation test for evaluating adrenal insufficiency. The MEDLINE database was searched from 1966 to 2002 for all English-language papers related to the diagnosis of adrenal insufficiency. Studies with fewer than 5 persons with primary or secondary adrenal insufficiency or with fewer than 10 persons as normal controls were excluded. For secondary adrenal insufficiency, only studies that stratified participants by integrated tests of adrenal function were included. Summary receiver-operating characteristic (ROC) curves were generated from all studies that provided sensitivity and specificity data for 250-microg and 1-microg cosyntropin tests; these curves were then compared by using area under the curve (AUC) methods. All estimated values are given with 95% CIs. At a specificity of 95%, sensitivities were 97%, 57%, and 61% for summary ROC curves in tests for primary adrenal insufficiency (250-microg cosyntropin test), secondary adrenal insufficiency (250-microg cosyntropin test), and secondary adrenal insufficiency (1-microg cosyntropin test), respectively. The area under the curve for primary adrenal insufficiency was significantly greater than the AUC for secondary adrenal insufficiency for the high-dose cosyntropin test (P < 0.001), but AUCs for the 250-microg and 1-microg cosyntropin tests did not differ significantly (P > 0.5) for secondary adrenal insufficiency. At a specificity of 95%, summary ROC analysis for the 250-microg cosyntropin test yielded a positive likelihood ratio of 11.5 (95% CI, 8.7 to 14.2) and a negative likelihood ratio of 0.45 (CI, 0.30 to 0.60) for the diagnosis of secondary adrenal insufficiency. Cortisol response to cosyntropin varies considerably among healthy persons. 
The cosyntropin test performs well in patients with primary adrenal insufficiency, but the lower sensitivity in patients with secondary adrenal insufficiency necessitates use of tests involving stimulation of the hypothalamus if the pretest probability is sufficiently high. The operating characteristics of the 250-microg and 1-microg cosyntropin tests are similar.
Reliability and validity of the new Tanaka B Intelligence Scale scores: a group intelligence test.
Uno, Yota; Mizukami, Hitomi; Ando, Masahiko; Yukihiro, Ryoji; Iwasaki, Yoko; Ozaki, Norio
2014-01-01
The present study evaluated the reliability and concurrent validity of the new Tanaka B Intelligence Scale, which is an intelligence test that can be administered to groups within a short period of time. The new Tanaka B Intelligence Scale and Wechsler Intelligence Scale for Children-Third Edition were administered to 81 subjects (mean age ± SD 15.2 ± 0.7 years) residing in a juvenile detention home; reliability was assessed using Cronbach's alpha coefficient, and concurrent validity was assessed using the one-way analysis of variance intraclass correlation coefficient. Moreover, receiver operating characteristic analysis for screening for individuals who have a deficit in intellectual function (an FIQ<70) was performed. In addition, stratum-specific likelihood ratios for detection of intellectual disability were calculated. The Cronbach's alpha for the new Tanaka B Intelligence Scale IQ (BIQ) was 0.86, and the intraclass correlation coefficient with FIQ was 0.83. Receiver operating characteristic analysis demonstrated an area under the curve of 0.89 (95% CI: 0.85-0.96). In addition, the stratum-specific likelihood ratio for the BIQ≤65 stratum was 13.8 (95% CI: 3.9-48.9), and the stratum-specific likelihood ratio for the BIQ≥76 stratum was 0.1 (95% CI: 0.03-0.4). Thus, intellectual disability could be ruled out or determined. The present results demonstrated that the new Tanaka B Intelligence Scale score had high reliability and concurrent validity with the Wechsler Intelligence Scale for Children-Third Edition score. Moreover, the post-test probability for the BIQ could be calculated when screening for individuals who have a deficit in intellectual function. The new Tanaka B Intelligence Test is convenient and can be administered within a variety of settings. This enables evaluation of intellectual development even in settings where performing intelligence tests has previously been difficult.
Boyle, Stephen H; Samad, Zainab; Becker, Richard C; Williams, Redford; Kuhn, Cynthia; Ortel, Thomas L; Kuchibhatla, Maragatha; Prybol, Kevin; Rogers, Joseph; O'Connor, Christopher; Velazquez, Eric J; Jiang, Wei
2013-01-01
The aim of this study was to examine the associations between depressive symptoms and mental stress-induced myocardial ischemia (MSIMI) in patients with coronary heart disease (CHD). Adult patients with documented CHD were recruited for baseline mental stress and exercise stress screening testing as part of the enrollment process of the Responses of Myocardial Ischemia to Escitalopram Treatment trial. Patients were administered the Beck Depression Inventory II and the Center for Epidemiologic Studies Depression Scale. After a 24-48-hour β-blocker withdrawal, participants completed three mental stress tests followed by a treadmill exercise test. Ischemia was defined as a) any development or worsening of any wall motion abnormality and b) reduction of left ventricular ejection fraction of at least 8% by transthoracic echocardiography and/or ischemic ST-segment change by electrocardiography during stress testing. MSIMI was considered present when ischemia occurred in at least one mental stress test. Data were analyzed using logistic regression adjusting for age, sex, and resting left ventricular ejection fraction. One hundred twenty-five (44.2%) of 283 patients were found to have MSIMI, and 93 (32.9%) had exercise stress-induced myocardial ischemia (ESIMI). Unadjusted analysis showed that Beck Depression Inventory II scores were positively associated with the probability of MSIMI (odds ratio = 1.30; 95% confidence interval = 1.06-1.60, p = .013) and with the number of MSIMI-positive tasks (all p < .005). These associations remained significant after adjustment for covariates (p values < .05). In patients with CHD, depressive symptoms were associated with a higher probability of MSIMI. These observations may enhance our understanding of the mechanisms contributing to the association of depressive symptoms with future cardiovascular events. Trial Registration Clinicaltrials.gov identifier: NCT00574847.
Longin, C Friedrich H; Utz, H Friedrich; Reif, Jochen C; Schipprack, Wolfgang; Melchinger, Albrecht E
2006-03-01
Optimum allocation of resources is of fundamental importance for the efficiency of breeding programs. The objectives of our study were to (1) determine the optimum allocation of the number of lines and test locations in hybrid maize breeding with doubled haploids (DHs) regarding two optimization criteria, the selection gain ΔG(k) and the probability P(k) of identifying superior genotypes, (2) compare both optimization criteria, including their standard deviations (SDs), and (3) investigate the influence of the production costs of DHs on the optimum allocation. For different budgets, numbers of finally selected lines, ratios of variance components, and production costs of DHs, the optimum allocation of test resources under one- and two-stage selection for testcross performance with a given tester was determined by using Monte Carlo simulations. In one-stage selection, lines are tested in field trials in a single year. In two-stage selection, optimum allocation of resources involves evaluation of (1) a large number of lines at a small number of test locations in the first year and (2) a small number of the selected superior lines at a large number of test locations in the second year, thereby maximizing both optimization criteria. Furthermore, to have a realistic chance of identifying a superior genotype, the probability P(k) of identifying superior genotypes should be greater than 75%. For budgets between 200 and 5,000 field plot equivalents, P(k) > 75% was reached only for genotypes belonging to the best 5% of the population. As the optimum allocation for P(k)(5%) was similar to that for ΔG(k), the choice of the optimization criterion was not crucial. The production costs of DHs had only a minor effect on the optimum number of locations and on the values of the optimization criteria.
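The Monte Carlo approach described above can be sketched for the one-stage case: fix a plot budget, trade the number of lines against the number of locations, and estimate the probability of selecting at least one genotype from the true top 5%. All variance components and budget figures below are invented for illustration, not the study's values:

```python
# Hedged Monte Carlo sketch of one-stage allocation: L lines tested at T
# locations with L*T <= budget; select the best k on phenotypic means and
# estimate P(at least one selected line is in the true top 5%).
import random

def p_superior(budget, n_loc, k=5, var_g=1.0, var_e=4.0, top=0.05,
               reps=400, seed=1):
    rng = random.Random(seed)
    n_lines = budget // n_loc          # more locations -> fewer lines
    cut = int(n_lines * top)           # size of the true top 5% set
    hits = 0
    for _ in range(reps):
        g = [rng.gauss(0.0, var_g ** 0.5) for _ in range(n_lines)]
        # phenotypic mean over n_loc locations: error variance shrinks by 1/T
        p = [gi + rng.gauss(0.0, (var_e / n_loc) ** 0.5) for gi in g]
        top_set = set(sorted(range(n_lines), key=lambda i: g[i],
                             reverse=True)[:cut])
        selected = sorted(range(n_lines), key=lambda i: p[i],
                          reverse=True)[:k]
        if any(i in top_set for i in selected):
            hits += 1
    return hits / reps

# The optimum balances many lines (larger candidate pool) against many
# locations (more precise phenotypic means):
p2 = p_superior(1000, 2)
p10 = p_superior(1000, 10)
```

Sweeping n_loc over a grid and keeping the maximizer is the simulation-based optimization the study performs (its two-stage version additionally splits the budget across years).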
Critical Values for Lawshe's Content Validity Ratio: Revisiting the Original Methods of Calculation
ERIC Educational Resources Information Center
Ayre, Colin; Scally, Andrew John
2014-01-01
The content validity ratio originally proposed by Lawshe is widely used to quantify content validity and yet methods used to calculate the original critical values were never reported. Methods for original calculation of critical values are suggested along with tables of exact binomial probabilities.
Ramírez-Benavides, William; Monge-Nájera, Julián; Chavarría, Juan B
2009-09-01
The fig pollinating wasps (Hymenoptera: Agaonidae) have obligate arrhenotoky and a breeding structure that fits local mate competition (LMC). It has been traditionally assumed that LMC organisms adjust the sex ratio by laying a greater proportion of male eggs when there is superparasitism (several foundresses in a host). We tested the assumption with two wasp species, Pegoscapus silvestrii, pollinator of Ficus pertusa, and Pegoscapus tonduzi, pollinator of Ficus eximia (= F. citrifolia), in the Central Valley of Costa Rica. Total numbers of wasps and seeds were recorded in individual isolated, naturally colonized syconia. There was a constant additive effect between the number of foundresses and the number of males produced in the brood of a syconium, while the number of females decreased. Both wasp species seem to have precise sex ratios and probably lay the male eggs first in the sequence, independently of superparasitism and clutch size; consequently, they have non-random sex allocation. Each syconium of Ficus pertusa and of F. eximia colonized by one foundress had similar mean numbers of females, males, and seeds. The two species of wasps studied do not seem to adjust the sex ratio when there is superparasitism. Pollinating fig wasp behavior is better explained by those models that do not assume that females perform mathematical calculations according to other females' sex ratios, size, number of foundresses, genetic constitution, clutch size or environmental conditions inside the syconium. Our results are in agreement with the constant male number hypothesis, not with sex ratio games.
NASA Technical Reports Server (NTRS)
Phoenix, S. Leigh; Kezirian, Michael T.; Murthy, Pappu L. N.
2009-01-01
Composite Overwrapped Pressure Vessels (COPVs) that have survived a long service time under pressure generally must be recertified before service is extended. Sometimes lifetime testing is performed on an actual COPV in service in an effort to validate the reliability model that is the basis for certifying the continued flight worthiness of its sisters. Currently, testing of such a Kevlar 49®/epoxy COPV is nearing completion. The present paper focuses on a Bayesian statistical approach to analyze the possible failure time results of this test and to assess the implications in choosing between possible model parameter values that in the past have had significant uncertainty. The key uncertain parameters in this case are the actual fiber stress ratio at operating pressure and the Weibull shape parameter for lifetime; the former has been uncertain due to ambiguities in interpreting the original and a duplicate burst test. The latter has been uncertain due to major differences between the COPVs in the database and the actual COPVs in service. Any information obtained that clarifies and eliminates uncertainty in these parameters will have a major effect on the predicted reliability of the service COPVs going forward. The key result is that the longer the vessel survives, the more likely the more optimistic stress ratio is correct. At the time of writing, the resulting effect on predicted future reliability is dramatic, increasing it by about one 'nine', that is, reducing the probability of failure by an order of magnitude. However, testing one vessel does not change the uncertainty in the Weibull shape parameter for lifetime, since testing several would be necessary.
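The qualitative result, that each additional survived interval shifts belief toward the optimistic stress ratio, can be sketched as a two-hypothesis Bayesian update with Weibull lifetime models: survival to time t is the likelihood of each hypothesis. Every parameter below is an invented illustration, not a COPV value from the paper:

```python
# Hedged sketch: two candidate stress ratios imply two Weibull lifetime
# scales; observing survival (no failure) to time t multiplies the posterior
# odds by the ratio of the survival functions. All numbers are illustrative.
import math

def weibull_survival(t, scale, shape):
    """P(lifetime > t) for a Weibull(scale, shape) model."""
    return math.exp(-((t / scale) ** shape))

def posterior_optimistic(t_survived, prior=0.5, shape=1.2,
                         scale_optimistic=80.0, scale_pessimistic=15.0):
    """Posterior probability of the optimistic (long-life) stress ratio
    after observing survival to t_survived without failure."""
    l_opt = weibull_survival(t_survived, scale_optimistic, shape)
    l_pes = weibull_survival(t_survived, scale_pessimistic, shape)
    num = prior * l_opt
    return num / (num + (1.0 - prior) * l_pes)

# The longer the vessel survives, the more the optimistic hypothesis is favored:
p5 = posterior_optimistic(5.0)
p10 = posterior_optimistic(10.0)
```

Note that, as the abstract observes, this survival evidence barely constrains the shape parameter itself: both hypotheses here share one shape value, and distinguishing shapes would require multiple failure times.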
Kwon, Jennie H; Reske, Kimberly A; Hink, Tiffany; Burnham, C A; Dubberke, Erik R
2017-02-01
The objective of this study was to evaluate the clinical characteristics and outcomes of hospitalized patients tested for Clostridium difficile and determine the correlation between pretest probability for C. difficile infection (CDI) and assay results. Patients with testing ordered for C. difficile were enrolled and assigned a high, medium, or low pretest probability of CDI based on clinical evaluation, laboratory, and imaging results. Stool was tested for C. difficile by toxin enzyme immunoassay (EIA) and toxigenic culture (TC). Chi-square analyses and the log rank test were utilized. Among the 111 patients enrolled, stool samples from nine were TC positive and four were EIA positive. Sixty-one (55%) patients had clinically significant diarrhea, 19 (17%) patients did not, and clinically significant diarrhea could not be determined for 31 (28%) patients. Seventy-two (65%) patients were assessed as having a low pretest probability of having CDI, 34 (31%) as having a medium probability, and 5 (5%) as having a high probability. None of the patients with low pretest probabilities had a positive EIA, but four were TC positive. None of the seven patients with a positive TC but a negative index EIA developed CDI within 30 days after the index test or died within 90 days after the index toxin EIA date. Pretest probability for CDI should be considered prior to ordering C. difficile testing and must be taken into account when interpreting test results. CDI is a clinical diagnosis supported by laboratory data, and the detection of toxigenic C. difficile in stool does not necessarily confirm the diagnosis of CDI. Copyright © 2017 American Society for Microbiology.
Clinical Phenotype of Dementia after Traumatic Brain Injury
Sayed, Nasreen; Culver, Carlee; Dams-O'Connor, Kristen; Hammond, Flora
2013-01-01
Abstract Traumatic brain injury (TBI) in early to mid-life is associated with an increased risk of dementia in late life. It is unclear whether TBI results in acceleration of Alzheimer's disease (AD)-like pathology or has features of another dementing condition, such as chronic traumatic encephalopathy, which is associated with more-prominent mood, behavior, and motor disturbances than AD. Data from the National Alzheimer's Coordinating Center (NACC) Uniform Data Set were obtained over a 5-year period. Categorical data were analyzed using Fisher's exact test. Continuous parametric data were analyzed using Student's t-test. Nonparametric data were analyzed using the Mann-Whitney test. Overall, 877 individuals with dementia who had sustained TBI were identified in the NACC database. Only TBI with chronic deficit or dysfunction was associated with increased risk of dementia. Patients with dementia after TBI (n=62) were significantly more likely to experience depression, anxiety, irritability, and motor disorders than patients with probable AD. Autopsy data were available for 20 of the 62 TBI patients. Of these patients with TBI, 62% met National Institute on Aging-Reagan Institute “high likelihood” criteria for AD. We conclude that TBI with chronic deficit or dysfunction is associated with an increased odds ratio for dementia. Clinically, patients with dementia associated with TBI were more likely to have symptoms of depression, agitation, irritability, and motor dysfunction than patients with probable AD. These findings suggest that dementia in individuals with a history of TBI may be distinct from AD. PMID:23374007
Giudicessi, John R; Ackerman, Michael J
2013-01-01
In this review, we summarize the basic principles governing rare variant interpretation in the heritable cardiac arrhythmia syndromes, focusing on recent advances that have led to disease-specific approaches to the interpretation of positive genetic testing results. Elucidation of the genetic substrates underlying heritable cardiac arrhythmia syndromes has unearthed new arrhythmogenic mechanisms and given rise to a number of clinically meaningful genotype-phenotype correlations. As such, genetic testing for these disorders now carries important diagnostic, prognostic, and therapeutic implications. Recent large-scale systematic studies designed to explore the background genetic 'noise' rate associated with these genetic tests have provided important insights and enhanced how positive genetic testing results are interpreted for these potentially lethal, yet highly treatable, cardiovascular disorders. Clinically available genetic tests for heritable cardiac arrhythmia syndromes allow the identification of potentially at-risk family members and contribute to the risk-stratification and selection of therapeutic interventions in affected individuals. The systematic evaluation of the 'signal-to-noise' ratio associated with these genetic tests has proven critical and essential to assessing the probability that a given variant represents a rare pathogenic mutation or an equally rare, yet innocuous, genetic bystander.
Environmental test of the BGO calorimeter for DArk Matter Particle Explorer
NASA Astrophysics Data System (ADS)
Hu, Yi-Ming; Chang, Jin; Chen, Deng-Yi; Guo, Jian-Hua; Zhang, Yun-Long; Feng, Chang-Qing
2016-11-01
DArk Matter Particle Explorer (DAMPE) is the first Chinese astronomical satellite, successfully launched on December 17, 2015. As the most important payload of DAMPE, the BGO calorimeter contains 308 bismuth germanate crystals, with 616 photomultiplier tubes, one coupled to each end of every crystal. Environmental tests have been carried out to explore the environmental adaptability of the flight model of the BGO calorimeter. In this work we report the results of the vibration tests. During the vibration tests, no visible damage occurred in the mechanical assembly. After the random and sinusoidal vibrations, the change in the first-order natural frequency of the BGO calorimeter measured in the modal surveys is less than 5%. The shift ratios of the most probable value (MPV) of MIP signals in cosmic-ray tests are shown; their mean value is about -4%. The comparison of the results of cosmic-ray tests before and after the vibration shows no significant change in the performance of the BGO calorimeter. All these results suggest that the calorimeter and its structure have passed the environmental tests successfully. Supported by National Natural Science Foundation of China (11203090, 11003051, 11273070) and Strategic Priority Research Program on Space Science of Chinese Academy of Sciences (XDA04040202)
Galli, Marco; Ciriello, Vincenzo; Menghi, Amerigo; Aulisa, Angelo G; Rabini, Alessia; Marzetti, Emanuele
2013-06-01
To assess the interobserver concordance of the joint line tenderness (JLT) and McMurray tests, and to determine their diagnostic efficiency for the detection of meniscal lesions. Prospective observational study. Orthopedics outpatient clinic, university hospital. Patients (N=60) with suspected nonacute meniscal lesions who underwent knee arthroscopy. Not applicable. Patients were examined by 3 independent observers with graded levels of experience (>10y, 3y, and 4mo of practice). The interobserver concordance was assessed by Cohen-Fleiss κ statistics. Accuracy, negative and positive predictive values for prevalence 10% to 90%, positive (LR+) and negative (LR-) likelihood ratios, and the Bayesian posttest probability with a positive or negative result were also determined. The diagnostic value of the 2 tests combined was assessed by logistic regression. Arthroscopy was used as the reference test. No interobserver concordance was determined for the JLT. The McMurray test showed higher interobserver concordance, which improved when judgments by the less experienced examiner were discarded. The whole series studied by the "best" examiner (experienced orthopedist) provided the following values: (1) JLT: sensitivity, 62.9%; specificity, 50%; LR+, 1.26; LR-, .74; (2) McMurray: sensitivity, 34.3%; specificity, 86.4%; LR+, 2.52; LR-, .76. The combination of the 2 tests did not offer advantages over the McMurray alone. The JLT alone is of little clinical usefulness. A negative McMurray test does not modify the pretest probability of a meniscal lesion, while a positive result has a fair predictive value. Hence, in a patient with a suspected meniscal lesion, a positive McMurray test indicates that arthroscopy should be performed. In case of a negative result, further examinations, including imaging, are needed. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Bisoffi, Zeno; Tinto, Halidou; Sirima, Bienvenu Sodiomon; Gobbi, Federico; Angheben, Andrea; Buonfrate, Dora; Van den Ende, Jef
2013-01-01
Background In Burkina Faso, rapid diagnostic tests for malaria have been made recently available. Previously, malaria was managed clinically. This study aims at assessing which is the best management option of a febrile patient in a hyperendemic setting. Three alternatives are: treating presumptively, testing, or refraining from both test and treatment. The test threshold is the tradeoff between refraining and testing, the test-treatment threshold is the tradeoff between testing and treating. Only if the disease probability lies between the two should the test be used. Methods and Findings Data for this analysis was obtained from previous studies on malaria rapid tests, involving 5220 patients. The thresholds were calculated, based on disease risk, treatment risk and cost, test accuracy and cost. The thresholds were then matched against the disease probability. For a febrile child under 5 in the dry season, the pre-test probability of clinical malaria (3.2%), was just above the test/treatment threshold. In the rainy season, that probability was 63%, largely above the test/treatment threshold. For febrile children >5 years and adults in the dry season, the probability was 1.7%, below the test threshold, while in the rainy season it was higher (25.1%), and situated between the two thresholds (3% and 60.9%), only if costs were not considered. If they were, neither testing nor treating with artemisinin combination treatments (ACT) would be recommended. Conclusions A febrile child under 5 should be treated presumptively. In the dry season, the probability of clinical malaria in adults is so low, that neither testing nor treating with any regimen should be recommended. In the rainy season, if costs are considered, a febrile adult should not be tested, nor treated with ACT, but a possible alternative would be a presumptive treatment with amodiaquine plus sulfadoxine-pyrimethamine. If costs were not considered, testing would be recommended. PMID:23472129
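The two thresholds described in this abstract can be derived by equating the expected utilities of the three options (a Pauker-Kassirer style calculation). A hedged sketch, with benefit B of treating a diseased patient, harm H of treating a non-diseased one, and test cost C all on the same utility scale; the numbers are illustrative, not the study's malaria estimates:

```python
# Hedged sketch of the test and test-treatment thresholds. Derivation:
#   EV(no treat)   = 0
#   EV(treat)      = p*B - (1-p)*H
#   EV(test&treat+) = p*Se*B - (1-p)*(1-Sp)*H - C
# Setting EV(test) = EV(no treat) gives the test threshold; setting
# EV(test) = EV(treat) gives the test-treatment threshold.

def decision_thresholds(sens, spec, benefit, harm, test_cost=0.0):
    """Return (test threshold, test-treatment threshold).

    Below the first: neither test nor treat. Between the two: test, and
    treat on a positive result. Above the second: treat presumptively.
    """
    t_test = ((1 - spec) * harm + test_cost) / (
        sens * benefit + (1 - spec) * harm)
    t_test_treat = (spec * harm - test_cost) / (
        (1 - sens) * benefit + spec * harm)
    return t_test, t_test_treat

# Illustrative values only (not the study's malaria parameters):
lo, hi = decision_thresholds(sens=0.95, spec=0.95,
                             benefit=10.0, harm=1.0, test_cost=0.05)
# A disease probability between lo and hi favors testing; outside that band,
# the test result cannot change the better default action.
```

This is exactly the logic the abstract applies: the dry-season pre-test probability falls below the lower threshold (refrain), while the rainy-season pediatric probability exceeds the upper one (treat presumptively).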
Fu, Lanxing; Aspinall, Peter; Bennett, Gary; Magidson, Jay; Tatham, Andrew J
2017-04-01
To quantify the influence of spectral domain optical coherence tomography (SDOCT) on decision-making in patients with suspected glaucoma. A prospective cross-sectional study involving 40 eyes of 20 patients referred by community optometrists due to suspected glaucoma. All patients had disc photographs and standard automated perimetry (SAP), and results were presented to 13 ophthalmologists who estimated pre-test probability of glaucoma (0-100%) for a total of 520 observations. Ophthalmologists were then permitted to modify probabilities of disease based on SDOCT retinal nerve fiber layer (RNFL) measurements (post-test probability). The effect of information from SDOCT on the decision to treat, monitor, or discharge was assessed. Agreement among graders was assessed using intraclass correlation coefficients (ICC), and correlated component regression (CCR) was used to identify variables influencing management decisions. Patients had an average age of 69.0 ± 10.1 years, SAP mean deviation of 2.71 ± 3.13 dB, and RNFL thickness of 86.2 ± 16.7 μm. Average pre-test probability of glaucoma was 37.0 ± 33.6%, with SDOCT resulting in a 13.3 ± 18.1% change in estimated probability. Incorporating information from SDOCT improved agreement regarding probability of glaucoma (ICC = 0.50 (95% CI 0.38 to 0.64) without SDOCT versus 0.64 (95% CI 0.52 to 0.76) with SDOCT). SDOCT led to a change from a decision to "treat or monitor" to "discharge" in 22 of 520 cases and a change from "discharge" to "treat or monitor" in 11 of 520 cases. Pre-test probability and RNFL thickness were predictors of post-test probability of glaucoma, contributing 69% and 31% of the variance in post-test probability, respectively. Information from SDOCT altered estimated probability of glaucoma and improved agreement among clinicians in those suspected of having the disease.
NASA Astrophysics Data System (ADS)
Mandal, S.; Choudhury, B. U.
2015-07-01
Sagar Island, lying on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best-fit distribution models for the annual, seasonal and monthly time series, based on maximum rank with minimum value of the test statistic, three statistical goodness-of-fit tests were employed, viz. the Kolmogorov-Smirnov (K-S), Anderson-Darling (A²) and chi-square (χ²) tests. The best-fit probability distribution was identified from the highest overall score obtained across the three goodness-of-fit tests. Results revealed that the normal distribution was best fitted for annual, post-monsoon and summer season MDR, while the lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDRs were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3% levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85%) for MDR of >100 mm and moderate probabilities (37 to 46%) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
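As a sketch of how a fitted distribution yields the return-period and exceedance figures quoted above, here is the normal-distribution case using Python's standard library; the mean and standard deviation are illustrative placeholders, not the island's fitted parameters:

```python
from statistics import NormalDist

# Hypothetical fitted annual-MDR distribution (mm); not the study's fit.
fitted = NormalDist(mu=50.0, sigma=28.0)

def mdr_for_return_period(dist, t_years):
    """Quantile with annual non-exceedance probability 1 - 1/T."""
    return dist.inv_cdf(1.0 - 1.0 / t_years)

estimates = {t: mdr_for_return_period(fitted, t) for t in (2, 5, 10, 20, 25)}

# Level of exceedance for a given depth, e.g. P(annual MDR > 100 mm):
p_exceed = 1.0 - fitted.cdf(100.0)
```

The 2-year return level is simply the median of the fitted distribution; longer return periods map to higher quantiles.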
The internal consistency of the standard gamble: tests after adjusting for prospect theory.
Oliver, Adam
2003-07-01
This article reports a study that tests whether the internal consistency of the standard gamble can be improved upon by incorporating loss weighting and probability transformation parameters in the standard gamble valuation procedure. Five alternatives to the standard EU formulation are considered: (1) probability transformation within an EU framework; and, within a prospect theory framework, (2) loss weighting and full probability transformation, (3) no loss weighting and full probability transformation, (4) loss weighting and no probability transformation, and (5) loss weighting and partial probability transformation. Of the five alternatives, only the prospect theory formulation with loss weighting and no probability transformation offers an improvement in internal consistency over the standard EU valuation procedure.
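A common one-parameter probability transformation used in prospect-theory valuations is the inverse-S form of Tversky and Kahneman; the article may use a different parametric family, so treat this as an illustration only:

```python
def weight(p, gamma=0.61):
    """Inverse-S probability weighting: w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

def sg_value(p_success, u_success=1.0, u_failure=0.0, gamma=0.61):
    """Rank-dependent value of a standard gamble: best outcome with
    probability p_success, worst outcome otherwise."""
    w = weight(p_success, gamma)
    return w * u_success + (1.0 - w) * u_failure
```

With gamma = 0.61, small probabilities are overweighted (w(0.1) is about 0.19) and large ones underweighted (w(0.9) is about 0.71), which is the kind of distortion the article's "full" and "partial" transformation variants manipulate.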
Obrist, Seraina; Rogan, Slavko; Hilfiker, Roger
2016-01-01
Introduction. Falls are frequent in older adults and may have serious consequences, but awareness of fall-risk is often low. A questionnaire might raise awareness of fall-risk; we therefore set out to construct and test such a questionnaire. Methods. Fall-risk factors and their odds ratios were extracted from meta-analyses and a questionnaire was devised to cover these risk factors. A formula to estimate the probability of future falls was set up using the extracted odds ratios. The understandability of the questionnaire and the discrimination and calibration of the prediction formula were tested in a cohort study with a six-month follow-up. Community-dwelling persons over 60 years were recruited by an e-mail snowball-sampling method. Results and Discussion. We included 134 persons. Response rates for the monthly fall-related follow-up varied between months, ranging from 38% to 90%. The proportion of present risk factors was low. Twenty-five participants reported falls. Discrimination was moderate (AUC: 0.67, 95% CI 0.54 to 0.81). The understandability, with the exception of five questions, was good. The wording of the questions needs to be improved and measures to increase the monthly response rates are needed before test-retest reliability and final predictive value can be assessed. PMID:27247571
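The prediction formula described above, combining extracted odds ratios into a fall probability, can be sketched as follows; the baseline probability and odds ratios are hypothetical placeholders, not the meta-analytic values the authors used:

```python
import math

# Hypothetical risk factors and odds ratios (placeholders, not the
# meta-analytic estimates extracted in the study).
ODDS_RATIOS = {"previous_fall": 2.8, "gait_problem": 2.1, "polypharmacy": 1.8}

def fall_probability(baseline_p, present_factors):
    """Multiply baseline odds by the OR of each reported risk factor,
    then convert the resulting odds back to a probability."""
    log_odds = math.log(baseline_p / (1.0 - baseline_p))
    for factor in present_factors:
        log_odds += math.log(ODDS_RATIOS[factor])
    odds = math.exp(log_odds)
    return odds / (1.0 + odds)

p = fall_probability(0.20, ["previous_fall", "gait_problem"])
```

Multiplying odds ratios this way implicitly assumes the risk factors act independently on the odds scale, which is the usual simplification when ORs come from separate meta-analyses.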
NASA Technical Reports Server (NTRS)
Metzger, F. B.; Menthe, R. W.; Mccolgan, C. J.
1980-01-01
A limited study has been conducted to establish the performance and noise characteristics of a low design tip speed (168 m/s, 550 ft/sec) low pressure ratio (1.04) variable pitch fan which was tested in the Langley 30 X 60 tunnel. This fan was designed for minimum noise when installed in the tail mount location of a twin engine aircraft which normally has both nose and tail mounted propulsors. Measurements showed the fan noise to be very close to predictions made during the design of the fan and extremely low in level (65 dBA at 1000 ft) with no acoustic treatment. This is about 8 dB lower than the unshrouded 2 blade propeller normally used in this installation. On the basis of tests conducted during this program, it appears that this level could be further reduced by 2 dBA if optimized acoustic treatments were installed in the fan duct. Even the best of the shrouded propellers tested previously were 7 dB higher in level than the Q-Fan without acoustic treatment. It was found that the cruise performance of this fan was within 5% of the predicted efficiency of 72%. Evaluation of the performance data indicated that disturbances in the inflow to the fan were the probable cause of the reduced performance.
The Probability of Obtaining Two Statistically Different Test Scores as a Test Index
ERIC Educational Resources Information Center
Muller, Jorg M.
2006-01-01
A new test index is defined as the probability of obtaining two randomly selected test scores (PDTS) as statistically different. After giving a concept definition of the test index, two simulation studies are presented. The first analyzes the influence of the distribution of test scores, test reliability, and sample size on PDTS within classical…
Investigating Gender Differences under Time Pressure in Financial Risk Taking.
Xie, Zhixin; Page, Lionel; Hardy, Ben
2017-01-01
There is a significant gender imbalance on financial trading floors. This motivated us to investigate gender differences in financial risk taking under pressure. We used a well-established approach from behavioral economics to analyze a series of risky monetary choices by male and female participants with and without time pressure. We also used the second-to-fourth digit ratio (2D:4D) and face width-to-height ratio (fWHR) as correlates of prenatal exposure to testosterone. We constructed a structural model and estimated the participants' risk attitudes and probability perceptions via maximum likelihood estimation under both expected utility (EU) and rank-dependent utility (RDU) models. In line with existing research, we found that male participants are less risk averse and that the gender gap in risk attitudes increases under moderate time pressure. We found that female participants with lower 2D:4D ratios and higher fWHR are less risk averse in RDU estimates. Males with lower 2D:4D ratios were less risk averse in EU estimations, but more risk averse using RDU estimates. We also observe that men whose ratios indicate a greater prenatal exposure to testosterone exhibit greater optimism and overestimation of small probabilities of success.
Labronici, Pedro José; Ferreira, Leonardo Termis; Dos Santos Filho, Fernando Claudino; Pires, Robinson Esteves Santos; Gomes, Davi Coutinho Fonseca Fernandes; da Silva, Luiz Henrique Penteado; Gameiro, Vinicius Schott
2017-02-01
Several so-called casting indices are available for objective evaluation of plaster cast quality. The present study sought to investigate four of these indices (gap index, padding index, Canterbury index, and three-point index) as compared to a reference standard (cast index) for evaluation of plaster cast quality after closed reduction of pediatric displaced distal forearm fractures. Forty-three radiographs from patients with displaced distal forearm fractures requiring manipulation were reviewed. Accuracy, sensitivity, specificity, false-positive probability, false-negative probability, positive predictive value, negative predictive value, positive likelihood ratio, and negative likelihood ratio were calculated for each of the tested indices. Comparison among indices revealed diagnostic agreement in only 4.7% of cases. The strongest correlation with the cast index was found for the gap index, with a Spearman correlation coefficient of 0.94. The gap index also displayed the best agreement with the cast index, with both indices yielding the same result in 79.1% of assessments. When seeking to assess plaster cast quality, the cast index and gap index should be calculated; if both indices agree, a decision on quality can be made. If the cast and gap indices disagree, the padding index can be calculated as a tiebreaker, and the decision based on the most frequent of the three results. Calculation of the three-point index and Canterbury index appears unnecessary. Copyright © 2016 Elsevier Ltd. All rights reserved.
Free-ranging dogs assess the quantity of opponents in intergroup conflicts.
Bonanni, Roberto; Natoli, Eugenia; Cafazzo, Simona; Valsecchi, Paola
2011-01-01
In conflicts between social groups, the decision of competitors whether to attack/retreat should be based on the assessment of the quantity of individuals in their own and the opposing group. Experimental studies on numerical cognition in animals suggest that they may represent both large and small numbers as noisy mental magnitudes subject to scalar variability, and small numbers (≤4) also as discrete object-files. Consequently, discriminating between large quantities, but not between smaller ones, should become easier as the asymmetry between quantities increases. Here, we tested these hypotheses by recording naturally occurring conflicts in a population of free-ranging dogs, Canis lupus familiaris, living in a suburban environment. The overall probability of at least one pack member approaching opponents aggressively increased with a decreasing ratio of the number of rivals to that of companions. Moreover, the probability that more than half of the pack members withdrew from a conflict increased when this ratio increased. The skill of dogs in correctly assessing relative group size appeared to improve with increasing the asymmetry in size when at least one pack comprised more than four individuals, and appeared affected to a lesser extent by group size asymmetries when dogs had to compare only small numbers. These results provide the first indications that a representation of quantity based on noisy mental magnitudes may be involved in the assessment of opponents in intergroup conflicts and leave open the possibility that an additional, more precise mechanism may operate with small numbers.
Performance of synchronous optical receivers using atmospheric compensation techniques.
Belmonte, Aniceto; Khan, Joseph
2008-09-01
We model the impact of atmospheric turbulence-induced phase and amplitude fluctuations on free-space optical links using synchronous detection. We derive exact expressions for the probability density function of the signal-to-noise ratio in the presence of turbulence. We consider the effects of log-normal amplitude fluctuations and Gaussian phase fluctuations, in addition to local oscillator shot noise, for both passive receivers and those employing active modal compensation of wave-front phase distortion. We compute error probabilities for M-ary phase-shift keying, and evaluate the impact of various parameters, including the ratio of receiver aperture diameter to the wave-front coherence diameter, and the number of modes compensated.
NGS-based likelihood ratio for identifying contributors in two- and three-person DNA mixtures.
Chan Mun Wei, Joshua; Zhao, Zicheng; Li, Shuai Cheng; Ng, Yen Kaow
2018-06-01
DNA fingerprinting, also known as DNA profiling, serves as a standard procedure in forensics to identify a person by the short tandem repeat (STR) loci in their DNA. By comparing the STR loci between DNA samples, practitioners can calculate a probability of a match to identify the contributors of a DNA mixture. Most existing methods are based on the 13 core STR loci identified by the Federal Bureau of Investigation (FBI). Forensic analyses of DNA mixtures based on these loci are highly variable in procedure and suffer from subjectivity as well as bias in complex mixture interpretation. With the emergence of next-generation sequencing (NGS) technologies, the sequencing of billions of DNA molecules can be parallelized, greatly increasing throughput and reducing the associated costs. This allows the creation of new techniques that incorporate more loci to enable complex mixture interpretation. In this paper, we propose a likelihood ratio computation that uses NGS data for DNA testing on mixed samples. We applied the method to 4480 simulated DNA mixtures, constructed in various mixture proportions from eight unrelated whole-genome sequencing datasets. The results confirm the feasibility of utilizing NGS data in DNA mixture interpretations. We observed an average likelihood ratio as high as 285,978 for two-person mixtures. Using our method, all 224 identity tests for two-person and three-person mixtures were correctly identified. Copyright © 2018 Elsevier Ltd. All rights reserved.
Volpato, Lia Karina; Siqueira, Isabela Ribeiro; Nunes, Rodrigo Dias; Piovezan, Anna Paula
2018-04-01
To evaluate the association between hormonal contraception and the appearance of human papillomavirus (HPV)-induced lesions in the uterine cervix of patients assisted at a school outpatient clinic - the ObGyn outpatient service of the Universidade do Sul de Santa Catarina. A case-control study, with women of fertile age, performed between 2012 and 2015. A total of 101 patients with cervical lesions secondary to HPV were included in the case group, and 101 patients with normal oncotic colpocytology, in the control group. The data were analyzed through the Statistical Package for the Social Sciences (SPSS, IBM Corp., Armonk, NY, US) software, version 24.0, using the 95% confidence interval. To test the homogeneity of the proportions, the chi-square (χ²) test was used for the qualitative variables, and the Student t-test for the quantitative variables. When comparing the occurrence of HPV lesions in users and non-users of combined oral contraceptives (COCs), an association with doses of 0.03 mg or higher of ethinylestradiol (EE) was observed. Thus, a higher probability of developing cervical lesions induced by HPV was identified (odds ratio [OR]: 1.9; p = 0.039); and when these cases were separated by the degree of the lesion, the probability of these patients presenting with low-grade squamous intraepithelial lesions was 2.1 times higher (p = 0.036), but with no impact on high-grade squamous intraepithelial lesions or the occurrence of invasive cancer. No significant differences were found in the other variables analyzed. Although the results of the present study suggest a higher probability that users of combined hormonal contraceptives with a concentration higher than 0.03 mg of EE will develop low-grade intraepithelial lesions, more studies are needed to establish causality. Thieme Revinter Publicações Ltda Rio de Janeiro, Brazil.
NASA Astrophysics Data System (ADS)
Yusof, Muhammad Mat; Sulaiman, Tajularipin; Khalid, Ruzelan; Hamid, Mohamad Shukri Abdul; Mansor, Rosnalini
2014-12-01
In professional sporting events, rating competitors before a tournament starts is a well-known approach to distinguishing the favorites from the weaker teams. Various methodologies are used to rate competitors. In this paper, we explore four ways to rate competitors: least squares rating, maximum likelihood strength ratio, standing points in a large round-robin simulation, and previous league rank position. The tournament metric we used to evaluate the different rating approaches is the tournament outcome characteristic measure, defined as the probability that a particular team in the top 100q pre-tournament rank percentile progresses beyond round R, for all q and R. Based on the simulation results, we found that different rating approaches produce different effects on the teams. Our simulations show that, for eight teams participating in a standard-seeding knockout tournament, Perak has the highest probability of winning under the least squares rating approach, PKNS has the highest probability of winning under the maximum likelihood strength ratio and large round-robin simulation approaches, while Perak has the highest probability of winning under the previous-league-season approach.
FATTY MUSCLE INFILTRATION IN CUFF TEAR: PRE AND POST OPERATIVE EVALUATION BY MRI.
Miyazaki, Alberto Naoki; Santos, Pedro Doneux; da Silva, Luciana Andrade; Sella, Guilherme do Val; Miranda, Eduardo Régis de Alencar Bona; Zampieri, Rodrigo
2015-01-01
To evaluate fatty infiltration and atrophy of the supraspinatus before and after surgical repair of a rotator cuff lesion (RCL), by MRI. Ten patients with full-thickness rotator cuff tears who had undergone arthroscopic rotator cuff repair between September and December 2011 were included. This is a prospective study, with analysis and comparison of fatty infiltration and atrophy of the supraspinatus. The occupation ratio was measured using the magic selection tool in Adobe Photoshop CS3® on T1 oblique sagittal Y-view MRI. Through Photoshop, the proportion of the fossa occupied by the muscle belly was calculated. There was a statistically significant increase in the muscle ratio (p=0.013) comparing pre- and postoperative images, analyzed by the Wilcoxon T test. The proportion of the supraspinatus fossa occupied by the muscle increases in the immediate postoperative period, probably due to the traction exerted on the tendon at the time of repair. Level of Evidence II, Cohort Study.
van Son, Dana; Schalbroeck, Rik; Angelidis, Angelos; van der Wee, Nic J A; van der Does, Willem; Putman, Peter
2018-05-21
Spontaneous EEG theta/beta ratio (TBR) probably marks prefrontal cortical (PFC) executive control, and its regulation of attentional threat-bias. Caffeine at moderate doses may strengthen executive control through increased PFC catecholamine action, dependent on basal PFC function. To test if caffeine affects threat-bias, moderated by baseline frontal TBR and trait-anxiety. A pictorial emotional Stroop task was used to assess threat-bias in forty female participants in a cross-over, double-blind study after placebo and 200 mg caffeine. At baseline and after placebo, comparable relations were observed for negative pictures: high TBR was related to low threat-bias in low trait-anxious people. Caffeine had opposite effects on threat-bias in low trait-anxious people with low and high TBR. This further supports TBR as a marker of executive control and highlights the importance of taking baseline executive function into consideration when studying effects of caffeine on executive functions. Copyright © 2018 Elsevier B.V. All rights reserved.
Effect of advanced component technology on helicopter transmissions
NASA Technical Reports Server (NTRS)
Lewicki, David G.; Townsend, Dennis P.
1989-01-01
Experimental tests were performed on the NASA/Bell Helicopter Textron (BHT) 500 hp advanced technology transmission (ATT) at the NASA Lewis Research Center. The ATT was a retrofit of the OH-58C helicopter 236 kW (317 hp) main rotor transmission, upgraded to 373 kW (500 hp), with a design goal of retaining long life with a minimum increase in cost, weight, and size. Vibration, strain, efficiency, deflection, and temperature experiments were performed and the results were compared to previous experiments on the OH-58A, OH-58C, and UH-60A transmissions. The high-contact-ratio gears and the cantilevered-mounted, flexible ring gear of the ATT reduced vibration compared to that of the OH-58C. The ATT flexible ring gear improved planetary load sharing compared to that of the rigid ring gear of the UH-60A transmission. The ATT mechanical efficiency was lower than that of the OH-58A transmission, probably due to the high-contact-ratio planetary gears.
Singh, Gajinder Pal; Sharma, Amit
2016-01-01
Resistance to frontline anti-malarial drugs, including artemisinin, has repeatedly arisen in South-East Asia, but the reasons for this are not understood. Here we test whether evolutionary constraints on Plasmodium falciparum strains from South-East Asia differ from those on African strains. We find a significantly higher ratio of non-synonymous to synonymous polymorphisms in P. falciparum from South-East Asia compared to Africa, suggesting differences in the selective constraints on the P. falciparum genome in these geographical regions. Furthermore, South-East Asian strains showed a higher proportion of non-synonymous polymorphisms at conserved positions, suggesting reduced negative selection. There was a lower rate of mixed infection by multiple genotypes in samples from South-East Asia compared to Africa. We propose that a lower mixed-infection rate in South-East Asia reduces intra-host competition between parasite clones, reducing the efficiency of natural selection. This might increase the probability of fixation of fitness-reducing mutations, including drug-resistant ones. PMID:27853513
Determination of the gaseous hydrogen ductile-brittle transition in copper-nickel alloys
NASA Technical Reports Server (NTRS)
Parr, R. A.; Johnston, M. H.; Davis, J. H.; Oh, T. K.
1985-01-01
A series of copper-nickel alloys were fabricated, notched tensile specimens were machined for each alloy, and the specimens were tested in 34.5 MPa hydrogen and in air. A notched tensile ratio was determined for each alloy and the hydrogen environment embrittlement (HEE) determined for alloys of 47.7 to 73.5 weight percent nickel. Stacking fault probabilities and stacking fault energies were determined for each alloy using x-ray diffraction line-shift and line-profile techniques. Hydrogen environment embrittlement was determined to be influenced by stacking fault energies; however, the correlation is believed to be indirect and only partially responsible for the HEE behavior of these alloys.
Colorectal Cancer Screening Initiation After Age 50 Years in an Organized Program.
Fedewa, Stacey A; Corley, Douglas A; Jensen, Christopher D; Zhao, Wei; Goodman, Michael; Jemal, Ahmedin; Ward, Kevin C; Levin, Theodore R; Doubeni, Chyke A
2017-09-01
Recent studies report racial disparities among individuals in organized colorectal cancer (CRC) programs; however, there is a paucity of information on CRC screening utilization by race/ethnicity among newly age-eligible adults in such programs. This was a retrospective cohort study among Kaiser Permanente Northern California enrollees who turned age 50 years between 2007 and 2012 (N=138,799) and were served by a systemwide outreach and facilitated in-reach screening program based primarily on mailed fecal immunochemical tests to screening-eligible people. Kaplan-Meier and Cox model analyses were used to estimate differences in receipt of CRC screening in 2015-2016. Cumulative probabilities of CRC screening within 1 and 2 years of subjects' 50th birthday were 51% and 73%, respectively. Relative to non-Hispanic whites, the likelihood of completing any CRC screening was similar in blacks (hazard ratio, 0.98; 95% CI=0.96, 1.00); 5% lower in Hispanics (hazard ratio, 0.95; 95% CI=0.93, 0.96); and 13% higher in Asians (hazard ratio, 1.13; 95% CI=1.11, 1.15) in adjusted analyses. Fecal immunochemical testing was the most common screening modality, representing 86% of all screening initiations. Blacks and Hispanics had lower receipt of fecal immunochemical testing in adjusted analyses. CRC screening uptake was high among newly screening-eligible adults in an organized CRC screening program, but Hispanics were less likely to initiate screening near age 50 years than non-Hispanic whites, suggesting that cultural and other individual-level barriers not addressed within the program likely contribute. Future studies examining the influences of culturally appropriate and targeted efforts for screening initiation are needed. Copyright © 2017 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Prospective Tests of Southern California Earthquake Forecasts
NASA Astrophysics Data System (ADS)
Jackson, D. D.; Schorlemmer, D.; Gerstenberger, M.; Kagan, Y. Y.; Helmstetter, A.; Wiemer, S.; Field, N.
2004-12-01
We are testing earthquake forecast models prospectively using likelihood ratios. Several investigators have developed such models as part of the Southern California Earthquake Center's project called Regional Earthquake Likelihood Models (RELM). Various models are based on fault geometry and slip rates, seismicity, geodetic strain, and stress interactions. Here we describe the testing procedure and present preliminary results. Forecasts are expressed as the yearly rate of earthquakes within pre-specified bins of longitude, latitude, magnitude, and focal mechanism parameters. We test models against each other in pairs, which requires that both forecasts in a pair be defined over the same set of bins. For this reason we specify a standard "menu" of bins and ground rules to guide forecasters in using common descriptions. One menu category includes five-year forecasts of magnitude 5.0 and larger. Contributors will be requested to submit forecasts in the form of a vector of yearly earthquake rates on a 0.1 degree grid at the beginning of the test. Focal mechanism forecasts, when available, are also archived and used in the tests. Interim progress will be evaluated yearly, but final conclusions would be made on the basis of cumulative five-year performance. The second category includes forecasts of earthquakes above magnitude 4.0 on a 0.1 degree grid, evaluated and renewed daily. Final evaluation would be based on cumulative performance over five years. Other types of forecasts with different magnitude, space, and time sampling are welcome and will be tested against other models with shared characteristics. Tests are based on the log likelihood scores derived from the probability that future earthquakes would occur where they do if a given forecast were true [Kagan and Jackson, J. Geophys. Res.,100, 3,943-3,959, 1995]. 
For each pair of forecasts, we compute alpha, the probability that the first would be wrongly rejected in favor of the second, and beta, the probability that the second would be wrongly rejected in favor of the first. Computing alpha and beta requires knowing the theoretical distribution of likelihood scores under each hypothesis, which we estimate by simulations. In this scheme, each forecast is given equal status; there is no "null hypothesis" which would be accepted by default. Forecasts and test results will be archived and posted on the RELM web site. Major problems under discussion include how to treat aftershocks, which clearly violate the variable-rate Poissonian hypotheses that we employ, and how to deal with the temporal variations in catalog completeness that follow large earthquakes.
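A toy version of this pairwise testing scheme, with three-bin Poisson forecasts and alpha estimated by simulating catalogs under the first forecast (the rates are illustrative, not RELM forecasts):

```python
import math
import random

# Illustrative three-bin yearly rate forecasts (events per bin).
FORECAST_A = [2.0, 1.0, 0.5]
FORECAST_B = [1.0, 2.0, 0.5]

def poisson_sample(lam, rng):
    """Knuth's algorithm; adequate for the small rates used here."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def log_lr(counts):
    """log L(A) - log L(B) for independent Poisson bins; the n! terms
    cancel because they appear in both likelihoods."""
    return sum(n * math.log(a / b) - (a - b)
               for n, a, b in zip(counts, FORECAST_A, FORECAST_B))

rng = random.Random(42)
scores = []
for _ in range(2000):
    catalog = [poisson_sample(lam, rng) for lam in FORECAST_A]
    scores.append(log_lr(catalog))

# Crude alpha: fraction of catalogs generated under A whose score
# nevertheless favors B (a real test would use a critical value).
alpha = sum(s < 0.0 for s in scores) / len(scores)
```

Because the data were simulated under forecast A, the average score is positive (it converges to the Kullback-Leibler divergence between the two forecasts), and alpha stays well below one half.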
Olson, Scott A.; Brouillette, Michael C.
2006-01-01
A logistic regression equation was developed for estimating the probability of a stream flowing intermittently at unregulated, rural stream sites in Vermont. These determinations can be used for a wide variety of regulatory and planning efforts at the Federal, State, regional, county and town levels, including such applications as assessing fish and wildlife habitats, wetlands classifications, recreational opportunities, water-supply potential, waste-assimilation capacities, and sediment transport. The equation will be used to create a derived product for the Vermont Hydrography Dataset having the streamflow characteristic of 'intermittent' or 'perennial.' The Vermont Hydrography Dataset is Vermont's implementation of the National Hydrography Dataset and was created at a scale of 1:5,000 based on statewide digital orthophotos. The equation was developed by relating field-verified perennial or intermittent status of a stream site during normal summer low-streamflow conditions in the summer of 2005 to selected basin characteristics of naturally flowing streams in Vermont. The database used to develop the equation included 682 stream sites with drainage areas ranging from 0.05 to 5.0 square miles. When the 682 sites were observed, 126 were intermittent (had no flow at the time of the observation) and 556 were perennial (had flowing water at the time of the observation). The results of the logistic regression analysis indicate that the probability of a stream having intermittent flow in Vermont is a function of drainage area, elevation of the site, the ratio of basin relief to basin perimeter, and the areal percentage of well- and moderately well-drained soils in the basin. Using a probability cutpoint (a lower probability indicates the site has perennial flow and a higher probability indicates the site has intermittent flow) of 0.5, the logistic regression equation correctly predicted the perennial or intermittent status of 116 test sites 85 percent of the time.
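The screening logic described above (a logistic equation on basin characteristics with a 0.5 probability cutpoint) can be sketched as follows; the coefficients and predictor set are invented placeholders, since the abstract does not reproduce the fitted values:

```python
import math

# Hypothetical coefficients with plausible signs (e.g. larger drainage
# area -> more likely perennial); NOT the report's fitted equation.
COEF = {
    "intercept": -1.5,
    "drainage_area_sq_mi": -1.2,
    "relief_to_perimeter": -0.8,
    "well_drained_soil_pct": 0.03,
}

def p_intermittent(drainage_area_sq_mi, relief_to_perimeter,
                   well_drained_soil_pct):
    """Logistic model for the probability a stream site is intermittent."""
    z = (COEF["intercept"]
         + COEF["drainage_area_sq_mi"] * drainage_area_sq_mi
         + COEF["relief_to_perimeter"] * relief_to_perimeter
         + COEF["well_drained_soil_pct"] * well_drained_soil_pct)
    return 1.0 / (1.0 + math.exp(-z))

def classify(p, cutpoint=0.5):
    """Above the cutpoint -> intermittent; below -> perennial."""
    return "intermittent" if p > cutpoint else "perennial"
```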
Chhetri, Bimal K; Berke, Olaf; Pearl, David L; Bienzle, Dorothee
2013-01-05
Although feline immunodeficiency virus (FIV) and feline leukemia virus (FeLV) have similar risk factors and control measures, infection rates have been speculated to vary in geographic distribution over North America. Since both infections are endemic in North America, it was assumed as a working hypothesis that their geographic distributions were similar. Hence, the purpose of this exploratory analysis was to investigate the comparative geographical distribution of both viral infections. Counts of FIV (n=17,108) and FeLV (n=30,017) positive serology results (FIV antibody and FeLV ELISA) were obtained for 48 contiguous states and District of Columbia of the United States of America (US) from the IDEXX Laboratories website. The proportional morbidity ratio of FIV to FeLV infection was estimated for each administrative region and its geographic distribution pattern was visualized by a choropleth map. Statistical evidence of an excess in the proportional morbidity ratio from unity was assessed using the spatial scan test under the normal probability model. This study revealed distinct spatial distribution patterns in the proportional morbidity ratio suggesting the presence of one or more relevant and geographically varying risk factors. The disease map indicates that there is a higher prevalence of FIV infections in the southern and eastern US compared to FeLV. In contrast, FeLV infections were observed to be more frequent in the western US compared to FIV. The respective excess in proportional morbidity ratio was significant with respect to the spatial scan test (p < 0.05). The observed variability in the geographical distribution of the proportional morbidity ratio of FIV to FeLV may be related to the presence of an additional or unique, but yet unknown, spatial risk factor. Putative factors may be geographic variations in specific virus strains and rate of vaccination. 
Knowledge of these factors and the geographical distributions of these infections can inform recommendations for testing, management and prevention. However, further studies are required to investigate the potential association of these factors with FIV and FeLV.
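The abstract does not give the exact formula for the proportional morbidity ratio; one plausible formulation, a region's share of national FIV positives divided by its share of national FeLV positives, can be sketched as follows (the regional counts are made up):

```python
def proportional_morbidity_ratio(fiv_region, fiv_total, felv_region, felv_total):
    """One plausible PMR formulation: the region's proportion of all FIV
    positives relative to its proportion of all FeLV positives."""
    return (fiv_region / fiv_total) / (felv_region / felv_total)

# Hypothetical regional counts against the study's national totals
# (17,108 FIV and 30,017 FeLV positives). A value > 1 marks a relative
# excess of FIV over FeLV in that region.
pmr = proportional_morbidity_ratio(900, 17108, 1000, 30017)
```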
DEM simulation of flow of dumbbells on a rough inclined plane
NASA Astrophysics Data System (ADS)
Mandal, Sandip; Khakhar, Devang
2015-11-01
The rheology of non-spherical granular materials such as food grains, sugar cubes, sand, and pharmaceutical pills is not well understood. We study the flow of non-spherical dumbbells of different aspect ratios on a rough inclined plane using soft-sphere DEM simulations. The dumbbells are generated by fusing two spheres together, and a linear spring-dashpot model with Coulombic friction is employed to calculate inter-particle forces. At steady state, a uni-directional shear flow is obtained, which allows for a detailed study of the rheology. The effect of aspect ratio and inclination angle on the mean velocity, volume fraction, shear rate, shear stress, pressure, and viscosity profiles is examined. The effect of aspect ratio on the probability distribution of the angles made by the major axes of the dumbbells with the flow direction, the average angle, and the order parameter is analyzed. The dense-flow rheology is well described by Bagnold's law and the constitutive laws of the JFP model. The dependence of the first and second normal stress differences on aspect ratio is studied, and the probability distributions of translational and rotational velocity are analyzed.
Branching ratios of α-decay to ground and excited states of Fm, Cf, Cm and Pu
NASA Astrophysics Data System (ADS)
Hassanabadi, H.; Hosseini, S. S.
2018-06-01
We use the well-known Wentzel-Kramers-Brillouin (WKB) barrier penetration probability to calculate α-decay branching ratios for ground and excited states of heavy even-even nuclei of Fermium (248-254Fm), Californium (244-252Cf), Curium (238-248Cm), and Plutonium (234-244Pu) with 94 ≤ Zp ≤ 100. We obtained the branching ratios for the excited states of the daughter nucleus from the α-decay energy (Qα), the angular momentum of the α-particle (ℓα), and the probability of exciting the state with excitation energy Eℓ* in the daughter nucleus. α-decay half-lives have been evaluated using the proximity potential model for these heavy even-even nuclei. We report the half-lives and compare the results with experimental data. The theoretical branching ratios in our calculation agree well with the available experimental data for the 0+ → 0+, 0+ → 2+, 0+ → 4+, 0+ → 6+, and 0+ → 8+ α-transitions.
Ladapo, Joseph A.; Blecker, Saul; Douglas, Pamela S.
2014-01-01
Background Cardiac stress testing, particularly with imaging, has been the focus of debates about rising health care costs, inappropriate use, and patient safety in the context of radiation exposure. Objective To determine whether U.S. trends in cardiac stress test use may be attributable to population shifts in demographics, risk factors, and provider characteristics and evaluate whether racial/ethnic disparities exist in physician decision making. Design Analyses of repeated cross-sectional data. Setting National Ambulatory Medical Care Survey and National Hospital Ambulatory Medical Care Survey (1993 to 2010). Patients Adults without coronary heart disease. Measurements Cardiac stress test referrals and inappropriate use. Results Between 1993 to 1995 and 2008 to 2010, the annual number of U.S. ambulatory visits in which a cardiac stress test was ordered or performed increased from 28 per 10 000 visits to 45 per 10 000 visits. No trend was found toward more frequent testing after adjustment for patient characteristics, risk factors, and provider characteristics (P = 0.134). Cardiac stress tests with imaging comprised a growing portion of all tests, increasing from 59% in 1993 to 1995 to 87% in 2008 to 2010. At least 34.6% were probably inappropriate, with associated annual costs and harms of $501 million and 491 future cases of cancer. Authors found no evidence of a lower likelihood of black patients receiving a cardiac stress test (odds ratio, 0.91 [95% CI, 0.69 to 1.21]) than white patients, although some evidence of disparity in Hispanic patients was found (odds ratio, 0.75 [CI, 0.55 to 1.02]). Limitations Cross-sectional design with limited clinical data. Conclusion National growth in cardiac stress test use can largely be explained by population and provider characteristics, but use of imaging cannot. Physician decision making about cardiac stress test use does not seem to contribute to racial/ethnic disparities in cardiovascular disease. PMID:25285541
Statistics, Handle with Care: Detecting Multiple Model Components with the Likelihood Ratio Test
NASA Astrophysics Data System (ADS)
Protassov, Rostislav; van Dyk, David A.; Connors, Alanna; Kashyap, Vinay L.; Siemiginowska, Aneta
2002-05-01
The likelihood ratio test (LRT) and the related F-test, popularized in astrophysics by Eadie and coworkers in 1971, Bevington in 1969, Lampton, Margon, & Bowyer in 1976, Cash in 1979, and Avni in 1978, do not (even asymptotically) adhere to their nominal χ2 and F-distributions in many statistical tests common in astrophysics, thereby casting many marginal line or source detections and nondetections into doubt. Although the above authors illustrate the many legitimate uses of these statistics, in some important cases it can be impossible to compute the correct false positive rate. For example, it has become common practice to use the LRT or the F-test to detect a line in a spectral model or a source above background despite the lack of certain required regularity conditions. (These applications were not originally suggested by Cash or by Bevington.) In these and other settings that involve testing a hypothesis that is on the boundary of the parameter space, contrary to common practice, the nominal χ2 distribution for the LRT or the F-distribution for the F-test should not be used. In this paper, we characterize an important class of problems in which the LRT and the F-test fail and illustrate this nonstandard behavior. We briefly sketch several possible acceptable alternatives, focusing on Bayesian posterior predictive probability values. We present this method in some detail since it is a simple, robust, and intuitive approach. This alternative method is illustrated using the gamma-ray burst of 1997 May 8 (GRB 970508) to investigate the presence of an Fe K emission line during the initial phase of the observation. There are many legitimate uses of the LRT and the F-test in astrophysics, and even when these tests are inappropriate, there remain several statistical alternatives (e.g., judicious use of error bars and Bayes factors).
Nevertheless, there are numerous cases of the inappropriate use of the LRT and similar tests in the literature, bringing substantive scientific results into question.
Combining and comparing neutrinoless double beta decay experiments using different nuclei
NASA Astrophysics Data System (ADS)
Bergström, Johannes
2013-02-01
We perform a global fit of the most relevant neutrinoless double beta decay experiments within the standard model with massive Majorana neutrinos. Using Bayesian inference makes it possible to take into account the theoretical uncertainties on the nuclear matrix elements in a fully consistent way. First, we analyze the data used to claim the observation of neutrinoless double beta decay in 76Ge, and find strong evidence (on the Jeffreys scale) for a peak in the spectrum and moderate evidence that the peak is actually close to the energy expected for the neutrinoless decay. We also find a significantly larger statistical error than the original analysis, which we include in the comparison with other data. Then, we statistically test the consistency of this claim with recent measurements using 136Xe. We find that the two data sets are about 40 to 80 times more probable under the assumption that they are inconsistent, depending on the nuclear matrix element uncertainties and the prior on the smallest neutrino mass. Hence, there is moderate to strong evidence of incompatibility, and for equal prior probabilities the posterior probability of compatibility is between 1.3% and 2.5%. If one, despite such evidence for incompatibility, combines the two data sets, we find that the total evidence of neutrinoless double beta decay is negligible. If one ignores the claim, there is weak evidence against the existence of the decay. We also perform approximate frequentist tests of compatibility for fixed ratios of the nuclear matrix elements, as well as of the no-signal hypothesis. Generalization to other sets of experiments as well as other mechanisms mediating the decay is possible.
Losses in chopper-controlled DC series motors
NASA Technical Reports Server (NTRS)
Hamilton, H. B.
1982-01-01
Motors for electric vehicle (EV) applications must have different features from dc motors designed for industrial applications. The EV motor application is characterized by the following requirements: (1) the need for the highest possible efficiency from light load to overload, for maximum EV range; (2) a large short-time overload capability (the ratio of peak to average power varies from 5:1 in heavy city traffic to 3:1 in suburban driving); and (3) operation from power supply voltage levels of 84 to 144 volts (probably 120 volts maximum). A test facility utilizing a dc generator as a substitute for a battery pack was designed and utilized. Criteria for the design of such a facility are presented. Two motors, differing in design detail and commercially available for EV use, were tested. The measured losses are discussed, as are waveforms and their harmonic content, the measurement of resistance and inductance, EV motor/chopper application criteria, and motor design considerations.
Characteristics of Five Propellers in Flight
NASA Technical Reports Server (NTRS)
Crowley, J. W., Jr.; Mixson, R. E.
1928-01-01
This investigation was made for the purpose of determining the characteristics of five full-scale propellers in flight. The equipment consisted of five propellers in conjunction with a VE-7 airplane and a Wright E-2 engine. The propellers were of the same diameter and aspect ratio. Four of them differed uniformly in thickness and pitch, and the fifth was identical with one of the other four with the exception of a change in airfoil section. The propeller efficiencies measured in flight are found to be consistently lower than those obtained in model tests. It is probable that this is mainly a result of the higher tip speeds used in the full-scale tests. The results also show that, because of differences in propeller deflections, it is difficult to obtain accurate comparisons of propeller characteristics. From this it is concluded that for accurate comparisons it is necessary to know the propeller pitch angles under actual operating conditions. (author)
The effectiveness of the biodegradation of raw and processed polystyrene by mealworms
NASA Astrophysics Data System (ADS)
Leluk, Karol; Hanus-Lorenz, Beata; Rybak, Justyna; Bożek, Magdalena
2017-11-01
In our studies, the biodegradation of four variants of polystyrene was examined: raw material (PS), processed polystyrene (PSr), building insulation material (EPS), and food packaging boxes (PSp). The materials were characterized by means of melt flow ratio (MFR), Shore hardness, and gloss. The biochemical assessment of macromolecules (proteins, lipids, and sugars) in mealworms fed the tested forms of polystyrene allowed us to determine how efficient and beneficial the biodegradation of each type of polystyrene is. We also evaluated the variability of the bacterial community in larval guts by denaturing gradient gel electrophoresis (DGGE) of bacterial 16S rRNA genes amplified by polymerase chain reaction (PCR). The results suggest that the EPS and PSp polystyrenes are the most digestible for T. molitor larvae. The metabolic degradation of polystyrene is probably strictly connected with changes in the biodiversity of the gut bacteria.
Bertoldi, Eduardo G; Stella, Steffan F; Rohde, Luis E; Polanczyk, Carisi A
2016-05-01
Several tests exist for diagnosing coronary artery disease, with varying accuracy and cost. We sought to provide cost-effectiveness information to aid physicians and decision-makers in selecting the most appropriate testing strategy. We used a state-transition (Markov) model from the Brazilian public health system perspective with a lifetime horizon. Diagnostic strategies were based on exercise electrocardiography (Ex-ECG), stress echocardiography (ECHO), single-photon emission computed tomography (SPECT), computed tomography coronary angiography (CTA), or stress cardiac magnetic resonance imaging (C-MRI) as the initial test. Systematic review provided input data for test accuracy and long-term prognosis. Cost data were derived from the Brazilian public health system. Diagnostic test strategy had a small but measurable impact on quality-adjusted life-years gained. Switching from Ex-ECG to CTA-based strategies improved outcomes at an incremental cost-effectiveness ratio of 3100 international dollars per quality-adjusted life-year. ECHO-based strategies resulted in cost and effectiveness almost identical to CTA, and SPECT-based strategies were dominated because of their much higher cost. Strategies based on stress C-MRI were the most effective, but their incremental cost-effectiveness ratio vs CTA was higher than the proposed willingness-to-pay threshold. Invasive strategies were dominant in the high pretest probability setting. Sensitivity analysis showed that results were sensitive to the costs of CTA, ECHO, and C-MRI. Coronary CT is cost-effective for the diagnosis of coronary artery disease and should be included in the Brazilian public health system. Stress ECHO has similar performance and is an acceptable alternative for most patients, but invasive strategies should be reserved for patients at high risk. © 2016 Wiley Periodicals, Inc.
An improved PRoPHET routing protocol in delay tolerant network.
Han, Seung Deok; Chung, Yun Won
2015-01-01
In a delay tolerant network (DTN), an end-to-end path is not guaranteed and packets are delivered from a source node to a destination node via store-carry-forward routing. In DTN, a source node or an intermediate node stores packets in a buffer and carries them while it moves around. These packets are forwarded to other nodes based on predefined criteria and are finally delivered to a destination node via multiple hops. In this paper, we improve the dissemination speed of the PRoPHET (Probabilistic Routing Protocol using History of Encounters and Transitivity) protocol by employing the epidemic protocol for disseminating a message m whenever its forwarding counter and hop counter values are smaller than or equal to threshold values. The performance of the proposed protocol was analyzed in terms of delivery probability, average delay, and overhead ratio. Numerical results show that the proposed protocol can improve the delivery probability, average delay, and overhead ratio of the PRoPHET protocol by appropriately selecting the threshold forwarding counter and threshold hop counter values.
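The hybrid forwarding rule described above can be sketched as follows. The predictability update is the standard PRoPHET encounter formula; the threshold values and function names are illustrative, not taken from the paper:

```python
P_INIT = 0.75  # standard PRoPHET encounter constant

def update_predictability(p_old):
    """PRoPHET delivery-predictability update when two nodes meet:
    P = P_old + (1 - P_old) * P_INIT."""
    return p_old + (1 - p_old) * P_INIT

def should_forward(pred_self, pred_peer, fwd_count, hop_count,
                   fwd_threshold=3, hop_threshold=3):
    """Hybrid rule (thresholds illustrative): flood epidemically while
    both counters are small, otherwise fall back to the standard
    PRoPHET comparison of delivery predictabilities."""
    if fwd_count <= fwd_threshold and hop_count <= hop_threshold:
        return True  # epidemic phase: replicate to every encountered node
    return pred_peer > pred_self
```

The epidemic phase speeds early dissemination; once a copy has been forwarded or relayed past the thresholds, replication reverts to PRoPHET's selective rule, which is what keeps the overhead ratio bounded.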
The Probable Ages of Asteroid Families
NASA Technical Reports Server (NTRS)
Harris, A. W.
1993-01-01
There has been considerable debate recently over the ages of the Hirayama families, and in particular whether some of the families are very young. It is a straightforward task to estimate the characteristic time of a collision between a body of a given diameter, d_o, and another body of diameter greater than or equal to d_1. What is less straightforward is to estimate the critical diameter ratio, d_1/d_o, above which catastrophic disruption occurs, from which one could infer the probable ages of the Hirayama families, knowing the diameter of the parent body, d_o. One can gain some insight into the probable value of d_1/d_o, and the likely ages of existing families, from the accompanying plot. I have computed the characteristic time between collisions in the asteroid belt with a size ratio greater than or equal to d_1/d_o, for four sizes of target asteroid, d_o. The solid curves to the lower right are the characteristic times for a single object...
Probing the statistics of transport in the Hénon Map
NASA Astrophysics Data System (ADS)
Alus, O.; Fishman, S.; Meiss, J. D.
2016-09-01
The phase space of an area-preserving map typically contains infinitely many elliptic islands embedded in a chaotic sea. Orbits near the boundary of a chaotic region have been observed to stick for long times, strongly influencing their transport properties. The boundary is composed of invariant "boundary circles." We briefly report recent results on the distribution of rotation numbers of boundary circles for the Hénon quadratic map and show that the probability of occurrence of small integer entries in their continued fraction expansions is larger than would be expected for a number chosen at random. However, large integer entries occur with probabilities distributed proportionally to the random case. The probability distributions of ratios of fluxes through island chains are reported as well. These island chains are neighbours in the sense of the Meiss-Ott Markov-tree model. Two distinct universality families are found. The distributions of the ratio between flux and orbital period are also presented. All of these results have implications for models of transport in mixed phase space.
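The benchmark for "a number chosen at random" is the Gauss-Kuzmin law, P(entry = k) = log2(1 + 1/(k(k+2))). A minimal sketch (function names ours) computes continued-fraction entries of a rotation number and the random-number reference probabilities they are compared against:

```python
from math import floor, log2

def cf_entries(x, n=10):
    """First n continued-fraction entries of x in (0, 1)."""
    entries = []
    for _ in range(n):
        if x <= 0:
            break
        inv = 1.0 / x
        a = floor(inv)
        entries.append(a)
        x = inv - a
    return entries

def gauss_kuzmin(k):
    """Probability that a randomly chosen real has entry k."""
    return log2(1 + 1 / (k * (k + 2)))
```

For example, the noble rotation number (sqrt(5) - 1)/2 has all entries equal to 1; an excess of such small entries over the Gauss-Kuzmin frequency is the kind of deviation the abstract reports for boundary circles.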
Incorporating uncertainty into medical decision making: an approach to unexpected test results.
Bianchi, Matt T; Alexander, Brian M; Cash, Sydney S
2009-01-01
The utility of diagnostic tests derives from the ability to translate the population concepts of sensitivity and specificity into information that will be useful for the individual patient: the predictive value of the result. As the array of available diagnostic testing broadens, there is a temptation to de-emphasize history and physical findings and defer to the objective rigor of technology. However, diagnostic test interpretation is not always straightforward. One significant barrier to routine use of probability-based test interpretation is the uncertainty inherent in pretest probability estimation, the critical first step of Bayesian reasoning. The context in which this uncertainty presents the greatest challenge is when test results oppose clinical judgment. It is in this situation that decision support would be most helpful. The authors propose a simple graphical approach that incorporates uncertainty in pretest probability and has specific application to the interpretation of unexpected results. This method quantitatively demonstrates how uncertainty in disease probability may be amplified when test results are unexpected (opposing clinical judgment), even for tests with high sensitivity and specificity. The authors provide a simple nomogram for determining whether an unexpected test result suggests that one should "switch diagnostic sides." This graphical framework overcomes the limitation of pretest probability uncertainty in Bayesian analysis and guides decision making when it is most challenging: the interpretation of unexpected test results.
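The Bayesian step underlying the nomogram can be written down directly in odds-likelihood form; the sensitivity, specificity, and pretest interval below are illustrative numbers, not values from the paper:

```python
def post_test_prob(pretest, sens, spec, positive=True):
    """Post-test probability via Bayes' rule in odds-likelihood form:
    posterior odds = prior odds * likelihood ratio."""
    lr = sens / (1 - spec) if positive else (1 - sens) / spec
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# An unexpected NEGATIVE result against high clinical suspicion:
# a pretest interval [0.70, 0.90] maps through the negative LR.
lo = post_test_prob(0.70, 0.95, 0.95, positive=False)
hi = post_test_prob(0.90, 0.95, 0.95, positive=False)
```

Here the post-test interval (`hi - lo`) comes out wider than the 0.20-wide pretest interval even for a 95%-sensitive, 95%-specific test, which is the amplification-of-uncertainty effect the abstract describes.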
Development of a methodology to evaluate material accountability in pyroprocess
NASA Astrophysics Data System (ADS)
Woo, Seungmin
This study investigates the effect of the non-uniform nuclide composition in spent fuel on material accountancy in the pyroprocess. High-fidelity depletion simulations are performed using the Monte Carlo code SERPENT in order to determine nuclide composition as a function of axial and radial position within fuel rods and assemblies, and of burnup. For improved accuracy, the simulations use short burnup steps (25 days or less), a Xe-equilibrium treatment (to avoid oscillations over burnup steps), an axial moderator temperature distribution, and 30 axial meshes. Analytical solutions of the simplified depletion equations are derived to understand the axial non-uniformity of nuclide composition in spent fuel. The cosine shape of the axial neutron flux distribution dominates the axial non-uniformity of the nuclide composition. The combination of cross sections and time also generates axial non-uniformity, since the exponential term in the analytical solution contains the neutron flux, cross section, and time. The axial concentration distribution for a nuclide with a small cross section is steeper than that for a nuclide with a large cross section, because the axial flux is weighted by the cross section in the exponential term. Similarly, the non-uniformity becomes flatter with increasing burnup, because the time term in the exponential increases. Based on the developed numerical recipes, and by decoupling the axial distributions from predetermined representative radial distributions matched by axial height, the axial and radial composition distributions for representative spent nuclear fuel assemblies (the Type-0, -1, and -2 assemblies after 1, 2, and 3 depletion cycles) are obtained. These data are then modified to represent material processing in the head-end of the pyroprocess, that is, chopping, voloxidation, and granulation.
The expectation and standard deviation of the Pu-to-244Cm ratio under single-granule sampling are calculated using the central limit theorem and the Geary-Hinkley transformation. Uncertainty propagation through the key pyroprocess is then conducted to analyze the Material Unaccounted For (MUF), a random variable defined as the receipt minus the shipment of a process. A second random variable, LOPu, defined as the original Pu mass minus the Pu mass after a missing scenario, is used to evaluate the non-detection probability at each Key Measurement Point (KMP). The number of assemblies required for LOPu to reach 8 kg is considered in this calculation. The probability of detecting an 8 kg LOPu is evaluated with respect to granule and powder size using event tree analysis and hypothesis testing. There are cases in which the probability of detecting the 8 kg LOPu falls below 95%. To enhance the detection rate, a new Material Balance Area (MBA) model is defined for the key pyroprocess; under this model, the probabilities of detection for all spent fuel types exceed 99%. Furthermore, the probability of detection increases significantly when larger granule samples are used to evaluate the Pu-to-244Cm ratio before the key pyroprocess. Based on these observations, although Pu material accountability in the pyroprocess is affected by the non-uniformity of nuclide composition when the Pu-to-244Cm-ratio method is applied, this can be surmounted by reducing the uncertainty of the measured ratio through larger sample sizes and by modifying the MBAs and KMPs. (Abstract shortened by ProQuest.)
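The Geary-Hinkley transformation invoked above maps a ratio of normals to an approximately standard normal variable. A minimal sketch for independent numerator and denominator (a simplification: the study's Pu and 244Cm measurements may be correlated, which adds a covariance term):

```python
from math import erf, sqrt

def ratio_cdf(r, mu_x, sd_x, mu_y, sd_y):
    """Approximate P(X/Y <= r) for independent normals X, Y with Y
    almost surely positive, via the Geary-Hinkley transformation:
    t = (mu_y*r - mu_x) / sqrt(sd_x**2 + r**2 * sd_y**2) ~ N(0, 1)."""
    t = (mu_y * r - mu_x) / sqrt(sd_x ** 2 + (r ** 2) * sd_y ** 2)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))
```

With a distribution for the measured ratio in hand, a threshold on it gives the detection and false-alarm probabilities used in the hypothesis test.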
Nerolidol effects on mitochondrial and cellular energetics.
Ferreira, Fernanda M; Palmeira, Carlos M; Oliveira, Maria M; Santos, Dario; Simões, Anabela M; Rocha, Sílvia M; Coimbra, Manuel A; Peixoto, Francisco
2012-03-01
In the present work, we evaluated the potential toxic effects of nerolidol, a sesquiterpenoid common in plant essential oils, on both mitochondrial and cellular energetics. Samples of enriched natural extracts of nerolidol (a racemic mixture of cis and trans isomers) were tested on rat liver mitochondria; a decrease in the phosphorylative system, but not in mitochondrial respiratory chain activity, was observed, reflecting a direct effect on F1-ATPase. Hence, the respiratory control ratio was also decreased. Cellular ATP/ADP levels were significantly decreased in a concentration-dependent manner, possibly due to the direct effect of nerolidol on F0F1-ATP synthase. Nerolidol stimulates respiratory activity, probably through an unspecific effect, since it does not show any protonophoric activity. Furthermore, we observed that the mitochondrial permeability transition was delayed in the presence of nerolidol, possibly because of its antioxidant activity and because this compound decreases the mitochondrial transmembrane electric potential. Our results also show that, in a human hepatocellular carcinoma cell line (HepG2), nerolidol both induces cell death and arrests cell growth, probably related to the observed lower bioenergetic efficiency. Copyright © 2011 Elsevier Ltd. All rights reserved.
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
Deterministic versus evidence-based attitude towards clinical diagnosis.
Soltani, Akbar; Moayyeri, Alireza
2007-08-01
Generally, two basic classes have been proposed for the scientific explanation of events. Deductive reasoning emphasizes reaching conclusions about a hypothesis based on verification of universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating the probability that the event is related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach in most instances of medical decision making. While 'probabilistic or evidence-based' reasoning may seem to involve more mathematical formulas at first glance, this attitude is more dynamic and less imprisoned by the rigidity of mathematics than the 'deterministic or mathematical' attitude. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and use of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include the use of series of tests to refine probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
Bayesian Parameter Estimation for Heavy-Duty Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Eric; Konan, Arnaud; Duran, Adam
2017-03-28
Accurate vehicle parameters are valuable for design, modeling, and reporting. Estimating vehicle parameters can be a very time-consuming process requiring tightly-controlled experimentation. This work describes a method to estimate vehicle parameters such as mass, coefficient of drag/frontal area, and rolling resistance using data logged during standard vehicle operation. The method uses Monte Carlo sampling to generate parameter sets, which are fed to a variant of the road load equation. Modeled road load is then compared to measured load to evaluate the probability of the parameter set. Acceptance of a proposed parameter set is determined using its probability ratio to the current state, so that the chain history gives a distribution of parameter sets. Compared to a single value, a distribution of possible values provides information on the quality of the estimates and the range of possible parameter values. The method is demonstrated by estimating dynamometer parameters. Results confirm the method's ability to estimate reasonable parameter sets and indicate an opportunity to increase the certainty of estimates through careful selection or generation of the test drive cycle.
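The accept-or-reject step driven by a probability ratio is the core of a random-walk Metropolis sampler. The sketch below estimates a single parameter from synthetic data; nothing here (data, step size, posterior) comes from the report's road-load model:

```python
import math
import random

def metropolis(log_post, x0, n_steps=4000, step=0.5, seed=7):
    """Random-walk Metropolis: accept a proposal with probability equal
    to the posterior probability ratio of proposal to current state;
    the chain history then approximates the posterior distribution."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # log form of the acceptance test: u < p(prop) / p(current)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Toy posterior: unknown mean of unit-variance Gaussian observations.
data = [2.9, 3.4, 2.7, 3.1, 3.2, 2.8, 3.0, 3.1]
log_post = lambda m: -0.5 * sum((d - m) ** 2 for d in data)
chain = metropolis(log_post, x0=0.0)
```

Summarizing the post-burn-in chain (mean, spread) gives exactly the "distribution of parameter sets" the abstract contrasts with a single point estimate.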
Efficient Posterior Probability Mapping Using Savage-Dickey Ratios
Penny, William D.; Ridgway, Gerard R.
2013-01-01
Statistical Parametric Mapping (SPM) is the dominant paradigm for mass-univariate analysis of neuroimaging data. More recently, a Bayesian approach termed Posterior Probability Mapping (PPM) has been proposed as an alternative. PPM offers two advantages: (i) inferences can be made about effect size thus lending a precise physiological meaning to activated regions, (ii) regions can be declared inactive. This latter facility is most parsimoniously provided by PPMs based on Bayesian model comparisons. To date these comparisons have been implemented by an Independent Model Optimization (IMO) procedure which separately fits null and alternative models. This paper proposes a more computationally efficient procedure based on Savage-Dickey approximations to the Bayes factor, and Taylor-series approximations to the voxel-wise posterior covariance matrices. Simulations show the accuracy of this Savage-Dickey-Taylor (SDT) method to be comparable to that of IMO. Results on fMRI data show excellent agreement between SDT and IMO for second-level models, and reasonable agreement for first-level models. This Savage-Dickey test is a Bayesian analogue of the classical SPM-F and allows users to implement model comparison in a truly interactive manner. PMID:23533640
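For a nested comparison with a conjugate Gaussian prior, the Savage-Dickey ratio reduces to the posterior density over the prior density at the test value. The sketch below is a minimal one-parameter illustration of that identity, not the paper's voxel-wise SPM implementation:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2.0 * pi))

def savage_dickey_bf01(ybar, n, sigma, prior_sd):
    """Bayes factor for H0: theta = 0 vs H1: theta free, computed as
    the Savage-Dickey density ratio p(theta=0 | data) / p(theta=0),
    for a conjugate N(0, prior_sd^2) prior and N(theta, sigma^2/n)
    sampling distribution of the data mean ybar."""
    se2 = sigma ** 2 / n
    post_var = 1.0 / (1.0 / prior_sd ** 2 + 1.0 / se2)
    post_mean = post_var * ybar / se2
    return normal_pdf(0.0, post_mean, sqrt(post_var)) / normal_pdf(0.0, 0.0, prior_sd)
```

BF01 > 1 lets a region be declared inactive (evidence for the null), which is the facility the abstract highlights; no separate fit of the null model is needed.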
Shinohara, Satoshi; Uchida, Yuzo; Kasai, Mayuko; Sunami, Rei
2017-08-01
To assess whether the high soluble fms-like tyrosine kinase-1 (sFlt-1) to placental growth factor (PlGF) ratio is associated with adverse outcomes (e.g., HELLP syndrome [hemolysis, elevated liver enzymes, and low platelets], severe hypertension uncontrolled by medication, non-reassuring fetal status, placental abruption, pulmonary edema, growth arrest, maternal death, or fetal death) and a shorter duration to delivery in early-onset fetal growth restriction (FGR). Thirty-four women with FGR diagnosed at <34.0 weeks were recruited. Serum angiogenic marker levels were estimated within 6 hours of a diagnosis of FGR. A receiver operating characteristic curve was used to determine the threshold of the sFlt-1/PlGF ratio to predict adverse outcomes. We used multivariable logistic regression analysis to examine the association between the sFlt-1/PlGF ratio and adverse outcomes. Finally, we used Kaplan-Meier analysis and the log-rank test to assess the probability of delay in delivery. Women who developed adverse outcomes within a week had a significantly higher sFlt-1/PlGF ratio than did those who did not develop complications. A cutoff value of 86.2 for the sFlt-1/PlGF ratio predicted adverse outcomes, with a sensitivity and specificity of 77.8% and 80.0%, respectively. Moreover, 58.4% of women with an sFlt-1/PlGF ratio ≥86.2 versus 9.1% of those with an sFlt-1/PlGF ratio <86.2 delivered within a week of presentation (p < 0.001). In multivariate analyses, an sFlt-1/PlGF ratio ≥86.2 (adjusted odds ratio 9.52; 95% confidence interval, 1.25-72.8) was associated with adverse maternal and neonatal outcomes. A high sFlt-1/PlGF ratio was associated with adverse outcomes and a shorter duration to delivery in early-onset FGR.
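The reported operating point can be reproduced mechanically from labeled ratio values. The data below are synthetic, chosen only to show how a cutoff yields a sensitivity/specificity pair; only the cutoff of 86.2 comes from the abstract:

```python
def sens_spec(ratios_adverse, ratios_benign, cutoff):
    """Sensitivity and specificity of 'sFlt-1/PlGF >= cutoff' taken as
    a positive test for adverse outcome."""
    sens = sum(r >= cutoff for r in ratios_adverse) / len(ratios_adverse)
    spec = sum(r < cutoff for r in ratios_benign) / len(ratios_benign)
    return sens, spec

# Synthetic illustration around the paper's cutoff of 86.2:
adverse = [40, 90, 120, 150, 200]   # hypothetical: adverse outcome within a week
benign = [10, 25, 50, 70, 95]       # hypothetical: no adverse outcome
sens, spec = sens_spec(adverse, benign, 86.2)
```

Sweeping the cutoff and plotting sensitivity against (1 - specificity) traces the ROC curve from which such a threshold is chosen.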
Fixation probability on clique-based graphs
NASA Astrophysics Data System (ADS)
Choi, Jeong-Ok; Yu, Unjong
2018-02-01
The fixation probability of a mutant in the evolutionary dynamics of the Moran process is calculated by the Monte Carlo method on a few families of clique-based graphs. It is shown that complete suppression of fixation can be realized with the generalized clique-wheel graph in the limit of small wheel-clique ratio and infinite size. The family of clique-star graphs is an amplifier, and the clique-arms graph changes from amplifier to suppressor as the fitness of the mutant increases. We demonstrate that the overall structure of a graph can be more important in determining the fixation probability than the degree or the heat heterogeneity. The dependence of the fixation probability on the position of the first mutant is also discussed.
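A Monte Carlo estimate of the fixation probability can be set up in a few lines. The complete graph used below is only a check against the closed-form Moran result, not one of the paper's clique-based families:

```python
import random

def moran_fixation(neighbors, r, trials=2000, seed=0):
    """Monte Carlo estimate of mutant fixation probability in the
    Moran birth-death process on a graph (adjacency-dict form)."""
    rng = random.Random(seed)
    nodes = list(neighbors)
    fixed = 0
    for _ in range(trials):
        mutants = {rng.choice(nodes)}  # one mutant at a random site
        while 0 < len(mutants) < len(nodes):
            # parent chosen with probability proportional to fitness r
            weights = [r if v in mutants else 1.0 for v in nodes]
            parent = rng.choices(nodes, weights)[0]
            child = rng.choice(neighbors[parent])  # offspring replaces a neighbor
            if parent in mutants:
                mutants.add(child)
            else:
                mutants.discard(child)
        fixed += len(mutants) == len(nodes)
    return fixed / trials

# Complete graph K6, where the closed form (1 - 1/r) / (1 - r**-N) applies.
K6 = {v: [u for u in range(6) if u != v] for v in range(6)}
```

An amplifier is a graph whose estimated fixation probability for r > 1 exceeds this complete-graph value; a suppressor falls below it.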
Simulation Model of Mobile Detection Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edmunds, T; Faissol, D; Yao, Y
2009-01-27
In this paper, we consider a mobile source that we attempt to detect with man-portable, vehicle-mounted, or boat-mounted radiation detectors. The source is assumed to transit an area populated with these mobile detectors, and the objective is to detect the source before it reaches a perimeter. We describe a simulation model developed to estimate the probability that one of the mobile detectors will come into close proximity of the moving source and detect it. We illustrate with a maritime simulation example. Our simulation takes place in a 10 km by 5 km rectangular bay patrolled by boats equipped with 2-inch x 4-inch x 16-inch NaI detectors. Boats to be inspected enter the bay and randomly proceed to one of seven harbors on the shore. A source-bearing boat enters the mouth of the bay and proceeds to a pier on the opposite side. We wish to determine the probability that the source is detected and its range from the target when detected. Patrol boats select the nearest inbound boat for inspection and initiate an intercept course. Once within the operational range of the detection system, a detection algorithm is started. If the patrol boat confirms the source is not present, it selects the next nearest boat for inspection. Each run of the simulation ends either when a patrol successfully detects a source or when the source reaches its target. Several statistical detection algorithms have been implemented in the simulation model. First, a simple k-sigma algorithm, which alarms when the counts in a time window exceed the mean background plus k times the standard deviation of background, is available to the user. The time window used is optimized with respect to the signal-to-background ratio for that range and relative speed. Second, a sequential probability ratio test [Wald 1947] is available, configured in this simulation with a target false positive probability of 0.001 and false negative probability of 0.1.
This test is used when the mobile detector maintains a constant range to the vessel being inspected. Finally, a variation of the sequential probability ratio test that is more appropriate when the source and detector are not at constant range is available [Nelson 2005]. Each patrol boat in the fleet can be assigned a particular zone of the bay, or all boats can be assigned to monitor the entire bay. Boats assigned to a zone will only intercept and inspect other boats when they enter that zone. In our example simulation, each of two patrol boats operates in a 5 km by 5 km zone. Other parameters for this example include: (1) Detection range - 15 m range maintained between patrol boat and inspected boat; (2) Inbound boat arrival rate - Poisson process with a mean arrival rate of 30 boats per hour; (3) Speed of boats to be inspected - random between 4.5 and 9 knots; (4) Patrol boat speed - 10 knots; (5) Number of detectors per patrol boat - four 2-inch x 4-inch x 16-inch NaI detectors; (6) Background radiation - 40 counts/sec per detector; and (7) Detector response due to a radiation source at 1 meter - 1,589 counts/sec per detector. Simulation results indicate that two patrol boats are able to detect the source 81% of the time without zones and 90% of the time with zones. The average distances between the source and the target at the end of the simulation are 5,866 m and 5,712 m for non-zoned and zoned patrols, respectively. For runs in which the source did not reach the target, the average distances to the target are 7,305 m and 6,441 m, respectively. Note that a design trade-off exists: while zoned patrols provide a higher probability of detection, the non-zoned patrols tend to detect the source farther from its target. Figure 1 displays the location of the source at the end of 1,000 simulations for the 5 x 10 km bay simulation.
The simulation model and analysis described here can be used to determine the number of mobile detectors one would need to deploy in order to have a reasonable chance of detecting a source in transit. By fixing the source speed to zero, the same model could be used to estimate how long it would take to detect a stationary source. For example, the model could predict how long it would take plant staff performing assigned duties while carrying dosimeters to discover a contaminated spot in the facility.
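The Wald sequential probability ratio test configured in the report (targeted false positive 0.001, false negative 0.1) can be sketched for Poisson count data as follows. The 40 counts/sec background matches the report; the net source rate and the count streams below are hypothetical, chosen only to make the decisions deterministic:

```python
import math

def sprt_detect(counts, bkg_rate, src_rate, alpha=0.001, beta=0.1):
    """Wald SPRT over a stream of one-second Poisson counts [Wald 1947].

    H0: background only (mean bkg_rate); H1: source present (mean
    bkg_rate + src_rate). alpha/beta are the targeted false positive
    and false negative probabilities (0.001 and 0.1 in the report).
    """
    upper = math.log((1 - beta) / alpha)    # cross above -> declare source
    lower = math.log(beta / (1 - alpha))    # cross below -> declare background
    m0, m1 = bkg_rate, bkg_rate + src_rate
    llr = 0.0
    for n in counts:
        # Poisson log-likelihood-ratio increment for one observation
        llr += n * math.log(m1 / m0) - (m1 - m0)
        if llr >= upper:
            return "source"
        if llr <= lower:
            return "background"
    return "undecided"

verdict = sprt_detect([60, 60], bkg_rate=40, src_rate=20)  # -> "source"
```

Unlike a fixed-sample k-sigma alarm, the SPRT accumulates evidence and stops as soon as either error bound is satisfied, which suits a detector closing range on a vessel.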
Patriarca, Peter A; Van Auken, R Michael; Kebschull, Scott A
2018-01-01
Benefit-risk evaluations of drugs have been conducted since the introduction of modern regulatory systems more than 50 years ago. Such judgments are typically made on the basis of qualitative or semiquantitative approaches, often without the aid of quantitative assessment methods, the latter having often been applied asymmetrically to place emphasis on benefit more so than harm. In an effort to preliminarily evaluate the utility of lives lost or saved, or quality-adjusted life-years (QALY) lost and gained as a means of quantitatively assessing the potential benefits and risks of a new chemical entity, we focused our attention on the unique scenario in which a drug was initially approved based on one set of data, but later withdrawn from the market based on a second set of data. In this analysis, a dimensionless risk to benefit ratio was calculated in each instance, based on the risk and benefit quantified in similar units. The results indicated that FDA decisions to approve the drug corresponded to risk to benefit ratios less than or equal to 0.136, and that decisions to withdraw the drug from the US market corresponded to risk to benefit ratios greater than or equal to 0.092. The probability of FDA approval was then estimated using logistic regression analysis. The results of this analysis indicated that there was a 50% probability of FDA approval if the risk to benefit ratio was 0.121, and that the probability approaches 100% for values much less than 0.121, and the probability approaches 0% for values much greater than 0.121. The large uncertainty in these estimates due to the small sample size and overlapping data may be addressed in the future by applying the methodology to other drugs.
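The fitted relationship described above, with a 50% probability of approval at a risk-to-benefit ratio of 0.121, has the logistic form sketched below; the steepness coefficient is hypothetical, chosen only to reproduce the reported limiting behavior (probability near 100% well below the midpoint, near 0% well above it):

```python
import math

def approval_probability(risk_benefit_ratio, midpoint=0.121, steepness=80.0):
    """Logistic curve of the form fitted in the study: 50% probability of
    FDA approval at the midpoint ratio (0.121 per the abstract). The
    steepness value is an illustrative assumption, not a fitted estimate.
    """
    return 1.0 / (1.0 + math.exp(steepness * (risk_benefit_ratio - midpoint)))
```

With only a handful of approval/withdrawal decisions and overlapping ratios, the true steepness is poorly constrained, which is the large uncertainty the abstract notes.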
NASA Astrophysics Data System (ADS)
Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk
2015-06-01
The dose constraint during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V%ratio in our study were divided into three groups, where V%ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described in the clinical data and whose standard deviation was 1% was generated by using the `randn' function in the MATLAB program. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method by using a Gaussian kernel. The results revealed that the rectal NTCP increased in proportion to V%ratio, that the predictive rectal NTCP was patient-specific, and that the starting point of IMRT optimization for a given patient might differ. The PDF of the rectal NTCP was obtained automatically for each group; the smoothness of the probability distribution increased with increasing number of data and with increasing window width.
We showed that during the prostate IMRT optimization, the patient-specific dose constraints could be automatically generated and that our method could reduce the IMRT optimization time as well as maintain the IMRT plan quality.
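The Gaussian-kernel density estimation step can be sketched as below; the NTCP samples and the bandwidth (playing the role of the window width, whose increase smooths the estimate as the abstract describes) are hypothetical:

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a probability density function estimated from the samples
    with a Gaussian kernel of the given bandwidth (window width)."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))

    def pdf(x):
        # Sum of Gaussian bumps, one centered on each sample
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return pdf

# Hypothetical rectal-NTCP samples (fractions) for one V%ratio group:
ntcp = [0.05, 0.07, 0.08, 0.10, 0.12]
pdf = gaussian_kde(ntcp, bandwidth=0.02)
```

A wider bandwidth merges the bumps into a single smooth mode; a narrower one resolves individual samples, mirroring the smoothness/window-width trade-off noted in the study.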
Probability of spacesuit-induced fingernail trauma is associated with hand circumference.
Opperman, Roedolph A; Waldie, James M A; Natapoff, Alan; Newman, Dava J; Jones, Jeffrey A
2010-10-01
A significant number of astronauts sustain hand injuries during extravehicular activity training and operations. These hand injuries have been known to cause fingernail delamination (onycholysis) that requires medical intervention. This study investigated correlations between the anthropometrics of the hand and susceptibility to injury. The analysis explored the hypothesis that crewmembers with a high finger-to-hand size ratio are more likely to experience injuries. A database of 232 crewmembers' injury records and anthropometrics was sourced from NASA Johnson Space Center. No significant effect of finger-to-hand size was found on the probability of injury, but circumference and width of the metacarpophalangeal (MCP) joint were found to be significantly associated with injuries by the Kruskal-Wallis test. A multivariate logistic regression showed that hand circumference is the dominant effect on the likelihood of onycholysis. Male crewmembers with a hand circumference > 22.86 cm (9") have a 19.6% probability of finger injury, but those with hand circumferences < or = 22.86 cm (9") only have a 5.6% chance of injury. Findings were similar for female crewmembers. This increased probability may be due to constriction at large MCP joints by the current NASA Phase VI glove. Constriction may lead to occlusion of vascular flow to the fingers that may increase the chances of onycholysis. Injury rates are lower on gloves such as the superseded series 4000 and the Russian Orlan that provide more volume for the MCP joint. This suggests that we can reduce onycholysis by modifying the design of the current gloves at the MCP joint.
Factors Influencing Ball-Player Impact Probability in Youth Baseball
Matta, Philip A.; Myers, Joseph B.; Sawicki, Gregory S.
2015-01-01
Background: Altering the weight of baseballs for youth play has been studied out of concern for player safety. Research has shown that decreasing the weight of baseballs may limit the severity of both chronic arm and collision injuries. Unfortunately, reducing the weight of the ball also increases its exit velocity, leaving pitchers and nonpitchers with less time to defend themselves. The purpose of this study was to examine impact probability for pitchers and nonpitchers. Hypothesis: Reducing the available time to respond by 10% (expected from reducing ball weight from 142 g to 113 g) would increase impact probability for pitchers and nonpitchers, and players’ mean simple response time would be a primary predictor of impact probability for all participants. Study Design: Nineteen subjects between the ages of 9 and 13 years performed 3 experiments in a controlled laboratory setting: a simple response time test, an avoidance response time test, and a pitching response time test. Methods: Each subject performed these tests in order. The simple reaction time test tested the subjects’ mean simple response time, the avoidance reaction time test tested the subjects’ ability to avoid a simulated batted ball as a fielder, and the pitching reaction time test tested the subjects’ ability to avoid a simulated batted ball as a pitcher. Results: Reducing the weight of a standard baseball from 142 g to 113 g led to a less than 5% increase in impact probability for nonpitchers. However, the results indicate that the impact probability for pitchers could increase by more than 25%. Conclusion: Pitching may greatly increase the amount of time needed to react and defend oneself from a batted ball. Clinical Relevance: Impact injuries to youth baseball players may increase if a 113-g ball is used. PMID:25984261
Triple-wavelength lidar observations of the linear depolarization ratio of dried marine particles
NASA Astrophysics Data System (ADS)
Haarig, Moritz; Ansmann, Albert; Baars, Holger; Engelmann, Ronny; Althausen, Dietrich; Bohlmann, Stephanie; Gasteiger, Josef; Farrell, David
2018-04-01
For aerosol typing with lidar, sea salt particles are usually assumed to be spherical with a consequently low depolarization ratio. Evidence of dried marine particles at the top of the humid marine aerosol layer with a depolarization ratio up to 0.1 has been found at predominately maritime locations on Barbados and in the Southern Atlantic. The depolarization ratio for these probably cubic sea salt particles has been measured at three wavelengths (355, 532 and 1064 nm) simultaneously for the first time and compared to model simulations.
30+ New & Known SB2s in the SDSS-III/APOGEE M Dwarf Ancillary Science Project Sample
NASA Astrophysics Data System (ADS)
Skinner, Jacob; Covey, Kevin; Bender, Chad; De Lee, Nathan Michael; Chojnowski, Drew; Troup, Nicholas; Badenes, Carles; Mahadevan, Suvrath; Terrien, Ryan
2018-01-01
Close stellar binaries can drive dynamical interactions that affect the structure and evolution of planetary systems. Binary surveys indicate that the multiplicity fraction and typical orbital separation decrease with primary mass, but correlations with higher order architectural parameters such as the system's mass ratio are less well constrained. We seek to identify and characterize double-lined spectroscopic binaries (SB2s) among the 1350 M dwarf ancillary science targets with APOGEE spectra in the SDSS-III Data Release 13. We quantitatively measure the degree of asymmetry in the APOGEE pipeline cross-correlation functions (CCFs), and use those metrics to identify a sample of 44 high-likelihood candidate SB2s. Extracting radial velocities (RVs) for both binary components from the CCF, we then measure mass ratios for 31 SB2s; we also use Bayesian techniques to fit orbits for 4 systems with 8 or more distinct APOGEE observations. The (incomplete) mass ratio distribution of this sample rises quickly towards unity. Two-sided Kolmogorov-Smirnov (K-S) tests find probabilities of 13.8% and 14.2% that the M dwarf mass ratio distribution is consistent with those measured by Pourbaix et al. (2004) and Fernandez et al. (2017), respectively. The samples analyzed by Pourbaix et al. and Fernandez et al. are dominated by higher-mass solar type stars; this suggests that the mass ratio distribution of close binaries is not strongly dependent on primary mass.
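The two-sided Kolmogorov-Smirnov comparison of mass-ratio distributions can be sketched as follows; the statistic and its asymptotic p-value approximation are standard, and the toy samples below stand in for the measured mass ratios:

```python
import math

def ks_statistic(a, b):
    """Two-sample K-S statistic: the largest gap between the two
    empirical CDFs (here of two mass-ratio samples)."""
    d = 0.0
    for x in sorted(set(a) | set(b)):
        cdf_a = sum(v <= x for v in a) / len(a)
        cdf_b = sum(v <= x for v in b) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

def ks_pvalue(d, n, m):
    """Asymptotic two-sided p-value from the Kolmogorov distribution;
    a rough approximation for small samples (valid for d > 0)."""
    lam = d * math.sqrt(n * m / (n + m))
    s = 2 * sum((-1) ** (k - 1) * math.exp(-2 * k * k * lam * lam)
                for k in range(1, 101))
    return max(0.0, min(1.0, s))
```

A small p-value would indicate the M dwarf mass-ratio distribution differs from the solar-type comparison samples; the ~14% probabilities quoted above do not reject consistency.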
Epstein, Richard H; Dexter, Franklin
2012-03-01
Anesthesia groups may wish to decrease the supervision ratio for nontrainee providers. Because hospitals offer many first-case starts and focus on starting these cases on time, the number of anesthesiologists needed is sensitive to this ratio. The number of operating rooms that an anesthesiologist can supervise concurrently is determined by the probability of multiple simultaneous critical portions of cases (i.e., requiring presence) and the availability of cross-coverage. A simulation study showed peak occurrence of critical portions during first cases, and frequent supervision lapses. These predictions were tested using real data from an anesthesia information management system. The timing and duration of critical portions of cases were determined from 1 yr of data at a tertiary care hospital. The percentages of days with at least one supervision lapse occurring at supervision ratios between 1:1 and 1:3 were determined. Even at a supervision ratio of 1:2, lapses occurred on 35% of days (lower 95% confidence limit = 30%). The peak incidence occurred before 8:00 AM, P < 0.0001 for the hypothesis that most (i.e., >50%) lapses occurred before this time. The average time from operating room entry until ready for prepping and draping (i.e., anesthesia release time) during first case starts was 22.2 min (95% confidence interval 21.8-22.8 min). Decreasing the supervision ratio from 1:2 to 1:3 has a large effect on supervision lapses during first-case starts. To mitigate such lapses, either staggered starts or additional anesthesiologists working at the start of the day would be required.
Probable interaction between an oral vitamin K antagonist and turmeric (Curcuma longa).
Daveluy, Amélie; Géniaux, Hélène; Thibaud, Lucile; Mallaret, Michel; Miremont-Salamé, Ghada; Haramburu, Françoise
2014-01-01
We report a probable interaction between a vitamin K antagonist, fluindione, and the herbal medicine turmeric that resulted in the elevation of the international normalized ratio (INR). The case presented here underlines the importance of considering potential exposure to herbal medications when assessing adverse effects. © 2014 Société Française de Pharmacologie et de Thérapeutique.
Concussion in professional football: animal model of brain injury--part 15.
Viano, David C; Hamberger, Anders; Bolouri, Hayde; Säljö, Annette
2009-06-01
A concussion model was developed to study injury mechanisms, functional effects, treatment, and recovery. Concussions in National Football League football involve high-impact velocity (7.4-11.2 m/s) and rapid change in head velocity (DeltaV) (5.4-9.0 m/s). Current animal models do not simulate these head impact conditions. One hundred eight adult male Wistar rats weighing 280 to 350 g were used in ballistic impacts simulating 3 collision severities causing National Football League-type concussion. Pneumatic pressure accelerated a 50 g impactor to velocities of 7.4, 9.3, and 11.2 m/s at the left side of the helmet-protected head. A thin layer of padding on the helmet controlled head acceleration, which was measured on the opposite side of the head, in line with the impact. Peak head acceleration, DeltaV, impact duration, and energy transfer were determined. Fifty-four animals were exposed to single impact, with 18 each having 1, 4, or 10 days of survival. Similar tests were conducted on another 54 animals, which received 3 impacts at 6-hour intervals. An additional 72 animals were tested with a 100 g impactor to study more serious brain injuries. Brains were perfused, and surface injuries were identified. The 50 g impactor matches concussion conditions scaled to the rat. Impact velocity and head DeltaV were within 1% and 3% of targets on average. Head acceleration reached 450 g to 1750 g without skull fracture. The test is repeatable and robust. Gross pathology was observed in 11%, 28%, and 33% of animals in the 7.4-, 9.3-, and 11.2-m/s single impacts, respectively. At 7.4 m/s, a single area of fine petechial hemorrhage less than 0.5 mm in diameter occurred on the brain surface, in the parenchyma and meninges nearest the point of impact. At higher velocities, there were larger areas of bleeding, sometimes with subdural hemorrhage.
When the 50 g impactor tests were examined by logistic regression, greater energy transfer increased the probability of injury (odds ratio, 5.83; P = 0.01), as did 3 repeat impacts (odds ratio, 4.72; P = 0.002). The number of survival days decreased the probability of observing injury (odds ratio, 0.25 and 0.11 for 4 and 10 days, respectively, compared with 1 day). The 100 g impactor produced more severe brain injuries. A concussion model was developed to simulate the high velocity of impact and rapid head DeltaV of concussions in National Football League players. The new procedure can be used to evaluate immediate and latent effects of concussion and more severe injury with greater impact mass.
Cost-effectiveness of external cephalic version for term breech presentation
2010-01-01
Background External cephalic version (ECV) is recommended by the American College of Obstetricians and Gynecologists to convert a breech fetus to vertex position and reduce the need for cesarean delivery. The goal of this study was to determine the incremental cost-effectiveness ratio, from society's perspective, of ECV compared to scheduled cesarean for term breech presentation. Methods A computer-based decision model (TreeAge Pro 2008, Tree Age Software, Inc.) was developed for a hypothetical base case parturient presenting with a term singleton breech fetus with no contraindications for vaginal delivery. The model incorporated actual hospital costs (e.g., $8,023 for cesarean and $5,581 for vaginal delivery), utilities to quantify health-related quality of life, and probabilities based on analysis of published literature of successful ECV trial, spontaneous reversion, mode of delivery, and need for unanticipated emergency cesarean delivery. The primary endpoint was the incremental cost-effectiveness ratio in dollars per quality-adjusted year of life gained. A threshold of $50,000 per quality-adjusted life-years (QALY) was used to determine cost-effectiveness. Results The incremental cost-effectiveness of ECV, assuming a baseline 58% success rate, equaled $7,900/QALY. If the estimated probability of successful ECV is less than 32%, then ECV costs more to society and has poorer QALYs for the patient. However, as the probability of successful ECV was between 32% and 63%, ECV cost more than cesarean delivery but with greater associated QALY such that the cost-effectiveness ratio was less than $50,000/QALY. If the probability of successful ECV was greater than 63%, the computer modeling indicated that a trial of ECV is less costly and with better QALYs than a scheduled cesarean. 
The cost-effectiveness of a trial of ECV is most sensitive to its probability of success, and not to the probabilities of a cesarean after ECV, spontaneous reversion to breech, successful second ECV trial, or adverse outcome from emergency cesarean. Conclusions From society's perspective, ECV trial is cost-effective when compared to a scheduled cesarean for breech presentation provided the probability of successful ECV is > 32%. Improved algorithms are needed to more precisely estimate the likelihood that a patient will have a successful ECV. PMID:20092630
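The incremental cost-effectiveness comparison against the $50,000/QALY willingness-to-pay threshold can be sketched as follows; the example costs and QALYs are hypothetical, chosen only to reproduce the reported $7,900/QALY ratio:

```python
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio of a new strategy (e.g., an
    ECV trial) versus a reference (scheduled cesarean): extra dollars
    spent per QALY gained."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

def cost_effective(cost_new, qaly_new, cost_ref, qaly_ref,
                   threshold=50_000.0):
    """Dominant (cheaper with more QALYs) strategies are always accepted;
    otherwise the ICER must fall below the willingness-to-pay threshold
    ($50,000/QALY in the study)."""
    dc = cost_new - cost_ref
    dq = qaly_new - qaly_ref
    if dq > 0:
        return dc <= 0 or dc / dq <= threshold
    return dc < 0 and dq == 0   # no QALY gain: acceptable only if cheaper

# Hypothetical numbers yielding the reported $7,900/QALY:
ratio = icer(11973.0, 1.5, 8023.0, 1.0)   # -> 7900.0
```

As the abstract's sensitivity analysis shows, varying the ECV success probability shifts both the incremental cost and the incremental QALYs, moving the strategy across the threshold.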
Turtle, Lance; Kemp, Tim; Davies, Geraint R; Squire, S Bertie; Beeching, Nick J; Beadsworth, Michael B J
2012-06-01
To assess the usefulness of the T-SPOT.TB™ interferon-gamma release assay (IGRA), as used in a regional hospital infectious diseases unit in Northwest England, for the diagnosis of active tuberculosis. Retrospective case series. The T-SPOT.TB™ test was applied to a group of 64 patients, 20 of whom had tuberculosis (mostly extra-pulmonary tuberculosis). The T-SPOT.TB™ test had a sensitivity of 83.3% and a specificity of 75% for the diagnosis of active tuberculosis, compared with culture. A positive IGRA approximately doubled the pre-test probability of disease from 0.23 to 0.5. This doubling of probability held regardless of HIV status, though for HIV-positive patients the sensitivity was lower (sensitivity 66.7%, post-test probability 0.4 for a positive IGRA result). When extrapolated to the local population, the test was most useful for exclusion of disease: post-test probability 0.006 (or 1 in 167) for a negative IGRA result. Although it can add weight to a clinical diagnosis, the T-SPOT.TB™ assay is not reliable for the diagnosis of active tuberculosis in a real-world setting where the test is often used in patients with smear-negative or extra-pulmonary disease. The test is useful for ruling out disease in HIV-negative patients. Copyright © 2012. Published by Elsevier B.V.
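The pre-test-to-post-test update behind these figures is Bayes' rule in odds form: post-test odds = pre-test odds x likelihood ratio. Plugging in the abstract's sensitivity (83.3%), specificity (75%), and pre-test probability (0.23) recovers the reported post-test probability of about 0.5:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from test characteristics."""
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

def post_test_probability(pre_test_p, lr):
    """Bayes' rule in odds form: convert probability to odds, multiply
    by the likelihood ratio, convert back to probability."""
    odds = pre_test_p / (1 - pre_test_p) * lr
    return odds / (1 + odds)

# Figures from the abstract: sensitivity 83.3%, specificity 75%, pre-test 0.23
lr_pos, lr_neg = likelihood_ratios(0.833, 0.75)
p_pos = post_test_probability(0.23, lr_pos)   # ~0.5, as reported
p_neg = post_test_probability(0.23, lr_neg)
```

The same arithmetic with the much lower community pre-test probability yields the 0.006 post-negative figure the abstract quotes for the local population.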
Bone mineral density and nutritional status in children with quadriplegic cerebral palsy.
Alvarez Zaragoza, Citlalli; Vasquez Garibay, Edgar Manuel; García Contreras, Andrea A; Larrosa Haro, Alfredo; Romero Velarde, Enrique; Rea Rosas, Alejandro; Cabrales de Anda, José Luis; Vega Olea, Israel
2018-03-04
This study demonstrated the relationship of low bone mineral density (BMD) with the degree of motor impairment, method of feeding, anthropometric indicators, and malnutrition in children with quadriplegic cerebral palsy (CP). The control of these factors could optimize adequate bone mineralization, avoid the risk of osteoporosis, and would improve the quality of life. The purpose of the study is to explore the relationship between low BMD and nutritional status in children with quadriplegic CP. A cross-sectional analytical study included 59 participants aged 6 to 18 years with quadriplegic CP. Weight and height were obtained with alternative measurements, and weight/age, height/age, and BMI/age indexes were estimated. The BMD measurement obtained from the lumbar spine was expressed in grams per square centimeter and Z score (Z). Unpaired Student's t tests, chi-square tests, odds ratios, Pearson's correlations, and linear regressions were performed. The mean of BMD Z score was lower in adolescents than in school-aged children (p = 0.002). Patients with low BMD were at the most affected levels of the Gross Motor Function Classification System (GMFCS). Participants at level V of the GMFCS were more likely to have low BMD than levels III and IV [odds ratio (OR) = 5.8 (confidence interval [CI] 95% 1.4, 24.8), p = 0.010]. There was a higher probability of low BMD in tube-feeding patients [OR = 8.6 (CI 95% 1.0, 73.4), p = 0.023]. The probability of low BMD was higher in malnourished children with weight/age and BMI indices [OR = 11.4 (1.3, 94), p = 0.009] and [OR = 9.4 (CI 95% 1.1, 79.7), p = 0.017], respectively. There was a significant relationship between low BMD, degree of motor impairment, method of feeding, and malnutrition. Optimizing these factors could reduce the risk of osteopenia and osteoporosis and attain a significant improvement of quality of life in children with quadriplegic CP.
Mittal, Sahil; El-Serag, Hashem B.; Sada, Yvonne H.; Kanwal, Fasiha; Duan, Zhigang; Temple, Sarah; May, Sarah B.; Kramer, Jennifer R.; Richardson, Peter A.; Davila, Jessica A.
2015-01-01
Background & Aims Hepatocellular carcinoma (HCC) can develop in individuals without cirrhosis. We investigated risk factors for development of HCC in the absence of cirrhosis in a US population. Methods We identified a national cohort of 1500 patients with verified HCC during 2005–2010 in the US Veterans Administration (VA), and reviewed their full VA medical records for evidence of cirrhosis and risk factors for HCC. Patients without cirrhosis were assigned to categories of level 1 evidence for no cirrhosis (very high probability) or level 2 evidence for no cirrhosis (high probability), based on findings from histologic analyses, laboratory test results, markers of fibrosis from non-invasive tests, and imaging features. Results A total of 43 (2.9%) of the 1500 patients with HCC had level 1 evidence for no cirrhosis and 151 (10.1%) had level 2 evidence for no cirrhosis; the remaining 1203 patients (80.1%) had confirmed cirrhosis. Compared to patients with HCC in the presence of cirrhosis, greater proportions of patients with HCC without evidence of cirrhosis had metabolic syndrome, non-alcoholic fatty liver disease (NAFLD), or no identifiable risk factors. Patients with HCC without evidence of cirrhosis were less likely to have abused alcohol or have HCV infection than patients with cirrhosis. Patients with HCC and NAFLD (unadjusted odds ratio, 5.4; 95% confidence interval, 3.4–8.5) or metabolic syndrome (unadjusted odds ratio, 5.0; 95% confidence interval, 3.1–7.8) had more than a 5-fold risk of having HCC in the absence of cirrhosis, compared to patients with HCV-related HCC. Conclusions Approximately 13% of patients with HCC in the VA system do not appear to have cirrhosis. NAFLD and metabolic syndrome are the main risk factors for HCC in the absence of cirrhosis. PMID:26196445
Falvey, É C; King, E; Kinsella, S; Franklyn-Miller, A
2016-01-01
Background Athletic groin pain remains a common field-based team sports time-loss injury. There are few reports of non-surgically managed cohorts with athletic groin pain. Aim To describe clinical presentation/examination, MRI findings and patient-reported outcome (PRO) scores for an athletic groin pain cohort. Methods All patients had a history including demographics, injury duration, sport played and standardised clinical examination. All patients underwent MRI and PRO score to assess recovery. A clinical diagnosis of the injured anatomical structure was made based on these findings. Statistical assessment of the reliability of accepted standard investigations undertaken in making an anatomical diagnosis was performed. Results 382 consecutive athletic groin pain patients, all male, were enrolled. Median (IQR) time in pain at presentation was 36 (16–75) weeks. Most (91%) played field-based ball-sports. Injury to the pubic aponeurosis (PA) 240 (62.8%) was the most common diagnosis. This was followed by injuries to the hip in 81 (21.2%) and adductors in 56 (14.7%) cases. The adductor squeeze test (90° hip flexion) was sensitive (85.4%) but not specific for the pubic aponeurosis and adductor pathology (negative likelihood ratio 1.95). Analysed in series, positive MRI findings and tenderness of the pubic aponeurosis had a 92.8% post-test probability. Conclusions In this largest cohort of patients with athletic groin pain combining clinical and MRI diagnostics there was a 63% prevalence of PA injury. The adductor squeeze test was sensitive for athletic groin pain, but not specific to individual pathologies. MRI improved diagnostic post-test probability. No hernia or incipient hernia was diagnosed. Clinical trial registration number NCT02437942. PMID:26626272
ERIC Educational Resources Information Center
Moses, Tim; Oh, Hyeonjoo J.
2009-01-01
Pseudo Bayes probability estimates are weighted averages of raw and modeled probabilities; these estimates have been studied primarily in nonpsychometric contexts. The purpose of this study was to evaluate pseudo Bayes probability estimates as applied to the estimation of psychometric test score distributions and chained equipercentile equating…
Napoli, Anthony M
2014-04-01
Cardiology consensus guidelines recommend use of the Diamond and Forrester (D&F) score to augment the decision to pursue stress testing. However, recent work has reported no association between pretest probability of coronary artery disease (CAD) as measured by D&F and physician discretion in stress test utilization for inpatients. The author hypothesized that D&F pretest probability would predict the likelihood of acute coronary syndrome (ACS) and a positive stress test and that there would be limited yield to diagnostic testing of patients categorized as low pretest probability by D&F score who are admitted to a chest pain observation unit (CPU). This was a prospective observational cohort study of consecutively admitted CPU patients in a large-volume academic urban emergency department (ED). Cardiologists rounded on all patients and stress test utilization was driven by their recommendations. Inclusion criteria were as follows: age>18 years, American Heart Association (AHA) low/intermediate risk, nondynamic electrocardiograms (ECGs), and normal initial troponin I. Exclusion criteria were as follows: age older than 75 years with a history of CAD. A D&F score for likelihood of CAD was calculated on each patient independent of patient care. Based on the D&F score, patients were assigned a priori to low-, intermediate-, and high-risk groups (<10, 10 to 90, and >90%, respectively). ACS was defined by ischemia on stress test, coronary artery occlusion of ≥70% in at least one vessel, or elevations in troponin I consistent with consensus guidelines. A true-positive stress test was defined by evidence of reversible ischemia and subsequent angiographic evidence of critical stenosis or a discharge diagnosis of ACS. An estimated 3,500 patients would be necessary to have 1% precision around a potential 0.3% event rate in low-pretest-probability patients. Categorical comparisons were made using Pearson chi-square testing. 
A total of 3,552 patients with index visits were enrolled over a 29-month period. The mean (±standard deviation [SD]) age was 51.3 (±9.3) years. Forty-nine percent of patients received stress testing. Pretest probability based on D&F score was associated with stress test utilization (p<0.01), risk of ACS (p<0.01), and true-positive stress tests (p=0.03). No patients with low pretest probability were subsequently diagnosed with ACS (95% CI=0 to 0.66%) or had a true-positive stress test (95% CI=0 to 1.6%). Physician discretionary decision-making regarding stress test use is associated with pretest probability of CAD. However, based on the D&F score, low-pretest-probability patients who meet CPU admission criteria are very unlikely to have a true-positive stress test or eventually receive a diagnosis of ACS, such that observation and stress test utilization may be obviated. © 2014 by the Society for Academic Emergency Medicine.
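The a priori grouping by D&F score described above can be expressed as a small helper. The band cut points come from the abstract (<10%, 10 to 90%, >90%); the handling of the exact boundary values is an assumption here:

```python
def df_risk_group(pretest_pct):
    """Map a Diamond & Forrester pretest probability (in percent) onto
    the study's a priori bands: <10 low, 10-90 intermediate, >90 high.
    Treatment of the exact boundary values is an assumption."""
    if pretest_pct < 10:
        return "low"
    if pretest_pct <= 90:
        return "intermediate"
    return "high"

groups = [df_risk_group(x) for x in (5, 40, 95)]
```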
Effectiveness of landslide risk mitigation strategies in Shihmen Watershed, Taiwan
NASA Astrophysics Data System (ADS)
Wu, Chun-Yi; Chen, Su-Chin
2015-04-01
The purpose of this study was to establish landslide risk analysis procedures that can be used to analyze landslide risk at a watershed scale and to assess the effectiveness of risk mitigation strategies. Landslide risk analysis encompassed the landslide hazard, the vulnerability of elements at risk, and community resilience capacity. First, landslide spatial probability, landslide temporal probability, and landslide area probability were combined to estimate the probability of landslides with an area exceeding a certain threshold in each slope unit. Second, the expected property and life losses were both analyzed in the vulnerability analysis. Different elements at risk were assigned corresponding values, which were then used in conjunction with the vulnerabilities to carry out quantitative analysis. Third, the resilience capacity of different communities was calculated based on the scores obtained through community checklists and the weights of individual items, including "participation experience in disaster prevention drills," "real-time community monitoring mechanisms," "autonomous monitoring by residents," and "disaster prevention volunteers." Finally, the landslide probabilities, vulnerability analysis results, and resilience capacities were combined to assess landslide risk in Shihmen Watershed. In addition, the risks before and after the implementation of non-structural disaster prevention strategies were compared to determine the benefits of various strategies, and subsequently a benefit-cost analysis was performed. Communities with high benefit-cost ratios included Hualing, Yisheng, Siouluan, and Gaoyi. The watershed as a whole had a benefit-cost ratio far greater than 1, indicating that the effectiveness of the strategies exceeded the investment cost and that these measures were thus cost-effective.
The results of factor sensitivity analysis revealed that changes in vulnerability and mortality rates would increase the uncertainty of the risk, and that a rise in annual interest rates or a reduction in the life cycle of measures would decrease the benefit-cost ratio. However, with regard to effectiveness analysis, these changes did not reverse the cost-effectiveness inference.
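The benefit-cost comparison underlying the study's strategy ranking can be sketched as avoided expected loss divided by cost. The monetary values below are hypothetical, and the study's discounting over each measure's life cycle is omitted:

```python
def benefit_cost_ratio(risk_before, risk_after, cost):
    """Benefit of a mitigation strategy taken as the avoided expected
    loss (risk before minus risk after implementation), divided by the
    strategy's cost.  All inputs here are hypothetical placeholders;
    the study additionally discounts over the measure's life cycle."""
    benefit = risk_before - risk_after
    return benefit / cost

# A strategy is judged cost-effective when the ratio exceeds 1.
bcr = benefit_cost_ratio(risk_before=5.0e6, risk_after=1.5e6, cost=2.0e6)
```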
[A method for forecasting the seasonal dynamic of malaria in the municipalities of Colombia].
Velásquez, Javier Oswaldo Rodríguez
2010-03-01
To develop a methodology for forecasting the seasonal dynamic of malaria outbreaks in the municipalities of Colombia. Epidemiologic ranges were defined by multiples of 50 cases for the six municipalities with the highest incidence, 25 cases for the municipalities that ranked 10th and 11th by incidence, 10 for the municipality that ranked 193rd, and 5 for the municipality that ranked 402nd. The specific probability values for each epidemiologic range appearing in each municipality, as well as the S/k value (the ratio between entropy S and the Boltzmann constant k), were calculated for each three-week set, along with the differences in this ratio between consecutive sets of weeks. These mathematical ratios were used to determine the values for forecasting the case dynamic, which were compared with the actual epidemiologic data from the period 2003-2007. The probability of the epidemiologic ranges appearing ranged from 0.019 to 1.00, while the differences in the S/k ratio between sets of consecutive weeks ranged from 0.23 to 0.29. Three ratios were established to determine whether the dynamic corresponded to an outbreak. These ratios were corroborated with real epidemiologic data from 810 Colombian municipalities. This methodology allows forecasting of the malaria case dynamic and outbreaks in the municipalities of Colombia and can be used in planning interventions and public health policies.
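An S/k value of the kind described can be computed as a discrete entropy over the range probabilities, S/k = -Σ p ln p. The probabilities below are hypothetical, not the study's values:

```python
import math

def entropy_over_k(probabilities):
    """S/k for a discrete distribution over epidemiologic ranges,
    S/k = -sum(p * ln p).  The input probabilities are hypothetical
    illustrations, not data from the study."""
    return -sum(p * math.log(p) for p in probabilities if p > 0)

# Difference of S/k between consecutive three-week sets, of the kind
# the study compares against its outbreak-detection thresholds.
delta = abs(entropy_over_k([0.5, 0.3, 0.2]) - entropy_over_k([0.6, 0.3, 0.1]))
```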
Symmetry and the Golden Ratio in the Analysis of a Regular Pentagon
ERIC Educational Resources Information Center
Sparavigna, Amelia Carolina; Baldi, Mauro Maria
2017-01-01
The regular pentagon had a symbolic meaning in the Pythagorean and Platonic philosophies and a subsequent important role in Western thought, appearing also in arts and architecture. A property of regular pentagons, which was probably discovered by the Pythagoreans, is that the ratio between the diagonal and the side of these pentagons is equal to…
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1980-01-01
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
Reduced transition probabilities along the yrast line in 166W
NASA Astrophysics Data System (ADS)
Sayǧı, B.; Joss, D. T.; Page, R. D.; Grahn, T.; Simpson, J.; O'Donnell, D.; Alharshan, G.; Auranen, K.; Bäck, T.; Boening, S.; Braunroth, T.; Carroll, R. J.; Cederwall, B.; Cullen, D. M.; Dewald, A.; Doncel, M.; Donosa, L.; Drummond, M. C.; Ertuǧral, F.; Ertürk, S.; Fransen, C.; Greenlees, P. T.; Hackstein, M.; Hauschild, K.; Herzan, A.; Jakobsson, U.; Jones, P. M.; Julin, R.; Juutinen, S.; Konki, J.; Kröll, T.; Labiche, M.; Lopez-Martens, A.; McPeake, C. G.; Moradi, F.; Möller, O.; Mustafa, M.; Nieminen, P.; Pakarinen, J.; Partanen, J.; Peura, P.; Procter, M.; Rahkila, P.; Rother, W.; Ruotsalainen, P.; Sandzelius, M.; Sarén, J.; Scholey, C.; Sorri, J.; Stolze, S.; Taylor, M. J.; Thornthwaite, A.; Uusitalo, J.
2017-08-01
Lifetimes of excited states in the yrast band of the neutron-deficient nuclide 166W have been measured utilizing the DPUNS plunger device at the target position of the JUROGAM II γ-ray spectrometer in conjunction with the RITU gas-filled separator and the GREAT focal-plane spectrometer. Excited states in 166W were populated in the 92Mo(78Kr, 4p) reaction at a bombarding energy of 380 MeV. The measurements reveal a low value for the ratio of reduced transition probabilities for the lowest-lying transitions, B(E2; 4+ → 2+)/B(E2; 2+ → 0+) = 0.33(5), compared with the expected ratio for an axially deformed rotor (B4/2 = 1.43).
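The quoted rotor expectation can be reproduced from the standard rigid-rotor expression for stretched E2 transitions within a K = 0 band, where B(E2; J → J-2) is proportional to J(J-1)/((2J-1)(2J+1)) once the common intrinsic quadrupole factor cancels in the ratio:

```python
def b_e2_yrast(J):
    """Relative B(E2; J -> J-2) along the yrast band of an axially
    deformed rigid rotor (K = 0); the common intrinsic quadrupole
    moment factor cancels when taking ratios."""
    return J * (J - 1) / ((2 * J - 1) * (2 * J + 1))

# Rotor expectation for B4/2, quoted as 1.43 in the abstract;
# the measured value 0.33(5) falls far below it.
b42_rotor = b_e2_yrast(4) / b_e2_yrast(2)  # 10/7, approximately 1.43
```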
Signal-processing analysis of the MC2823 radar fuze: an addendum concerning clutter effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jelinek, D.A.
1978-07-01
A detailed analysis of the signal processing of the MC2823 radar fuze was published by Thompson in 1976, which enabled the computation of dud probability versus signal-to-noise ratio where the noise was receiver noise. An addendum to Thompson's work was published by Williams in 1978 that modified the weighting function used by Thompson. The analysis presented herein extends the work of Thompson to include the effects of clutter (the non-signal portion of the echo from a terrain) using the new weighting function. This extension enables computation of dud probability versus signal-to-total-noise ratio, where total noise is the sum of the receiver-noise power and the clutter power.
Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H
1985-03-01
Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle, and p is the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p a critical value of the independent variable T was found (T = 1 sec) at which marked changes took place in response rates and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T < 1 sec and behavior typical of random interval schedules at T > 1 sec. Copyright © 1985. Published by Elsevier B.V.
Winkler, Melissa M; Weis, Meike; Henzler, Claudia; Weiß, Christel; Kehl, Sven; Schoenberg, Stefan O; Neff, Wolfgang; Schaible, Thomas
2017-03-01
Background Our aim was to evaluate the prognostic value of the magnetic resonance imaging (MRI)-based ratio of fetal lung volume (FLV) to fetal body volume (FBV) as a marker for development of chronic lung disease (CLD) in fetuses with congenital diaphragmatic hernia (CDH). Patients and Methods FLV and FBV were measured and the individual FLV/FBV ratio was calculated in 132 fetuses. Diagnosis of CLD was established following prespecified criteria and graded as mild/moderate/severe if present. Logistic regression analysis was used to calculate the probability of postnatal development of CLD as a function of the FLV/FBV ratio. Receiver operating characteristic curves were analysed by calculating the area under the curve to evaluate the prognostic accuracy of this marker. Results 61 of 132 fetuses developed CLD (46.21%). The FLV/FBV ratio was significantly lower in fetuses with CLD (p=0.0008; AUC 0.743). Development of CLD was significantly associated with thoracic herniation of liver parenchyma (p<0.0001), requirement of extracorporeal membrane oxygenation (ECMO) (p<0.0001), and gestational age at delivery (p=0.0052). Conclusion The MRI-based ratio of FLV to FBV is a highly valuable prenatal parameter for development of CLD. The ratio is helpful for early therapeutic decisions by estimating the probability of developing CLD. Perinatally, gestational age at delivery and ECMO requirement are useful additional parameters to further improve prediction of CLD. © Georg Thieme Verlag KG Stuttgart · New York.
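A logistic model of the kind fitted in the study maps the FLV/FBV ratio to a CLD probability. The intercept and slope below are hypothetical placeholders, not the published coefficients; only the sign of the slope (lower ratio, higher risk) reflects the reported finding:

```python
import math

def p_cld(flv_fbv, intercept, slope):
    """Logistic model for the probability of chronic lung disease as a
    function of the FLV/FBV ratio.  The coefficients are hypothetical
    placeholders, not the fitted values from the study."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * flv_fbv)))

# A negative slope encodes the finding that lower FLV/FBV ratios
# carry a higher probability of CLD.
risk = p_cld(flv_fbv=0.03, intercept=2.0, slope=-80.0)
```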
Rothmann, Mark
2005-01-01
When testing the equality of means from two different populations, a t-test or a large-sample normal test tends to be performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We examine the impact on the type I error probabilities for two confidence interval procedures and for procedures using test statistics when the design for the second sample or experiment depends on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate are discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials, where the use of a placebo is unethical.
Taylor, Richard Andrew; Singh Gill, Harman; Marcolini, Evie G; Meyers, H Pendell; Faust, Jeremy Samuel; Newman, David H
2016-10-01
The objective was to determine the testing threshold for lumbar puncture (LP) in the evaluation of aneurysmal subarachnoid hemorrhage (SAH) after a negative head computed tomography (CT). As a secondary aim we sought to identify clinical variables that have the greatest impact on this threshold. A decision analytic model was developed to estimate the testing threshold for patients with normal neurologic findings, being evaluated for SAH, after a negative CT of the head. The testing threshold was calculated as the pretest probability of disease where the two strategies (LP or no LP) are balanced in terms of quality-adjusted life-years. Two-way and probabilistic sensitivity analyses (PSAs) were performed. For the base-case scenario the testing threshold for performing an LP after negative head CT was 4.3%. Results for the two-way sensitivity analyses demonstrated that the test threshold ranged from 1.9% to 15.6%, dominated by the uncertainty in the probability of death from initial missed SAH. In the PSA the mean testing threshold was 4.3% (95% confidence interval = 1.4% to 9.3%). Other significant variables in the model included probability of aneurysmal versus nonaneurysmal SAH after negative head CT, probability of long-term morbidity from initial missed SAH, and probability of renal failure from contrast-induced nephropathy. Our decision analysis results suggest a testing threshold for LP after negative CT to be approximately 4.3%, with a range of 1.4% to 9.3% on robust PSA. In light of these data, and considering the low probability of aneurysmal SAH after a negative CT, classical teaching and current guidelines addressing testing for SAH should be revisited. © 2016 by the Society for Academic Emergency Medicine.
NASA Technical Reports Server (NTRS)
Phoenix, S. Leigh; Kezirian, Michael T.; Murthy, Pappu L. N.
2009-01-01
Composite Overwrapped Pressure Vessels (COPVs) that have survived a long service time under pressure generally must be recertified before service is extended. Flight certification depends on a reliability analysis to quantify the risk of stress rupture failure in existing flight vessels. Full certification of this reliability model would require a statistically significant number of lifetime tests to be performed, which is impractical given the cost and the limited flight hardware available for certification testing. One approach to confirming the reliability model is to perform a stress rupture test on a flight COPV. Currently, testing of such a Kevlar 49 (DuPont)/epoxy COPV is nearing completion. The present paper focuses on a Bayesian statistical approach to analyze the possible failure time results of this test and to assess the implications in choosing between possible model parameter values that in the past have had significant uncertainty. The key uncertain parameters in this case are the actual fiber stress ratio at operating pressure and the Weibull shape parameter for lifetime; the former has been uncertain due to ambiguities in interpreting the original and a duplicate burst test, and the latter due to major differences between COPVs in the database and the actual COPVs in service. Any information that clarifies and eliminates uncertainty in these parameters will have a major effect on the predicted reliability of the service COPVs going forward. The key result is that the longer the vessel survives, the more likely it is that the more optimistic stress ratio model is correct. At the time of writing, the resulting effect on predicted future reliability is dramatic, increasing it by about one "nine," that is, reducing the predicted probability of failure by an order of magnitude. However, testing one vessel does not change the uncertainty in the Weibull shape parameter for lifetime, since testing several vessels would be necessary.
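The key result (survival increasingly favors the optimistic model) can be illustrated with a Bayesian update over two competing Weibull lifetime models: the likelihood of "no failure by time t" is the survival function, so the posterior odds are the prior odds times the ratio of survival probabilities. All parameter values below are hypothetical placeholders, not the study's fitted models:

```python
import math

def weibull_survival(t, shape, scale):
    """Probability a vessel survives past time t under a Weibull lifetime."""
    return math.exp(-((t / scale) ** shape))

def posterior_odds(prior_odds, t, opt, pes):
    """Odds of the optimistic stress-ratio model versus the pessimistic
    one after observing survival to time t.  The (shape, scale) pairs
    for each model are hypothetical placeholders."""
    lr = weibull_survival(t, *opt) / weibull_survival(t, *pes)
    return prior_odds * lr

# The longer the test vessel survives, the larger the likelihood ratio
# favoring the optimistic (longer-lived) model.
odds_1yr = posterior_odds(1.0, 1.0, opt=(1.5, 50.0), pes=(1.5, 5.0))
odds_2yr = posterior_odds(1.0, 2.0, opt=(1.5, 50.0), pes=(1.5, 5.0))
```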
Investigating Gender Differences under Time Pressure in Financial Risk Taking
Xie, Zhixin; Page, Lionel; Hardy, Ben
2017-01-01
There is a significant gender imbalance on financial trading floors. This motivated us to investigate gender differences in financial risk taking under pressure. We used a well-established approach from behavioral economics to analyze a series of risky monetary choices by male and female participants with and without time pressure. We also used the second-to-fourth digit ratio (2D:4D) and the face width-to-height ratio (fWHR) as correlates of prenatal exposure to testosterone. We constructed a structural model and estimated the participants' risk attitudes and probability perceptions via maximum likelihood estimation under both expected utility (EU) and rank-dependent utility (RDU) models. In line with existing research, we found that male participants are less risk averse and that the gender gap in risk attitudes increases under moderate time pressure. We found that female participants with lower 2D:4D ratios and higher fWHR are less risk averse in RDU estimates. Males with lower 2D:4D ratios were less risk averse in EU estimations, but more risk averse using RDU estimates. We also observe that men whose ratios indicate a greater prenatal exposure to testosterone exhibit greater optimism and overestimation of small probabilities of success. PMID:29326566
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isselhardt, Brett H.
2011-09-01
Resonance Ionization Mass Spectrometry (RIMS) has been developed as a method to measure relative uranium isotope abundances. In this approach, RIMS is used as an element-selective ionization process to provide a distinction between uranium atoms and potential isobars without the aid of chemical purification and separation. We explore the laser parameters critical to the ionization process and their effects on the measured isotope ratio. Specifically, the use of broad-bandwidth lasers with automated feedback control of wavelength was applied to the measurement of 235U/238U ratios to decrease laser-induced isotopic fractionation. By broadening the bandwidth of the first laser in a 3-color, 3-photon ionization process from 1.8 GHz to about 10 GHz, the variation in sequential relative isotope abundance measurements decreased from >10% to less than 0.5%. This procedure was demonstrated for the direct interrogation of uranium oxide targets with essentially no sample preparation. A rate equation model for predicting the relative ionization probability has been developed to study the effect of variation in laser parameters on the measured isotope ratio. This work demonstrates that RIMS can be used for the robust measurement of uranium isotope ratios.
The reduced transition probabilities for excited states of rare-earths and actinide even-even nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghumman, S. S.
The theoretical B(E2) ratios have been calculated using the DF, DR, and Krutov models. A simple method based on the work of Arima and Iachello is used to calculate the reduced transition probabilities within the SU(3) limit of the IBA-I framework. The reduced E2 transition probabilities from second excited states of rare-earth and actinide even-even nuclei, calculated from experimental energies and intensities in recent data, have been found to compare better with those calculated using the Krutov model and the SU(3) limit of IBA than with the DR and DF models.
NASA Technical Reports Server (NTRS)
Berg, M. D.; Kim, H. S.; Friendlich, M. A.; Perez, C. E.; Seidlick, C. M.; LaBel, K. A.
2011-01-01
We present SEU test and analysis of the Microsemi ProASIC3 FPGA. SEU Probability models are incorporated for device evaluation. Included is a comparison to the RTAXS FPGA illustrating the effectiveness of the overall testing methodology.
Martin, Petra; Leighl, Natasha B
2017-06-01
This article considers the use of pretest probability in non-small cell lung cancer (NSCLC) and how its use in EGFR testing has helped establish clinical guidelines on selecting patients for EGFR testing. With an ever-increasing number of molecular abnormalities being identified and often limited tissue available for testing, the use of pretest probability will need to be increasingly considered in the future for selecting investigations and treatments in patients. In addition we review new mutations that have the potential to affect clinical practice.
Liberali, Jordana M; Reyna, Valerie F; Furlan, Sarah; Stein, Lilian M; Pardo, Seth T
2012-10-01
Despite evidence that individual differences in numeracy affect judgment and decision making, the precise mechanisms underlying how such differences produce biases and fallacies remain unclear. Numeracy scales have been developed without sufficient theoretical grounding, and their relation to other cognitive tasks that assess numerical reasoning, such as the Cognitive Reflection Test (CRT), has been debated. In studies conducted in Brazil and in the USA, we administered an objective Numeracy Scale (NS), Subjective Numeracy Scale (SNS), and the CRT to assess whether they measured similar constructs. The Rational-Experiential Inventory, inhibition (go/no-go task), and intelligence were also investigated. By examining factor solutions along with frequent errors for questions that loaded on each factor, we characterized different types of processing captured by different items on these scales. We also tested the predictive power of these factors to account for biases and fallacies in probability judgments. In the first study, 259 Brazilian undergraduates were tested on the conjunction and disjunction fallacies. In the second study, 190 American undergraduates responded to a ratio-bias task. Across the different samples, the results were remarkably similar. The results indicated that the CRT is not just another numeracy scale, that objective and subjective numeracy scales do not measure an identical construct, and that different aspects of numeracy predict different biases and fallacies. Dimensions of numeracy included computational skills such as multiplying, proportional reasoning, mindless or verbatim matching, metacognitive monitoring, and understanding the gist of relative magnitude, consistent with dual-process theories such as fuzzy-trace theory.
Risk Factors for Human Brucellosis in Northern Tanzania.
Cash-Goldwasser, Shama; Maze, Michael J; Rubach, Matthew P; Biggs, Holly M; Stoddard, Robyn A; Sharples, Katrina J; Halliday, Jo E B; Cleaveland, Sarah; Shand, Michael C; Mmbaga, Blandina T; Muiruri, Charles; Saganda, Wilbrod; Lwezaula, Bingileki F; Kazwala, Rudovick R; Maro, Venance P; Crump, John A
2018-02-01
Little is known about the epidemiology of human brucellosis in sub-Saharan Africa. This hampers prevention and control efforts at the individual and population levels. To evaluate risk factors for brucellosis in northern Tanzania, we conducted a study of patients presenting with fever to two hospitals in Moshi, Tanzania. Serum taken at enrollment and at 4-6 week follow-up was tested by Brucella microagglutination test. Among participants with a clinically compatible illness, confirmed brucellosis cases were defined as having a ≥ 4-fold rise in agglutination titer between paired sera or a blood culture positive for Brucella spp., and probable brucellosis cases were defined as having a single reciprocal titer ≥ 160. Controls had reciprocal titers < 20 in paired sera. We collected demographic and clinical information and administered a risk factor questionnaire. Of 562 participants in the analysis, 50 (8.9%) had confirmed or probable brucellosis. Multivariable analysis showed that risk factors for brucellosis included assisting goat or sheep births (Odds ratio [OR] 5.9, 95% confidence interval [CI] 1.4, 24.6) and having contact with cattle (OR 1.2, 95% CI 1.0, 1.4). Consuming boiled or pasteurized dairy products was protective against brucellosis (OR 0.12, 95% CI 0.02, 0.93). No participants received a clinical diagnosis of brucellosis from their healthcare providers. The under-recognition of brucellosis by healthcare workers could be addressed with clinician education and better access to brucellosis diagnostic tests. Interventions focused on protecting livestock keepers, especially those who assist goat or sheep births, are needed.
Developing and Testing a Model to Predict Outcomes of Organizational Change
Gustafson, David H; Sainfort, François; Eichler, Mary; Adams, Laura; Bisognano, Maureen; Steudel, Harold
2003-01-01
Objective To test the effectiveness of a Bayesian model employing subjective probability estimates for predicting success and failure of health care improvement projects. Data Sources Experts' subjective assessment data for model development and independent retrospective data on 221 healthcare improvement projects in the United States, Canada, and the Netherlands collected between 1996 and 2000 for validation. Methods A panel of theoretical and practical experts and literature in organizational change were used to identify factors predicting the outcome of improvement efforts. A Bayesian model was developed to estimate probability of successful change using subjective estimates of likelihood ratios and prior odds elicited from the panel of experts. A subsequent retrospective empirical analysis of change efforts in 198 health care organizations was performed to validate the model. Logistic regression and ROC analysis were used to evaluate the model's performance using three alternative definitions of success. Data Collection For the model development, experts' subjective assessments were elicited using an integrative group process. For the validation study, a staff person intimately involved in each improvement project responded to a written survey asking questions about model factors and project outcomes. Results Logistic regression chi-square statistics and areas under the ROC curve demonstrated a high level of model performance in predicting success. Chi-square statistics were significant at the 0.001 level and areas under the ROC curve were greater than 0.84. Conclusions A subjective Bayesian model was effective in predicting the outcome of actual improvement projects. Additional prospective evaluations as well as testing the impact of this model as an intervention are warranted. PMID:12785571
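The model's core update (prior odds multiplied by expert-elicited likelihood ratios for each factor, then converted to a probability) can be sketched directly. The factor values below are hypothetical illustrations, not the panel's elicited quantities:

```python
def success_probability(prior_odds, likelihood_ratios):
    """Naive-Bayes-style aggregation of the kind used in subjective
    Bayesian scoring models: posterior odds are the prior odds times
    the product of per-factor likelihood ratios, and probability is
    odds / (1 + odds).  The inputs here are hypothetical."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Three factors favoring success and one weighing against it.
p = success_probability(prior_odds=1.0,
                        likelihood_ratios=[2.0, 1.5, 3.0, 0.5])
```

Multiplying per-factor likelihood ratios assumes the factors are conditionally independent given the outcome, which is the usual simplification in such elicitation-based models.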
The Unevenly Distributed Nearest Brown Dwarfs
NASA Astrophysics Data System (ADS)
Bihain, Gabriel; Scholz, Ralf-Dieter
2016-08-01
To address the questions of how many brown dwarfs there are in the Milky Way, how these objects relate to star formation, and whether the brown dwarf formation rate was different in the past, the star-to-brown dwarf number ratio can be considered. While main sequence stars are well known components of the solar neighborhood, lower mass, substellar objects increasingly add to the census of the nearest objects. The sky projection of the known objects at <6.5 pc shows that stars present a uniform distribution and brown dwarfs a non-uniform distribution, with about four times more brown dwarfs behind than ahead of the Sun relative to the direction of rotation of the Galaxy. Assuming that substellar objects are distributed uniformly, the observed configuration has a probability of 0.1%. The helio- and geocentricity of the configuration suggests that it probably results from an observational bias, which, if compensated for by future discoveries, would bring the star-to-brown dwarf ratio into agreement with the average ratio found in star-forming regions.
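The probability of so uneven a split under a uniform distribution can be estimated with a binomial tail, treating each object's position (behind versus ahead of the Sun) as a fair coin flip. The counts below are hypothetical stand-ins for a roughly four-to-one split; the abstract's quoted 0.1% comes from the actual census, not these numbers:

```python
from math import comb

def tail_probability(n, k):
    """Two-sided probability of a split at least as uneven as k of n
    objects on one side, if each side is equally likely (p = 1/2)."""
    one_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2.0 * one_tail)

# Hypothetical: 12 of 15 brown dwarfs on one side of the Sun.
p = tail_probability(15, 12)
```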
Atmospheric conditions, lunar phases, and childbirth: a multivariate analysis
NASA Astrophysics Data System (ADS)
Ochiai, Angela Megumi; Gonçalves, Fabio Luiz Teixeira; Ambrizzi, Tercio; Florentino, Lucia Cristina; Wei, Chang Yi; Soares, Alda Valeria Neves; De Araujo, Natalucia Matos; Gualda, Dulce Maria Rosa
2012-07-01
Our objective was to assess extrinsic influences upon childbirth. In a cohort of 1,826 days containing 17,417 childbirths, among them 13,252 spontaneous labor admissions, we studied the influence of the environment upon high incidence of labor (defined as the 75th percentile or higher), analyzed by logistic regression. The predictors of high labor admission included increases in outdoor temperature (odds ratio: 1.742, P = 0.045, 95% CI: 1.011 to 3.001) and decreases in atmospheric pressure (odds ratio: 1.269, P = 0.029, 95% CI: 1.055 to 1.483). In contrast, increases in tidal range were associated with a lower probability of high admission (odds ratio: 0.762, P = 0.030, 95% CI: 0.515 to 0.999). Lunar phase was not a predictor of high labor admission (P = 0.339). Using multivariate analysis, increases in temperature and decreases in atmospheric pressure predicted high labor admission, and increases in tidal range, as a measurement of the lunar gravitational force, predicted a lower probability of high admission.
Stationary echo canceling in velocity estimation by time-domain cross-correlation.
Jensen, J A
1993-01-01
The application of stationary echo canceling to ultrasonic estimation of blood velocities using time-domain cross-correlation is investigated. Expressions are derived that show the influence from the echo canceler on the signals that enter the cross-correlation estimator. It is demonstrated that the filtration results in a velocity-dependent degradation of the signal-to-noise ratio. An analytic expression is given for the degradation for a realistic pulse. The probability of correct detection at low signal-to-noise ratios is influenced by signal-to-noise ratio, transducer bandwidth, center frequency, number of samples in the range gate, and number of A-lines employed in the estimation. Quantitative results calculated by a simple simulation program are given for the variation in probability from these parameters. An index reflecting the reliability of the estimate at hand can be calculated from the actual cross-correlation estimate by a simple formula and used in rejecting poor estimates or in displaying the reliability of the velocity estimated.
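The core of the time-domain estimator is finding the lag at which successive echo lines best align; the velocity then follows from that time shift. Below is a minimal pure-Python sketch on synthetic data; the signal, lag, and conversion constants are hypothetical, and no echo canceler is modeled:

```python
def estimate_shift(line1, line2, max_lag=10):
    """Lag (in samples) at which the cross-correlation of two successive
    echo lines peaks; scatterer motion between emissions shifts the
    second line by this amount."""
    n = len(line1)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(line1[i] * line2[i + lag]
                  for i in range(max(0, -lag), min(n, n - lag)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# Synthetic example: the second line is the first delayed by 3 samples.
import random
random.seed(0)
line1 = [random.gauss(0, 1) for _ in range(200)]
line2 = [0.0] * 3 + line1[:-3]          # delayed copy
shift = estimate_shift(line1, line2)

# Velocity then follows from the shift converted to seconds, e.g.
# v = c * t_shift / (2 * T_prf), with c the sound speed and T_prf the
# pulse repetition interval (symbols here are generic, not the paper's).
```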
Performance analysis of the word synchronization properties of the outer code in a TDRSS decoder
NASA Technical Reports Server (NTRS)
Costello, D. J., Jr.; Lin, S.
1984-01-01
A self-synchronizing coding scheme for NASA's TDRSS satellite system is a concatenation of a (2,1,7) inner convolutional code with a (255,223) Reed-Solomon outer code. Both symbol and word synchronization are achieved without requiring that any additional symbols be transmitted. An important parameter which determines the performance of the word sync procedure is the ratio of the decoding failure probability to the undetected error probability. Ideally, the former should be as small as possible compared to the latter when the error correcting capability of the code is exceeded. A computer simulation of a (255,223) Reed-Solomon code was carried out. Results for decoding failure probability and for undetected error probability are tabulated and compared.
Kind, Tobias; Fiehn, Oliver
2007-01-01
Background Structure elucidation of unknown small molecules by mass spectrometry is a challenge despite advances in instrumentation. The first crucial step is to obtain correct elemental compositions. In order to automatically constrain the thousands of possible candidate structures, rules need to be developed to select the most likely and chemically correct molecular formulas. Results An algorithm for filtering molecular formulas is derived from seven heuristic rules: (1) restrictions on the number of elements, (2) LEWIS and SENIOR chemical rules, (3) isotopic patterns, (4) hydrogen/carbon ratios, (5) element ratios of nitrogen, oxygen, phosphorus, and sulphur versus carbon, (6) element ratio probabilities, and (7) presence of trimethylsilylated compounds. Formulas are ranked according to their isotopic patterns and subsequently constrained by presence in public chemical databases. The seven rules were developed on 68,237 existing molecular formulas and were validated in four experiments. First, 432,968 formulas covering five million PubChem database entries were checked for consistency. Only 0.6% of these compounds did not pass all rules. Next, the rules were shown to effectively reduce the eight billion theoretically possible C, H, N, S, O, P formulas up to 2000 Da to only 623 million of the most probable elemental compositions. Third, 6,000 pharmaceutical, toxic and natural compounds were selected from the DrugBank, TSCA and DNP databases. The correct formulas were retrieved as the top hit at 80–99% probability when assuming data acquisition with complete resolution of unique compounds, 5% absolute isotope ratio deviation, and 3 ppm mass accuracy. Last, some exemplary compounds were analyzed by Fourier transform ion cyclotron resonance mass spectrometry and by gas chromatography-time of flight mass spectrometry. In each case, the correct formula was ranked as the top hit when combining the seven rules with database queries.
Conclusion The seven rules enable an automatic exclusion of molecular formulas which are either wrong or which contain unlikely high or low number of elements. The correct molecular formula is assigned with a probability of 98% if the formula exists in a compound database. For truly novel compounds that are not present in databases, the correct formula is found in the first three hits with a probability of 65–81%. Corresponding software and supplemental data are available for downloads from the authors' website. PMID:17389044
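Two of the seven rules, the hydrogen/carbon ratio check (rule 4) and the element-versus-carbon ratio checks (rule 5), lend themselves to a compact sketch. The numeric limits below only approximate the "common range" values reported by the authors and should be treated as assumptions:

```python
# Approximate "common range" element/carbon ratio limits (assumed values,
# roughly following the paper's rules 4 and 5).
COMMON_RANGES = {
    "H": (0.2, 3.1),   # H/C
    "N": (0.0, 1.3),   # N/C
    "O": (0.0, 1.2),   # O/C
    "P": (0.0, 0.3),   # P/C
    "S": (0.0, 0.8),   # S/C
}

def passes_ratio_rules(formula: dict) -> bool:
    """formula maps element symbol -> atom count, e.g. {'C': 6, 'H': 12, 'O': 6}."""
    c = formula.get("C", 0)
    if c == 0:
        return False  # the ratio rules are defined relative to carbon
    for elem, (lo, hi) in COMMON_RANGES.items():
        ratio = formula.get(elem, 0) / c
        if not (lo <= ratio <= hi):
            return False
    return True

print(passes_ratio_rules({"C": 6, "H": 12, "O": 6}))   # glucose-like: True
print(passes_ratio_rules({"C": 1, "H": 10, "N": 5}))   # H/C = 10: False
```

In the actual algorithm such hard filters are combined with isotopic-pattern ranking and database lookups, as the abstract describes.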
On the universality of knot probability ratios
NASA Astrophysics Data System (ADS)
Janse van Rensburg, E. J.; Rechnitzer, A.
2011-04-01
Let pn denote the number of self-avoiding polygons of length n on a regular three-dimensional lattice, and let pn(K) be the number which have knot type K. The probability that a random polygon of length n has knot type K is pn(K)/pn and is known to decay exponentially with length (Sumners and Whittington 1988 J. Phys. A: Math. Gen. 21 1689-94, Pippenger 1989 Discrete Appl. Math. 25 273-8). Little is known rigorously about the asymptotics of pn(K), but there is substantial numerical evidence (Orlandini et al 1998 J. Phys. A: Math. Gen. 31 5953-67, Marcone et al 2007 Phys. Rev. E 75 41105, Rawdon et al 2008 Macromolecules 41 4444-51, Janse van Rensburg and Rechnitzer 2008 J. Phys. A: Math. Theor. 41 105002) that pn(K) grows as $p_n(K) \simeq C_K \mu_\emptyset^n n^{\alpha - 3 + N_K}$ as $n \rightarrow \infty$, where NK is the number of prime components of the knot type K. It is believed that the entropic exponent, α, is universal, while the exponential growth rate, μ∅, is independent of the knot type but varies with the lattice. The amplitude, CK, depends on both the lattice and the knot type. The above asymptotic form implies that the relative probability of a random polygon of length n having prime knot type K over prime knot type L is $\frac{p_n(K)/p_n}{p_n(L)/p_n} = \frac{p_n(K)}{p_n(L)} \simeq \frac{C_K}{C_L}$. In the thermodynamic limit this probability ratio becomes an amplitude ratio; it should be universal and depend only on the knot types K and L. In this communication we examine the universality of these probability ratios for polygons in the simple cubic, face-centred cubic and body-centred cubic lattices. Our results support the hypothesis that these are universal quantities.
For example, we estimate that a long random polygon is approximately 28 times more likely to be a trefoil than be a figure-eight, independent of the underlying lattice, giving an estimate of the intrinsic entropy associated with knot types in closed curves.
NASA Astrophysics Data System (ADS)
Gromov, V. A.; Sharygin, G. S.; Mironov, M. V.
2012-08-01
An interval method of radar signal detection and selection based on a non-energetic polarization parameter, the ellipticity angle, is suggested. The examined method is optimal by the Neyman-Pearson criterion. The probability of correct detection for a preset probability of false alarm is calculated for different signal/noise ratios. Recommendations for optimization of the method are provided.
Properties of strong-coupling magneto-bipolaron qubit in quantum dot under magnetic field
NASA Astrophysics Data System (ADS)
Xu-Fang, Bai; Ying, Zhang; Wuyunqimuge; Eerdunchaolu
2016-07-01
Based on the Pekar-type variational method, we study the energies and wave functions of the ground and first-excited states of a magneto-bipolaron, which is strongly coupled to the LO phonon in a parabolic-potential quantum dot under an applied magnetic field, thus building up a quantum dot magneto-bipolaron qubit. The results show that the oscillation period of the probability density of the two electrons in the qubit decreases with increasing electron-phonon coupling strength α, resonant frequency of the magnetic field ω c, confinement strength of the quantum dot ω 0, and dielectric constant ratio of the medium η; the probability density of the two electrons in the qubit oscillates periodically with increasing time t and angular coordinate φ 2; and the probability of an electron appearing near the center of the quantum dot is larger, while the probability of an electron appearing away from the center of the quantum dot is much smaller. Project supported by the Natural Science Foundation of Hebei Province, China (Grant No. E2013407119) and the Items of Institution of Higher Education Scientific Research of Hebei Province and Inner Mongolia, China (Grant Nos. ZD20131008, Z2015149, Z2015219, and NJZY14189).
Gray, Joshua C; Amlung, Michael T; Palmer, Abraham A; MacKillop, James
2016-09-01
The 27-item Monetary Choice Questionnaire (MCQ; Kirby, Petry, & Bickel, 1999) and 30-item Probability Discounting Questionnaire (PDQ; Madden, Petry, & Johnson, 2009) are widely used, validated measures of preferences for immediate versus delayed rewards and guaranteed versus risky rewards, respectively. The MCQ measures delayed discounting by asking individuals to choose between rewards available immediately and larger rewards available after a delay. The PDQ measures probability discounting by asking individuals to choose between guaranteed rewards and a chance at winning larger rewards. Numerous studies have implicated these measures in addiction and other health behaviors. Unlike typical self-report measures, the MCQ and PDQ generate inferred hyperbolic temporal and probability discounting functions by comparing choice preferences to arrays of functions to which the individual items are preconfigured. This article provides R and SPSS syntax for processing the MCQ and PDQ. Specifically, for the MCQ, the syntax generates k values, consistency of the inferred k, and immediate choice ratios; for the PDQ, the syntax generates h indices, consistency of the inferred h, and risky choice ratios. The syntax is intended to increase the accessibility of these measures, expedite the data processing, and reduce risk for error. © 2016 Society for the Experimental Analysis of Behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarpitta, S.C.; Tu, K.W.; Fisenne, I.M.
1996-10-01
Results are presented from the Fifth Intercomparison of Active, Passive and Continuous Instruments for Radon and Radon Progeny Measurements conducted in the EML radon exposure and test facility in May 1996. In total, thirty-four government, private and academic facilities participated in the exercise, with over 170 passive and electronic devices exposed in the EML test chamber. During the first week of the exercise, passive and continuous measuring devices were exposed (usually in quadruplicate) to about 1,280 Bq m⁻³ of ²²²Rn for 1-7 days. Radon progeny measurements were made during the second week of the exercise. The results indicate that all of the tested devices that measure radon gas performed well and fulfill their intended purpose. The grand mean (GM) ratio of the participants' reported values to the EML values, for all four radon device categories, was 0.99 ± 0.08. Eighty-five percent of all the radon measuring devices that were exposed in the EML radon test chamber were within ±1 standard deviation (SD) of the EML reference values. For the most part, radon progeny measurements were also quite good as compared to the EML values. The GM ratio for the 10 continuous PAEC instruments was 0.90 ± 0.12, with 75% of the devices within 1 SD of the EML reference values. Most of the continuous and integrating electronic instruments used for measuring the PAEC underestimated the EML values by about 10-15%, probably because the concentration of particles onto which the radon progeny were attached was low (1,200-3,800 particles cm⁻³). The equilibrium factor at that particle concentration level was 0.10-0.22.
Harris, Lynne T.; Koepsell, Thomas D.; Haneuse, Sebastien J.; Martin, Diane P.; Ralston, James D.
2013-01-01
OBJECTIVE To study differences in glycemic control and HbA1c testing associated with use of secure electronic patient-provider messaging. We hypothesized that messaging use would be associated with better glycemic control and a higher rate of adherence to HbA1c testing recommendations. RESEARCH DESIGN AND METHODS Retrospective observational study of secure messaging at Group Health, a large nonprofit health care system. Our analysis included adults with diabetes who had registered for access to a shared electronic medical record (SMR) between 2003 and 2006. We fit log-linear regression models, using generalized estimating equations, to estimate the adjusted rate ratio of meeting three indicators of glycemic control (HbA1c <7%, HbA1c <8%, and HbA1c >9%) and HbA1c testing adherence by level of previous messaging use. Multiple imputation and inverse probability weights were used to account for missing data. RESULTS During the study period, 6,301 adults with diabetes registered for access to the SMR. Of these individuals, 74% used messaging at least once during that time. Frequent use of messaging during the previous calendar quarter was associated with a higher rate of good glycemic control (HbA1c <7%: rate ratio, 1.26 [95% CI, 1.15–1.37]) and a higher rate of testing adherence (1.20 [1.15–1.25]). CONCLUSIONS Among SMR users, recent and frequent messaging use was associated with better glycemic control and a higher rate of HbA1c testing adherence. These results suggest that secure messaging may facilitate important processes of care and help some patients to achieve or maintain adequate glycemic control. PMID:23628618
Human variability in mercury toxicokinetics and steady state biomarker ratios.
Bartell, S M; Ponce, R A; Sanga, R N; Faustman, E M
2000-10-01
Regulatory guidelines regarding methylmercury exposure depend on dose-response models relating observed mercury concentrations in maternal blood, cord blood, and maternal hair to developmental neurobehavioral endpoints. Generalized estimates of the maternal blood-to-hair, blood-to-intake, or hair-to-intake ratios are necessary for linking exposure to biomarker-based dose-response models. Most assessments have used point estimates for these ratios; however, significant interindividual and interstudy variability has been reported. For example, a maternal ratio of 250 ppm in hair per mg/L in blood is commonly used in models, but a 1990 WHO review reports mean ratios ranging from 140 to 370 ppm per mg/L. To account for interindividual and interstudy variation in applying these ratios to risk and safety assessment, some researchers have proposed representing the ratios with probability distributions and conducting probabilistic assessments. Such assessments would allow regulators to consider the range and likelihood of mercury exposures in a population, rather than limiting the evaluation to an estimate of the average exposure or a single conservative exposure estimate. However, no consensus exists on the most appropriate distributions for representing these parameters. We discuss published reviews of blood-to-hair and blood-to-intake steady state ratios for mercury and suggest statistical approaches for combining existing datasets to form generalized probability distributions for mercury distribution ratios. Although generalized distributions may not be applicable to all populations, they allow a more informative assessment than point estimates where individual biokinetic information is unavailable. Whereas development and use of these distributions will improve existing exposure and risk models, additional efforts in data generation and model development are required.
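A minimal Monte Carlo sketch of the probabilistic approach the authors advocate, assuming (purely for illustration) a lognormal hair-to-blood ratio whose median sits at the geometric mean of the 140-370 ppm per mg/L range of study means quoted above:

```python
import math
import random

random.seed(1)
# Lognormal ratio distribution: mu places the median at the geometric mean of
# the reported extremes of study means; sigma is an arbitrary illustrative spread.
mu = (math.log(140) + math.log(370)) / 2
sigma = 0.25

hair_ppm = 10.0  # hypothetical measured maternal hair mercury [ppm]
blood = sorted(hair_ppm / random.lognormvariate(mu, sigma) for _ in range(10000))

median_blood = blood[len(blood) // 2]       # central estimate [mg/L]
p95_blood = blood[int(0.95 * len(blood))]   # upper-tail estimate [mg/L]
```

Rather than a single back-calculated blood concentration, the output is a distribution whose tails a regulator can examine, which is the point of replacing the fixed 250 ppm per mg/L ratio.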
Knuuti, Juhani; Ballo, Haitham; Juarez-Orozco, Luis Eduardo; Saraste, Antti; Kolh, Philippe; Rutjes, Anne Wilhelmina Saskia; Jüni, Peter; Windecker, Stephan; Bax, Jeroen J; Wijns, William
2018-05-29
To determine the ranges of pre-test probability (PTP) of coronary artery disease (CAD) in which stress electrocardiogram (ECG), stress echocardiography, coronary computed tomography angiography (CCTA), single-photon emission computed tomography (SPECT), positron emission tomography (PET), and cardiac magnetic resonance (CMR) can reclassify patients into a post-test probability that defines (>85%) or excludes (<15%) anatomically (defined by visual evaluation of invasive coronary angiography [ICA]) and functionally (defined by a fractional flow reserve [FFR] ≤0.8) significant CAD. A broad search in electronic databases until August 2017 was performed. Studies on the aforementioned techniques in >100 patients with stable CAD that utilized either ICA or ICA with FFR measurement as the reference were included. Study-level data were pooled using a hierarchical bivariate random-effects model and likelihood ratios were obtained for each technique. The PTP ranges for each technique to rule-in or rule-out significant CAD were defined. A total of 28 664 patients from 132 studies that used ICA as reference and 4131 from 23 studies using FFR, were analysed. Stress ECG can rule-in and rule-out anatomically significant CAD only when PTP is ≥80% (76-83) and ≤19% (15-25), respectively. Coronary computed tomography angiography is able to rule-in anatomic CAD at a PTP ≥58% (45-70) and rule-out at a PTP ≤80% (65-94). The corresponding PTP values for functionally significant CAD were ≥75% (67-83) and ≤57% (40-72) for CCTA, and ≥71% (59-81) and ≤27% (24-31) for ICA, demonstrating poorer performance of anatomic imaging against FFR. In contrast, functional imaging techniques (PET, stress CMR, and SPECT) are able to rule-in functionally significant CAD when PTP is ≥46-59% and rule-out when PTP is ≤34-57%. The various diagnostic modalities have different optimal performance ranges for the detection of anatomically and functionally significant CAD.
Stress ECG appears to have very limited diagnostic power. The selection of a diagnostic technique for any given patient to rule-in or rule-out CAD should be based on the optimal PTP range for each test and on the assumed reference standard.
Designing Medical Tests: The Other Side of Bayes' Theorem
ERIC Educational Resources Information Center
Ross, Andrew M.
2012-01-01
To compute the probability of having a disease, given a positive test result, is a standard probability problem. The sensitivity and specificity of the test must be given, as well as the prevalence of the disease. We ask how a test-maker might determine the tradeoff between sensitivity and specificity. Adding hypothetical costs for detecting or failing to…
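The "standard problem" side of the tradeoff can be written in a few lines; the sensitivity, specificity, and prevalence values below are hypothetical:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test) by Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A seemingly good test (95% sensitive, 90% specific) applied to a rare
# disease (1% prevalence) still yields a low post-test probability.
ppv = positive_predictive_value(0.95, 0.90, 0.01)  # ~0.088
```

A test-maker trading sensitivity against specificity is, in effect, moving the two terms of the denominator against each other for a fixed prevalence.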
Wang, Yao; Jing, Lei; Ke, Hong-Liang; Hao, Jian; Gao, Qun; Wang, Xiao-Xun; Sun, Qiang; Xu, Zhi-Jun
2016-09-20
Accelerated aging tests under electrical stress are conducted for one type of LED lamp, and the differences between online and offline tests of luminous flux degradation are studied in this paper. The transformation between the two test modes is achieved with an adjustable AC voltage-stabilized power source. Experimental results show that the exponential fit of the luminous flux degradation in online tests has a higher goodness of fit for most lamps, and that the degradation rate of the luminous flux in online tests is always lower than that in offline tests. Bayes estimation and the Weibull distribution are used to calculate the failure probabilities under the accelerated voltages, and the reliability of the lamps under the rated voltage of 220 V is then estimated using the inverse power law model. Results show that the relative error of the lifetime estimation by offline tests increases as the failure probability decreases, and it cannot be neglected when the failure probability is less than 1%. The relative errors of lifetime estimation are 7.9%, 5.8%, 4.2%, and 3.5% at failure probabilities of 0.1%, 1%, 5%, and 10%, respectively.
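The extrapolation step can be sketched under the inverse power law L(V) = A / V^n; the two accelerated-voltage lifetimes below are made-up inputs, not the paper's data:

```python
import math

# Hypothetical characteristic lifetimes measured at two accelerated voltages.
V1, L1 = 260.0, 4000.0   # [V], [h]
V2, L2 = 300.0, 2200.0   # [V], [h]

# Inverse power law L(V) = A / V**n: two points determine the exponent n.
n = math.log(L1 / L2) / math.log(V2 / V1)
A = L1 * V1 ** n
L_rated = A / 220.0 ** n   # extrapolated lifetime at the rated 220 V
```

In the paper the accelerated-stress lifetimes themselves come from Weibull failure-probability fits rather than direct observation, but the voltage-to-lifetime extrapolation takes this form.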
Real-world injury patterns associated with Hybrid III sternal deflections in frontal crash tests.
Brumbelow, Matthew L; Farmer, Charles M
2013-01-01
This study investigated the relationship between the peak sternal deflection measurements recorded by the Hybrid III 50th percentile male anthropometric test device (ATD) in frontal crash tests and injury and fatality outcomes for drivers in field crashes. ATD sternal deflection data were obtained from the Insurance Institute for Highway Safety's 64 km/h, 40 percent overlap crashworthiness evaluation tests for vehicles with seat belt crash tensioners, load limiters, and good-rated structure. The National Automotive Sampling System Crashworthiness Data System (NASS-CDS) was queried for frontal crashes of these vehicles in which the driver was restrained by a seat belt and air bag. Injury probability curves were calculated by frontal crash type using the injuries coded in NASS-CDS and peak ATD sternal deflection data. Fatality Analysis Reporting System (FARS) front-to-front crashes with exactly one driver death were also studied to determine whether the difference in measured sternal deflections for the 2 vehicles was related to the odds of fatality. For center impacts, moderate overlaps, and large overlaps in NASS-CDS, the probability of the driver sustaining an Abbreviated Injury Scale (AIS) score ≥ 3 thoracic injury, or any nonextremity AIS ≥ 3 injury, increased with increasing ATD sternal deflection measured in crash tests. For small overlaps, however, these probabilities decreased with increasing deflection. For FARS crashes, the fatally injured driver more often was in the vehicle with the lower measured deflection in crash tests (55 vs. 45%). After controlling for other factors, a 5-mm difference in measured sternal deflections between the 2 vehicles was associated with a fatality odds ratio of 0.762 for the driver in the vehicle with the greater deflection (95% confidence interval = 0.373, 1.449). 
Restraint systems that reduce peak Hybrid III sternal deflection in a moderate overlap crash test are beneficial in real-world crashes with similar or greater overlap but likely have a disbenefit in crashes with small overlap. This may occur because belt-force limiters employed to control deflections allow excursion that could produce contact with interior vehicle components in small overlaps, given the more oblique occupant motion and potential inboard movement of the air bag. Although based on a limited number of cases, this interpretation is supported by differences in skeletal fracture locations among drivers in crashes with different overlaps. Current restraint systems could be improved by designs that reduce sternal deflection in moderate and large overlap crashes without increasing occupant excursion in small overlap crashes.
σ(χ_c1)/σ(χ_c2) ratio in the k_t-factorization approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baranov, S. P.
2011-02-01
We address the puzzle of the σ(χ_c1)/σ(χ_c2) ratio at the collider and fixed-target experiments. We consider several factors that can affect the predicted ratio of the production rates. In particular, we discuss the effect of χ_cJ polarization, the effect of including next-to-leading-order contributions, and the effect of probably different χ_c1 and χ_c2 wave functions.
He, Hua; McDermott, Michael P.
2012-01-01
Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
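A simplified numerical sketch of the bias and of one correction: here the verification probabilities are taken as known and depend only on the test result, whereas the paper's propensity-score method estimates them from covariates as well; all data are synthetic:

```python
import random

random.seed(0)
p_verify = {1: 0.9, 0: 0.3}  # P(verified | test result), assumed known here

subjects = []  # (test_result, disease_status or None if unverified)
for _ in range(20000):
    disease = random.random() < 0.2
    test = int(random.random() < (0.85 if disease else 0.10))  # sens .85, spec .90
    verified = random.random() < p_verify[test]
    subjects.append((test, disease if verified else None))

# Naive sensitivity from verified subjects only is biased upward, because
# test-positives are verified far more often than test-negatives.
tp = sum(1 for t, d in subjects if d is True and t == 1)
fn = sum(1 for t, d in subjects if d is True and t == 0)
sens_naive = tp / (tp + fn)

# Weighting each verified subject by 1 / P(verified | test) removes the bias
# under the missing-at-random assumption.
wtp = sum(1 / p_verify[t] for t, d in subjects if d is True and t == 1)
wfn = sum(1 / p_verify[t] for t, d in subjects if d is True and t == 0)
sens_ipw = wtp / (wtp + wfn)
```

With these settings the naive estimate lands near 0.94 against a true sensitivity of 0.85, while the weighted estimate recovers the truth; stratifying on an estimated propensity score plays the same role when the verification probabilities are unknown.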
Use and interpretation of logistic regression in habitat-selection studies
Keating, Kim A.; Cherry, Steve
2004-01-01
Logistic regression is an important tool for wildlife habitat-selection studies, but the method frequently has been misapplied due to an inadequate understanding of the logistic model, its interpretation, and the influence of sampling design. To promote better use of this method, we review its application and interpretation under 3 sampling designs: random, case-control, and use-availability. Logistic regression is appropriate for habitat use-nonuse studies employing random sampling and can be used to directly model the conditional probability of use in such cases. Logistic regression also is appropriate for studies employing case-control sampling designs, but careful attention is required to interpret results correctly. Unless bias can be estimated or probability of use is small for all habitats, results of case-control studies should be interpreted as odds ratios, rather than probability of use or relative probability of use. When data are gathered under a use-availability design, logistic regression can be used to estimate approximate odds ratios if probability of use is small, at least on average. More generally, however, logistic regression is inappropriate for modeling habitat selection in use-availability studies. In particular, using logistic regression to fit the exponential model of Manly et al. (2002:100) does not guarantee maximum-likelihood estimates, valid probabilities, or valid likelihoods. We show that the resource selection function (RSF) commonly used for the exponential model is proportional to a logistic discriminant function. Thus, it may be used to rank habitats with respect to probability of use and to identify important habitat characteristics or their surrogates, but it is not guaranteed to be proportional to probability of use. Other problems associated with the exponential model also are discussed. 
We describe an alternative model based on Lancaster and Imbens (1996) that offers a method for estimating conditional probability of use in use-availability studies. Although promising, this model fails to converge to a unique solution in some important situations. Further work is needed to obtain a robust method that is broadly applicable to use-availability studies.
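The caution above, that odds ratios approximate relative probabilities only when probability of use is small, can be checked numerically:

```python
def odds(p):
    return p / (1 - p)

# Rare use: the odds ratio nearly equals the probability ratio.
rare = odds(0.02) / odds(0.01)    # ~2.02, versus 0.02 / 0.01 = 2.0

# Common use: the odds ratio badly overstates the probability ratio.
common = odds(0.8) / odds(0.4)    # 6.0, versus 0.8 / 0.4 = 2.0
```

This is why the authors recommend reporting case-control results as odds ratios unless probability of use is small for all habitats.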
An automatic frequency control loop using overlapping DFTs (Discrete Fourier Transforms)
NASA Technical Reports Server (NTRS)
Aguirre, S.
1988-01-01
An automatic frequency control (AFC) loop is introduced and analyzed in detail. The new scheme is a generalization of the well known Cross Product AFC loop that uses running overlapping discrete Fourier transforms (DFTs) to create a discriminator curve. Linear analysis is included and supported with computer simulations. The algorithm is tested in a low carrier to noise ratio (CNR) dynamic environment, and the probability of loss of lock is estimated via computer simulations. The algorithm discussed is a suboptimum tracking scheme with a larger frequency error variance compared to an optimum strategy, but offers simplicity of implementation and a very low operating threshold CNR. This technique can be applied during the carrier acquisition and re-acquisition process in the Advanced Receiver.
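The core discriminator idea, a cross product of successive overlapping DFT outputs whose phase grows with the frequency error, can be sketched as follows; the DFT length, overlap, and frequency offset are arbitrary illustrative values, not the Advanced Receiver's parameters:

```python
import cmath

N = 64         # DFT length
step = 32      # new samples between successive (50%-overlapping) DFTs
f_err = 0.004  # true frequency offset [cycles/sample]

def dft_bin0(x):
    """Zeroth DFT bin (plain sum) of a complex sequence."""
    return sum(x)

# Complex baseband tone at the (unknown) frequency offset.
x = [cmath.exp(2j * cmath.pi * f_err * n) for n in range(N + step)]
X1 = dft_bin0(x[:N])       # DFT over samples 0 .. N-1
X2 = dft_bin0(x[step:])    # DFT over the overlapping window step .. step+N-1

# Cross product of successive DFT outputs: its phase equals 2*pi*f_err*step,
# giving a discriminator that is odd in the frequency error.
f_hat = cmath.phase(X2 * X1.conjugate()) / (2 * cmath.pi * step)
```

In a real loop this discriminator output drives a numerically controlled oscillator, and noise in the DFT outputs sets the probability of loss of lock that the memo estimates by simulation.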
A channel dynamics model for real-time flood forecasting
Hoos, Anne B.; Koussis, Antonis D.; Beale, Guy O.
1989-01-01
A new channel dynamics scheme (alternative system predictor in real time (ASPIRE)), designed specifically for real-time river flow forecasting, is introduced to reduce uncertainty in the forecast. ASPIRE is a storage routing model that limits the influence of catchment model forecast errors to the downstream station closest to the catchment. Comparisons with the Muskingum routing scheme in field tests suggest that the ASPIRE scheme can provide more accurate forecasts, probably because discharge observations are used to maximum advantage and routing reaches (and model errors in each reach) are uncoupled. Using ASPIRE in conjunction with the Kalman filter did not improve forecast accuracy relative to a deterministic updating procedure. Theoretical analysis suggests that this is due to a large ratio of process noise to measurement noise.
System for monitoring an industrial process and determining sensor status
Gross, K.C.; Hoyer, K.K.; Humenik, K.E.
1995-10-17
A method and system for monitoring an industrial process and a sensor are disclosed. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 17 figs.
System for monitoring an industrial process and determining sensor status
Gross, K.C.; Hoyer, K.K.; Humenik, K.E.
1997-05-13
A method and system are disclosed for monitoring an industrial process and a sensor. The method and system include generating a first and second signal characteristic of an industrial process variable. One of the signals can be an artificial signal generated by an auto regressive moving average technique. After obtaining two signals associated with one physical variable, a difference function is obtained by determining the arithmetic difference between the two pairs of signals over time. A frequency domain transformation is made of the difference function to obtain Fourier modes describing a composite function. A residual function is obtained by subtracting the composite function from the difference function and the residual function (free of nonwhite noise) is analyzed by a statistical probability ratio test. 17 figs.
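A minimal sketch of such a sequential probability ratio test applied to a residual sequence, for Gaussian residuals with an assumed fault mean shift (all parameter values are illustrative, not taken from the patent):

```python
import math

s, m = 1.0, 1.0            # residual std dev under H0; mean shift under H1 (assumed)
alpha, beta = 0.01, 0.01   # target false-alarm and missed-detection rates

# Wald's thresholds on the log-likelihood ratio.
upper = math.log((1 - beta) / alpha)   # cross above: declare a fault
lower = math.log(beta / (1 - alpha))   # cross below: declare healthy

def sprt(residuals):
    llr = 0.0
    for i, r in enumerate(residuals, 1):
        llr += (m / s**2) * (r - m / 2)   # Gaussian mean-shift LLR increment
        if llr >= upper:
            return "fault", i
        if llr <= lower:
            return "healthy", i
    return "undecided", len(residuals)

# A residual stuck at the fault mean triggers quickly; one at zero clears.
print(sprt([1.0] * 50))   # ('fault', 10)
print(sprt([0.0] * 50))   # ('healthy', 10)
```

Removing the Fourier-mode composite first, as the patent describes, is what justifies treating the remaining residual as white noise suitable for this test.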