Science.gov

Sample records for analysis increases accuracy

  1. Multivariate regional frequency analysis: Two new methods to increase the accuracy of measures

    NASA Astrophysics Data System (ADS)

    Abdi, Amin; Hassanzadeh, Yousef; Talatahari, Siamak; Fakheri-Fard, Ahmad; Mirabbasi, Rasoul; Ouarda, Taha B. M. J.

    2017-09-01

    The accurate detection of discordant sites in a heterogeneous region and the estimation of the regional parameters of a statistical distribution are two important issues in multivariate regional frequency analysis. In this study, two new methods are proposed for increasing the accuracy of the multivariate L-moment approach. The first, the optimization-based method (OBM), is used to estimate the best distribution parameters. The second, the rank-based method (RBM), is used in the robust discordancy measure for identifying discordant sites. To assess the performance of the proposed approaches on the heterogeneity measure, real and simulated regions of drought characteristics are considered. The results confirm the usefulness of the new methods in comparison with some well-established techniques.
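
    As a point of reference for the discordancy analysis described above, the sketch below computes the classical Hosking-Wallis discordancy measure D_i from each site's sample L-moment ratios. The rank-based method proposed in the study is a robust alternative to this baseline; the function name and the choice of three L-moment ratios are illustrative assumptions, not the authors' code.

```python
import numpy as np

def discordancy(u):
    """Classical Hosking-Wallis discordancy measure D_i for each site.

    u : array of shape (n_sites, 3) holding the sample L-moment ratios
        (e.g. L-CV, L-skewness, L-kurtosis) of every site in the region.
    Large values of D_i flag sites whose L-moments are discordant with
    the region as a whole.
    """
    u = np.asarray(u, dtype=float)
    n = u.shape[0]
    d = u - u.mean(axis=0)                 # deviations from the regional mean
    S = d.T @ d                            # sums of squares and cross-products
    return np.array([n / 3.0 * di @ np.linalg.solve(S, di) for di in d])
```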

  2. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of the atmospheric measurements used to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.
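
    For readers unfamiliar with the "delta values" mentioned above, the short sketch below shows the standard per mil delta notation used to express an isotope ratio relative to a reference standard. The function name and the example ratios are illustrative and are not taken from the presentation.

```python
def delta_permil(r_sample, r_standard):
    """Standard delta notation for an isotope ratio, in per mil (e.g. d13C):
    the heavy-to-light isotope ratio of the sample relative to that of an
    agreed reference standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Example (illustrative numbers): a sample 13C/12C ratio slightly below the
# standard's gives a negative delta value of about -5 per mil.
print(delta_permil(0.0111803, 0.0112372))
```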

  3. Social Power Increases Interoceptive Accuracy

    PubMed Central

    Moeini-Jazani, Mehrad; Knoeferle, Klemens; de Molière, Laura; Gatti, Elia; Warlop, Luk

    2017-01-01

    Building on recent psychological research showing that power increases self-focused attention, we propose that having power increases accuracy in perception of bodily signals, a phenomenon known as interoceptive accuracy. Consistent with our proposition, participants in a high-power experimental condition outperformed those in the control and low-power conditions in the Schandry heartbeat-detection task. We demonstrate that the effect of power on interoceptive accuracy is not explained by participants' physiological arousal, affective state, or general intention for accuracy. Rather, consistent with our reasoning that experiencing power shifts attentional resources inward, we show that the effect of power on interoceptive accuracy depends on individuals' chronic tendency to focus on their internal sensations. Moreover, we demonstrate that individuals' chronic sense of power predicts interoceptive accuracy similarly to, and independently of, their situationally induced feeling of power. We therefore provide further support for the relation between power and enhanced perception of bodily signals. Our findings offer a novel, psychophysiological perspective on how power might affect judgments and behavior. We highlight and discuss some of these intriguing possibilities for future research. PMID:28824501

  4. Reporting Data with "Over-the-Counter" Data Analysis Supports Increases Educators' Analysis Accuracy

    ERIC Educational Resources Information Center

    Rankin, Jenny Grant

    2013-01-01

    There is extensive research on the benefits of making data-informed decisions to improve learning, but these benefits rely on the data being effectively interpreted. Despite educators' above-average intellect and education levels, there is evidence many educators routinely misinterpret student data. Data analysis problems persist even at districts…

  5. Increasing Accuracy and Increasing Tension in H0

    NASA Astrophysics Data System (ADS)

    Freedman, Wendy L.

    2017-01-01

    The Hubble Constant, H0, provides a measure of the current expansion rate of the universe. In recent decades, there has been a huge increase in the accuracy with which extragalactic distances, and hence H0, can be measured. While the historical factor-of-two uncertainty in H0 has been resolved, a new discrepancy has arisen between the value of H0 measured in the local universe and that estimated from cosmic microwave background measurements, assuming a Lambda cold dark matter model. I will review the advances that have led to the increase in accuracy in measurements of H0, as well as describe exciting future prospects with the James Webb Space Telescope (JWST) and Gaia, which will make it feasible to measure extragalactic distances at percent-level accuracy in the next decade.

  6. Post-transcriptional knowledge in pathway analysis increases the accuracy of phenotypes classification

    PubMed Central

    Alaimo, Salvatore; Giugno, Rosalba; Acunzo, Mario; Veneziano, Dario; Ferro, Alfredo; Pulvirenti, Alfredo

    2016-01-01

    Motivation: Prediction of phenotypes from high-dimensional data is a crucial task in precision biology and medicine. Many technologies employ genomic biomarkers to characterize phenotypes. However, such elements are not sufficient to explain the underlying biology. To improve this, pathway analysis techniques have been proposed. Nevertheless, such methods have shown lack of accuracy in phenotype classification. Results: Here we propose a novel methodology called MITHrIL (Mirna enrIched paTHway Impact anaLysis) for the analysis of signaling pathways, which extends the work of Tarca et al., 2009. MITHrIL augments pathways with missing regulatory elements, such as microRNAs and their interactions with genes. The method takes as input the expression values of genes and/or microRNAs and returns a list of pathways sorted according to their degree of deregulation, together with the corresponding statistical significance (p-values). Our analysis shows that MITHrIL outperforms its competitors even in the worst case. In addition, our method is able to correctly classify sets of tumor samples drawn from TCGA. Availability: MITHrIL is freely available at the following URL: http://alpha.dmi.unict.it/mithril/ PMID:27275538

  7. Using a wearable camera to increase the accuracy of dietary analysis.

    PubMed

    O'Loughlin, Gillian; Cullen, Sarah Jane; McGoldrick, Adrian; O'Connor, Siobhan; Blain, Richard; O'Malley, Shane; Warrington, Giles D

    2013-03-01

    Food diaries are commonly used to assess individual dietary intake in both the general and sporting populations. Despite the widespread use of such diaries, evidence suggests that individuals' self-reported energy intake frequently and substantially underestimates true energy intake. The aim of this study was to examine the use of the Microsoft SenseCam wearable camera to help more accurately report dietary intake within various sporting populations. In 2011, a total of 47 participants were recruited to take part in this study (17 trainee jockeys, 15 elite Gaelic footballers, and 15 healthy, physically active university students). Participants wore a SenseCam for 1 day (from morning until night) while simultaneously keeping a 1-day food diary. Comparisons were made between the energy intake reported in the food diary alone and the food diary in conjunction with information gathered from the SenseCam. Data analysis was conducted in 2012. Mean total energy intake using the diary alone versus the diary plus SenseCam was 2349±827.9 kcal vs 2631±893.4 kcal for the trainee jockeys, 2600±521.9 kcal vs 3191±770.2 kcal for the Gaelic footballers, and 2237±318.5 kcal vs 2487±404.6 kcal for the university students. This represented a difference of 10.7% (p≤0.001), 17.7% (p≤0.001), and 10.1% (p≤0.01) between measurement methods for the trainee jockeys, Gaelic footballers, and university students, respectively. Results from this first-generation study suggest that a more accurate estimate of total energy intake is obtained when the use of a conventional food diary is combined with a SenseCam. Additional information on portion size, forgotten foods, leftovers, and brand names can be obtained by using this novel sensing technology in conjunction with the diary, with improved dietary assessment a potential outcome. Copyright © 2013 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  8. Increasing Deception Detection Accuracy with Strategic Questioning

    ERIC Educational Resources Information Center

    Levine, Timothy R.; Shaw, Allison; Shulman, Hillary C.

    2010-01-01

    One explanation for the finding of slightly above-chance accuracy in detecting deception experiments is limited variance in sender transparency. The current study sought to increase accuracy by increasing variance in sender transparency with strategic interrogative questioning. Participants (total N = 128) observed cheaters and noncheaters who…

  9. Joint analysis of psychiatric disorders increases accuracy of risk prediction for schizophrenia, bipolar disorder, and major depressive disorder.

    PubMed

    Maier, Robert; Moser, Gerhard; Chen, Guo-Bo; Ripke, Stephan; Coryell, William; Potash, James B; Scheftner, William A; Shi, Jianxin; Weissman, Myrna M; Hultman, Christina M; Landén, Mikael; Levinson, Douglas F; Kendler, Kenneth S; Smoller, Jordan W; Wray, Naomi R; Lee, S Hong

    2015-02-05

    Genetic risk prediction has several potential applications in medical research and clinical practice and could be used, for example, to stratify a heterogeneous population of patients by their predicted genetic risk. However, for polygenic traits, such as psychiatric disorders, the accuracy of risk prediction is low. Here we use a multivariate linear mixed model and apply multi-trait genomic best linear unbiased prediction for genetic risk prediction. This method exploits correlations between disorders and simultaneously evaluates individual risk for each disorder. We show that the multivariate approach significantly increases the prediction accuracy for schizophrenia, bipolar disorder, and major depressive disorder in the discovery as well as in independent validation datasets. By grouping SNPs based on genome annotation and fitting multiple random effects, we show that the prediction accuracy could be further improved. The gain in prediction accuracy of the multivariate approach is equivalent to an increase in sample size of 34% for schizophrenia, 68% for bipolar disorder, and 76% for major depressive disorders using single trait models. Because our approach can be readily applied to any number of GWAS datasets of correlated traits, it is a flexible and powerful tool to maximize prediction accuracy. With current sample size, risk predictors are not useful in a clinical setting but already are a valuable research tool, for example in experimental designs comparing cases with high and low polygenic risk. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
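
    As a rough illustration of the multi-trait prediction idea described above, the sketch below computes best linear unbiased predictions of genetic values for two correlated traits from a genomic relationship matrix. It is a minimal dense-algebra version for small examples, not the authors' implementation; the function name, the centred-phenotype assumption, and the known (co)variance matrices are all assumptions made for the sketch.

```python
import numpy as np

def multi_trait_gblup(G, Y, Sg, Se):
    """Minimal multi-trait GBLUP sketch for two traits measured on the same
    n individuals.

    G  : (n, n) genomic relationship matrix
    Y  : (n, 2) centred phenotypes (columns = traits; no fixed effects here)
    Sg : (2, 2) genetic (co)variance matrix between the traits
    Se : (2, 2) residual (co)variance matrix

    Returns the (n, 2) predicted genetic values. A non-zero genetic
    covariance in Sg lets records on one trait sharpen predictions for the
    other, which is the source of the accuracy gain reported above.
    """
    n = G.shape[0]
    V = np.kron(Sg, G) + np.kron(Se, np.eye(n))   # Var(vec(Y)), traits stacked
    C = np.kron(Sg, G)                            # Cov(vec(U), vec(Y))
    y = Y.flatten(order="F")                      # stack trait 1, then trait 2
    u = C @ np.linalg.solve(V, y)                 # BLUP: Cov(u, y) Var(y)^-1 y
    return u.reshape(n, 2, order="F")
```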

  10. Joint Analysis of Psychiatric Disorders Increases Accuracy of Risk Prediction for Schizophrenia, Bipolar Disorder, and Major Depressive Disorder

    PubMed Central

    Maier, Robert; Moser, Gerhard; Chen, Guo-Bo; Ripke, Stephan; Absher, Devin; Agartz, Ingrid; Akil, Huda; Amin, Farooq; Andreassen, Ole A.; Anjorin, Adebayo; Anney, Richard; Arking, Dan E.; Asherson, Philip; Azevedo, Maria H.; Backlund, Lena; Badner, Judith A.; Bailey, Anthony J.; Banaschewski, Tobias; Barchas, Jack D.; Barnes, Michael R.; Barrett, Thomas B.; Bass, Nicholas; Battaglia, Agatino; Bauer, Michael; Bayés, Mònica; Bellivier, Frank; Bergen, Sarah E.; Berrettini, Wade; Betancur, Catalina; Bettecken, Thomas; Biederman, Joseph; Binder, Elisabeth B.; Black, Donald W.; Blackwood, Douglas H.R.; Bloss, Cinnamon S.; Boehnke, Michael; Boomsma, Dorret I.; Breen, Gerome; Breuer, René; Bruggeman, Richard; Buccola, Nancy G.; Buitelaar, Jan K.; Bunney, William E.; Buxbaum, Joseph D.; Byerley, William F.; Caesar, Sian; Cahn, Wiepke; Cantor, Rita M.; Casas, Miguel; Chakravarti, Aravinda; Chambert, Kimberly; Choudhury, Khalid; Cichon, Sven; Cloninger, C. Robert; Collier, David A.; Cook, Edwin H.; Coon, Hilary; Cormand, Bru; Cormican, Paul; Corvin, Aiden; Coryell, William H.; Craddock, Nicholas; Craig, David W.; Craig, Ian W.; Crosbie, Jennifer; Cuccaro, Michael L.; Curtis, David; Czamara, Darina; Daly, Mark J.; Datta, Susmita; Dawson, Geraldine; Day, Richard; De Geus, Eco J.; Degenhardt, Franziska; Devlin, Bernie; Djurovic, Srdjan; Donohoe, Gary J.; Doyle, Alysa E.; Duan, Jubao; Dudbridge, Frank; Duketis, Eftichia; Ebstein, Richard P.; Edenberg, Howard J.; Elia, Josephine; Ennis, Sean; Etain, Bruno; Fanous, Ayman; Faraone, Stephen V.; Farmer, Anne E.; Ferrier, I. Nicol; Flickinger, Matthew; Fombonne, Eric; Foroud, Tatiana; Frank, Josef; Franke, Barbara; Fraser, Christine; Freedman, Robert; Freimer, Nelson B.; Freitag, Christine M.; Friedl, Marion; Frisén, Louise; Gallagher, Louise; Gejman, Pablo V.; Georgieva, Lyudmila; Gershon, Elliot S.; Geschwind, Daniel H.; Giegling, Ina; Gill, Michael; Gordon, Scott D.; Gordon-Smith, Katherine; Green, Elaine K.; Greenwood, Tiffany A.; Grice, Dorothy E.; Gross, Magdalena; Grozeva, Detelina; Guan, Weihua; Gurling, Hugh; De Haan, Lieuwe; Haines, Jonathan L.; Hakonarson, Hakon; Hallmayer, Joachim; Hamilton, Steven P.; Hamshere, Marian L.; Hansen, Thomas F.; Hartmann, Annette M.; Hautzinger, Martin; Heath, Andrew C.; Henders, Anjali K.; Herms, Stefan; Hickie, Ian B.; Hipolito, Maria; Hoefels, Susanne; Holmans, Peter A.; Holsboer, Florian; Hoogendijk, Witte J.; Hottenga, Jouke-Jan; Hultman, Christina M.; Hus, Vanessa; Ingason, Andrés; Ising, Marcus; Jamain, Stéphane; Jones, Ian; Jones, Lisa; Kähler, Anna K.; Kahn, René S.; Kandaswamy, Radhika; Keller, Matthew C.; Kelsoe, John R.; Kendler, Kenneth S.; Kennedy, James L.; Kenny, Elaine; Kent, Lindsey; Kim, Yunjung; Kirov, George K.; Klauck, Sabine M.; Klei, Lambertus; Knowles, James A.; Kohli, Martin A.; Koller, Daniel L.; Konte, Bettina; Korszun, Ania; Krabbendam, Lydia; Krasucki, Robert; Kuntsi, Jonna; Kwan, Phoenix; Landén, Mikael; Långström, Niklas; Lathrop, Mark; Lawrence, Jacob; Lawson, William B.; Leboyer, Marion; Ledbetter, David H.; Lee, Phil H.; Lencz, Todd; Lesch, Klaus-Peter; Levinson, Douglas F.; Lewis, Cathryn M.; Li, Jun; Lichtenstein, Paul; Lieberman, Jeffrey A.; Lin, Dan-Yu; Linszen, Don H.; Liu, Chunyu; Lohoff, Falk W.; Loo, Sandra K.; Lord, Catherine; Lowe, Jennifer K.; Lucae, Susanne; MacIntyre, Donald J.; Madden, Pamela A.F.; Maestrini, Elena; Magnusson, Patrik K.E.; Mahon, Pamela B.; Maier, Wolfgang; Malhotra, Anil K.; Mane, Shrikant M.; Martin, Christa L.; Martin, Nicholas G.; Mattheisen, 
Manuel; Matthews, Keith; Mattingsdal, Morten; McCarroll, Steven A.; McGhee, Kevin A.; McGough, James J.; McGrath, Patrick J.; McGuffin, Peter; McInnis, Melvin G.; McIntosh, Andrew; McKinney, Rebecca; McLean, Alan W.; McMahon, Francis J.; McMahon, William M.; McQuillin, Andrew; Medeiros, Helena; Medland, Sarah E.; Meier, Sandra; Melle, Ingrid; Meng, Fan; Meyer, Jobst; Middeldorp, Christel M.; Middleton, Lefkos; Milanova, Vihra; Miranda, Ana; Monaco, Anthony P.; Montgomery, Grant W.; Moran, Jennifer L.; Moreno-De-Luca, Daniel; Morken, Gunnar; Morris, Derek W.; Morrow, Eric M.; Moskvina, Valentina; Mowry, Bryan J.; Muglia, Pierandrea; Mühleisen, Thomas W.; Müller-Myhsok, Bertram; Murtha, Michael; Myers, Richard M.; Myin-Germeys, Inez; Neale, Benjamin M.; Nelson, Stan F.; Nievergelt, Caroline M.; Nikolov, Ivan; Nimgaonkar, Vishwajit; Nolen, Willem A.; Nöthen, Markus M.; Nurnberger, John I.; Nwulia, Evaristus A.; Nyholt, Dale R.; O’Donovan, Michael C.; O’Dushlaine, Colm; Oades, Robert D.; Olincy, Ann; Oliveira, Guiomar; Olsen, Line; Ophoff, Roel A.; Osby, Urban; Owen, Michael J.; Palotie, Aarno; Parr, Jeremy R.; Paterson, Andrew D.; Pato, Carlos N.; Pato, Michele T.; Penninx, Brenda W.; Pergadia, Michele L.; Pericak-Vance, Margaret A.; Perlis, Roy H.; Pickard, Benjamin S.; Pimm, Jonathan; Piven, Joseph; Posthuma, Danielle; Potash, James B.; Poustka, Fritz; Propping, Peter; Purcell, Shaun M.; Puri, Vinay; Quested, Digby J.; Quinn, Emma M.; Ramos-Quiroga, Josep Antoni; Rasmussen, Henrik B.; Raychaudhuri, Soumya; Rehnström, Karola; Reif, Andreas; Ribasés, Marta; Rice, John P.; Rietschel, Marcella; Ripke, Stephan; Roeder, Kathryn; Roeyers, Herbert; Rossin, Lizzy; Rothenberger, Aribert; Rouleau, Guy; Ruderfer, Douglas; Rujescu, Dan; Sanders, Alan R.; Sanders, Stephan J.; Santangelo, Susan L.; Schachar, Russell; Schalling, Martin; Schatzberg, Alan F.; Scheftner, William A.; Schellenberg, Gerard D.; Scherer, Stephen W.; Schork, Nicholas J.; Schulze, Thomas G.; Schumacher, Johannes; Schwarz, Markus; Scolnick, Edward; Scott, Laura J.; Sergeant, Joseph A.; Shi, Jianxin; Shilling, Paul D.; Shyn, Stanley I.; Silverman, Jeremy M.; Sklar, Pamela; Slager, Susan L.; Smalley, Susan L.; Smit, Johannes H.; Smith, Erin N.; Smoller, Jordan W.; Sonuga-Barke, Edmund J.S.; St Clair, David; State, Matthew; Steffens, Michael; Steinhausen, Hans-Christoph; Strauss, John S.; Strohmaier, Jana; Stroup, T. Scott; Sullivan, Patrick F.; Sutcliffe, James; Szatmari, Peter; Szelinger, Szabocls; Thapar, Anita; Thirumalai, Srinivasa; Thompson, Robert C.; Todorov, Alexandre A.; Tozzi, Federica; Treutlein, Jens; Tzeng, Jung-Ying; Uhr, Manfred; van den Oord, Edwin J.C.G.; Van Grootheest, Gerard; Van Os, Jim; Vicente, Astrid M.; Vieland, Veronica J.; Vincent, John B.; Visscher, Peter M.; Walsh, Christopher A.; Wassink, Thomas H.; Watson, Stanley J.; Weiss, Lauren A.; Weissman, Myrna M.; Werge, Thomas; Wienker, Thomas F.; Wiersma, Durk; Wijsman, Ellen M.; Willemsen, Gonneke; Williams, Nigel; Willsey, A. Jeremy; Witt, Stephanie H.; Wray, Naomi R.; Xu, Wei; Young, Allan H.; Yu, Timothy W.; Zammit, Stanley; Zandi, Peter P.; Zhang, Peng; Zitman, Frans G.; Zöllner, Sebastian; Coryell, William; Potash, James B.; Scheftner, William A.; Shi, Jianxin; Weissman, Myrna M.; Hultman, Christina M.; Landén, Mikael; Levinson, Douglas F.; Kendler, Kenneth S.; Smoller, Jordan W.; Wray, Naomi R.; Lee, S. Hong

    2015-01-01

    Genetic risk prediction has several potential applications in medical research and clinical practice and could be used, for example, to stratify a heterogeneous population of patients by their predicted genetic risk. However, for polygenic traits, such as psychiatric disorders, the accuracy of risk prediction is low. Here we use a multivariate linear mixed model and apply multi-trait genomic best linear unbiased prediction for genetic risk prediction. This method exploits correlations between disorders and simultaneously evaluates individual risk for each disorder. We show that the multivariate approach significantly increases the prediction accuracy for schizophrenia, bipolar disorder, and major depressive disorder in the discovery as well as in independent validation datasets. By grouping SNPs based on genome annotation and fitting multiple random effects, we show that the prediction accuracy could be further improved. The gain in prediction accuracy of the multivariate approach is equivalent to an increase in sample size of 34% for schizophrenia, 68% for bipolar disorder, and 76% for major depressive disorders using single trait models. Because our approach can be readily applied to any number of GWAS datasets of correlated traits, it is a flexible and powerful tool to maximize prediction accuracy. With current sample size, risk predictors are not useful in a clinical setting but already are a valuable research tool, for example in experimental designs comparing cases with high and low polygenic risk. PMID:25640677

  11. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number

  12. Final Technical Report: Increasing Prediction Accuracy.

    SciTech Connect

    King, Bruce Hardison; Hansen, Clifford; Stein, Joshua

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  13. Increasing the Accuracy of Mammogram Interpretation.

    DTIC Science & Technology

    1997-10-01

    [Report documentation page; only fragments of this record are legible.] Annual report (15 Sep 96 - 14 Sep 97) for 'Increasing the Accuracy of Mammogram Interpretation'. The legible portion notes that the automated report writer was brought to a stage suitable for evaluation by a focus group of referring physicians in Year 4, and lists ROC analysis and screening among the subject terms.

  14. Process Analysis Via Accuracy Control

    DTIC Science & Technology

    1982-02-01

    [Report documentation page; only fragments of this record are legible.] 'Process Analysis Via Accuracy Control', February 1982, a report of a national research program prepared for the U.S. Department of Transportation, Maritime Administration. The legible portion notes that examples contained in Appendix C include how accuracy-control ("A/C") process analysis leads to design improvement and how a change in sequence can

  15. Combining Multiple Gyroscope Outputs for Increased Accuracy

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    2003-01-01

    A proposed method of processing the outputs of multiple gyroscopes to increase the accuracy of rate (that is, angular-velocity) readings has been developed theoretically and demonstrated by computer simulation. Although the method is applicable, in principle, to any gyroscopes, it is intended especially for application to gyroscopes that are parts of microelectromechanical systems (MEMS). The method is based on the concept that the collective performance of multiple, relatively inexpensive, nominally identical devices can be better than that of one of the devices considered by itself. The method would make it possible to synthesize the readings of a single, more accurate gyroscope (a virtual gyroscope) from the outputs of a large number of microscopic gyroscopes fabricated together on a single MEMS chip. The big advantage would be that the combination of the MEMS gyroscope array and the processing circuitry needed to implement the method would be smaller, lighter in weight, and less power-hungry, relative to a conventional gyroscope of equal accuracy. The method is one of combining and filtering the digitized outputs of multiple gyroscopes to obtain minimum-variance estimates of rate. In the combining-and-filtering operations, measurement data from the gyroscopes would be weighted and smoothed with respect to each other according to the gain matrix of a minimum-variance filter. According to Kalman-filter theory, the gain matrix of the minimum-variance filter is uniquely specified by the filter covariance, which propagates according to a matrix Riccati equation. The present method incorporates an exact analytical solution of this equation.
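
    A static, simplified version of the combining step is sketched below: inverse-variance weighting of simultaneous readings from several gyroscopes gives the minimum-variance combination for uncorrelated noise. The actual technique described above is dynamic, with weights coming from a Kalman-filter gain governed by a matrix Riccati equation; the function name and the uncorrelated-noise assumption are simplifications made for this sketch.

```python
import numpy as np

def fuse_gyro_rates(readings, variances):
    """Minimum-variance fusion of one epoch of rate readings from several
    nominally identical gyroscopes, assuming uncorrelated measurement noise.

    readings  : (n_gyros,) current rate outputs (e.g. rad/s)
    variances : (n_gyros,) noise variance of each gyroscope
    Returns the fused rate estimate and its variance, which is smaller than
    the variance of any single gyroscope.
    """
    readings = np.asarray(readings, dtype=float)
    inv_var = 1.0 / np.asarray(variances, dtype=float)
    weights = inv_var / inv_var.sum()          # inverse-variance weights
    fused_rate = weights @ readings
    fused_variance = 1.0 / inv_var.sum()
    return fused_rate, fused_variance
```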

  16. Portable, high intensity isotopic neutron source provides increased experimental accuracy

    NASA Technical Reports Server (NTRS)

    Mohr, W. C.; Stewart, D. C.; Wahlgren, M. A.

    1968-01-01

    A small, portable, high-intensity isotopic neutron source combines twelve curium-americium-beryllium sources. The high intensity of neutrons, with a flux that decreases slowly at a known rate, provides increased experimental accuracy.

  17. Increase in error threshold for quasispecies by heterogeneous replication accuracy

    NASA Astrophysics Data System (ADS)

    Aoki, Kazuhiro; Furusawa, Mitsuru

    2003-09-01

    In this paper we investigate the error threshold for quasispecies with heterogeneous replication accuracy. We show that the coexistence of error-free and error-prone polymerases can greatly increase the error threshold without a catastrophic loss of genetic information. We also show that the error threshold is influenced by the number of replicores. Our research suggests that quasispecies with heterogeneous replication accuracy can reduce the genetic cost of selective evolution while still producing a variety of mutants.
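
    For comparison with the heterogeneous-accuracy result above, the sketch below evaluates the classical single-polymerase (Eigen) error threshold, which bounds the genome length that can be maintained at a given per-base copying fidelity. The approximation and the variable names are assumptions of the sketch, not the model analyzed in the paper.

```python
from math import log

def max_genome_length(q, sigma):
    """Classical Eigen error-threshold estimate: the longest genome that a
    quasispecies can maintain when every base is copied with fidelity q and
    the master sequence has selective advantage sigma.

    Derived from the survival condition q**L * sigma > 1, using the
    approximation -ln(q) ~ 1 - q for q close to 1.
    """
    return log(sigma) / (1.0 - q)

# Example: fidelity 0.999 per base and a 10-fold selective advantage
# support genomes up to roughly 2300 bases.
print(max_genome_length(0.999, 10.0))
```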

  18. Increasing shape modelling accuracy by adjusting for subject positioning: an application to the analysis of radiographic proximal femur symmetry using data from the Osteoarthritis Initiative.

    PubMed

    Lindner, C; Wallis, G A; Cootes, T F

    2014-04-01

    Adjusting for subject positioning increases the accuracy of predicting the shape of the contra-lateral hip.

  19. Modeling Linkage Disequilibrium Increases Accuracy of Polygenic Risk Scores.

    PubMed

    Vilhjálmsson, Bjarni J; Yang, Jian; Finucane, Hilary K; Gusev, Alexander; Lindström, Sara; Ripke, Stephan; Genovese, Giulio; Loh, Po-Ru; Bhatia, Gaurav; Do, Ron; Hayeck, Tristan; Won, Hong-Hee; Kathiresan, Sekar; Pato, Michele; Pato, Carlos; Tamimi, Rulla; Stahl, Eli; Zaitlen, Noah; Pasaniuc, Bogdan; Belbin, Gillian; Kenny, Eimear E; Schierup, Mikkel H; De Jager, Philip; Patsopoulos, Nikolaos A; McCarroll, Steve; Daly, Mark; Purcell, Shaun; Chasman, Daniel; Neale, Benjamin; Goddard, Michael; Visscher, Peter M; Kraft, Peter; Patterson, Nick; Price, Alkes L

    2015-10-01

    Polygenic risk scores have shown great promise in predicting complex disease risk and will become more accurate as training sample sizes increase. The standard approach for calculating risk scores involves linkage disequilibrium (LD)-based marker pruning and applying a p value threshold to association statistics, but this discards information and can reduce predictive accuracy. We introduce LDpred, a method that infers the posterior mean effect size of each marker by using a prior on effect sizes and LD information from an external reference panel. Theory and simulations show that LDpred outperforms the approach of pruning followed by thresholding, particularly at large sample sizes. Accordingly, predicted R2 increased from 20.1% to 25.3% in a large schizophrenia dataset and from 9.8% to 12.0% in a large multiple sclerosis dataset. A similar relative improvement in accuracy was observed for three additional large disease datasets and for non-European schizophrenia samples. The advantage of LDpred over existing methods will grow as sample sizes increase. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
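
    The closed-form special case of the posterior-mean idea described above, an infinitesimal prior (often called LDpred-inf), can be written in a few lines. The full LDpred method uses a point-normal prior sampled with MCMC, so the sketch below, including its function name and per-region treatment, is only an illustrative simplification.

```python
import numpy as np

def ldpred_inf(beta_marginal, D, n, h2):
    """Posterior mean effect sizes under an infinitesimal prior for one
    region of m markers (the closed-form LDpred-inf special case).

    beta_marginal : (m,) marginal GWAS effect estimates
    D             : (m, m) LD (correlation) matrix from a reference panel
    n             : GWAS sample size
    h2            : heritability attributed to these m markers
    """
    m = len(beta_marginal)
    shrinkage = (m / (n * h2)) * np.eye(m)   # stronger shrinkage when n*h2 is small
    return np.linalg.solve(D + shrinkage, beta_marginal)
```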

  1. Modeling Linkage Disequilibrium Increases Accuracy of Polygenic Risk Scores

    PubMed Central

    Vilhjálmsson, Bjarni J.; Yang, Jian; Finucane, Hilary K.; Gusev, Alexander; Lindström, Sara; Ripke, Stephan; Genovese, Giulio; Loh, Po-Ru; Bhatia, Gaurav; Do, Ron; Hayeck, Tristan; Won, Hong-Hee; Ripke, Stephan; Neale, Benjamin M.; Corvin, Aiden; Walters, James T.R.; Farh, Kai-How; Holmans, Peter A.; Lee, Phil; Bulik-Sullivan, Brendan; Collier, David A.; Huang, Hailiang; Pers, Tune H.; Agartz, Ingrid; Agerbo, Esben; Albus, Margot; Alexander, Madeline; Amin, Farooq; Bacanu, Silviu A.; Begemann, Martin; Belliveau, Richard A.; Bene, Judit; Bergen, Sarah E.; Bevilacqua, Elizabeth; Bigdeli, Tim B.; Black, Donald W.; Bruggeman, Richard; Buccola, Nancy G.; Buckner, Randy L.; Byerley, William; Cahn, Wiepke; Cai, Guiqing; Campion, Dominique; Cantor, Rita M.; Carr, Vaughan J.; Carrera, Noa; Catts, Stanley V.; Chambert, Kimberly D.; Chan, Raymond C.K.; Chen, Ronald Y.L.; Chen, Eric Y.H.; Cheng, Wei; Cheung, Eric F.C.; Chong, Siow Ann; Cloninger, C. Robert; Cohen, David; Cohen, Nadine; Cormican, Paul; Craddock, Nick; Crowley, James J.; Curtis, David; Davidson, Michael; Davis, Kenneth L.; Degenhardt, Franziska; Del Favero, Jurgen; DeLisi, Lynn E.; Demontis, Ditte; Dikeos, Dimitris; Dinan, Timothy; Djurovic, Srdjan; Donohoe, Gary; Drapeau, Elodie; Duan, Jubao; Dudbridge, Frank; Durmishi, Naser; Eichhammer, Peter; Eriksson, Johan; Escott-Price, Valentina; Essioux, Laurent; Fanous, Ayman H.; Farrell, Martilias S.; Frank, Josef; Franke, Lude; Freedman, Robert; Freimer, Nelson B.; Friedl, Marion; Friedman, Joseph I.; Fromer, Menachem; Genovese, Giulio; Georgieva, Lyudmila; Gershon, Elliot S.; Giegling, Ina; Giusti-Rodrguez, Paola; Godard, Stephanie; Goldstein, Jacqueline I.; Golimbet, Vera; Gopal, Srihari; Gratten, Jacob; Grove, Jakob; de Haan, Lieuwe; Hammer, Christian; Hamshere, Marian L.; Hansen, Mark; Hansen, Thomas; Haroutunian, Vahram; Hartmann, Annette M.; Henskens, Frans A.; Herms, Stefan; Hirschhorn, Joel N.; Hoffmann, Per; Hofman, Andrea; Hollegaard, Mads V.; Hougaard, David M.; Ikeda, Masashi; Joa, Inge; Julia, Antonio; Kahn, Rene S.; Kalaydjieva, Luba; Karachanak-Yankova, Sena; Karjalainen, Juha; Kavanagh, David; Keller, Matthew C.; Kelly, Brian J.; Kennedy, James L.; Khrunin, Andrey; Kim, Yunjung; Klovins, Janis; Knowles, James A.; Konte, Bettina; Kucinskas, Vaidutis; Kucinskiene, Zita Ausrele; Kuzelova-Ptackova, Hana; Kahler, Anna K.; Laurent, Claudine; Keong, Jimmy Lee Chee; Lee, S. Hong; Legge, Sophie E.; Lerer, Bernard; Li, Miaoxin; Li, Tao; Liang, Kung-Yee; Lieberman, Jeffrey; Limborska, Svetlana; Loughland, Carmel M.; Lubinski, Jan; Lnnqvist, Jouko; Macek, Milan; Magnusson, Patrik K.E.; Maher, Brion S.; Maier, Wolfgang; Mallet, Jacques; Marsal, Sara; Mattheisen, Manuel; Mattingsdal, Morten; McCarley, Robert W.; McDonald, Colm; McIntosh, Andrew M.; Meier, Sandra; Meijer, Carin J.; Melegh, Bela; Melle, Ingrid; Mesholam-Gately, Raquelle I.; Metspalu, Andres; Michie, Patricia T.; Milani, Lili; Milanova, Vihra; Mokrab, Younes; Morris, Derek W.; Mors, Ole; Mortensen, Preben B.; Murphy, Kieran C.; Murray, Robin M.; Myin-Germeys, Inez; Mller-Myhsok, Bertram; Nelis, Mari; Nenadic, Igor; Nertney, Deborah A.; Nestadt, Gerald; Nicodemus, Kristin K.; Nikitina-Zake, Liene; Nisenbaum, Laura; Nordin, Annelie; O’Callaghan, Eadbhard; O’Dushlaine, Colm; O’Neill, F. 
Anthony; Oh, Sang-Yun; Olincy, Ann; Olsen, Line; Van Os, Jim; Pantelis, Christos; Papadimitriou, George N.; Papiol, Sergi; Parkhomenko, Elena; Pato, Michele T.; Paunio, Tiina; Pejovic-Milovancevic, Milica; Perkins, Diana O.; Pietilinen, Olli; Pimm, Jonathan; Pocklington, Andrew J.; Powell, John; Price, Alkes; Pulver, Ann E.; Purcell, Shaun M.; Quested, Digby; Rasmussen, Henrik B.; Reichenberg, Abraham; Reimers, Mark A.; Richards, Alexander L.; Roffman, Joshua L.; Roussos, Panos; Ruderfer, Douglas M.; Salomaa, Veikko; Sanders, Alan R.; Schall, Ulrich; Schubert, Christian R.; Schulze, Thomas G.; Schwab, Sibylle G.; Scolnick, Edward M.; Scott, Rodney J.; Seidman, Larry J.; Shi, Jianxin; Sigurdsson, Engilbert; Silagadze, Teimuraz; Silverman, Jeremy M.; Sim, Kang; Slominsky, Petr; Smoller, Jordan W.; So, Hon-Cheong; Spencer, Chris C.A.; Stahl, Eli A.; Stefansson, Hreinn; Steinberg, Stacy; Stogmann, Elisabeth; Straub, Richard E.; Strengman, Eric; Strohmaier, Jana; Stroup, T. Scott; Subramaniam, Mythily; Suvisaari, Jaana; Svrakic, Dragan M.; Szatkiewicz, Jin P.; Sderman, Erik; Thirumalai, Srinivas; Toncheva, Draga; Tooney, Paul A.; Tosato, Sarah; Veijola, Juha; Waddington, John; Walsh, Dermot; Wang, Dai; Wang, Qiang; Webb, Bradley T.; Weiser, Mark; Wildenauer, Dieter B.; Williams, Nigel M.; Williams, Stephanie; Witt, Stephanie H.; Wolen, Aaron R.; Wong, Emily H.M.; Wormley, Brandon K.; Wu, Jing Qin; Xi, Hualin Simon; Zai, Clement C.; Zheng, Xuebin; Zimprich, Fritz; Wray, Naomi R.; Stefansson, Kari; Visscher, Peter M.; Adolfsson, Rolf; Andreassen, Ole A.; Blackwood, Douglas H.R.; Bramon, Elvira; Buxbaum, Joseph D.; Børglum, Anders D.; Cichon, Sven; Darvasi, Ariel; Domenici, Enrico; Ehrenreich, Hannelore; Esko, Tonu; Gejman, Pablo V.; Gill, Michael; Gurling, Hugh; Hultman, Christina M.; Iwata, Nakao; Jablensky, Assen V.; Jonsson, Erik G.; Kendler, Kenneth S.; Kirov, George; Knight, Jo; Lencz, Todd; Levinson, Douglas F.; Li, Qingqin S.; Liu, Jianjun; Malhotra, Anil K.; McCarroll, Steven A.; McQuillin, Andrew; Moran, Jennifer L.; Mortensen, Preben B.; Mowry, Bryan J.; Nthen, Markus M.; Ophoff, Roel A.; Owen, Michael J.; Palotie, Aarno; Pato, Carlos N.; Petryshen, Tracey L.; Posthuma, Danielle; Rietschel, Marcella; Riley, Brien P.; Rujescu, Dan; Sham, Pak C.; Sklar, Pamela; St. Clair, David; Weinberger, Daniel R.; Wendland, Jens R.; Werge, Thomas; Daly, Mark J.; Sullivan, Patrick F.; O’Donovan, Michael C.; Kraft, Peter; Hunter, David J.; Adank, Muriel; Ahsan, Habibul; Aittomäki, Kristiina; Baglietto, Laura; Berndt, Sonja; Blomquist, Carl; Canzian, Federico; Chang-Claude, Jenny; Chanock, Stephen J.; Crisponi, Laura; Czene, Kamila; Dahmen, Norbert; Silva, Isabel dos Santos; Easton, Douglas; Eliassen, A. 
Heather; Figueroa, Jonine; Fletcher, Olivia; Garcia-Closas, Montserrat; Gaudet, Mia M.; Gibson, Lorna; Haiman, Christopher A.; Hall, Per; Hazra, Aditi; Hein, Rebecca; Henderson, Brian E.; Hofman, Albert; Hopper, John L.; Irwanto, Astrid; Johansson, Mattias; Kaaks, Rudolf; Kibriya, Muhammad G.; Lichtner, Peter; Lindström, Sara; Liu, Jianjun; Lund, Eiliv; Makalic, Enes; Meindl, Alfons; Meijers-Heijboer, Hanne; Müller-Myhsok, Bertram; Muranen, Taru A.; Nevanlinna, Heli; Peeters, Petra H.; Peto, Julian; Prentice, Ross L.; Rahman, Nazneen; Sánchez, María José; Schmidt, Daniel F.; Schmutzler, Rita K.; Southey, Melissa C.; Tamimi, Rulla; Travis, Ruth; Turnbull, Clare; Uitterlinden, Andre G.; van der Luijt, Rob B.; Waisfisz, Quinten; Wang, Zhaoming; Whittemore, Alice S.; Yang, Rose; Zheng, Wei; Kathiresan, Sekar; Pato, Michele; Pato, Carlos; Tamimi, Rulla; Stahl, Eli; Zaitlen, Noah; Pasaniuc, Bogdan; Belbin, Gillian; Kenny, Eimear E.; Schierup, Mikkel H.; De Jager, Philip; Patsopoulos, Nikolaos A.; McCarroll, Steve; Daly, Mark; Purcell, Shaun; Chasman, Daniel; Neale, Benjamin; Goddard, Michael; Visscher, Peter M.; Kraft, Peter; Patterson, Nick; Price, Alkes L.

    2015-01-01

    Polygenic risk scores have shown great promise in predicting complex disease risk and will become more accurate as training sample sizes increase. The standard approach for calculating risk scores involves linkage disequilibrium (LD)-based marker pruning and applying a p value threshold to association statistics, but this discards information and can reduce predictive accuracy. We introduce LDpred, a method that infers the posterior mean effect size of each marker by using a prior on effect sizes and LD information from an external reference panel. Theory and simulations show that LDpred outperforms the approach of pruning followed by thresholding, particularly at large sample sizes. Accordingly, predicted R2 increased from 20.1% to 25.3% in a large schizophrenia dataset and from 9.8% to 12.0% in a large multiple sclerosis dataset. A similar relative improvement in accuracy was observed for three additional large disease datasets and for non-European schizophrenia samples. The advantage of LDpred over existing methods will grow as sample sizes increase. PMID:26430803

  2. Cooperation between Referees and Authors Increases Peer Review Accuracy

    PubMed Central

    Leek, Jeffrey T.; Taub, Margaret A.; Pineda, Fernando J.

    2011-01-01

    Peer review is fundamentally a cooperative process between scientists in a community who agree to review each other's work in an unbiased fashion. Peer review is the foundation for decisions concerning publication in journals, awarding of grants, and academic promotion. Here we perform a laboratory study of open and closed peer review based on an online game. We show that when reviewer behavior was made public under open review, reviewers were rewarded for refereeing and formed significantly more cooperative interactions (13% increase in cooperation, P = 0.018). We also show that referees and authors who participated in cooperative interactions had an 11% higher reviewing accuracy rate (P = 0.016). Our results suggest that increasing cooperation in the peer review process can lead to a decreased risk of reviewing errors. PMID:22096506

  3. Using Transponders on the Moon to Increase Accuracy of GPS

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin; Chui, Talso

    2008-01-01

    It has been proposed to place laser or radio transponders at suitably chosen locations on the Moon to increase the accuracy achievable using the Global Positioning System (GPS) or other satellite-based positioning system. The accuracy of GPS position measurements depends on the accuracy of determination of the ephemerides of the GPS satellites. These ephemerides are determined by means of ranging to and from Earth-based stations and consistency checks among the satellites. Unfortunately, ranging to and from Earth is subject to errors caused by atmospheric effects, notably including unpredictable variations in refraction. The proposal is based on exploitation of the fact that ranging between a GPS satellite and another object outside the atmosphere is not subject to error-inducing atmospheric effects. The Moon is such an object and is a convenient place for a ranging station. The ephemeris of the Moon is well known and, unlike a GPS satellite, the Moon is massive enough that its orbit is not measurably affected by the solar wind and solar radiation. According to the proposal, each GPS satellite would repeatedly send a short laser or radio pulse toward the Moon and the transponder(s) would respond by sending back a pulse and delay information. The GPS satellite could then compute its distance from the known position(s) of the transponder(s) on the Moon. Because the same hemisphere of the Moon faces the Earth continuously, any transponders placed there would remain continuously or nearly continuously accessible to GPS satellites, and so only a relatively small number of transponders would be needed to provide continuous coverage. Assuming that the transponders would depend on solar power, it would be desirable to use at least two transponders, placed at diametrically opposite points on the edges of the Moon disk as seen from Earth, so that all or most of the time, at least one of them would be in sunlight.
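
    The ranging computation described above reduces to a two-way time-of-flight calculation. The sketch below shows that arithmetic with illustrative names, ignoring clock errors and relativistic corrections.

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def transponder_range(t_send, t_receive, transponder_delay):
    """Distance from a GPS satellite to a lunar transponder from two-way
    time of flight: the satellite records when the pulse was sent and when
    the reply arrived, and the transponder reports its internal turnaround
    delay (all times in seconds, on a common time scale).
    """
    time_of_flight = (t_receive - t_send) - transponder_delay
    return C * time_of_flight / 2.0
```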

  4. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational OD produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multiplate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  5. Is it Possible to increase the Accuracy of Environmental Measurements?

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Sonobe, Jun

    2017-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which has resulted in substantial temperature increases. Many countries have entered into agreements to limit and/or decrease GHG emissions. This requires precise measurements by region to clearly evaluate GHG emissions, sinks, and evolution, as well as mitigation strategies. High-precision measurements are a key requirement for studying and evaluating the global carbon cycle and its effect on climate change. Calibration of the analytical instruments used to make atmospheric measurements is often done using standards prepared in synthetic air. There are significant differences between synthetic air and natural air which introduce bias into some measurements; therefore, natural air is preferred. This presentation will examine the natural air and isotopic mixture preparation process and the role of precisely characterized materials, highlighting the stability of isotopic mixtures in natural air. Emphasis will focus on adjustment of isotope ratios to more closely bracket sample types without the reliance on combusting naturally occurring materials, thereby improving analytical accuracy.

  6. EEG channels reduction using PCA to increase XGBoost's accuracy for stroke detection

    NASA Astrophysics Data System (ADS)

    Fitriah, N.; Wijaya, S. K.; Fanany, M. I.; Badri, C.; Rezal, M.

    2017-07-01

    In Indonesia, based on the results of Basic Health Research 2013, the prevalence of stroke had increased from 8.3‰ (2007) to 12.1‰ (2013). These days, some researchers are using electroencephalography (EEG) results as another option to detect stroke, in addition to CT scan imaging as the gold standard. A previous study on data from stroke and healthy patients at the National Brain Center Hospital (RS PON) used the Brain Symmetry Index (BSI), Delta-Alpha Ratio (DAR), and Delta-Theta-Alpha-Beta Ratio (DTABR) as features for classification by an Extreme Learning Machine (ELM). The study achieved 85% accuracy, with sensitivity above 86%, for acute ischemic stroke detection. Using EEG data means dealing with many data dimensions, which can reduce classifier accuracy (the curse of dimensionality). Principal Component Analysis (PCA) can reduce dimensionality and computation cost without decreasing classification accuracy. XGBoost, a scalable tree boosting classifier, can solve real-world scale problems (the Higgs Boson and Allstate datasets) using a minimal amount of resources. This paper reuses the same data from RS PON and the features from previous research, preprocessed with PCA and classified with XGBoost, to increase the accuracy with fewer electrodes. Using a specific, smaller set of electrodes improved the accuracy of stroke detection. Future work will examine algorithms other than PCA to obtain higher accuracy with a smaller number of channels.
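
    A minimal version of the pipeline described above (PCA for dimensionality reduction followed by an XGBoost classifier) might look like the sketch below. The RS PON data and the BSI/DAR/DTABR feature extraction are not reproduced, so the random placeholder arrays, the chosen number of components, and the classifier settings are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Placeholder data standing in for EEG-derived features (e.g. BSI, DAR,
# DTABR per channel) and stroke / healthy labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pca = PCA(n_components=10).fit(X_train)     # reduce dimensionality first
clf = XGBClassifier(n_estimators=100, max_depth=3)
clf.fit(pca.transform(X_train), y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(pca.transform(X_test))))
```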

  7. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit using the LP150Q lunar gravity model. Accuracy of the LP150Q model is poorer in the extended mission elliptical orbit. Later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission. Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy. Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling; implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating the use of analytic and predicted attitude modeling to improve full-Sun prediction accuracy. Comparison of FDF ephemeris files to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  8. Likable co-witnesses increase eyewitness accuracy and decrease suggestibility.

    PubMed

    Kieckhaefer, Jenna M; Wright, Daniel B

    2015-01-01

    This study examines the impact of likability on memory accuracy and memory conformity between two previously unacquainted individuals. After viewing a crime, eyewitnesses often talk to one another and may find each other likable or dislikable. One hundred twenty-seven undergraduate students arrived at the laboratory with an unknown confederate and were assigned to a likability condition (i.e., control, likable or dislikable). Together, the pair viewed pictures and was then tested on their memory for those pictures in such a way that the participant knew the confederate's response. Thus, the participant's response could be influenced both by his or her own memory and by the answers of the confederate. Participants in the likable condition were more accurate and less influenced by the confederate, compared with the other conditions. Results are discussed in relation to research that shows people are more influenced by friends than strangers and in relation to establishing positive rapport in forensic interviewing.

  9. Interspecies translation of disease networks increases robustness and predictive accuracy.

    PubMed

    Anvar, Seyed Yahya; Tucker, Allan; Vinciotti, Veronica; Venema, Andrea; van Ommen, Gert-Jan B; van der Maarel, Silvere M; Raz, Vered; 't Hoen, Peter A C

    2011-11-01

    Gene regulatory networks give important insights into the mechanisms underlying physiology and pathophysiology. The derivation of gene regulatory networks from high-throughput expression data via machine learning strategies is problematic as the reliability of these models is often compromised by limited and highly variable samples, heterogeneity in transcript isoforms, noise, and other artifacts. Here, we develop a novel algorithm, dubbed Dandelion, in which we construct and train intraspecies Bayesian networks that are translated and assessed on independent test sets from other species in a reiterative procedure. The interspecies disease networks are subjected to multi-layers of analysis and evaluation, leading to the identification of the most consistent relationships within the network structure. In this study, we demonstrate the performance of our algorithms on datasets from animal models of oculopharyngeal muscular dystrophy (OPMD) and patient materials. We show that the interspecies network of genes coding for the proteasome provide highly accurate predictions on gene expression levels and disease phenotype. Moreover, the cross-species translation increases the stability and robustness of these networks. Unlike existing modeling approaches, our algorithms do not require assumptions on notoriously difficult one-to-one mapping of protein orthologues or alternative transcripts and can deal with missing data. We show that the identified key components of the OPMD disease network can be confirmed in an unseen and independent disease model. This study presents a state-of-the-art strategy in constructing interspecies disease networks that provide crucial information on regulatory relationships among genes, leading to better understanding of the disease molecular mechanisms.

  10. Analysis of initial orbit determination accuracy

    NASA Astrophysics Data System (ADS)

    Vananti, Alessandro; Schildknecht, Thomas

    The Astronomical Institute of the University of Bern (AIUB) is conducting several search campaigns for orbital debris. The debris objects are discovered during systematic survey observations. In general only a short observation arc, or tracklet, is available for most of these objects. From this discovery tracklet a first orbit determination is computed in order to be able to find the object again in subsequent follow-up observations. The additional observations are used in the orbit improvement process to obtain accurate orbits to be included in a catalogue. In this paper, the accuracy of the initial orbit determination is analyzed. This depends on a number of factors: tracklet length, number of observations, type of orbit, astrometric error, and observation geometry. The latter is characterized by both the position of the object along its orbit and the location of the observing station. Different positions involve different distances from the target object and a different observing angle with respect to its orbital plane and trajectory. The present analysis aims at optimizing the geometry of the discovery observations depending on the considered orbit.

  11. Are the surgeon's movements repeatable? An analysis of the feasibility and expediency of implementing support procedures guiding the surgical tools and increasing motion accuracy during the performance of stereotypical movements by the surgeon

    PubMed Central

    Podsędkowski, Leszek Robert; Moll, Jacek; Moll, Maciej

    2014-01-01

    The developments in surgical robotics suggest that it will be possible to entrust surgical robots with a wider range of tasks. So far, it has not been possible to automate the surgery procedures related to soft tissue. Thus, the objective of the conducted studies was to confirm the hypothesis that the surgery telemanipulator can be equipped with certain routines supporting the surgeon in leading the surgical tools and increasing motion accuracy during stereotypical movements. As the first step in facilitating the surgery, an algorithm will be developed which will concurrently provide automation and allow the surgeon to maintain full control over the slave robot. The algorithm will assist the surgeon in performing typical movement sequences. This kind of support must, however, be preceded by determining the reference points for accurately defining the position of the stitched tissue. It is in relation to these points that the tool's trajectory will be created, along which the master manipulator will guide the surgeon's hand. The paper presents the first stage, concerning the selection of movements for which the support algorithm will be used. The work also contains an analysis of surgical movement repeatability. The suturing movement was investigated in detail by experimental research in order to determine motion repeatability and verify the position of the stitched tissue. Tool trajectory was determined by a motion capture stereovision system. The study has demonstrated that the suturing movement could be considered as repeatable; however, the trajectories performed by different surgeons exhibit some individual characteristics. PMID:26336404

  12. Are the surgeon's movements repeatable? An analysis of the feasibility and expediency of implementing support procedures guiding the surgical tools and increasing motion accuracy during the performance of stereotypical movements by the surgeon.

    PubMed

    Podsędkowski, Leszek Robert; Moll, Jacek; Moll, Maciej; Frącczak, Łukasz

    2014-03-01

    The developments in surgical robotics suggest that it will be possible to entrust surgical robots with a wider range of tasks. So far, it has not been possible to automate the surgery procedures related to soft tissue. Thus, the objective of the conducted studies was to confirm the hypothesis that the surgery telemanipulator can be equipped with certain routines supporting the surgeon in leading the surgical tools and increasing motion accuracy during stereotypical movements. As the first step in facilitating the surgery, an algorithm will be developed which will concurrently provide automation and allow the surgeon to maintain full control over the slave robot. The algorithm will assist the surgeon in performing typical movement sequences. This kind of support must, however, be preceded by determining the reference points for accurately defining the position of the stitched tissue. It is in relation to these points that the tool's trajectory will be created, along which the master manipulator will guide the surgeon's hand. The paper presents the first stage, concerning the selection of movements for which the support algorithm will be used. The work also contains an analysis of surgical movement repeatability. The suturing movement was investigated in detail by experimental research in order to determine motion repeatability and verify the position of the stitched tissue. Tool trajectory was determined by a motion capture stereovision system. The study has demonstrated that the suturing movement could be considered as repeatable; however, the trajectories performed by different surgeons exhibit some individual characteristics.

  13. Holter triage ambulatory ECG analysis. Accuracy and time efficiency.

    PubMed

    Cooper, D H; Kennedy, H L; Lyyski, D S; Sprague, M K

    1996-01-01

    Triage ambulatory electrocardiographic (ECG) analysis permits relatively unskilled office workers to submit 24-hour ambulatory ECG Holter tapes to an automatic instrument (model 563, Del Mar Avionics, Irvine, CA) for interpretation. The instrument system "triages" what it is capable of automatically interpreting and rejects those tapes (with high ventricular arrhythmia density) requiring thorough analysis. Nevertheless, a trained cardiovascular technician ultimately edits what is accepted for analysis. This study examined the clinical validity of one manufacturer's triage instrumentation with regard to accuracy and time efficiency for interpreting ventricular arrhythmia. A database of 50 Holter tapes stratified for frequency of ventricular ectopic beats (VEBs) was examined by triage, conventional, and full-disclosure hand-count Holter analysis. Half of the tapes were found to be automatically analyzable by the triage method. Comparison of the VEB accuracy of triage versus conventional analysis using the full-disclosure hand count as the standard showed that triage analysis overall appeared as accurate as conventional Holter analysis but had limitations in detecting ventricular tachycardia (VT) runs. Overall sensitivity, positive predictive accuracy, and false positive rate for the triage ambulatory ECG analysis were 96, 99, and 0.9%, respectively, for isolated VEBs, 92, 93, and 7%, respectively, for ventricular couplets, and 48, 93, and 7%, respectively, for VT. Error in VT detection by triage analysis occurred on a single tape. Of the remaining 11 tapes containing VT runs, accuracy was significantly increased, with a sensitivity of 86%, positive predictive accuracy of 90%, and false positive rate of 10%. Stopwatch-recorded time efficiency was carefully logged during both triage and conventional ambulatory ECG analysis and divided into five time phases: secretarial, machine, analysis, editing, and total time. Triage analysis was significantly (P < .05) more time
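
    The accuracy figures quoted above follow the usual definitions for event detection. The sketch below spells them out from raw counts; the function and the counts are illustrative, not the authors' beat-by-beat comparison.

```python
def detection_metrics(true_positives, false_positives, false_negatives):
    """Sensitivity, positive predictive accuracy, and false-positive rate as
    used in Holter arrhythmia validation, expressed as percentages."""
    tp, fp, fn = true_positives, false_positives, false_negatives
    sensitivity = 100.0 * tp / (tp + fn)          # fraction of true events detected
    ppa = 100.0 * tp / (tp + fp)                  # fraction of detections that are real
    false_positive_rate = 100.0 * fp / (tp + fp)  # complement of PPA
    return sensitivity, ppa, false_positive_rate
```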

  14. Creativity in gifted identification: increasing accuracy and diversity.

    PubMed

    Luria, Sarah R; O'Brien, Rebecca L; Kaufman, James C

    2016-08-01

    Many federal definitions and popular theories of giftedness specify creativity as a core component. Nevertheless, states rely primarily on measures of intelligence for giftedness identification. As minority and culturally diverse students continue to be underrepresented in gifted programs, it is reasonable to ask if increasing the prominence of creativity in gifted identification may help increase balance and equity. In this paper, we explore both layperson and psychometric conceptions of bias and suggest that adding creativity measures to the identification process alleviates both perceptions and the presence of bias. We recognize, however, the logistic and measurement-related challenges to including creativity assessments. © 2016 New York Academy of Sciences.

  15. Using improvement science methods to increase accuracy of surgical consents.

    PubMed

    Mercurio, Patti; Shaffer Ellis, Andrea; Schoettker, Pamela J; Stone, Raymond; Lenk, Mary Anne; Ryckman, Frederick C

    2014-07-01

    The surgical consent serves as a key link in preventing breakdowns in communication that could lead to wrong-patient, wrong-site, or wrong-procedure events. We conducted a quality improvement initiative at a large, urban pediatric academic medical center to reliably increase the percentage of informed consents for surgical and medical procedures with accurate safety data information at the first point of perioperative contact. Improvement activities focused on awareness, education, standardization, real-time feedback and failure identification, and transparency. A total of 54,082 consent forms from 13 surgical divisions were reviewed between May 18, 2011, and November 30, 2012. Between May 2011 and June 2012, the percentage of consents without safety errors increased from a median of 95.4% to 99.7%. Since July 2012, the median has decreased slightly but has remained stable at 99.4%. Our results suggest that effective safety checks allow discovery and prevention of errors.

  16. Accuracy Analysis of a Dam Model from Drone Surveys

    PubMed Central

    Buffi, Giulia; Venturi, Sara

    2017-01-01

    This paper investigates the accuracy of models obtained by drone surveys. To this end, this work analyzes how the placement of ground control points (GCPs) used to georeference the dense point cloud of a dam affects the resulting three-dimensional (3D) model. Images of a double arch masonry dam upstream face are acquired from drone survey and used to build the 3D model of the dam for vulnerability analysis purposes. However, there still remained the issue of understanding the real impact of a correct GCPs location choice to properly georeference the images and thus, the model. To this end, a high number of GCPs configurations were investigated, building a series of dense point clouds. The accuracy of these resulting dense clouds was estimated comparing the coordinates of check points extracted from the model and their true coordinates measured via traditional topography. The paper aims at providing information about the optimal choice of GCPs placement not only for dams but also for all surveys of high-rise structures. The knowledge a priori of the effect of the GCPs number and location on the model accuracy can increase survey reliability and accuracy and speed up the survey set-up operations. PMID:28771185

  17. Accuracy Analysis of a Dam Model from Drone Surveys.

    PubMed

    Ridolfi, Elena; Buffi, Giulia; Venturi, Sara; Manciola, Piergiorgio

    2017-08-03

    This paper investigates the accuracy of models obtained by drone surveys. To this end, this work analyzes how the placement of ground control points (GCPs) used to georeference the dense point cloud of a dam affects the resulting three-dimensional (3D) model. Images of a double arch masonry dam upstream face are acquired from drone survey and used to build the 3D model of the dam for vulnerability analysis purposes. However, there still remained the issue of understanding the real impact of a correct GCPs location choice to properly georeference the images and thus, the model. To this end, a high number of GCPs configurations were investigated, building a series of dense point clouds. The accuracy of these resulting dense clouds was estimated comparing the coordinates of check points extracted from the model and their true coordinates measured via traditional topography. The paper aims at providing information about the optimal choice of GCPs placement not only for dams but also for all surveys of high-rise structures. The knowledge a priori of the effect of the GCPs number and location on the model accuracy can increase survey reliability and accuracy and speed up the survey set-up operations.
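
    As a sketch of the accuracy estimation step described above (Python; the coordinates are invented placeholders, not the paper's data), the 3D root-mean-square error of the check points quantifies how well a given GCP configuration georeferences the model:

        import numpy as np

        def checkpoint_rmse(model_xyz, survey_xyz):
            """3D RMSE between check-point coordinates extracted from the photogrammetric
            model and their reference coordinates measured by traditional topography."""
            residuals = np.asarray(model_xyz, float) - np.asarray(survey_xyz, float)
            return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))

        # Hypothetical check points (metres) for one GCP configuration.
        model_pts = [[12.503, 4.821, 101.212], [15.118, 9.004, 98.774], [17.902, 2.310, 95.146]]
        survey_pts = [[12.498, 4.825, 101.205], [15.121, 9.001, 98.781], [17.899, 2.316, 95.140]]
        print(f"3D RMSE: {checkpoint_rmse(model_pts, survey_pts):.3f} m")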

  18. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limitation of synchrophasor measurements of phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios of these factors are evaluated according to the normal operating status of power grid measurement. Based on the evaluation and simulation, the errors in phase angle and frequency caused by each factor are calculated and discussed.

  19. Smoothness-Increasing Accuracy-Conserving (SIAC) Filters for Post-Processing Unstructured Discontinuous Galerkin Fields

    DTIC Science & Technology

    2015-08-27

    Robert M. Kirby, School of Computing, University of Utah. Abstract (fragment): Although discontinuous and continuous Galerkin methods have advantages mathematically ... of this proposal is to develop smoothness-increasing accuracy-conserving filters that respect the mathematical properties of the data while providing ... tetrahedral meshes. In particular, we propose to contribute both mathematically and algorithmically to the class of smoothness-increasing and accuracy-conserving filters.

  20. Does Bleach Processing Increase the Accuracy of Sputum Smear Microscopy for Diagnosing Pulmonary Tuberculosis?

    PubMed Central

    Cattamanchi, A.; Davis, J. L.; Pai, M.; Huang, L.; Hopewell, P. C.; Steingart, K. R.

    2010-01-01

    Bleach digestion of sputum prior to smear preparation has been reported to increase the yield of microscopy for diagnosing pulmonary tuberculosis, even in high-HIV-prevalence settings. To determine the diagnostic accuracy of bleach microscopy, we updated a systematic review published in 2006 and applied the Grading of Recommendations Assessment, Development, and Evaluation framework to rate the overall quality of the evidence. We searched multiple databases (as of January 2009) for primary studies in all languages comparing bleach and direct microscopy. We assessed study quality using a validated tool and heterogeneity by standard methods. We used hierarchical summary receiver operating characteristic (HSROC) analysis to calculate summary estimates of diagnostic accuracy and random-effects meta-analysis to pool sensitivity and specificity differences. Of 14 studies (11 papers) included, 9 evaluated bleach centrifugation and 5 evaluated bleach sedimentation. Overall, examination of bleach-processed versus direct smears led to small increases in sensitivity (for bleach centrifugation, 6% [95% confidence interval {CI} = 3 to 10%, P = 0.001]; for bleach sedimentation, 9% [95% CI = 4 to 14%, P = 0.001]) and small decreases in specificity (for bleach centrifugation, −3% [95% CI = −4% to −1%, P = 0.004]; for bleach sedimentation, −2% [95% CI = −5% to 0%, P = 0.05]). Similarly, analysis of HSROC curves suggested little or no improvement in diagnostic accuracy. The quality of evidence was rated very low for both bleach centrifugation and bleach sedimentation. This updated systematic review suggests that the benefits of bleach processing are less than those described previously. Further research should focus on alternative approaches to optimizing smear microscopy, such as light-emitting diode fluorescence microscopy and same-day sputum collection strategies. PMID:20421442

  1. Does bleach processing increase the accuracy of sputum smear microscopy for diagnosing pulmonary tuberculosis?

    PubMed

    Cattamanchi, A; Davis, J L; Pai, M; Huang, L; Hopewell, P C; Steingart, K R

    2010-07-01

    Bleach digestion of sputum prior to smear preparation has been reported to increase the yield of microscopy for diagnosing pulmonary tuberculosis, even in high-HIV-prevalence settings. To determine the diagnostic accuracy of bleach microscopy, we updated a systematic review published in 2006 and applied the Grading of Recommendations Assessment, Development, and Evaluation framework to rate the overall quality of the evidence. We searched multiple databases (as of January 2009) for primary studies in all languages comparing bleach and direct microscopy. We assessed study quality using a validated tool and heterogeneity by standard methods. We used hierarchical summary receiver operating characteristic (HSROC) analysis to calculate summary estimates of diagnostic accuracy and random-effects meta-analysis to pool sensitivity and specificity differences. Of 14 studies (11 papers) included, 9 evaluated bleach centrifugation and 5 evaluated bleach sedimentation. Overall, examination of bleach-processed versus direct smears led to small increases in sensitivity (for bleach centrifugation, 6% [95% confidence interval [CI] = 3 to 10%, P = 0.001]; for bleach sedimentation, 9% [95% CI = 4 to 14%, P = 0.001]) and small decreases in specificity (for bleach centrifugation, -3% [95% CI = -4% to -1%, P = 0.004]; for bleach sedimentation, -2% [95% CI = -5% to 0%, P = 0.05]). Similarly, analysis of HSROC curves suggested little or no improvement in diagnostic accuracy. The quality of evidence was rated very low for both bleach centrifugation and bleach sedimentation. This updated systematic review suggests that the benefits of bleach processing are less than those described previously. Further research should focus on alternative approaches to optimizing smear microscopy, such as light-emitting diode fluorescence microscopy and same-day sputum collection strategies.
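
    The pooled sensitivity and specificity differences quoted above come from random-effects meta-analysis; the sketch below (Python) shows a DerSimonian-Laird pooling of per-study differences, with invented effects and variances rather than the review's data:

        import numpy as np

        def dersimonian_laird(effects, variances):
            """Random-effects pooling of per-study effect estimates
            (e.g., sensitivity differences between bleach and direct smears)."""
            y = np.asarray(effects, float)
            v = np.asarray(variances, float)
            w = 1.0 / v
            y_fixed = np.sum(w * y) / np.sum(w)
            q = np.sum(w * (y - y_fixed) ** 2)                 # Cochran's Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - (len(y) - 1)) / c)            # between-study variance
            w_star = 1.0 / (v + tau2)
            pooled = np.sum(w_star * y) / np.sum(w_star)
            se = np.sqrt(1.0 / np.sum(w_star))
            return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

        diff, ci = dersimonian_laird([0.05, 0.09, 0.03, 0.08], [0.0004, 0.0009, 0.0003, 0.0008])
        print(f"pooled difference = {diff:+.3f}, 95% CI = ({ci[0]:+.3f}, {ci[1]:+.3f})")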

  2. Dust trajectory sensor: accuracy and data analysis.

    PubMed

    Xie, J; Sternovsky, Z; Grün, E; Auer, S; Duncan, N; Drake, K; Le, H; Horanyi, M; Srama, R

    2011-10-01

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Grün, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008)] and a method of triggering was developed [S. Auer, G. Lawrence, E. Grün, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010)]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1° in direction.

  3. Dust trajectory sensor: Accuracy and data analysis

    SciTech Connect

    Xie, J.; Horanyi, M.; Sternovsky, Z.; Gruen, E.; Duncan, N.; Drake, K.; Le, H.; Auer, S.; Srama, R.

    2011-10-15

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Gruen, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008)] and a method of triggering was developed [S. Auer, G. Lawrence, E. Gruen, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010)]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1 deg. in direction.

  4. Dust trajectory sensor: Accuracy and data analysis

    NASA Astrophysics Data System (ADS)

    Xie, J.; Sternovsky, Z.; Grün, E.; Auer, S.; Duncan, N.; Drake, K.; Le, H.; Horanyi, M.; Srama, R.

    2011-10-01

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Grün, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008), 10.1063/1.2960566] and a method of triggering was developed [S. Auer, G. Lawrence, E. Grün, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010), 10.1016/j.nima.2010.06.091]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1° in direction.

  5. Increasing the accuracy and precision of relative telomere length estimates by RT qPCR.

    PubMed

    Eastwood, Justin R; Mulder, Ellis; Verhulst, Simon; Peters, Anne

    2017-08-14

    As attrition of telomeres, DNA caps that protect chromosome integrity, is accelerated by various forms of stress, telomere length (TL) has been proposed as an indicator of lifetime accumulated stress. In ecological studies, it has been used to provide insights into ageing, life history trade-offs, the costs of reproduction and disease. qPCR is a high-throughput and cost-effective tool to measure relative TL (rTL) that can be applied to newly collected and archived ecological samples. However, qPCR is susceptible to error both from the method itself and from pre-analytical steps. Here, repeatability was assessed overall and separately across multiple levels (intra-assay, inter-assay and inter-extraction) to elucidate the causes of measurement error, as a step towards improving precision. We also tested how accuracy, defined as the correlation of qPCR estimates with the "gold standard" for TL estimation (telomere restriction fragment length analysis with in-gel hybridization), could be improved. We find qPCR repeatability (intra- and inter-assay levels) to be at similar levels across three common storage media (ethanol, Longmire's and Queen's). However, inter-extraction repeatability was 50% lower for samples stored in Queen's lysis buffer, indicating that storage medium can influence precision. Precision as well as accuracy could be increased by estimating rTL from multiple qPCR reactions and from multiple extractions. Repetition increased statistical power equivalent to a 25% (single extraction analysed twice) and 17% (two extractions) increase in sample size. Overall, this study identifies novel sources of variability in high-throughput telomere quantification and provides guidance on sampling strategy design and how to increase rTL precision and accuracy. © 2017 John Wiley & Sons Ltd.
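
    For orientation, the sketch below (Python) shows how an rTL (T/S ratio) estimate is formed from one qPCR reaction pair and why averaging over reactions and extractions helps; the Cq values are invented and 100% amplification efficiency is assumed, so this is not the study's exact efficiency-corrected calculation:

        import numpy as np

        def relative_tl(telomere_cq, single_copy_cq):
            """T/S ratio from one qPCR reaction pair, assuming 100% amplification efficiency."""
            return 2.0 ** (single_copy_cq - telomere_cq)

        # Hypothetical Cq pairs: two DNA extractions of the same sample, each assayed twice.
        replicates = [(12.91, 21.30), (12.87, 21.24), (12.99, 21.41), (12.84, 21.27)]
        estimates = [relative_tl(t, s) for t, s in replicates]

        # Averaging across reactions and extractions is what buys the extra precision and accuracy.
        print("per-replicate rTL:", np.round(estimates, 1))
        print(f"mean rTL across reactions and extractions: {np.mean(estimates):.1f}")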

  6. Accuracy of thick-target micro-PIXE analysis

    NASA Astrophysics Data System (ADS)

    Campbell, J. L.; Teesdale, W. J.; Wang, J.-X.

    1990-04-01

    The accuracy attainable in micro-PIXE analysis is assessed in terms of the X-ray production model and its assumptions, physical realities of the specimen, the necessary data base, and techniques of standardization. NTIS reference materials are analyzed to provide the experimental tests of accuracy.

  7. Meta-analysis of diagnostic accuracy studies in mental health

    PubMed Central

    Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J

    2015-01-01

    Objectives To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies --such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model --jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042

  8. Meta-analysis of diagnostic accuracy studies in mental health.

    PubMed

    Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J

    2015-11-01

    To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies --such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model --jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  9. Analysis of accuracy in photogrammetric roughness measurements

    NASA Astrophysics Data System (ADS)

    Olkowicz, Marcin; Dąbrowski, Marcin; Pluymakers, Anne

    2017-04-01

    Regarding permeability, one of the most important features of shale gas reservoirs is the effective aperture of cracks opened during hydraulic fracturing, both propped and unpropped. In a propped fracture, the aperture is controlled mostly by proppant size and its embedment, and fracture surface roughness has only a minor influence. In contrast, in an unpropped fracture the aperture is controlled by the fracture roughness and the wall displacement. To measure fracture surface roughness, we have used the photogrammetric method since it is time- and cost-efficient. To estimate the accuracy of this method we compare the photogrammetric measurements with reference measurements taken with a White Light Interferometer (WLI). Our photogrammetric setup is based on a high-resolution 50 Mpx camera combined with a focus stacking technique. The first step for photogrammetric measurements is to determine the optimal camera positions and lighting. We compare multiple scans of one sample, taken with different settings of lighting and camera positions, with the reference WLI measurement. The second step is to perform measurements of all studied fractures with the parameters that produced the best results in the first step. To compare photogrammetric and WLI measurements we regrid both data sets onto a regular 10 μm grid and determine the best fit, followed by a calculation of the difference between the measurements. The first results of the comparison show that for 90% of measured points the absolute vertical distance between WLI and photogrammetry is less than 10 μm, while the mean absolute vertical distance is 5 μm. This proves that our setup can be used for fracture roughness measurements in shales.
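
    A simplified version of the comparison step is sketched below (Python), assuming both instruments return scattered (x, y, z) points in millimetres: both data sets are regridded onto a common 10 μm grid, a crude best fit is obtained by removing the mean vertical offset (the authors' actual fitting may be richer), and the mean absolute difference and the fraction of points within 10 μm are reported. The synthetic surface is only for demonstration.

        import numpy as np
        from scipy.interpolate import griddata

        def compare_surfaces(photo_xyz, wli_xyz, pitch=0.01):
            """Regrid two scattered surface measurements onto a common grid
            (pitch in mm; 0.01 mm = 10 um) and compare heights after removing the offset."""
            lo_x = max(photo_xyz[:, 0].min(), wli_xyz[:, 0].min())
            hi_x = min(photo_xyz[:, 0].max(), wli_xyz[:, 0].max())
            lo_y = max(photo_xyz[:, 1].min(), wli_xyz[:, 1].min())
            hi_y = min(photo_xyz[:, 1].max(), wli_xyz[:, 1].max())
            gx, gy = np.meshgrid(np.arange(lo_x, hi_x, pitch), np.arange(lo_y, hi_y, pitch))
            zp = griddata(photo_xyz[:, :2], photo_xyz[:, 2], (gx, gy), method="linear")
            zw = griddata(wli_xyz[:, :2], wli_xyz[:, 2], (gx, gy), method="linear")
            diff = (zp - zw) - np.nanmean(zp - zw)          # remove vertical offset
            abs_diff = np.abs(diff[np.isfinite(diff)])
            return abs_diff.mean(), np.mean(abs_diff < pitch)

        # Synthetic 1 mm x 1 mm patch: "photogrammetry" = true surface + 4 um noise, "WLI" = true surface.
        rng = np.random.default_rng(0)
        xy = rng.uniform(0.0, 1.0, size=(4000, 2))
        z_true = 0.05 * np.sin(8 * xy[:, 0]) * np.cos(6 * xy[:, 1])
        photo = np.column_stack([xy, z_true + rng.normal(0, 0.004, len(xy))])
        wli = np.column_stack([xy, z_true])
        mean_dz, frac = compare_surfaces(photo, wli)
        print(f"mean |dz| = {mean_dz * 1000:.1f} um, fraction within 10 um = {frac:.0%}")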

  10. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis

    NASA Astrophysics Data System (ADS)

    Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen

    2016-05-01

    Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30-40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.

  11. Process improvement methods increase the efficiency, accuracy, and utility of a neurocritical care research repository.

    PubMed

    O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor

    2012-08-01

    Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, their differing length and complexity of hospital stay, and the substantial amount of desired information can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. By the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.

  12. Process Improvement Methods Increase the Efficiency, Accuracy and Utility of a Neurocritical Care Research Repository

    PubMed Central

    O’Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W. Taylor

    2012-01-01

    Background Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the Neuro ICU, their differing length and complexity of hospital stay and the substantial amount of desired information can complicate the process of data collection. Methods We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. Using the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. Results During a six month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 hours per day. Conclusions By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a three-fold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers. PMID:22434546

  13. Assessing Predictive Accuracy in Discriminant Analysis.

    ERIC Educational Resources Information Center

    Huberty, Carl J.; And Others

    1987-01-01

    Three estimates of the probabilities of correct classification in predictive discriminant analysis were computed using mathematical formulas, resubstitution, and external analyses: (1) optimal hit rate; (2) actual hit rate; and (3) expected actual hit rate. Methods were compared using Monte Carlo sampling from two data sets. (Author/GDC)

  14. Noninvasive Glucose Monitoring: Increasing Accuracy by Combination of Multi-Technology and Multi-Sensors

    PubMed Central

    Harman-Boehm, Ilana; Gal, Avner; Raykhman, Alexander M.; Naidis, Eugene; Mayzel, Yulia

    2010-01-01

    Background The main concern in noninvasive (NI) glucose monitoring methods is to achieve high accuracy results despite the fact that no direct blood or interstitial fluid glucose measurement is performed. An alternative approach to increase the accuracy of NI glucose measurement was previously suggested through a combination of three NI methods: ultrasonic, electromagnetic, and thermal. This paper provides further explanation about the nature of the implemented technologies, and multi-sensors are presented, as well as a detailed elaboration on the novel algorithm for data analysis. Methods Clinical trials were performed on two different days. During the first day, calibration and six subsequent measurements were performed. During the second day, a “full day” session of about 10 hours took place. During the trial, type 1 and 2 diabetes patients were calibrated and evaluated with GlucoTrack® glucose monitor against HemoCue® (Glucose 201+). Results A total of 91 subjects were tested during the trial period. Clarke error grid (CEG) analysis shows 96% of the readings (on both days 1 and 2) fall in the clinically accepted A and B zones, of which 60% are within zone A. The absolute relative differences (ARDs) yield mean and median values of 22.4% and 15.9%, respectively. The CEG for day 2 of the trial shows 96% of the points in zones A and B, with 57% of the values in zone A. Mean and median ARD values for the readings on day 2 are 23.4% and 16.5%, respectively. The intervals between day 1 (calibration and measurements) and day 2 (measurements only) were 1–22 days, with a median of 6 days. Conclusions The presented methodology shows that increased accuracy was indeed achieved by combining multi-technology and multi-sensors. The approach of integration contributes to increasing the signal-to-noise ratio (glucose to other contributors). A combination of several technologies allows compensation of a possible aberration in one modality by the others, while multi
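
    The mean and median absolute relative difference (ARD) figures reported above can be reproduced from paired readings as in this short Python sketch; the glucose values are illustrative, not trial data:

        import numpy as np

        def absolute_relative_difference(device, reference):
            """ARD of each device reading against the reference (e.g., HemoCue) value."""
            device = np.asarray(device, float)
            reference = np.asarray(reference, float)
            return np.abs(device - reference) / reference

        noninvasive = [112, 95, 180, 240, 70, 150]      # hypothetical readings, mg/dL
        reference   = [100, 104, 160, 255, 82, 139]
        ard = absolute_relative_difference(noninvasive, reference)
        print(f"mean ARD = {ard.mean():.1%}, median ARD = {np.median(ard):.1%}")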

  15. Digital templating in total hip arthroplasty: Additional anteroposterior hip view increases the accuracy

    PubMed Central

    Stigler, Sophia K; Müller, Franz J; Pfaud, Sebastian; Zellner, Michael; Füchtmeier, Bernd

    2017-01-01

    AIM To analyze whether planning total hip arthroplasty (THA) with an additional anteroposterior hip view increases the accuracy of preoperative planning in THA. METHODS We conducted prospective digital planning in 100 consecutive patients: 50 of these procedures were planned using pelvic overview only (first group), and the other 50 procedures were planned using pelvic overview plus antero-posterior (a.p.) hip view (second group). The planning and the procedure for each patient were performed exclusively by the senior surgeon. Fifty procedures with retrospective analogue planning were used as the control group (group zero). After the procedure, the planning was compared with the eventually implanted components (cup and stem). For statistical analysis, the χ2 test was used for nominal variables and the t test was used for comparison of continuous variables. RESULTS Preoperative planning with an additional a.p. hip view (second group) significantly increased the exact component correlation when compared to pelvic overview only (first group) for both the acetabular cup and the femoral stem (76% cup and 66% stem vs 54% cup and 32% stem). When considering planning ± 1 size, the accuracy in the second group was 96% (48 of 50 patients) for the cup and 94% for the stem (47 of 50 patients). In the analogue control group (group zero), an exact correlation was observed in only 1/3 of the cases. CONCLUSION Digital THA planning performed by the operating surgeon and based on an additional a.p. hip view significantly increases the correlation between preoperative planning and eventual implant sizes. PMID:28144576

  16. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given. A listing of the computer program written to implement these techniques is given. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is given. The resulting matrices from the mapping effort of the San Juan National Forest are given. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is given. A proposed method for determining the reliability of change detection between two maps of the same area produced at different times is given.
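
    Among the discrete multivariate techniques referred to above, overall accuracy and the Kappa (KHAT) statistic computed from an error matrix are the standard measures; a compact Python sketch with a hypothetical three-class matrix:

        import numpy as np

        def overall_accuracy_and_kappa(confusion):
            """Overall accuracy and Cohen's kappa from an error matrix whose rows are
            map classes and whose columns are reference classes."""
            m = np.asarray(confusion, float)
            n = m.sum()
            p_observed = np.trace(m) / n                                  # observed agreement
            p_chance = np.sum(m.sum(axis=0) * m.sum(axis=1)) / n ** 2     # chance agreement
            return p_observed, (p_observed - p_chance) / (1 - p_chance)

        matrix = [[48, 3, 2],
                  [5, 40, 4],
                  [2, 6, 45]]
        acc, kappa = overall_accuracy_and_kappa(matrix)
        print(f"overall accuracy = {acc:.1%}, kappa = {kappa:.2f}")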

  17. [Design and accuracy analysis of upper slicing system of MSCT].

    PubMed

    Jiang, Rongjian

    2013-05-01

    The upper slicing system is one of the main components of the optical system in MSCT. This paper focuses on the design of the upper slicing system and its accuracy analysis, with the aim of improving imaging accuracy. The errors in slice thickness and ray center caused by the bearings, screw and control system were analyzed and tested. The measured accumulated error is less than 1 μm, and the measured absolute error is less than 10 μm. Improving the accuracy of the upper slicing system contributes to appropriate treatment methods and a higher treatment success rate.

  18. Study on Increasing the Accuracy of Classification Based on Ant Colony algorithm

    NASA Astrophysics Data System (ADS)

    Yu, M.; Chen, D.-W.; Dai, C.-Y.; Li, Z.-L.

    2013-05-01

    The application of GIS advances the ability to analyze remote sensing images. The classification and extraction of remote sensing imagery is the primary information source for GIS in LUCC applications, and increasing classification accuracy is an important topic of remote sensing research. Adding features and developing new classification methods are the main ways to improve classification accuracy. The ant colony algorithm, defined within a mode framework, shows how agents in the nature-inspired computation field can realize a uniform intelligent computation mode; its application to remote sensing image classification is a new method of preliminary swarm intelligence. Studying the applicability of the ant colony algorithm with additional features and exploring its advantages and performance are therefore of great significance. The study takes the outskirts of Fuzhou, an area with complicated land use in Fujian Province, as the study area. A multi-source database integrating spectral information (TM1-5, TM7, NDVI, NDBI), topographic characteristics (DEM, Slope, Aspect) and textural information (Mean, Variance, Homogeneity, Contrast, Dissimilarity, Entropy, Second Moment, Correlation) was built. Classification rules based on the different characteristics are discovered from the samples through the ant colony algorithm, and a classification test is performed based on these rules. At the same time, we compare the accuracies with traditional maximum likelihood, C4.5 and rough set classifications. The study showed that the accuracy of classification based on the ant colony algorithm is higher than that of the other methods. In addition, the land use and cover changes in Fuzhou for the near term are studied and the figures are displayed using remote sensing technology based on the ant colony algorithm.

  19. The Combination of Cyst Fluid Carcinoembryonic Antigen, Cytology and Viscosity Increases the Diagnostic Accuracy of Mucinous Pancreatic Cysts

    PubMed Central

    Oh, Se Hun; Lee, Jong Kyun; Lee, Kyu Taek; Lee, Kwang Hyuck; Woo, Young Sik; Noh, Dong Hyo

    2017-01-01

    Background/Aims The objective of this study was to investigate the value of cyst fluid carcinoembryonic antigen (CEA) in combination with cytology and viscosity for the differential diagnosis of pancreatic cysts. Methods We retrospectively reviewed our data for patients who underwent endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) and cyst fluid analysis. We investigated the sensitivity, specificity and accuracy of the combination of cyst fluid CEA, cytology and viscosity testing. Results A total of 177 patients underwent EUS-FNA and cyst fluid analysis. Of these, 48 subjects were histologically and clinically confirmed to have pancreatic cysts and were therefore included in the analysis. Receiver operator curve analysis demonstrated that the optimal cutoff value of cyst fluid CEA for differentiating mucinous versus nonmucinous cystic lesions was 48.6 ng/mL. The accuracy of cyst fluid CEA (39/48, 81.3%) was greater than the accuracy of cytology (23/45, 51.1%) or the string sign (33/47, 70.2%). Cyst fluid CEA in combination with cytology and string sign assessment exhibited the highest accuracy (45/48, 93.8%). Conclusions Cyst fluid CEA was the most useful single test for identifying mucinous pancreatic cysts. The addition of cytology and string sign assessment to cyst fluid CEA increased the overall accuracy for the diagnosis of mucinous pancreatic cysts. PMID:27609484
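
    The 48.6 ng/mL cutoff above comes from receiver operating characteristic analysis; the sketch below (Python, using invented CEA values rather than the study's data) picks a cutoff by maximising Youden's J statistic:

        import numpy as np
        from sklearn.metrics import roc_curve

        # Hypothetical cyst fluid CEA values (ng/mL); label 1 = mucinous, 0 = nonmucinous.
        cea = np.array([5, 12, 30, 45, 52, 60, 75, 110, 300, 800, 2.5, 18, 40, 500, 9])
        mucinous = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0])

        fpr, tpr, thresholds = roc_curve(mucinous, cea)
        best = np.argmax(tpr - fpr)          # Youden's J = sensitivity + specificity - 1
        print(f"optimal CEA cutoff ~ {thresholds[best]:.1f} ng/mL "
              f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")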

  20. The National Shipbuilding Research Program. Process Analysis Via Accuracy Control

    DTIC Science & Technology

    1985-08-01

    Process Analysis Via Accuracy Control. U.S. Department of Transportation, Maritime Administration, in cooperation with Todd Pacific Shipyards, August 1985. Abstract (fragment): ... lighting, retraining workers, or other such approaches. This product of A/C is called process or method analysis. Process analysis involves a

  1. The effect of increased ambient lighting on detection accuracy in uniform and anatomical backgrounds

    NASA Astrophysics Data System (ADS)

    Pollard, Benjamin J.; Chawla, Amarpreet S.; Hashimoto, Noriyuki; Samei, Ehsan

    2008-03-01

    Under typical dark conditions found in reading rooms, a reader's pupils will contract and dilate as the visual focus intermittently shifts between the high luminance monitor and the darker background wall, resulting in increased visual fatigue and the degradation of diagnostic performance. A controlled increase of ambient lighting may, however, minimize these visual adjustments and potentially improve reader comfort and accuracy. This paper details results from two psychophysical studies designed to determine the effect of a controlled ambient lighting increase on observer detection of subtle objects and lesions viewed on a DICOM-calibrated medical-grade LCD. The first study examined the effect of increased ambient lighting on detection of subtle objects embedded within a uniform background, while the second study examined observer detection performance of subtle cancerous lesions in mammograms and chest radiographs. In both studies, observers were presented with images under a dark room condition (1 lux) and an increased room illuminance level (50 lux) for which the luminance level of the diffusely reflected light from the background wall was approximately equal to that of the displayed image. The display was calibrated to an effective luminance ratio of 409 for both lighting conditions. Observer detection performance under each room illuminance condition was then compared. Identification of subtle objects embedded within the uniform background improved from 59% to 67%, while detection time decreased slightly with additional illuminance. An ROC analysis of the anatomical image results revealed that observer AUC values remained constant while detection time decreased under increased illuminance. The results provide evidence that an ambient lighting increase may be possible without compromising diagnostic efficacy.

  2. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but further efforts are required to control errors and improve accuracy at the design and manufacturing stage. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping properties of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources that affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. The sensitivity probabilistic model is established and the global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the mechanism's end-effector. The results show that orientation error sources have a bigger effect on the accuracy of the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing the genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.
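
    As a much simplified illustration of how individual geometric error sources map to end-effector accuracy (not the authors' full model, which separates compensatable and uncompensatable terms and builds a probabilistic sensitivity index), first-order error propagation through an error Jacobian can be sketched as follows; the Jacobian entries and tolerances are hypothetical:

        import numpy as np

        def propagate_errors(error_jacobian, source_std):
            """First-order propagation of independent error sources to the end-effector:
            covariance = J diag(sigma^2) J^T; returns the std of each pose component."""
            J = np.asarray(error_jacobian, float)
            cov = J @ np.diag(np.asarray(source_std, float) ** 2) @ J.T
            return np.sqrt(np.diag(cov))

        # Hypothetical 3 x 4 error Jacobian (3 pose components, 4 error sources)
        # and source tolerances (mm or mrad).
        J = [[0.8, 0.1, 0.0, 0.3],
             [0.2, 0.9, 0.1, 0.0],
             [0.0, 0.2, 1.1, 0.4]]
        print(np.round(propagate_errors(J, [0.02, 0.02, 0.05, 0.01]), 4))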

  3. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis

    PubMed Central

    Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen - van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen

    2016-01-01

    Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging. PMID:27212078

  4. Integrating conventional classifiers with a GIS expert system to increase the accuracy of invasive species mapping

    NASA Astrophysics Data System (ADS)

    Masocha, Mhosisi; Skidmore, Andrew K.

    2011-06-01

    Mapping the cover of invasive species using remotely sensed data alone is challenging, because many invaders occur as mid-level canopy species or as subtle understorey species and therefore contribute little to the spectral signatures captured by passive remote sensing devices. In this study, two common non-parametric classifiers namely, the neural network and support vector machine were used to map four cover classes of the invasive shrub Lantana camara in a protected game reserve and the adjacent area under communal land management in Zimbabwe. These classifiers were each combined with a geographic information system (GIS) expert system, in order to test whether the new hybrid classifiers yielded significantly more accurate invasive species cover maps than the single classifiers. The neural network, when used on its own, mapped the cover of L. camara with an overall accuracy of 71% and a Kappa index of agreement of 0.61. When the neural network was combined with an expert system, the overall accuracy and Kappa index of agreement significantly increased to 83% and 0.77, respectively. Similarly, the support vector machine achieved an overall accuracy of 64% with a Kappa index of agreement of 0.52, whereas the hybrid support vector machine and expert system classifier achieved a significantly higher overall accuracy of 76% and a Kappa index of agreement of 0.67. These results suggest that integrating conventional image classifiers with an expert system increases the accuracy of invasive species mapping.

  5. Using Self-Monitoring to Increase Attending to Task and Academic Accuracy in Children with Autism

    ERIC Educational Resources Information Center

    Holifield, Cassandra; Goodman, Janet; Hazelkorn, Michael; Heflin, L. Juane

    2010-01-01

    This study was conducted to investigate the effectiveness of a self-monitoring procedure on increasing attending to task and academic accuracy in two elementary students with autism in their self-contained classroom. A multiple baseline across participants in two academic subject areas was used to assess the effectiveness of the intervention. Both…

  6. Measurement of characteristics and phase modulation accuracy increase of LC SLM "HoloEye PLUTO VIS"

    NASA Astrophysics Data System (ADS)

    Bondareva, A. P.; Cheremkhin, P. A.; Evtikhiev, N. N.; Krasnov, V. V.; Starikov, R. S.; Starikov, S. N.

    2014-09-01

    Phase liquid crystal spatial light modulators (LC SLM) are actively integrated in various optical systems for dynamic diffractive optical elements imaging. To achieve the best performance, high stability and linearity of phase modulation is required. This article presents results of measurement of characteristics and phase modulation accuracy increase of state of the art LC SLM with HD resolution "HoloEye PLUTO VIS".

  7. Bilingual Language Assessment: A Meta-Analysis of Diagnostic Accuracy

    ERIC Educational Resources Information Center

    Dollaghan, Christine A.; Horner, Elizabeth A.

    2011-01-01

    Purpose: To describe quality indicators for appraising studies of diagnostic accuracy and to report a meta-analysis of measures for diagnosing language impairment (LI) in bilingual Spanish-English U.S. children. Method: The authors searched electronically and by hand to locate peer-reviewed English-language publications meeting inclusion criteria;…

  8. Bilingual Language Assessment: A Meta-Analysis of Diagnostic Accuracy

    ERIC Educational Resources Information Center

    Dollaghan, Christine A.; Horner, Elizabeth A.

    2011-01-01

    Purpose: To describe quality indicators for appraising studies of diagnostic accuracy and to report a meta-analysis of measures for diagnosing language impairment (LI) in bilingual Spanish-English U.S. children. Method: The authors searched electronically and by hand to locate peer-reviewed English-language publications meeting inclusion criteria;…

  9. Potential of marker selection to increase prediction accuracy of genomic selection in soybean (Glycine max L.).

    PubMed

    Ma, Yansong; Reif, Jochen C; Jiang, Yong; Wen, Zixiang; Wang, Dechun; Liu, Zhangxiong; Guo, Yong; Wei, Shuhong; Wang, Shuming; Yang, Chunming; Wang, Huicai; Yang, Chunyan; Lu, Weiguo; Xu, Ran; Zhou, Rong; Wang, Ruizhen; Sun, Zudong; Chen, Huaizhu; Zhang, Wanhai; Wu, Jian; Hu, Guohua; Liu, Chunyan; Luan, Xiaoyan; Fu, Yashu; Guo, Tai; Han, Tianfu; Zhang, Mengchen; Sun, Bincheng; Zhang, Lei; Chen, Weiyuan; Wu, Cunxiang; Sun, Shi; Yuan, Baojun; Zhou, Xinan; Han, Dezhi; Yan, Hongrui; Li, Wenbin; Qiu, Lijuan

    Genomic selection is a promising molecular breeding strategy enhancing genetic gain per unit time. The objectives of our study were to (1) explore the prediction accuracy of genomic selection for plant height and yield per plant in soybean [Glycine max (L.) Merr.], (2) discuss the relationship between prediction accuracy and numbers of markers, and (3) evaluate the effect of marker preselection based on different methods on the prediction accuracy. Our study is based on a population of 235 soybean varieties which were evaluated for plant height and yield per plant at multiple locations and genotyped by 5361 single nucleotide polymorphism markers. We applied ridge regression best linear unbiased prediction coupled with fivefold cross-validations and evaluated three strategies of marker preselection. For plant height, marker density and marker preselection procedure impacted prediction accuracy only marginally. In contrast, for grain yield, prediction accuracy based on markers selected with a haplotype block analyses-based approach increased by approximately 4 % compared with random or equidistant marker sampling. Thus, applying marker preselection based on haplotype blocks is an interesting option for a cost-efficient implementation of genomic selection for grain yield in soybean breeding.
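
    A minimal sketch of ridge-regression genomic prediction with fivefold cross-validation, in the spirit of the analysis described above but with simulated genotypes and phenotypes; it omits the haplotype-block marker preselection that gave the extra ~4% accuracy for yield:

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import KFold

        def prediction_accuracy(markers, phenotype, alpha=1.0, folds=5, seed=1):
            """Cross-validated prediction accuracy: correlation between observed phenotypes
            and genomic predictions from a ridge regression on marker genotypes."""
            predicted = np.zeros_like(phenotype, dtype=float)
            for train, test in KFold(folds, shuffle=True, random_state=seed).split(markers):
                model = Ridge(alpha=alpha).fit(markers[train], phenotype[train])
                predicted[test] = model.predict(markers[test])
            return np.corrcoef(predicted, phenotype)[0, 1]

        # Simulated data: 235 lines x 2000 markers coded 0/1/2, 50 of them truly causal.
        rng = np.random.default_rng(7)
        geno = rng.integers(0, 3, size=(235, 2000)).astype(float)
        effects = np.zeros(2000)
        effects[rng.choice(2000, 50, replace=False)] = rng.normal(0, 1, 50)
        pheno = geno @ effects + rng.normal(0, 5, 235)
        print(f"prediction accuracy r = {prediction_accuracy(geno, pheno):.2f}")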

  10. Method for improving accuracy in full evaporation headspace analysis.

    PubMed

    Xie, Wei-Qi; Chai, Xin-Sheng

    2017-03-21

    We report a new headspace analytical method in which multiple headspace extraction is incorporated with the full evaporation technique. In conventional full evaporation headspace analysis, the pressure uncertainty caused by changes in the solid content of the samples has a great impact on measurement accuracy. The results (using ethanol solution as the model sample) showed that the present technique is effective in minimizing this problem. The proposed full evaporation multiple headspace extraction technique is also automated and practical, and could greatly broaden the applications of full-evaporation-based headspace analysis. This article is protected by copyright. All rights reserved.
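
    For context, the multiple headspace extraction idea that the proposed method builds on can be sketched as follows (Python): successive extractions of the same vial give geometrically declining peak areas, so the total analyte signal is recovered from a log-linear fit; the peak areas below are invented:

        import numpy as np

        def mhe_total_area(areas):
            """Total analyte signal extrapolated from consecutive headspace extractions.
            Peak areas decline geometrically (A_i = A_1 * k**(i-1)), so the sum of the
            infinite series is A_1 / (1 - k), with k estimated from a log-linear fit."""
            areas = np.asarray(areas, float)
            slope, intercept = np.polyfit(np.arange(len(areas)), np.log(areas), 1)
            k = np.exp(slope)
            return np.exp(intercept) / (1.0 - k)

        # Hypothetical peak areas from three successive extractions of one ethanol sample.
        print(f"extrapolated total peak area: {mhe_total_area([10500, 6800, 4400]):.0f}")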

  11. Neutron electric dipole moment and possibilities of increasing accuracy of experiments

    SciTech Connect

    Serebrov, A. P. Kolomenskiy, E. A.; Pirozhkov, A. N.; Krasnoshchekova, I. A.; Vasiliev, A. V.; Polyushkin, A. O.; Lasakov, M. S.; Murashkin, A. N.; Solovey, V. A.; Fomin, A. K.; Shoka, I. V.; Zherebtsov, O. M.; Aleksandrov, E. B.; Dmitriev, S. P.; Dovator, N. A.; Geltenbort, P.; Ivanov, S. N.; Zimmer, O.

    2016-01-15

    The paper reports the results of an experiment searching for the neutron electric dipole moment (EDM), performed at the ILL reactor (Grenoble, France). The double-chamber magnetic resonance spectrometer (Petersburg Nuclear Physics Institute, PNPI) with prolonged holding of ultracold neutrons was used. Sources of possible systematic errors are analyzed, and their influence on the measurement results is estimated. Ways and prospects of increasing the accuracy of the experiment are discussed.

  12. Accuracy Enhancement of Inertial Sensors Utilizing High Resolution Spectral Analysis

    PubMed Central

    Noureldin, Aboelmagd; Armstrong, Justin; El-Shafie, Ahmed; Karamat, Tashfeen; McGaughey, Don; Korenberg, Michael; Hussain, Aini

    2012-01-01

    In both military and civilian applications, the inertial navigation system (INS) and the global positioning system (GPS) are two complementary technologies that can be integrated to provide reliable positioning and navigation information for land vehicles. The accuracy enhancement of INS sensors and the integration of INS with GPS are the subjects of widespread research. Wavelet de-noising of INS sensors has had limited success in removing the long-term (low-frequency) inertial sensor errors. The primary objective of this research is to develop a novel inertial sensor accuracy enhancement technique that can remove both short-term and long-term error components from inertial sensor measurements prior to INS mechanization and INS/GPS integration. A high resolution spectral analysis technique called the fast orthogonal search (FOS) algorithm is used to accurately model the low frequency range of the spectrum, which includes the vehicle motion dynamics and inertial sensor errors. FOS models the spectral components with the most energy first and uses an adaptive threshold to stop adding frequency terms when fitting a term does not reduce the mean squared error more than fitting white noise. The proposed method was developed, tested and validated through road test experiments involving both low-end tactical grade and low cost MEMS-based inertial systems. The results demonstrate that in most cases the position accuracy during GPS outages using FOS de-noised data is superior to the position accuracy using wavelet de-noising.

  13. Mesoscale modelling methodology based on nudging to increase accuracy in WRA

    NASA Astrophysics Data System (ADS)

    Mylonas Dirdiris, Markos; Barbouchi, Sami; Hermmann, Hugo

    2016-04-01

    Offshore wind energy has recently become a rapidly growing renewable energy resource worldwide, with several offshore wind projects in different planning stages. Despite this, a better understanding of the atmospheric interactions within the marine atmospheric boundary layer (MABL) is needed in order to contribute to better energy capture and cost-effectiveness. Observational nudging has recently attracted attention as an innovative method to increase the accuracy of wind flow modelling. This particular study focuses on the observational nudging capability of the Weather Research and Forecasting (WRF) model and on ways to reduce the uncertainty of wind flow modelling in wind resource assessment (WRA). Finally, an alternative way to calculate the model uncertainty is pinpointed. Approach: the WRF mesoscale model will be nudged with observations from FINO3 at three different heights. The model simulations with and without observational nudging will be verified against FINO1 measurement data at 100 m. In order to evaluate the observational nudging capability of WRF, two ways to derive the model uncertainty will be described: one global uncertainty, and an uncertainty per wind speed bin derived using the recommended practice of the IEA in order to link the model uncertainty to a wind energy production uncertainty. This study assesses the observational data assimilation capability of the WRF model within the same vertical gridded atmospheric column. The principal aim is to investigate whether having observations up to one height could improve the simulation at a higher vertical level. The study will use objective analysis, implementing a Cressman-scheme interpolation to interpolate the observations in time and in space (keeping the horizontal component constant) onto the gridded analysis. The WRF model core will then incorporate the interpolated variables into the "first guess" to develop a nudged simulation. Consequently, WRF with and without
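
    A bare-bones illustration (Python) of the Cressman weighting used in the objective analysis step — generic form, not WRF's own obs-nudging implementation; the wind speeds, distances and radius of influence are invented:

        import numpy as np

        def cressman_weights(distances, radius):
            """Cressman weights: (R^2 - d^2) / (R^2 + d^2) inside the radius of influence, else 0."""
            d = np.asarray(distances, float)
            w = (radius ** 2 - d ** 2) / (radius ** 2 + d ** 2)
            return np.where(d < radius, w, 0.0)

        def nudged_value(first_guess, obs_values, obs_distances, radius):
            """First-guess value at one grid point corrected toward nearby observations."""
            w = cressman_weights(obs_distances, radius)
            if w.sum() == 0.0:
                return first_guess
            innovation = np.asarray(obs_values, float) - first_guess
            return first_guess + np.sum(w * innovation) / np.sum(w)

        # Hypothetical 100 m wind speed (m/s) nudged toward two nearby mast observations.
        print(nudged_value(first_guess=9.2, obs_values=[10.1, 9.8],
                           obs_distances=[12e3, 30e3], radius=50e3))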

  14. Increased genomic prediction accuracy in wheat breeding through spatial adjustment of field trial data.

    PubMed

    Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav

    2013-12-09

    In crop breeding, interest in predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct for the trends observed in the field. A mixed model using moving means as a covariate was found to fit the data best. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are efficient for predicting traits, and that correction of spatial variation is a crucial ingredient for increasing prediction accuracy in genomic selection models.
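
    A moving-means adjustment of the kind described can be sketched by averaging the neighbouring plots of each field position and regressing the phenotype on that local mean; the sketch below assumes a regular row-by-column plot grid, and the window size, edge handling, and names are illustrative assumptions rather than the authors' implementation.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def moving_mean_covariate(yield_grid, window=5):
            """Local field trend: mean of the surrounding window, excluding the plot itself
            (edge handling via nearest-neighbour padding is approximate)."""
            grid = np.asarray(yield_grid, dtype=float)
            window_sum = uniform_filter(grid, size=window, mode='nearest') * window ** 2
            return (window_sum - grid) / (window ** 2 - 1)

        def spatially_adjust(yield_grid, window=5):
            """Residual phenotype after regressing on the local moving mean."""
            grid = np.asarray(yield_grid, dtype=float)
            covariate = moving_mean_covariate(grid, window)
            slope, intercept = np.polyfit(covariate.ravel(), grid.ravel(), 1)
            return grid - (slope * covariate + intercept)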

  15. Detection of increased vasa vasorum in artery walls: improving CT number accuracy using image deconvolution

    NASA Astrophysics Data System (ADS)

    Rajendran, Kishore; Leng, Shuai; Jorgensen, Steven M.; Abdurakhimova, Dilbar; Ritman, Erik L.; McCollough, Cynthia H.

    2017-03-01

    Changes in arterial wall perfusion are an indicator of early atherosclerosis. This is characterized by an increased spatial density of vasa vasorum (VV), the micro-vessels that supply oxygen and nutrients to the arterial wall. Detection of increased VV during contrast-enhanced computed tomography (CT) imaging is limited due to contamination by the blooming effect from the contrast-enhanced lumen. We report the application of an image deconvolution technique, using a measured system point-spread function, to CT data obtained from a photon-counting CT system to reduce blooming and to improve the CT number accuracy of the arterial wall, which enhances detection of increased VV. A phantom study was performed to assess the accuracy of the deconvolution technique. A porcine model was created with enhanced VV in one carotid artery; the other carotid artery served as a control. CT images at an energy range of 25-120 keV were reconstructed. CT numbers were measured at multiple locations in the carotid walls and at multiple time points, pre- and post-contrast injection. The mean CT number in the carotid wall was compared between the left (increased VV) and right (control) carotid arteries. Prior to deconvolution, results showed similar mean CT numbers in the left and right carotid walls due to the contamination from the blooming effect, limiting the detection of increased VV in the left carotid artery. After deconvolution, the mean CT number difference between the left and right carotid arteries was substantially increased at all time points, enabling detection of the increased VV in the artery wall.
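
    Deconvolution with a measured point-spread function can be sketched with a standard Richardson-Lucy iteration; this is a generic illustration of the idea rather than the authors' reconstruction chain, and the iteration count and epsilon are arbitrary.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
            """Iterative deconvolution of a blurred CT image with a measured PSF."""
            image = np.asarray(image, dtype=float)
            psf = np.asarray(psf, dtype=float)
            psf = psf / psf.sum()
            psf_mirror = psf[::-1, ::-1]
            estimate = np.full_like(image, image.mean())
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode='same')
                correction = fftconvolve(image / (blurred + eps), psf_mirror, mode='same')
                estimate *= correction
            return estimate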

  16. Increased Throwing Accuracy Improves Children's Catching Performance in a Ball-Catching Task from the Movement Assessment Battery (MABC-2).

    PubMed

    Dirksen, Tim; De Lussanet, Marc H E; Zentgraf, Karen; Slupinski, Lena; Wagner, Heiko

    2016-01-01

    The Movement Assessment Battery for Children (MABC-2) is a functional test for identifying deficits in the motor performance of children. The test contains a ball-catching task that requires the children to catch a self-thrown ball with one hand. As the task can be executed with a variety of different catching strategies, it is assumed that task success can also vary considerably. Even though it is not clear whether performance depends merely on catching skill or also, to some extent, on throwing skill, the MABC-2 takes into account only the movement outcome. Therefore, the purpose of the current study was to examine (1) to what extent throwing accuracy has an effect on children's catching performance and (2) to what extent throwing accuracy influences their choice of catching strategy. In line with the test manual, the children's catching performance was quantified on the basis of the number of correctly caught balls. The throwing accuracy and the catching strategy were quantified by applying a kinematic analysis to the ball's trajectory and the hand movements. Based on linear regression analyses, we then investigated the relation between throwing accuracy, catching performance and catching strategy. The results show that an increased throwing accuracy is significantly correlated with an increased catching performance. Moreover, a higher throwing accuracy is significantly correlated with a longer duration of the hand on the ball's parabola, which indicates that throwing the ball more accurately could enable the children to effectively reduce the requirements on temporal precision. As the children's catching performance and their choice of catching strategy in the ball-catching task of the MABC-2 are substantially determined by their throwing accuracy, the test evaluation should not be based on the movement outcome alone, but should also take into account the children's throwing performance. Our findings could be of particular value for the

  17. Increased Throwing Accuracy Improves Children's Catching Performance in a Ball-Catching Task from the Movement Assessment Battery (MABC-2)

    PubMed Central

    Dirksen, Tim; De Lussanet, Marc H. E.; Zentgraf, Karen; Slupinski, Lena; Wagner, Heiko

    2016-01-01

    The Movement Assessment Battery for Children (MABC-2) is a functional test for identifying deficits in the motor performance of children. The test contains a ball-catching task that requires the children to catch a self-thrown ball with one hand. As the task can be executed with a variety of different catching strategies, it is assumed that task success can also vary considerably. Even though it is not clear whether performance depends merely on catching skill or also, to some extent, on throwing skill, the MABC-2 takes into account only the movement outcome. Therefore, the purpose of the current study was to examine (1) to what extent throwing accuracy has an effect on children's catching performance and (2) to what extent throwing accuracy influences their choice of catching strategy. In line with the test manual, the children's catching performance was quantified on the basis of the number of correctly caught balls. The throwing accuracy and the catching strategy were quantified by applying a kinematic analysis to the ball's trajectory and the hand movements. Based on linear regression analyses, we then investigated the relation between throwing accuracy, catching performance and catching strategy. The results show that an increased throwing accuracy is significantly correlated with an increased catching performance. Moreover, a higher throwing accuracy is significantly correlated with a longer duration of the hand on the ball's parabola, which indicates that throwing the ball more accurately could enable the children to effectively reduce the requirements on temporal precision. As the children's catching performance and their choice of catching strategy in the ball-catching task of the MABC-2 are substantially determined by their throwing accuracy, the test evaluation should not be based on the movement outcome alone, but should also take into account the children's throwing performance. Our findings could be of particular value for the

  18. Low-cost commodity depth sensor comparison and accuracy analysis

    NASA Astrophysics Data System (ADS)

    Breuer, Timo; Bodensteiner, Christoph; Arens, Michael

    2014-10-01

    Low cost depth sensors have been a huge success in the field of computer vision and robotics, providing depth images even in untextured environments. The same characteristic applies to the Kinect V2, a time-of-flight camera with high lateral resolution. In order to assess advantages of the new sensor over its predecessor for standard applications, we provide an analysis of measurement noise, accuracy and other error sources with the Kinect V2. We examined the raw sensor data by using an open source driver. Further insights on the sensor design and examples of processing techniques are given to completely exploit the unrestricted access to the device.

  19. Accuracy analysis of pointing control system of solar power station

    NASA Technical Reports Server (NTRS)

    Hung, J. C.; Peebles, P. Z., Jr.

    1978-01-01

    The first-phase effort concentrated on defining the minimum basic functions that the retrodirective array must perform, identifying circuits that are capable of satisfying the basic functions, and looking at some of the error sources in the system and how they affect accuracy. The initial effort also examined three methods for generating torques for mechanical antenna control, performed a rough analysis of the flexible body characteristics of the solar collector, and defined a control system configuration for mechanical pointing control of the array.

  20. Accuracy analysis of point cloud modeling for evaluating concrete specimens

    NASA Astrophysics Data System (ADS)

    D'Amico, Nicolas; Yu, Tzuyang

    2017-04-01

    Photogrammetric methods such as structure from motion (SFM) have the capability to acquire accurate information about geometric features, surface cracks, and mechanical properties of specimens and structures in civil engineering. Conventional approaches to verifying the accuracy of photogrammetric models usually require the use of other optical techniques such as LiDAR. In this paper, the geometric accuracy of photogrammetric modeling is investigated by studying the effects of the number of photos, radius of curvature, and point cloud density (PCD) on estimated lengths, areas, volumes, and different stress states of concrete cylinders and panels. Four plain concrete cylinders and two plain mortar panels were used for the study. A commercially available mobile phone camera was used in collecting all photographs. Agisoft PhotoScan software was applied in photogrammetric modeling of all concrete specimens. From our results, it was found that increasing the number of photos does not necessarily improve the geometric accuracy of point cloud models (PCM). It was also found that the effect of radius of curvature is not significant when compared with those of the number of photos and PCD. A PCD threshold of 15.7194 pts/cm3 is proposed to construct reliable and accurate PCM for condition assessment. At this PCD threshold, all errors for estimating lengths, areas, and volumes were less than 5%. Finally, from the study of the mechanical behavior of a plain concrete cylinder, we found that an increase of stress level inside the concrete cylinder can be captured by the increase of radial strain in its PCM.

  1. The Meta-Analysis of Clinical Judgment Project: Effects of Experience on Judgment Accuracy

    ERIC Educational Resources Information Center

    Spengler, Paul M.; White, Michael J.; Aegisdottir, Stefania; Maugherman, Alan S.; Anderson, Linda A.; Cook, Robert S.; Nichols, Cassandra N.; Lampropoulos, Georgios K.; Walker, Blain S.; Cohen, Genna R.; Rush, Jeffrey D.

    2009-01-01

    Clinical and educational experience is one of the most commonly studied variables in clinical judgment research. Contrary to clinicians' perceptions, clinical judgment researchers have generally concluded that accuracy does not improve with increased education, training, or clinical experience. In this meta-analysis, the authors synthesized…

  2. Effects of increasing time delays on pitch-matching accuracy in trained singers and untrained individuals.

    PubMed

    Estis, Julie M; Coblentz, Joana K; Moore, Robert E

    2009-07-01

    Trained singers (TS) generally demonstrate accurate pitch matching, but this ability varies within the general population. Pitch-matching accuracy, given increasing silence intervals of 5, 15, and 25 seconds between target tones and vocal matches, was investigated in TS and untrained individuals. A relationship between pitch discrimination and pitch matching was also examined. Thirty-two females (20-30 years) were grouped based on individual vocal training and performance in an immediate pitch-matching task. Participants matched target pitches following time delays, and completed a pitch discrimination task, which required the classification of two tones as same or different. TS and untrained accurate participants performed comparably on all pitch-matching tasks, while untrained inaccurate participants performed significantly less accurately than the other two groups. Performances declined across groups as intervals of silence increased, suggesting degradation of pitch matching as pitch memory was taxed. A significant relationship between pitch discrimination and pitch matching was revealed across participants.

  3. High frequency rTMS over the left parietal lobule increases non-word reading accuracy.

    PubMed

    Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano

    2012-09-01

    Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe (IPL), which operates in phonological computation. This study aimed to clarify the specific contribution of the IPL and STG to reading aloud and to evaluate the possibility of modulating healthy participants' task performance using high frequency repetitive TMS (hf-rTMS). The main finding is that hf-rTMS over the left IPL improves non-word reading accuracy (fewer errors), whereas hf-rTMS over the right STG selectively decreases text-reading accuracy (more errors). These results confirm the prevalent role of the left IPL in grapheme-to-phoneme conversion. The non-word reading improvement after left-IPL stimulation provides a direct link between left IPL activation and advantages in sublexical procedures, mainly involved in non-word reading. The results also indicate the specific involvement of the STG in reading morphologically complex words and in processing the representation of the text. The text-reading impairment after stimulation of the right STG can be interpreted in light of an inhibitory influence on the homologous area. In sum, the data document that hf-rTMS is effective in modulating the reading accuracy of expert readers and that the modulation is task related and site specific. These findings suggest new perspectives for the treatment of reading disorders.

  4. Analysis of deformable image registration accuracy using computational modeling

    SciTech Connect

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    Computer-aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: one a synthesized, low-intensity-gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters, with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single-resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low-gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, the best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low-gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations has been quantitatively evaluated using numerical phantoms. The results show that parameter

  5. Increased sensitivity of patch testing by standardized tape stripping beforehand: a multicentre diagnostic accuracy study.

    PubMed

    Dickel, Heinrich; Kreft, Burkhard; Kuss, Oliver; Worm, Margitta; Soost, Stephanie; Brasch, Jochen; Pfützner, Wolfgang; Grabbe, Jürgen; Angelova-Fischer, Irena; Elsner, Peter; Fluhr, Joachim; Altmeyer, Peter; Geier, Johannes

    2010-05-01

    As a modification of patch testing, the strip patch test was established to obtain more sensitive and reliable test results. Comparative data on diagnostic accuracy for both tests are missing. To compare the diagnostic accuracy of strip patch tests and patch tests in detecting sensitizations in patients with suspected allergic contact dermatitis by using patient history as the reference standard. In a multicentre, prospective, investigator-blinded study 790 patients were enrolled. The defined reference standard was established prior to patch testing. Patch tests were performed with nickel sulfate, potassium dichromate, and lanolin alcohol. Duplicate tests were simultaneously performed on both sides of the back, of which one randomly chosen side was tape stripped beforehand, according to a standardized procedure. Primary outcome was the difference in sensitivity between strip patch test and patch test. Seven hundred and eighty-seven patients were included in the analysis. Strip patch tests detected considerably more sensitization to nickel sulfate and potassium dichromate than patch tests: differences of sensitivities were 16.4% (95% CI, 8.7-24.1%) for nickel sulfate and 25.0% (95% CI, 8.9-41.0%) for potassium dichromate, both favouring the strip patch test. The standardized strip patch test proved to be accurate and clinically safe and is promising to improve diagnosis of allergic contact dermatitis beyond the patch test.

  6. Reduced conductivity dependence method for increase of dipole localization accuracy in the EEG inverse problem.

    PubMed

    Yitembe, Bertrand Russel; Crevecoeur, Guillaume; Van Keer, Roger; Dupre, Luc

    2011-05-01

    The EEG is a neurological diagnostic tool with high temporal resolution. However, when solving the EEG inverse problem, its localization accuracy is limited because of measurement noise and uncertainties in the conductivity values used in the forward model evaluations. This paper proposes the reduced conductivity dependence (RCD) method for decreasing the localization error in EEG source analysis by limiting the propagation of the uncertain conductivity values to the solutions of the inverse problem. We redefine the traditional EEG cost function, and in contrast to previous approaches, we introduce a selection procedure for the EEG potentials. The selected potentials are those least affected by the conductivity uncertainties when solving the inverse problem. We validate the methodology on the widely used three-shell spherical head model with a single electrical dipole and with multiple dipoles as the source model. The proposed RCD method enhances the source localization accuracy by a factor ranging between 2 and 4, depending on the dipole location and the measurement noise. © 2011 IEEE

  7. Increasing accuracy and precision of digital image correlation through pattern optimization

    NASA Astrophysics Data System (ADS)

    Bomarito, G. F.; Hochhalter, J. D.; Ruggles, T. J.; Cannon, A. H.

    2017-04-01

    The accuracy and precision of digital image correlation (DIC) is based on three primary components: image acquisition, image analysis, and the subject of the image. Focus on the third component, the image subject, has been relatively limited and primarily concerned with comparing pseudo-random surface patterns. In the current work, a strategy is proposed for the creation of optimal DIC patterns. In this strategy, a pattern quality metric is developed as a combination of quality metrics from the literature rather than optimization based on any single one of them. In this way, optimization produces a pattern which balances the benefits of multiple quality metrics. Specifically, sum of square of subset intensity gradients (SSSIG) was found to be the metric most strongly correlated to DIC accuracy and thus is the main component of the newly proposed pattern quality metric. A term related to the secondary auto-correlation peak height is also part of the proposed quality metric which effectively acts as a constraint upon SSSIG ensuring that a regular (e.g., checkerboard-type) pattern is not achieved. The combined pattern quality metric is used to generate a pattern that was on average 11.6% more accurate than a randomly generated pattern in a suite of numerical experiments. Furthermore, physical experiments were performed which confirm that there is indeed improvement of a similar magnitude in DIC measurements for the optimized pattern compared to a random pattern.
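
    The SSSIG component of the proposed quality metric can be evaluated directly from a candidate pattern image; the sketch below computes it subset by subset. It omits the secondary auto-correlation term, and the subset size and names are illustrative assumptions.

        import numpy as np

        def mean_sssig(pattern, subset=21):
            """Mean sum of square of subset intensity gradients over subsets tiling the pattern."""
            img = np.asarray(pattern, dtype=float)
            gy, gx = np.gradient(img)
            g2 = gx ** 2 + gy ** 2
            rows, cols = img.shape[0] // subset, img.shape[1] // subset
            scores = [g2[i * subset:(i + 1) * subset, j * subset:(j + 1) * subset].sum()
                      for i in range(rows) for j in range(cols)]
            return float(np.mean(scores))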

  8. A novel 3-dimensional electromagnetic guidance system increases intraoperative microwave antenna placement accuracy.

    PubMed

    Sastry, Amit V; Swet, Jacob H; Murphy, Keith J; Baker, Erin H; Vrochides, Dionisios; Martinie, John B; McKillop, Iain H; Iannitti, David A

    2017-09-13

    Failure to locate lesions and accurately place microwave antennas can lead to incomplete tumor ablation. The Emprint™ SX Ablation Platform employs real-time 3D-electromagnetic spatial antenna tracking to generate intraoperative laparoscopic antenna guidance. We sought to determine whether Emprint™ SX affected time/accuracy of antenna-placement in a laparoscopic training model. Targets (7-10 mm) were set in agar within a laparoscopic training device. Novices (no surgical experience), intermediates (surgical residents), and experts (HPB-surgeons) were asked to locate and hit targets using a MWA antenna (10-ultrasound only, 10-Emprint™ SX). Time to locate target, number of attempts to hit the target, first-time hit rate, and time from initiating antenna advance to hitting the target were measured. Participants located 100% of targets using ultrasound, with experts taking significantly less time than novices and intermediates. Using ultrasound only, successful hit-rates were 70% for novices and 90% for intermediates and experts. Using Emprint™ SX, successful hit rates for all 3-groups were 100%, with significantly increased first-time hit-rates and reduced time required to hit targets compared to ultrasound only. Emprint™ SX significantly improved accuracy and speed of antenna-placement independent of experience, and was particularly beneficial for novice users. Copyright © 2017 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.

  9. Rethinking speed theories of cognitive development. Increasing the rate of recall without affecting accuracy.

    PubMed

    Cowan, Nelson; Elliott, Emily M; Saults, J Scott; Nugent, Lara D; Bomb, Pinky; Hismjatullina, Anna

    2006-01-01

    Researchers have suggested that developmental improvements in immediate recall stem from increases in the speed of mental processes. However, that inference has depended on evidence from correlation, regression, and structural equation modeling. We provide counter-examples in two experiments in which the speed of spoken recall was manipulated. In one experiment, second-grade children and adults recalled lists of digits more quickly than usual when the lists were presented at a rapid rate of two items per second. In a second experiment, children received lists at a rate of one item per second; half the children were trained (successfully) to speak their responses more quickly than usual, at a rate similar to adults' usual rate. Recall accuracy was completely unaffected by either of these response-speed manipulations. Thus, although response rate is a strong marker of an individual's maturational level, it does not appear to determine the accuracy of immediate recall. These results have important methodological and theoretical implications for human development.

  10. The effectiveness of FE model for increasing accuracy in stretch forming simulation of aircraft skin panels

    NASA Astrophysics Data System (ADS)

    Kono, A.; Yamada, T.; Takahashi, S.

    2013-12-01

    In the aerospace industry, stretch forming has been used to form the outer surface parts of aircraft, which are called skin panels. Empirical methods have been used to correct the springback by measuring the formed panels. However, such methods are impractical and cost prohibitive. Therefore, there is a need to develop simulation technologies to predict the springback caused by stretch forming [1]. This paper reports the results of a study on the influences of the modeling conditions and parameters on the accuracy of an FE analysis simulating the stretch forming of aircraft skin panels. The effects of the mesh aspect ratio, convergence criteria, and integration points are investigated, and better simulation conditions and parameters are proposed.

  11. Molecular Isotopic Distribution Analysis (MIDAs) with Adjustable Mass Accuracy

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo

    2014-01-01

    In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.
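
    The polynomial idea behind coarse-grained isotopic distributions can be illustrated by repeated convolution of per-element abundance polynomials; the sketch below is not MIDAs itself, the isotope table is truncated to a few elements, and the abundances are approximate.

        import numpy as np

        # Nominal-mass abundance "polynomials" (index = extra neutrons); approximate values.
        ISOTOPES = {
            'C': [0.9893, 0.0107],
            'H': [0.999885, 0.000115],
            'N': [0.99636, 0.00364],
            'O': [0.99757, 0.00038, 0.00205],
            'S': [0.9499, 0.0075, 0.0425, 0.0, 0.0001],
        }

        def coarse_isotopic_distribution(formula):
            """formula, e.g. {'C': 6, 'H': 12, 'O': 6}; returns relative abundances
            of the +0, +1, +2, ... nominal-mass isotopologues."""
            dist = np.array([1.0])
            for element, count in formula.items():
                abundances = np.array(ISOTOPES[element])
                for _ in range(count):
                    dist = np.convolve(dist, abundances)
            return dist / dist.sum()

        print(coarse_isotopic_distribution({'C': 6, 'H': 12, 'O': 6})[:4])   # glucose, first four peaks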

  12. Convective Weather Forecast Accuracy Analysis at Center and Sector Levels

    NASA Technical Reports Server (NTRS)

    Wang, Yao; Sridhar, Banavar

    2010-01-01

    This paper presents a detailed convective forecast accuracy analysis at center and sector levels. The study aims to provide more meaningful forecast verification measures to the aviation community, as well as to obtain useful information leading to improvements in weather translation capacity models. In general, the vast majority of forecast verification efforts over past decades have focused on calculating traditional standard verification scores from forecast and observation data analyses on grids. These verification measures, based on binary classification, have been applied in quality assurance of weather forecast products at the national level for many years. Our research focuses on forecasts at the center and sector levels. We first calculate the standard forecast verification scores for en-route air traffic centers and sectors, and then conduct the forecast validation analysis and related verification measures for weather intensities and locations at center and sector levels. An approach to improve the prediction of sector weather coverage from multiple sector forecasts is then developed. The severe weather intensity assessment was carried out using the correlations between forecast and actual observed airspace weather coverage. The forecast accuracy of the horizontal weather location was assessed by examining the forecast errors. The improvement in prediction of weather coverage was determined by the correlation between actual sector weather coverage and the prediction. The analysis used observed and forecast Convective Weather Avoidance Model (CWAM) data collected from June to September 2007. CWAM zero-minute forecast data with aircraft avoidance probabilities of 60% and 80% are used as the actual weather observation. All forecast measurements are based on 30-minute, 60-minute, 90-minute, and 120-minute forecasts with the same avoidance probabilities. The forecast accuracy analysis for times under one hour showed that the errors in
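
    The standard verification scores mentioned above reduce to counts from a 2x2 contingency table of forecast versus observed weather coverage; a minimal sketch for co-registered binary grids follows (the choice of scores and all names are illustrative, and at least one observed and one forecast event are assumed).

        import numpy as np

        def verification_scores(forecast_mask, observed_mask):
            """Probability of detection, false alarm ratio, critical success index and
            frequency bias from boolean forecast/observation grids."""
            f = np.asarray(forecast_mask, dtype=bool)
            o = np.asarray(observed_mask, dtype=bool)
            hits = np.sum(f & o)
            misses = np.sum(~f & o)
            false_alarms = np.sum(f & ~o)
            return {
                'POD':  hits / (hits + misses),
                'FAR':  false_alarms / (hits + false_alarms),
                'CSI':  hits / (hits + misses + false_alarms),
                'bias': (hits + false_alarms) / (hits + misses),
            }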

  13. Rethinking Speed Theories of Cognitive Development: Increasing the Rate of Recall Without Affecting Accuracy

    PubMed Central

    Cowan, Nelson; Elliott, Emily M.; Saults, J. Scott; Nugent, Lara D.; Bomb, Pinky; Hismjatullina, Anna

    2008-01-01

    Researchers have suggested that developmental improvements in immediate recall stem from increases in the speed of mental processes. However, that inference has depended on evidence from correlation, regression, and structural equation modeling. We provide counterexamples in two experiments in which the speed of spoken recall is manipulated. In one experiment, second-grade children and adults recalled lists of digits more quickly than usual when the lists were presented at a rapid rate of 2 items per second (items/s). In a second experiment, children received lists at a 1 item/s rate but half of them were successfully trained to respond more quickly than usual, and similar to adults' usual rate. Recall accuracy was completely unaffected by either of these response-speed manipulations. Although response rate is a strong marker of an individual's maturational level, it thus does not appear to determine immediate recall. There are important implications for developmental methodology. PMID:16371146

  14. Diagnostic accuracy of increased urinary cortisol/cortisone ratio to differentiate ACTH-dependent Cushing's syndrome.

    PubMed

    Ceccato, Filippo; Trementino, Laura; Barbot, Mattia; Antonelli, Giorgia; Plebani, Mario; Denaro, Luca; Regazzo, Daniela; Rea, Federico; Frigo, Anna Chiara; Concettoni, Carolina; Boscaro, Marco; Arnaldi, Giorgio; Scaroni, Carla

    2017-06-07

    Differential diagnosis between Cushing's Disease (CD) and Ectopic ACTH Syndrome (EAS) may be a pitfall for endocrinologists. The increasing use in clinical practice of chromatography and mass spectrometry improves the measurement of urinary free cortisol (UFF) and cortisone (UFE). We have recently observed that the cortisol to cortisone ratio (FEr) was higher in a small series of EAS; in this study we collected a larger number of ACTH-dependent Cushing's Syndrome (CS) cases to study the role of FEr in characterizing the source of corticotropin secretion. High-pressure liquid chromatography with UV detection (HPLC-UV, n=35) or liquid chromatography-tandem mass spectrometry (LC-MS/MS, n=72) was used to measure UFF, UFE and FEr in 83 patients with CD and 24 with EAS. UFF, UFE and FEr levels were higher in EAS than in CD (UFF: 6671 vs 549 nmol/24 hours; UFE: 2069 vs 464 nmol/24 hours; FEr: 4.13 vs 0.97; all P<.001). FEr >1.15 (the best ROC-based threshold) was able to distinguish CD from EAS with 75% sensitivity (SE) and 75% specificity (SP), AUC 0.811; results were similar between HPLC-UV (SE 73%, SP 79%, AUC 0.708) and LC-MS/MS (SE 77%, SP 73%, AUC 0.834; P=.727). The diagnostic accuracy of FEr was similar to that of the CRH test or the high-dose dexamethasone suppression test (respectively P=.171 and P=.683), also when combined. Finally, FEr was able to increase the number of correct diagnoses in patients with discordant dynamic tests. Urinary FEr >1.15 was able to suggest EAS, with a diagnostic accuracy similar to that of other dynamic tests proposed to study ACTH-dependent CS. © 2017 John Wiley & Sons Ltd.
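
    Cut-off selection of the kind behind the FEr > 1.15 threshold can be sketched as a simple ROC sweep that maximises Youden's J (sensitivity + specificity - 1); this is illustrative only, not the authors' statistical analysis, and the names are assumptions.

        import numpy as np

        def best_cutoff(values, is_eas):
            """Return the threshold on FEr-like values that maximises sensitivity + specificity - 1."""
            values = np.asarray(values, dtype=float)
            is_eas = np.asarray(is_eas, dtype=bool)
            best_cut, best_j = None, -np.inf
            for cut in np.unique(values):
                positive = values > cut
                sensitivity = np.mean(positive[is_eas])        # EAS correctly flagged
                specificity = np.mean(~positive[~is_eas])      # CD correctly not flagged
                j = sensitivity + specificity - 1.0
                if j > best_j:
                    best_cut, best_j = cut, j
            return best_cut, best_j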

  15. Analysis of instrumentation error effects on the identification accuracy of aircraft parameters

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.

    1972-01-01

    An analytical investigation is presented of the effect of unmodeled measurement system errors on the accuracy of aircraft stability and control derivatives identified from flight test data. Such error sources include biases, scale factor errors, instrument position errors, misalignments, and instrument dynamics. Two techniques (ensemble analysis and simulated data analysis) are formulated to determine the quantitative variations to the identified parameters resulting from the unmodeled instrumentation errors. The parameter accuracy that would result from flight tests of the F-4C aircraft with typical quality instrumentation is determined using these techniques. It is shown that unmodeled instrument errors can greatly increase the uncertainty in the value of the identified parameters. General recommendations are made of procedures to be followed to ensure that the measurement system associated with identifying stability and control derivatives from flight test provides sufficient accuracy.

  16. A comparative analysis of the accuracy of implant transfer techniques.

    PubMed

    Hsu, C C; Millstein, P L; Stein, R S

    1993-06-01

    Four different implant transfer techniques using two master cast systems (solid cast and Zeiser system) were evaluated and compared with respect to the accuracy with which abutment positions were reproduced. A stainless steel experimental analogue with two anterior and two posterior fixtures and abutments was fabricated. Polyether impressions (14 each) were made by use of four techniques, (I) nonsplinted, (II) splinted with dental floss and acrylic resin, (III) splinted with orthodontic wire and acrylic resin, and (IV) splinted with acrylic resin alone. The fourteen impressions of each technique were divided into two equal groups: group 1, solid cast system, and group 2, Zeiser system. The abutments of each master cast were measured vertically and horizontally with a profile projector. Statistical analysis indicated no significant difference between the splinted and nonsplinted techniques. The Zeiser system provided more accurate interabutment relationships for the posterior region than the solid cast system.

  17. Increased prediction accuracy in wheat breeding trials using a marker × environment interaction genomic selection model.

    PubMed

    Lopez-Cruz, Marco; Crossa, Jose; Bonnett, David; Dreisigacker, Susanne; Poland, Jesse; Jannink, Jean-Luc; Singh, Ravi P; Autrique, Enrique; de los Campos, Gustavo

    2015-02-06

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates of selection. Originally, these models were developed without considering genotype × environment interaction (G×E). Several authors have proposed extensions of the single-environment GS model that accommodate G×E using either covariance functions or environmental covariates. In this study, we model G×E using a marker × environment interaction (M×E) GS model; the approach is conceptually simple and can be implemented with existing GS software. We discuss how the model can be implemented by using an explicit regression of phenotypes on markers or using covariance structures (a genomic best linear unbiased prediction-type model). We used the M×E model to analyze three CIMMYT wheat data sets (W1, W2, and W3), where more than 1000 lines were genotyped using genotyping-by-sequencing and evaluated at CIMMYT's research station in Ciudad Obregon, Mexico, under simulated environmental conditions that covered different irrigation levels, sowing dates and planting systems. We compared the M×E model with a stratified (i.e., within-environment) analysis and with a standard (across-environment) GS model that assumes that effects are constant across environments (i.e., ignoring G×E). The prediction accuracy of the M×E model was substantially greater than that of an across-environment analysis that ignores G×E. Depending on the prediction problem, the M×E model had either similar or greater prediction accuracy than the stratified analyses. The M×E model decomposes marker effects and genomic values into components that are stable across environments (main effects) and others that are environment-specific (interactions). Therefore, in principle, the interaction model could shed light on which variants have effects that are stable across environments and which ones are responsible for G×E. The data set and the scripts required to reproduce the analysis are
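
    The M×E decomposition can be viewed as an extended marker design matrix whose first block carries across-environment (main) effects and whose remaining blocks carry environment-specific deviations; the sketch below builds such a matrix as a conceptual illustration only (not the authors' implementation), with names chosen for clarity.

        import numpy as np

        def mxe_design(X, env_index, n_env):
            """X: (n_records, n_markers) marker matrix; env_index: environment code (0..n_env-1)
            per record. Returns [X | environment-specific copies of X]."""
            X = np.asarray(X, dtype=float)
            env_index = np.asarray(env_index)
            n, p = X.shape
            Z = np.zeros((n, p * n_env))
            for j in range(n_env):
                rows = env_index == j
                Z[rows, j * p:(j + 1) * p] = X[rows]          # interaction (environment-specific) block
            return np.hstack([X, Z])                          # main-effect block comes first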

  18. Accuracy Analysis on Large Blocks of High Resolution Images

    NASA Technical Reports Server (NTRS)

    Passini, Richardo M.

    2007-01-01

    Although high-frequency attitude effects are removed at the time of basic image generation, low-frequency attitude (yaw) effects are still present in the form of affinity/angular affinity. They are effectively removed by additional parameters. Bundle block adjustment based on properly weighted ephemeris/attitude quaternions (BBABEQ) is not enough to remove the systematic effects. Moreover, due to the narrow FOV of the HRSI, position and attitude are highly correlated, making it almost impossible to separate and remove their systematic effects without extending the geometric model (self-calibration). The systematic effects become evident in the increase of accuracy (in terms of RMSE at GCPs) for looser and more relaxed ground control, at the expense of large and strong block deformation with large residuals at check points. Systematic errors are mostly freely distributed and their effects propagate all over the block.

  19. Increasing the accuracy of proteomic typing by decellularisation of amyloid tissue biopsies.

    PubMed

    Mangione, P Patrizia; Mazza, Giuseppe; Gilbertson, Janet A; Rendell, Nigel B; Canetti, Diana; Giorgetti, Sofia; Frenguelli, Luca; Curti, Marco; Rezk, Tamer; Raimondi, Sara; Pepys, Mark B; Hawkins, Philip N; Gillmore, Julian D; Taylor, Graham W; Pinzani, Massimo; Bellotti, Vittorio

    2017-08-08

    Diagnosis and treatment of systemic amyloidosis depend on accurate identification of the specific amyloid fibril protein forming the tissue deposits. Confirmation of monoclonal immunoglobulin light chain amyloidosis (AL), requiring cytotoxic chemotherapy, and avoidance of such treatment in non-AL amyloidosis, are particularly important. Proteomic analysis characterises amyloid proteins directly. It complements immunohistochemical staining of amyloid to identify fibril proteins and gene sequencing to identify mutations in the fibril precursors. However, proteomics sometimes detects more than one potentially amyloidogenic protein, especially immunoglobulins and transthyretin which are abundant plasma proteins. Ambiguous results are most challenging in the elderly as both AL and transthyretin (ATTR) amyloidosis are usually present in this group. We have lately described a procedure for tissue decellularisation which retains the structure, integrity and composition of amyloid but removes proteins that are not integrated within the deposits. Here we show that use of this procedure before proteomic analysis eliminates ambiguity and improves diagnostic accuracy. Unequivocal identification of the protein causing amyloidosis disease is crucial for correct diagnosis and treatment. As a proof of principle, we selected a number of cardiac and fat tissue biopsies from patients with various types of amyloidosis and show that a classical procedure of decellularisation enhances the specificity of the identification of the culprit protein reducing ambiguity and the risk of misdiagnosis. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  20. A method of increasing test range and accuracy of bioindicators: Geobacillus stearothermophilus spores.

    PubMed

    Lundahl, Gunnel

    2003-01-01

    Spores of Geobacillus stearothermophilus are very sensitive to changes in temperature. When validating sterilizing processes, the most common bioindicator (BI) is spores of Geobacillus stearothermophilus ATCC12980 and ATCC7953, with about 10(6) spores/BI and a D121-value of about 2 minutes in water. Because these spores of Geobacillus stearothermophilus do not survive at an F0-value above 12 minutes, it has not been possible to evaluate the agreement between the biological F-value (F(BIO)) and physical measurements (time and temperature) when the physical F0-value exceeds that limit. However, it has been proven that glycerin substantially increases the heat resistance of the spores, and it is possible to utilize that property when manufacturing BIs suitable for use in processes with longer sterilization times or high temperatures (above 121 degrees C). With the method described, it is possible to make use of the sensitivity and durability of Geobacillus stearothermophilus spores, with glycerin increasing both test range and accuracy. Experience from years of development and validation work with the use of the highly sensitive glycerin-water-spore-suspension sensor (GWS-sensor) is reported. Validation of the steam sterilization process at high temperature has been possible with the use of GWS-sensors. It has also been shown that the spores in suspension keep their characteristics for a period of 19 months when stored cold (8 degrees C).
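
    The arithmetic behind the F0 = 12 minute limit is the usual log-linear thermal death model, N = N0 x 10^(-F0/D121); the short sketch below simply shows why a 10(6)-spore indicator with a D121-value of about 2 minutes runs out of survivors near F0 = 12 minutes (values are the nominal ones quoted above).

        def surviving_spores(n0=1.0e6, f0_minutes=12.0, d121_minutes=2.0):
            """Log-linear thermal death: each D-value of equivalent exposure at 121 C
            reduces the spore count tenfold."""
            return n0 * 10.0 ** (-f0_minutes / d121_minutes)

        print(surviving_spores())                   # ~1 survivor at F0 = 12 min
        print(surviving_spores(f0_minutes=20.0))    # far below one survivor: indicator no longer informative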

  1. There's a Bug in Your Ear!: Using Technology to Increase the Accuracy of DTT Implementation

    ERIC Educational Resources Information Center

    McKinney, Tracy; Vasquez, Eleazar, III.

    2014-01-01

    Many professionals have successfully implemented discrete trial teaching in the past. However, there have not been extensive studies examining the accuracy of discrete trial teaching implementation. This study investigated the use of Bug in Ear feedback on the accuracy of discrete trial teaching implementation among two pre-service teachers…

  3. Lower limb pneumatic compression during dobutamine stress echocardiography in patients with normal resting wall motion: will it increase diagnostic accuracy?

    PubMed

    Abdel-Salam, Zainab; Allam, Lawra; Wadie, Bassem; Enany, Bassem; Nammas, Wail

    2015-01-01

    Pneumatic compression of the lower part of the body increases systemic vascular resistance and left ventricular afterload. We compared the diagnostic accuracy of dobutamine stress echocardiography (DSE) with pneumatic compression of the lower extremities, vs. standard DSE, for detection of angiographically significant coronary artery disease (CAD) in patients with normal baseline resting wall motion. We enrolled 70 consecutive patients with no resting wall motion abnormalities (WMA), who underwent DSE. DSE was repeated with pneumatic compression of the lower extremities three days after the initial standard DSE. A positive test was defined as the induction of WMA in at least two contiguous non-overlapping segments at any stage of dobutamine infusion. Significant coronary stenosis was defined as ≥ 50% obstruction of ≥ 1 sizable artery by coronary angiography. The mean age of the study cohort was 54.7 ± 9.9 years; 55.7% were female. Thirty-eight (54.3%) patients had significant CAD. The mean test duration was 15.8 ± 5.1 min for standard DSE and 11.7 ± 4.1 min for DSE with pneumatic compression. Analysis of standard DSE revealed sensitivity, specificity, and positive and negative predictive values of 81.6%, 90.6%, 91.2%, and 80.6%, respectively; overall accuracy was 85.7%. Analysis of DSE with pneumatic compression revealed sensitivity, specificity, and positive and negative predictive values of 89.5%, 87.5%, 89.5%, and 87.5%, respectively; overall accuracy was 88.6%. In symptomatic patients with suspected CAD referred for evaluation by DSE, who have no resting wall motion abnormalities, pneumatic compression of the lower extremities during DSE improved the sensitivity but slightly reduced the specificity for detection of angiographically significant CAD, compared with standard DSE. Moreover, it reduced the test duration.

  4. Nationwide forestry applications program. Analysis of forest classification accuracy

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Mead, R. A.; Oderwald, R. G.; Heinen, J. (Principal Investigator)

    1981-01-01

    The development of LANDSAT classification accuracy assessment techniques, and of a computerized system for assessing wildlife habitat from land cover maps are considered. A literature review on accuracy assessment techniques and an explanation for the techniques development under both projects are included along with listings of the computer programs. The presentations and discussions at the National Working Conference on LANDSAT Classification Accuracy are summarized. Two symposium papers which were published on the results of this project are appended.

  5. Investigation of smoothness-increasing accuracy-conserving filters for improving streamline integration through discontinuous fields.

    PubMed

    Steffen, Michael; Curtis, Sean; Kirby, Robert M; Ryan, Jennifer K

    2008-01-01

    Streamline integration of fields produced by computational fluid mechanics simulations is a commonly used tool for the investigation and analysis of fluid flow phenomena. Integration is often accomplished through the application of ordinary differential equation (ODE) integrators--integrators whose error characteristics are predicated on the smoothness of the field through which the streamline is being integrated--smoothness which is not available at the inter-element level of finite volume and finite element data. Adaptive error control techniques are often used to ameliorate the challenge posed by inter-element discontinuities. As the root of the difficulties is the discontinuous nature of the data, we present a complementary approach of applying smoothness-enhancing accuracy-conserving filters to the data prior to streamline integration. We investigate whether such an approach applied to uniform quadrilateral discontinuous Galerkin (high-order finite volume) data can be used to augment current adaptive error control approaches. We discuss and demonstrate through numerical example the computational trade-offs exhibited when one applies such a strategy.

  6. Oxytocin increases bias, but not accuracy, in face recognition line-ups.

    PubMed

    Bate, Sarah; Bennetts, Rachel; Parris, Benjamin A; Bindemann, Markus; Udale, Robert; Bussunt, Amanda

    2015-07-01

    Previous work indicates that intranasal inhalation of oxytocin improves face recognition skills, raising the possibility that it may be used in security settings. However, it is unclear whether oxytocin directly acts upon the core face-processing system itself or indirectly improves face recognition via affective or social salience mechanisms. In a double-blind procedure, 60 participants received either an oxytocin or placebo nasal spray before completing the One-in-Ten task-a standardized test of unfamiliar face recognition containing target-present and target-absent line-ups. Participants in the oxytocin condition outperformed those in the placebo condition on target-present trials, yet were more likely to make false-positive errors on target-absent trials. Signal detection analyses indicated that oxytocin induced a more liberal response bias, rather than increasing accuracy per se. These findings support a social salience account of the effects of oxytocin on face recognition and indicate that oxytocin may impede face recognition in certain scenarios. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
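
    The signal detection analysis referred to above separates sensitivity (d') from response bias (criterion c); a minimal sketch from raw hit/false-alarm counts follows, with a log-linear correction to avoid infinite z-scores (the correction and names are illustrative, not the authors' exact procedure).

        from scipy.stats import norm

        def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
            """d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2."""
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)                            # log-linear correction
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
            criterion = -(norm.ppf(hit_rate) + norm.ppf(fa_rate)) / 2.0
            return d_prime, criterion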

  7. DESIGN AND ANALYSIS FOR THEMATIC MAP ACCURACY ASSESSMENT: FUNDAMENTAL PRINCIPLES

    EPA Science Inventory

    Before being used in scientific investigations and policy decisions, thematic maps constructed from remotely sensed data should be subjected to a statistically rigorous accuracy assessment. The three basic components of an accuracy assessment are: 1) the sampling design used to s...

  8. Geographic stacking: Decision fusion to increase global land cover map accuracy

    NASA Astrophysics Data System (ADS)

    Clinton, Nicholas; Yu, Le; Gong, Peng

    2015-05-01

    The combination of multiple classifier outputs is an established sub-discipline in data mining, referred to as "stacking," "ensemble classification," or "meta-learning." Here we describe how stacking of geographically allocated classifications can create a map composite of higher accuracy than any of the individual classifiers. We used both voting algorithms and trainable classifiers with a set of validation data to combine individual land cover maps. We describe the generality of this setup in terms of existing algorithms and accuracy assessment procedures. This method has the advantage of not requiring posterior probabilities or levels of support for predicted class labels. We demonstrate the technique using Landsat-based, 30-meter land cover maps, the highest-resolution, globally available product of this kind. We used globally distributed validation samples to composite the maps and compute accuracy. We show that geographic stacking can improve individual map accuracy by up to 6.6%. The voting methods can also achieve higher accuracy than the best of the input classifications. Accuracies from different classifiers, input data, and output types are compared. The results are illustrated on a Landsat scene in California, USA. The compositing technique described here has broad applicability in remote sensing based map production and geographic classification.
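
    The simplest voting composite described above assigns each pixel the most frequent class label across the input maps; the sketch below assumes co-registered rasters of integer class codes stacked along the first axis (a conceptual illustration, not the authors' processing chain).

        import numpy as np

        def majority_vote(stack):
            """stack: (n_maps, rows, cols) integer class labels; returns the per-pixel modal class."""
            stack = np.asarray(stack)
            n_maps, rows, cols = stack.shape
            flat = stack.reshape(n_maps, -1)
            n_classes = int(flat.max()) + 1
            votes = np.apply_along_axis(np.bincount, 0, flat, minlength=n_classes)   # (n_classes, n_pixels)
            return votes.argmax(axis=0).reshape(rows, cols)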

  9. Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach

    NASA Astrophysics Data System (ADS)

    Xiao, T.

    2012-12-01

    One of the most important components of urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design simple sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is crucial for implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design and sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
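
    The sample-size side of such a cost-benefit model can be written down directly from the binomial formula n = z^2 p (1 - p) / d^2, with the cost obtained by multiplying per-sample unit costs; the sketch below uses purely illustrative cost figures and default parameters.

        import math
        from scipy.stats import norm

        def required_samples(expected_accuracy=0.85, margin=0.05, confidence=0.95):
            """Binomial sample size: n = z^2 * p * (1 - p) / d^2."""
            z = norm.ppf(1.0 - (1.0 - confidence) / 2.0)
            return math.ceil(z ** 2 * expected_accuracy * (1.0 - expected_accuracy) / margin ** 2)

        def assessment_cost(n_samples, transport_per_site=12.0, field_per_site=8.0, lab_per_site=5.0):
            """Total cost = samples * (transport + field collection + laboratory analysis)."""
            return n_samples * (transport_per_site + field_per_site + lab_per_site)

        n = required_samples()
        print(n, assessment_cost(n))    # e.g. 196 samples and the corresponding total cost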

  10. Systematic Review and Meta-Analysis of Studies Evaluating Diagnostic Test Accuracy: A Practical Review for Clinical Researchers-Part II. Statistical Methods of Meta-Analysis

    PubMed Central

    Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that, it is required to simultaneously analyze a pair of two outcome measures such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and could be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models including the bivariate model and the hierarchical summary receiver operating characteristic model are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107

  11. Increased-accuracy numerical modeling of electron-optical systems with space-charge

    NASA Astrophysics Data System (ADS)

    Sveshnikov, V.

    2011-07-01

    This paper presents a method for improving the accuracy of space-charge computation for electron-optical systems. The method proposes to divide the computational region into two parts: a near-cathode region in which analytical solutions are used and a basic one in which numerical methods compute the field distribution and trace electron ray paths. A numerical method is used for calculating the potential along the interface, which involves solving a non-linear equation. Preliminary results illustrating the improvement of accuracy and the convergence of the method for a simple test example are presented.

  12. Unconscious Reward Cues Increase Invested Effort, but Do Not Change Speed-Accuracy Tradeoffs

    ERIC Educational Resources Information Center

    Bijleveld, Erik; Custers, Ruud; Aarts, Henk

    2010-01-01

    While both conscious and unconscious reward cues enhance effort to work on a task, previous research also suggests that conscious rewards may additionally affect speed-accuracy tradeoffs. Based on this idea, two experiments explored whether reward cues that are presented above (supraliminal) or below (subliminal) the threshold of conscious…

  14. Testing Delays Resulting in Increased Identification Accuracy in Line-Ups and Show-Ups.

    ERIC Educational Resources Information Center

    Dekle, Dawn J.

    1997-01-01

    Investigated time delays (immediate, two-three days, one week) between viewing a staged theft and attempting an eyewitness identification. Compared lineups to one-person showups in a laboratory analogue involving 412 subjects. Results show that across all time delays, participants maintained a higher identification accuracy with the showup…

  15. Accuracy of clinical techniques for evaluating lower limb sensorimotor functions associated with increased fall risk

    PubMed Central

    Donaghy, Alex; DeMott, Trina; Allet, Lara; Kim, Hogene; Ashton-Miller, James; Richardson, James K.

    2015-01-01

    Background In prior work, laboratory-based measures of hip motor function and ankle proprioceptive precision were critical to maintaining unipedal stance and were related to fall/fall-related injury risk. However, the optimal clinical evaluation techniques for predicting these measures are unknown. Objective To evaluate the diagnostic accuracy of common clinical maneuvers in predicting laboratory-based measures of frontal plane hip rate of torque development (HipRTD) and ankle proprioceptive thresholds (AnkPRO) associated with increased fall risk. Design Prospective, observational study. Setting Biomechanical research laboratory. Participants Forty-one older subjects (age 69.1 ± 8.3 years), 25 with varying degrees of diabetic distal symmetric polyneuropathy and 16 without. Assessments Clinical hip strength was evaluated by manual muscle testing (MMT) and lateral plank time (LPT), defined as the number of seconds the laterally lying subject could lift the hips from the support surface. Foot/ankle evaluation included Achilles reflex, and vibratory, proprioceptive, monofilament, and pinprick sensations at the great toe. Main Outcome Measures HipRTD, abduction and adduction, using a custom whole-body dynamometer. AnkPRO determined with subjects standing using a foot cradle system and a staircase series of 100 frontal plane rotational stimuli. Results Pearson correlation coefficients (r) and receiver operator characteristic (ROC) curves revealed that LPT correlated more strongly with HipRTD (r/p = .61/<.001 and .67/<.001, for abductor/adductor, respectively) than did hip abductor MMT (r/p = .31/.044). Subjects with greater vibratory and proprioceptive sensation, and intact Achilles reflexes, monofilament, and pin sensation had more precise AnkPRO. LPT of < 12 seconds yielded a sensitivity/specificity of 91%/80% for identifying HipRTD < .25 (body size in Newton-meters), and vibratory perception of < 8 seconds yielded a sensitivity/specificity of 94%/80% for the identification of AnkPRO > 1

  16. Accuracy Analysis of a Box-wing Theoretical SRP Model

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui

    2016-07-01

    For the Beidou satellite navigation system (BDS), a high-accuracy SRP model is necessary for high-precision applications, especially with the establishment of the global BDS in the future, and the accuracy of the BDS broadcast ephemeris needs to be improved. Therefore, a box-wing theoretical SRP model with a fine structure, including the conical shadow factors of the Earth and the Moon, was established. We verified this SRP model with the GPS Block IIF satellites, using data from the PRN 1, 24, 25, and 27 satellites. The results show that the physical SRP model has higher accuracy for POD and orbit forecasting of the GPS IIF satellites than the Bern empirical model; the 3D RMS of the orbit is about 20 centimeters. The POD accuracy of the two models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day, and 7-day orbit predictions: the longer the prediction arc length, the more significant the improvement. The orbit prediction accuracy with the physical SRP model for 1-day, 3-day, and 7-day arc lengths is 0.4 m, 2.0 m, and 10.0 m respectively, compared with 0.9 m, 5.5 m, and 30 m for the Bern empirical model. We applied this approach to BDS and derived an SRP model for the Beidou satellites, and then tested and verified the model with one month of Beidou data. Initial results show that the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that obtained with our empirical force model, which estimates only along-track and cross-track forces and a y-bias, but the orbit overlap and SLR observation evaluations show some improvement, and the remaining empirical force is reduced significantly for the present Beidou constellation.

  17. Consistency of accuracy assessment indices for soft classification: Simulation analysis

    NASA Astrophysics Data System (ADS)

    Chen, Jin; Zhu, Xiaolin; Imura, Hidefumi; Chen, Xuehong

    Accuracy assessment plays a crucial role in the implementation of soft classification. Even though many indices of accuracy assessment for soft classification have been proposed, the consistencies among these indices are not clear, and the impact of sample size on these consistencies has not been investigated. This paper examines two kinds of indices: map-level indices, including root mean square error (rmse), kappa, and overall accuracy (oa) from the sub-pixel confusion matrix (SCM); and category-level indices, including crmse, user accuracy (ua) and producer accuracy (pa). A careful simulation was conducted to investigate the consistency of these indices and the effect of sample size. The major findings were as follows: (1) The map-level indices are highly consistent with each other, whereas the category-level indices are not. (2) The consistency among map-level and category-level indices becomes weaker when the sample size decreases. (3) The rmse is more affected by error distribution among classes than are kappa and oa. Based on these results, we recommend that rmse can be used for map-level accuracy due to its simplicity, although kappa and oa may be better alternatives when the sample size is limited because the two indices are affected less by the error distribution among classes. We also suggest that crmse should be provided when map users are not concerned about the error source, whereas ua and pa are more useful when the complete information about different errors is required. The results of this study will be of benefit to the development and application of soft classifiers.
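
    A minimal sketch of the map-level indices, assuming hypothetical pixel fractions and an already-built sub-pixel confusion matrix; constructing the SCM itself (resolving sub-pixel agreement between classes) is more involved than shown here.

```python
import numpy as np

# Hypothetical soft-classification fractions: rows = pixels, cols = classes
estimated = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.5, 0.3],
                      [0.1, 0.2, 0.7]])
reference = np.array([[0.5, 0.4, 0.1],
                      [0.3, 0.4, 0.3],
                      [0.0, 0.3, 0.7]])

# Map-level rmse over all pixel/class fractions
rmse = np.sqrt(np.mean((estimated - reference) ** 2))

# oa and kappa from a (hypothetical) sub-pixel confusion matrix expressed
# as proportions; the diagonal holds sub-pixel agreement per class.
scm = np.array([[0.25, 0.05, 0.02],
                [0.04, 0.30, 0.03],
                [0.01, 0.05, 0.25]])
scm = scm / scm.sum()
oa = np.trace(scm)
pe = float(np.sum(scm.sum(axis=0) * scm.sum(axis=1)))  # chance agreement
kappa = (oa - pe) / (1.0 - pe)
print(f"rmse = {rmse:.3f}, oa = {oa:.3f}, kappa = {kappa:.3f}")
```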

  18. Accuracy analysis in MRI-guided robotic prostate biopsy.

    PubMed

    Xu, Helen; Lasso, Andras; Guion, Peter; Krieger, Axel; Kaushal, Aradhana; Singh, Anurag K; Pinto, Peter A; Coleman, Jonathan; Grubb, Robert L; Lattouf, Jean-Baptiste; Menard, Cynthia; Whitcomb, Louis L; Fichtinger, Gabor

    2013-11-01

    To assess retrospectively the clinical accuracy of a magnetic resonance imaging-guided robotic prostate biopsy system that has been used in the US National Cancer Institute for over 6 years. Series of 2D transverse volumetric MR image slices of the prostate, acquired both pre-needle insertion (high-resolution T2-weighted) and post-needle insertion (low-resolution), were used to evaluate biopsy accuracy. A three-stage registration algorithm consisting of an initial two-step rigid registration followed by a B-spline deformable alignment was developed to capture prostate motion during biopsy. The target displacement (distance between planned and actual biopsy target), needle placement error (distance from planned biopsy target to needle trajectory), and biopsy error (distance from actual biopsy target to needle trajectory) were calculated as accuracy measures. A total of 90 biopsies from 24 patients were studied. The registrations were validated by checking prostate contour alignment using image overlay, and the results were accurate to within 2 mm. The mean target displacement, needle placement error, and clinical biopsy error were 5.2, 2.5, and 4.3 mm, respectively. The biopsy error reported suggests that quantitative imaging techniques for prostate registration and motion compensation may improve prostate biopsy targeting accuracy.
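
    Once the planned target, actual target, and needle trajectory are registered into a common frame, the three reported errors reduce to plain point-to-point and point-to-line distances; the coordinates below are made up for illustration.

```python
import numpy as np

def point_to_line_distance(p, line_point, line_dir):
    """Distance from point p to the infinite line through line_point
    with direction line_dir (the needle trajectory)."""
    d = np.asarray(line_dir, float)
    d = d / np.linalg.norm(d)
    v = np.asarray(p, float) - np.asarray(line_point, float)
    return np.linalg.norm(v - np.dot(v, d) * d)

planned_target = np.array([10.0, 22.0, 35.0])   # mm, hypothetical
actual_target  = np.array([13.5, 20.0, 37.0])   # after registering the post-insertion image
needle_point   = np.array([12.0, 21.0, 0.0])
needle_dir     = np.array([0.05, -0.02, 1.0])

target_displacement  = np.linalg.norm(actual_target - planned_target)
needle_placement_err = point_to_line_distance(planned_target, needle_point, needle_dir)
biopsy_err           = point_to_line_distance(actual_target, needle_point, needle_dir)
print(target_displacement, needle_placement_err, biopsy_err)
```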

  19. Clinical Judgment Accuracy: From Meta-Analysis to Metatheory

    ERIC Educational Resources Information Center

    Ridley, Charles R.; Shaw-Ridley, Mary

    2009-01-01

    Clinical judgment is foundational to psychological practice. Accurate judgment forms the basis for establishing reasonable goals and selecting appropriate treatments, which in turn are essential in achieving positive therapeutic outcomes. Therefore, Spengler and colleagues' meta-analytic finding--clinical judgment accuracy improves marginally with…

  20. Aptitude-Achievement Discrepancy Scores: Accuracy in Analysis Ignored.

    ERIC Educational Resources Information Center

    Ross, Roslyn P.

    1992-01-01

    Reacts to the response of Barnett and Macmann concerning Ross's investigation into whether psychologists possess the psychometric knowledge necessary to evaluate discrepancies within an individual's test performance with sufficient accuracy to ensure reliable decision making. Contends that ways need to be found to implement and evaluate promising…

  1. Design and analysis for thematic map accuracy assessment: Fundamental principles

    Treesearch

    Stephen V. Stehman; Raymond L. Czaplewski

    1998-01-01

    Land-cover maps are used in numerous natural resource applications to describe the spatial distribution and pattern of land-cover, to estimate areal extent of various cover classes, or as input into habitat suitability models, land-cover change analyses, hydrological models, and risk analyses. Accuracy assessment quantifies data quality so that map users may evaluate...

  2. Accuracy improvement of quantitative analysis by spatial confinement in laser-induced breakdown spectroscopy.

    PubMed

    Guo, L B; Hao, Z Q; Shen, M; Xiong, W; He, X N; Xie, Z Q; Gao, M; Li, X Y; Zeng, X Y; Lu, Y F

    2013-07-29

    To improve the accuracy of quantitative analysis in laser-induced breakdown spectroscopy, the plasma produced by a Nd:YAG laser from steel targets was confined by a cavity. A number of elements with low concentrations, such as vanadium (V), chromium (Cr), and manganese (Mn), in the steel samples were investigated. After the optimization of the cavity dimension and laser fluence, significant enhancement factors of 4.2, 3.1, and 2.87 in the emission intensity of V, Cr, and Mn lines, respectively, were achieved at a laser fluence of 42.9 J/cm² using a hemispherical cavity (diameter: 5 mm). More importantly, the correlation coefficient of the V I 440.85/Fe I 438.35 nm was increased from 0.946 (without the cavity) to 0.981 (with the cavity); and similar results for Cr I 425.43/Fe I 425.08 nm and Mn I 476.64/Fe I 492.05 nm were also obtained. Therefore, it was demonstrated that the accuracy of quantitative analysis with low concentration elements in steel samples was improved, because the plasma became uniform with spatial confinement. The results of this study provide a new pathway for improving the accuracy of quantitative analysis of LIBS.

  3. Cerebellar Arteriovenous Malformations: Anatomical Subtypes, Surgical Results, and Increased Predictive Accuracy of the Supplementary Grading System

    PubMed Central

    Rodríguez-Hernández, Ana; Kim, Helen; Pourmohamad, Tony; Young, William L.; Lawton, Michael T.

    2013-01-01

    Background Anatomical diversity amongst cerebellar AVMs calls for a classification that is intuitive and surgically informative. Selection tools like the Spetzler-Martin grading system are designed to work best with cerebral AVMs, but have shortcomings with cerebellar AVMs. Objective To define subtypes of cerebellar AVMs that clarify anatomy and surgical management, determine results according to subtypes, and compare predictive accuracies of Spetzler-Martin and supplementary systems. Methods From a consecutive surgical series of 500 patients, 60 had cerebellar AVMs, 39 had brain stem AVMs and were excluded, and 401 had cerebral AVMs. Results Cerebellar AVM subtypes were: 18 vermian, 13 suboccipital, 12 tentorial, 12 petrosal, and 5 tonsillar. Patients with tonsillar and tentorial AVMs fared best. Cerebellar AVMs presented with hemorrhage more than cerebral AVMs (p<0.001). Cerebellar AVMs were more likely to drain deep (p=0.036) and less likely eloquent (p<0.001). The predictive accuracy of supplementary grade was better than that of Spetzler-Martin grade with cerebellar AVMs (areas under the ROC curve 0.74 and 0.59, respectively). The predictive accuracy of the supplementary system was consistent for cerebral and cerebellar AVMs, whereas that of the Spetzler-Martin system was greater with cerebral AVMs. Conclusion Patients with cerebellar AVMs present with hemorrhage more than patients with cerebral AVMs, justifying an aggressive treatment posture. The supplementary system is better than the Spetzler-Martin system at predicting outcomes after cerebellar AVM resection. Key components of the Spetzler-Martin system, like venous drainage and eloquence, are distorted by cerebellar anatomy in ways that components of the supplementary system are not. PMID:22986595

  4. Cerebellar arteriovenous malformations: anatomic subtypes, surgical results, and increased predictive accuracy of the supplementary grading system.

    PubMed

    Rodríguez-Hernández, Ana; Kim, Helen; Pourmohamad, Tony; Young, William L; Lawton, Michael T

    2012-12-01

    Anatomic diversity among cerebellar arteriovenous malformations (AVMs) calls for a classification that is intuitive and surgically informative. Selection tools like the Spetzler-Martin grading system are designed to work best with cerebral AVMs but have shortcomings with cerebellar AVMs. To define subtypes of cerebellar AVMs that clarify anatomy and surgical management, to determine results according to subtypes, and to compare predictive accuracies of the Spetzler-Martin and supplementary systems. From a consecutive surgical series of 500 patients, 60 had cerebellar AVMs, 39 had brainstem AVMs and were excluded, and 401 had cerebral AVMs. Cerebellar AVM subtypes were as follows: 18 vermian, 13 suboccipital, 12 tentorial, 12 petrosal, and 5 tonsillar. Patients with tonsillar and tentorial AVMs fared best. Cerebellar AVMs presented with hemorrhage more than cerebral AVMs (P < .001). Cerebellar AVMs were more likely to drain deep (P = .04) and less likely to be eloquent (P < .001). The predictive accuracy of the supplementary grade was better than that of the Spetzler-Martin grade with cerebellar AVMs (areas under the receiver-operating characteristic curve, 0.74 and 0.59, respectively). The predictive accuracy of the supplementary system was consistent for cerebral and cerebellar AVMs, whereas that of the Spetzler-Martin system was greater with cerebral AVMs. Patients with cerebellar AVMs present with hemorrhage more often than patients with cerebral AVMs, justifying an aggressive treatment posture. The supplementary system is better than the Spetzler-Martin system at predicting outcomes after cerebellar AVM resection. Key components of the Spetzler-Martin system such as venous drainage and eloquence are distorted by cerebellar anatomy in ways that components of the supplementary system are not.

  5. Increasing accuracy in the assessment of motion sickness: A construct methodology

    NASA Technical Reports Server (NTRS)

    Stout, Cynthia S.; Cowings, Patricia S.

    1993-01-01

    The purpose is to introduce a new methodology that should improve the accuracy of the assessment of motion sickness. This construct methodology utilizes both subjective reports of motion sickness and objective measures of physiological correlates to assess motion sickness. Current techniques and methods used in the framework of a construct methodology are inadequate. Current assessment techniques for diagnosing motion sickness and space motion sickness are reviewed, and attention is called to the problems with the current methods. Further, principles of psychophysiology that when applied will probably resolve some of these problems are described in detail.

  6. A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming

    SciTech Connect

    Meier, Horst; Laurischkat, Roman; Zhu Junhong

    2011-01-17

    One main influence on the dimensional accuracy in robot-based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematic structure results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate for these deviations offline, a model-based approach has been developed, consisting of a finite element approach to simulate the sheet forming and a multi-body system to model the compliant robot structure. This paper describes the implementation and experimental verification of the multi-body system model and its included compensation method.

  7. Dual specimens increase the diagnostic accuracy and reduce the reaction duration of rapid urease test

    PubMed Central

    Hsu, Wen-Hung; Wang, Sophie SW; Kuo, Chao-Hung; Chen, Chiao-Yun; Chang, Ching-Wen; Hu, Huang-Ming; Wang, Jaw-Yuan; Yang, Yuan-Chieh; Lin, Yu-Chun; Wang, Wen-Ming; Wu, Deng-Chyang; Wu, Ming-Tsang; Kuo, Fu-Chen

    2010-01-01

    AIM: To evaluate the influence of multiple samplings during esophagogastroduodenoscopy (EGD) on the accuracy of the rapid urease test, and the validity of newly developed rapid urease tests, HelicotecUT plus test and HelicotecUT test, CLO test and ProntoDry test. METHODS: A total of 355 patients undergoing EGD for dyspepsia were included. Their Helicobacter pylori (H. pylori) treatment status was either naïve or eradicated. Six biopsy specimens from antrum and gastric body, respectively, were obtained during EGD. Single antral specimens and dual (antrum + body) specimens were compared. Infection status of H. pylori was evaluated by three different tests: culture, histology, and four different commercially available rapid urease tests (RUTs)-including the newly developed HelicotecUT plus test and HelicotecUT test, and established CLO test and ProntoDry test. H. pylori status was defined as positive when the culture was positive or if there were concordant positive results among histology, CLO test and ProntoDry test. RESULTS: When dual specimens were applied, sensitivity was enhanced and RUT reaction time was significantly reduced, regardless of their treatment status. Thirty minutes were enough to achieve an agreeable positive rate in all the RUTs. Both newly developed RUTs showed comparable sensitivity, specificity and accuracy to the established RUTs, regardless of patient treatment status, RUT reaction duration, and EGD biopsy sites. CONCLUSION: Combination of antrum and body biopsy specimens greatly enhances the sensitivity of rapid urease test and reduces the reaction duration to 30 min. PMID:20556840

  8. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    PubMed Central

    Hong, Keum-Shik; Khan, Muhammad Jawad

    2017-01-01

    In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state visual evoked potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided. PMID:28790910

  9. Accuracy of self-reported cancer-screening histories: a meta-analysis.

    PubMed

    Rauscher, Garth H; Johnson, Timothy P; Cho, Young Ik; Walk, Jennifer A

    2008-04-01

    Survey data used to study trends in cancer screening may overestimate screening utilization while potentially underestimating existing disparities in use. We did a literature review and meta-analysis of validation studies examining the accuracy of self-reported cancer-screening histories. We calculated summary random-effects estimates for sensitivity and specificity, separately for mammography, clinical breast exam (CBE), Pap smear, prostate-specific antigen testing (PSA), digital rectal exam, fecal occult blood testing, and colorectal endoscopy. Sensitivity was highest for mammogram, CBE, and Pap smear (0.95, 0.94, and 0.93, respectively) and lowest for PSA and digital rectal exam histories (0.71 and 0.75). Specificity was highest for endoscopy, fecal occult blood testing, and PSA (0.90, 0.78, and 0.73, respectively) and lowest for CBE, Pap smear, and mammogram histories (0.26, 0.48, and 0.61, respectively). Sensitivity and specificity summary estimates tended to be lower in predominantly Black and Hispanic samples compared with predominantly White samples. When estimates of self-report accuracy from this meta-analysis were applied to cancer-screening prevalence estimates from the National Health Interview Survey, results suggested that prevalence estimates are artificially increased and disparities in prevalence are artificially decreased by inaccurate self-reports. National survey data are overestimating cancer-screening utilization for several common procedures and may be masking disparities in screening due to racial/ethnic differences in reporting accuracy.
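
    The direction of the bias follows from the standard relation between apparent and true prevalence under imperfect sensitivity and specificity; the sketch below uses illustrative numbers loosely in the neighbourhood of the mammography estimates above, not the study's data.

```python
def apparent_prevalence(true_prev, sensitivity, specificity):
    """Expected self-reported prevalence given imperfect recall accuracy:
    sens * p + (1 - spec) * (1 - p)."""
    return sensitivity * true_prev + (1.0 - specificity) * (1.0 - true_prev)

# Illustrative values: high sensitivity plus low specificity inflates the estimate
true_prev, sens, spec = 0.60, 0.95, 0.61
print(apparent_prevalence(true_prev, sens, spec))  # ~0.73: utilization overestimated
```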

  10. Increasing the accuracy of peanut allergy diagnosis by using Ara h 2.

    PubMed

    Dang, Thanh D; Tang, Mimi; Choo, Sharon; Licciardi, Paul V; Koplin, Jennifer J; Martin, Pamela E; Tan, Tina; Gurrin, Lyle C; Ponsonby, Anne-Louise; Tey, Dean; Robinson, Marnie; Dharmage, Shyamali C; Allen, Katrina J

    2012-04-01

    Measurement of whole peanut-specific IgE (sIgE) is often used to confirm sensitization but does not reliably predict allergy. Ara h 2 is the dominant peanut allergen detected in 90% to 100% of patients with peanut allergy and could help improve diagnosis. We sought to determine whether Ara h 2 testing might improve the accuracy of diagnosing peanut allergy and therefore circumvent the need for an oral food challenge (OFC). Infants from the population-based HealthNuts study underwent skin prick tests to determine peanut sensitization and subsequently underwent a peanut OFC to confirm allergy status. In a stratified random sample of 200 infants (100 with peanut allergy and 100 with peanut tolerance), whole peanut sIgE and Ara h 2 sIgE levels were quantified by using fluorescence enzyme immunoassay. By using the previously published 95% positive predictive value of 15 kU(A)/L for whole peanut sIgE, a corresponding specificity of 98% (95% CI, 93% to 100%) was found in this study cohort. At the equivalent specificity of 98%, the sensitivity of Ara h 2 sIgE is 60% (95% CI, 50% to 70%), correctly identifying 60% of subjects with true peanut allergy compared with only 26% correctly identified by using whole peanut sIgE. We report that when using a combined approach of plasma sIgE testing for whole peanut followed by Ara h 2 for the diagnosis of peanut allergy, the number of OFCs required is reduced by almost two thirds. Ara h 2 plasma sIgE test levels provide higher diagnostic accuracy than whole peanut plasma sIgE levels and could be considered a new diagnostic tool to distinguish peanut allergy from peanut tolerance, which might reduce the need for an OFC. Copyright © 2012 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.

  11. Utility of an Algorithm to Increase the Accuracy of Medication History in an Obstetrical Setting

    PubMed Central

    Corbel, Aline; Baud, David; Chaouch, Aziz; Beney, Johnny; Csajka, Chantal; Panchaud, Alice

    2016-01-01

    Background In an obstetrical setting, inaccurate medication histories at hospital admission may result in failure to identify potentially harmful treatments for patients and/or their fetus(es). Methods This prospective study was conducted to assess average concordance rates between (1) a medication list obtained with a one-page structured medication history algorithm developed for the obstetrical setting and (2) the medication list reported in medical records and obtained by open-ended questions based on standard procedures. Both lists were converted into concordance rates using a best possible medication history approach as the reference (information obtained from interviews with patients, prescribers, and community pharmacists). Results The algorithm-based method obtained a higher average concordance rate than the standard method, with respectively 90.2% [CI95% 85.8–94.3] versus 24.6% [CI95% 15.3–34.4] concordance rates (p<0.01). Conclusion Our algorithm-based method strongly enhanced the accuracy of the medication history in our obstetric population, without using substantial resources. Its implementation is an effective first step in the medication reconciliation process, which has been recognized as a very important component of patients' drug safety. PMID:26999743

  12. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different than theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
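
    A one-dimensional sketch of the two discretization methods compared above: the cell-center method samples the Gaussian density at each cell centre, while the cell-integration method integrates it over each unit cell via the error function. The kernel width and extent below are arbitrary choices for illustration.

```python
import numpy as np
from math import erf, sqrt, pi, exp

sigma, half_width = 0.5, 5            # kernel sd and extent, in grid cells
centers = np.arange(-half_width, half_width + 1, dtype=float)

# Cell-center method: sample the Gaussian density at each cell centre
cell_center = np.array([exp(-x**2 / (2 * sigma**2)) / (sigma * sqrt(2 * pi))
                        for x in centers])
cell_center /= cell_center.sum()

# Cell-integration method: integrate the density over each unit cell
def gauss_cdf(x):
    return 0.5 * (1.0 + erf(x / (sigma * sqrt(2))))

cell_integrated = np.array([gauss_cdf(x + 0.5) - gauss_cdf(x - 0.5) for x in centers])
cell_integrated /= cell_integrated.sum()

print(np.abs(cell_center - cell_integrated).max())  # discrepancy grows as sigma shrinks
```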

  13. Combining cow and bull reference populations to increase accuracy of genomic prediction and genome-wide association studies.

    PubMed

    Calus, M P L; de Haas, Y; Veerkamp, R F

    2013-10-01

    Genomic selection holds the promise to be particularly beneficial for traits that are difficult or expensive to measure, such that access to phenotypes on large daughter groups of bulls is limited. Instead, cow reference populations can be generated, potentially supplemented with existing information from the same or (highly) correlated traits available on bull reference populations. The objective of this study, therefore, was to develop a model to perform genomic predictions and genome-wide association studies based on a combined cow and bull reference data set, with the accuracy of the phenotypes differing between the cow and bull genomic selection reference populations. The developed bivariate Bayesian stochastic search variable selection model allowed for an unbalanced design by imputing residuals in the residual updating scheme for all missing records. The performance of this model is demonstrated on a real data example, where the analyzed trait, being milk fat or protein yield, was either measured only on a cow or a bull reference population, or recorded on both. Our results were that the developed bivariate Bayesian stochastic search variable selection model was able to analyze 2 traits, even though animals had measurements on only 1 of 2 traits. The Bayesian stochastic search variable selection model yielded consistently higher accuracy for fat yield compared with a model without variable selection, both for the univariate and bivariate analyses, whereas the accuracy of both models was very similar for protein yield. The bivariate model identified several additional quantitative trait loci peaks compared with the single-trait models on either trait. In addition, the bivariate models showed a marginal increase in accuracy of genomic predictions for the cow traits (0.01-0.05), although a greater increase in accuracy is expected as the size of the bull population increases. Our results emphasize that the chosen value of priors in Bayesian genomic prediction

  14. Increasing accuracy and throughput in large-scale microsatellite fingerprinting of cacao field germplasm collections

    USDA-ARS?s Scientific Manuscript database

    Microsatellite-based DNA fingerprinting has been increasingly applied in crop genebank management. However, efficiency and cost-saving remain a major challenge for large scale genotyping, even when middle or high throughput genotyping facility is available. In this study we report on increasing the...

  15. Increased accuracy of batch fecundity estimates using oocyte stage ratios in Plectropomus leopardus.

    PubMed

    Carter, A B; Williams, A J; Russ, G R

    2009-08-01

    Using the ratio of the number of migratory nuclei to hydrated oocytes to estimate batch fecundity of common coral trout Plectropomus leopardus increases the time over which samples can be collected and, therefore, increases the sample size available and reduces biases in batch fecundity estimates.

  16. Increasing accuracy of daily evapotranspiration through synergistic use of MSG and MERIS/AATSR

    NASA Astrophysics Data System (ADS)

    Timmermans, Joris; van der Tol, Christiaan; Su, Zhongbo

    2010-05-01

    Daily Evapotranspiration estimates are important in many applications. Evapotranspiration plays a significant role in the water, energy and carbon cycles. Through these cycles evapotranspiration is important for monitoring droughts, managing agricultural irrigation, and weather forecast modeling. Drought levels and irrigation needs can be calculated from evapotranspiration because evapotranspiration estimates give a direct indication on the health and growth rate of crops. The evaporation of the soil and open water bodies and transpiration from plants combine as a lower forcing boundary parameter to the atmosphere affecting local and regional weather patterns. Evapotranspiration can be estimated using different techniques: ground measurements, hydrological modeling, and remote sensing algorithms. The first two techniques are not suitable for large scale estimation of evapotranspiration. Ground measurements are only valid within a small footprint area; and hydrological modelling requires intensive knowledge of a too large amount of processes. The advantage of remote sensing algorithms is that they are capable of estimating the evapotranspiration over large scales with a limited amount of parameters. In remote sensing a trade off exists between temporal and spatial resolution. Geostationary satellites have high temporal resolution but have a low spatial resolution, where near-Polar Orbiting satellites have high spatial resolution but have low temporal resolution. For example the SEVIRI sensor on the Meteosat Second Generation (MSG) satellite acquires images every 15 minutes with a resolution of 3km, where the AATSR/MERIS combination of the ENVISAT satellite has a revisit time of several days with a 1km resolution. Combining the advantages of geostationary satellites and polar-orbiting satellites will greatly improve the accuracy of the daily evapotranspiration estimates. Estimating daily evapotranspiration from near-polar orbiting satellites requires a method to

  17. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new
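
    As a toy, single-tier illustration of the reduced-precision idea (not the three-tier system described above), the sketch below integrates the standard Lorenz '96 equations in double precision and again with the state truncated to half precision at each step, then compares the end states.

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate(x0, steps=500, dt=0.01, dtype=np.float64):
    x = x0.astype(dtype)
    for _ in range(steps):
        x = (x + dt * lorenz96_tendency(x)).astype(dtype)  # truncate each step
    return x.astype(np.float64)

rng = np.random.default_rng(0)
x0 = 8.0 + 0.01 * rng.standard_normal(40)
full = integrate(x0, dtype=np.float64)
half = integrate(x0, dtype=np.float16)
print(np.abs(full - half).max())   # rounding differences grow with the chaotic dynamics
```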

  18. A structured interview guide for global impressions: increasing reliability and scoring accuracy for CNS trials

    PubMed Central

    2013-01-01

    Background The clinical global impression of severity (CGI-S) scale is a frequently used rating instrument for the assessment of global severity of illness in Central Nervous System (CNS) trials. Although scoring guidelines have been proposed to anchor these scores, the collection of sufficient documentation to support the derived score is not part of any standardized interview procedure. It is self-evident that the absence of a standardized, documentary format can affect inter-rater reliability and may adversely affect the accuracy of the resulting data. Method We developed a structured interview guide for global impressions (SIGGI) and evaluated the instrument in a 2-visit study of ambulatory patients with Major Depressive Disorder (MDD) or schizophrenia. Blinded, site-independent raters listened to audio recorded SIGGI interviews administered by site-based CGI raters. We compared SIGGI-derived CGI-S scores between the two separate site-based raters and the site-independent raters. Results We found significant intraclass correlations (p = 0.001) on all SIGGI-derived CGI-S scores between two separate site-based CGI raters with each other (r = 0.768) and with a blinded, site-independent rater (r = 0.748 and r = 0.706 respectively) and significant Pearson's correlations between CGI-S scores and all MADRS validity comparisons for MDD and PANSS comparisons for schizophrenia (p = 0.001 in all cases). Compared to site-based raters, the site-independent raters gave identical "dual" CGI-S scores to 67.6% and 68.2% of subjects at visit 1 and 77.1% at visit 2. Conclusion We suggest that the SIGGI may improve the inter-rater reliability and scoring precision of the CGI-S and have broad applicability in CNS clinical trials. PMID:23369692

  19. Evaluation of precision and accuracy of selenium measurements in biological materials using neutron activation analysis

    SciTech Connect

    Greenberg, R.R.

    1988-01-01

    In recent years, the accurate determination of selenium in biological materials has become increasingly important in view of the essential nature of this element for human nutrition and its possible role as a protective agent against cancer. Unfortunately, the accurate determination of selenium in biological materials is often difficult for most analytical techniques for a variety of reasons, including interferences, complicated selenium chemistry due to the presence of this element in multiple oxidation states and in a variety of different organic species, stability and resistance to destruction of some of these organo-selenium species during acid dissolution, volatility of some selenium compounds, and potential for contamination. Neutron activation analysis (NAA) can be one of the best analytical techniques for selenium determinations in biological materials for a number of reasons. Currently, precision at the 1% level (1s) and overall accuracy at the 1 to 2% level (95% confidence interval) can be attained at the U.S. National Bureau of Standards (NBS) for selenium determinations in biological materials when counting statistics are not limiting (using the ⁷⁵Se isotope). An example of this level of precision and accuracy is summarized. Achieving this level of accuracy, however, requires strict attention to all sources of systematic error. Precise and accurate results can also be obtained after radiochemical separations.

  20. Accuracy of the Parallel Analysis Procedure with Polychoric Correlations

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Li, Feiming; Bandalos, Deborah

    2009-01-01

    The purpose of this study was to investigate the application of the parallel analysis (PA) method for choosing the number of factors in component analysis for situations in which data are dichotomous or ordinal. Although polychoric correlations are sometimes used as input for component analyses, the random data matrices generated for use in PA…
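
    For the continuous (Pearson) case, parallel analysis amounts to retaining components whose observed eigenvalues exceed a quantile of eigenvalues obtained from random data of the same dimensions; the polychoric variant studied above replaces the correlation step. The data below are simulated purely for illustration.

```python
import numpy as np

def parallel_analysis(data, n_sims=200, quantile=0.95, seed=0):
    """Retain components whose eigenvalues exceed the chosen quantile of
    eigenvalues from random normal data of the same shape (Pearson case)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.standard_normal((n, p))
        sim_eigs[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    threshold = np.quantile(sim_eigs, quantile, axis=0)
    return int(np.sum(obs_eigs > threshold)), obs_eigs, threshold

# Hypothetical data: two correlated blocks of variables -> expect ~2 components
rng = np.random.default_rng(1)
f = rng.standard_normal((300, 2))
data = np.hstack([f[:, [0]] + 0.5 * rng.standard_normal((300, 3)),
                  f[:, [1]] + 0.5 * rng.standard_normal((300, 3))])
n_keep, _, _ = parallel_analysis(data)
print("components retained:", n_keep)
```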

  2. Analysis of factors influencing the accuracy of CRDInSAR

    NASA Astrophysics Data System (ADS)

    Fu, Wenxue; Guo, Huadong; Tian, Qingjiu; Guo, Xiaofang

    2010-11-01

    In recent years, the method of Corner Reflectors Differential Interferometric Synthetic Aperture Radar (CRDInSAR) was proposed to overcome the decorrelation limitations of the conventional differential interferometric synthetic aperture radar (DInSAR) technique. In general, the corner reflector has a very high RCS (Radar Cross Section) for a small size, and the maximum RCS occurs when it points directly along the boresight of the SAR antenna. The beam width of a trihedral corner reflector is rather broad (having a 3dB beam width of 40° in both elevation and azimuth), so it is fairly tolerant to installation errors. Corner reflectors can be used as artificial PS (Permanent Scatterers) points by installing them in a study area, owing to their stable amplitude and phase performance. However, some errors in the CRDInSAR system will still affect the measurement results. In this paper, the factors that influence the accuracy of CRDInSAR are discussed, including errors in the baseline and its angle, the look angle, and the height of the corner reflector.

  3. Analysis of factors influencing the accuracy of CRDInSAR

    NASA Astrophysics Data System (ADS)

    Fu, Wenxue; Guo, Huadong; Tian, Qingjiu; Guo, Xiaofang

    2009-09-01

    In recent years, the method of Corner Reflectors Differential Interferometric Synthetic Aperture Radar (CRDInSAR) was proposed to overcome the decorrelation limitations of the conventional differential interferometric synthetic aperture radar (DInSAR) technique. In general, the corner reflector has a very high RCS (Radar Cross Section) for a small size, and the maximum RCS occurs when it points directly along the boresight of the SAR antenna. The beam width of a trihedral corner reflector is rather broad (having a 3dB beam width of 40° in both elevation and azimuth), so it is fairly tolerant to installation errors. Corner reflectors can be used as artificial PS (Permanent Scatterers) points by installing them in a study area, owing to their stable amplitude and phase performance. However, some errors in the CRDInSAR system will still affect the measurement results. In this paper, the factors that influence the accuracy of CRDInSAR are discussed, including errors in the baseline and its angle, the look angle, and the height of the corner reflector.

  4. Increased Accuracy in the Measurement of the Dielectric Constant of Seawater at 1.413 GHz

    NASA Technical Reports Server (NTRS)

    Zhou, Y.; Lang R.; Drego, C.; Utku, C.; LeVine, D.

    2012-01-01

    This paper describes the latest results for the measurements of the dielectric constant at 1.413 GHz by using a resonant cavity technique. The purpose of these measurements is to develop an accurate relationship for the dependence of the dielectric constant of sea water on temperature and salinity which is needed by the Aquarius inversion algorithm to retrieve salinity. Aquarius is the major instrument on the Aquarius/SAC-D observatory, a NASA/CONAE satellite mission launched in June of 2011 with the primary mission of measuring global sea surface salinity to an accuracy of 0.2 psu. Aquarius measures salinity with a 1.413 GHz radiometer and uses a scatterometer to compensate for the effects of surface roughness. The core part of the seawater dielectric constant measurement system is a brass microwave cavity that is resonant at 1.413 GHz. The seawater is introduced into the cavity through a capillary glass tube having an inner diameter of 0.1 mm. The change of resonance frequency and the cavity Q value are used to determine the real and imaginary parts of the dielectric constant of seawater introduced into the thin tube. Measurements are automated with the help of software developed at the George Washington University. In this talk, new results from measurements made since September 2010 will be presented for salinities of 30, 35 and 38 psu over a temperature range of 0 °C to 35 °C in intervals of 5 °C. These measurements are more accurate than earlier measurements made in 2008 because of a new method for measuring the calibration constant using methanol. In addition, the variance of repeated seawater measurements has been reduced by letting the system stabilize overnight between temperature changes. The new results are compared to the Klein-Swift and Meissner-Wentz model functions. The importance of an accurate model function will be illustrated by using these model functions to invert the Aquarius brightness temperature to get the salinity values. The salinity values

  5. VA Health Care: Improvements Needed in Monitoring Antidepressant Use for Major Depressive Disorder and in Increasing Accuracy of Suicide Data

    DTIC Science & Technology

    2014-11-01

    Use for Major Depressive Disorder and in Increasing Accuracy of Suicide Data Why GAO Did This Study In 2013, VA estimated that about 1.5 million...who die by suicide each day were receiving care through VA. Mental health treatment includes services for depression —a mood disorder that causes a...include substance use disorder, physical impairments, previous suicide attempts, and depression . Additionally, life stressors, such as marital or

  6. Using Meta-Analysis to Inform the Design of Subsequent Studies of Diagnostic Test Accuracy

    ERIC Educational Resources Information Center

    Hinchliffe, Sally R.; Crowther, Michael J.; Phillips, Robert S.; Sutton, Alex J.

    2013-01-01

    An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test; particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial…

  7. Coupled Loads Analysis Accuracy from the Space Vehicle Perspective

    NASA Astrophysics Data System (ADS)

    Dickens, J. M.; Wittbrodt, M. J.; Gate, M. M.; Li, L. H.; Stroeve, A.

    2001-01-01

    Coupled loads analysis (CLA) consists of performing a structural response analysis, usually a time-history response analysis, with reduced dynamic models typically provided by two different companies to obtain the coupled response of a launch vehicle and space vehicle to the launching and staging events required to place the space vehicle into orbit. The CLA is performed by the launch vehicle contractor with a reduced dynamics mathematical model that is coupled to the launch vehicle, or booster, model to determine the coupled loads for each substructure. Recently, the booster and space vehicle contractors have been from different countries. Due to the language differences and governmental restrictions, the verification of the CLA is much more difficult than when working with launch vehicle and space vehicle contractors of the same country. This becomes exceedingly clear when the CLA analysis results do not seem to pass an intuitive judgement. Presented in the sequel are three checks that a space vehicle contractor can perform on the results of a coupled loads analysis to partially verify the analysis.

  8. Joint Modelling of Confounding Factors and Prominent Genetic Regulators Provides Increased Accuracy in Genetical Genomics Studies

    PubMed Central

    Lawrence, Neil D.

    2012-01-01

    Expression quantitative trait loci (eQTL) studies are an integral tool to investigate the genetic component of gene expression variation. A major challenge in the analysis of such studies is hidden confounding factors, such as unobserved covariates or unknown subtle environmental perturbations. These factors can induce a pronounced artifactual correlation structure in the expression profiles, which may create spurious false associations or mask real genetic association signals. Here, we report PANAMA (Probabilistic ANAlysis of genoMic dAta), a novel probabilistic model to account for confounding factors within an eQTL analysis. In contrast to previous methods, PANAMA learns hidden factors jointly with the effect of prominent genetic regulators. As a result, this new model can more accurately distinguish true genetic association signals from confounding variation. We applied our model and compared it to existing methods on different datasets and biological systems. PANAMA consistently performs better than alternative methods, and finds in particular substantially more trans regulators. Importantly, our approach not only identifies a greater number of associations, but also yields hits that are biologically more plausible and can be better reproduced between independent studies. A software implementation of PANAMA is freely available online at http://ml.sheffield.ac.uk/qtl/. PMID:22241974

  9. Increasing the accuracy and automation of fractional vegetation cover estimation from digital photographs

    USDA-ARS?s Scientific Manuscript database

    The use of automated methods to estimate canopy cover (CC) from digital photographs has increased in recent years given its potential to produce accurate, fast and inexpensive CC measurements. Wide acceptance has been delayed because of the limitations of these methods. This work introduces a novel ...

  10. Increasing the accuracy of radiotracer monitoring in one-dimensional flow using polynomial deconvolution correction.

    PubMed

    Gholipour Peyvandi, Reza; Taheri, Ali

    2016-01-01

    Factors such as the type of fluid movement and gamma-ray scattering may decrease the precision of radiotracer monitoring of the response to a short tracer injection. Practical experiences using polynomial deconvolution techniques are presented. These techniques were successfully applied to correct the experimental results obtained and to increase the time resolution of the method. Copyright © 2015 Elsevier Ltd. All rights reserved.
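
    For noise-free, discrete signals, deconvolution by a known system response reduces to polynomial division; the sketch below uses synthetic signals and scipy.signal.deconvolve, and omits the corrections that real, noisy tracer data would require.

```python
import numpy as np
from scipy.signal import deconvolve

# Hypothetical system response (detector/flow smearing) and true injection pulse
system = np.array([0.2, 0.5, 0.2, 0.1])
injection = np.array([0.0, 1.0, 3.0, 2.0, 0.5, 0.0])

measured = np.convolve(injection, system)             # what the detector records
recovered, remainder = deconvolve(measured, system)   # polynomial division

print(np.allclose(recovered, injection))               # True for noise-free data
```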

  11. High Frequency rTMS over the Left Parietal Lobule Increases Non-Word Reading Accuracy

    ERIC Educational Resources Information Center

    Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano

    2012-01-01

    Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe…

  12. Repeating a Monologue under Increasing Time Pressure: Effects on Fluency, Complexity, and Accuracy

    ERIC Educational Resources Information Center

    Thai, Chau; Boers, Frank

    2016-01-01

    Studies have shown that learners' task performance improves when they have the opportunity to repeat the task. Conditions for task repetition vary, however. In the 4/3/2 activity, learners repeat a monologue under increasing time pressure. The purpose is to foster fluency, but it has been suggested in the literature that it also benefits other…

  15. The urine dipstick test useful to rule out infections. A meta-analysis of the accuracy

    PubMed Central

    Devillé, Walter LJM; Yzermans, Joris C; van Duijn, Nico P; Bezemer, P Dick; van der Windt, Daniëlle AWM; Bouter, Lex M

    2004-01-01

    Background Many studies have evaluated the accuracy of dipstick tests as rapid detectors of bacteriuria and urinary tract infections (UTI). The lack of an adequate explanation for the heterogeneity of the dipstick accuracy stimulates an ongoing debate. The objective of the present meta-analysis was to summarise the available evidence on the diagnostic accuracy of the urine dipstick test, taking into account various pre-defined potential sources of heterogeneity. Methods Literature from 1990 through 1999 was searched in Medline and Embase, and by reference tracking. Selected publications should be concerned with the diagnosis of bacteriuria or urinary tract infections, investigate the use of dipstick tests for nitrites and/or leukocyte esterase, and present empirical data. A checklist was used to assess methodological quality. Results 70 publications were included. Accuracy of nitrites was high in pregnant women (Diagnostic Odds Ratio = 165) and elderly people (DOR = 108). Positive predictive values were ≥80% in elderly and in family medicine. Accuracy of leukocyte-esterase was high in studies in urology patients (DOR = 276). Sensitivities were highest in family medicine (86%). Negative predictive values were high in both tests in all patient groups and settings, except for in family medicine. The combination of both test results showed an important increase in sensitivity. Accuracy was high in studies in urology patients (DOR = 52), in children (DOR = 46), and if clinical information was present (DOR = 28). Sensitivity was highest in studies carried out in family medicine (90%). Predictive values of combinations of positive test results were low in all other situations. Conclusions Overall, this review demonstrates that the urine dipstick test alone seems to be useful in all populations to exclude the presence of infection if the results of both nitrites and leukocyte-esterase are negative. Sensitivities of the combination of both tests vary between 68 and 88% in
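
    The diagnostic odds ratio and the effect of an "either pad positive" rule can be worked out from 2x2 counts; the counts and accuracies below are invented, and the combination formula assumes the two pads err independently, which real dipstick data need not satisfy.

```python
def diagnostics(tp, fn, tn, fp):
    """Sensitivity, specificity, diagnostic odds ratio, and NPV from a 2x2 table."""
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    dor = (tp * tn) / (fn * fp)
    npv = tn / (tn + fn)
    return sens, spec, dor, npv

print(diagnostics(tp=70, fn=30, tn=85, fp=15))   # hypothetical nitrite counts

# "Either test positive" rule under an independence assumption:
sens_n, spec_n = 0.70, 0.85   # nitrites (illustrative)
sens_l, spec_l = 0.80, 0.70   # leukocyte esterase (illustrative)
sens_combined = 1 - (1 - sens_n) * (1 - sens_l)   # 0.94: sensitivity rises
spec_combined = spec_n * spec_l                    # 0.595: specificity falls
print(sens_combined, spec_combined)
```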

  16. Tourniquet Test for Dengue Diagnosis: Systematic Review and Meta-analysis of Diagnostic Test Accuracy

    PubMed Central

    Reid, Hamish; Thomas, Emma; Foster, Charlie; Darton, Thomas C.

    2016-01-01

    Background Dengue fever is a ubiquitous arboviral infection in tropical and sub-tropical regions, whose incidence has increased over recent decades. In the absence of a rapid point of care test, the clinical diagnosis of dengue is complex. The World Health Organisation has outlined diagnostic criteria for making the diagnosis of dengue infection, which includes the use of the tourniquet test (TT). Purpose To assess the quality of the evidence supporting the use of the TT and perform a diagnostic accuracy meta-analysis comparing the TT to antibody response measured by ELISA. Data Sources A comprehensive literature search was conducted in the following databases to April, 2016: MEDLINE (PubMed), EMBASE, Cochrane Central Register of Controlled Trials, BIOSIS, Web of Science, SCOPUS. Study Selection Studies comparing the diagnostic accuracy of the tourniquet test with ELISA for the diagnosis of dengue were included. Data Extraction Two independent authors extracted data using a standardized form. Data Synthesis A total of 16 studies with 28,739 participants were included in the meta-analysis. Pooled sensitivity for dengue diagnosis by TT was 58% (95% Confidence Interval (CI), 43%-71%) and the specificity was 71% (95% CI, 60%-80%). In the subgroup analysis sensitivity for non-severe dengue diagnosis was 55% (95% CI, 52%-59%) and the specificity was 63% (95% CI, 60%-66%), whilst sensitivity for dengue hemorrhagic fever diagnosis was 62% (95% CI, 53%-71%) and the specificity was 60% (95% CI, 48%-70%). Receiver-operator characteristics demonstrated a test accuracy (AUC) of 0.70 (95% CI, 0.66–0.74). Conclusion The tourniquet test is widely used in resource poor settings despite currently available evidence demonstrating only a marginal benefit in making a diagnosis of dengue infection alone. Registration The protocol for this systematic review was registered at PROSPERO: CRD42015020323. PMID:27486661

  17. Guidance markers increase the accuracy of simulated ultrasound-guided vascular access: an observational cohort study in a phantom.

    PubMed

    Thorn, Sofie; Aagaard Hansen, Marlene; Sloth, Erik; Knudsen, Lars

    2017-01-18

    Peripheral ultrasound (US)-guided vascular access is gaining popularity. Though studies have demonstrated that US-guided vascular access has several advantages, the procedure is challenging to even the most experienced operator. The aim of this observational cohort study was to investigate whether adding guidance markers on a US system would increase the accuracy of US-guided needle tip placement compared to no guidance markers. A total of 18 physicians and 12 nurses familiar with US-guided vascular access volunteered to participate. Two identical US systems were used. System A was as manufactured. System B included three guide markers drawn on the transducer and screen. The participants performed six needle insertions in a gelatin phantom with three embedded targets. Participants first used US system A and then US system B. The primary endpoint was the horizontal distance between needle tip and target. The secondary endpoint was the participants' subjective rating of the advantage of the guidance markers, measured on a Likert scale. Guidance markers on the US system significantly increased the accuracy of needle placement on all three targets individually (p = 0.00) and on overall placement (inter-quartile range 3.21 mm vs. 0.49 mm, p = 0.00). In addition, the use of guidance markers eliminated the difference in accuracy between physicians and nurses. All participants evaluated the guidance markers to be helpful during the needle insertions. Adding guidance markers to the US system significantly increased the accuracy of needle placement in the horizontal plane during simulated US-guided vascular access using a phantom.

  18. Corner-corrected diagonal-norm summation-by-parts operators for the first derivative with increased order of accuracy

    NASA Astrophysics Data System (ADS)

    Del Rey Fernández, David C.; Boom, Pieter D.; Zingg, David W.

    2017-02-01

    Combined with simultaneous approximation terms, summation-by-parts (SBP) operators offer a versatile and efficient methodology that leads to consistent, conservative, and provably stable discretizations. However, diagonal-norm operators with a repeating interior-point operator that have thus far been constructed suffer from a loss of accuracy. While these operators are of degree 2p in the interior, at a number of nodes near the boundaries they are of degree p, and therefore of global degree p, where the global degree is the highest degree of monomial for which the operators are exact at all nodes. This implies that for hyperbolic problems and operators of degree greater than unity they lead to solutions with a global order of accuracy lower than the degree of the interior-point operator. In this paper, we develop a procedure to construct diagonal-norm first-derivative SBP operators that are of degree 2p at all nodes and therefore can lead to solutions of hyperbolic problems of order 2p + 1. This is accomplished by adding nonzero entries in the upper-right and lower-left corners of SBP operator matrices with a repeating interior-point operator. This modification necessitates treating these new operators as elements, where mesh refinement is accomplished by increasing the number of elements in the mesh rather than increasing the number of nodes. The significant improvements in accuracy of this new family, for the same repeating interior-point operator, are demonstrated in the context of the linear convection equation.
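
    As a point of reference for the accuracy loss described above, the sketch below constructs the classical second-order diagonal-norm SBP first-derivative operator (not the corner-corrected operators proposed in the paper) and checks both the SBP property and the drop from interior degree 2p = 2 to boundary degree p = 1. The node count, spacing, and test polynomial are illustrative assumptions.

```python
import numpy as np

def classical_sbp_first_derivative(n, h):
    """Classical second-order diagonal-norm SBP operator D = H^{-1} Q
    with interior degree 2p = 2 and boundary degree p = 1."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0                # diagonal norm (quadrature) matrix
    Q = np.zeros((n, n))
    for i in range(1, n - 1):                    # interior: central-difference stencil
        Q[i, i - 1], Q[i, i + 1] = -0.5, 0.5
    Q[0, 0], Q[0, 1] = -0.5, 0.5                 # boundary closures
    Q[-1, -2], Q[-1, -1] = -0.5, 0.5
    D = np.linalg.solve(H, Q)
    return D, H, Q

n, h = 21, 0.05
x = np.arange(n) * h
D, H, Q = classical_sbp_first_derivative(n, h)

# SBP property: Q + Q^T = B = diag(-1, 0, ..., 0, 1)
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)

# Degree check: exact for x^2 at interior nodes, but only for x^1 at the boundary,
# illustrating the accuracy loss that corner corrections aim to remove.
print(np.max(np.abs((D @ x**2 - 2 * x)[1:-1])))  # ~ machine precision in the interior
print(np.abs((D @ x**2 - 2 * x)[0]))             # O(h) error at the boundary node
```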

  19. Spatial and temporal analysis on the distribution of active radio-frequency identification (RFID) tracking accuracy with the Kriging method.

    PubMed

    Liu, Xin; Shannon, Jeremy; Voun, Howard; Truijens, Martijn; Chi, Hung-Lin; Wang, Xiangyu

    2014-10-29

    Radio frequency identification (RFID) technology has already been applied in a number of areas to facilitate the tracking process. However, the insufficient tracking accuracy of RFID is one of the problems that impedes its wider application. Previous studies focus on examining RFID accuracy only at discrete points, leaving the tracking accuracy of the areas between the observed points unknown. In this study, spatial and temporal analysis is applied to interpolate the continuous distribution of RFID tracking accuracy based on the Kriging method. An implementation trial has been conducted in the loading and docking area in front of a warehouse to validate this approach. The results show that the weak signal area can be easily identified by the approach developed in the study. The optimum distance between two RFID readers and the effect of the sudden removal of readers are also presented by analysing the spatial and temporal variation of RFID tracking accuracy. This study reveals the correlation between the testing time and the stability of RFID tracking accuracy. Experimental results show that the proposed approach can be used to assist the RFID system setup process to increase tracking accuracy.
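
    The abstract does not specify the variogram model or the layout of the test points, so the following is only a minimal ordinary-kriging sketch with an assumed exponential covariance and toy coordinates; it illustrates how scattered accuracy readings could be interpolated onto a grid covering the areas between the observed points.

```python
import numpy as np

def exp_cov(d, sill=1.0, rng=5.0):
    """Exponential covariance model (an assumption; the paper's variogram is not stated)."""
    return sill * np.exp(-d / rng)

def ordinary_kriging(obs_xy, obs_z, pred_xy, cov=exp_cov):
    """Interpolate a field (e.g., RFID read accuracy) at pred_xy from scattered observations."""
    n = len(obs_xy)
    d_obs = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    # Augmented ordinary-kriging system enforcing that the weights sum to one.
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = cov(d_obs)
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    preds = np.empty(len(pred_xy))
    for k, p in enumerate(pred_xy):
        d0 = np.linalg.norm(obs_xy - p, axis=1)
        rhs = np.append(cov(d0), 1.0)
        w = np.linalg.solve(A, rhs)[:n]
        preds[k] = w @ obs_z
    return preds

# Toy usage: read accuracy (%) measured at a few test points in a 10 m x 10 m area.
obs_xy = np.array([[1.0, 1.0], [8.0, 2.0], [5.0, 5.0], [2.0, 8.0], [9.0, 9.0]])
obs_z = np.array([92.0, 75.0, 88.0, 80.0, 70.0])
grid = np.array([[x, y] for x in np.linspace(0, 10, 11) for y in np.linspace(0, 10, 11)])
print(ordinary_kriging(obs_xy, obs_z, grid).reshape(11, 11).round(1))
```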

  20. Spatial and Temporal Analysis on the Distribution of Active Radio-Frequency Identification (RFID) Tracking Accuracy with the Kriging Method

    PubMed Central

    Liu, Xin; Shannon, Jeremy; Voun, Howard; Truijens, Martijn; Chi, Hung-Lin; Wang, Xiangyu

    2014-01-01

    Radio frequency identification (RFID) technology has already been applied in a number of areas to facilitate the tracking process. However, the insufficient tracking accuracy of RFID is one of the problems that impedes its wider application. Previous studies focus on examining RFID accuracy only at discrete points, leaving the tracking accuracy of the areas between the observed points unknown. In this study, spatial and temporal analysis is applied to interpolate the continuous distribution of RFID tracking accuracy based on the Kriging method. An implementation trial has been conducted in the loading and docking area in front of a warehouse to validate this approach. The results show that the weak signal area can be easily identified by the approach developed in the study. The optimum distance between two RFID readers and the effect of the sudden removal of readers are also presented by analysing the spatial and temporal variation of RFID tracking accuracy. This study reveals the correlation between the testing time and the stability of RFID tracking accuracy. Experimental results show that the proposed approach can be used to assist the RFID system setup process to increase tracking accuracy. PMID:25356648

  1. Increased ephemeris accuracy using attitude-dependent aerodynamic force coefficients for inertially stabilized spacecraft

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Baker, David F.

    1991-01-01

    The FREEMAC program, which is used to generate the aerodynamic coefficients, is described, along with the associated routines that allow the results to be used in other software. These capabilities are applied in two numerical examples to the short-term orbit prediction of the Gamma Ray Observatory (GRO) and Hubble Space Telescope (HST) spacecraft. Predictions using attitude-dependent aerodynamic coefficients were made on a modified version of the PC-based Ephemeris Generation Program (EPHGEN) and were compared to definitive orbit solutions obtained from actual tracking data. The numerical results show improvement in the predicted semi-major axis and along-track positions that would seem to be worth the added computational effort. Finally, other orbit and attitude analysis applications are noted that could profit from using FREEMAC-calculated aerodynamic coefficients, including orbital lifetime studies, orbit determination methods, attitude dynamics simulators, and spacecraft control system component sizing.

  2. Tissue probability map constrained CLASSIC for increased accuracy and robustness in serial image segmentation

    NASA Astrophysics Data System (ADS)

    Xue, Zhong; Shen, Dinggang; Wong, Stephen T. C.

    2009-02-01

    Traditional fuzzy clustering algorithms have been successfully applied in MR image segmentation for quantitative morphological analysis. However, the clustering results might be biased due to the variability of tissue intensities and anatomical structures. For example, clustering-based algorithms tend to over-segment white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation for longitudinal study of human brains. The tissue probability maps consist of segmentation priors obtained from a population and reflect the probability of different tissue types. More accurate image segmentation can be achieved by using these segmentation priors in the clustering algorithm. Experimental results of both simulated longitudinal MR brain data and the Alzheimer's Disease Neuroimaging Initiative (ADNI) data using the new serial image segmentation algorithm in the framework of CLASSIC show more accurate and robust longitudinal measures.

  3. Bayesian approach increases accuracy when selecting cowpea genotypes with high adaptability and phenotypic stability.

    PubMed

    Barroso, L M A; Teodoro, P E; Nascimento, M; Torres, F E; Dos Santos, A; Corrêa, A M; Sagrilo, E; Corrêa, C C G; Silva, F A; Ceccon, G

    2016-03-11

    This study aimed to verify that a Bayesian approach could be used for the selection of upright cowpea genotypes with high adaptability and phenotypic stability, and the study also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 17 upright cowpea genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian approach was effective for selection of upright cowpea genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions.
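
    The study's actual model is a Bayesian fit of the Eberhart and Russell regression, which is not reproduced here. The sketch below only illustrates the kind of comparison the abstract describes, computing a Bayes factor between an informative and a minimally informative (high-variance) normal prior on a genotype mean via closed-form marginal likelihoods; all yields, variances, and prior settings are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_marginal_likelihood(y, sigma2, prior_mean, prior_var):
    """log m(y) for y_i ~ N(mu, sigma2) with mu ~ N(prior_mean, prior_var):
    marginally, y ~ N(prior_mean * 1, sigma2 * I + prior_var * J)."""
    n = len(y)
    cov = sigma2 * np.eye(n) + prior_var * np.ones((n, n))
    return multivariate_normal.logpdf(y, mean=np.full(n, prior_mean), cov=cov)

# Hypothetical plot-level grain yields (kg/ha) for one genotype; values are illustrative only.
y = np.array([1180.0, 1260.0, 1145.0, 1230.0, 1200.0, 1275.0])
sigma2 = 90.0**2                                   # assumed within-trial variance

# Informative prior (e.g., centred on a meta-analysis estimate) vs. minimally informative prior.
log_m_inform = log_marginal_likelihood(y, sigma2, prior_mean=1200.0, prior_var=50.0**2)
log_m_vague = log_marginal_likelihood(y, sigma2, prior_mean=1200.0, prior_var=1e6)

bayes_factor = np.exp(log_m_inform - log_m_vague)
print(f"BF (informative vs. minimally informative) = {bayes_factor:.1f}")
```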

  4. Medication Harmony: A Framework to Save Time, Improve Accuracy and Increase Patient Activation.

    PubMed

    Pandolfe, Frank; Crotty, Bradley H; Safran, Charles

    2016-01-01

    Incompletely reconciled medication lists contribute to prescribing errors and adverse drug events. Providers expend time and effort at every point of patient contact attempting to curate a best possible medication list, and yet often the list is incomplete or inaccurate. We propose a framework that builds upon the existing infrastructure of a health information exchange (HIE), centralizes data and encourages patient activation. The solution is a constantly accessible, singular, patient-adjudicated medication list that incorporates useful information and features into the list itself. We aim to decrease medication errors across transitions of care, increase awareness of potential drug-drug interactions, improve patient knowledge and self-efficacy regarding medications, decrease polypharmacy, improve prescribing safety and ultimately decrease cost to the health-care system.

  5. Medication Harmony: A Framework to Save Time, Improve Accuracy and Increase Patient Activation

    PubMed Central

    Pandolfe, Frank; Crotty, Bradley H; Safran, Charles

    2016-01-01

    Incompletely reconciled medication lists contribute to prescribing errors and adverse drug events. Providers expend time and effort at every point of patient contact attempting to curate a best possible medication list, and yet often the list is incomplete or inaccurate. We propose a framework that builds upon the existing infrastructure of a health information exchange (HIE), centralizes data and encourages patient activation. The solution is a constantly accessible, singular, patient-adjudicated medication list that incorporates useful information and features into the list itself. We aim to decrease medication errors across transitions of care, increase awareness of potential drug-drug interactions, improve patient knowledge and self-efficacy regarding medications, decrease polypharmacy, improve prescribing safety and ultimately decrease cost to the health-care system. PMID:28269955

  6. Increase of Readability and Accuracy of 3d Models Using Fusion of Close Range Photogrammetry and Laser Scanning

    NASA Astrophysics Data System (ADS)

    Gašparović, M.; Malarić, I.

    2012-07-01

    The development of laser scanning technology has opened a new page in geodesy and enabled an entirely new way of presenting data. Products obtained by laser scanning are used in many sciences, including archaeology. It should be noted that 3D models of archaeological artefacts obtained by laser scanning are fully measurable, recorded at 1:1 scale and have high accuracy. On the other hand, the texture and RGB values of the object surface obtained by a laser scanner have lower resolution and poorer radiometric characteristics than textures captured with a digital camera. The goal of the research presented in this paper is to increase the accuracy and readability of the 3D model by using textures obtained with a digital camera. Laser scanning was performed with a high-accuracy triangulation scanner, the Vivid 9i (Konica Minolta), while a Nikon D90 digital camera with a fixed 20 mm focal length lens was used for photogrammetric recording. It is important to stress that the a posteriori accuracy of the global registration of point clouds, expressed as a standard deviation, was ± 0.136 mm, while the average distance was only ± 0.080 mm. The research has also shown that high-quality projected textures increase the readability of the model. Recording archaeological artefacts and producing their photorealistic 3D models greatly contributes to archaeology as a science and accelerates the processing and reconstruction of findings. It also allows the presentation of findings to the general public, not just to experts.

  7. Increasing nursing students' understanding and accuracy with medical dose calculations: A collaborative approach.

    PubMed

    Mackie, Jane E; Bruce, Catherine D

    2016-05-01

    Accurate calculation of medication dosages can be challenging for nursing students. Specific interventions related to types of errors made by nursing students may improve the learning of this important skill. The objective of this study was to determine areas of challenge for students in performing medication dosage calculations in order to design interventions to improve this skill. Strengths and weaknesses in the teaching and learning of medication dosage calculations were assessed. These data were used to create online interventions which were then measured for the impact on student ability to perform medication dosage calculations. The setting of the study is one university in Canada. The qualitative research participants were 8 nursing students from years 1-3 and 8 faculty members. Quantitative results are based on test data from the same second year clinical course during the academic years 2012 and 2013. Students and faculty participated in one-to-one interviews; responses were recorded and coded for themes. Tests were implemented and scored, then data were assessed to classify the types and number of errors. Students identified conceptual understanding deficits, anxiety, low self-efficacy, and numeracy skills as primary challenges in medication dosage calculations. Faculty identified long division as a particular content challenge, and a lack of online resources for students to practice calculations. Lessons and online resources designed as an intervention to target mathematical concepts and skills led to improved results and increases in overall pass rates for second year students for medication dosage calculation tests. This study suggests that with concerted effort and a multi-modal approach to supporting nursing students, their abilities to calculate dosages can be improved. The positive results in this study also point to the promise of cross-discipline collaborations between nursing and education. Copyright © 2016 Elsevier Ltd. All rights reserved.
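
    The abstract describes teaching interventions rather than a specific formula, but as a purely illustrative example of the kind of calculation involved (not taken from the study), a typical liquid-dose computation looks like this:

```python
def dose_volume_ml(ordered_dose_mg, stock_strength_mg, stock_volume_ml):
    """Volume to administer: (ordered dose / stock strength) x stock volume."""
    return ordered_dose_mg / stock_strength_mg * stock_volume_ml

# e.g., 150 mg ordered, stock is 250 mg per 5 mL  ->  3.0 mL
print(dose_volume_ml(150, 250, 5))
```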

  8. Increased accuracy in heparin and protamine administration decreases bleeding: a pilot study.

    PubMed

    Runge, Marx; Møller, Christian H; Steinbrüchel, Daniel A

    2009-03-01

    protamine used during CPB. Individual patient managed anticoagulation during cardiac surgery using dose/response curve techniques based on in vitro analysis of heparin and protamine seems to reduce bleeding.

  9. Increased Accuracy in Heparin and Protamine Administration Decreases Bleeding: A Pilot Study

    PubMed Central

    Runge, Marx; Møller, Christian H.; Steinbrüchel, Daniel A.

    2009-01-01

    heparin and initial doses of protamine used during CPB. Individual patient managed anticoagulation during cardiac surgery using dose/response curve techniques based on in vitro analysis of heparin and protamine seems to reduce bleeding. PMID:19361026

  10. Finite element analysis accuracy of the GTC commissioning instrument structure

    NASA Astrophysics Data System (ADS)

    Farah, Alejandro; Godoy, Javier; Velazquez, F.; Espejo, Carlos; Cuevas, Salvador; Bringas, Vicente; Manzo, A.; del Llano, L.; Sanchez, J. L.; Chavoya, Armando; Devaney, Nicholas; Castro, Javier; Cavaller, Luis

    2003-02-01

    Under a contract with GRANTECAN, the Commissioning Instrument (CI) is a project developed by a team of Mexican scientists and engineers from the Instrumentation Department of the Astronomy Institute at UNAM and the CIDESI Engineering Center. The CI will verify the Gran Telescopio Canarias (GTC) performance during the commissioning phase between First Light and Day One. The design phase is now completed and the project is currently in the manufacturing phase. The CI main goal is to measure the telescope image quality. To obtain a stable high resolution image, the mechanical structures should be as rigid as possible. This paper describes the steps of the conceptual design and the Finite Element Analysis (FEA) for the CI mechanical structures. A variety of models were proposed. The FEA was useful to evaluate the displacements, shape modes, weight, and thermal expansions of each model. A set of indicators was compared using decision matrices. The best-performing models were subjected to a re-optimization stage. By applying the same decision method, a CI Structure Model was proposed. The FEA results complied with all the instrument's specifications. Displacement values and vibration frequencies are reported.

  11. Knowledge and accuracy of perceived personal risk in underserved women who are at increased risk of breast cancer.

    PubMed

    Cyrus-David, Mfon S

    2010-12-01

    Little is known about the state of knowledge and personal risk perception among underserved or racial minority women at increased risk of breast cancer (BC) who may be eligible for chemoprevention. The BC knowledge and accuracy of perceived personal risk were assessed in a cross-sectional study population of such women residing in the greater Houston, Texas, area. The majority had below-average knowledge scores and perceived their risk inaccurately. Less educated women were also less knowledgeable. Educational interventions targeted towards this population would enhance their knowledge of BC and empower them to make informed decisions about BC chemoprevention.

  12. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    PubMed Central

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-01-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505
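
    The abstract does not give the texture parameters or statistics used, so the following is only a schematic illustration of how accuracy (systematic offset of replica measurements from the original surface) can be separated from precision (spread across replicas); the roughness values are invented for the example.

```python
import numpy as np

def accuracy_and_precision(original_value, replica_values):
    """Accuracy: mean deviation of replica measurements from the original surface value.
    Precision: spread (sample standard deviation) of the replica measurements."""
    replica_values = np.asarray(replica_values, dtype=float)
    accuracy = np.mean(replica_values - original_value)
    precision = np.std(replica_values, ddof=1)
    return accuracy, precision

# Hypothetical Sq (rms roughness, um) for one surface and five replicas of one medium.
acc, prec = accuracy_and_precision(0.82, [0.84, 0.85, 0.83, 0.86, 0.84])
print(f"mean offset = {acc:+.3f} um, replica spread = {prec:.3f} um")
```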

  13. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    NASA Astrophysics Data System (ADS)

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-05-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis.

  14. Navigation accuracy analysis for the Halley flyby phase of a dual comet mission using ion drive

    NASA Technical Reports Server (NTRS)

    Wood, L. J.; Hast, S. L.

    1980-01-01

    A dual comet (Halley Flyby/Tempel 2 Rendezvous) mission, making use of the solar electric propulsion system, is under consideration for a 1985 launch. This paper presents navigation accuracy analysis results for the Halley flyby phase of this mission. Orbit determination and guidance accuracies are presented for the baseline navigation strategy, along with the results of a number of sensitivity studies involving parameters such as data frequencies, data accuracies, ion drive thrust vector errors, comet ephemeris uncertainties, time lags associated with data processing and command sequence generation, probe release time, and navigation coast arc duration.

  15. Surface Accuracy Analysis of Single Panels for the Shanghai 65-M Radio Telescope

    NASA Astrophysics Data System (ADS)

    Fu, Li; Liu, Guoxi; Jin, Chao; Yan, Feng; An, Tao; Shen, Zhiqiang

    We present surface accuracy measurements of 5 single panels of the Shanghai 65-meter radio telescope, employing a coordinate measuring machine and a laser tracker. The measurement data obtained from the two instruments were analyzed with common point transformation and CAD surface fitting techniques, respectively. The derived rms uncertainties of panel accuracy from the two methods are consistent with each other, and both match the design specification. Simulations of the effects of manufacturing error, gravity, temperature and wind on the panel surface accuracy with the finite element analysis method suggest that the first two factors are the primary sources of the accuracy uncertainty. The panel deformation under concentrated load was analyzed through finite element analysis and experiment, and the comparison error is 5.6%. No plastic deformation occurs when a person weighing less than 70 kg installs or repairs the panel.

  16. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    PubMed Central

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  17. Diagnostic accuracy and tolerability of contrast enhanced CT colonoscopy in symptomatic patients with increased risk for colorectal cancer.

    PubMed

    Ozsunar, Yelda; Coskun, Gülten; Delibaş, Naciye; Uz, Burcin; Yükselen, Vahit

    2009-09-01

    We compared the accuracy and tolerability of intravenous contrast-enhanced spiral computed tomography colonography (CTC) and optical colonoscopy (OC) for the detection of colorectal neoplasia in symptomatic patients. A prospective study was performed in 48 symptomatic patients with increased risk for colorectal cancer. Spiral CTC was performed in supine and prone positions after colonic cleansing. The axial, 2D MPR and virtual endoluminal views were analyzed. Results of spiral CTC were compared with OC, which was performed within 15 days. Patients were asked to complete a psychometric tolerance test for both CTC and colonoscopy after the procedures. Ten lesions in 9 of 48 patients were found on CTC and confirmed with OC. Two masses and eight polyps were identified, comprising 1 tubulovillous adenoma, 1 tubular adenoma, 2 villous adenomas, 4 adenomatous polyps and 4 adenocarcinomas. Lesion prevalence was 21%. Sensitivity, specificity, accuracy, and positive and negative predictive values were 100%, 87%, 89%, 67% and 100%, respectively. The psychometric tolerance test showed that CTC was significantly more comfortable than OC (p=0.00). CTC was the preferred method in 37% of patients, while OC was preferred in 6%. In both techniques, the most unpleasant part was bowel cleansing. Contrast-enhanced CTC is a highly accurate method for detecting colorectal lesions. Since the technique was found to be more comfortable and less time consuming compared to OC, it may be preferable in the management of symptomatic patients with increased risk for colorectal cancer.

  18. Analysis of proctor marking accuracy in a computer-aided personalized system of instruction course.

    PubMed

    Martin, Toby L; Pear, Joseph J; Martin, Garry L

    2002-01-01

    In a computer-aided version of Keller's personalized system of instruction (CAPSI), students within a course were assigned by a computer to be proctors for tests. Archived data from a CAPSI-taught behavior modification course were analyzed to assess proctor accuracy in marking answers as correct or incorrect. Overall accuracy was increased by having each test marked independently by two proctors, and was higher on incorrect answers when the degree of incorrectness was larger.
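
    As a rough illustration of why independent double marking raises overall accuracy (assuming, beyond anything stated in the study, that the two proctors err independently), the chance that at least one of two markers catches a marking error can be computed as follows:

```python
def detection_probability(p_single, n_markers=2):
    """Probability that at least one of n independent markers flags an incorrectly
    marked answer, assuming each does so with probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_markers

print(detection_probability(0.80))   # 0.96 with two independent proctors
```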

  19. Accuracy Analysis of a Low-Cost Platform for Positioning and Navigation

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Kuntzsch, C.; Schulze, M. J.; Eggert, D.; Sester, M.

    2012-07-01

    This paper presents an accuracy analysis of a platform based on low-cost components for landmark-based navigation intended for research and teaching purposes. The proposed platform includes a LEGO MINDSTORMS NXT 2.0 kit and an Android-based smartphone, as well as a compact Hokuyo URG-04LX laser scanner. The robot is used in a small indoor environment, where GNSS is not available. Therefore, a landmark map was produced in advance, with the landmark positions provided to the robot. All steps of the procedure to set up the platform are shown. The main focus of this paper is the achievable positioning accuracy, which was analyzed in this type of scenario as a function of the accuracy of the reference landmarks and the directional and distance-measuring accuracy of the laser scanner. Several experiments were carried out, demonstrating the practically achievable positioning accuracy. To evaluate the accuracy, ground truth was acquired using a total station. These results are compared to the theoretically achievable accuracies and the laser scanner's characteristics.

  20. Surface accuracy analysis and mathematical modeling of deployable large aperture elastic antenna reflectors

    NASA Astrophysics Data System (ADS)

    Coleman, Michael J.

    One class of deployable large aperture antenna consists of thin light-weight parabolic reflectors. A reflector of this type is a deployable structure that consists of an inflatable elastic membrane that is supported about its perimeter by a set of elastic tendons and is subjected to a constant hydrostatic pressure. A design may not hold the parabolic shape to within a desired tolerance due to an elastic deformation of the surface, particularly near the rim. We can compute the equilibrium configuration of the reflector system using an optimization-based solution procedure that calculates the total system energy and determines a configuration of minimum energy. Analysis of the equilibrium configuration reveals the behavior of the reflector shape under various loading conditions. The pressure, film strain energy, tendon strain energy, and gravitational energy are all considered in this analysis. The surface accuracy of the antenna reflector is measured by an RMS calculation while the reflector phase error component of the efficiency is determined by computing the power density at boresight. Our error computation methods are tailored for the faceted surface of our model and they are more accurate for this particular problem than the commonly applied Ruze Equation. Previous analytical work on parabolic antennas focused on axisymmetric geometries and loads. Symmetric equilibria are not assumed in our analysis. In addition, this dissertation contains two principal original findings: (1) the typical supporting tendon system tends to flatten a parabolic reflector near its edge. We find that surface accuracy can be significantly improved by fixing the edge of the inflated reflector to a rigid structure; (2) for large membranes assembled from flat sheets of thin material, we demonstrate that the surface accuracy of the resulting inflated membrane reflector can be improved by altering the cutting pattern of the flat components. Our findings demonstrate that the proper choice

  1. Long-term deflections of reinforced concrete elements: accuracy analysis of predictions by different methods

    NASA Astrophysics Data System (ADS)

    Gribniak, Viktor; Bacinskas, Darius; Kacianauskas, Rimantas; Kaklauskas, Gintaris; Torres, Lluis

    2013-08-01

    Long-term deflection response of reinforced concrete flexural members is influenced by the interaction of complex physical phenomena, such as concrete creep, shrinkage and cracking, which makes their prediction difficult. A number of approaches are proposed by design codes with different degrees of simplification and accuracy. This paper statistically investigates accuracy of long-term deflection predictions made by some of the most widely used design codes (Eurocode 2, ACI 318, ACI 435, and the new Russian code SP 52-101) and a numerical technique proposed by the authors. The accuracy is analyzed using test data of 322 reinforced concrete members from 27 test programs reported in the literature. The predictions of each technique are discussed, and a comparative analysis is made showing the influence of different parameters, such as sustained loading duration, compressive strength of concrete, loading intensity and reinforcement ratio, on the prediction accuracy.

  2. Accuracy and precision of regional multiharmonic Fourier analysis of gated blood-pool images.

    PubMed

    Machac, J; Horowitz, S F; Broder, D; Goldsmith, S J

    1984-12-01

    In order to estimate the precision and accuracy of parameters derived from segmental multiharmonic Fourier analysis of gated blood-pool images, a Monte Carlo computer noise simulation was tested on five sample regional time-activity curves. The first three Fourier harmonics were retained and the precision and accuracy of parameters of ventricular function were calculated, varying the ejection fraction, segment size, and framing rate. Precision improved with higher ejection fraction, higher counts per frame, or higher framing rate. There was no change in precision as the framing rate changed at fixed total counts. Accuracy changed little with changing framing rate. Thus, for segmental analysis there is no advantage to using a higher framing rate. Regions five or more pixels in size are recommended for reliable results. This study provides useful information for the optimization of acquisition and processing conditions for regional gated blood-pool analysis.
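
    The study's exact processing chain is not given in the abstract, but the core idea of retaining the first three Fourier harmonics of a regional time-activity curve can be sketched as a simple least-squares fit; the frame count, counts, and noise level below are illustrative assumptions.

```python
import numpy as np

def fourier_fit(counts, n_harmonics=3):
    """Least-squares fit of a mean plus the first n_harmonics cosine/sine terms to one
    cardiac cycle sampled at equally spaced frames."""
    n = len(counts)
    t = np.arange(n) / n                               # fraction of the cycle
    cols = [np.ones(n)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
    return coef, A @ coef                              # coefficients and smoothed curve

# Hypothetical 24-frame regional time-activity curve (counts), illustrative only.
frames = np.arange(24)
counts = 1000 - 350 * np.sin(2 * np.pi * frames / 24) \
         + 40 * np.random.default_rng(0).normal(size=24)
coef, fitted = fourier_fit(counts, n_harmonics=3)

# Regional "ejection fraction" estimated from the smoothed (fitted) curve.
ef = (fitted.max() - fitted.min()) / fitted.max()
print(f"fitted regional EF ~ {ef:.2f}")
```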

  3. Acousto-optical pulsar processor frequency scale calibration for increase accuracy measurement of time of arrival radioemission impulses

    NASA Astrophysics Data System (ADS)

    Esepkina, Nelli A.; Lavrov, Aleksandr P.; Molodyakov, Sergey A.

    2006-04-01

    The acousto-optical processor (AOP) is based on an acousto-optical spectrum analyzer with a CCD photodetector operating in a special pipeline mode (shift-and-add mode), which allows spectral components of the input signal to be added with a controlled time delay directly in the CCD photodetector. The proposed AOP was successfully used on the RT-64 radio telescope (Kalyazin Radio Astronomy Observatory, FIAN) for the observation of pulsars at 1.4 GHz in a 45 MHz bandwidth. Calibration of the AOP frequency scale increases the accuracy of measurement of the time of arrival of radio emission pulses. Experimental results on the operation of the AOP on RT-64 and radio emission pulse profiles for the pulsar PSR 1937+21 are presented.

  4. Geolocation and Pointing Accuracy Analysis for the WindSat Sensor

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.

    2006-01-01

    Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated resulting in pointing knowledge and geolocation accuracy that met all design requirements.

  5. Accuracy of mucocutaneous leishmaniasis diagnosis using polymerase chain reaction: systematic literature review and meta-analysis

    PubMed Central

    Gomes, Ciro Martins; Mazin, Suleimy Cristina; dos Santos, Elisa Raphael; Cesetti, Mariana Vicente; Bächtold, Guilherme Albergaria Brízida; Cordeiro, João Henrique de Freitas; Theodoro, Fabrício Claudino Estrela Terra; Damasco, Fabiana dos Santos; Carranza, Sebastián Andrés Vernal; Santos, Adriana de Oliveira; Roselino, Ana Maria; Sampaio, Raimunda Nonata Ribeiro

    2015-01-01

    The diagnosis of mucocutaneous leishmaniasis (MCL) is hampered by the absence of a gold standard. An accurate diagnosis is essential because of the high toxicity of the medications for the disease. This study aimed to assess the ability of polymerase chain reaction (PCR) to identify MCL and to compare these results with clinical research recently published by the authors. A systematic literature review based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA Statement was performed using comprehensive search criteria and communication with the authors. A meta-analysis considering the estimates of the univariate and bivariate models was performed. Specificity near 100% was common among the papers. The primary reason for accuracy differences was sensitivity. The meta-analysis, which was only possible for PCR samples of lesion fragments, revealed a sensitivity of 71% [95% confidence interval (CI) = 0.59; 0.81] and a specificity of 93% (95% CI = 0.83; 0.98) in the bivariate model. The search for measures that could increase the sensitivity of PCR should be encouraged. The quality of the collected material and the optimisation of the amplification of genetic material should be prioritised. PMID:25946238

  6. Modifications to the accuracy assessment analysis routine SPATL to produce an output file

    NASA Technical Reports Server (NTRS)

    Carnes, J. G.

    1978-01-01

    The SPATL is an analysis program in the Accuracy Assessment Software System which makes comparisons between ground truth information and dot labeling for an individual segment. In order to facilitate the aggregation of this information, SPATL was modified to produce a disk output file containing the necessary information about each segment.

  7. Modifications to the accuracy assessment analysis routine MLTCRP to produce an output file

    NASA Technical Reports Server (NTRS)

    Carnes, J. G.

    1978-01-01

    Modifications are described that were made to the analysis program MLTCRP in the accuracy assessment software system to produce a disk output file. The output files produced by this modified program are used to aggregate data for regions greater than a single segment.

  8. Open-source electronic data capture system offered increased accuracy and cost-effectiveness compared with paper methods in Africa.

    PubMed

    Dillon, David G; Pirie, Fraser; Rice, Stephen; Pomilla, Cristina; Sandhu, Manjinder S; Motala, Ayesha A; Young, Elizabeth H

    2014-12-01

    Existing electronic data capture options are often financially unfeasible in resource-poor settings or difficult to support technically in the field. To help facilitate large-scale multicenter studies in sub-Saharan Africa, the African Partnership for Chronic Disease Research (APCDR) has developed an open-source electronic questionnaire (EQ). To assess its relative validity, we compared the EQ against traditional pen-and-paper methods using 200 randomized interviews conducted in an ongoing type 2 diabetes case-control study in South Africa. During its 3-month validation, the EQ had a lower frequency of errors (EQ, 0.17 errors per 100 questions; paper, 0.73 errors per 100 questions; P-value ≤0.001), and a lower monetary cost per correctly entered question, compared with the pen-and-paper method. We found no marked difference in the average duration of the interview between methods (EQ, 5.4 minutes; paper, 5.6 minutes). This validation study suggests that the EQ may offer increased accuracy, similar interview duration, and increased cost-effectiveness compared with paper-based data collection methods. The APCDR EQ software is freely available (https://github.com/apcdr/questionnaire). Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Diagnostic accuracy of refractometry for assessing bovine colostrum quality: A systematic review and meta-analysis.

    PubMed

    Buczinski, S; Vandeweerd, J M

    2016-09-01

    Provision of good quality colostrum [i.e., immunoglobulin G (IgG) concentration ≥50 g/L] is the first step toward ensuring proper passive transfer of immunity for young calves. Precise quantification of colostrum IgG levels cannot be easily performed on the farm. Assessment of the refractive index using a Brix scale with a refractometer has been described as being highly correlated with IgG concentration in colostrum. The aim of this study was to perform a systematic review of the diagnostic accuracy of Brix refractometry to diagnose good quality colostrum. From 101 references initially obtained, 11 were included in the systematic review and meta-analysis, representing 4,251 colostrum samples. The prevalence of good colostrum samples with IgG ≥50 g/L varied from 67.3 to 92.3% (median 77.9%). Specific estimates of accuracy [sensitivity (Se) and specificity (Sp)] were obtained for different reported cut-points using a hierarchical summary receiver operating characteristic curve model. For the cut-point of 22% (n=8 studies), Se=80.2% (95% CI: 71.1-87.0%) and Sp=82.6% (71.4-90.0%). Decreasing the cut-point to 18% increased Se [96.1% (91.8-98.2%)] and decreased Sp [54.5% (26.9-79.6%)]. Modeling the effect of these Brix accuracy estimates using a stochastic simulation and Bayes theorem showed that a positive result with the 22% Brix cut-point can be used to diagnose good quality colostrum (posttest probability of good colostrum: 94.3% (90.7-96.9%)). The posttest probability of good colostrum with a Brix value <18% was only 22.7% (12.3-39.2%). Based on this study, the 2 cut-points could be alternatively used to select good quality colostrum (sample with Brix ≥22%) or to discard poor quality colostrum (sample with Brix <18%). When sample results are between these 2 values, colostrum supplementation should be considered. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
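
    The reported posttest probabilities follow from Bayes' theorem applied to the pooled sensitivity, specificity, and pretest prevalence. The sketch below uses the point estimates only; the paper propagated the full uncertainty with a stochastic simulation, so its numbers differ slightly.

```python
def post_test_probability(sensitivity, specificity, pretest, positive=True):
    """Bayes' theorem for a dichotomous test result: probability of disease (here,
    'good colostrum') given a positive or negative test and the pretest probability."""
    if positive:
        tp = sensitivity * pretest
        fp = (1 - specificity) * (1 - pretest)
        return tp / (tp + fp)
    fn = (1 - sensitivity) * pretest
    tn = specificity * (1 - pretest)
    return fn / (fn + tn)

# Point estimates for the 22% Brix cut-point, with the median prevalence of good
# colostrum (77.9%) as the pretest probability.
print(post_test_probability(0.802, 0.826, 0.779))          # ~0.94, close to the reported 94.3%
# Probability the colostrum is still good given a Brix reading below the 18% cut-point.
print(post_test_probability(0.961, 0.545, 0.779, False))   # ~0.20, close to the reported 22.7%
```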

  10. Accuracy analysis of the space shuttle solid rocket motor profile measuring device

    NASA Technical Reports Server (NTRS)

    Estler, W. Tyler

    1989-01-01

    The Profile Measuring Device (PMD) was developed at the George C. Marshall Space Flight Center following the loss of the Space Shuttle Challenger. It is a rotating gauge used to measure the absolute diameters of mating features of redesigned Solid Rocket Motor field joints. Diameter tolerances of these features are typically + or - 0.005 inches, and it is required that the PMD absolute measurement uncertainty be within this tolerance. In this analysis, the absolute accuracy of these measurements was found to be + or - 0.00375 inches, worst case, with a potential accuracy of + or - 0.0021 inches achievable by improved temperature control.

  11. Orbit Determination Accuracy Analysis of the Magnetospheric Multiscale Mission During Perigee Raise

    NASA Technical Reports Server (NTRS)

    Pachura, Daniel A.; Vavrina, Matthew A.; Carpenter, J. R.; Wright, Cinnamon A.

    2014-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) will provide orbit determination and prediction support for the Magnetospheric Multiscale (MMS) mission during the mission's commissioning period. The spacecraft will launch into a highly elliptical Earth orbit in 2015. Starting approximately four days after launch, a series of five large perigee-raising maneuvers will be executed near apogee on a nearly every-other-orbit cadence. This perigee-raise operations concept requires a high-accuracy estimate of the orbital state within one orbit following the maneuver for performance evaluation and a high-accuracy orbit prediction to correctly plan and execute the next maneuver in the sequence. During early mission design, a linear covariance analysis method was used to study orbit determination and prediction accuracy for this perigee-raising campaign. This paper provides a higher fidelity Monte Carlo analysis using the operational COTS extended Kalman filter implementation that was performed to validate the linear covariance analysis estimates and to better characterize orbit determination performance for actively maneuvering spacecraft in a highly elliptical orbit. The study finds that the COTS extended Kalman filter tool converges on accurate definitive orbit solutions quickly, but prediction accuracy through orbits with very low altitude perigees is degraded by the unpredictability of atmospheric density variation.

  12. Orbit Determination Accuracy Analysis of the Magnetospheric Multiscale Mission During Perigee Raise

    NASA Technical Reports Server (NTRS)

    Pachura, Daniel A.; Vavrina, Matthew A.; Carpenter, J. Russell; Wright, Cinnamon A.

    2014-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) will provide orbit determination and prediction support for the Magnetospheric Multiscale (MMS) mission during the mission's commissioning period. The spacecraft will launch into a highly elliptical Earth orbit in 2015. Starting approximately four days after launch, a series of five large perigee-raising maneuvers will be executed near apogee on a nearly every-other-orbit cadence. This perigee-raise operations concept requires a high-accuracy estimate of the orbital state within one orbit following the maneuver for performance evaluation and a high-accuracy orbit prediction to correctly plan and execute the next maneuver in the sequence. During early mission design, a linear covariance analysis method was used to study orbit determination and prediction accuracy for this perigee-raising campaign. This paper provides a higher fidelity Monte Carlo analysis using the operational COTS extended Kalman filter implementation that was performed to validate the linear covariance analysis estimates and to better characterize orbit determination performance for actively maneuvering spacecraft in a highly elliptical orbit. The study finds that the COTS extended Kalman filter tool converges on accurate definitive orbit solutions quickly, but prediction accuracy through orbits with very low altitude perigees is degraded by the unpredictability of atmospheric density variation.

  13. Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    A new computational analysis tool, the downscaling test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second and third order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using downscaling tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to any order scheme.
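
    For context, the observed order of accuracy referred to above is conventionally estimated from error norms on two grids related by a refinement factor; the sketch below uses that standard grid-refinement estimate (not the downscaling test itself) with made-up error values.

```python
import numpy as np

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed order of accuracy from error norms on two grids related by a refinement factor."""
    return np.log(err_coarse / err_fine) / np.log(refinement)

# Hypothetical discretization-error norms from a grid-refinement study.
print(observed_order(3.2e-3, 8.1e-4))   # ~1.98 -> second-order behavior
print(observed_order(3.2e-3, 1.6e-3))   # ~1.00 -> degraded to first order
```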

  14. Accuracy of urinary human papillomavirus testing for presence of cervical HPV: systematic review and meta-analysis

    PubMed Central

    Pathak, Neha; Dodds, Julie; Khan, Khalid

    2014-01-01

    Objective To determine the accuracy of testing for human papillomavirus (HPV) DNA in urine in detecting cervical HPV in sexually active women. Design Systematic review and meta-analysis. Data sources Searches of electronic databases from inception until December 2013, checks of reference lists, manual searches of recent issues of relevant journals, and contact with experts. Eligibility criteria Test accuracy studies in sexually active women that compared detection of urine HPV DNA with detection of cervical HPV DNA. Data extraction and synthesis Data relating to patient characteristics, study context, risk of bias, and test accuracy. 2×2 tables were constructed and synthesised by bivariate mixed effects meta-analysis. Results 16 articles reporting on 14 studies (1443 women) were eligible for meta-analysis. Most used commercial polymerase chain reaction methods on first void urine samples. Urine detection of any HPV had a pooled sensitivity of 87% (95% confidence interval 78% to 92%) and specificity of 94% (95% confidence interval 82% to 98%). Urine detection of high risk HPV had a pooled sensitivity of 77% (68% to 84%) and specificity of 88% (58% to 97%). Urine detection of HPV 16 and 18 had a pooled sensitivity of 73% (56% to 86%) and specificity of 98% (91% to 100%). Metaregression revealed an increase in sensitivity when urine samples were collected as first void compared with random or midstream (P=0.004). Limitations The major limitations of this review are the lack of a strictly uniform method for the detection of HPV in urine and the variation in accuracy between individual studies. Conclusions Testing urine for HPV seems to have good accuracy for the detection of cervical HPV, and testing first void urine samples is more accurate than random or midstream sampling. When cervical HPV detection is considered difficult in particular subgroups, urine testing should be regarded as an acceptable alternative. PMID:25232064

  15. Accuracy of urinary human papillomavirus testing for presence of cervical HPV: systematic review and meta-analysis.

    PubMed

    Pathak, Neha; Dodds, Julie; Zamora, Javier; Khan, Khalid

    2014-09-16

    To determine the accuracy of testing for human papillomavirus (HPV) DNA in urine in detecting cervical HPV in sexually active women. Systematic review and meta-analysis. Searches of electronic databases from inception until December 2013, checks of reference lists, manual searches of recent issues of relevant journals, and contact with experts. Test accuracy studies in sexually active women that compared detection of urine HPV DNA with detection of cervical HPV DNA. Data relating to patient characteristics, study context, risk of bias, and test accuracy. 2 × 2 tables were constructed and synthesised by bivariate mixed effects meta-analysis. 16 articles reporting on 14 studies (1443 women) were eligible for meta-analysis. Most used commercial polymerase chain reaction methods on first void urine samples. Urine detection of any HPV had a pooled sensitivity of 87% (95% confidence interval 78% to 92%) and specificity of 94% (95% confidence interval 82% to 98%). Urine detection of high risk HPV had a pooled sensitivity of 77% (68% to 84%) and specificity of 88% (58% to 97%). Urine detection of HPV 16 and 18 had a pooled sensitivity of 73% (56% to 86%) and specificity of 98% (91% to 100%). Metaregression revealed an increase in sensitivity when urine samples were collected as first void compared with random or midstream (P=0.004). The major limitations of this review are the lack of a strictly uniform method for the detection of HPV in urine and the variation in accuracy between individual studies. Testing urine for HPV seems to have good accuracy for the detection of cervical HPV, and testing first void urine samples is more accurate than random or midstream sampling. When cervical HPV detection is considered difficult in particular subgroups, urine testing should be regarded as an acceptable alternative. © Pathak et al 2014.

  16. Accuracy of shear wave elastography for the diagnosis of prostate cancer: A meta-analysis.

    PubMed

    Sang, Liang; Wang, Xue-Mei; Xu, Dong-Yang; Cai, Yun-Fei

    2017-05-16

    Many studies have established the high diagnostic accuracy of shear wave elastography (SWE) for the detection of prostate cancer (PCa); however, its utility remains a subject of debate. This meta-analysis sought to appraise the overall accuracy of SWE for the detection of PCa. A literature search of the PubMed, Embase, Cochrane Library, Web of Science and CNKI (China National Knowledge Infrastructure) databases was conducted. In all of the included studies, the diagnostic accuracy of SWE was compared with that of histopathology, which was used as a standard. Data were pooled, and the sensitivity, specificity, area under the curve (AUC), positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR) were calculated to estimate the accuracy of SWE. The pooled sensitivity and specificity for the diagnosis of PCa by SWE were 0.844 (95% confidence interval: 0.696-0.927) and 0.860 (0.792-0.908), respectively. The AUC was 0.91 (0.89-0.94), the PLR was 6.017 (3.674-9.853), and the NLR was 0.182 (0.085-0.389). The DOR was 33.069 (10.222-106.982). Thus, SWE exhibited high accuracy for the detection of PCa using histopathology as a diagnostic standard. Moreover, SWE may reduce the number of core biopsies needed.
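
    The reported likelihood ratios and diagnostic odds ratio are related to the pooled sensitivity and specificity by standard identities; plugging in the point estimates reproduces the published values approximately (the exact figures come from the pooled bivariate model rather than this simple arithmetic).

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive/negative likelihood ratios and diagnostic odds ratio from pooled estimates."""
    plr = sensitivity / (1 - specificity)
    nlr = (1 - sensitivity) / specificity
    return plr, nlr, plr / nlr

plr, nlr, dor = likelihood_ratios(0.844, 0.860)
print(f"PLR = {plr:.2f}, NLR = {nlr:.2f}, DOR = {dor:.1f}")   # ~6.0, ~0.18, ~33
```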

  17. A meta-analysis of confidence and judgment accuracy in clinical decision making.

    PubMed

    Miller, Deborah J; Spengler, Elliot S; Spengler, Paul M

    2015-10-01

    The overconfidence bias occurs when clinicians overestimate the accuracy of their clinical judgments. This bias is thought to be robust leading to an almost universal recommendation by clinical judgment scholars for clinicians to temper their confidence in clinical decision making. An extension of the Meta-Analysis of Clinical Judgment (Spengler et al., 2009) project, the authors synthesized over 40 years of research from 36 studies, from 1970 to 2011, in which the confidence ratings of 1,485 clinicians were assessed in relation to the accuracy of their judgments about mental health (e.g., diagnostic decision making, violence risk assessment, prediction of treatment failure) or psychological issues (e.g., personality assessment). Using a random effects model a small but statistically significant effect (r = .15; CI = .06, .24) was found showing that confidence is better calibrated with accuracy than previously assumed. Approximately 50% of the total variance between studies was due to heterogeneity and not to chance. Mixed effects and meta-regression moderator analyses revealed that confidence is calibrated with accuracy least when there are repeated judgments, and more when there are higher base rate problems, when decisions are made with written materials, and for earlier published studies. Sensitivity analyses indicate a bias toward publishing smaller sample studies with smaller or negative confidence-accuracy effects. Implications for clinical judgment research and for counseling psychology training and practice are discussed.

  18. Accuracy Analysis of GNSS Networks Based On Observing-Session Duration in Different Years

    NASA Astrophysics Data System (ADS)

    Hilmi Erkoç, Muharrem; Doǧan, Uǧur; Aydın, Cüneyt

    2017-04-01

    The aim of this study is to investigate the accuracy of GPS (Global Positioning System) positioning. The observations have been analyzed to determine how the accuracy of derived relative positions of GPS stations depends on the baseline length and the duration of the observing session in different years. For this purpose, we selected three days of each year in 2011, 2012 and 2013 from the GPS observations made in the CORS-TR Network in Turkey with 15 stations. The GPS observations were processed in the ITRF 2008 reference frame using the Bernese 5.2 GPS software. The baseline lengths vary between 82 km and 369 km, and session durations vary between 4 h and 24 h. The repeatability of the daily solutions belonging to each year was analyzed carefully to scale the Bernese software cofactor matrices. The root mean square (RMS) values for daily repeatability with respect to the combined 3-day solution are computed. The RMS values are less than 3 mm in the horizontal directions (north and east) and < 8 mm in the vertical direction. The results from the investigation agree with the results derived from previous models at the few-mm level. Moreover, a linear relationship between the observing session duration and the accuracy is observed: accuracy for a station decreases as the distance to the fixed station increases. Keywords: Accuracy, GNSS, Session duration, RMS

  19. Optical system error analysis and calibration method of high-accuracy star trackers.

    PubMed

    Sun, Ting; Xing, Fei; You, Zheng

    2013-04-08

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes, in detail, an approach for the synthetic error analysis of the star tracker that does not require complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can intuitively and systematically build an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method prove to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  20. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes, in detail, an approach for the synthetic error analysis of the star tracker that does not require complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can intuitively and systematically build an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method prove to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  1. Accuracy of Pseudo-Inverse Covariance Learning--A Random Matrix Theory Analysis.

    PubMed

    Hoyle, David C

    2011-07-01

    For many learning problems, estimates of the inverse population covariance are required and often obtained by inverting the sample covariance matrix. Increasingly for modern scientific data sets, the number of sample points is less than the number of features and so the sample covariance is not invertible. In such circumstances, the Moore-Penrose pseudo-inverse sample covariance matrix, constructed from the eigenvectors corresponding to nonzero sample covariance eigenvalues, is often used as an approximation to the inverse population covariance matrix. The reconstruction error of the pseudo-inverse sample covariance matrix in estimating the true inverse covariance can be quantified via the Frobenius norm of the difference between the two. The reconstruction error is dominated by the smallest nonzero sample covariance eigenvalues and diverges as the sample size becomes comparable to the number of features. For high-dimensional data, we use random matrix theory techniques and results to study the reconstruction error for a wide class of population covariance matrices. We also show how bagging and random subspace methods can result in a reduction in the reconstruction error and can be combined to improve the accuracy of classifiers that utilize the pseudo-inverse sample covariance matrix. We test our analysis on both simulated and benchmark data sets.
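
    A small numerical illustration of the quantity studied here, written as a Python sketch under the assumption (for illustration only) of an identity population covariance and more features than samples:

        import numpy as np

        # Frobenius-norm reconstruction error of the pseudo-inverse sample covariance,
        # assuming the true population covariance (and hence its inverse) is the identity.
        rng = np.random.default_rng(0)
        n, p = 50, 200                                    # fewer samples than features
        X = rng.standard_normal((n, p))                   # population covariance = I
        Xc = X - X.mean(axis=0)
        S = Xc.T @ Xc / n                                 # singular sample covariance
        S_pinv = np.linalg.pinv(S)                        # Moore-Penrose pseudo-inverse
        err = np.linalg.norm(np.eye(p) - S_pinv, ord="fro")
        print(err)                                        # reconstruction error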

  2. Understanding the accuracy of parental perceptions of child physical activity: a mixed methods analysis

    PubMed Central

    Kesten, Joanna M.; Jago, Russell; Sebire, Simon J.; Edwards, Mark J.; Pool, Laura; Zahra, Jesmond; Thompson, Janice L.

    2016-01-01

    Background Interventions to increase children’s physical activity (PA) have achieved limited success. This may be attributed to inaccurate parental perceptions of their children’s PA and a lack of recognition of a need to change activity levels. Methods Fifty-three parents participated in semi-structured interviews to determine perceptions of child PA. Perceptions were compared to children’s measured MVPA (classified as meeting or not meeting UK guidelines) to produce three categories: “accurate”, “over-estimate”, “under-estimate”. Deductive content analysis was performed to understand the accuracy of parental perceptions. Results All parents of children meeting the PA guidelines accurately perceived their child’s PA; whilst the majority of parents whose child did not meet the guidelines overestimated their PA. Most parents were unconcerned about their child’s PA level, viewing them as naturally active and willing to be active. Qualitative explanations for perceptions of insufficient activity included children having health problems and preferences for inactive pursuits, and parents having difficulty facilitating PA in poor weather and not always observing their child’s PA level. Social comparisons also influenced parental perceptions. Conclusions Strategies to improve parental awareness of child PA are needed. Perceptions of child PA may be informed by child “busyness”, being unaware of activity levels, and social comparisons. PMID:25872227

  3. Diagnostic Accuracy of Intraoperative Techniques for Margin Assessment in Breast Cancer Surgery: A Meta-analysis.

    PubMed

    St John, Edward Robert; Al-Khudairi, Rashed; Ashrafian, Hutan; Athanasiou, Thanos; Takats, Zoltan; Hadjiminas, Dimitri John; Darzi, Ara; Leff, Daniel Richard

    2017-02-01

    The aim of this study was to conduct a systematic review and meta-analysis to clarify the diagnostic accuracy of intraoperative breast margin assessment (IMA) techniques against which the performance of emerging IMA technologies may be compared. IMA techniques have failed to penetrate routine practice due to limitations, including slow reporting times, technical demands, and logistics. Emerging IMA technologies are being developed to reduce positive margin and re-excision rates and will be compared with the diagnostic accuracy of existing techniques. Studies were identified using electronic bibliographic searches up to January 2016. MESH terms and all-field search terms included "Breast Cancer" AND "Intraoperative" AND "Margin." Only clinical studies with raw diagnostic accuracy data as compared with final permanent section histopathology were included. A bivariate model for diagnostic meta-analysis was used to attain overall pooled sensitivity and specificity. Eight hundred thirty-eight unique studies revealed 35 studies for meta-analysis. Pooled sensitivity (Sens), specificity (Spec), and area under the receiver operating characteristic curve (AUROC) values were calculated per group (Sens, Spec, AUROC): frozen section = 86%, 96%, 0.96 (n = 9); cytology = 91%, 95%, 0.98 (n = 11); intraoperative ultrasound = 59%, 81%, 0.78 (n = 4); specimen radiography = 53%, 84%, 0.73 (n = 9); optical spectroscopy = 85%, 87%, 0.88 (n = 3). Pooled data suggest that frozen section and cytology have the greatest diagnostic accuracy. However, these methods are resource intensive and turnaround times for results have prevented widespread international adoption. Emerging technologies need to compete with the diagnostic accuracy of existing techniques while offering advantages in terms of speed, cost, and reliability.

  4. Accuracy evaluation of Kinematic GPS analysis based on the difference of the IGS products

    NASA Astrophysics Data System (ADS)

    Watanabe, T.; Tadokoro, K.; Okuda, T.; Ikuta, R.; Kuno, M.

    2010-12-01

    The Philippine Sea plate subducts beneath southwest Japan from the Nankai Trough at a rate of about 4-6 cm/year, where great interplate earthquakes have repeatedly occurred every 100-150 years. To clarify the mechanism of earthquake occurrence at such subduction zones, we require geodetic data obtained not only from onshore areas but also from offshore areas. However, it is difficult to estimate the plate interaction in offshore areas due to the scarcity of those data. For this issue, we have conducted seafloor geodetic observation using GPS/Acoustic techniques around the Nankai Trough since 2004. In this system, we estimate the position of a surveying vessel by kinematic GPS analysis and measure the distance between the vessel and the benchmark on the seafloor by acoustic measurements. Next, we determine the location of the benchmark and detect crustal movement on the seafloor. Recently, a number of research institutes have conducted seafloor geodetic observations after earthquakes occurred in offshore areas (Tadokoro et al., 2006), and a speedy solution is therefore desired from the viewpoint of not only scientific research but also disaster mitigation. Although we use the IGS final product for its accuracy, its latency is long, about 13 days or more. On the other hand, the IGS ultra-rapid product is updated every 6 hours with a delay of 3 hours. In a previous study, we compared the kinematic GPS solutions using the IGS final and ultra-rapid products. The rover GPS site was located on the roof of a building at Nagoya University and 5 fixed GPS sites were located on the roofs of other buildings, with baseline lengths of 30-150 km. The standard deviation of the difference between final and ultra-rapid solutions increases with increasing baseline length and is about 1.6 mm for the 150 km baseline. This result showed that the difference was not significant for seafloor geodetic observations. In this study, we investigate the kinematic GPS solutions based

  5. Accuracy of bite mark analysis from food substances: A comparative study

    PubMed Central

    Daniel, M. Jonathan; Pazhani, Ambiga

    2015-01-01

    Aims and Objectives: The aims and objectives of the study were to compare the accuracy of bite mark analysis from three different food substances-apple, cheese and chocolate using two techniques-the manual docking procedure and computer assisted overlay generation technique and to compare the accuracy of the two techniques for bite mark analysis on food substances. Materials and Methods: The individuals who participated in the study were made to bite on three food substances-apple, cheese, and chocolate. Dentate individuals were included in the study. Edentulous individuals and individuals having a missing anterior tooth were excluded from the study. The dental casts of the individual were applied to the positive cast of the bitten food substance to determine docking or matching. Then, computer generated overlays were compared with the bite mark pattern on the foodstuff. Results: The results were tabulated and the comparison of bite mark analysis on the three different food substances was analyzed by the Kruskal-Wallis ANOVA test and the comparison of the two techniques was analyzed by Spearman's Rho correlation coefficient. Conclusion: On comparing the bite mark analysis from the three food substances-apple, cheese and chocolate, the accuracy was found to be greater for chocolate and cheese than for apple. PMID:26816463

  6. Increased prognostic accuracy of TBI when a brain electrical activity biomarker is added to loss of consciousness (LOC).

    PubMed

    Hack, Dallas; Huff, J Stephen; Curley, Kenneth; Naunheim, Roseanne; Ghosh Dastidar, Samanwoy; Prichep, Leslie S

    2017-07-01

    Extremely high accuracy for predicting CT+ traumatic brain injury (TBI) using a quantitative EEG (QEEG) based multivariate classification algorithm was demonstrated in an independent validation trial, in Emergency Department (ED) patients, using an easy to use handheld device. This study compares the predictive power using that algorithm (which includes LOC and amnesia) to the predictive power of LOC alone or LOC plus traumatic amnesia. ED patients aged 18-85 years presenting within 72 h of closed head injury, with GCS 12-15, were study candidates. 680 patients with known absence or presence of LOC were enrolled (145 CT+ and 535 CT- patients). 5-10 min of eyes-closed EEG was acquired using the Ahead 300 handheld device, from frontal and frontotemporal regions. The same classification algorithm methodology was used for both the EEG based and the LOC based algorithms. Predictive power was evaluated using area under the ROC curve (AUC) and odds ratios. The QEEG based classification algorithm demonstrated significant improvement in predictive power compared with LOC alone, both in improved AUC (83% improvement) and odds ratio (increase from 4.65 to 16.22). Adding RGA and/or PTA to LOC was not improved over LOC alone. Rapid triage of TBI relies on strong initial predictors. Addition of an electrophysiology-based marker was shown to outperform report of LOC alone or LOC plus amnesia in determining risk of an intracranial bleed. In addition, ease of use at point-of-care, non-invasiveness, and rapid results using such technology suggest significant value added to standard clinical prediction.

  7. Development of thermodynamic optimum searching (TOS) to improve the prediction accuracy of flux balance analysis.

    PubMed

    Zhu, Yan; Song, Jiangning; Xu, Zixiang; Sun, Jibin; Zhang, Yanping; Li, Yin; Ma, Yanhe

    2013-03-01

    Flux balance analysis (FBA) has been widely used in calculating steady-state flux distributions that provide important information for metabolic engineering. Several thermodynamics-based methods, for example, quantitative assignment of reaction directionality and energy balance analysis, have been developed to improve the prediction accuracy of FBA. However, these methods can only generate a thermodynamically feasible range, rather than the most thermodynamically favorable solution. We therefore developed a novel optimization method termed thermodynamic optimum searching (TOS) to calculate the thermodynamically optimal solution, based on the second law of thermodynamics, the minimum magnitude of the Gibbs free energy change and the maximum entropy production principle (MEPP). TOS was then applied to five physiological conditions of Escherichia coli to evaluate its effectiveness. The resulting prediction accuracy was found to be significantly improved (by 10.7-48.5%) when compared with the 13C-fluxome data, indicating that TOS can be considered an advanced calculation and prediction tool in metabolic engineering.
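
    For context, plain FBA is a linear programme: maximise a flux objective subject to the steady-state mass balance S v = 0 and flux bounds. The toy Python sketch below shows only that baseline on a made-up three-reaction network; the thermodynamic ranking performed by TOS is not reproduced here.

        import numpy as np
        from scipy.optimize import linprog

        # Toy network: R1 uptake (-> A), R2 conversion (A -> B), R3 biomass export (B ->).
        S = np.array([[ 1, -1,  0],    # metabolite A balance
                      [ 0,  1, -1]])   # metabolite B balance
        bounds = [(0, 10), (0, 10), (0, 10)]
        c = np.array([0, 0, -1.0])     # linprog minimises, so maximise R3 via -1
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print(res.x)                   # steady-state flux distribution, here [10, 10, 10]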

  8. Accuracy and Numerical Stability Analysis of Lattice Boltzmann Method with Multiple Relaxation Time for Incompressible Flows

    NASA Astrophysics Data System (ADS)

    Pradipto; Purqon, Acep

    2017-07-01

    The lattice Boltzmann method (LBM) is a novel method for simulating fluid dynamics. Nowadays, the applications of LBM range from incompressible flow and flow in porous media to microflows. The common collision model of LBM is BGK with a constant single relaxation time τ. However, BGK suffers from numerical instabilities. These instabilities can be eliminated by implementing LBM with multiple relaxation times (MRT). Both schemes have been implemented for the incompressible two-dimensional lid-driven cavity. The stability analysis was done by finding the maximum Reynolds number and velocity for converged simulations. The accuracy analysis was done by comparing the velocity profile with the benchmark results from Ghia et al. and calculating the net velocity flux. The tests concluded that LBM with MRT is more stable than BGK and has similar accuracy. The maximum Reynolds numbers that converge are 3200 for BGK and 7500 for MRT, respectively.
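
    For readers new to the method, the following Python sketch shows a single-relaxation-time (BGK) collision-and-streaming step on a small periodic D2Q9 grid; it is illustrative only, and the MRT scheme discussed above would replace the scalar relaxation time tau with a moment-space relaxation matrix.

        import numpy as np

        nx, ny, tau = 64, 64, 0.8
        c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
        w = np.array([4/9] + [1/9]*4 + [1/36]*4)

        def equilibrium(rho, ux, uy):
            cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
            return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

        f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
        for _ in range(10):
            rho = f.sum(axis=0)
            ux = (f*c[:, 0, None, None]).sum(axis=0)/rho
            uy = (f*c[:, 1, None, None]).sum(axis=0)/rho
            f += -(f - equilibrium(rho, ux, uy))/tau        # BGK collision
            for i in range(9):                              # streaming (periodic)
                f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
        print(rho.mean())                                   # mass stays at 1.0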

  9. Diagnostic accuracy of noninvasive fetal Rh genotyping from maternal blood--a meta-analysis.

    PubMed

    Geifman-Holtzman, Ossie; Grotegut, Chad A; Gaughan, John P

    2006-10-01

    The purpose of this study was to determine the reported diagnostic accuracy, the validity, and the current limitations of fetal Rh genotyping from peripheral maternal blood based on the existing English-written publications. A search of the English literature describing fetal RhD determination from maternal blood was conducted. From each study, we determined the number of samples tested, fetal RhD genotype, the source of the fetal DNA (maternal plasma, serum, or fetal cells), gestational age, and confirmation of fetal Rh type. The presence of alloimmunization and exclusions of tested samples were noted. For the meta-analysis we calculated composite estimates using 2 random effects models, weighted GLM and Bayesian. Sensitivity, specificity, positive and negative predictive values were calculated. We identified 37 English-written publications that included 44 protocols reporting noninvasive Rh genotyping using fetal DNA obtained from maternal blood on a total of 3261 samples. A total of 183 (183/3261, 5.6%) samples were excluded from the meta-analysis. The overall diagnostic accuracy after exclusions was 94.8%. The gestational ages ranged between 8 and 42 weeks gestation. Maternal serum and plasma were found to be the best source for accurate diagnosis of fetal RhD type in 394/410 (96.1%) and 2293/2377 (96.5%), respectively. There were 719/783 (91.8%) alloimmunized patients that were correctly diagnosed. There were 16 studies that reported 100% diagnostic accuracy in their fetal RhD genotyping. The diagnostic accuracy of noninvasive fetal Rh determination using maternal peripheral blood is 94.8%. Its use can be applicable to Rh prophylaxis and to the management of Rh alloimmunized pregnancies. Improvements of the technique and further study of structure and rearrangements of the RhD gene may improve accuracy of testing and enable large-scale, risk-free fetal RhD genotyping using maternal blood.

  10. Accuracy of urea breath test in Helicobacter pylori infection: meta-analysis.

    PubMed

    Ferwana, Mazen; Abdulmajeed, Imad; Alhajiahmed, Ali; Madani, Wedad; Firwana, Belal; Hasan, Rim; Altayar, Osama; Limburg, Paul J; Murad, Mohammad Hassan; Knawy, Bandar

    2015-01-28

    To quantitatively summarize and appraise the available evidence of urea breath test (UBT) use to diagnose Helicobacter pylori (H. pylori) infection in patients with dyspepsia and provide pooled diagnostic accuracy measures. We searched MEDLINE, EMBASE, Cochrane library and other databases for studies addressing the value of UBT in the diagnosis of H. pylori infection. We included cross-sectional studies that evaluated the diagnostic accuracy of UBT in adult patients with dyspeptic symptoms. Risk of bias was assessed using QUADAS (Quality Assessment of Diagnostic Accuracy Studies)-2 tool. Diagnostic accuracy measures were pooled using the random-effects model. Subgroup analysis was conducted by UBT type (13C vs 14C) and by measurement technique (Infrared spectrometry vs Isotope Ratio Mass Spectrometry). Out of 1380 studies identified, only 23 met the eligibility criteria. Fourteen studies (61%) evaluated 13C UBT and 9 studies (39%) evaluated 14C UBT. There was significant variation in the type of reference standard tests used across studies. Pooled sensitivity was 0.96 (95% CI: 0.95-0.97) and pooled specificity was 0.93 (95% CI: 0.91-0.94). Likelihood ratio for a positive test was 12 and for a negative test was 0.05, with an area under the curve of 0.985. Meta-analyses were associated with a significant statistical heterogeneity that remained unexplained after subgroup analysis. The included studies had a moderate risk of bias. UBT has high diagnostic accuracy for detecting H. pylori infection in patients with dyspepsia. The reliability of diagnostic meta-analytic estimates however is limited by significant heterogeneity.

  11. Measuring diagnostic and predictive accuracy in disease management: an introduction to receiver operating characteristic (ROC) analysis.

    PubMed

    Linden, Ariel

    2006-04-01

    Diagnostic or predictive accuracy concerns are common in all phases of a disease management (DM) programme, and ultimately play an influential role in the assessment of programme effectiveness. Areas such as the identification of diseased patients, predictive modelling of future health status and costs, and risk stratification are just a few of the domains in which assessment of accuracy is beneficial, if not critical. The most commonly used analytical model for this purpose is the standard 2 x 2 table method in which sensitivity and specificity are calculated. However, there are several limitations to this approach, including the reliance on a single defined criterion or cut-off for determining a true-positive result, use of non-standardized measurement instruments and sensitivity to outcome prevalence. This paper introduces receiver operating characteristic (ROC) analysis as a more appropriate and useful technique for assessing diagnostic and predictive accuracy in DM. Its advantages include: testing accuracy across the entire range of scores, thereby not requiring a predetermined cut-off point; easily examined visual and statistical comparisons across tests or scores; and independence from outcome prevalence. Therefore, the implementation of ROC as an evaluation tool should be strongly considered in the various phases of a DM programme.
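
    A brief Python sketch of the idea, using simulated scores rather than DM programme data: the ROC curve sweeps every possible cut-off and the AUC summarises accuracy without fixing a single threshold.

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(1)
        y = rng.integers(0, 2, size=500)                 # 0 = no event, 1 = event
        score = y*1.0 + rng.normal(0, 1.2, size=500)     # higher scores for events
        fpr, tpr, thresholds = roc_curve(y, score)       # accuracy across all cut-offs
        print("AUC =", roc_auc_score(y, score))          # threshold-free summary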

  12. Assembly accuracy analysis for small components with a planar surface in large-scale metrology

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Huang, Peng; Li, Jiangxiong; Ke, Yinglin; Yang, Bingru; Maropoulos, Paul G.

    2016-04-01

    Large-scale mechanical products, such as aircraft and rockets, consist of large numbers of small components, which introduce additional difficulty for assembly accuracy and error estimation. Planar surfaces as key product characteristics are usually utilised for positioning small components in the assembly process. This paper focuses on assembly accuracy analysis of small components with planar surfaces in large-scale volume products. To evaluate the accuracy of the assembly system, an error propagation model for measurement error and fixture error is proposed, based on the assumption that all errors are normally distributed. In this model, the general coordinate vector is adopted to represent the position of the components. The error transmission functions are simplified into a linear model, and the coordinates of the reference points are composed by theoretical value and random error. The installation of a Head-Up Display is taken as an example to analyse the assembly error of small components based on the propagation model. The result shows that the final coordination accuracy is mainly determined by measurement error of the planar surface in small components. To reduce the uncertainty of the plane measurement, an evaluation index of measurement strategy is presented. This index reflects the distribution of the sampling point set and can be calculated by an inertia moment matrix. Finally, a practical application is introduced for validating the evaluation index.
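
    The linearised error propagation described above takes the familiar form Sigma_x = J Sigma_p J^T, where J is the linearised error transmission function and Sigma_p the covariance of the measured reference points. A generic Python sketch with placeholder values, not the paper's numbers:

        import numpy as np

        J = np.array([[1.0, 0.0, 0.5],
                      [0.0, 1.0, 0.2]])                  # linearised transmission function
        Sigma_p = np.diag([0.02**2, 0.02**2, 0.05**2])   # measurement variances (mm^2)
        Sigma_x = J @ Sigma_p @ J.T                      # propagated pose covariance
        print(np.sqrt(np.diag(Sigma_x)))                 # 1-sigma pose uncertainties (mm)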

  13. Analysis of machining accuracy during free form surface milling simulation for different milling strategies

    NASA Astrophysics Data System (ADS)

    Matras, A.; Kowalczyk, R.

    2014-11-01

    The analysis results of machining accuracy after free form surface milling simulations (based on machining EN AW-7075 alloys) for different machining strategies (Level Z, Radial, Square, Circular) are presented in this work. The milling simulations were performed using CAD/CAM Esprit software. The accuracy of the obtained allowance is defined as the difference between the theoretical surface of the workpiece (the surface designed in CAD software) and the machined surface after a milling simulation. The difference between the two surfaces describes a roughness value, which results from the mapping of the tool shape onto the machined surface. The accuracy of the remaining allowance directly indicates the surface quality after finish machining. The described CAD/CAM methodology can shorten the design time of the machining process for free form surface milling on a 5-axis CNC milling machine by removing the need to machine the part in order to measure the machining accuracy for the selected strategies and cutting data.

  14. Accuracy and repeatability of Roentgen stereophotogrammetric analysis (RSA) for measuring knee laxity in longitudinal studies.

    PubMed

    Fleming, B C; Peura, G D; Abate, J A; Beynnon, B D

    2001-10-01

    Roentgen stereophotogrammetric analysis (RSA) can be used to assess temporal changes in anterior-posterior (A-P) knee laxity. However, the accuracy and precision of RSA are dependent on many factors and should be independently evaluated for a particular application. The objective of this study was to evaluate the use of RSA for measuring A-P knee laxity. The specific aims were to assess the variation or "noise" inherent to RSA, to determine the reproducibility of RSA for repeated A-P laxity testing, and to assess the accuracy of these measurements. Two experiments were performed. The first experiment utilized three rigid models of the tibiofemoral joint to assess the noise and to compare digitization errors of two independent examiners. No differences were found in the kinematic outputs of the RSA due to examiner, repeated trials, or the model used. In a second experiment, A-P laxity values between the A-P shear load limits of +/-60 N of five cadaver goat knees were measured to assess the error associated with repeated testing. The RSA laxity values were also compared to those obtained from a custom designed linkage system. The mean A-P laxity values with the knee at 30, 60, and 90 degrees of flexion for the ACL-intact goat knee (+/-95% confidence interval) were 0.8 (+/-0.25), 0.9 (+/-0.29), and 0.4 (+/-0.22) mm, respectively. In the ACL-deficient knee, the A-P laxity values increased by an order of magnitude to 8.8 (+/-1.39), 7.6 (+/-1.32), and 3.1 (+/-1.20) mm, respectively. No significant differences were found between the A-P laxity values measured by RSA and the independent measurement technique. A highly significant linear relationship (r² = 0.83) was also found between these techniques. This study suggests that the RSA method is an accurate and precise means to measure A-P knee laxity for repeated testing over time.

  15. Systematic Review and Meta-analysis of Diagnostic Accuracy of Percutaneous Renal Tumour Biopsy.

    PubMed

    Marconi, Lorenzo; Dabestani, Saeed; Lam, Thomas B; Hofmann, Fabian; Stewart, Fiona; Norrie, John; Bex, Axel; Bensalah, Karim; Canfield, Steven E; Hora, Milan; Kuczyk, Markus A; Merseburger, Axel S; Mulders, Peter F A; Powles, Thomas; Staehler, Michael; Ljungberg, Borje; Volpe, Alessandro

    2016-04-01

    The role of percutaneous renal tumour biopsy (RTB) remains controversial due to uncertainties regarding its diagnostic accuracy and safety. We performed a systematic review and meta-analysis to determine the safety and accuracy of percutaneous RTB for the diagnosis of malignancy, histologic tumour subtype, and grade. Medline, Embase, and Cochrane Library were searched for studies providing data on diagnostic accuracy and complications of percutaneous core biopsy (CB) or fine-needle aspiration (FNA) of renal tumours. A meta-analysis was performed to obtain pooled estimates of sensitivity and specificity for diagnosis of malignancy. The Cohen kappa coefficient (κ) was estimated for the analysis of histotype/grade concordance between diagnosis on RTB and surgical specimen. Risk of bias assessment was performed (QUADAS-2). A total of 57 studies recruiting 5228 patients were included. The overall median diagnostic rate of RTB was 92%. The sensitivity and specificity of diagnostic CBs and FNAs were 99.1% and 99.7%, and 93.2% and 89.8%, respectively. A good (κ = 0.683) and a fair (κ = 0.34) agreement were observed between histologic subtype and Fuhrman grade on RTB and surgical specimen, respectively. A very low rate of Clavien ≥ 2 complications was reported. Study limitations included selection and differential-verification bias. RTB is safe and has a high diagnostic yield in experienced centres. Both CB and FNA have good accuracy for the diagnosis of malignancy and histologic subtype, with better performance for CB. The accuracy for Fuhrman grade is fair. Overall, the quality of the evidence was moderate. Prospective cohort studies recruiting consecutive patients and using homogeneous reference standards are required. We systematically reviewed the literature to assess the safety and diagnostic performance of renal tumour biopsy (RTB). The results suggest that RTB has good accuracy in diagnosing renal cancer and its subtypes, and it appears to be safe. However, the

  16. Accuracy of three-dimensional seismic ground response analysis in time domain using nonlinear numerical simulations

    NASA Astrophysics Data System (ADS)

    Liang, Fayun; Chen, Haibing; Huang, Maosong

    2017-07-01

    To provide appropriate uses of nonlinear ground response analysis for engineering practice, a three-dimensional soil column with a distributed mass system and a time domain numerical analysis were implemented on the OpenSees simulation platform. A standard mesh for the three-dimensional soil column was suggested so as to satisfy the specified maximum frequency. The layered soil column was divided into multiple sub-soils, each with a different viscous damping matrix according to its shear velocity, as the soil properties were significantly different. It was necessary to use a combination of other one-dimensional or three-dimensional nonlinear seismic ground analysis programs to confirm the applicability of nonlinear seismic ground motion response analysis procedures in soft soil or for strong earthquakes. The accuracy of the three-dimensional soil column finite element method was verified by dynamic centrifuge model testing under different peak accelerations of the earthquake. As a result, nonlinear seismic ground motion response analysis procedures were improved in this study. The accuracy and efficiency of the three-dimensional seismic ground response analysis can be adapted to the requirements of engineering practice.

  17. Analysis and Improvement of Geo-Referencing Accuracy in Long Term Global AVHRR Data

    NASA Astrophysics Data System (ADS)

    Khlopenkov, K.; Minnis, P.

    2011-12-01

    Precise geolocation is one of the fundamental requirements for generating high-quality Advanced Very High Resolution Radiometer (AVHRR) Satellite Climate Data Record (SCDR) at 1-km spatial resolution for climate applications. The Global Climate Observing System (GCOS) and Committee on Earth Observing Satellites (CEOS) identified the requirement for the accuracy of geolocation of satellite data for climate applications as 1/3 field-of-view (FOV). This requirement for AVHRR series on the National Oceanic and Atmospheric Administration (NOAA) platforms cannot be met without implementing the ground control point (GCP) correction, especially for historical data, because of the limited accuracy of orbit models and uncertainty in the satellite attitude angles. This work presents a new analysis of the geo-referencing accuracy of global AVHRR data, that uses an automated image matching at pre-selected GCP locations. As a reference image, we have been using the clear-sky monthly composite imagery derived from Moderate Resolution Imaging Spectroradiometer (MODIS) MOD09 dataset at 250-m resolution. The image matching technique is applicable to processing not only the daytime observations from optical solar bands, but also the nighttime imagery by using the long wave thermal channels. The method includes the ortho-rectification to correct for surface elevation and achieves the sub-pixel accuracy in both along-scan and along-track directions. The produced image displacement map is then used to derive a correction to satellite clock error and the attitude angles. The statistics and pattern of these corrections have been analyzed for different NOAA Polar-orbiting satellites by using the HRPT, LAC, and GAC data sets. The application of the developed processing system showed that the algorithm achieved better than 1/3 FOV geolocation accuracy for most of AVHRR 1-km scenes. It has a high efficiency rate (over 97%) for global AVHRR data from NOAA-6 through NOAA-19.
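
    The image matching step described above amounts to maximising a similarity score (commonly normalised cross-correlation) between a reference chip and candidate offsets of the target image. A toy Python sketch with synthetic imagery, not the MODIS/AVHRR production chips:

        import numpy as np

        def ncc(a, b):
            a = (a - a.mean())/a.std()
            b = (b - b.mean())/b.std()
            return float((a*b).mean())                      # normalised cross-correlation

        rng = np.random.default_rng(2)
        ref = rng.random((32, 32))
        shifted = np.roll(ref, shift=(0, 3), axis=(0, 1))   # simulate a 3-pixel offset
        scores = [ncc(ref, np.roll(shifted, -dx, axis=1)) for dx in range(6)]
        print(int(np.argmax(scores)))                       # best match found at dx = 3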

  18. Accuracy and repeatability of an optical motion analysis system for measuring small deformations of biological tissues.

    PubMed

    Liu, Helen; Holt, Cathy; Evans, Sam

    2007-01-01

    Optical motion analysis techniques have been widely used in biomechanics for measuring large-scale motions such as gait, but have not yet been significantly explored for measuring smaller movements such as tooth displacements under load. In principle, very accurate measurements could be possible and this could provide a valuable tool in many engineering applications. The aim of this study was to evaluate the accuracy and repeatability of the Qualisys ProReflex-MCU120 system when measuring small displacements, as a step towards measuring tooth displacements to characterise the properties of the periodontal ligament. Accuracy and repeatability of the system were evaluated using a wedge comparator with a resolution of 0.25 microm to provide measured marker displacements in three orthogonal directions. The marker was moved in ten steps in each direction, for each of seven step sizes (0.5, 1, 2, 3, 5, 10, and 20 microm), repeated five times. Spherical and diamond markers were tested. The system accuracy (i.e. percentage of maximum absolute error in range/measurement range), in the 20-200 microm ranges, was +/-1.17%, +/-1.67% and +/-1.31% for the diamond marker in the x, y and z directions, while the system accuracy for the spherical marker was +/-1.81%, +/-2.37% and +/-1.39%. The system repeatability (i.e. maximum standard deviation in the measurement range), measured five times on different days and under different light intensities and temperatures, with step-up and then step-down measurements carried out for the same step size, was +/-1.7, +/-2.3 and +/-1.9 microm for the diamond marker, and +/-2.6, +/-3.9 and +/-1.9 microm for the spherical marker in the x, y and z directions, respectively. These results demonstrate that the system has sufficient accuracy for measuring tooth displacements and could potentially be useful in many other applications.

  19. Development of a Class of Smoothness-Increasing-Accuracy-Conserving (SIAC) Methods for Post-Processing Discontinuous Galerkin Solutions

    DTIC Science & Technology

    2013-07-01

    the theoretical extensions, pointwise error estimates demonstrating that higher-order accuracy of order 2k+2 –[d/2] is indeed achieved in the L∞-norm...estimates to the entire domain were also done. This was a significant extension as pointwise error estimates will be more useful for quantifying

  20. Accuracy analysis of CryoSat-2 SARIn mode data over Antarctica

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Bamber, Jonathan; Cheng, Xiao

    2015-04-01

    In 2010, CryoSat-2 was launched, carrying a unique satellite radar altimetry (SRA) instrument called the SAR/Interferometric Radar Altimeter (SIRAL), with the aim of measuring and monitoring sea ice, ice sheets and mountain glaciers. The novel SAR Interferometric mode (SARInM) of CryoSat-2 is designed to improve the accuracy, resolution and geolocation of height measurements over the steeper margins of ice sheets and ice caps. Over these areas, it employs the synthetic aperture radar (SAR) capability to reduce the size of the footprint to effectively 450 m along track and ~1 km across track, a capability adapted from an airborne prototype originally termed a delay-Doppler altimeter. Additionally, CryoSat-2 used the phase difference between its two antennas to estimate surface slope in the across-track direction and identify the point of closest approach directly. The phase difference is 2π for a surface slope of approximately 1°. If the slope is above this threshold, the tracked surface in the returned waveform may not be the point of closest approach, causing an error in the slope correction. For this reason, the analysis was limited to slopes of 1° or less in this study. We used the extensive coverage of Antarctica provided by the ICESat laser altimeter mission between 2003 and 2009 to assess the accuracy of SARInM data. We corrected for changes in elevations due to the interval between the acquisition of the ICESat and CryoSat-2 data (from July 2010 to December 2013). Two methods were used: (1) the ICESat point was compared with a DEM derived from CryoSat-2 data (Point-to-DEM; PtoDEM), and (2) the ICESat point was compared with a CryoSat-2 point directly (Point-to-Point; PtoP). For PtoDEM, CryoSat-2 elevations were interpolated onto a regular 1 km polar stereographic grid with a standard parallel of 71°S, using ordinary kriging. For PtoP, the maximum distance between a CryoSat-2 point location and an ICESat point location was set to 35 m. For the areas with slopes less than 0.2°, the

  1. Novel Resistance Measurement Method: Analysis of Accuracy and Thermal Dependence with Applications in Fiber Materials.

    PubMed

    Casans, Silvia; Rosado-Muñoz, Alfredo; Iakymchuk, Taras

    2016-12-14

    Material resistance is important since different physicochemical properties can be extracted from it. This work describes a novel resistance measurement method valid for a wide range of resistance values up to 100 GΩ at a low powered, small sized, digitally controlled and wireless communicated device. The analog and digital circuits of the design are described, analysing the main error sources affecting the accuracy. Accuracy and extended uncertainty are obtained for a pattern decade box, showing a maximum of 1% accuracy for temperatures below 30 °C in the range from 1 MΩ to 100 GΩ. Thermal analysis showed stability up to 50 °C for values below 10 GΩ and systematic deviations for higher values. Power supply Vi applied to the measurement probes is also analysed, showing no differences in case of the pattern decade box, except for resistance values above 10 GΩ and temperatures above 35 °C. To evaluate the circuit behaviour under fiber materials, an 11-day drying process in timber from four species (Oregon pine-Pseudotsuga menziesii, cedar-Cedrus atlantica, ash-Fraxinus excelsior, chestnut-Castanea sativa) was monitored. Results show that the circuit, as expected, provides different resistance values (they need individual conversion curves) for different species and the same ambient conditions. Additionally, it was found that, contrary to the decade box analysis, Vi affects the resistance value due to material properties. In summary, the proposed circuit is able to accurately measure material resistance that can be further related to material properties.

  2. Meta-Analysis: Diagnostic Accuracy of Anti-Cyclic Citrullinated Peptide Antibody for Juvenile Idiopathic Arthritis

    PubMed Central

    Wang, Yan; Pei, Fengyan; Wang, Xingjuan; Sun, Zhiyu; Hu, Chengjin; Dou, Hengli

    2015-01-01

    Objective. To estimate the diagnostic accuracy of the anti-CCP test in JIA and to evaluate factors associated with higher accuracy. Methods. Two investigators performed an extensive search of the literature published between January 2000 and January 2014. The included articles were assessed by the Quality Assessment of Diagnostic Accuracy Studies tool. The meta-analysis was performed using a summary ROC (SROC) curve and a bivariate random-effect model to estimate sensitivity and specificity across studies. Results. The bivariate meta-analysis yielded a pooled sensitivity and specificity of 10% (95% confidence interval (CI): 6.0%–15.0%) and 99.0% (95% CI: 98.0%–100.0%). The area under the SROC curve was 0.96. Sensitivity estimates were highly heterogeneous, which was partially explained by the higher sensitivity in the rheumatoid factor-positive polyarthritis (RF+ PA) subtype (48.0%; 95% CI: 31.0%–65.0%) than in the other subtypes (17.0%; 95% CI: 14.0%–20.0%) and the higher sensitivity of the Inova assay (17.0%; 95% CI: 14.0%–20.0%) than the other assays (5.0%; 95% CI: 2.0%–11.0%). Conclusions. The anti-CCP antibody test has a high specificity for the diagnosis of JIA. The sensitivity of this test is low and varies across populations but is higher in RF+ PA than in other JIA subtypes. PMID:25789331

  3. Meta-analysis: diagnostic accuracy of anti-cyclic citrullinated peptide antibody for juvenile idiopathic arthritis.

    PubMed

    Wang, Yan; Pei, Fengyan; Wang, Xingjuan; Sun, Zhiyu; Hu, Chengjin; Dou, Hengli

    2015-01-01

    To estimate the diagnostic accuracy of the anti-CCP test in JIA and to evaluate factors associated with higher accuracy. Two investigators performed an extensive search of the literature published between January 2000 and January 2014. The included articles were assessed by the Quality Assessment of Diagnostic Accuracy Studies tool. The meta-analysis was performed using a summary ROC (SROC) curve and a bivariate random-effect model to estimate sensitivity and specificity across studies. The bivariate meta-analysis yielded a pooled sensitivity and specificity of 10% (95% confidence interval (CI): 6.0%-15.0%) and 99.0% (95% CI: 98.0%-100.0%). The area under the SROC curve was 0.96. Sensitivity estimates were highly heterogeneous, which was partially explained by the higher sensitivity in the rheumatoid factor-positive polyarthritis (RF+ PA) subtype (48.0%; 95% CI: 31.0%-65.0%) than in the other subtypes (17.0%; 95% CI: 14.0%-20.0%) and the higher sensitivity of the Inova assay (17.0%; 95% CI: 14.0%-20.0%) than the other assays (5.0%; 95% CI: 2.0%-11.0%). The anti-CCP antibody test has a high specificity for the diagnosis of JIA. The sensitivity of this test is low and varies across populations but is higher in RF+ PA than in other JIA subtypes.

  4. Novel Resistance Measurement Method: Analysis of Accuracy and Thermal Dependence with Applications in Fiber Materials

    PubMed Central

    Casans, Silvia; Rosado-Muñoz, Alfredo; Iakymchuk, Taras

    2016-01-01

    Material resistance is important since different physicochemical properties can be extracted from it. This work describes a novel resistance measurement method valid for a wide range of resistance values up to 100 GΩ at a low powered, small sized, digitally controlled and wireless communicated device. The analog and digital circuits of the design are described, analysing the main error sources affecting the accuracy. Accuracy and extended uncertainty are obtained for a pattern decade box, showing a maximum of 1% accuracy for temperatures below 30 °C in the range from 1 MΩ to 100 GΩ. Thermal analysis showed stability up to 50 °C for values below 10 GΩ and systematic deviations for higher values. Power supply Vi applied to the measurement probes is also analysed, showing no differences in case of the pattern decade box, except for resistance values above 10 GΩ and temperatures above 35 °C. To evaluate the circuit behaviour under fiber materials, an 11-day drying process in timber from four species (Oregon pine-Pseudotsuga menziesii, cedar-Cedrus atlantica, ash-Fraxinus excelsior, chestnut-Castanea sativa) was monitored. Results show that the circuit, as expected, provides different resistance values (they need individual conversion curves) for different species and the same ambient conditions. Additionally, it was found that, contrary to the decade box analysis, Vi affects the resistance value due to material properties. In summary, the proposed circuit is able to accurately measure material resistance that can be further related to material properties. PMID:27983652

  5. Accuracy analysis for triangulation and tracking based on time-multiplexed structured light.

    PubMed

    Wagner, Benjamin; Stüber, Patrick; Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris

    2014-08-01

    The authors' research group is currently developing a new optical head tracking system for intracranial radiosurgery. This tracking system utilizes infrared laser light to measure features of the soft tissue on the patient's forehead. These features are intended to offer highly accurate registration with respect to the rigid skull structure by means of compensating for the soft tissue. In this context, the system also has to be able to quickly generate accurate reconstructions of the skin surface. For this purpose, the authors have developed a laser scanning device which uses time-multiplexed structured light to triangulate surface points. The accuracy of the authors' laser scanning device is analyzed and compared for different triangulation methods. These methods are given by the Linear-Eigen method and a nonlinear least squares method. Since Microsoft's Kinect camera represents an alternative for fast surface reconstruction, the authors' results are also compared to the triangulation accuracy of the Kinect device. Moreover, the authors' laser scanning device was used for tracking of a rigid object to determine how this process is influenced by the remaining triangulation errors. For this experiment, the scanning device was mounted to the end-effector of a robot to be able to calculate a ground truth for the tracking. The analysis of the triangulation accuracy of the authors' laser scanning device revealed a root mean square (RMS) error of 0.16 mm. In comparison, the analysis of the triangulation accuracy of the Kinect device revealed a RMS error of 0.89 mm. It turned out that the remaining triangulation errors only cause small inaccuracies for the tracking of a rigid object. Here, the tracking accuracy was given by a RMS translational error of 0.33 mm and a RMS rotational error of 0.12°. This paper shows that time-multiplexed structured light can be used to generate highly accurate reconstructions of surfaces. Furthermore, the reconstructed point sets can be
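
    For orientation, linear triangulation of the kind named above (the Linear-Eigen approach) solves a small homogeneous least-squares problem per surface point. A self-contained Python sketch with made-up camera matrices, not the scanner's calibration:

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            # Direct linear transform: stack the four linear constraints and take
            # the right singular vector with the smallest singular value.
            A = np.vstack([x1[0]*P1[2] - P1[0], x1[1]*P1[2] - P1[1],
                           x2[0]*P2[2] - P2[0], x2[1]*P2[2] - P2[1]])
            X = np.linalg.svd(A)[2][-1]
            return X[:3]/X[3]                                       # homogeneous -> 3D point

        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
        P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # translated camera
        X_true = np.array([0.2, 0.1, 5.0, 1.0])
        x1 = P1 @ X_true; x1 = x1[:2]/x1[2]
        x2 = P2 @ X_true; x2 = x2[:2]/x2[2]
        print(triangulate(P1, P2, x1, x2))                          # approx. (0.2, 0.1, 5.0)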

  6. Local crystal structure analysis with 10-pm accuracy using scanning transmission electron microscopy.

    PubMed

    Saito, Mitsuhiro; Kimoto, Koji; Nagai, Takuro; Fukushima, Shun; Akahoshi, Daisuke; Kuwahara, Hideki; Matsui, Yoshio; Ishizuka, Kazuo

    2009-06-01

    We demonstrate local crystal structure analysis based on annular dark-field (ADF) imaging in scanning transmission electron microscopy (STEM). Using a stabilized STEM instrument and customized software, we first realize high accuracy of elemental discrimination and atom-position determination with a 10-pm-order accuracy, which can reveal major cation displacements associated with a variety of material properties, e.g. ferroelectricity and colossal magnetoresistivity. A-site ordered/disordered perovskite manganites Tb(0.5)Ba(0.5)MnO(3) are analysed; A-site ordering and a Mn-site displacement of 12 pm are detected in each specific atomic column. This method can be applied to practical and advanced materials, e.g. strongly correlated electron materials.

  7. Accuracy analysis of height difference models derived from terrestrial laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Glira, Philipp; Briese, Christian; Pfeifer, Norbert; Dusik, Jana; Hilger, Ludwig; Neugirg, Fabian; Baewert, Henning

    2014-05-01

    In many research areas the temporal development of the earth surface topography is investigated for geomorphological analysis (e.g. landslide monitoring). Terrestrial laser scanning (TLS) often is used for this purpose, as it allows a fast and detailed 3d reconstruction of the sampled object. The temporal development of the earth surface usually is investigated on the basis of rasterized data, i.e. digital terrain models (DTM). The difference between two DTMs - the difference model - should preferably correspond to the terrain height changes occurred between the measurement campaigns. Actually, these height differences can be influenced by numerous potential error sources. The height accuracy of each raster cell is affected primarily by (a) the measurement accuracy of the deployed TLS, (b) the terrain topography (e.g. roughness), (c) the registration accuracy, (d) the georeferencing accuracy and (e) the raster interpolation method. Thus, in this contribution, height differences are treated as stochastic variables in order to estimate their precision. For an accurate estimation of the height difference precision a detailed knowledge about the whole processing pipeline (from the raw point clouds to the final difference model) is essential. In this study, first the height difference precision is estimated by a rigorous error propagation. As main result, for each raster cell of the difference model, a corresponding height error is estimated, forming an error map. A statistical hypothesis test is presented in order to judge the significance of a height difference. Furthermore, in order to asses the effect of single factors on the final height difference precision, multivariate statistic methods are applied. This analysis allows the deduction of a simple error propagation model, neglecting error sources with small impact on the final precision. The proposed method is demonstrated by means of TLS data acquired at the Gepatschferner (Tyrol, Austria). This study was carried
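
    A much-simplified Python sketch of the per-cell significance test described above, assuming independent per-epoch height errors and using synthetic placeholder grids rather than the Gepatschferner data:

        import numpy as np

        dh       = np.array([[0.02, 0.15], [-0.30, 0.01]])   # height differences (m)
        sigma_t1 = np.full_like(dh, 0.05)                     # epoch-1 height errors (m)
        sigma_t2 = np.full_like(dh, 0.04)                     # epoch-2 height errors (m)
        sigma_dh = np.sqrt(sigma_t1**2 + sigma_t2**2)         # propagated error of dh
        significant = np.abs(dh) > 1.96*sigma_dh              # two-sided 5% test per cell
        print(significant)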

  8. Menu label accuracy at a university's foodservices. An exploratory recipe nutrition analysis.

    PubMed

    Feldman, Charles; Murray, Douglas; Chavarria, Stephanie; Zhao, Hang

    2015-09-01

    The increase in the weight of American adults and children has been positively associated with the prevalence of the consumption of food-away-from-home. The objective was to assess the accuracy of claimed nutritional information of foods purchased in contracted foodservices located on the campus of an institution of higher education. Fifty popular food items were randomly collected from five main dining outlets located on a selected campus in the northeastern United States. The sampling was repeated three times on separate occasions for an aggregate total of 150 food samples. The samples were then weighed and assessed for nutrient composition (protein, cholesterol, fiber, carbohydrates, total fat, calories, sugar, and sodium) using nutrient analysis software. Results were compared with foodservices' published nutrition information. Two group comparisons, claimed and measured, were performed using the paired-sample t-test. Descriptive statistics were used as well. Among the nine nutritional values, six nutrients (total fat, sodium, protein, fiber, cholesterol, and weight) had more than 10% positive average discrepancies between measured and claimed values. Statistical significance of the variance was obtained in four of the eight categories of nutrient content: total fat, sodium, protein, and cholesterol (P < .05). Significance was also reached in the variance of actual portion weight compared to the published claims (P < .001). Significant differences of portion size (weight), total fat, sodium, protein, and cholesterol were found among the sampled values and the foodservices' published claims. The findings from this study raise the concern that if the actual nutritional information does not accurately reflect the declared values on menus, conclusions, decisions and actions based on posted information may not be valid.

  9. A newly developed peripheral anterior chamber depth analysis system: principle, accuracy, and reproducibility

    PubMed Central

    Kashiwagi, K; Kashiwagi, F; Toda, Y; Osada, K; Tsumura, T; Tsukahara, S

    2004-01-01

    Aim: To develop a new, non-contact system for measuring anterior chamber depth (ACD) quantitatively, and to investigate its accuracy as well as interobserver and intraobserver reproducibility. Methods: The system scanned the ACD from the optical axis to the limbus in approximately 0.5 second and took 21 consecutive slit lamp images at 0.4 mm intervals. A computer installed program automatically evaluated the ACD, central corneal thickness (CT), and corneal radius of curvature (CRC) instantly. A dummy eye was used for investigating measurement accuracy. The effects of CT and CRC on the measurement results were examined using a computer simulation model to minimise measurement errors. Three examiners measured the ACD in 10 normal eyes, and interobserver and intraobserver reproducibility was analysed. Results: The ACD values measured by this system were very similar to theoretical values. Increase of CRC and decrease in CT decreased ACD and vice versa. Data calibration using evaluated CT and CRC successfully reduced measurement errors. Intraobserver and interobserver variations were small. Their coefficient variation values were 7.4% (SD 2.3%) and 6.7% (0.7%), and these values tended to increase along the distance from the optical axis. Conclusion: The current system can measure ACD with high accuracy as well as high intraobserver and interobserver reproducibility. It has potential use in measuring ACD quantitatively and screening subjects with narrow angle. PMID:15258020

  10. Utilization of Pharmacy Technicians to Increase the Accuracy of Patient Medication Histories Obtained in the Emergency Department.

    PubMed

    Rubin, Ellen C; Pisupati, Radhika; Nerenberg, Steven F

    2016-05-01

    The purpose of this study is to determine the accuracy of a pharmacy technician-collected medication history pilot program in the emergency department. This was completed by reviewing all elements of the technician activity by direct observation and by verifying the technician-collected medication list through a second phone call by a pharmacist to the outpatient pharmacy. This was a retrospective, single-center study conducted from March to April 2015. Four certified pharmacy technicians were trained by a postgraduate year 1 (PGY1) pharmacy practice resident on how to collect, verify, and accurately enter medication histories into the electronic medical record. Accuracy of pharmacy technician-collected medication histories was verified by a pharmacist through observation of their patient interviews, review of technician-completed medication history forms, and by contacting the patient's outpatient pharmacy. The pharmacy technician-completed medication histories resulted in an absolute risk reduction of errors of 50% and a relative risk reduction of errors of 77% (p < .001) in comparison to medication histories collected by non-pharmacy personnel. With high accuracy rates, pharmacy technicians proved to be a valuable asset to the medication history process and can enhance patient safety during care transitions. The results of this study further support the Pharmacy Practice Model Initiative vision to advance the pharmacy technician role to improve the process of medication history taking and reconciliation within the health care system.
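
    For reference, the absolute and relative risk reductions quoted above follow directly from the two error rates. The Python sketch below uses hypothetical rates chosen only so that the arithmetic reproduces the reported 50% and 77%; the study's actual rates are not restated in the abstract.

        # Hypothetical error rates, for illustration only.
        control_error_rate    = 0.65   # histories taken by non-pharmacy personnel (assumed)
        technician_error_rate = 0.15   # histories taken by pharmacy technicians (assumed)

        arr = control_error_rate - technician_error_rate   # absolute risk reduction
        rrr = arr / control_error_rate                     # relative risk reduction
        print(f"ARR = {arr:.0%}, RRR = {rrr:.0%}")         # -> ARR = 50%, RRR = 77%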

  11. Superior accuracy of model-based radiostereometric analysis for measurement of polyethylene wear

    PubMed Central

    Stilling, M.; Kold, S.; de Raedt, S.; Andersen, N. T.; Rahbek, O.; Søballe, K.

    2012-01-01

    Objectives The accuracy and precision of two new methods of model-based radiostereometric analysis (RSA) were hypothesised to be superior to a plain radiograph method in the assessment of polyethylene (PE) wear. Methods A phantom device was constructed to simulate three-dimensional (3D) PE wear. Images were obtained consecutively for each simulated wear position for each modality. Three commercially available packages were evaluated: model-based RSA using laser-scanned cup models (MB-RSA), model-based RSA using computer-generated elementary geometrical shape models (EGS-RSA), and PolyWare. Precision (95% repeatability limits) and accuracy (root mean square errors) for two-dimensional (2D) and 3D wear measurements were assessed. Results The precision for 2D wear measures was 0.078 mm, 0.102 mm, and 0.076 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. For the 3D wear measures the precision was 0.185 mm, 0.189 mm, and 0.244 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. Repeatability was similar for all methods within the same dimension, when compared between 2D and 3D (all p > 0.28). For 2D measurements, accuracy was below 0.055 mm for the RSA methods and at least 0.335 mm for PolyWare. For 3D measurements, accuracy was 0.1 mm, 0.2 mm, and 0.3 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. PolyWare was less accurate than the RSA methods (p = 0.036). No difference was observed between the RSA methods (p = 0.10). Conclusions For all methods, precision and accuracy were better in 2D, with RSA methods being superior in accuracy. Although less accurate and precise, 3D RSA defines the clinically relevant wear pattern (multidirectional). PolyWare is a good and low-cost alternative to RSA, despite being less accurate and requiring a larger sample size. PMID:23610688
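
    The two performance measures used above can be computed directly: accuracy as a root-mean-square error against the phantom's known wear settings, and precision as 95% repeatability limits from repeated determinations. The sketch below uses invented values and a simplified definition of the repeatability limits (1.96 times the SD of the paired differences); the paper's exact computation may differ.

      import numpy as np
      # Accuracy: RMSE against the phantom's known (simulated) wear, in mm (hypothetical values)
      true_wear     = np.array([0.10, 0.20, 0.30, 0.40, 0.50])
      measured_wear = np.array([0.12, 0.18, 0.33, 0.41, 0.47])
      rmse = np.sqrt(np.mean((measured_wear - true_wear) ** 2))
      # Precision: approximate 95% repeatability limits from repeated (double) measurements, in mm
      repeat_1 = np.array([0.31, 0.42, 0.29, 0.35])
      repeat_2 = np.array([0.28, 0.44, 0.33, 0.31])
      limits = 1.96 * np.std(repeat_1 - repeat_2, ddof=1)
      print(f"RMSE = {rmse:.3f} mm, 95% repeatability limits = +/-{limits:.3f} mm")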

  12. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  13. Accuracy Analysis and Validation of the Mars Science Laboratory (MSL) Robotic Arm

    NASA Technical Reports Server (NTRS)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) Curiosity Rover is currently exploring the surface of Mars with a suite of tools and instruments mounted to the end of a five degree-of-freedom robotic arm. To verify and meet a set of end-to-end system level accuracy requirements, a detailed positioning uncertainty model of the arm was developed and exercised over the arm operational workspace. Error sources at each link in the arm kinematic chain were estimated and their effects propagated to the tool frames. A rigorous test and measurement program was developed and implemented to collect data to characterize and calibrate the kinematic and stiffness parameters of the arm. Numerous absolute and relative accuracy and repeatability requirements were validated with a combination of analysis and test data extrapolated to the Mars gravity and thermal environment. Initial results of arm accuracy and repeatability on Mars demonstrate the effectiveness of the modeling and test program as the rover continues to explore the foothills of Mount Sharp.
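
    To illustrate the general idea of propagating per-link error sources to a tool-frame position uncertainty, the toy sketch below runs a Monte Carlo propagation for a planar two-link arm. It is not the MSL arm model (which covers five joints, stiffness, and thermal effects); the link lengths and error magnitudes are invented.

      import numpy as np
      rng = np.random.default_rng(0)
      L1, L2 = 2.0, 1.8                            # link lengths, m (hypothetical)
      q1, q2 = np.deg2rad(30.0), np.deg2rad(45.0)  # commanded joint angles
      sigma_q = np.deg2rad(0.1)                    # joint-angle uncertainty (1-sigma)
      sigma_L = 0.001                              # link-length uncertainty, m (1-sigma)
      n = 100_000
      q1_s = q1 + rng.normal(0, sigma_q, n)
      q2_s = q2 + rng.normal(0, sigma_q, n)
      L1_s = L1 + rng.normal(0, sigma_L, n)
      L2_s = L2 + rng.normal(0, sigma_L, n)
      x = L1_s * np.cos(q1_s) + L2_s * np.cos(q1_s + q2_s)   # forward kinematics of the samples
      y = L1_s * np.sin(q1_s) + L2_s * np.sin(q1_s + q2_s)
      print(f"tool-frame 1-sigma position error: x = {x.std()*1000:.2f} mm, y = {y.std()*1000:.2f} mm")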

  14. Future dedicated Venus-SGG flight mission: Accuracy assessment and performance analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Houtse; Zhong, Min; Yun, Meijuan

    2016-01-01

    This study concentrates principally on the systematic requirements analysis for the future dedicated Venus-SGG (spacecraft gravity gradiometry) flight mission in China in respect of the matching measurement accuracies of the spacecraft-based scientific instruments and the orbital parameters of the spacecraft. Firstly, we constructed and verified the single and combined analytical error models of the cumulative Venusian geoid height influenced by the gravity gradient error of the spacecraft-borne atom-interferometer gravity gradiometer (AIGG) and the orbital position error and orbital velocity error tracked by the deep space network (DSN) on the Earth station. Secondly, weighing the advantages and disadvantages among the electrostatically suspended gravity gradiometer, the superconducting gravity gradiometer and the AIGG, the ultra-high-precision spacecraft-borne AIGG is well suited to making a significant contribution to globally mapping the Venusian gravitational field and modeling the geoid with unprecedented accuracy and spatial resolution. Finally, the future dedicated Venus-SGG spacecraft should adopt the optimal matching accuracy indices consisting of 3 × 10⁻¹³/s² in gravity gradient, 10 m in orbital position and 8 × 10⁻⁴ m/s in orbital velocity, and the preferred orbital parameters comprising an orbital altitude of 300 ± 50 km, an observation time of 60 months and a sampling interval of 1 s.

  15. Analysis of the accuracy and readability of herbal supplement information on Wikipedia.

    PubMed

    Phillips, Jennifer; Lam, Connie; Palmisano, Lisa

    2014-01-01

    To determine the completeness and readability of information found in Wikipedia for leading dietary supplements and assess the accuracy of this information with regard to safety (including use during pregnancy/lactation), contraindications, drug interactions, therapeutic uses, and dosing. Cross-sectional analysis of Wikipedia articles. The contents of Wikipedia articles for the 19 top-selling herbal supplements were retrieved on July 24, 2012, and evaluated for organization, content, accuracy (as compared with information in two leading dietary supplement references) and readability. Accuracy of Wikipedia articles. No consistency was noted in how much information was included in each Wikipedia article, how the information was organized, what major categories were used, and where safety and therapeutic information was located in the article. All articles in Wikipedia contained information on therapeutic uses and adverse effects but several lacked information on drug interactions, pregnancy, and contraindications. Wikipedia articles had 26%-75% of therapeutic uses and 76%-100% of adverse effects listed in the Natural Medicines Comprehensive Database and/or Natural Standard. Overall, articles were written at a 13.5-grade level, and all were at a ninth-grade level or above. Articles in Wikipedia in mid-2012 for the 19 top-selling herbal supplements were frequently incomplete, of variable quality, and sometimes inconsistent with reputable sources of information on these products. Safety information was particularly inconsistent among the articles. Patients and health professionals should not rely solely on Wikipedia for information on these herbal supplements when treatment decisions are being made.

  16. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  17. Accuracy Analysis and Validation of the Mars Science Laboratory (MSL) Robotic Arm

    NASA Technical Reports Server (NTRS)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) Curiosity Rover is currently exploring the surface of Mars with a suite of tools and instruments mounted to the end of a five degree-of-freedom robotic arm. To verify and meet a set of end-to-end system level accuracy requirements, a detailed positioning uncertainty model of the arm was developed and exercised over the arm operational workspace. Error sources at each link in the arm kinematic chain were estimated and their effects propagated to the tool frames. A rigorous test and measurement program was developed and implemented to collect data to characterize and calibrate the kinematic and stiffness parameters of the arm. Numerous absolute and relative accuracy and repeatability requirements were validated with a combination of analysis and test data extrapolated to the Mars gravity and thermal environment. Initial results of arm accuracy and repeatability on Mars demonstrate the effectiveness of the modeling and test program as the rover continues to explore the foothills of Mount Sharp.

  18. A critical analysis of the accuracy of several numerical techniques for combustion kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhadrishnan, Krishnan

    1993-01-01

    A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
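
    As a small illustration of this kind of accuracy study, the sketch below integrates the classic Robertson stiff kinetics problem (not one of the paper's two combustion problems) with SciPy's BDF integrator and measures a root-mean-square relative error of a loose-tolerance run against a tight-tolerance reference. The tolerances and error metric are illustrative assumptions, not the paper's exact procedure.

      import numpy as np
      from scipy.integrate import solve_ivp
      def robertson(t, y):
          # Classic stiff chemical kinetics test problem (three species)
          y1, y2, y3 = y
          return [-0.04*y1 + 1e4*y2*y3,
                   0.04*y1 - 1e4*y2*y3 - 3e7*y2**2,
                   3e7*y2**2]
      y0 = [1.0, 0.0, 0.0]
      t_eval = np.logspace(-5, 5, 50)
      ref  = solve_ivp(robertson, (0, 1e5), y0, method="BDF", rtol=1e-10, atol=1e-12, t_eval=t_eval)
      test = solve_ivp(robertson, (0, 1e5), y0, method="BDF", rtol=1e-4, atol=1e-8, t_eval=t_eval)
      scale = np.abs(ref.y) + 1e-6        # avoid dividing by near-zero concentrations
      rms_err = np.sqrt(np.mean(((test.y - ref.y) / scale) ** 2))
      print(f"RMS relative error at rtol=1e-4: {rms_err:.2e}")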

  19. Design and accuracy analysis of a metamorphic CNC flame cutting machine for ship manufacturing

    NASA Astrophysics Data System (ADS)

    Hu, Shenghai; Zhang, Manhui; Zhang, Baoping; Chen, Xi; Yu, Wei

    2016-09-01

    The current research on processing large-size fabrication holes on complex spatial curved surfaces mainly focuses on the design of CNC flame cutting machines for ship hulls in ship manufacturing. However, the existing machines cannot meet the continuous cutting requirements with variable pass conditions through their fixed configuration, and cannot realize high-precision processing as the accuracy theory is not studied adequately. This paper deals with the structure design and accuracy prediction technology of novel machine tools for solving the problem of continuous and high-precision cutting. The needed variable-trajectory and variable-pose kinematic characteristics of the non-contact cutting tool are figured out, and a metamorphic CNC flame cutting machine designed through the metamorphic principle is presented. To analyze the kinematic accuracy of the machine, models of joint clearances, manufacturing tolerances and errors in the input variables, and error models considering their combined effects are derived based on screw theory after establishing ideal kinematic models. Numerical simulations, a processing experiment and a trajectory tracking experiment are conducted on an eccentric hole with bevels on a cylindrical surface. The results for the cutting pass contour and the kinematic error interval, in which the position error ranges from -0.975 mm to +0.628 mm and the orientation error from -0.01 rad to +0.01 rad, indicate that the developed machine can complete the cutting process continuously and effectively, and that the established kinematic error models are effective although the interval lies within a 'large' range. The results also show the matching property between the metamorphic principle and variable working tasks, and the mapping correlation between the original design parameters and the kinematic errors of the machines. This research develops a metamorphic CNC flame cutting machine and establishes kinematic error models for accuracy analysis of machine tools.

  20. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to implement lunar surface sampling and to return the samples to the Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area seems to be extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a rotating camera platform. Optical images of the sampling area can be obtained by PCAM as two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images. The lunar terrain can then be reconstructed based on photogrammetry. The installation parameters of PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied for this work. Research contents such as the observation program and the specific solution methods for the installation parameters are then introduced. The parameter solution accuracy is analyzed using observations obtained in the PCAM scientific validation experiment, which is used to verify the PCAM detection process, ground data processing methods, product quality and so on. The analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images within 1 pixel. Thus, the measurement methods and parameter accuracy studied in this paper meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  1. Accuracy of surface tension measurement from drop shapes: the role of image analysis.

    PubMed

    Kalantarian, Ali; Saad, Sameh M I; Neumann, A Wilhelm

    2013-11-01

    Axisymmetric Drop Shape Analysis (ADSA) has been extensively used for surface tension measurement. In essence, ADSA works by matching a theoretical profile of the drop to the extracted experimental profile, taking surface tension as an adjustable parameter. Of the three main building blocks of ADSA, i.e. edge detection, the numerical integration of the Laplace equation for generating theoretical curves and the optimization procedure, only edge detection (that extracts the drop profile line from the drop image) needs extensive study. For the purpose of this article, the numerical integration of the Laplace equation for generating theoretical curves and the optimization procedure will only require a minor effort. It is the aim of this paper to investigate how far the surface tension accuracy of drop shape techniques can be pushed by fine tuning and optimizing edge detection strategies for a given drop image. Two different aspects of edge detection are pursued here: sub-pixel resolution and pixel resolution. The effect of two sub-pixel resolution strategies, i.e. spline and sigmoid, on the accuracy of surface tension measurement is investigated. It is found that the number of pixel points in the fitting procedure of the sub-pixel resolution techniques is crucial, and its value should be determined based on the contrast of the image, i.e. the gray level difference between the drop and the background. On the pixel resolution side, two suitable and reliable edge detectors, i.e. Canny and SUSAN, are explored, and the effect of user-specified parameters of the edge detector on the accuracy of surface tension measurement is scrutinized. Based on the contrast of the image, an optimum value of the user-specified parameter of the edge detector, SUSAN, is suggested. Overall, an accuracy of 0.01 mJ/m² is achievable for the surface tension determination by careful fine tuning of edge detection algorithms.
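
    As a pixel-resolution starting point of the kind discussed above, the sketch below extracts candidate drop-profile pixels with OpenCV's Canny detector. The file name, blur kernel and thresholds are placeholder assumptions; ADSA itself adds sub-pixel refinement and Laplace-curve fitting, which are not shown.

      import cv2
      import numpy as np
      img = cv2.imread("drop.png", cv2.IMREAD_GRAYSCALE)   # hypothetical drop image file
      blurred = cv2.GaussianBlur(img, (5, 5), 0)            # suppress sensor noise first
      # Threshold choice should reflect the image contrast (gray-level difference between drop and background)
      edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
      ys, xs = np.nonzero(edges)                             # pixel coordinates of candidate profile points
      print(f"{len(xs)} candidate profile pixels extracted")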

  2. Increasing the Accuracy in the Measurement of the Minor Isotopes of Uranium: Care in Selection of Reference Materials, Baselines and Detector Calibration

    NASA Astrophysics Data System (ADS)

    Poths, J.; Koepf, A.; Boulyga, S. F.

    2008-12-01

    The minor isotopes of uranium (U-233, U-234, U-236) are increasingly useful for tracing a variety of processes: movement of anthropogenic nuclides in the environment (ref 1), sources of uranium ores (ref 2), and nuclear material attribution (ref 3). We report on improved accuracy for U-234/238 and U-236/238 by supplementing total evaporation protocol TIMS measurement on Faraday detectors (ref 4) with multiplier measurement for the minor isotopes. Measurement of small signals on Faraday detectors alone is limited by the noise floors of the amplifiers and by the accurate measurement of the baseline offsets. The combined detector approach improves the reproducibility to better than ±1% (relative) for U-234/238 at natural abundance, and yields a detection limit for U-236/U-238 of <0.2 ppm. We have quantified the contribution of different factors to the uncertainties associated with these peak-jumping measurements on a single detector, with an aim of further improvement. The uncertainties in the certified values for U-234 and U-236 in the uranium standard NBS U005, if used for mass bias correction, dominate the uncertainty in their isotopic ratio measurements. Software limitations in baseline measurement drive the detection limit for the U-236/U-238 ratio. This is a topic for discussion with the instrument manufacturers. Finally, deviation from linearity of the response of the electron multiplier with count rate limits the accuracy and reproducibility of these minor isotope measurements. References: (1) P. Steier et al (2008) Nuc Inst Meth (B), 266, 2246-2250. (2) E. Keegan et al (2008) Appl Geochem 23, 765-777. (3) K. Mayer et al (1998) IAEA-CN-98/11, in Advances in Destructive and Non-destructive Analysis for Environmental Monitoring and Nuclear Forensics. (4) S. Richter and S. Goldberg (2003) Int J Mass Spectrom, 229, 181-197.

  3. The accuracy of approximate solutions in the analysis of fracture of composites

    NASA Technical Reports Server (NTRS)

    Goree, J. G.

    1985-01-01

    This paper concerns the accuracy of three related mathematical models (developed by Hedgepeth, Eringen and Sendeckyj and Jones) used in the stress analysis and in fracture studies of continuous-fiber composites. These models have particular application in the investigation of fiber and matrix stresses in unidirectional composites in the region near a crack tip. The interest in such models is motivated by the desire to be able to simplify the equations of elasticity to the point that they can be solved in a relatively easy manner.

  4. Accuracy and sensitivity analysis of the conical null-screen based corneal topographer

    NASA Astrophysics Data System (ADS)

    Cossio-Guerrero, Cesar; Campos-García, Manuel

    2016-09-01

    In every optical testing method, the time taken to process data, the precision of the results and the sensitivity are among the most relevant aspects to be taken into account when the viability of its implementation is under consideration. An accuracy and sensitivity analysis of a topographer based on a conical null-screen with a semi-radial distribution of targets is presented. In addition, we propose a custom evaluation algorithm to reduce the time needed to calculate the normal to the corneal surface. Finally, we perform some corneal topographical measurements.

  5. Shortening the retention interval of 24-hour dietary recalls increases fourth-grade children's accuracy for reporting energy and macronutrient intake at school meals.

    PubMed

    Baxter, Suzanne Domel; Guinn, Caroline H; Royer, Julie A; Hardin, James W; Smith, Albert F

    2010-08-01

    Accurate information about children's intake is crucial for national nutrition policy and for research and clinical activities. To analyze accuracy for reporting energy and nutrients, most validation studies utilize the "conventional approach," which was not designed to capture errors of reported foods and amounts. The "reporting-error-sensitive approach" captures errors of reported foods and amounts. To extend results to energy and macronutrients for a validation study concerning retention interval (elapsed time between to-be-reported meals and the interview) and accuracy for reporting school-meal intake, the conventional and reporting-error-sensitive approaches were compared. DESIGN AND PARTICIPANTS/SETTING: Fourth-grade children (n=374) were observed eating two school meals, and interviewed to obtain a 24-hour recall using one of six interview conditions from crossing two target periods (prior 24 hours and previous day) with three interview times (morning, afternoon, and evening). Data were collected in one district during three school years (2004-2005, 2005-2006, and 2006-2007). Report rates (reported/observed), correspondence rates (correctly reported/observed), and inflation ratios (intruded/observed) were calculated for energy and macronutrients. For each outcome measure, mixed-model analysis of variance was conducted with target period, interview time, their interaction, and sex in the model; results were adjusted for school year and interviewer. With the conventional approach, report rates for energy and macronutrients did not differ by target period, interview time, their interaction, or sex. With the reporting-error-sensitive approach, correspondence rates for energy and macronutrients differed by target period (four P values <0.0001) and the target period by interview-time interaction (four P values <0.0001); inflation ratios for energy and macronutrients differed by target period (four P values <0.0001), and inflation ratios for energy and carbohydrate

  6. Methodology issues concerning the accuracy of kinematic data collection and analysis using the ariel performance analysis system

    NASA Technical Reports Server (NTRS)

    Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)

    1992-01-01

    Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics as well as mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g. force plate data collection) as well as digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to evaluate the accuracy impact due to a single axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.

  7. On the increase of geometric accuracy with the help of stiffening elements for robot-based incremental sheet metal forming

    NASA Astrophysics Data System (ADS)

    Thyssen, Lars; Seim, Patrick; Störkle, Denis D.; Kuhlenkötter, Bernd

    2016-10-01

    This paper describes new developments in an incremental, robot-based sheet metal forming process ('Roboforming') for the production of sheet metal components for small lot sizes and prototypes. Incremental sheet forming (ISF) offers high geometrical form flexibility without the need for any part-dependent tools. To transfer ISF to industrial applications, it is necessary to address the still existing constraints, e.g. the low geometrical accuracy. Especially the subsequent deformation resulting from the interaction of differently shaped elements causes geometrical deviations, which limit the scope of formable parts. The impact of the resulting forming forces varies according to the shape of the individual elements. For this reason, the paper proposes and examines a new approach to stabilize the geometrical accuracy, without losing the universal approach of Roboforming, by inserting stiffening elements. These elements, with varying cross-sections and orientations in the initial area, must be examined for their stabilizing or subsequently distorting impact. In particular, the different impacts of the subsequent forming of stiffness features, in contrast to direct forming, are studied precisely.

  8. Assessment of GPS data for meteorological applications over Africa: Study of error sources and analysis of positioning accuracy

    NASA Astrophysics Data System (ADS)

    Walpersdorf, A.; Bouin, M.-N.; Bock, O.; Doerflinger, E.

    2007-08-01

    The aim of this study is to assess the availability and quality of data from the International GNSS Service (IGS) Global Positioning System (GPS) network in Africa, especially for retrieving zenith tropospheric delay (ZTD), from which precipitable water vapour (PWV) can be derived, in view of application to the African Monsoon Multidisciplinary Analysis (AMMA) project. Three major error sources for the GPS data analysis evaluating PWV in Africa are the accuracy of the satellite orbits, the correction for the radio delay induced by the ionosphere and the vertical site displacements due to ocean loading. The first part of this study examines these error sources and the validity of GPS data for meteorological applications in Africa in dedicated analyses spanning the year 2001. These analyses were performed using the IGS precise orbits. Weak degradation of baseline precision with increasing baseline lengths suggests that the average orbital error is not limiting the GPS analysis in Africa. The impact of the ionosphere has been evaluated during a maximum of solar activity in 2001. The loss of L2 data has actually been observed. It amounts to 2% on average for 2001, with maxima of 8% during magnetic storm events. A slight decrease in formal accuracy of ZTD seems to be related to the loss of L2 data at the end of the day. This indicates that scintillation effects are present in the GPS observations but however are not a major limitation. The impact of ocean loading is found to be significant on ZTD estimates (up to ±2 mm in equivalent PWV). The use of a proper ocean loading model eliminates this effect. The second aspect of this study concerns the IGS analysis quality for the African stations. The accuracy has been assessed through position dispersion between individual solutions and the most recent version of the IGS combined solution IGb00, and residuals from the transformation of the IGS combined solution into the International Terrestrial Reference Frame 2005. The

  9. Diagnostic accuracy of the International HIV Dementia Scale and HIV Dementia Scale: A meta-analysis.

    PubMed

    Hu, Xueying; Zhou, Yang; Long, Jianxiong; Feng, Qiming; Wang, Rensheng; Su, Li; Zhao, Tingting; Wei, Bo

    2012-10-01

    The aim of this study was to assess the diagnostic accuracy of the International HIV Dementia Scale (IHDS) or HIV Dementia Scale (HDS) for the diagnosis of HIV-associated neurocognitive disorders (HAND). A comprehensive and systematic search was carried out in the PubMed and EMBASE databases. Sensitivity, specificity, Q(*)-values, summary receiver operating characteristic curves and other measures of accuracy of IHDS or HDS in the diagnosis of HAND were summarized. Summary receiver operating characteristic (SROC) curve analysis for HAND data demonstrates a pooled sensitivity of 0.90 [95% confidence interval (CI), 0.88-0.91] and overall specificity of 0.96 (95% CI, 0.95-0.97) for IHDS; the Q(*)-value for IHDS was 0.9195 and the diagnostic odds ratio (DOR) was 162.28 (95% CI, 91.82-286.81). HDS had an overall sensitivity of 0.39 (95% CI, 0.34-0.43) and specificity of 0.90 (95% CI, 0.89-0.91); the Q(*)-value for HDS was 0.6321 and the DOR was 5.81 (95% CI, 3.64-9.82). There was significant heterogeneity among the studies that reported IHDS and HDS. This meta-analysis has shown that IHDS and HDS may offer high diagnostic performance accuracy for the detection of HAND in primary health care and resource-limited settings. IHDS and HDS may require reformed neuropsychological characterization of impairments in accordance with regional culture and language in future international studies.

  10. Accuracy of in-office nerve conduction studies for median neuropathy: a meta-analysis.

    PubMed

    Strickland, James W; Gozani, Shai N

    2011-01-01

    Carpal tunnel syndrome is the most common focal neuropathy. It is typically diagnosed clinically and confirmed by abnormal median nerve conduction across the wrist (median neuropathy [MN]). In-office nerve conduction testing devices facilitate performance of nerve conduction studies (NCS) and are used by hand surgeons in the evaluation of patients with upper extremity symptoms. The purpose of this meta-analysis was to determine the diagnostic accuracy of this testing method for MN in symptomatic patients. We searched the MEDLINE database for prospective cohort studies that evaluated the diagnostic accuracy of in-office NCS for MN in symptomatic patients with traditional electrodiagnostic laboratories as reference standards. We assessed included studies for quality and heterogeneity in diagnostic performance and determined pooled statistical outcome measures when appropriate. We identified 5 studies with a total of 448 symptomatic hands. The pooled sensitivity and specificity were 0.88 (95% confidence interval [CI], 0.83-0.91) and 0.93 (95% CI, 0.88-0.96), respectively. Specificities exhibited heterogeneity. The diagnostic odds ratios were homogeneous, with a pooled value of 62.0 (95% CI, 30.1-127). This meta-analysis showed that in-office NCS detects MN with clinically relevant accuracy. Performance was similar to interexaminer agreement for MN within a traditional electrodiagnostic laboratory. There was some variation in diagnostic operating characteristics. Therefore, physicians using this technology should interpret test results within a clinical context and with attention to the pretest probability of MN, rather than in absolute terms. Copyright © 2011 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
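
    The pooled estimates above translate directly into likelihood ratios and, via Bayes' rule on the odds scale, into post-test probabilities for a given pre-test probability. The sketch below uses the pooled sensitivity and specificity as point estimates; because the paper pools the diagnostic odds ratio separately, the derived figures will not exactly reproduce its reported DOR, and the pre-test probabilities are hypothetical.

      sens, spec = 0.88, 0.93                # pooled point estimates from the meta-analysis
      lr_pos = sens / (1 - spec)             # positive likelihood ratio, ~12.6
      lr_neg = (1 - sens) / spec             # negative likelihood ratio, ~0.13
      def post_test_probability(pretest_p, lr):
          # Convert pre-test probability to post-test probability via odds
          odds = pretest_p / (1 - pretest_p)
          post_odds = odds * lr
          return post_odds / (1 + post_odds)
      for pretest in (0.3, 0.6, 0.9):        # hypothetical pre-test probabilities of median neuropathy
          print(f"pretest {pretest:.0%}: positive test -> {post_test_probability(pretest, lr_pos):.0%}, "
                f"negative test -> {post_test_probability(pretest, lr_neg):.0%}")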

  11. Accuracy of peripheral thermometers for estimating temperature: a systematic review and meta-analysis.

    PubMed

    Niven, Daniel J; Gaudet, Jonathan E; Laupland, Kevin B; Mrklas, Kelly J; Roberts, Derek J; Stelfox, Henry Thomas

    2015-11-17

    Body temperature is commonly used to screen patients for infectious diseases, establish diagnoses, monitor therapy, and guide management decisions. To determine the accuracy of peripheral thermometers for estimating core body temperature in adults and children. MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, and CINAHL Plus from inception to July 2015. Prospective studies comparing the accuracy of peripheral (tympanic membrane, temporal artery, axillary, or oral) thermometers with central (pulmonary artery catheter, urinary bladder, esophageal, or rectal) thermometers. 2 reviewers extracted data on study characteristics, methods, and outcomes and assessed the quality of individual studies. 75 studies (8682 patients) were included. Most studies were at high or unclear risk of patient selection bias (74%) or index test bias (67%). Compared with central thermometers, peripheral thermometers had pooled 95% limits of agreement (random-effects meta-analysis) outside the predefined clinically acceptable range (± 0.5 °C), especially among patients with fever (-1.44 °C to 1.46 °C for adults; -1.49 °C to 0.43 °C for children) and hypothermia (-2.07 °C to 1.90 °C for adults; no data for children). For detection of fever (bivariate random-effects meta-analysis), sensitivity was low (64% [95% CI, 55% to 72%]; I2 = 95.7%; P < 0.001) but specificity was high (96% [CI, 93% to 97%]; I2 = 96.3%; P < 0.001). Only 1 study reported sensitivity and specificity for the detection of hypothermia. High-quality data for some temperature measurement techniques are limited. Pooled data are associated with interstudy heterogeneity that is not fully explained by stratified and metaregression analyses. Peripheral thermometers do not have clinically acceptable accuracy and should not be used when accurate measurement of body temperature will influence clinical decisions. None.
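
    The '95% limits of agreement' criterion above is the Bland-Altman calculation applied to paired peripheral and central readings. The sketch below shows it for a single hypothetical study (the review itself pools such limits across studies with a random-effects model); the temperatures are invented.

      import numpy as np
      peripheral = np.array([37.1, 38.4, 36.8, 39.0, 37.6, 38.1])   # °C, made-up paired readings
      central    = np.array([37.5, 38.9, 36.9, 39.6, 37.8, 38.7])
      diff = peripheral - central
      bias = diff.mean()                      # mean difference between methods
      sd = diff.std(ddof=1)
      lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
      print(f"bias = {bias:.2f} °C, 95% limits of agreement: {lower:.2f} to {upper:.2f} °C")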

  12. Reliability and accuracy analysis of a new semiautomatic radiographic measurement software in adult scoliosis.

    PubMed

    Aubin, Carl-Eric; Bellefleur, Christian; Joncas, Julie; de Lanauze, Dominic; Kadoury, Samuel; Blanke, Kathy; Parent, Stefan; Labelle, Hubert

    2011-05-20

    Radiographic software measurement analysis in adult scoliosis. To assess the accuracy as well as the intra- and interobserver reliability of measuring different indices on preoperative adult scoliosis radiographs using a novel measurement software that includes a calibration procedure and semiautomatic features to facilitate the measurement process. Scoliosis requires a careful radiographic evaluation to assess the deformity. Manual and computer radiographic process measures have been studied extensively to determine the reliability and reproducibility in adolescent idiopathic scoliosis. Most studies rely on comparing given measurements, which are repeated by the same user or by an expert user. A given measure with a small intra- or interobserver error might be deemed as good repeatability, but all measurements might not be truly accurate because the ground-truth value is often unknown. Thorough accuracy assessment of radiographic measures is necessary to assess scoliotic deformities, compare these measures at different stages or to permit valid multicenter studies. Thirty-four sets of adult scoliosis digital radiographs were measured two times by three independent observers using a novel radiographic measurement software that includes semiautomatic features to facilitate the measurement process. Twenty different measures taken from the Spinal Deformity Study Group radiographic measurement manual were performed on the coronal and sagittal images. Intra- and intermeasurer reliability for each measure was assessed. The accuracy of the measurement software was also assessed using a physical spine model in six different scoliotic configurations as a true reference. The majority of the measures demonstrated good to excellent intra- and intermeasurer reliability, except for sacral obliquity. The standard variation of all the measures was very small: ≤ 4.2° for Cobb angles, ≤ 4.2° for the kyphosis, ≤ 5.7° for the lordosis, ≤ 3.9° for the pelvic angles, and

  13. Increasing the Accuracy of Reading Decoding Skills Exhibited by Hearing-Impaired Students with the Use of a Sound/Letter Unit Instructional Approach.

    ERIC Educational Resources Information Center

    Becker, Katharine E.

    This practicum was designed to increase the accuracy of reading decoding skills exhibited by five elementary and intermediate level hearing-impaired students in a mainstream setting. Subjects were fitted with appropriate amplification to optimize their residual hearing but were performing below their grade-level placement in the areas of word…

  14. Diagnostic accuracy of intravascular ultrasound-derived minimal lumen area compared with fractional flow reserve--meta-analysis: pooled accuracy of IVUS luminal area versus FFR.

    PubMed

    Nascimento, Bruno R; de Sousa, Marcos R; Koo, Bon-Kwon; Samady, Habib; Bezerra, Hiram G; Ribeiro, Antônio L P; Costa, Marco A

    2014-09-01

    Although intravascular ultrasound minimal luminal area (IVUS-MLA) is one of many anatomic determinants of lesion severity, it has been proposed as an alternative to fractional flow reserve (FFR) to assess severity of coronary artery disease. The objective was to pool the diagnostic performance of IVUS-MLA and determine its overall accuracy to predict the functional significance of coronary disease using FFR (0.75 or 0.80) as the gold standard. Studies comparing IVUS and FFR to establish the best MLA cutoff value that correlates with significant coronary stenosis were reviewed from a Medline search using the terms "fractional flow reserve" and "ultrasound." The DerSimonian-Laird method was applied to obtain pooled accuracy. Eleven clinical trials, including two left main (LM) trials (total N = 1,759 patients, 1,953 lesions) were included. The weighted overall mean MLA cutoff was 2.61 mm² in non-LM trials and 5.35 mm² in LM trials. For non-LM lesions, the pooled sensitivity of MLA was 0.79 (95% CI = 0.76-0.83) and specificity was 0.65 (95% CI = 0.62-0.67). The positive likelihood ratio (LR+) was 2.26 (95% CI = 1.98-2.57) and LR- was 0.32 (95% CI = 0.24-0.44). The area under the summary receiver operating characteristic curve for all trials was 0.848. Pooled LM trials had better accuracy: sensitivity = 0.90, specificity = 0.90, LR+ = 8.79, and LR- = 0.120. Given its limited pooled accuracy, IVUS-MLA's impact on clinical decisions in this scenario is low and may lead to misclassification in up to 20% of the lesions. The pooled analysis points toward lower MLA cutoffs than the ones used in current practice. © 2013 Wiley Periodicals, Inc.

  15. A Comparative Accuracy Analysis of Classification Methods in Determination of Cultivated Lands with Spot 5 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    kaya, S.; Alganci, U.; Sertel, E.; Ustundag, B.

    2013-12-01

    Cultivated land determination and area estimation are important tasks for agricultural management. The derived information is mostly used in agricultural policy and precision agriculture, specifically in yield estimation, irrigation and fertilization management, and verification of farmers' declarations. The use of satellite images for crop type identification and area estimation has been common for two decades, owing to their capability for monitoring large areas, rapid data acquisition, and spectral response to crop properties. With the launch of high and very high spatial resolution optical satellites in the last decade, such analyses have gained importance as they provide information at a large scale. With the increasing spatial resolution of satellite images, the image classification methods used to derive information from them have become more important, as the spectral heterogeneity within land objects increases. In this research, pixel-based classification with the maximum likelihood algorithm and object-based classification with the nearest neighbor algorithm were applied to 2.5 m resolution SPOT 5 satellite images from 2012 in order to investigate the accuracy of these methods in determining cotton- and corn-planted lands and estimating their area. The study area was selected in Sanliurfa Province in southeastern Turkey, which is a major contributor to Turkey's agricultural production. Classification results were compared in terms of crop type identification using
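
    A comparison of this kind typically reduces to a confusion matrix between classified labels and ground-truth samples, from which overall accuracy and Cohen's kappa follow. The sketch below uses synthetic labels for the two crops plus an 'other' class; it is not the SPOT 5 result.

      import numpy as np
      from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score
      truth = np.array(["cotton"] * 40 + ["corn"] * 35 + ["other"] * 25)   # synthetic ground-truth samples
      predicted = truth.copy()
      predicted[[3, 7]] = "corn"            # cotton misclassified as corn
      predicted[[45, 46, 47]] = "cotton"    # corn misclassified as cotton
      predicted[[80, 81]] = "corn"          # 'other' misclassified as corn
      print(confusion_matrix(truth, predicted, labels=["cotton", "corn", "other"]))
      print("overall accuracy:", round(accuracy_score(truth, predicted), 3))
      print("kappa:", round(cohen_kappa_score(truth, predicted), 3))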

  16. High Accuracy Liquid Propellant Slosh Predictions Using an Integrated CFD and Controls Analysis Interface

    NASA Technical Reports Server (NTRS)

    Marsell, Brandon; Griffin, David; Schallhorn, Dr. Paul; Roth, Jacob

    2012-01-01

    Coupling computational fluid dynamics (CFD) with a controls analysis tool elegantly allows for high accuracy predictions of the interaction between sloshing liquid propellants and the control system of a launch vehicle. Instead of relying on mechanical analogs which are not valid during all stages of flight, this method allows for a direct link between the vehicle dynamic environments calculated by the solver in the controls analysis tool and the fluid flow equations solved by the CFD code. This paper describes such a coupling methodology, presents the results of a series of test cases, and compares said results against equivalent results from extensively validated tools. The coupling methodology, described herein, has proven to be highly accurate in a variety of different cases.

  17. Integrated CFD and Controls Analysis Interface for High Accuracy Liquid Propellant Slosh Predictions

    NASA Technical Reports Server (NTRS)

    Marsell, Brandon; Griffin, David; Schallhorn, Paul; Roth, Jacob

    2012-01-01

    Coupling computational fluid dynamics (CFD) with a controls analysis tool elegantly allows for high accuracy predictions of the interaction between sloshing liquid propellants and the control system of a launch vehicle. Instead of relying on mechanical analogs which are not valid during all stages of flight, this method allows for a direct link between the vehicle dynamic environments calculated by the solver in the controls analysis tool and the fluid flow equations solved by the CFD code. This paper describes such a coupling methodology, presents the results of a series of test cases, and compares said results against equivalent results from extensively validated tools. The coupling methodology, described herein, has proven to be highly accurate in a variety of different cases.

  18. Does powdering of the dentition increase the accuracy of fusing 3D stereophotographs and digital dental casts.

    PubMed

    Rangel, Frits A; Chiu, Yu-Ting; Maal, Thomas J J; Bronkhorst, Ewald M; Bergé, Stefaan J; Kuijpers-Jagtman, Anne Marie

    2016-08-01

    The shiny vestibular surfaces of teeth make it difficult to match digital dental casts to 3D stereophotogrammetric images of patient teeth. This study tested whether reducing this shininess by coating the teeth with titanium-oxide powder might improve the accuracy of the matching procedure. Twenty patients participated in the study. For each patient, 3D stereophotogrammetric images were taken without and with a powder coating. Separately, digital dental casts were created. Next, the digital dental casts were fused with the 3D stereophotogrammetric images of either non-powdered or powdered dentition. Distance maps were created to evaluate the inter-surface distance between the digital dental cast and the 3D images. The matching accuracy was compared for dentition with and without powdering. Of all recorded distances between corresponding points, 95% were smaller than 0.84 mm for the powdered dentition and smaller than 0.90 mm for the non-powdered dentition. Although powdered dentition showed significantly better matching than non-powdered dentition, the difference was less than 0.1 mm. Intra-observer statistics showed that five out of 24 repetitions gave significantly different results, but only for dentition that was not powdered. The patients did not have any major malocclusions. Severe malocclusions might cause greater difficulty in matching the dentition without powder. Only one type of powder was used, but it effectively reduced shininess. Powdering the dentition had a small, but significant, positive effect on matching. However, this effect was of minor clinical importance. Therefore, we do not recommend powdering the dentition for 3D stereophotogrammetric images used for matching procedures. © The Author 2016. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  19. Increasing Transparency Through a Multiverse Analysis.

    PubMed

    Steegen, Sara; Tuerlinckx, Francis; Gelman, Andrew; Vanpaemel, Wolf

    2016-09-01

    Empirical research inevitably includes constructing a data set by processing raw data into a form ready for statistical analysis. Data processing often involves choices among several reasonable options for excluding, transforming, and coding data. We suggest that instead of performing only one analysis, researchers could perform a multiverse analysis, which involves performing all analyses across the whole set of alternatively processed data sets corresponding to a large set of reasonable scenarios. Using an example focusing on the effect of fertility on religiosity and political attitudes, we show that analyzing a single data set can be misleading and propose a multiverse analysis as an alternative practice. A multiverse analysis offers an idea of how much the conclusions change because of arbitrary choices in data construction and gives pointers as to which choices are most consequential in the fragility of the result. © The Author(s) 2016.
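
    A minimal sketch of the idea, assuming a simple correlational analysis: enumerate every combination of reasonable processing choices (here, two exclusion rules and two transforms, all invented, on synthetic data) and rerun the same test on each resulting data set, then inspect how the estimate and p-value vary across the multiverse.

      import itertools
      import numpy as np
      import pandas as pd
      from scipy import stats
      rng = np.random.default_rng(1)
      df = pd.DataFrame({
          "fertility": rng.normal(size=200),
          "religiosity": rng.normal(size=200),
          "age": rng.integers(18, 60, size=200),
      })
      exclusion_rules = {"none": lambda d: d, "drop_oldest": lambda d: d[d["age"] < 50]}
      transforms = {"raw": lambda s: s, "rank": lambda s: s.rank()}
      results = []
      for excl, trans in itertools.product(exclusion_rules, transforms):
          d = exclusion_rules[excl](df)
          r, p = stats.pearsonr(transforms[trans](d["fertility"]), transforms[trans](d["religiosity"]))
          results.append({"exclusion": excl, "transform": trans, "r": round(r, 3), "p": round(p, 3)})
      print(pd.DataFrame(results))     # one row per universe of processing choices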

  20. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

    Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Due to the low peak intensity, the accuracy and precision of such analyses rely critically on background measurements, and on the accuracy of any pertinent peak interference corrections. A linear regression between two points selected at appropriate off-peak positions is a classical approach for background characterization in microprobe analysis. However, this approach disallows an accurate assessment of background curvature (usually exponential). Moreover, if present, background interferences can dramatically affect the results if underestimated or ignored. The acquisition of a quantitative WDS scan over the spectral region of interest is still a valuable option to determine the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity as the analyst has to select areas in the scan that appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that allows acquiring up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. The overall efficiency is improved because no systematic WDS scan needs to be acquired in order to check for the presence of possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite

  1. A unification of models for meta-analysis of diagnostic accuracy studies without a gold standard.

    PubMed

    Liu, Yulun; Chen, Yong; Chu, Haitao

    2015-06-01

    Several statistical methods for meta-analysis of diagnostic accuracy studies have been discussed in the presence of a gold standard. However, in practice, the selected reference test may be imperfect due to measurement error, non-existence, invasive nature, or expensive cost of a gold standard. It has been suggested that treating an imperfect reference test as a gold standard can lead to substantial bias in the estimation of diagnostic test accuracy. Recently, two models have been proposed to account for imperfect reference test, namely, a multivariate generalized linear mixed model (MGLMM) and a hierarchical summary receiver operating characteristic (HSROC) model. Both models are very flexible in accounting for heterogeneity in accuracies of tests across studies as well as the dependence between tests. In this article, we show that these two models, although with different formulations, are closely related and are equivalent in the absence of study-level covariates. Furthermore, we provide the exact relations between the parameters of these two models and assumptions under which two models can be reduced to equivalent submodels. On the other hand, we show that some submodels of the MGLMM do not have corresponding equivalent submodels of the HSROC model, and vice versa. With three real examples, we illustrate the cases when fitting the MGLMM and HSROC models leads to equivalent submodels and hence identical inference, and the cases when the inferences from two models are slightly different. Our results generalize the important relations between the bivariate generalized linear mixed model and HSROC model when the reference test is a gold standard. © 2014, The International Biometric Society.

  2. Accuracy of clinical pallor in the diagnosis of anaemia in children: a meta-analysis

    PubMed Central

    Chalco, Juan P; Huicho, Luis; Alamo, Carlos; Carreazo, Nilton Y; Bada, Carlos A

    2005-01-01

    Background Anaemia is highly prevalent in children of developing countries. It is associated with impaired physical growth and mental development. Palmar pallor is recommended at primary level for diagnosing it, on the basis of a few studies. The objective of the study was to systematically assess the accuracy of clinical signs in the diagnosis of anaemia in children. Methods A systematic review on the accuracy of clinical signs of anaemia in children. We performed an Internet search in various databases and additional reference tracking. Studies had to be on performance of clinical signs in the diagnosis of anaemia, using haemoglobin as the gold standard. We calculated pooled diagnostic likelihood ratios (LR's) and odds ratios (DOR's) for each clinical sign at different haemoglobin thresholds. Results Eleven articles met the inclusion criteria. Most studies were performed in Africa, in children under five. Chi-square test for proportions and Cochran Q for DOR's and for LR's showed heterogeneity. Type of observer and haemoglobin technique influenced the results. Pooling was done using the random effects model. Pooled DOR at haemoglobin <11 g/dL was 4.3 (95% CI 2.6–7.2) for palmar pallor, 3.7 (2.3–5.9) for conjunctival pallor, and 3.4 (1.8–6.3) for nailbed pallor. DOR's and LR's were slightly better for nailbed pallor at all other haemoglobin thresholds. The accuracy did not vary substantially after excluding outliers. Conclusion This meta-analysis did not document a highly accurate clinical sign of anaemia. In view of poor performance of clinical signs, universal iron supplementation may be an adequate control strategy in high prevalence areas. Further well-designed studies are needed in settings other than Africa. They should assess inter-observer variation, performance of combined clinical signs, phenotypic differences, and different degrees of anaemia. PMID:16336667

  3. Accuracy of clinical pallor in the diagnosis of anaemia in children: a meta-analysis.

    PubMed

    Chalco, Juan P; Huicho, Luis; Alamo, Carlos; Carreazo, Nilton Y; Bada, Carlos A

    2005-12-08

    Anaemia is highly prevalent in children of developing countries. It is associated with impaired physical growth and mental development. Palmar pallor is recommended at primary level for diagnosing it, on the basis of a few studies. The objective of the study was to systematically assess the accuracy of clinical signs in the diagnosis of anaemia in children. A systematic review on the accuracy of clinical signs of anaemia in children. We performed an Internet search in various databases and additional reference tracking. Studies had to be on performance of clinical signs in the diagnosis of anaemia, using haemoglobin as the gold standard. We calculated pooled diagnostic likelihood ratios (LR's) and odds ratios (DOR's) for each clinical sign at different haemoglobin thresholds. Eleven articles met the inclusion criteria. Most studies were performed in Africa, in children under five. Chi-square test for proportions and Cochran Q for DOR's and for LR's showed heterogeneity. Type of observer and haemoglobin technique influenced the results. Pooling was done using the random effects model. Pooled DOR at haemoglobin <11 g/dL was 4.3 (95% CI 2.6-7.2) for palmar pallor, 3.7 (2.3-5.9) for conjunctival pallor, and 3.4 (1.8-6.3) for nailbed pallor. DOR's and LR's were slightly better for nailbed pallor at all other haemoglobin thresholds. The accuracy did not vary substantially after excluding outliers. This meta-analysis did not document a highly accurate clinical sign of anaemia. In view of poor performance of clinical signs, universal iron supplementation may be an adequate control strategy in high prevalence areas. Further well-designed studies are needed in settings other than Africa. They should assess inter-observer variation, performance of combined clinical signs, phenotypic differences, and different degrees of anaemia.
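
    The random-effects pooling of diagnostic odds ratios used above can be sketched with the DerSimonian-Laird estimator on the log-DOR scale. The 2x2 counts below (true/false positives and negatives per study) are invented, not the review's data.

      import numpy as np
      studies = np.array([          # columns: TP, FP, FN, TN per study (hypothetical counts)
          [45, 20, 15, 120],
          [30, 10, 25, 90],
          [60, 35, 20, 150],
          [25, 8, 18, 70],
      ], dtype=float) + 0.5         # continuity correction
      tp, fp, fn, tn = studies.T
      log_dor = np.log((tp * tn) / (fp * fn))
      var = 1/tp + 1/fp + 1/fn + 1/tn               # variance of each log DOR
      w = 1 / var                                    # fixed-effect (inverse-variance) weights
      pooled_fe = np.sum(w * log_dor) / np.sum(w)
      Q = np.sum(w * (log_dor - pooled_fe) ** 2)     # heterogeneity statistic
      k = len(log_dor)
      tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
      w_re = 1 / (var + tau2)                        # random-effects weights
      pooled = np.sum(w_re * log_dor) / np.sum(w_re)
      se = np.sqrt(1 / np.sum(w_re))
      lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
      print(f"pooled DOR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f}), tau^2 = {tau2:.3f}")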

  4. Screening for bipolar spectrum disorders: A comprehensive meta-analysis of accuracy studies.

    PubMed

    Carvalho, André F; Takwoingi, Yemisi; Sales, Paulo Marcelo G; Soczynska, Joanna K; Köhler, Cristiano A; Freitas, Thiago H; Quevedo, João; Hyphantis, Thomas N; McIntyre, Roger S; Vieta, Eduard

    2015-02-01

    Bipolar spectrum disorders are frequently under-recognized and/or misdiagnosed in various settings. Several influential publications recommend the routine screening of bipolar disorder. A systematic review and meta-analysis of accuracy studies for the bipolar spectrum diagnostic scale (BSDS), the hypomania checklist (HCL-32) and the mood disorder questionnaire (MDQ) were performed. The Pubmed, EMBASE, Cochrane, PsycINFO and SCOPUS databases were searched. Studies were included if the accuracy properties of the screening measures were determined against a DSM or ICD-10 structured diagnostic interview. The QUADAS-2 tool was used to rate bias. Fifty three original studies met inclusion criteria (N=21,542). At recommended cutoffs, summary sensitivities were 81%, 66% and 69%, while specificities were 67%, 79% and 86% for the HCL-32, MDQ, and BSDS in psychiatric services, respectively. The HCL-32 was more accurate than the MDQ for the detection of type II bipolar disorder in mental health care centers (P=0.018). At a cutoff of 7, the MDQ had a summary sensitivity of 43% and a summary specificity of 95% for detection of bipolar disorder in primary care or general population settings. Most studies were performed in mental health care settings. Several included studies had a high risk of bias. Although accuracy properties of the three screening instruments did not consistently differ in mental health care services, the HCL-32 was more accurate than the MDQ for the detection of type II BD. More studies in other settings (for example, in primary care) are necessary. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. The diagnostic accuracy of multi-frequency bioelectrical impedance analysis in diagnosing dehydration after stroke.

    PubMed

    Kafri, Mohannad W; Myint, Phyo Kway; Doherty, Danielle; Wilson, Alexander Hugh; Potter, John F; Hooper, Lee

    2013-07-10

    Non-invasive methods for detecting water-loss dehydration following acute stroke would be clinically useful. We evaluated the diagnostic accuracy of multi-frequency bioelectrical impedance analysis (MF-BIA) against reference standards serum osmolality and osmolarity. Patients admitted to an acute stroke unit were recruited. Blood samples for electrolytes and osmolality were taken within 20 minutes of MF-BIA. Total body water (TBW%), intracellular (ICW%) and extracellular water (ECW%), as percentages of total body weight, were calculated by MF-BIA equipment and from impedance measures using published equations for older people. These were compared to hydration status (based on serum osmolality and calculated osmolarity). The most promising Receiver Operating Characteristics curves were plotted. 27 stroke patients were recruited (mean age 71.3, SD10.7). Only a TBW% cut-off at 46% was consistent with current dehydration (serum osmolality >300 mOsm/kg) and TBW% at 47% impending dehydration (calculated osmolarity ≥295-300 mOsm/L) with sensitivity and specificity both >60%. Even here diagnostic accuracy of MF-BIA was poor, a third of those with dehydration were wrongly classified as hydrated and a third classified as dehydrated were well hydrated. Secondary analyses assessing diagnostic accuracy of TBW% for men and women separately, and using TBW as a percentage of lean body mass showed some promise, but did not provide diagnostically accurate measures across the population. MF-BIA appears ineffective at diagnosing water-loss dehydration after stroke and cannot be recommended as a test for dehydration, but separating assessment by sex, and using TBW as a percentage of lean body weight may warrant further investigation.
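
    The cut-off search described above amounts to sweeping a threshold on TBW% against the binary serum-osmolality reference and reading sensitivity and specificity off the resulting ROC curve. A hedged sketch of that procedure using scikit-learn's roc_curve is given below; the arrays tbw_pct and dehydrated are invented placeholders, not data from the study.

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        # Placeholder data: TBW% from MF-BIA and a binary dehydration label
        # (serum osmolality > 300 mOsm/kg); not the values from the study.
        tbw_pct = np.array([44.0, 45.5, 46.2, 47.1, 48.0, 49.5, 50.3, 52.0])
        dehydrated = np.array([1, 1, 1, 0, 1, 0, 0, 0])

        # Lower TBW% suggests dehydration, so score with the negated value.
        fpr, tpr, thresholds = roc_curve(dehydrated, -tbw_pct)
        auc = roc_auc_score(dehydrated, -tbw_pct)

        for thr, sens, spec in zip(-thresholds[1:], tpr[1:], 1 - fpr[1:]):
            print(f"TBW% <= {thr:.1f}: sensitivity={sens:.2f}, specificity={spec:.2f}")
        print(f"AUC = {auc:.2f}")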

  6. Analysis of Bradley Fighting Vehicle Gunnery with Emphasis on Factors Affecting First-Round Accuracy of the 25-mm Gun

    DTIC Science & Technology

    1987-12-01

    ARI Research Note 87-67. Analysis of Bradley Fighting Vehicle Gunnery with Emphasis on Factors Affecting First-Round Accuracy of the 25-mm Gun. Final report, January - December 1985. Keywords: preliminary gunnery, Bradley Fighting Vehicle, zeroing procedures. Analysis of the problems and potential improvements in gunnery effectiveness.

  7. Sensitivity Analysis for Characterizing the Accuracy and Precision of JEM/SMILES Mesospheric O3

    NASA Astrophysics Data System (ADS)

    Esmaeili Mahani, M.; Baron, P.; Kasai, Y.; Murata, I.; Kasaba, Y.

    2011-12-01

    The main purpose of this study is to evaluate the Superconducting sub-Millimeter Limb Emission Sounder (SMILES) measurements of mesospheric ozone, O3. As the first step, the error due to the impact of Mesospheric Temperature Inversions (MTIs) on ozone retrieval has been determined. The impacts of other parameters, such as pressure variability and solar events, on mesospheric O3 will also be investigated. Ozone is known to be important because the stratospheric O3 layer protects life on Earth by absorbing harmful UV radiation. However, O3 chemistry can be studied in relative isolation in the mesosphere, without the complications of heterogeneous conditions and dynamical variations, owing to the short lifetime of O3 in this region. Mesospheric ozone is produced by the photo-dissociation of O2 and the subsequent reaction of O with O2. Diurnal and semi-diurnal variations of mesospheric ozone are associated with variations in solar activity. The amplitude of the diurnal variation increases from a few percent at an altitude of 50 km to about 80 percent at 70 km. Despite the apparent simplicity of this situation, significant disagreements exist between the predictions from existing models and observations, which need to be resolved. SMILES is a highly sensitive radiometer with a precision of a few to several tens of percent from the upper troposphere to the mesosphere. SMILES was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and is located on the Japanese Experiment Module (JEM) of the International Space Station (ISS). SMILES successfully measured the vertical distributions and the diurnal variations of various atmospheric species in the latitude range of 38S to 65N from October 2009 to April 2010. A sensitivity analysis is being conducted to investigate the expected precision and accuracy of the mesospheric O3 profiles (from 50 to 90 km height) due to the impact of Mesospheric Temperature

  8. Accuracy analysis of a mobile tracking system for angular position determination of flying targets

    NASA Astrophysics Data System (ADS)

    Walther, Andreas; Buske, Ivo; Riede, Wolfgang

    2016-10-01

    Lasers are attracting increasing interest in remote sensing applications. In order to deliver as much of the available laser power as possible onto a flying object, the subsystems of a beam control system have to operate precisely together. One important subsystem is responsible for determining the target's angular position. Here, we focus on an optical system for precisely measuring the angular position of flying objects. We designed this subunit of a beam control system exclusively from readily available commercial-off-the-shelf components. Two industrial cameras were used for angle measurement and for guiding the system to the position of the flying object. Both cameras are mounted on a modified astronomical mount with high-precision angle encoders. To achieve high accuracy, we temporally synchronize the acquisition of the angle from the pan-tilt unit with the exposure of the camera. To this end, an FPGA-based readout device for the rotary encoders was designed and implemented. Additionally, we determined and evaluated the influence of lens distortion on the measurement. We investigated various scenarios to determine the accuracy and the limitations of our system for angular position determination of flying targets. Performance tests were carried out indoors and outdoors at our test sites. A target can be mounted on a fast-moving linear stage. The position of this linear stage is continuously read out by a high-resolution encoder, so we know the target's position with a dynamic accuracy in the range of a few μm. With this setup we evaluated the spatial resolution of our tracking system. We showed that the presented system can determine the angular position of fast flying objects with an uncertainty of only 2 μrad RMS. With this mobile tracking system for angular position determination of flying targets, we provide an accurate, cost-efficient basis for further developments.

  9. Diagnostic Accuracy of Memory Measures in Alzheimer's Dementia and Mild Cognitive Impairment: a Systematic Review and Meta-Analysis.

    PubMed

    Weissberger, Gali H; Strong, Jessica V; Stefanidis, Kayla B; Summers, Mathew J; Bondi, Mark W; Stricker, Nikki H

    2017-09-22

    With an increasing focus on biomarkers in dementia research, illustrating the role of neuropsychological assessment in detecting mild cognitive impairment (MCI) and Alzheimer's dementia (AD) is important. This systematic review and meta-analysis, conducted in accordance with PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) standards, summarizes the sensitivity and specificity of memory measures in individuals with MCI and AD. Both meta-analytic and qualitative examination of AD versus healthy control (HC) studies (n = 47) revealed generally high sensitivity and specificity (≥ 80% for AD comparisons) for measures of immediate (sensitivity = 87%, specificity = 88%) and delayed memory (sensitivity = 89%, specificity = 89%), especially those involving word-list recall. Examination of MCI versus HC studies (n = 38) revealed generally lower diagnostic accuracy for both immediate (sensitivity = 72%, specificity = 81%) and delayed memory (sensitivity = 75%, specificity = 81%). Measures that differentiated AD from other conditions (n = 10 studies) yielded mixed results, with generally high sensitivity in the context of low or variable specificity. Results confirm that memory measures have high diagnostic accuracy for identification of AD, are promising but require further refinement for identification of MCI, and provide support for ongoing investigation of neuropsychological assessment as a cognitive biomarker of preclinical AD. Emphasizing diagnostic test accuracy statistics over null hypothesis testing in future studies will promote the ongoing use of neuropsychological tests as Alzheimer's disease research and clinical criteria increasingly rely upon cerebrospinal fluid (CSF) and neuroimaging biomarkers.

  10. A content analysis of the quantity and accuracy of dietary supplement information found in magazines with high adolescent readership.

    PubMed

    Shaw, Patricia; Zhang, Vivien; Metallinos-Katsaras, Elizabeth

    2009-02-01

    The objective of this study was to examine the quantity and accuracy of dietary supplement (DS) information through magazines with high adolescent readership. Eight (8) magazines (3 teen and 5 adult with high teen readership) were selected. A content analysis for DS was conducted on advertisements and editorials (i.e., articles, advice columns, and bulletins). Noted claims/cautions regarding DS were evaluated for accuracy using Medlineplus.gov and Naturaldatabase.com. Claims for dietary supplements with three or more types of ingredients and those in advertisements were not evaluated. Advertisements were evaluated with respect to size, referenced research, testimonials, and Dietary Supplement Health and Education Act of 1994 (DSHEA) warning visibility. Eighty-eight (88) issues from eight magazines yielded 238 DS references. Fifty (50) issues from five magazines contained no DS reference. Among teen magazines, seven DS references were found: five in the editorials and two in advertisements. In adult magazines, 231 DS references were found: 139 in editorials and 92 in advertisements. Of the 88 claims evaluated, 15% were accurate, 23% were inconclusive, 3% were inaccurate, 5% were partially accurate, and 55% were unsubstantiated (i.e., not listed in reference databases). Of the 94 DS evaluated in advertisements, 43% were full page or more, 79% did not have a DSHEA warning visible, 46% referred to research, and 32% used testimonials. Teen magazines contain few references to DS, none accurate. Adult magazines that have a high teen readership contain a substantial amount of DS information with questionable accuracy, raising concerns that this information may increase the chances of inappropriate DS use by adolescents, thereby increasing the potential for unexpected effects or possible harm.

  11. Accuracy analysis of measurements on a stable power-law distributed series of events

    NASA Astrophysics Data System (ADS)

    Matthews, J. O.; Hopcraft, K. I.; Jakeman, E.; Siviour, G. B.

    2006-11-01

    We investigate how finite measurement time limits the accuracy with which the parameters of a stably distributed random series of events can be determined. The model process is generated by timing the emigration of individuals from a population that is subject to deaths and a particular choice of multiple immigration events. This leads to a scale-free discrete random process where customary measures, such as mean value and variance, do not exist. However, converting the number of events occurring in fixed time intervals to a 1-bit 'clipped' process allows the construction of well-behaved statistics that still retain vestiges of the original power-law and fluctuation properties. These statistics include the clipped mean and correlation function, from measurements of which both the power-law index of the distribution of events and the time constant of its fluctuations can be deduced. We report here a theoretical analysis of the accuracy of measurements of the mean of the clipped process. This indicates that, for a fixed experiment time, the error on measurements of the sample mean is minimized by an optimum choice of the number of samples. It is shown furthermore that this choice is sensitive to the power-law index and that the approach to Poisson statistics is dominated by rare events or 'outliers'. Our results are supported by numerical simulation.
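
    The clipping operation described here is straightforward to emulate: counts in fixed time intervals are reduced to a 1-bit value (zero if no event occurred, one otherwise), and a sample mean and autocorrelation are then formed from the clipped series. The sketch below illustrates the construction on a synthetic heavy-tailed count series; the Zipf generator is a stand-in with infinite mean, not the death/multiple-immigration process analysed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in heavy-tailed count series (Zipf-distributed, shifted so zeros occur).
        # For a < 2 the distribution has no finite mean, mimicking the scale-free process.
        counts = rng.zipf(a=1.8, size=100_000) - 1

        clipped = (counts > 0).astype(float)   # 1-bit 'clipped' process
        clipped_mean = clipped.mean()

        def autocorr(x, lag):
            """Sample autocorrelation of x at the given lag."""
            x = x - x.mean()
            return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

        print(f"clipped mean = {clipped_mean:.3f}")
        for lag in (1, 10, 100):
            print(f"autocorr(lag={lag}) = {autocorr(clipped, lag):.4f}")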

  12. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-01-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.

  13. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-07-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.

  14. Accuracy and repeatability of the gait analysis by the WalkinSense system.

    PubMed

    de Castro, Marcelo P; Meucci, Marco; Soares, Denise P; Fonseca, Pedro; Borgonovo-Santos, Márcio; Sousa, Filipa; Machado, Leandro; Vilas-Boas, João Paulo

    2014-01-01

    WalkinSense is a new device designed to monitor walking. The aim of this study was to measure the accuracy and repeatability of the gait analysis performed by the WalkinSense system. Descriptions of values recorded by WalkinSense depicting typical gait in adults are also presented. A bench experiment using the Trublu calibration device was conducted to statically test the WalkinSense. Following this, a dynamic test was carried out overlapping the WalkinSense and the Pedar insoles in 40 healthy participants during walking. Pressure peak, pressure peak time, pressure-time integral, and mean pressure at eight-foot regions were calculated. In the bench experiments, the repeatability (i) among the WalkinSense sensors (within), (ii) between two WalkinSense devices, and (iii) between the WalkinSense and the Trublu devices was excellent. In the dynamic tests, the repeatability of the WalkinSense (i) between stances in the same trial (within-trial) and (ii) between trials was also excellent (ICC > 0.90). When the eight-foot regions were analyzed separately, the within-trial and between-trials repeatability was good-to-excellent in 88% (ICC > 0.80) of the data and fair in 11%. In short, the data suggest that the WalkinSense has good-to-excellent levels of accuracy and repeatability for plantar pressure variables.
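
    The repeatability figures quoted above are intraclass correlation coefficients computed across repeated stances or trials. A minimal sketch of a two-way, absolute-agreement ICC using the pingouin package is shown below; the data frame is invented for illustration and does not reproduce the study's measurements.

        import pandas as pd
        import pingouin as pg

        # Invented example: peak pressure (kPa) for five participants in two trials.
        df = pd.DataFrame({
            "participant": [1, 2, 3, 4, 5] * 2,
            "trial": ["t1"] * 5 + ["t2"] * 5,
            "peak_pressure": [210, 185, 240, 198, 225, 205, 190, 238, 200, 230],
        })

        icc = pg.intraclass_corr(data=df, targets="participant",
                                 raters="trial", ratings="peak_pressure")
        # ICC2 is the two-way random-effects, absolute-agreement, single-measure ICC.
        print(icc[icc["Type"] == "ICC2"][["Type", "ICC", "CI95%"]])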

  15. Diagnostic test accuracy of glutamate dehydrogenase for Clostridium difficile: Systematic review and meta-analysis

    PubMed Central

    Arimoto, Jun; Horita, Nobuyuki; Kato, Shingo; Fuyuki, Akiko; Higurashi, Takuma; Ohkubo, Hidenori; Endo, Hiroki; Takashi, Nonaka; Kaneko, Takeshi; Nakajima, Atsushi

    2016-01-01

    We performed this systematic review and meta-analysis to assess the diagnostic accuracy of detecting glutamate dehydrogenase (GDH) for Clostridium difficile infection (CDI) based on the hierarchical model. Two investigators electronically searched four databases. Reference tests were stool cell cytotoxicity neutralization assay (CCNA) and stool toxigenic culture (TC). To assess the overall accuracy, we calculated the diagnostic odds ratio (DOR) using a DerSimonian-Laird random-effects model and the area under the hierarchical summary receiver operating characteristic curve (AUC) using Holling’s proportional hazards model. The summary estimates of the sensitivity and the specificity were obtained using the bivariate model. According to 42 reports consisting of 3055 reference positive comparisons and 26188 reference negative comparisons, the DOR was 115 (95%CI: 77–172, I2 = 12.0%) and the AUC was 0.970 (95%CI: 0.958–0.982). The summary estimates of sensitivity and specificity were 0.911 (95%CI: 0.871–0.940) and 0.912 (95%CI: 0.892–0.928). The positive and negative likelihood ratios were 10.4 (95%CI 8.4–12.7) and 0.098 (95%CI 0.066–0.142), respectively. Detecting GDH for the diagnosis of CDI had both high sensitivity and specificity. Considering its low cost and prevalence, it is appropriate as a screening test for CDI. PMID:27418431
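
    The DerSimonian-Laird random-effects pooling used here for the diagnostic odds ratio works on the per-study log-DOR values and their variances. A compact sketch of that estimator is given below; the three studies are fabricated examples, not entries from the 42 reports analysed in the paper.

        import numpy as np

        # Fabricated per-study 2x2 counts (tp, fp, fn, tn); 0.5 added to avoid zero cells.
        studies = [(90, 10, 8, 400), (45, 12, 5, 260), (120, 20, 15, 800)]

        log_dor, var = [], []
        for tp, fp, fn, tn in studies:
            tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
            log_dor.append(np.log(tp * tn / (fp * fn)))
            var.append(1 / tp + 1 / fp + 1 / fn + 1 / tn)
        log_dor, var = np.array(log_dor), np.array(var)

        # DerSimonian-Laird estimate of the between-study variance tau^2.
        w = 1 / var
        fixed = np.sum(w * log_dor) / np.sum(w)
        q = np.sum(w * (log_dor - fixed) ** 2)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(studies) - 1)) / c)

        # Random-effects pooled log-DOR and its 95% confidence interval.
        w_star = 1 / (var + tau2)
        pooled = np.sum(w_star * log_dor) / np.sum(w_star)
        se = np.sqrt(1 / np.sum(w_star))
        print(f"pooled DOR = {np.exp(pooled):.1f} "
              f"(95% CI {np.exp(pooled - 1.96 * se):.1f}-{np.exp(pooled + 1.96 * se):.1f})")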

  16. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins

    PubMed Central

    Afanasyev, Vsevolod; Buldyrev, Sergey V.; Dunn, Michael J.; Robst, Jeremy; Preston, Mark; Bremner, Steve F.; Briggs, Dirk R.; Brown, Ruth; Adlard, Stacey; Peat, Helen J.

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge’s accurate performance and demonstrates how its design is a significant improvement on existing systems. PMID:25894763

  17. Zagreb Amblyopia Preschool Screening Study: near and distance visual acuity testing increase the diagnostic accuracy of screening for amblyopia.

    PubMed

    Bušić, Mladen; Bjeloš, Mirjana; Petrovečki, Mladen; Kuzmanović Elabjer, Biljana; Bosnar, Damir; Ramić, Senad; Miletić, Daliborka; Andrijašević, Lidija; Kondža Krstonijević, Edita; Jakovljević, Vid; Bišćan Tvrdi, Ana; Predović, Jurica; Kokot, Antonio; Bišćan, Filip; Kovačević Ljubić, Mirna; Motušić Aras, Ranka

    2016-02-01

    To present and evaluate a new screening protocol for amblyopia in preschool children. The Zagreb Amblyopia Preschool Screening (ZAPS) study protocol performed screening for amblyopia by near and distance visual acuity (VA) testing of 15 648 children aged 48-54 months attending kindergartens in the City of Zagreb County between September 2011 and June 2014, using the Lea Symbols in lines test. If VA in either eye was >0.1 logMAR, the child was re-tested; if the child failed the re-test, the child was referred for a comprehensive eye examination at the Eye Clinic. 78.04% of children passed the screening test. The estimated prevalence of amblyopia was 8.08%. Testability, sensitivity, and specificity of the ZAPS study protocol were 99.19%, 100.00%, and 96.68%, respectively. The ZAPS study used the most discriminative VA test with optotypes in lines, as such tests do not underestimate amblyopia. The estimated prevalence of amblyopia was considerably higher than reported elsewhere. To the best of our knowledge, the ZAPS study protocol reached the highest sensitivity and specificity when evaluating the diagnostic accuracy of VA tests for screening. The pass level, defined at ≤0.1 logMAR for 4-year-old children using Lea Symbols in lines, missed no amblyopia cases, advocating that both near and distance VA testing should be performed when screening for amblyopia.

  18. Zagreb Amblyopia Preschool Screening Study: near and distance visual acuity testing increase the diagnostic accuracy of screening for amblyopia

    PubMed Central

    Bušić, Mladen; Bjeloš, Mirjana; Petrovečki, Mladen; Kuzmanović Elabjer, Biljana; Bosnar, Damir; Ramić, Senad; Miletić, Daliborka; Andrijašević, Lidija; Kondža Krstonijević, Edita; Jakovljević, Vid; Bišćan Tvrdi, Ana; Predović, Jurica; Kokot, Antonio; Bišćan, Filip; Kovačević Ljubić, Mirna; Motušić Aras, Ranka

    2016-01-01

    Aim To present and evaluate a new screening protocol for amblyopia in preschool children. Methods The Zagreb Amblyopia Preschool Screening (ZAPS) study protocol performed screening for amblyopia by near and distance visual acuity (VA) testing of 15 648 children aged 48-54 months attending kindergartens in the City of Zagreb County between September 2011 and June 2014, using the Lea Symbols in lines test. If VA in either eye was >0.1 logMAR, the child was re-tested; if the child failed the re-test, the child was referred for a comprehensive eye examination at the Eye Clinic. Results 78.04% of children passed the screening test. The estimated prevalence of amblyopia was 8.08%. Testability, sensitivity, and specificity of the ZAPS study protocol were 99.19%, 100.00%, and 96.68%, respectively. Conclusion The ZAPS study used the most discriminative VA test with optotypes in lines, as such tests do not underestimate amblyopia. The estimated prevalence of amblyopia was considerably higher than reported elsewhere. To the best of our knowledge, the ZAPS study protocol reached the highest sensitivity and specificity when evaluating the diagnostic accuracy of VA tests for screening. The pass level, defined at ≤0.1 logMAR for 4-year-old children using Lea Symbols in lines, missed no amblyopia cases, advocating that both near and distance VA testing should be performed when screening for amblyopia. PMID:26935612

  19. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins.

    PubMed

    Afanasyev, Vsevolod; Buldyrev, Sergey V; Dunn, Michael J; Robst, Jeremy; Preston, Mark; Bremner, Steve F; Briggs, Dirk R; Brown, Ruth; Adlard, Stacey; Peat, Helen J

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.

  20. Tissue Probability Map Constrained 4-D Clustering Algorithm for Increased Accuracy and Robustness in Serial MR Brain Image Segmentation

    PubMed Central

    Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen

    2010-01-01

    The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by the tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both accuracy and robustness for serial image computing, and at the same time produced longitudinally consistent segmentation and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. An experimental study using both simulated longitudinal MR brain data and the Alzheimer’s Disease Neuroimaging Initiative (ADNI) data confirmed that more accurate and robust segmentation results can be obtained by using both priors. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes for neurological disorders. PMID:26566399
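
    The core idea, a fuzzy clustering membership update steered by a spatial prior, can be sketched in a few lines. Below is a generic fuzzy c-means iteration in which the memberships are multiplied by a tissue probability prior and renormalised; this is an illustrative simplification on toy 1-D data with a random stand-in prior, not the authors' CLASSIC implementation.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy 1-D intensities drawn from three "tissue" classes, plus a stand-in prior
        # (random here; in the paper the priors are population-based and subject-specific maps).
        x = np.concatenate([rng.normal(m, 0.05, 200) for m in (0.2, 0.5, 0.8)])
        prior = rng.dirichlet(np.ones(3), size=x.size)

        m = 2.0                                   # fuzziness exponent
        centers = np.array([0.1, 0.5, 0.9])
        for _ in range(50):
            d = np.abs(x[:, None] - centers[None, :]) + 1e-12   # voxel-to-center distances
            u = 1.0 / d ** (2.0 / (m - 1.0))                    # standard FCM memberships
            u /= u.sum(axis=1, keepdims=True)
            u *= prior                                          # tissue-prior constraint
            u /= u.sum(axis=1, keepdims=True)
            centers = (u ** m).T @ x / (u ** m).sum(axis=0)     # update class centers
        print("estimated class centers:", np.round(centers, 3))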

  1. Modern global models of the Earth's gravity field: analysis of their accuracy and resolution

    NASA Astrophysics Data System (ADS)

    Ganagina, Irina; Karpik, Alexander; Kanushin, Vadim; Goldobin, Denis; Kosareva, Alexandra; Kosarev, Nikolay; Mazurova, Elena

    2015-04-01

    Introduction: Accurate knowledge of the fine structure of the Earth's gravity field extends opportunities in geodynamic problem-solving and high-precision navigation. In the course of our investigations, the resolution and accuracy of 33 modern global models of the Earth's gravity field were analyzed, among them 23 combined models and 10 satellite-only models obtained from the results of the GOCE, GRACE, and CHAMP satellite gravity missions. The Earth's geopotential model data, in terms of normalized spherical harmonic coefficients, were taken from the web-site of the International Centre for Global Earth Models (ICGEM) in Potsdam. Theory: Accuracy and resolution estimation of global Earth's gravity field models is based on the analysis of the degree variances of the geopotential coefficients and their errors. For the analyzed models we obtained the dependence of the gravity anomaly approximation errors on the degree of the spherical harmonic expansion of the geopotential, the relative errors of the spherical harmonic coefficients of the geopotential, the degree variances of the geopotential coefficients, and the error variances of potential coefficients obtained from gravity anomalies. Delphi 7-based software developed by the authors was used for the analysis of the global Earth's gravity field models. Experience: The results of the investigations show that the spherical harmonic coefficients of the models are broadly consistent. Diagrams of the degree variances of the spherical harmonic coefficients and their errors lead us to the conclusion that the degree variances of most models become equal to their error variances at a degree lower than that declared by the developers. The accuracy of the normalized spherical harmonic coefficients of the geopotential models is estimated as 10^-9. This value characterizes the inherent errors of the models, the differences between coefficients in various models, poorly predicted small-scale instability of the geopotential, and the resolution. Furthermore, we compared the gravity anomalies computed by models with those
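
    The degree variances and error degree variances on which this assessment rests are simple sums over the normalized spherical harmonic coefficients of each model. The sketch below computes both from arrays C, S, sigma_C and sigma_S indexed by degree and order; these arrays are assumed inputs (for example parsed beforehand from an ICGEM-format coefficient file), not part of any specific model, and the synthetic call at the end exists only to make the sketch runnable.

        import numpy as np

        def degree_variances(C, S, sigma_C, sigma_S):
            """Signal and error degree variances from normalized coefficients.

            C, S, sigma_C, sigma_S are (n_max+1, n_max+1) arrays indexed [degree, order].
            The degree at which the error variance reaches the signal variance indicates
            the effective resolution of a model.
            """
            n_max = C.shape[0] - 1
            signal = np.array([np.sum(C[n, :n + 1] ** 2 + S[n, :n + 1] ** 2)
                               for n in range(2, n_max + 1)])
            error = np.array([np.sum(sigma_C[n, :n + 1] ** 2 + sigma_S[n, :n + 1] ** 2)
                              for n in range(2, n_max + 1)])
            return signal, error

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            n_max = 60
            C = rng.normal(scale=1e-8, size=(n_max + 1, n_max + 1))
            S = rng.normal(scale=1e-8, size=(n_max + 1, n_max + 1))
            sig = np.full_like(C, 1e-9)
            signal, error = degree_variances(C, S, sig, sig)
            print(signal[:3], error[:3])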

  2. New random trigger-feature for ultrashort-pulsed laser increases throughput, accuracy and quality in micromachining applications

    NASA Astrophysics Data System (ADS)

    Oehler, Andreas; Ammann, Hubert; Benetti, Marco; Wassermann, Dominique; Jaeggi, Beat; Remund, Stefan; Neuenschwander, Beat

    2017-02-01

    For most micromachining applications, the laser focus has to be moved across the workpiece, either by steering the beam or by moving the workpiece. To maximize throughput, this movement should be as fast as possible. However, the required positioning accuracy often limits the obtainable speed. Especially the machining of small and complex features with high precision is constrained by the motion-system's maximum acceleration, limiting the obtainable moving spot velocity to very low values. In general, processing speed can vary widely within the same processing job. To obtain optimum quality at maximum throughput, ideally the pulse energy and the pulse-to-pulse pitch on the workpiece are kept constant. This is only possible if laser-pulses can be randomly triggered, synchronized to the current spot velocity. For ultrafast lasers this is not easily possible, as by design they are usually operated at a fixed pulse repetition rate. The pulse frequency can only be changed by dividing down with integer numbers which leads to a rather coarse frequency grid, especially when applied close to the maximum used operating frequency. This work reports on a new technique allowing random triggering of an ultrafast laser. The resulting timing uncertainty is less than ±25ns, which is negligible for real-world applications, energy stability is <2% rms. The technique allows using acceleration-ramps of the implemented motion system instead of applying additional override moves or skywriting techniques. This can reduce the processing time by up to 40%. Results of applying this technique to different processing geometries and strategies will be presented.

  3. A Mixed Methods and Triangulation Model for Increasing the Accuracy of Adherence and Sexual Behaviour Data: The Microbicides Development Programme

    PubMed Central

    Pool, Robert; Montgomery, Catherine M.; Morar, Neetha S.; Mweemba, Oliver; Ssali, Agnes; Gafos, Mitzy; Lees, Shelley; Stadler, Jonathan; Crook, Angela; Nunn, Andrew; Hayes, Richard; McCormack, Sheena

    2010-01-01

    of “trial culture” that may also affect data accuracy. PMID:20657778

  4. Accuracy in interpersonal expectations: a reflection-construction analysis of current and classic research.

    PubMed

    Jussim, L

    1993-12-01

    Research and theory on interpersonal expectations have been dominated by a strong social constructivist perspective arguing that expectancies are often inaccurate and a major force in the creation of social reality. The reflection-construction model is an attempt to examine these strong claims conceptually and empirically. This model assumes that social perception includes both constructivist phenomena and accuracy. When this model is used as a framework for interpreting research on teacher expectations and on the role of stereotypes in person perception, it shows that interpersonal expectancies are often accurate, and usually lead only to relatively small biases and self-fulfilling prophecies. The model also is used to interpret research on expectancies that has provided some of the foundations for the strong constructivist perspective. This reflection-construction analysis shows that even those studies strongly suggest that people's expectations generally will be highly accurate.

  5. On the accuracy of protein determination in large biological samples by prompt gamma neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Kasviki, K.; Stamatelatos, I. E.; Yannakopoulou, E.; Papadopoulou, P.; Kalef-Ezra, J.

    2007-10-01

    A prompt gamma neutron activation analysis (PGNAA) facility has been developed for the determination of nitrogen and thus total protein in large volume biological samples or the whole body of small animals. In the present work, the accuracy of nitrogen determination by PGNAA in phantoms of known composition as well as in four raw ground meat samples of about 1 kg mass was examined. Dumas combustion and Kjeldahl techniques were also used for the assessment of nitrogen concentration in the meat samples. No statistically significant differences were found between the concentrations assessed by the three techniques. The results of this work demonstrate the applicability of PGNAA for the assessment of total protein in biological samples of 0.25-1.5 kg mass, such as a meat sample or the body of a small animal, even in vivo, with an equivalent radiation dose of about 40 mSv.

  6. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  7. Accuracy analysis by using WARIMA model to forecast TEC in China

    NASA Astrophysics Data System (ADS)

    Liu, Lilong; Chen, Jun; Wu, Pituan; Cai, Chenghui; Huang, Liangke

    2015-12-01

    To address the nonlinear and non-stationary character of the ionospheric total electron content (TEC), this article incorporates wavelet analysis into the autoregressive integrated moving average (ARIMA) model to forecast TEC values for the next four days, using six days of ionospheric grid observation data over China in 2010 provided by IGS stations. Taking the IGS observation data as the true values, the forecast values were compared against them and the forecast accuracies computed; the results show that the WARIMA model forecasts the ionospheric grid data over China quite well. However, near geomagnetic latitudes of about ±20°, the model's forecast results are somewhat worse than elsewhere, because irregular geomagnetic activity causes the TEC values there to change greatly.
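
    A wavelet-plus-ARIMA forecast of the kind described can be sketched by decomposing the TEC series with a discrete wavelet transform, shrinking the detail coefficients, reconstructing a smoothed series, and forecasting it with an ARIMA model. The code below is a rough illustration on a synthetic series using PyWavelets and statsmodels; the wavelet, model order and 2-hour sampling are assumptions, and it does not reproduce the article's WARIMA configuration or the IGS grid data.

        import numpy as np
        import pywt
        from statsmodels.tsa.arima.model import ARIMA

        # Synthetic stand-in for a 2-hourly TEC series (six days = 72 samples).
        rng = np.random.default_rng(0)
        t = np.arange(72)
        tec = 20 + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, 72)

        # Wavelet denoising: decompose, shrink the detail coefficients, reconstruct.
        coeffs = pywt.wavedec(tec, "db4", level=2)
        coeffs[1:] = [pywt.threshold(c, value=1.0, mode="soft") for c in coeffs[1:]]
        tec_smooth = pywt.waverec(coeffs, "db4")[: len(tec)]

        # Fit an ARIMA model to the wavelet-smoothed series and forecast 48 steps
        # (four days at the 2-hour sampling assumed here).
        model = ARIMA(tec_smooth, order=(2, 0, 1)).fit()
        forecast = model.forecast(steps=48)
        print(forecast[:5])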

  8. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    PubMed

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) given by doctors of all dynasties, and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is confirmed to be the same as the traditional way. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. It is also proposed that Fengshi (GB 31) should be located through the integration of the simple method and body surface anatomical marks.

  9. Voxel-based analysis of (201)Tl SPECT for grading and diagnostic accuracy of gliomas: comparison with ROI analysis.

    PubMed

    Kuwako, Tomoyuki; Mizumura, Sunao; Murakami, Ryusuke; Yoshida, Tamiko; Shiiba, Masato; Sato, Hidetaka; Fukushima, Yoshimitsu; Teramoto, Akira; Kumita, Shin-Ichiro

    2013-07-01

    The aim of this retrospective study was to assess the utility of a voxel-based analysis (VBA) method for (201)Tl SPECT in glioma, compared to conventional ROI analysis. We recruited 24 patients with glioma (high-grade 15; low-grade 9), for whom pre-operative (201)Tl SPECT and MRI were performed. SPECT images were coregistered with MRI. The uptake ratio (UR) images of tumor to contralateral normal tissue were measured on early and delayed images, and the (201)Tl retention index (RI) map was calculated from the early and delayed uptake ratio maps. In the ROI analysis, tumors were traced on a UR map, and the mean and maximal uptake ratio values on the early images were, respectively, defined as the mean and maximal UR. The mean and maximal RI values (mean and maximal RI) were calculated by division of the mean and maximal UR, respectively, on the delayed image by the mean and maximal UR on the early image. For the RI map calculated voxel by voxel, the maximal RI value was defined as VBA-RI. We evaluated sensitivity and accuracy of differential analysis with the mean and maximal UR, RI, and VBA-RI. The high- and low-grade groups showed no significant difference in mean and maximal RI (0.98 ± 0.12 vs. 1.05 ± 0.09 and 0.98 ± 0.18 vs. 1.05 ± 0.14, respectively). The AUC and accuracy of the mean and maximal RI were 0.681 and 66.7 %, and 0.622 and 62.5 %, respectively. In contrast, VBA-RI was higher in high-grade than in low-grade glioma (1.69 ± 0.27 vs. 0.68 ± 0.66, p < 0.001). The AUC and accuracy of VBA-RI were 0.963 and 95.8 %, which are higher than those obtained for mean (p < 0.05) and maximal RI (p < 0.01). There was no significant difference in ROC between the VBA-RI and the mean UR (0.911, p = 0.456) and maximal UR (0.933, p = 0.639); however, the AUC, sensitivity, and diagnostic accuracy of VBA-RI were all higher than those of the mean and maximal UR. The voxel-based analysis method of (201)Tl SPECT may improve diagnostic performance for gliomas, compared
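
    The voxel-based retention-index map described here is, in essence, a voxel-wise ratio of the delayed to the early uptake-ratio images evaluated inside the tumour mask. A minimal numpy sketch under assumed array names (early_ur, delayed_ur and tumor_mask are hypothetical inputs already coregistered to the MRI) is given below.

        import numpy as np

        def retention_index_map(early_ur, delayed_ur, tumor_mask, eps=1e-6):
            """Voxel-wise 201-Tl retention index: delayed / early uptake ratio.

            early_ur, delayed_ur: 3-D uptake-ratio volumes (tumour / contralateral normal).
            tumor_mask: boolean 3-D array delineating the tumour. All are assumed inputs.
            """
            ri = np.zeros_like(early_ur, dtype=float)
            valid = tumor_mask & (early_ur > eps)
            ri[valid] = delayed_ur[valid] / early_ur[valid]
            return ri

        # The maximal value of the map inside the mask plays the role of the VBA-RI statistic:
        # vba_ri = retention_index_map(early_ur, delayed_ur, tumor_mask).max()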

  10. Meta-analysis of accuracy of intraocular lens power calculation formulas in short eyes.

    PubMed

    Wang, Qiwei; Jiang, Wu; Lin, Tiao; Wu, Xiaohang; Lin, Haotian; Chen, Weirong

    2017-09-09

    Intraocular lens (IOL) power selection is a critical factor affecting visual outcome after IOL implantation in short eyes. Many formulas have been developed to achieve a precise prediction of the IOL power. However, controversy regarding their accuracy remains. Objective: to investigate the accuracy of different IOL power calculation formulas in short eyes. Design: meta-analysis. Participants: patients with an axial length of less than 22 mm from previously reported studies. A comprehensive search in Pubmed, EMBASE, the Cochrane Database of Systematic Reviews and the Cochrane Central Register of Controlled Trials was conducted up to October 2016. We assessed methodological quality using a modified QUADAS-2 tool and performed an analysis of the weighted mean differences of the mean absolute errors (MAE) among different formulas. The between-group difference in MAE was evaluated with the weighted mean difference and 95% confidence intervals. Ten observational studies, involving 1161 eyes, were enrolled to compare six formulas: Haigis, Holladay 2, Hoffer Q, Holladay 1, SRK/T and SRK II. Among them, Holladay 2 produced the smallest overall MAE (0.496 D), without statistical significance. The difference in MAE was statistically significant between Haigis and Hoffer Q (mean difference = -0.07 D, p = 0.003), Haigis and SRK/T (mean difference = -0.07 D, p = 0.009), and Haigis and SRK II (mean difference = -0.41 D, p = 0.01). Regarding publication bias and small-study effects, neither the funnel plot nor Egger's test yielded a statistically significant finding. The overall evidence from the studies confirmed the superiority of Haigis over Hoffer Q, SRK/T and SRK II in predicting IOL power in short eyes. This article is protected by copyright. All rights reserved.
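
    The per-formula accuracy being pooled here is the mean absolute error between predicted and achieved postoperative refraction, and formulas are compared through differences in MAE. A toy sketch of that per-study computation is shown below; the refraction values are fabricated for illustration and do not come from the enrolled studies.

        import numpy as np

        # Fabricated predicted vs. achieved postoperative refraction (D) for one study.
        achieved = np.array([-0.25, 0.50, 0.75, -0.10, 1.00, 0.30])
        predicted = {
            "Haigis":  np.array([-0.10, 0.40, 0.60, 0.05, 0.80, 0.35]),
            "HofferQ": np.array([-0.40, 0.70, 1.05, -0.35, 1.30, 0.05]),
        }

        mae = {name: np.mean(np.abs(pred - achieved)) for name, pred in predicted.items()}
        print(mae)
        print("MAE difference (Haigis - HofferQ):", mae["Haigis"] - mae["HofferQ"])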

  11. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

    It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that - if remaining unaddressed - limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 ion Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to be able to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup exhibits a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes the more significant the larger the relative contribution of secondary electrons to the m/z 47 signal is and the higher the flux rate of CO2 into the ion source is set. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards. The

  12. Increased Diagnostic Accuracy of Digital vs. Conventional Clock Drawing Test for Discrimination of Patients in the Early Course of Alzheimer’s Disease from Cognitively Healthy Individuals

    PubMed Central

    Müller, Stephan; Preische, Oliver; Heymann, Petra; Elbing, Ulrich; Laske, Christoph

    2017-01-01

    The conventional Clock Drawing Test (cCDT) is a rapid and inexpensive screening tool for detection of moderate and severe dementia. However, its usage is limited due to poor diagnostic accuracy especially in patients with mild cognitive impairment (MCI). The diagnostic value of a newly developed digital Clock Drawing Test (dCDT) was evaluated and compared with the cCDT in 20 patients with early dementia due to AD (eDAT), 30 patients with amnestic MCI (aMCI) and 20 cognitively healthy controls (HCs). Parameters assessed by dCDT were time while transitioning the stylus from one stroke to the next above the surface (i.e., time-in-air), time the stylus produced a visible stroke (i.e., time-on-surface) and total-time during clock drawing. Receiver-operating characteristic (ROC) curves were calculated and logistic regression analyses have been conducted for statistical analysis. Using dCDT, time-in-air was significantly increased in eDAT (70965.8 ms) compared to aMCI (54073.7 ms; p = 0.027) and HC (32315.6 ms; p < 0.001). In addition, time-in-air was significantly longer in patients with aMCI compared to HC (p = 0.003), even in the aMCI group with normal cCDT score (54141.8 ms; p < 0.001). Time-in-air using dCDT allowed discrimination of patients with aMCI from HCs with a sensitivity of 81.3% and a specificity of 72.2% while cCDT scoring revealed a sensitivity of 62.5% and a specificity of 83.3%. Most interestingly, time-in-air allowed even discrimination of aMCI patients with normal cCDT scores (80% from all aMCI patients) from HCs with a clinically relevant sensitivity of 80.8% and a specificity of 77.8%. A combination of dCDT variables and cCDT scores did not improve the discrimination of patients with aMCI from HC. In conclusion, assessment of time-in-air using dCDT yielded a higher diagnostic accuracy for discrimination of aMCI patients from HCs than the use of cCDT even in those aMCI patients with normal cCDT scores. Modern digitizing devices offer the opportunity

  13. PET/MR imaging of the pelvis in the presence of endoprostheses: reducing image artifacts and increasing accuracy through inpainting.

    PubMed

    Ladefoged, Claes Nøhr; Andersen, Flemming Littrup; Keller, Sune Høgild; Löfgren, Johan; Hansen, Adam Espe; Holm, Søren; Højgaard, Liselotte; Beyer, Thomas

    2013-04-01

    In combined whole-body PET/MR, attenuation correction (AC) is performed indirectly using the available MR image information and subsequent segmentation. Implant-induced susceptibility artifacts and subsequent signal voids may challenge MR-based AC (MR-AC). We evaluated the accuracy of MR-AC in PET/MR in patients with metallic endoprostheses, and propose a clinically feasible correction method. We selected patients with uni- or bilateral endoprostheses from 61 consecutive referrals for whole-body PET/MR imaging (mMR; Siemens Healthcare). Simultaneous whole-body PET/MR imaging was performed at 120 min after injection of about 300 MBq [(18)F]FDG. MR-AC was performed using (1) original MR images and subsequent Dixon water-fat segmentation, (2) as method 1 with implant-induced signal voids filled with soft tissue, (3) as method 2 with superimposed coregistered endoprostheses from the CT scan, and (4) as method 1 with implant-induced signal voids filled with metal. Following MR-AC (methods 1-4) PET emission images were reconstructed on 344 × 344 matrices using attenuation-weighted OSEM (three iterations, 21 subsets, 4 mm gaussian). Maximum body-weight normalized standardized uptake values (SUVmax) were obtained for both hips. Mean SUV (SUVmean) in homogeneous reference regions in the gluteal muscle and bladder following MR-AC (methods 1-4) are also reported. In total, four patients presented with endoprostheses, unilateral in two and bilateral in two. The fraction of voxels in MR images affected by the implant was at least twice that of the voxels representing the actual implants. MR-AC using methods 2 and 3 recovered the FDG distribution pattern compared to uncorrected PET images and method 1, while method 4 resulted in severe overestimation of FDG uptake (>460 % SUVmax). When compared to method 1, relative changes in SUVmean in the reference regions from method 2 and 3 were generally small albeit not correlated with the fraction of the attenuation image

  14. The Accuracy of Computerized Adaptive Testing in Heterogeneous Populations: A Mixture Item-Response Theory Analysis

    PubMed Central

    Kopec, Jacek A.; Wu, Amery D.; Zumbo, Bruno D.

    2016-01-01

    Background Computerized adaptive testing (CAT) utilizes latent variable measurement model parameters that are typically assumed to be equivalently applicable to all people. Biased latent variable scores may be obtained in samples that are heterogeneous with respect to a specified measurement model. We examined the implications of sample heterogeneity with respect to CAT-predicted patient-reported outcomes (PRO) scores for the measurement of pain. Methods A latent variable mixture modeling (LVMM) analysis was conducted using data collected from a heterogeneous sample of people in British Columbia, Canada, who were administered the 36 pain domain items of the CAT-5D-QOL. The fitted LVMM was then used to produce data for a simulation analysis. We evaluated bias by comparing the referent PRO scores of the LVMM with PRO scores predicted by a “conventional” CAT (ignoring heterogeneity) and a LVMM-based “mixture” CAT (accommodating heterogeneity). Results The LVMM analysis indicated support for three latent classes with class proportions of 0.25, 0.30 and 0.45, which suggests that the sample was heterogeneous. The simulation analyses revealed differences between the referent PRO scores and the PRO scores produced by the “conventional” CAT. The “mixture” CAT produced PRO scores that were nearly equivalent to the referent scores. Conclusion Bias in PRO scores based on latent variable models may result when population heterogeneity is ignored. Improved accuracy could be obtained by using CATs that are parameterized using LVMM. PMID:26930348

  15. Diagnostic accuracy of laser Doppler imaging in burn depth assessment: Systematic review and meta-analysis.

    PubMed

    Shin, Jin Yong; Yi, Hyung Suk

    2016-11-01

    Accurate assessment of burn depth is important for determination of treatment modality. Laser Doppler imaging (LDI) is known to be an objective and effective measurement tool in burn depth assessment. Our study evaluated the diagnostic accuracy of LDI across enrolled studies and subgroups. A systematic literature review and meta-analysis were performed using MEDLINE, EMBASE, and Cochrane databases. Data from LDI cases were extracted from all primary studies and categorized into four cell values (true positives, false positives, true negatives, and false negatives). Subgroup analyses were performed according to perfusion units of LDI, clinical criteria of superficial and deep burns during the treatment period, and publication date of enrolled studies. The search strategy identified 321 publications. After screening, 10 articles were selected for review. The pooled sensitivity and specificity of LDI in all enrolled studies and subgroups were found to be similarly high. However, the sensitivity of LDI in our meta-analysis was not as high as that identified in previous studies. Although LDI in burn depth assessment was identified as an accurate measurement tool in this meta-analysis, careful clinical assessment should be performed along with LDI in patients with deep burns. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.

  16. Computer-aided analysis of star shot films for high-accuracy radiation therapy treatment units.

    PubMed

    Depuydt, Tom; Penne, Rudi; Verellen, Dirk; Hrbacek, Jan; Lang, Stephanie; Leysen, Katrien; Vandevondel, Iwein; Poels, Kenneth; Reynders, Truus; Gevaert, Thierry; Duchateau, Michael; Tournel, Koen; Boussaer, Marlies; Cosentino, Dorian; Garibaldi, Cristina; Solberg, Timothy; De Ridder, Mark

    2012-05-21

    As mechanical stability of radiation therapy treatment devices has gone beyond sub-millimeter levels, there is a rising demand for simple yet highly accurate measurement techniques to support the routine quality control of these devices. A combination of using high-resolution radiosensitive film and computer-aided analysis could provide an answer. One generally known technique is the acquisition of star shot films to determine the mechanical stability of rotations of gantries and the therapeutic beam. With computer-aided analysis, mechanical performance can be quantified as a radiation isocenter radius size. In this work, computer-aided analysis of star shot film is further refined by applying an analytical solution for the smallest intersecting circle problem, in contrast to the gradient optimization approaches used until today. An algorithm is presented and subjected to a performance test using two different types of radiosensitive film, the Kodak EDR2 radiographic film and the ISP EBT2 radiochromic film. Artificial star shots with a priori known radiation isocenter size are used to determine the systematic errors introduced by the digitization of the film and the computer analysis. The estimated uncertainty on the isocenter size measurement with the presented technique was 0.04 mm (2σ) and 0.06 mm (2σ) for radiographic and radiochromic films, respectively. As an application of the technique, a study was conducted to compare the mechanical stability of O-ring gantry systems with C-arm-based gantries. In total ten systems of five different institutions were included in this study and star shots were acquired for gantry, collimator, ring, couch rotations and gantry wobble. It was not possible to draw general conclusions about differences in mechanical performance between O-ring and C-arm gantry systems, mainly due to differences in the beam-MLC alignment procedure accuracy. Nevertheless, the best performing O-ring system in this study, a BrainLab/MHI Vero system

  17. Computer-aided analysis of star shot films for high-accuracy radiation therapy treatment units

    NASA Astrophysics Data System (ADS)

    Depuydt, Tom; Penne, Rudi; Verellen, Dirk; Hrbacek, Jan; Lang, Stephanie; Leysen, Katrien; Vandevondel, Iwein; Poels, Kenneth; Reynders, Truus; Gevaert, Thierry; Duchateau, Michael; Tournel, Koen; Boussaer, Marlies; Cosentino, Dorian; Garibaldi, Cristina; Solberg, Timothy; De Ridder, Mark

    2012-05-01

    As mechanical stability of radiation therapy treatment devices has gone beyond sub-millimeter levels, there is a rising demand for simple yet highly accurate measurement techniques to support the routine quality control of these devices. A combination of using high-resolution radiosensitive film and computer-aided analysis could provide an answer. One generally known technique is the acquisition of star shot films to determine the mechanical stability of rotations of gantries and the therapeutic beam. With computer-aided analysis, mechanical performance can be quantified as a radiation isocenter radius size. In this work, computer-aided analysis of star shot film is further refined by applying an analytical solution for the smallest intersecting circle problem, in contrast to the gradient optimization approaches used until today. An algorithm is presented and subjected to a performance test using two different types of radiosensitive film, the Kodak EDR2 radiographic film and the ISP EBT2 radiochromic film. Artificial star shots with a priori known radiation isocenter size are used to determine the systematic errors introduced by the digitization of the film and the computer analysis. The estimated uncertainty on the isocenter size measurement with the presented technique was 0.04 mm (2σ) and 0.06 mm (2σ) for radiographic and radiochromic films, respectively. As an application of the technique, a study was conducted to compare the mechanical stability of O-ring gantry systems with C-arm-based gantries. In total ten systems of five different institutions were included in this study and star shots were acquired for gantry, collimator, ring, couch rotations and gantry wobble. It was not possible to draw general conclusions about differences in mechanical performance between O-ring and C-arm gantry systems, mainly due to differences in the beam-MLC alignment procedure accuracy. Nevertheless, the best performing O-ring system in this study, a BrainLab/MHI Vero system

  18. Numerical simulation for accuracy of velocity analysis in small-scale high-resolution marine multichannel seismic technology

    NASA Astrophysics Data System (ADS)

    Luo, Di; Cai, Feng; Wu, Zhiqiang

    2017-06-01

    When used with large-energy sparkers, small-scale high-resolution marine multichannel seismic detection technology offers high resolution, high detection precision, a wide range of applications, and great flexibility. Positive results have been achieved in submarine geological research, particularly in the investigation of marine gas hydrates. However, a shorter spread length reduces the amount of traveltime-difference information available for velocity analysis, leading to poorer focusing of the velocity spectrum energy group and lower accuracy of the velocity analysis. It is thus currently debatable whether the velocity analysis accuracy of short-spread multichannel seismic detection technology is able to meet the requirements of practical application in natural gas hydrate exploration. Therefore, in this study the bottom boundary of gas hydrates (the Bottom Simulating Reflector, BSR) is used in numerical simulations to assess the accuracy of the velocity analysis related to such technology. Results show that a higher dominant frequency and a smaller sampling interval not only improve the seismic resolution but also compensate for the limitations of the short spread, thereby improving the accuracy of the velocity analysis. In conclusion, the accuracy of the velocity analysis in this small-scale, high-resolution, multichannel seismic detection technology meets the requirements of natural gas hydrate exploration.
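
    The link between spread length and velocity resolution can be seen from the standard hyperbolic moveout equation for a flat reflector, t(x) = sqrt(t0^2 + (x/v)^2). The sketch below uses illustrative (roughly BSR-like) numbers, not values from the paper, to show how little moveout discrimination between candidate velocities remains when the maximum offset is short.

    ```python
    # Why a short spread hurts velocity analysis: the usable moveout is t(x_max) - t0,
    # and the difference in moveout between two candidate stacking velocities shrinks
    # rapidly with maximum offset. Numbers are assumptions for illustration only.
    import numpy as np

    t0 = 2.0                        # two-way zero-offset time, s
    velocities = [1500.0, 1600.0]   # m/s, two candidate stacking velocities

    def moveout(x_max, v):
        return np.sqrt(t0**2 + (x_max / v) ** 2) - t0

    for x_max in (300.0, 3000.0):   # short vs long maximum offset, m
        dt = abs(moveout(x_max, velocities[0]) - moveout(x_max, velocities[1]))
        print(f"max offset {x_max:5.0f} m: moveout difference between "
              f"{velocities[0]:.0f} and {velocities[1]:.0f} m/s = {dt*1000:.1f} ms")
    # The short spread leaves only about a millisecond of discrimination, which must be
    # resolvable against the wavelet period -- hence the benefit of a higher dominant
    # frequency and a smaller sampling interval.
    ```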

  19. Accuracy evaluation of Fourier series analysis and singular spectrum analysis for predicting the volume of motorcycle sales in Indonesia

    NASA Astrophysics Data System (ADS)

    Sasmita, Yoga; Darmawan, Gumgum

    2017-08-01

    This research aims to evaluate the forecasting performance of Fourier Series Analysis (FSA) and Singular Spectrum Analysis (SSA), which are more exploratory and do not require parametric assumptions. The methods are applied to predicting the volume of motorcycle sales in Indonesia from January 2005 to December 2016 (monthly). Both models are suitable for data with seasonal and trend components. Technically, FSA describes the series as the sum of trend and seasonal components at different frequencies, which are difficult to identify in time domain analysis. With a hidden period of 2.918 ≈ 3 and a significant model order of 3, the FSA model is used to predict the testing data. Meanwhile, SSA has two main processes, decomposition and reconstruction. SSA decomposes the time series data into different components. The reconstruction process starts with grouping the decomposition results based on the similar periods of each component in the trajectory matrix. With the optimum window length (L = 53) and grouping effect (r = 4), SSA predicts the testing data. Forecasting accuracy is evaluated with the Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). The results show that over the next 12 months, SSA has MAPE = 13.54 percent, MAE = 61,168.43 and RMSE = 75,244.92, while FSA has MAPE = 28.19 percent, MAE = 119,718.43 and RMSE = 142,511.17. Therefore, the SSA method, which performs better in terms of accuracy, should be used to predict the volume of motorcycle sales in the next period.
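
    The three accuracy metrics used to compare the FSA and SSA forecasts are straightforward to compute; the sketch below shows the formulas on placeholder arrays (in the study they would be the 12 held-out months of actual and predicted motorcycle sales).

    ```python
    # MAPE, MAE and RMSE on invented actual/predicted values.
    import numpy as np

    actual    = np.array([520_000, 480_000, 510_000, 495_000])
    predicted = np.array([500_000, 470_000, 530_000, 505_000])

    err = actual - predicted
    mape = np.mean(np.abs(err) / actual) * 100      # Mean Absolute Percentage Error, %
    mae  = np.mean(np.abs(err))                     # Mean Absolute Error
    rmse = np.sqrt(np.mean(err ** 2))               # Root Mean Square Error

    print(f"MAPE = {mape:.2f} %, MAE = {mae:,.2f}, RMSE = {rmse:,.2f}")
    ```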

  20. A 3-dimensional accuracy analysis of chairside CAD/CAM milling processes.

    PubMed

    Bosch, Gabriel; Ender, Andreas; Mehl, Albert

    2014-12-01

    Milling is a central and important aspect of computer-aided design and computer-aided manufacturing (CAD/CAM) technology. High milling accuracy reduces the time needed to adapt the workpiece and provides restorations with better longevity and esthetic appeal. The influence of different milling processes on the accuracy of milled restorations has not yet been reviewed. The purpose of this study was to investigate the influence of different milling processes on the accuracy of ceramic restorations. Four groups of partial crowns were milled (each n = 17): Three groups in a 4-axial milling unit: (1) 1-step mode and Step Bur 12S (12S), (2) 1-step mode and Step Bur 12 (1Step), (3) 2-step mode and Step Bur 12 (2Step), and (4) one group in a 5-axial milling unit (5axis). The milled occlusal and inner surfaces were scanned and superimposed over the digital data sets of calculated restorations with specialized difference analysis software. The trueness of each restoration and each group was measured. One-way ANOVA with a post hoc Tukey test was used to compare the data (α = .05). The highest trueness for the inner surface was achieved in group 5axis (trueness, 41 ± 15 μm, P<.05). The 4-axial milling unit exhibited trueness at settings ranging from 61 μm (2Step) to 96 μm (12S). For the occlusal surface, the highest trueness was achieved with group 5axis (trueness, 42 ± 10 μm). The 4-axial milling unit exhibited trueness at settings ranging from 55 μm (1Step) to 76 μm (12S). Restorations milled with a 5-axial milling unit have a higher trueness than those milled with a 4-axial milling unit. A rotary cutting instrument with a smaller diameter results in a more accurate milling process. The 2-step mode is not significantly better than the 1-step mode. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  1. Issues of model accuracy and uncertainty evaluation in the context of multi-model analysis

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Foglia, L.; Mehl, S.; Burlando, P.

    2009-12-01

    Thorough consideration of alternative conceptual models is an important and often neglected step in the study of many natural systems, including groundwater systems. This means that many modelling efforts are less useful for system management than they could be because they exclude alternatives considered important by some stakeholders, which makes them more vulnerable to criticism. Important steps include identifying reasonable alternative models and possibly using model discrimination criteria and associated model averaging to improve predictions and measures of prediction uncertainty. Here we use the computer code MMA (Multi-Model Analysis) to: (1) manage the model discrimination statistics produced by many alternative models, (2) manage predictions, and (3) calculate measures of prediction uncertainty. Steps (1) to (3) also assist in understanding the physical processes most important to model fit and to the predictions of interest. We focus on the ability of a groundwater model constructed using MODFLOW to predict heads and flows in the Maggia Valley, Southern Switzerland, where connections between groundwater, surface water and ecology are of interest. Sixty-four alternative models were designed deterministically and differ in how the river, recharge, bedrock topography, and hydraulic conductivity are characterized. None of the models correctly represent heads and flows in the Northern and Southern part of the valley simultaneously. A cross-validation experiment was conducted to compare model discrimination results with the ability of the models to predict eight heads and three flows to the stream along three reaches midway along the valley where ecological consequences and, therefore, model accuracy are of great concern. Results suggest: (1) Model averaging appears to have improved prediction accuracy in the problem considered. (2) The most significant model improvements occurred with introduction of spatially distributed recharge and improved bedrock topography. (3) The
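
    The basic idea behind model averaging is to weight each alternative model's prediction by a model discrimination criterion. MMA supports several criteria; the sketch below uses plain AIC weights only for illustration, and the AIC values and head predictions are invented, not results from the Maggia Valley models.

    ```python
    # Generic sketch of prediction averaging with information-criterion weights.
    import numpy as np

    aic         = np.array([102.3, 104.1, 110.7])   # one value per alternative model (invented)
    predictions = np.array([251.2, 250.6, 253.4])   # predicted head (m) from each model (invented)

    delta   = aic - aic.min()
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()

    averaged = np.sum(weights * predictions)
    # A simple model-averaged spread term: weighted variance of predictions about the mean.
    variance = np.sum(weights * (predictions - averaged) ** 2)

    print(f"weights = {np.round(weights, 3)}")
    print(f"model-averaged prediction = {averaged:.2f} m (between-model SD {np.sqrt(variance):.2f} m)")
    ```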

  2. Accuracy of circulating adiponectin for predicting gestational diabetes: a systematic review and meta-analysis.

    PubMed

    Iliodromiti, Stamatina; Sassarini, Jennifer; Kelsey, Thomas W; Lindsay, Robert S; Sattar, Naveed; Nelson, Scott M

    2016-04-01

    Universal screening for gestational diabetes mellitus (GDM) has not been implemented, and this has had substantial clinical implications. Biomarker-directed targeted screening might be feasible. We sought to determine the accuracy of circulating adiponectin for early prediction of GDM. A systematic review and meta-analysis of the literature to May 2015 identified studies in which circulating adiponectin was measured prior to a diagnosis of GDM. Data on diagnostic accuracy were synthesised by bivariate mixed effects and hierarchical summary receiver operating characteristic (HSROC) models. Thirteen studies met the eligibility criteria, 11 of which (2,865 women; 794 diagnosed with GDM) had extractable data. Circulating adiponectin had a pooled diagnostic odds ratio (DOR) of 6.4 (95% CI 4.1, 9.9), a summary sensitivity of 64.7% (95% CI 51.0%, 76.4%) and a specificity of 77.8% (95% CI 66.4%, 86.1%) for predicting future GDM. The AUC of the HSROC was 0.78 (95% CI 0.74, 0.81). First trimester adiponectin had a pooled sensitivity of 60.3% (95% CI 46.0%, 73.1%), a specificity of 81.3% (95% CI 71.6%, 88.3%) and a DOR of 6.6 (95% CI 3.6, 12.1). The AUC was 0.79 (95% CI 0.75, 0.82). Pooled estimates were similar after adjustment for age, BMI or specific GDM diagnostic threshold. Pre-pregnancy and early pregnancy measurement of circulating adiponectin may improve the detection of women at high risk of developing GDM. Prospective evaluation of the combination of adiponectin and maternal characteristics for early identification of those who do and do not require OGTT is warranted.
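
    The diagnostic odds ratio quoted above is derived from 2x2 counts; the sketch below shows the standard calculation with a log-scale 95% confidence interval on invented counts. The paper's pooled DOR comes from a bivariate mixed-effects model across studies, not from a single table.

    ```python
    # Diagnostic odds ratio (DOR) with a log-scale 95% CI from one hypothetical 2x2 table.
    import math

    TP, FP, FN, TN = 48, 22, 26, 77          # invented counts

    dor = (TP * TN) / (FP * FN)
    se_log = math.sqrt(1 / TP + 1 / FP + 1 / FN + 1 / TN)
    lo = math.exp(math.log(dor) - 1.96 * se_log)
    hi = math.exp(math.log(dor) + 1.96 * se_log)

    print(f"DOR = {dor:.1f} (95% CI {lo:.1f}, {hi:.1f})")
    ```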

  3. Summary of Glaucoma Diagnostic Testing Accuracy: An Evidence-Based Meta-Analysis

    PubMed Central

    Ahmed, Saad; Khan, Zainab; Si, Francie; Mao, Alex; Pan, Irene; Yazdi, Fatemeh; Tsertsvadze, Alexander; Hutnik, Cindy; Moher, David; Tingey, David; Trope, Graham E.; Damji, Karim F.; Tarride, Jean-Eric; Goeree, Ron; Hodge, William

    2016-01-01

    Background: New glaucoma diagnostic technologies are penetrating clinical care and are changing rapidly. Having a systematic review of these technologies will help clinicians and decision makers and help identify gaps that need to be addressed. This systematic review studied five glaucoma technologies compared to the gold standard of white on white perimetry for glaucoma detection. Methods: OVID® interface: MEDLINE® (In-Process & Other Non-Indexed Citations), EMBASE®, BIOSIS Previews®, CINAHL®, PubMed, and the Cochrane Library were searched. A gray literature search was also performed. A technical expert panel, information specialists, systematic review method experts and biostatisticians were used. A PRISMA flow diagram was created and a random effect meta-analysis was performed. Results: A total of 2,474 articles were screened. The greatest accuracy was found with frequency doubling technology (FDT) (diagnostic odds ratio (DOR): 57.7) followed by blue on yellow perimetry (DOR: 46.7), optical coherence tomography (OCT) (DOR: 41.8), GDx (DOR: 32.4) and Heidelberg retina tomography (HRT) (DOR: 17.8). Of greatest concern is that tests for heterogeneity were all above 50%, indicating that cutoffs used in these newer technologies were all very varied and not uniform across studies. Conclusions: Glaucoma content experts need to establish uniform cutoffs for these newer technologies, so that studies that compare these technologies can be interpreted more uniformly. Nevertheless, synthesized data at this time demonstrate that amongst the newest technologies, OCT has the highest glaucoma diagnostic accuracy followed by GDx and then HRT. PMID:27540437

  4. Summary of Glaucoma Diagnostic Testing Accuracy: An Evidence-Based Meta-Analysis.

    PubMed

    Ahmed, Saad; Khan, Zainab; Si, Francie; Mao, Alex; Pan, Irene; Yazdi, Fatemeh; Tsertsvadze, Alexander; Hutnik, Cindy; Moher, David; Tingey, David; Trope, Graham E; Damji, Karim F; Tarride, Jean-Eric; Goeree, Ron; Hodge, William

    2016-09-01

    New glaucoma diagnostic technologies are penetrating clinical care and are changing rapidly. Having a systematic review of these technologies will help clinicians and decision makers and help identify gaps that need to be addressed. This systematic review studied five glaucoma technologies compared to the gold standard of white on white perimetry for glaucoma detection. OVID(®) interface: MEDLINE(®) (In-Process & Other Non-Indexed Citations), EMBASE(®), BIOSIS Previews(®), CINAHL(®), PubMed, and the Cochrane Library were searched. A gray literature search was also performed. A technical expert panel, information specialists, systematic review method experts and biostatisticians were used. A PRISMA flow diagram was created and a random effect meta-analysis was performed. A total of 2,474 articles were screened. The greatest accuracy was found with frequency doubling technology (FDT) (diagnostic odds ratio (DOR): 57.7) followed by blue on yellow perimetry (DOR: 46.7), optical coherence tomography (OCT) (DOR: 41.8), GDx (DOR: 32.4) and Heidelberg retina tomography (HRT) (DOR: 17.8). Of greatest concern is that tests for heterogeneity were all above 50%, indicating that cutoffs used in these newer technologies were all very varied and not uniform across studies. Glaucoma content experts need to establish uniform cutoffs for these newer technologies, so that studies that compare these technologies can be interpreted more uniformly. Nevertheless, synthesized data at this time demonstrate that amongst the newest technologies, OCT has the highest glaucoma diagnostic accuracy followed by GDx and then HRT.

  5. Diagnostic accuracy of transesophageal echocardiogram for the detection of patent foramen ovale: a meta-analysis.

    PubMed

    Mojadidi, Mohammad Khalid; Bogush, Nikolay; Caceres, Jose Diego; Msaouel, Pavlos; Tobis, Jonathan M

    2014-07-01

    Patent foramen ovale (PFO) is a remnant of the fetal circulation present in 20% of the population. Right-to-left shunting (RLS) through a PFO has been linked to the pathophysiology of stroke, migraine with aura, and hypoxemia. While different imaging modalities including transcranial Doppler, intra-cardiac echo, and transthoracic echo (TTE) have often been used to detect RLS, transesophageal echo (TEE) bubble study remains the gold standard for diagnosing PFO. The aim of this study was to determine the relative accuracy of TEE in the detection of PFO. A systematic review of Medline, using a standard approach for meta-analysis, was performed for all prospective studies assessing accuracy of TEE in the detection of PFO using confirmation by autopsy, cardiac surgery, and/or catheterization as the reference. Search results revealed 3105 studies; 4 met inclusion criteria. A total of 164 patients were included. TEE had a weighted sensitivity of 89.2% (95% CI: 81.1-94.7%) and specificity of 91.4% (95% CI: 82.3-96.8%) to detect PFO. The overall positive likelihood ratio (LR+) was 5.93 (95% CI: 1.30-27.09) and the overall negative likelihood ratio (LR-) was 0.22 (95% CI: 0.08-0.56). While TEE bubble study is considered to be the gold standard modality for diagnosing PFO, some PFOs may still be missed or misdiagnosed. It is important to understand the limitations of TEE and perhaps use other highly sensitive screening tests, such as transcranial doppler (TCD), in conjunction with TEE before scheduling a patient for transcatheter PFO closure. © 2013, Wiley Periodicals, Inc.

  6. Subacute and Chronic Left Ventricular Myocardial Scar: Accuracy of Texture Analysis on Nonenhanced Cine MR Images.

    PubMed

    Baessler, Bettina; Mannil, Manoj; Oebel, Sabrina; Maintz, David; Alkadhi, Hatem; Manka, Robert

    2017-08-23

    Purpose: To test whether texture analysis (TA) allows for the diagnosis of subacute and chronic myocardial infarction (MI) on noncontrast material-enhanced cine cardiac magnetic resonance (MR) images. Materials and Methods: In this retrospective, institutional review board-approved study, 120 patients who underwent cardiac MR imaging and showed large transmural (volume of enhancement on late gadolinium enhancement [LGE] images >20%, n = 72) or small (enhanced volume ≤20%, n = 48) subacute or chronic ischemic scars were included. Sixty patients with normal cardiac MR imaging findings served as control subjects. Regions of interest for TA encompassing the left ventricle were drawn by two blinded, independent readers on cine images in end systole by using a freely available software package. Stepwise dimension reduction and texture feature selection based on reproducibility, machine learning, and correlation analyses were performed for selecting features, enabling the diagnosis of MI on nonenhanced cine MR images by using LGE imaging as the standard of reference. Results: Five independent texture features allowed for differentiation between ischemic scar and normal myocardium on cine MR images in both subgroups: Teta1, Perc.01, Variance, WavEnHH.s-3, and S(5,5)SumEntrp (in patients with large MI: all P values < .001; in patients with small MI: Teta1 and Perc.01, P < .001; Variance, P = .026; WavEnHH.s-3, P = .007; S[5,5]SumEntrp, P = .045). Multiple logistic regression models revealed that combining the features Teta1 and Perc.01 resulted in the highest accuracy for diagnosing large and small MI on cine MR images, with an area under the curve of 0.93 and 0.92, respectively. Conclusion: This proof-of-concept study indicates that TA of nonenhanced cine MR images allows for the diagnosis of subacute and chronic MI with high accuracy. © RSNA, 2017. Online supplemental material is available for this article.
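
    The final modelling step described above, combining two texture features in a logistic regression and summarising discrimination with the area under the ROC curve, can be sketched as below. The synthetic values stand in for the Teta1 and Perc.01 features; they are not the study's data, and the in-sample AUC shown is only illustrative.

    ```python
    # Logistic regression on two (synthetic) texture features plus ROC AUC.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 60
    # label 1 = infarcted myocardium, 0 = normal; feature distributions shifted between groups
    y = np.repeat([0, 1], n)
    X = np.column_stack([
        np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(1.2, 1.0, n)]),  # stands in for "Teta1"
        np.concatenate([rng.normal(0.0, 1.0, n), rng.normal(0.9, 1.0, n)]),  # stands in for "Perc.01"
    ])

    model = LogisticRegression().fit(X, y)
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"apparent (in-sample) AUC = {auc:.2f}")
    ```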

  7. Analysis of the Accuracy of Ballistic Descent from a Circular Circumterrestrial Orbit

    NASA Astrophysics Data System (ADS)

    Sikharulidze, Yu. G.; Korchagin, A. N.

    2002-01-01

    The problem of transporting the results of experiments and observations to Earth arises from time to time in space research. Its simplest and lowest-cost solution is the employment of a small ballistic reentry spacecraft. Such a spacecraft has no system for controlling the descent trajectory in the atmosphere. This can result in a large spread of landing points, which makes it difficult to search for the spacecraft and very often prevents a safe landing. In this work, a choice of a compromise flight scheme is considered, which includes the optimum braking maneuver, adequate conditions of entry into the atmosphere with limited heating and overload, and also the possibility of landing within the limits of a circle with a radius of 12.5 km. The following disturbing factors were taken into account in the analysis of the landing accuracy: the errors of the braking impulse execution, the variations of the atmosphere density and the wind, the error of the specification of the ballistic coefficient of the reentry spacecraft, and a displacement of its center of mass from the symmetry axis. It is demonstrated that the optimum maneuver assures the maximum absolute value of the reentry angle and the insensitivity of the descent trajectory with respect to small errors of orientation of the braking engine in the plane of the orbit. It is also demonstrated that the possible error of the landing point due to the error of specification of the ballistic coefficient does not depend (in the linear approximation) upon its value and depends only upon the reentry angle and the accuracy of specification of this coefficient. A guided parachute with an aerodynamic efficiency of about two should be used at the last leg of the reentry trajectory. This will allow one to land in a prescribed range and to produce adequate conditions for the interception of the reentry spacecraft by a helicopter in order to prevent a rough landing.

  8. Accuracy of Computed Tomography-Based Navigation-Assisted Total Knee Arthroplasty: Outlier Analysis.

    PubMed

    Miyasaka, Teruyuki; Kurosaka, Daisaburo; Saito, Mitsuru; Omori, Toshiyuki; Ikeda, Ryo; Marumo, Keishi

    2017-01-01

    Achieving neutral limb alignment during total knee arthroplasty (TKA) has been identified as a potential factor in long-term prosthesis survival. This study aimed to analyze the accuracy of component orientation and postoperative alignment of the leg after computed tomography (CT)-based navigation-assisted TKA, compare these parameters with those of a conventional technique, and analyze differences in the data of outliers. We retrospectively compared the alignment of 130 TKAs performed with a CT-based navigation system with that of 67 arthroplasties done with a conventional system. The knee joints were evaluated using radiographs. Mean hip-knee-ankle (HKA) angle, frontal femoral component angle, and frontal tibial component angle were 180.7°, 88.8°, and 90.6°, respectively, for the navigation-assisted arthroplasties and 181.1°, 88.7°, and 90.2°, respectively, for the conventional arthroplasties. All preoperative leg axes of 10 outliers in the navigation group were >193°, whereas the data of 17 outliers in the conventional group were scattered. This study demonstrates significant improvements in component positioning with the CT-based navigation system. Furthermore, when analyzing cases with preoperative HKA angles ≤192°, no outliers were found in the navigation group, indicating high alignment accuracy. However, in cases with preoperative HKA angles ≥193°, outliers were found in both groups, and no significant difference between the groups was observed (P = .08). Detailed analysis of the outlier cases in the navigation group revealed that the femoral component was placed in the varus position. These findings indicate that the varus knee is an important factor influencing accurate positioning of the femoral component and the postoperative leg axis. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Analysis of linear measurement accuracy obtained by cone beam computed tomography (CBCT-NewTom VG).

    PubMed

    Moshfeghi, Mahkameh; Tavakoli, Mohammad Amin; Hosseini, Ehsan Tavakoli; Hosseini, Ali Tavakoli; Hosseini, Iman Tavakoli

    2012-12-01

    One of the major uses of cone beam computed tomography (CBCT) is presurgical implant planning. Linear measurement is used for the determination of the quantity of alveolar bone (height and width). Linear measurements are used in orthodontic analysis and definition of jaw tumor size. The objective of this study is to evaluate the accuracy of the linear measurement of CBCT (Newtom VG) in the axial and coronal planes, with two different voxel sizes. In this accuracy diagnostic study, 22 anatomic landmarks in four dry human skulls were marked by gutta-percha. Fifteen linear measurements were obtained using a digital caliper. These were considered to be the gold standard (real measurement). The skulls were scanned by CBCT (Newtom VG) at two settings: (a) Voxel size 0.3 mm (b) voxel size 0.15 mm High Resolution (HR). The radiographic distance measurements were made in the axial and coronal sections by three observers. The radiographic measurements were repeated two weeks later for evaluation of intraobserver reliability. SPSS software version 17 was used for data analysis. The level of significance was considered to be 5% (P ≤ 0.05). The mean differences of real and radiographic measurements were -0.10±0.99 mm in the axial sections, -0.27±1.07 mm in the coronal sections, +0.14±1.44 mm in the axial (HR) sections, and 0.02±1.4 mm in the coronal (HR) sections. The intraclass correlation (ICC) for CBCT measurements in the axial sections was 0.9944, coronal sections 0.9941, axial (HR) sections 0.9935, and coronal (HR) sections 0.9937. The statistical analysis showed high interobserver and intraobserver reliability (P ≤ 0.05). CBCT (Newtom VG) is highly accurate and reproducible in linear measurements in the axial and coronal image planes and in different areas of the maxillofacial region. According to the findings of the present study, a CBCT scan with a larger voxel size (0.3 mm in comparison to 0.15 mm) is recommended when the purpose of the CBCT scan is to measure

  10. Integrating Landsat and California pesticide exposure estimation at aggregated analysis scales: Accuracy assessment of rurality

    NASA Astrophysics Data System (ADS)

    Vopham, Trang Minh

    Pesticide exposure estimation in epidemiologic studies can be constrained to analysis scales commonly available for cancer data - census tracts and ZIP codes. Research goals included (1) demonstrating the feasibility of modifying an existing geographic information system (GIS) pesticide exposure method using California Pesticide Use Reports (PURs) and land use surveys to incorporate Landsat remote sensing and to accommodate aggregated analysis scales, and (2) assessing the accuracy of two rurality metrics (quality of geographic area being rural), Rural-Urban Commuting Area (RUCA) codes and the U.S. Census Bureau urban-rural system, as surrogates for pesticide exposure when compared to the GIS gold standard. Segments, derived from 1985 Landsat NDVI images, were classified using a crop signature library (CSL) created from 1990 Landsat NDVI images via a sum of squared differences (SSD) measure. Organochlorine, organophosphate, and carbamate Kern County PUR applications (1974-1990) were matched to crop fields using a modified three-tier approach. Annual pesticide application rates (lb/ac), and sensitivity and specificity of each rurality metric were calculated. The CSL (75 land use classes) classified 19,752 segments [median SSD 0.06 NDVI]. Of the 148,671 PUR records included in the analysis, Landsat contributed 3,750 (2.5%) additional tier matches. ZIP Code Tabulation Area (ZCTA) rates ranged between 0 and 1.36 lb/ac and census tract rates between 0 and 1.57 lb/ac. Rurality was a mediocre pesticide exposure surrogate; higher rates were observed among urban areal units. ZCTA-level RUCA codes offered greater specificity (39.1-60%) and sensitivity (25-42.9%). The U.S. Census Bureau metric offered greater specificity (92.9-97.5%) at the census tract level; sensitivity was low (≤6%). The feasibility of incorporating Landsat into a modified three-tier GIS approach was demonstrated. Rurality accuracy is affected by rurality metric, areal aggregation, pesticide chemical
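
    The sum-of-squared-differences (SSD) matching used to assign each Landsat NDVI segment to the closest entry in the crop signature library amounts to a nearest-signature classification. The sketch below shows this on invented NDVI vectors; the real library held 75 land-use classes.

    ```python
    # Nearest-signature (minimum SSD) classification of a multi-date NDVI segment.
    import numpy as np

    signature_library = {            # class -> multi-date NDVI signature (invented values)
        "cotton":  np.array([0.20, 0.45, 0.70, 0.55]),
        "grapes":  np.array([0.30, 0.50, 0.60, 0.50]),
        "fallow":  np.array([0.15, 0.18, 0.20, 0.17]),
    }

    def classify(segment_ndvi):
        ssd = {name: float(np.sum((segment_ndvi - sig) ** 2))
               for name, sig in signature_library.items()}
        best = min(ssd, key=ssd.get)
        return best, ssd[best]

    label, score = classify(np.array([0.22, 0.47, 0.66, 0.52]))
    print(f"assigned class: {label} (SSD = {score:.3f})")
    ```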

  11. Stringent mating-type-regulated auxotrophy increases the accuracy of systematic genetic interaction screens with Saccharomyces cerevisiae mutant arrays.

    PubMed

    Singh, Indira; Pass, Rebecca; Togay, Sine Ozmen; Rodgers, John W; Hartman, John L

    2009-01-01

    A genomic collection of haploid Saccharomyces cerevisiae deletion strains provides a unique resource for systematic analysis of gene interactions. Double-mutant haploid strains can be constructed by the synthetic genetic array (SGA) method, wherein a query mutation is introduced by mating to mutant arrays, selection of diploid double mutants, induction of meiosis, and selection of recombinant haploid double-mutant progeny. The mechanism of haploid selection is mating-type-regulated auxotrophy (MRA), by which prototrophy is restricted to a particular haploid genotype generated only as a result of meiosis. MRA escape leads to false-negative genetic interaction results because postmeiotic haploids that are supposed to be under negative selection instead proliferate and mate, forming diploids that are heterozygous at interacting loci, masking phenotypes that would be observed in a pure haploid double-mutant culture. This work identified factors that reduce MRA escape, including insertion of terminator and repressor sequences upstream of the MRA cassette, deletion of silent mating-type loci, and utilization of alpha-type instead of a-type MRA. Modifications engineered to reduce haploid MRA escape reduced false negative results in SGA-type analysis, resulting in >95% sensitivity for detecting gene-gene interactions.

  12. Accuracy in Rietveld quantitative phase analysis: a comparative study of strictly monochromatic Mo and Cu radiations.

    PubMed

    León-Reina, L; García-Maté, M; Álvarez-Pinazo, G; Santacruz, I; Vallcorba, O; De la Torre, A G; Aranda, M A G

    2016-06-01

    This study reports 78 Rietveld quantitative phase analyses using Cu Kα1, Mo Kα1 and synchrotron radiations. Synchrotron powder diffraction has been used to validate the most challenging analyses. From the results for three series with increasing contents of an analyte (an inorganic crystalline phase, an organic crystalline phase and a glass), it is inferred that Rietveld analyses from high-energy Mo Kα1 radiation have slightly better accuracies than those obtained from Cu Kα1 radiation. This behaviour has been established from the results of the calibration graphics obtained through the spiking method and also from Kullback-Leibler distance statistic studies. This outcome is explained, in spite of the lower diffraction power for Mo radiation when compared to Cu radiation, as arising because of the larger volume tested with Mo and also because higher energy allows one to record patterns with fewer systematic errors. The limit of detection (LoD) and limit of quantification (LoQ) have also been established for the studied series. For similar recording times, the LoDs in Cu patterns, ∼0.2 wt%, are slightly lower than those derived from Mo patterns, ∼0.3 wt%. The LoQ for a well crystallized inorganic phase using laboratory powder diffraction was established to be close to 0.10 wt% in stable fits with good precision. However, the accuracy of these analyses was poor with relative errors near to 100%. Only contents higher than 1.0 wt% yielded analyses with relative errors lower than 20%.

  13. Accuracy of implant impressions without impression copings: a three-dimensional analysis.

    PubMed

    Kwon, Joo-Hyun; Son, Yong-Ha; Han, Chong-Hyun; Kim, Sunjai

    2011-06-01

    Implant impressions without impression copings can be used for cement-retained implant restorations. A comparison of the accuracy of implant impressions with and without impression copings is needed. The purpose of this study was to evaluate and compare the dimensional accuracy of implant definitive casts that are fabricated by implant impressions with and without impression copings. An acrylic resin maxillary model was fabricated, and 3 implant replicas were secured in the right second premolar, first, and second molars. Two impression techniques were used to fabricate definitive casts (n=10). For the coping group (Group C), open tray impression copings were used for the final impressions. For the no-coping group (Group NC), cementable abutments were connected to the implant replicas, and final impressions were made assuming the abutments were prepared teeth. Computerized calculation of the centroids and long axes of the implant or stone abutment replicas was performed. The Mann-Whitney U test analyzed the amount of linear and rotational distortion between groups (α =.05). At the first molar site, Group NC showed significantly greater linear distortion along the Y-axis, with a small difference between the groups (Group C, 7.8 ± 7.4 μm; Group NC, 19.5 ± 12.2 μm). At the second molar site, increased distortion was noted in Group NC for every linear and rotational variable, except for linear distortion along the Z-axis. Implant impressions with open tray impression copings produced more accurate definitive casts than those fabricated without impression copings, especially those with greater inter-abutment distance. Copyright © 2011 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  14. The requirements for the future e-beam mask writer: statistical analysis of pattern accuracy

    NASA Astrophysics Data System (ADS)

    Lee, Sang Hee; Choi, Jin; Kim, Hee Bom; Kim, Byung Gook; Cho, Han-Ku

    2011-11-01

    As semiconductor features shrink in size and pitch, extreme control of CD uniformity, MTT and image placement is needed for mask fabrication with e-beam lithography. Among the many sources of CD and image placement error, the error resulting from the e-beam mask writer is becoming more important than before. CD and positioning errors from the e-beam mask writer are mainly related to imperfect e-beam deflection accuracy in the optical system and to charging and contamination of the column. To avoid these errors, the e-beam mask writer should be designed with these effects taken into account. However, writing speed is given the highest priority in machine design, because the e-beam shot count increases rapidly due to design shrink and aggressive OPC. The increased shot count can cause a pattern shift problem, a statistical issue resulting from the e-beam deflection error and the total shot count in the layout, and it also degrades CD and image placement quality. In this report, a statistical approach to the CD and image placement error caused by e-beam shot position error is presented. It is estimated for various writing conditions, including the intrinsic e-beam positioning error of the VSB writer. From the simulation study, the e-beam shot position accuracy required to avoid the pattern shift problem at the 22 nm node and beyond is estimated, taking the total shot count into account. The required local CD uniformity is calculated for various e-beam writing conditions. The image placement error is also simulated for various conditions, including the e-beam writing field position error. Consequently, the requirements for the future e-beam mask writer and the writing conditions are discussed. Finally, in terms of e-beam shot noise, the LER caused by exposure dose and shot position error is studied for future e-beam mask writing at the 22 nm node and beyond.
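
    A simplified Monte Carlo picture of the statistical pattern-shift issue: each feature is written with a few shots, every shot lands with a random placement error, and the feature shifts by the mean of its shot errors; with an enormous total shot count the worst-case shift across all features grows well beyond the per-feature sigma. This is only my toy model of the effect described in the abstract, and the numbers (sigma, shots per feature, feature count) are assumptions, not values from the paper.

    ```python
    # Toy Monte Carlo of feature (pattern) shift induced by random shot position error.
    import numpy as np

    rng = np.random.default_rng(1)
    sigma_shot = 1.0          # 1-sigma shot position error, nm (assumed)
    shots_per_feature = 4     # assumed
    n_features = 1_000_000    # stand-in for the enormous shot/feature count of a mask

    shot_errors = rng.normal(0.0, sigma_shot, size=(n_features, shots_per_feature))
    feature_shift = shot_errors.mean(axis=1)     # shift of each feature's centroid, nm

    print(f"per-feature shift sigma = {feature_shift.std():.2f} nm "
          f"(theory: {sigma_shot / np.sqrt(shots_per_feature):.2f} nm)")
    print(f"worst feature shift     = {np.abs(feature_shift).max():.2f} nm")
    ```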

  15. In Vivo Analysis of Human T-Cell Leukemia Virus Type 1 Reverse Transcription Accuracy

    PubMed Central

    Mansky, Louis M.

    2000-01-01

    Several studies have indicated that the genetic diversity of human T-cell leukemia virus type 1 (HTLV-1), a virus associated with adult T-cell leukemia, is significantly lower than that of other retroviruses, including that of human immunodeficiency virus type 1 (HIV-1). To test whether HTLV-1 variation is lower than other retroviruses, a tractable vector system has been developed to measure reverse transcription accuracy in one round of HTLV-1 replication. This system consists of a HTLV-1 vector that contains a cassette with the neomycin phosphotransferase (neo) gene, a bacterial origin of DNA replication, and the lacZα peptide gene region (the mutational target). The vector was replicated by trans-complementation with helper plasmids. The in vivo mutation rate for HTLV-1 was determined to be 7 × 10−6 mutations per target base pair per replication cycle. The majority of the mutations identified were base substitution mutations, namely, G-to-A and C-to-T transitions, frameshift mutations, and deletion mutations. Mutation of the methionine residue in the conserved YMDD motif of the HTLV-1 reverse transcriptase to either alanine or valine (i.e., M188A or M188V) led to a factor of two increase in the rate of mutation, indicating the role of this motif in enzyme accuracy. The HTLV-1 in vivo mutation rate is comparable to that of bovine leukemia virus (BLV), another member of the HTLV/BLV genus of retroviruses, and is about fourfold lower than that of HIV-1. These observations indicate that while the mutation rate of HTLV-1 is significantly lower than HIV-1, this lower rate alone would not explain the low diversity in HTLV-1 isolates, supporting the hypothesis that HTLV-1 replicates primarily as a provirus during cellular DNA replication rather than as a virus via reverse transcription. PMID:11000222

  16. Accuracy in Rietveld quantitative phase analysis: a comparative study of strictly monochromatic Mo and Cu radiations

    PubMed Central

    León-Reina, L.; García-Maté, M.; Álvarez-Pinazo, G.; Santacruz, I.; Vallcorba, O.; De la Torre, A. G.; Aranda, M. A. G.

    2016-01-01

    This study reports 78 Rietveld quantitative phase analyses using Cu Kα1, Mo Kα1 and synchrotron radiations. Synchrotron powder diffraction has been used to validate the most challenging analyses. From the results for three series with increasing contents of an analyte (an inorganic crystalline phase, an organic crystalline phase and a glass), it is inferred that Rietveld analyses from high-energy Mo Kα1 radiation have slightly better accuracies than those obtained from Cu Kα1 radiation. This behaviour has been established from the results of the calibration graphics obtained through the spiking method and also from Kullback–Leibler distance statistic studies. This outcome is explained, in spite of the lower diffraction power for Mo radiation when compared to Cu radiation, as arising because of the larger volume tested with Mo and also because higher energy allows one to record patterns with fewer systematic errors. The limit of detection (LoD) and limit of quantification (LoQ) have also been established for the studied series. For similar recording times, the LoDs in Cu patterns, ∼0.2 wt%, are slightly lower than those derived from Mo patterns, ∼0.3 wt%. The LoQ for a well crystallized inorganic phase using laboratory powder diffraction was established to be close to 0.10 wt% in stable fits with good precision. However, the accuracy of these analyses was poor with relative errors near to 100%. Only contents higher than 1.0 wt% yielded analyses with relative errors lower than 20%. PMID:27275132

  17. Treatment planning using MRI data: an analysis of the dose calculation accuracy for different treatment regions

    PubMed Central

    2010-01-01

    Background: Because of superior soft tissue contrast, the use of magnetic resonance imaging (MRI) as a complement to computed tomography (CT) in the target definition procedure for radiotherapy is increasing. To keep the workflow simple and cost effective and to reduce patient dose, it is natural to strive for a treatment planning procedure based entirely on MRI. In the present study, we investigate the dose calculation accuracy for different treatment regions when using bulk density assignments on MRI data and compare it to treatment planning that uses CT data. Methods: MR and CT data were collected retrospectively for 40 patients with prostate, lung, head and neck, or brain cancers. Comparisons were made between calculations on CT data with and without inhomogeneity corrections and on MRI or CT data with bulk density assignments. The bulk densities were assigned using manual segmentation of tissue, bone, lung, and air cavities. Results: The deviations between calculations on CT data with inhomogeneity correction and on bulk density assigned MR data were small. The maximum difference in the number of monitor units required to reach the prescribed dose was 1.6%. This result also includes effects of possible geometrical distortions. Conclusions: The dose calculation accuracy at the investigated treatment sites is not significantly compromised when using MRI data when adequate bulk density assignments are made. With respect to treatment planning, MRI can replace CT in all steps of the treatment workflow, reducing the radiation exposure to the patient, removing any systematic registration errors that may occur when combining MR and CT, and decreasing time and cost for the extra CT investigation. PMID:20591179

  18. Semi-automatic software increases CT measurement accuracy but not response classification of colorectal liver metastases after chemotherapy.

    PubMed

    van Kessel, Charlotte S; van Leeuwen, Maarten S; Witteveen, Petronella O; Kwee, Thomas C; Verkooijen, Helena M; van Hillegersberg, Richard

    2012-10-01

    This study evaluates intra- and interobserver variability of automatic diameter and volume measurements of colorectal liver metastases (CRLM) before and after chemotherapy and its influence on response classification. Pre- and post-chemotherapy CT scans of 33 patients with 138 CRLM were evaluated. Two observers measured all metastases three times on pre- and post-chemotherapy CT scans, using three different techniques: manual diameter (MD), automatic diameter (AD) and automatic volume (AV). RECIST 1.0 criteria were used to define response classification. For each technique, we assessed intra- and interobserver reliability by determining the intraclass correlation coefficient (α-level 0.05). Intra-observer agreement was estimated by the variance coefficient (%). For inter-observer agreement, the relative measurement error (%) was calculated using Bland-Altman analysis. In addition, we compared agreement in response classification by calculating kappa scores (κ) and estimating proportions of discordance between methods (%). Intra-observer variability was 6.05%, 4.28% and 12.72% for MD, AD and AV, respectively. Inter-observer variability was 4.23%, 2.02% and 14.86% for MD, AD and AV, respectively. Chemotherapy marginally affected these estimates. Agreement in response classification did not improve using AD or AV (MD κ=0.653, AD κ=0.548, AV κ=0.548), and substantial discordance between observers was observed with all three methods (MD 17.8%, AD 22.2%, AV 22.2%). Semi-automatic software allows repeatable and reproducible measurement of both diameters and volumes of CRLM, but does not reduce variability in response classification. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
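
    The two agreement statistics used here, a Bland-Altman style measurement error between observers and Cohen's kappa for response classification, are sketched below on invented measurements and response categories; they are not the study's data.

    ```python
    # Bland-Altman bias / limits of agreement plus Cohen's kappa on invented data.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    obs1 = np.array([31.0, 18.5, 42.0, 25.5, 12.0])   # lesion diameters, mm, observer 1
    obs2 = np.array([30.2, 19.5, 40.5, 26.5, 12.6])   # observer 2

    diff = obs1 - obs2
    mean = (obs1 + obs2) / 2
    bias = diff.mean()
    loa  = 1.96 * diff.std(ddof=1)
    print(f"bias = {bias:.2f} mm, limits of agreement = +/- {loa:.2f} mm "
          f"({100 * loa / mean.mean():.1f}% of mean size)")

    # Hypothetical RECIST response categories assigned by each observer
    resp1 = ["PR", "SD", "SD", "PD", "PR"]
    resp2 = ["PR", "SD", "PD", "PD", "PR"]
    print(f"kappa = {cohen_kappa_score(resp1, resp2):.2f}")
    ```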

  19. Concordance analysis and diagnostic test accuracy review of IDH1 immunohistochemistry in glioblastoma.

    PubMed

    Pyo, Jung-Soo; Kim, Nae Yu; Kim, Roy Hyun Jai; Kang, Guhyun

    2016-10-01

    The study investigated isocitrate dehydrogenase (IDH) 1 immunohistochemistry (IHC) positive rate and concordance rate between IDH1 IHC and molecular test in glioblastoma. The current study included 1360 glioblastoma cases from sixteen eligible studies. Meta-analysis, including subgroup analysis by antibody clones and cut-off values, for IDH1 IHC positive rate was conducted. In addition, we performed a concordance analysis and diagnostic test accuracy review between IDH1 IHC and molecular tests. The estimated rates of IDH1 IHC were 0.106 [95 % confidence interval (CI) 0.085-0.132]. The IDH1 IHC positive rate of primary and secondary glioblastomas was 0.049 (95 % CI 0.023-0.99) and 0.729 (95 % CI 0.477-0.889), respectively. The overall concordance rate between IDH1 IHC and molecular test was 0.947 (95 % CI 0.878-0.978). In IDH1 IHC-positive and negative subgroups, the concordance rate was 0.842 (95 % CI 0.591-0.952) and 0.982 (95 % CI 0.941-0.995), respectively. The pooled sensitivity and specificity for IDH1 IHC were 1.00 (95 % CI 0.82-1.00) and 0.99 (95 % CI 0.96-1.00), respectively. IDH1 IHC is an accurate test for IDH1 mutation in glioblastoma patients. Further cumulative studies for evaluation criteria of IDH1 IHC will determine how to best apply this approach in daily practice.

  20. Diagnostic Accuracy of Transcranial Doppler for Brain Death Confirmation: Systematic Review and Meta-Analysis.

    PubMed

    Chang, J J; Tsivgoulis, G; Katsanos, A H; Malkoff, M D; Alexandrov, A V

    2016-03-01

    Transcranial Doppler is a useful ancillary test for brain death confirmation because it is safe, noninvasive, and done at the bedside. Transcranial Doppler confirms brain death by evaluating cerebral circulatory arrest. Case series studies have generally reported good correlations between transcranial Doppler confirmation of cerebral circulatory arrest and clinical confirmation of brain death. The purpose of this study is to evaluate the utility of transcranial Doppler as an ancillary test in brain death confirmation. We conducted a systematic review of the literature and a diagnostic test accuracy meta-analysis to compare the sensitivity and specificity of transcranial Doppler confirmation of cerebral circulatory arrest, by using clinical confirmation of brain death as the criterion standard. We identified 22 eligible studies (1671 patients total), dating from 1987 to 2014. Pooled sensitivity and specificity estimates from 12 study protocols that reported data for the calculation of both values were 0.90 (95% CI, 0.87-0.92) and 0.98 (95% CI, 0.96-0.99), respectively. Between-study differences in the diagnostic performance of transcranial Doppler were found for both sensitivity (I(2) = 76%; P < .001) and specificity (I(2) = 74.3%; P < .001). The threshold effect was not significant (Spearman r = -0.173; P = .612). The area under the curve with the corresponding standard error (SE) was 0.964 ± 0.018, while index Q test ± SE was estimated at 0.910 ± 0.028. The results of this meta-analysis suggest that transcranial Doppler is a highly accurate ancillary test for brain death confirmation. However, transcranial Doppler evaluates cerebral circulatory arrest rather than brain stem function, and this limitation needs to be taken into account when interpreting the results of this meta-analysis. © 2016 by American Journal of Neuroradiology.
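
    The heterogeneity statistic I² quoted above is derived from Cochran's Q. The sketch below shows the fixed-effect calculation on invented per-study sensitivities and standard errors; real syntheses typically work on logit-transformed proportions.

    ```python
    # Cochran's Q and I^2 from inverse-variance weights (illustrative values only).
    import numpy as np

    est = np.array([0.88, 0.93, 0.85, 0.95, 0.90])   # per-study sensitivity estimates (invented)
    se  = np.array([0.03, 0.02, 0.04, 0.02, 0.03])   # their standard errors (invented)

    w = 1.0 / se**2
    pooled = np.sum(w * est) / np.sum(w)              # fixed-effect pooled estimate
    Q = np.sum(w * (est - pooled) ** 2)
    df = len(est) - 1
    I2 = max(0.0, (Q - df) / Q) * 100

    print(f"pooled = {pooled:.3f}, Q = {Q:.2f} on {df} df, I^2 = {I2:.1f}%")
    ```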

  1. Assessment of accuracy of immediate blood separation method: a novel blood analysis strategy

    PubMed Central

    Nakayama, Kunio

    2010-01-01

    Objectives: This study assesses the accuracy of the immediate blood separation method, a novel blood sampling strategy that enables blood analysis in any possible location. Methods: We conducted a cross-validation study between data from the immediate blood separation and conventional methods. During the annual medical examinations in 2006 of a company located in an Osaka suburb, blood was drawn from workers (n = 256; males 200, females 56) by medical personnel, both by puncturing the middle finger and by venipuncture of the antecubital vein. The following nine parameters were evaluated by autoanalyzer: aspartate aminotransferase (AST), alanine aminotransferase (ALT), γ-glutamyl transpeptidase (γGT), triglyceride, total cholesterol, high-density lipoprotein (HDL) cholesterol, urea nitrogen, uric acid, and creatinine. Results: After comparing data from the two methods using correlation and regression analysis, we found close agreement, with R2 values (coefficients of determination) ranging from 0.996 to 1.000 for each item. The R2 value was 0.998 for Log AST, 0.997 for Log ALT, 0.999 for Log γGT, 1.000 for Log triglyceride, 1.000 for total cholesterol, 0.999 for HDL cholesterol, 0.998 for urea nitrogen, 0.999 for uric acid, and 0.996 for creatinine. The relationship was satisfactory for all nine items tested. Conclusion: Our results support the reliability of data from the immediate blood separation method in an occupational health setting. The method enables self-testing by medically unskilled people, which is an important process to prevent lifestyle-related diseases. PMID:21432211
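
    The cross-validation statistic used here is the coefficient of determination from a simple linear regression of the finger-stick values on the venous values. The sketch below uses invented analyte values, not the study's data.

    ```python
    # R^2 from ordinary least squares via scipy.stats.linregress.
    from scipy.stats import linregress

    venous    = [22.0, 35.0, 18.0, 50.0, 27.0, 41.0]   # e.g. AST from venipuncture, U/L (invented)
    capillary = [21.5, 36.0, 17.5, 49.0, 27.5, 42.0]   # same analyte, finger-stick (invented)

    fit = linregress(venous, capillary)
    print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.2f}, "
          f"R^2 = {fit.rvalue ** 2:.4f}")
    ```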

  2. Evaluation of factors affecting the accuracy of impressions using quantitative surface analysis.

    PubMed

    Lee, I K; DeLong, R; Pintado, M R; Malik, R

    1995-01-01

    Impression material goes from a plastic to an elastic state during setting. Movement of the impression and excessive seating pressure during this transition can cause distortion in the impressions. The purpose of this study is to determine if the impression distortion is related to movement during setting or to distortion of the putty phase in the two-step impression technique. A master model of a maxillary quadrant of teeth was impressed using four different procedures: 1) one-step technique without movement (1S-NM); 2) one-step technique with movement (1S-M); 3) two-step technique without movement (2S-NM); and 4) two-step technique with movement (2S-M). An artificial oral environment and the surface analysis technique of the Minnesota Dental Research Center for Biomaterials and Biomechanics were used to produce the impressions and measure their accuracy. A digitized image of the first premolar of the master model was aligned with a digitized image of the first premolar of each epoxy model using AnSur. The root mean squared difference (RMS) between the aligned images is a measure of the distortion. The corresponding RMS values for the different methods were: 1S-NM = 23.7 +/- 9.21; 1S-M = 20.4 +/- 3.9; 2S-NM = 20.5 +/- 7.7; 2S-M = 21.3 +/- 4.4. Statistical analysis using a two-way analysis of variance showed no difference at the 0.05 level of significance. Pairwise comparison using the Tukey method showed that neither technique (one-step vs two-step) nor movement is a significant factor. These results showed that low seating pressure will not cause any greater distortions in the two-step impression technique than in the one-step technique, and minor movement during the setting of the impression material will not cause distortion.

  3. Measuring Speech Recognition Proficiency: A Psychometric Analysis of Speed and Accuracy

    ERIC Educational Resources Information Center

    Rader, Martha H.; Bailey, Glenn A.; Kurth, Linda A.

    2008-01-01

    This study examined the validity of various measures of speed and accuracy for assessing proficiency in speech recognition. The study specifically compared two different word-count indices for speed and accuracy (the 5-stroke word and the 1.4-syllable standard word) on a timing administered to 114 speech recognition students measured at 1-, 2-,…

  5. Analysis of Measurement Accuracy for Craniovertebral Junction Pathology : Most Reliable Method for Cephalometric Analysis

    PubMed Central

    Lee, Ho Jin; Kim, Il Sup; Kwon, Jae Yeol; Lee, Sang Won

    2013-01-01

    Objective: This study was designed to determine the most reliable cephalometric measurement technique in the normal population and in patients with basilar invagination (BI). Methods: Twenty-two lateral radiographs of BI patients and 25 lateral cervical radiographs of an age- and sex-matched normal population were selected and measured on two separate occasions by three spine surgeons using six different measurements. Statistical analysis including the intraclass correlation coefficient (ICC) was carried out using the SPSS software (V. 12.0). Results: The Redlund-Johnell and Modified (M)-Ranawat methods had the highest ICC scores in both the normal and BI groups in the inter-observer study. The M-Ranawat method (0.83) had the highest ICC score in the normal group, and the Redlund-Johnell method (0.80) had the highest ICC score in the BI group in the intra-observer test. The McGregor line had the lowest ICC score and a poor ICC grade in both groups in the intra-observer study. Generally, the measurement methods using the odontoid process did not produce consistent results due to inter- and intra-observer differences in determining the position of the odontoid tip. The opisthion and the caudal point of the occipital midline curve are somewhat ambiguous landmarks, which induce variable ICC scores. Conclusion: Contrary to other studies, the Ranawat method had a lower ICC score in the inter-observer study. The C2 end-plate and C1 arch can be the most reliable anatomical landmarks. PMID:24294449

  6. Accuracy analysis of direct georeferenced UAV images utilising low-cost navigation sensors

    NASA Astrophysics Data System (ADS)

    Briese, Christian; Wieser, Martin; Verhoeven, Geert; Glira, Philipp; Doneus, Michael; Pfeifer, Norbert

    2014-05-01

    control points should be used to improve the estimated values, especially to decrease the amount of systematic errors. For the bundle block adjustment, the calibration of the camera and its temporal stability must additionally be determined. This contribution presents, alongside the theory, a practical study on the accuracy analysis of directly georeferenced UAV imagery using low-cost navigation sensors. The analysis was carried out within the research project ARAP (automated (ortho)rectification of archaeological aerial photographs). The utilized UAS consists of the airplane "MAJA", manufactured by "Bormatec" (length: 1.2 m, wingspan: 2.2 m), equipped with the autopilot "ArduPilot Mega 2.5". For image acquisition the camera "Ricoh GR Digital IV" is utilised. The autopilot includes a GNSS receiver capable of DGPS (EGNOS), an inertial measurement system (INS), a barometer, and a magnetometer. In the study the achieved accuracies for the estimated position and orientation of the images are presented. The paper concludes with a summary of the remaining error sources and their possible correction by applying further improvements to the utilised equipment and the direct georeferencing process.

  7. Accuracy of different oxygenation indices in estimating intrapulmonary shunting at increasing infusion rates of dobutamine in horses under general anaesthesia.

    PubMed

    Briganti, A; Portela, D A; Grasso, S; Sgorbini, M; Tayari, H; Bassini, J R Fusar; Vitale, V; Romano, M S; Crovace, A; Breghi, G; Staffieri, F

    2015-06-01

    The aim of this study was to evaluate the correlation of commonly used oxygenation indices with venous admixture (Qs/Qt) in anaesthetised horses under different infusion rates of dobutamine. Six female horses were anaesthetised with acepromazine, xylazine, diazepam, ketamine, and isoflurane, and then intubated and mechanically ventilated with 100% O2. A Swan-Ganz catheter was introduced into the left jugular vein and its tip advanced into the pulmonary artery. Horses received different standardised rates of dobutamine. For each horse, eight samples of arterial and mixed venous blood were simultaneously obtained at fixed times. Arterial and venous haemoglobin (Hb) concentration and O2 saturation, arterial oxygen partial pressure (PaO2), venous oxygen partial pressure (PvO2), and barometric pressure were measured. Arterial (CaO2), mixed venous (CvO2), and capillary (Cc'O2) oxygen contents were calculated using standard formulae. The correlations of F-shunt, the arterial oxygen tension to fraction of inspired oxygen ratio (PaO2/FiO2), the arterial to alveolar oxygen tension ratio (PaO2/PAO2), the alveolar to arterial oxygen tension difference (P[A - a]O2), and the respiratory index (P[A - a]O2/PaO2) with Qs/Qt were tested with linear regression analysis. The goodness-of-fit for each calculated formula was evaluated by means of the coefficient of determination (r(2)). The agreement between Qs/Qt and F-shunt was analysed with the Bland-Altman test. All tested oxygen tension-based indices were weakly correlated (r(2) < 0.2) with Qs/Qt, whereas F-shunt showed a stronger correlation (r(2) = 0.73). F-shunt also showed substantial agreement with Qs/Qt, independent of the dobutamine infusion rate. F-shunt correlated better with Qs/Qt than the other oxygenation indices in isoflurane-anaesthetised horses under different infusion rates of dobutamine. Copyright © 2015 Elsevier Ltd. All rights reserved.
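
    The oxygen tension-based indices and the F-shunt compared in this study can be computed as below. The formulas follow common conventions (O2-carrying capacity 1.34 mL/g Hb, dissolved O2 0.0031 mL/dL/mmHg, alveolar gas equation with water vapour pressure 47 mmHg, and a fixed arteriovenous content difference of 3.5 mL/dL for the F-shunt); the input values are invented, not measurements from the horses.

    ```python
    # Oxygenation indices and F-shunt on assumed example values.
    fio2, pb, paco2, rq = 1.0, 760.0, 45.0, 0.8      # inspired O2 fraction, mmHg, mmHg, -
    pao2, sao2, hb      = 250.0, 0.99, 11.0          # mmHg, fraction, g/dL

    pAo2 = fio2 * (pb - 47.0) - paco2 / rq           # alveolar O2 tension, mmHg
    cao2 = 1.34 * hb * sao2 + 0.0031 * pao2          # arterial O2 content, mL/dL
    cco2 = 1.34 * hb * 1.00 + 0.0031 * pAo2          # end-capillary content (ScO2 assumed ~100%)

    indices = {
        "PaO2/FiO2":         pao2 / fio2,
        "PaO2/PAO2":         pao2 / pAo2,
        "P(A-a)O2":          pAo2 - pao2,
        "respiratory index": (pAo2 - pao2) / pao2,
        "F-shunt":           (cco2 - cao2) / (cco2 - cao2 + 3.5),
    }
    for name, value in indices.items():
        print(f"{name:18s} = {value:.3f}")
    ```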

  8. The psychology of intelligence analysis: drivers of prediction accuracy in world politics.

    PubMed

    Mellers, Barbara; Stone, Eric; Atanasov, Pavel; Rohrbaugh, Nick; Metz, S Emlen; Ungar, Lyle; Bishop, Michael M; Horowitz, Michael; Merkle, Ed; Tetlock, Philip

    2015-03-01

    This article extends psychological methods and concepts into a domain that is as profoundly consequential as it is poorly understood: intelligence analysis. We report findings from a geopolitical forecasting tournament that assessed the accuracy of more than 150,000 forecasts of 743 participants on 199 events occurring over 2 years. Participants were above average in intelligence and political knowledge relative to the general population. Individual differences in performance emerged, and forecasting skills were surprisingly consistent over time. Key predictors were (a) dispositional variables of cognitive ability, political knowledge, and open-mindedness; (b) situational variables of training in probabilistic reasoning and participation in collaborative teams that shared information and discussed rationales (Mellers, Ungar, et al., 2014); and (c) behavioral variables of deliberation time and frequency of belief updating. We developed a profile of the best forecasters; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness. They had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments. Last but not least, they viewed forecasting as a skill that required deliberate practice, sustained effort, and constant monitoring of current affairs. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  9. Diagnostic Accuracy of Lymphoscintigraphy for Lymphedema and Analysis of False-Negative Tests

    PubMed Central

    Hassanein, Aladdin H.; Maclellan, Reid A.; Grant, Frederick D.

    2017-01-01

    Background: Lymphedema is the chronic enlargement of tissue due to inadequate lymphatic function. Diagnosis is made by history and physical examination and confirmed with lymphoscintigraphy. The purpose of this study was to assess the accuracy of lymphoscintigraphy for the diagnosis of lymphedema and to determine characteristics of patients with false-negative tests. Methods: Individuals referred to our lymphedema program with “lymphedema” between 2009 and 2016 were analyzed. Subjects were assessed by history, physical examination, and lymphoscintigraphy. Patient age at presentation, duration of lymphedema, location of disease, gender, previous infections, and lymphedema type were analyzed. Results: The study included 227 patients (454 limbs); lymphedema was diagnosed clinically in 169 subjects and confirmed by lymphoscintigraphy in 162 (117 primary, 45 secondary; 96% sensitivity). Fifty-eight patients were thought to have a condition other than lymphedema, and all had negative lymphoscintigrams (100% specificity). A subgroup analysis of the 7 individuals with lymphedema clinically, but normal lymphoscintigrams, showed that all had primary lymphedema; duration of disease and infection history were not different between true-positive and false-negative lymphoscintigram results (P = 0.5). Two patients with a false-negative test underwent repeat lymphoscintigraphy, which then showed lymphatic dysfunction consistent with lymphedema. Conclusion: Lymphoscintigraphy is very sensitive and specific for lymphedema. All patients with false-negative studies had primary lymphedema. A patient with a high clinical suspicion of lymphedema and a normal lymphoscintigram should be treated conservatively for the disease and undergo repeat lymphoscintigraphy. PMID:28831342

  10. Accuracy of ionospheric models used in GNSS and SBAS: methodology and analysis

    NASA Astrophysics Data System (ADS)

    Rovira-Garcia, A.; Juan, J. M.; Sanz, J.; González-Casado, G.; Ibáñez, D.

    2016-03-01

    The characterization of the accuracy of ionospheric models currently used in global navigation satellite systems (GNSSs) is a long-standing issue. The characterization remains a challenging problem owing to the lack of sufficiently accurate slant ionospheric determinations to be used as a reference. The present study proposes a methodology based on the comparison of the predictions of any ionospheric model with actual unambiguous carrier-phase measurements from a global distribution of permanent receivers. The differences are separated as hardware delays (a receiver constant plus a satellite constant) per day. The present study was conducted for the entire year of 2014, i.e. during the last solar cycle maximum. The ionospheric models assessed are the operational models broadcast by the global positioning system (GPS) and Galileo constellations, the satellite-based augmentation system (SBAS) (i.e. European Geostationary Navigation Overlay System (EGNOS) and wide area augmentation system (WAAS)), a number of post-process global ionospheric maps (GIMs) from different International GNSS Service (IGS) analysis centres (ACs) and, finally, a more sophisticated GIM computed by the research group of Astronomy and GEomatics (gAGE). Ionospheric models based on GNSS data and represented on a grid (IGS GIMs or SBAS) correct about 85 % of the total slant ionospheric delay, whereas the models broadcasted in the navigation messages of GPS and Galileo only account for about 70 %. Our gAGE GIM is shown to correct 95 % of the delay. The proposed methodology appears to be a useful tool to improve current ionospheric models.

  11. The reliability, validity, and accuracy of self-reported absenteeism from work: a meta-analysis.

    PubMed

    Johns, Gary; Miraglia, Mariella

    2015-01-01

    Because of a variety of access limitations, self-reported absenteeism from work is often employed in research concerning health, organizational behavior, and economics, and it is ubiquitous in large scale population surveys in these domains. Several well established cognitive and social-motivational biases suggest that self-reports of absence will exhibit convergent validity with records-based measures but that people will tend to underreport the behavior. We used meta-analysis to summarize the reliability, validity, and accuracy of absence self-reports. The results suggested that self-reports of absenteeism offer adequate test-retest reliability and that they exhibit reasonably good rank order convergence with organizational records. However, people have a decided tendency to underreport their absenteeism, although such underreporting has decreased over time. Also, self-reports were more accurate when sickness absence rather than absence for any reason was probed. It is concluded that self-reported absenteeism might serve as a valid measure in some correlational research designs. However, when accurate knowledge of absolute absenteeism levels is essential, the tendency to underreport could result in flawed policy decisions.

  12. A novel method for crosstalk analysis of biological networks: improving accuracy of pathway annotation

    PubMed Central

    Ogris, Christoph; Guala, Dimitri; Helleday, Thomas; Sonnhammer, Erik L. L.

    2017-01-01

    Analyzing gene expression patterns is a mainstay to gain functional insights of biological systems. A plethora of tools exist to identify significant enrichment of pathways for a set of differentially expressed genes. Most tools analyze gene overlap between gene sets and are therefore severely hampered by the current state of pathway annotation, yet at the same time they run a high risk of false assignments. A way to improve both true positive and false positive rates (FPRs) is to use a functional association network and instead look for enrichment of network connections between gene sets. We present a new network crosstalk analysis method BinoX that determines the statistical significance of network link enrichment or depletion between gene sets, using the binomial distribution. This is a much more appropriate statistical model than previous methods have employed, and as a result BinoX yields substantially better true positive and FPRs than was possible before. A number of benchmarks were performed to assess the accuracy of BinoX and competing methods. We demonstrate examples of how BinoX finds many biologically meaningful pathway annotations for gene sets from cancer and other diseases, which are not found by other methods. BinoX is available at http://sonnhammer.org/BinoX. PMID:27664219
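
    The core statistical idea — testing whether the number of network links between two gene sets exceeds what would be expected by chance, using a binomial model — can be sketched in a few lines. This is a simplified illustration, not the BinoX implementation: here the expected link probability is simply the overall network density, whereas BinoX estimates it from randomized networks.

```python
# Simplified sketch of binomial crosstalk enrichment between two gene sets.
# Not the BinoX implementation; the expected link probability is taken as the
# overall network density instead of being derived from network randomization.
from scipy.stats import binomtest

network_edges = {("A", "B"), ("A", "C"), ("B", "D"), ("C", "E"), ("D", "E"), ("B", "E")}
nodes = {n for e in network_edges for n in e}

set1 = {"A", "B"}          # e.g. differentially expressed genes
set2 = {"D", "E"}          # e.g. a pathway gene set

def linked(u, v):
    return (u, v) in network_edges or (v, u) in network_edges

# Background link probability = overall network density.
possible = len(nodes) * (len(nodes) - 1) // 2
density = len(network_edges) / possible

# Observed cross links and number of possible cross pairs between the sets.
cross_pairs = [(u, v) for u in set1 for v in set2 if u != v]
observed = sum(linked(u, v) for u, v in cross_pairs)

result = binomtest(observed, n=len(cross_pairs), p=density, alternative="greater")
print(f"observed={observed}/{len(cross_pairs)}, expected p={density:.2f}, p-value={result.pvalue:.3f}")
```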

  13. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
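
    The processing chain described — concurrent multi-sensor field readings, PCA as a pseudo-linear dimensionality-reducing filter, and an artificial neural network for field-to-position mapping — can be sketched on synthetic data. The sensor geometry and field model below are invented for illustration and do not reproduce the authors' hardware.

```python
# Sketch of the PCA + ANN field-to-position mapping pipeline on synthetic data.
# Sensor layout and field model are illustrative assumptions, not the authors' setup.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
sensor_pos = np.linspace(0.0, 1.0, 9)          # 9 sensors along the travel axis

def field_readings(x):
    """Synthetic dipole-like field magnitude seen by each sensor, with noise."""
    r2 = (x[:, None] - sensor_pos[None, :]) ** 2 + 0.05
    return 1.0 / r2 + rng.normal(0.0, 0.01, (len(x), len(sensor_pos)))

x = rng.uniform(0.0, 1.0, 2000)                # true positions
X = field_readings(x)

X_tr, X_te, y_tr, y_te = train_test_split(X, x, test_size=0.25, random_state=0)

pca = PCA(n_components=3).fit(X_tr)            # pseudo-linear filter on sensor outputs
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
ann.fit(pca.transform(X_tr), y_tr)

err = ann.predict(pca.transform(X_te)) - y_te
print(f"RMS position error: {np.sqrt(np.mean(err**2)):.4f} (travel range 0-1)")
```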

  14. Accuracy analysis of mimetic finite volume operators on geodesic grids and a consistent alternative

    NASA Astrophysics Data System (ADS)

    Peixoto, Pedro S.

    2016-04-01

    Many newly developed climate, weather and ocean global models are based on quasi-uniform spherical polygonal grids, aiming for high resolution and better scalability. Thuburn et al. (2009) and Ringler et al. (2010) developed a C staggered finite volume/difference method for arbitrary polygonal spherical grids suitable for these next generation dynamical cores. This method has many desirable mimetic properties and became popular, being adopted in some recent models, in spite of being known to possess low order of accuracy. In this work, we show that, for the nonlinear shallow water equations on non-uniform grids, the method has potentially 3 main sources of inconsistencies (local truncation errors not converging to zero as the grid is refined): (i) the divergence term of the continuity equation, (ii) the perpendicular velocity and (iii) the kinetic energy terms of the vector invariant form of the momentum equations. Although some of these inconsistencies have not impacted the convergence on some standard shallow water test cases up until now, they may constitute a potential problem for high resolution 3D models. Based on our analysis, we propose modifications for the method that will make it first order accurate in the maximum norm. It preserves many of the mimetic properties, albeit having non-steady geostrophic modes on the f-sphere. Experimental results show that the resulting model is a more accurate alternative to the existing formulations and should provide means of having a consistent, computationally cheap and scalable atmospheric or ocean model on C staggered Voronoi grids.
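
    A standard way to check the kind of consistency discussed here is to evaluate an operator's truncation error on successively refined grids and estimate the observed order of convergence. The sketch below does this for a one-dimensional centred difference on a non-uniform grid, as a simple stand-in for the spherical C-grid operators analysed in the paper.

```python
# Sketch: estimating the observed order of accuracy from errors on refined grids.
# Uses a 1-D centred difference on non-uniform nodes as a stand-in operator.
import numpy as np

def max_truncation_error(n):
    """Max-norm error of a centred difference of sin(x) on a non-uniform grid."""
    x = np.sort(np.random.default_rng(1).uniform(0.0, 2.0 * np.pi, n))
    f = np.sin(x)
    approx = (f[2:] - f[:-2]) / (x[2:] - x[:-2])   # centred difference on non-uniform nodes
    return np.max(np.abs(approx - np.cos(x[1:-1])))

e_coarse, e_fine = max_truncation_error(200), max_truncation_error(400)
order = np.log(e_coarse / e_fine) / np.log(2.0)    # observed convergence order
print(f"errors: {e_coarse:.2e} -> {e_fine:.2e}, observed order ≈ {order:.2f}")
```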

  15. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis

    PubMed Central

    Foong, Shaohui; Sun, Zhenglong

    2016-01-01

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison. PMID:27529253

  16. Methodological accuracy of digital and manual model analysis in orthodontics - A retrospective clinical study.

    PubMed

    Lippold, Carsten; Kirschneck, Christian; Schreiber, Kristina; Abukiress, Saleh; Tahvildari, Amir; Moiseenko, Tatjana; Danesh, Gholamreza

    2015-07-01

Computer-based digital orthodontic models are available for clinicians, supplemented by dedicated software for performing required diagnostic measurements. The purpose of this study was to evaluate the accuracy of measurements made on three-dimensional digital models obtained with a CBCT-scanner (DigiModel™, OrthoProof®, Nieuwegein, The Netherlands). 66 orthodontic dental casts of primary and early mixed dentitions were selected. Three-dimensional images were obtained on this CBCT-scanner and analyzed by means of the DigiModel™ software. Measurements were made with a digital caliper directly on the conventional casts and also digitally on the virtual models. 6 anatomic dental points were identified, and a total of 11 measurements were taken from each cast, including midline deviation, overjet, overbite and arch widths. Conformity of digital and manual measurements as well as intra-, inter- and repeated-measurement-reliability were evaluated by Lin's Concordance Correlation Coefficient, ICC and a Bland-Altman analysis. The agreement and conformity of digital and manual measurements were substantial for all parameters evaluated. Intra-, inter- and repeated-measurement-reliability was excellent. Measurements on digital models obtained by a CBCT scan of conventional casts (DigiModel™, OrthoProof®) are suited for reliable diagnostic measurements, which compare well to those obtained from plaster casts, the current gold standard. Copyright © 2015 Elsevier Ltd. All rights reserved.
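
    Lin's concordance correlation coefficient, used here to quantify conformity between digital and manual measurements, can be computed directly from paired data. A minimal sketch with invented paired measurements:

```python
# Minimal sketch: Lin's concordance correlation coefficient for paired measurements.
# The example values are invented; they are not data from the study.
import numpy as np

def lins_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))       # population covariance
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

manual  = [3.1, 4.8, 2.2, 5.0, 3.9, 4.1]   # e.g. caliper measurements, mm
digital = [3.0, 4.9, 2.4, 5.1, 3.8, 4.0]   # e.g. software measurements, mm
print(f"Lin's CCC = {lins_ccc(manual, digital):.3f}")
```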

  17. Analysis of Current Position Determination Accuracy in Natural Resources Canada Precise Point Positioning Service

    NASA Astrophysics Data System (ADS)

Krzan, Grzegorz; Dawidowicz, Karol; Świątek, Krzysztof

    2013-09-01

Precise Point Positioning (PPP) is a technique used to determine high-precision positions with a single GNSS receiver. Unlike DGPS or RTK, satellite observations processed with the PPP technique are not differenced; they therefore require parameter models in data processing, such as satellite clock and orbit corrections. Apart from explaining the theory of the PPP technique, this paper describes the available web-based online services used in the post-processing of observation results. As the experiment, results obtained by post-processing satellite observations at three points with different environmental conditions, using the CSRS-PPP service, are presented. This study examines the effect of the duration of the measurement session on the results and compares the results obtained from GPS-only observations with those from combined GPS and GLONASS observations. It also presents an analysis of the position determination accuracy using one and two measurement frequencies.

  18. An analysis of the accuracy of wearable sensors for classifying the causes of falls in humans.

    PubMed

    Aziz, Omar; Robinovitch, Stephen N

    2011-12-01

Falls are the number one cause of injury in older adults. Wearable sensors, typically consisting of accelerometers and/or gyroscopes, represent a promising technology for preventing and mitigating the effects of falls. At present, the goal of such "ambulatory fall monitors" is to detect the occurrence of a fall and alert care providers to this event. Future systems may also provide information on the causes and circumstances of falls, to aid clinical diagnosis and targeting of interventions. As a first step towards this goal, the objective of the current study was to develop and evaluate the accuracy of a wearable sensor system for determining the causes of falls. Sixteen young adults participated in experimental trials involving falls due to slips, trips, and "other" causes of imbalance. Three-dimensional acceleration data acquired during the falling trials were input to a linear discriminant analysis technique. This routine achieved 96% sensitivity and 98% specificity in distinguishing the causes of falls using acceleration data from three markers (left ankle, right ankle, and sternum). In contrast, a single marker provided 54% sensitivity and two markers provided 89% sensitivity. These results indicate the utility of a three-node accelerometer array for distinguishing the cause of falls.
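
    The classification step described — feeding acceleration-derived features to a linear discriminant analysis that labels the cause of a fall as slip, trip, or other — can be sketched as follows. The features below are synthetic placeholders, not the three-dimensional acceleration data of the study.

```python
# Sketch: linear discriminant analysis for classifying the cause of a fall.
# Features and class structure are synthetic; illustrative only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
classes = ["slip", "trip", "other"]
centers = {"slip": [2.5, -1.0, 0.5], "trip": [-1.5, 2.0, 0.0], "other": [0.0, 0.0, 1.5]}

X = np.vstack([rng.normal(centers[c], 1.0, (40, 3)) for c in classes])  # fake acceleration features
y = np.repeat(classes, 40)

lda = LinearDiscriminantAnalysis()
pred = cross_val_predict(lda, X, y, cv=5)        # cross-validated class predictions
print(confusion_matrix(y, pred, labels=classes))
```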

  19. Objective analysis of the Gulf Stream thermal front: methods and accuracy. Technical report

    SciTech Connect

    Tracey, K.L.; Friedlander, A.I.; Watts, R.

    1987-12-01

    The objective-analysis (OA) technique was adapted by Watts and Tracey in order to map the thermal frontal zone of the Gulf Stream. Here, the authors test the robustness of the adapted OA technique to the selection of four control parameters: mean field, standard deviation field, correlation function, and decimation time. Output OA maps of the thermocline depth are most affected by the choice of mean field, with the most-realistic results produced using a time-averaged mean. The choice of the space-time correlation function has a large influence on the size of the estimated error fields, which are associated with the OA maps. The smallest errors occur using the analytic function based on 4 years of inverted echo sounder data collected in the same region of the Gulf Stream. Variations in the selection of the standard deviation field and decimation time have little effect on the output OA maps. Accuracy of the output OA maps is determined by comparing them with independent measurements of the thermal field. Two cases are evaluated: standard maps and high-temporal-resolution maps, with decimation times of 2 days and 1 day, respectively. Standard deviations (STD) between the standard maps at the 15% estimated error level and the XBTs (AXBTs) are determined to be 47-53 m. Comparisons of the high-temporal-resolution maps at the 20% error level with the XBTs (AXBTs) give STD differences of 47 m.

  20. Accuracy of a remote quantitative image analysis in the whole slide images.

    PubMed

    Słodkowska, Janina; Markiewicz, Tomasz; Grala, Bartłomiej; Kozłowski, Wojciech; Papierz, Wielisław; Pleskacz, Katarzyna; Murawski, Piotr

    2011-03-30

The rationale for choosing a remote quantitative method supporting a diagnostic decision requires some empirical studies and knowledge of scenarios including valid telepathology standards. The tumours of the central nervous system [CNS] are graded on the basis of morphological features and the Ki-67 labelling index [Ki-67 LI]. Various methods have been applied for Ki-67 LI estimation. Recently we have introduced the Computerized Analysis of Medical Images [CAMI] software for automated Ki-67 LI counting in digital images. The aim of our study was to explore the accuracy and reliability of a remote assessment of Ki-67 LI with CAMI software applied to whole slide images [WSI]. The WSI representing CNS tumours, 18 meningiomas and 10 oligodendrogliomas, were stored on the server of the Warsaw University of Technology. The digital copies of entire glass slides were created automatically by the Aperio ScanScope CS with objective 20x or 40x. Aperio's Image Scope software provided functionality for remote viewing of WSI. The Ki-67 LI assessment was carried out within 2 out of 20 selected fields of view (objective 40x) representing the highest labelling areas in each WSI. The Ki-67 LI counting was performed by 3 different methods: 1) manual reading in the light microscope - LM, 2) automated counting with CAMI software on the digital images - DI, and 3) remote quantitation on the WSIs - the WSI method. The quality of WSIs and technical efficiency of the on-line system were analysed. A comparative statistical analysis was performed for the results obtained by the 3 methods of Ki-67 LI counting. The preliminary analysis showed that in 18% of WSI the results of Ki-67 LI differed from those obtained by the other 2 counting methods when the quality of the glass slides was below the standard range. The results of our investigations indicate that the remote automated Ki-67 LI analysis performed with the CAMI algorithm on the whole slide images of meningiomas and

  1. An Original Stepwise Multilevel Logistic Regression Analysis of Discriminatory Accuracy: The Case of Neighbourhoods and Health

    PubMed Central

    Wagner, Philippe; Ghith, Nermin; Leckie, George

    2016-01-01

Background and Aim Many multilevel logistic regression analyses of “neighbourhood and health” focus on interpreting measures of associations (e.g., odds ratio, OR). In contrast, multilevel analysis of variance is rarely considered. We propose an original stepwise analytical approach that distinguishes between “specific” (measures of association) and “general” (measures of variance) contextual effects. Using two empirical examples, we illustrate the methodology, interpret the results and discuss the implications of this kind of analysis in public health. Methods We analyse 43,291 individuals residing in 218 neighbourhoods in the city of Malmö, Sweden in 2006. We study two individual outcomes (psychotropic drug use and choice of private vs. public general practitioner, GP) for which the relative importance of neighbourhood as a source of individual variation differs substantially. In Step 1 of the analysis, we evaluate the OR and the area under the receiver operating characteristic (AUC) curve for individual-level covariates (i.e., age, sex and individual low income). In Step 2, we assess general contextual effects using the AUC. Finally, in Step 3 the OR for a specific neighbourhood characteristic (i.e., neighbourhood income) is interpreted jointly with the proportional change in variance (i.e., PCV) and the proportion of ORs in the opposite direction (POOR) statistics. Results For both outcomes, information on individual characteristics (Step 1) provides low discriminatory accuracy (AUC = 0.616 for psychotropic drugs; AUC = 0.600 for choosing a private GP). Accounting for neighbourhood of residence (Step 2) only improved the AUC for choosing a private GP (+0.295 units). High neighbourhood income (Step 3) was strongly associated with choosing a private GP (OR = 3.50) but the PCV was only 11% and the POOR 33%. Conclusion Applying an innovative stepwise multilevel analysis, we observed that, in Malmö, the neighbourhood context per se had a negligible

  2. The Efficacy of Written Corrective Feedback in Improving L2 Written Accuracy: A Meta-Analysis

    ERIC Educational Resources Information Center

    Kang, EunYoung; Han, Zhaohong

    2015-01-01

    Written corrective feedback has been subject to increasing attention in recent years, in part because of the conceptual controversy surrounding it and in part because of its ubiquitous practice. This study takes a meta-analytic approach to synthesizing extant empirical research, including 21 primary studies. Guiding the analysis are two questions:…

  3. [Accuracy of procalcitonin for diagnosis of sepsis in adults: a Meta-analysis].

    PubMed

    Chengfen, Yin; Tong, Li; Xinjing, Gao; Zhibo, Li; Lei, Xu

    2015-09-01

To assess the clinical value of procalcitonin (PCT) in the diagnosis of sepsis in adults. An extensive search for related literature in Wanfang Data, CNKI, VIP, Medline/PubMed, Embase/OvidSP and the Cochrane Library up to December 2014 was performed. Articles regarding PCT for the diagnosis of sepsis, including prospective observational studies and randomized controlled trials, were enrolled. Only patients older than 18 years were included. Patients with sepsis, severe sepsis, or septic shock served as the experimental group, and those with a systemic inflammatory response syndrome (SIRS) of non-infectious origin as the control group. The language of the included literature was English or Chinese. The quality of the studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Heterogeneity, pooled diagnostic odds ratio (DOR), pooled sensitivity, pooled specificity, pooled positive likelihood ratio, pooled negative likelihood ratio, the area under the summary receiver operating characteristic curve (SROC) and subgroup analyses were computed with the Metadisc 1.4 software. A total of 6,385 published reports were collected, and 24 of them met the inclusion criteria, including a total of 3,107 patients. The studies showed substantial heterogeneity (I² = 69.4%), so a random-effects model was used for the meta-analysis, showing that the pooled DOR was 10.37 [95% confidence interval (95%CI) = 7.10-15.17]. No evidence of a threshold effect was found (Spearman correlation coefficient = 0.27, calculated from the logarithm of sensitivity and the logarithm of 1-specificity, P = 0.20). The DOR values of the pooled and individual studies were not distributed along the same line in the forest plots, and Cochran-Q = 78.33, P = 0.000 0, showing that there was heterogeneity resulting from a non-threshold effect. Apart from the partial heterogeneity caused by the non-threshold effect, the result of a meta-regression analysis including PCT detection method, categories of disease
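
    The pooled diagnostic odds ratio reported above is obtained by combining per-study log-DORs weighted by their inverse variances. The sketch below shows a simple fixed-effect version of that pooling from 2×2 counts; the counts are invented, and the meta-analysis in the abstract used a random-effects model, which additionally accounts for between-study heterogeneity.

```python
# Sketch: inverse-variance (fixed-effect) pooling of diagnostic odds ratios.
# 2x2 counts are invented; the cited meta-analysis used a random-effects model.
import numpy as np

# Each row: (TP, FP, FN, TN) for one study.
studies = np.array([
    [45, 10,  8, 60],
    [30, 12,  9, 55],
    [50, 20, 15, 70],
])

tp, fp, fn, tn = studies.T.astype(float)
log_dor = np.log((tp * tn) / (fp * fn))          # per-study log diagnostic odds ratio
var     = 1 / tp + 1 / fp + 1 / fn + 1 / tn      # its approximate variance
w       = 1 / var                                # inverse-variance weights

pooled_log_dor = np.sum(w * log_dor) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
ci = np.exp(pooled_log_dor + np.array([-1.96, 1.96]) * se)
print(f"pooled DOR = {np.exp(pooled_log_dor):.1f} (95% CI {ci[0]:.1f}-{ci[1]:.1f})")
```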

  4. Diagnostic needle arthroscopy and the economics of improved diagnostic accuracy: a cost analysis.

    PubMed

    Voigt, Jeffrey D; Mosier, Michael; Huber, Bryan

    2014-10-01

    Hundreds of thousands of surgical arthroscopy procedures are performed annually in the United States (US) based on MRI findings. There are situations where these MRI findings are equivocal or indeterminate and because of this clinicians commonly perform the arthroscopy in order not to miss pathology. Recently, a less invasive needle arthroscopy system has been introduced that is commonly performed in the physician office setting and that may help improve the accuracy of diagnostic findings. This in turn may prevent unnecessary follow-on arthroscopy procedures from being performed. The purpose of this analysis is to determine whether the in-office diagnostic needle arthroscopy system can provide cost savings by reducing unnecessary follow on arthroscopy procedures. Data obtained from a recent trial and from a systematic review were used in comparing the accuracy of MRI and VisionScope needle arthroscopy (VSI) with standard arthroscopy (gold standard). The resultant false positive and false negative findings were then used to evaluate the costs of follow-on procedures. These differences were then modeled for the US patient population diagnosed and treated for meniscal knee pathology (most common disorder) to determine if a technology such as VSI could save the US healthcare system money. Data on surgical arthroscopy procedures in the US for meniscal knee pathology were used (calendar year [CY] 2010). The costs of performing diagnostic and surgical arthroscopy procedures (using CY 2013 Medicare reimbursement amounts), costs associated with false negative findings, and the costs for treating associated complications arising from diagnostic and therapeutic arthroscopy procedures were assessed. In patients presenting with medial meniscal pathology (International Classification of Diseases, 9th edition, Clinical Modification [ICD9CM] diagnosis 836.0), VSI in place of MRI (standard of care) resulted in a net cost savings to the US system of US$115-US$177 million (CY 2013

  5. Characteristics of Marine Gravity Anomaly Reference Maps and Accuracy Analysis of Gravity Matching-Aided Navigation

    PubMed Central

    Wang, Hubiao; Chai, Hua; Xiao, Yaofei; Hsu, Houtse; Wang, Yong

    2017-01-01

    The variation of a marine gravity anomaly reference map is one of the important factors that affect the location accuracy of INS/Gravity integrated navigation systems in underwater navigation. In this study, based on marine gravity anomaly reference maps, new characteristic parameters of the gravity anomaly were constructed. Those characteristic values were calculated for 13 zones (105°–145° E, 0°–40° N) in the Western Pacific area, and simulation experiments of gravity matching-aided navigation were run. The influence of gravity variations on the accuracy of gravity matching-aided navigation was analyzed, and location accuracy of gravity matching in different zones was determined. Studies indicate that the new parameters may better characterize the marine gravity anomaly. Given the precision of current gravimeters and the resolution and accuracy of reference maps, the location accuracy of gravity matching in China’s Western Pacific area is ~1.0–4.0 nautical miles (n miles). In particular, accuracy in regions around the South China Sea and Sulu Sea was the highest, better than 1.5 n miles. The gravity characteristic parameters identified herein and characteristic values calculated in various zones provide a reference for the selection of navigation area and planning of sailing routes under conditions requiring certain navigational accuracy. PMID:28796158

  6. ASA Classification Pre-Endoscopic Procedures: A Retrospective Analysis on the Accuracy of Gastroenterologists.

    PubMed

    Theivanayagam, Shoba; Lopez, Kristi T; Matteson-Kome, Michelle L; Bechtold, Matthew L; Asombang, Akwi W

    2017-02-01

Before an endoscopic procedure, an evaluation to assess the risk of sedation is performed by the gastroenterologist. To risk stratify based on medical problems, the American Society of Anesthesiologists (ASA) classification scores are used routinely in the preprocedure evaluation. The objective of our study was to evaluate the accuracy of ASA scores assigned by physicians before endoscopic procedures. An institutional review board-approved retrospective study was performed at a single tertiary-care center. Upper endoscopies performed from May 2012 through August 2013 were reviewed; data were collected and recorded. Statistical analysis was performed using descriptive statistics and linear weighted kappa analysis for agreement (≤0.20 is poor agreement, 0.21-0.40 is fair, 0.41-0.60 is moderate, 0.61-0.80 is good, and 0.81-1.00 is very good). The mean ASA scores by the gastroenterologist compared with the anesthesiologist were 2.28 ± 0.56 and 2.78 ± 0.60, respectively, with only fair agreement (weighted kappa index 0.223, 95% confidence interval [CI] 0.113-0.333; 48% agreement). The mean ASA scores for gastroenterologists compared with other gastroenterologists were 2.26 ± 0.5 and 2.26 ± 0.44, respectively, with poor agreement (weighted kappa index 0.200, 95% CI 0.108-0.389; 68% agreement). Agreement on ASA scores was only moderate between a gastroenterologist and himself or herself (weighted kappa index 0.464, 95% CI 0.183-0.745; 75% agreement). Gastroenterologists performing preprocedure assessments of ASA scores have fair agreement with anesthesiologists, poor agreement with other gastroenterologists, and only moderate agreement with themselves. Given this level of inaccuracy, it appears that the pre-endoscopy ASA score is of limited significance.
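
    The linearly weighted kappa used above to grade agreement can be computed directly from paired ASA scores, for example with scikit-learn. The scores below are invented for illustration.

```python
# Sketch: linearly weighted kappa for agreement between two raters' ASA scores.
# Example scores are invented; interpretation bands follow the abstract.
from sklearn.metrics import cohen_kappa_score

gastro = [2, 2, 3, 2, 3, 2, 2, 3, 2, 4]   # ASA scores assigned by the gastroenterologist
anesth = [3, 2, 3, 3, 3, 2, 3, 4, 2, 4]   # ASA scores assigned by the anesthesiologist

kappa = cohen_kappa_score(gastro, anesth, weights="linear")
print(f"linear weighted kappa = {kappa:.3f}")  # <=0.20 poor, 0.21-0.40 fair, 0.41-0.60 moderate, ...
```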

  7. Digital core based transmitted ultrasonic wave simulation and velocity accuracy analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Shan, Rui

    2016-06-01

Transmitted ultrasonic wave simulation (TUWS) in a digital core is one of the important elements of digital rock physics and is used to study wave propagation in porous cores and to calculate equivalent velocity. When simulating wave propagation in a 3D digital core, two additional layers are attached to the two surfaces perpendicular to the wave direction, and one planar wave source and two receiver arrays are properly installed. After source excitation, the two receiver arrays record the incident and transmitted waves of the digital rock. The wave propagation velocity, which is the velocity of the digital core, is computed from the picked peak-time difference between the two recorded waves. To evaluate the accuracy of TUWS, a digital core was fully saturated with gas, oil, and water in turn to calculate the corresponding velocities. The velocities increase with decreasing wave frequency within the simulation frequency band, which is considered to be the result of scattering. When the pore fluid is varied from gas to oil and finally to water, the velocity-variation characteristics between the different frequencies are similar, approximately following the variation law of velocities obtained from linear elastic statics simulation (LESS), which has been widely used, although their absolute values differ. The results of this paper show that the transmitted ultrasonic simulation has high relative precision.
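
    The velocity estimate described — picking the peak times of the incident and transmitted waveforms and dividing the propagation length by their difference — can be sketched with synthetic traces. The pulse shape, propagation length, and sampling rate below are arbitrary assumptions.

```python
# Sketch: velocity from the peak-time difference between incident and transmitted waves.
# Pulse shape, propagation length and sampling rate are arbitrary illustrative values.
import numpy as np

fs = 50e6                         # sampling frequency, Hz
t = np.arange(0, 40e-6, 1 / fs)   # time axis, s
length = 0.05                     # propagation length through the digital core, m

def ricker(t, t0, f0=1e6):
    """Simple Ricker-like pulse centred at t0."""
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

incident    = ricker(t, 5e-6)
transmitted = 0.4 * ricker(t, 5e-6 + length / 3000.0)    # true velocity of 3000 m/s assumed

dt = t[np.argmax(transmitted)] - t[np.argmax(incident)]  # picked peak-time difference
print(f"estimated velocity = {length / dt:.0f} m/s")
```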

  8. On the automaticity and flexibility of covert attention: a speed-accuracy trade-off analysis.

    PubMed

    Giordano, Anna Marie; McElree, Brian; Carrasco, Marisa

    2009-03-31

    Exogenous covert attention improves discriminability and accelerates the rate of visual information processing (M. Carrasco & B. McElree, 2001). Here we investigated and compared the effects of both endogenous (sustained) and exogenous (transient) covert attention. Specifically, we directed attention via spatial cues and evaluated the automaticity and flexibility of exogenous and endogenous attention by manipulating cue validity in conjunction with a response-signal speed-accuracy trade-off (SAT) procedure, which provides conjoint measures of discriminability and information accrual. To investigate whether discriminability and rate of information processing differ as a function of cue validity (chance to 100%), we compared how both types of attention affect performance while keeping experimental conditions constant. With endogenous attention, both the observed benefits (valid-cue) and the costs (invalid-cue) increased with cue validity. However, with exogenous attention, the benefits and costs in both discriminability and processing speed were similar across cue validity conditions. These results provide compelling time-course evidence that whereas endogenous attention can be flexibly allocated according to cue validity, exogenous attention is automatic and unaffected by cue validity.

  9. The improvement of OPC accuracy and stability by the model parameters' analysis and optimization

    NASA Astrophysics Data System (ADS)

    Chung, No-Young; Choi, Woon-Hyuk; Lee, Sung-Ho; Kim, Sung-Il; Lee, Sun-Yong

    2007-10-01

The OPC model is very critical in sub-45 nm devices because the critical dimension uniformity (CDU) requirement is very tight to meet the device performance and the process window latitude at the production level. The OPC model is generally composed of an optical model and a resist model. Each of them has physical terms, which can be calculated without any wafer data, and empirical terms, which are fitted to real wafer data during optical and resist modeling. Empirical terms are usually related to OPC accuracy, but they are likely to be overestimated when fitted to the wafer data, and such overestimation driven by a small cost function can deteriorate OPC stability. Several physical terms used to be set to ideal values for the optical properties, or were not considered at all, because those parameters did not have a critical impact on OPC accuracy; at the low-k1 process, however, these parameters become necessary for OPC modeling. Currently, real optical parameters such as the laser bandwidth, source map, and pupil polarization, including the phase and intensity difference, are starting to be measured instead of ideal optical parameters, and those measured values are used for OPC modeling. These measured values can improve model accuracy and stability. On the other hand, without careful handling these parameters can cause the OPC model to overcorrect the process proximity errors. The laser bandwidth, source map, pupil polarization, and focus centering for the optical modeling are analyzed, and the sample data weighting scheme and resist model terms are investigated as well. The image blurring by the actual laser bandwidth in the exposure system is modeled, and the modeling result shows that the extraction of the 2D patterns is necessary to get a reasonable result owing to the 2D patterns' measurement noise in the SEM. The source map data from the exposure machine show a lot of horizontal and vertical intensity difference, and this phenomenon must come from the measurement noise

  10. The accuracy of emergency weight estimation systems in children-a systematic review and meta-analysis.

    PubMed

    Wells, Mike; Goldstein, Lara Nicole; Bentley, Alison

    2017-09-21

    The safe and effective administration of fluids and medications during the management of medical emergencies in children depends on an appropriately determined dose, based on body weight. Weight can often not be measured in these circumstances and a convenient, quick and accurate method of weight estimation is required. Most methods in current use are not accurate enough, but the newer length-based, habitus-modified (two-dimensional) systems have shown significantly higher accuracy. This meta-analysis evaluated the accuracy of weight estimation systems in children. Articles were screened for inclusion into two study arms: to determine an appropriate accuracy target for weight estimation systems; and to evaluate the accuracy of existing systems using standard meta-analysis techniques. There was no evidence found to support any specific goal of accuracy. Based on the findings of this study, a proposed minimum accuracy of 70% of estimations within 10% of actual weight (PW10 > 70%), and 95% within 20% of actual weight (PW20 > 95%) should be demonstrated by a weight estimation system before being considered to be accurate. In the meta-analysis, the two-dimensional systems performed best. The Mercy method (PW10 70.9%, PW20 95.3%), the PAWPER tape (PW10 78.0%, PW20 96.6%) and parental estimates (PW10 69.8%, PW20 87.1%) were the most accurate systems investigated, with the Broselow tape (PW10 55.6%, PW20 81.2%) achieving a lesser accuracy. Age-based estimates achieved a very low accuracy. Age- and length-based systems had a substantial difference in over- and underestimation of weight in high-income and low- and middle-income populations. A benchmark for minimum accuracy is recommended for weight estimation studies and a PW10 > 70% with PW20 > 95% is suggested. The Mercy method, the PAWPER tape and parental estimates were the most accurate weight estimation systems followed by length-based and age-based systems. The use of age-based formulas should be abandoned
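
    The PW10 and PW20 statistics proposed as the accuracy benchmark are simply the percentages of estimates falling within 10% and 20% of the measured weight. A minimal sketch with invented estimate/actual pairs:

```python
# Sketch: PW10 / PW20 accuracy statistics for a weight estimation method.
# Example weights are invented for illustration.
import numpy as np

actual    = np.array([12.0, 18.5, 25.0, 9.0, 30.0, 16.0])   # measured weight, kg
estimated = np.array([11.0, 20.0, 24.0, 10.5, 27.0, 16.5])  # estimated weight, kg

pct_err = np.abs(estimated - actual) / actual * 100
pw10 = np.mean(pct_err <= 10) * 100
pw20 = np.mean(pct_err <= 20) * 100
print(f"PW10 = {pw10:.0f}%, PW20 = {pw20:.0f}%")   # suggested benchmark: PW10 > 70%, PW20 > 95%
```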

  11. Automated high-accuracy mutation screening with the WAVE nucleic acid fragment analysis system

    NASA Astrophysics Data System (ADS)

    Hecker, Karl H.

    2002-06-01

The analysis of DNA fragments by ion-pair reversed-phase high-performance liquid chromatography on an alkylated, nonporous poly(styrene-divinylbenzene) matrix (DNA Cartridge) using the WAVE Nucleic Acid Fragment Analysis System is a powerful and versatile tool for DNA analysis. Resolution of DNA fragments is based on two principles, size-dependent retention of double-stranded (ds) DNA and differential retention of ds vs. single-stranded (ss) DNA. Temperature Modulated Heteroduplex Analysis utilizes both principles of separation to detect single nucleotide polymorphisms (SNPs) and short insertions/deletions. At a given temperature the difference in the melting between homo- and heteroduplexes is revealed by differences in retention times. The temperature at which differential melting occurs is sequence dependent and is predicted accurately using either WAVEMAKER or WAVE Navigator software, which use a modified Fixman-Freire algorithm. Detection of known and unknown sequence variations can be performed on DNA fragments of up to 1,000 base pairs with high sensitivity and specificity. The use of fluorescent labels is compatible with the technology and increases sensitivity. Retention times are increased and resolution is not affected. Fluorescent labeling significantly increases sensitivity.

  12. The influence of uncertainties of attitude sensors on attitude determination accuracy by linear covariance analysis

    NASA Astrophysics Data System (ADS)

    Blomqvist, J.; Fullmer, R.

    2010-04-01

The idea that linear covariance techniques can be used to predict the accuracy of attitude determination systems and assist in their design is investigated. Using the sensors' estimated parameter accuracies, the total standard deviation of the attitude determination resulting from these uncertainties can be calculated as a simple root-sum-square of the attitude standard deviations resulting from the respective uncertainties. Generalized Matrix Laboratory (MATLAB) M-functions using this technique are written in order to provide a tool for estimating the attitude determination accuracy of a small spacecraft and to identify major contributions to the attitude determination uncertainty. This tool is applied to a satellite dynamics truth model developed in order to quantify the effects of sensor uncertainties on this particular spacecraft's attitude determination accuracy. The result of this study determines the standard deviation of the attitude determination as a function of the sensor uncertainties.
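
    The root-sum-square combination described above amounts to adding the attitude error contributions from the individual sensor uncertainties in quadrature. A minimal sketch; the per-source values are placeholders, not results of the study:

```python
# Sketch: root-sum-square combination of attitude error contributions.
# Per-source standard deviations are placeholder values, in degrees.
import numpy as np

contributions = {
    "star tracker noise":    0.010,
    "gyro bias":             0.020,
    "gyro random walk":      0.015,
    "alignment uncertainty": 0.008,
}

total = np.sqrt(sum(sigma**2 for sigma in contributions.values()))
print(f"total attitude standard deviation ≈ {total:.3f} deg")
```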

  13. SU-E-J-37: Feasibility of Utilizing Carbon Fiducials to Increase Localization Accuracy of Lumpectomy Cavity for Partial Breast Irradiation

    SciTech Connect

    Zhang, Y; Hieken, T; Mutter, R; Park, S; Yan, E; Brinkmann, D; Pafundi, D

    2015-06-15

    Purpose To investigate the feasibility of utilizing carbon fiducials to increase localization accuracy of lumpectomy cavity for partial breast irradiation (PBI). Methods Carbon fiducials were placed intraoperatively in the lumpectomy cavity following resection of breast cancer in 11 patients. The patients were scheduled to receive whole breast irradiation (WBI) with a boost or 3D-conformal PBI. WBI patients were initially setup to skin tattoos using lasers, followed by orthogonal kV on-board-imaging (OBI) matching to bone per clinical practice. Cone beam CT (CBCT) was acquired weekly for offline review. For the boost component of WBI and PBI, patients were setup with lasers, followed by OBI matching to fiducials, with final alignment by CBCT matching to fiducials. Using carbon fiducials as a surrogate for the lumpectomy cavity and CBCT matching to fiducials as the gold standard, setup uncertainties to lasers, OBI bone, OBI fiducials, and CBCT breast were compared. Results Minimal imaging artifacts were introduced by fiducials on the planning CT and CBCT. The fiducials were sufficiently visible on OBI for online localization. The mean magnitude and standard deviation of setup errors were 8.4mm ± 5.3 mm (n=84), 7.3mm ± 3.7mm (n=87), 2.2mm ± 1.6mm (n=40) and 4.8mm ± 2.6mm (n=87), for lasers, OBI bone, OBI fiducials and CBCT breast tissue, respectively. Significant migration occurred in one of 39 implanted fiducials in a patient with a large postoperative seroma. Conclusion OBI carbon fiducial-based setup can improve localization accuracy with minimal imaging artifacts. With increased localization accuracy, setup uncertainties can be reduced from 8mm using OBI bone matching to 3mm using OBI fiducial matching for PBI treatment. This work demonstrates the feasibility of utilizing carbon fiducials to increase localization accuracy to the lumpectomy cavity for PBI. This may be particularly attractive for localization in the setting of proton therapy and other scenarios

  14. Accuracy Analysis of Precise Point Positioning of Compass Navigation System Applied to Crustal Motion Monitoring

    NASA Astrophysics Data System (ADS)

    Wang, Yuebing

    2017-04-01

Based on Compass/GPS observation data collected at five stations over the time span from July 1, 2014 to June 30, 2016, and using the PPP positioning model of the PANDA software developed by Wuhan University, we analyzed the positioning accuracy of single-system and Compass/GPS integrated solutions and discussed the capability of the Compass navigation system for crustal motion monitoring. The results showed that the positioning accuracy of the Compass navigation system in the east-west direction is lower than in the north-south direction (the positioning accuracy is taken as 3 times the RMS); in general, the positioning accuracy is about 1–2 cm in the horizontal direction and about 5–6 cm in the vertical direction. The GPS positioning accuracy is better than 1 cm in the horizontal direction and about 1–2 cm in the vertical direction. The accuracy of the Compass/GPS integrated solution is comparable to GPS. It is worth mentioning that, although the precise point positioning accuracy of the Compass navigation system is lower than that of GPS, two sets of velocity fields were obtained by analyzing the Compass and GPS time series with the Nikolaidis (2002) model, and the maximum difference between the two velocity fields in the horizontal directions is 1.8 mm/a. The Compass navigation system can therefore be used to monitor crustal movement in areas of large deformation, based on the horizontal velocity field.
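
    The velocity fields mentioned are obtained by fitting each coordinate time series with a Nikolaidis (2002)-type model, essentially a linear trend plus annual and semi-annual terms (the full model also allows for offsets and other terms, omitted here). A least-squares sketch on a synthetic daily series:

```python
# Sketch: estimating a site velocity by least-squares fit of a trend plus
# annual and semi-annual terms (a reduced Nikolaidis-type model; offsets omitted).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 365.25)        # time in years (two years of daily positions)
true_vel = 12.0                        # mm/a, synthetic value
series = (3.0 + true_vel * t
          + 2.0 * np.sin(2 * np.pi * t)
          + 1.0 * np.cos(4 * np.pi * t)
          + rng.normal(0, 1.5, t.size))            # coordinate residuals, mm

A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])
coeffs, *_ = np.linalg.lstsq(A, series, rcond=None)
print(f"estimated velocity = {coeffs[1]:.2f} mm/a (true {true_vel} mm/a)")
```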

  15. Diagnostic accuracy of a bayesian latent group analysis for the detection of malingering-related poor effort.

    PubMed

    Ortega, Alonso; Labrenz, Stephan; Markowitsch, Hans J; Piefke, Martina

    2013-01-01

In the last decade, different statistical techniques have been introduced to improve assessment of malingering-related poor effort. In this context, we have recently shown preliminary evidence that a Bayesian latent group model may help to optimize classification accuracy using a simulation research design. In the present study, we conducted two analyses. Firstly, we evaluated how accurately this Bayesian approach can distinguish between participants answering in an honest way (honest response group) and participants feigning cognitive impairment (experimental malingering group). Secondly, we tested the accuracy of our model in the differentiation between patients who had real cognitive deficits (cognitively impaired group) and participants who belonged to the experimental malingering group. All Bayesian analyses were conducted using the raw scores of a visual recognition forced-choice task (2AFC), the Test of Memory Malingering (TOMM, Trial 2), and the Word Memory Test (WMT, primary effort subtests). The first analysis showed 100% accuracy for the Bayesian model in distinguishing participants of both groups with all effort measures. The second analysis showed outstanding overall accuracy of the Bayesian model when estimates were obtained from the 2AFC and the TOMM raw scores. Diagnostic accuracy of the Bayesian model diminished when using the WMT total raw scores. Despite this, overall diagnostic accuracy can still be considered excellent. The most plausible explanation for this decrement is the low performance of some patients of the cognitively impaired group in verbal recognition and fluency tasks. Additionally, the Bayesian model provides individual estimates, p(zi|D), of examinees' effort levels. In conclusion, both high classification accuracy levels and Bayesian individual estimates of effort may be very useful for clinicians when assessing for effort in medico-legal settings.

  16. Systematic analysis on the achievable accuracy of PT-PET through automated evaluation techniques.

    PubMed

    Helmbrecht, Stephan; Kuess, Peter; Birkfellner, Wolfgang; Enghardt, Wolfgang; Stützer, Kristin; Georg, Dietmar; Fiedler, Fine

    2015-06-01

Particle Therapy Positron Emission Tomography (PT-PET) is currently the only clinically applied method for in vivo verification of ion-beam radiotherapy during or close in time to the treatment. Since a direct deduction of the delivered dose from the measured activity is not feasible, images are compared to a reference distribution. The achievable accuracy of two image analysis approaches was investigated by means of reproducible phantom benchmark tests. This is an objective method that excludes patient-related factors of influence. Two types of phantoms were designed to produce well defined deviations in the activity distributions. Pure range differences were simulated using the first phantom type while the other emulated cavity structures. The phantoms were irradiated with ¹²C ions. PT-PET measurements were performed by means of a camera system installed at the beamline. Different measurement time scenarios were investigated, assuming a PET scanner directly at the irradiation site or placed within the treatment room. The images were analyzed by means of the Pearson Correlation Coefficient (PCC) and a range calculation algorithm combined with a dedicated cavity filling detection method. Range differences could be measured with an error of less than 2 mm. The range comparison algorithm yielded slightly better results than the PCC method. The filling of a cavity structure could be safely detected if its inner diameter was at least 5 mm. Both approaches evaluate the PT-PET data in an objective way and deliver promising results for in-beam and in-room PET at clinically realistic dose rates. Copyright © 2014. Published by Elsevier GmbH.
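
    The two evaluation approaches compared — a Pearson correlation between measured and reference activity distributions, and an explicit range-difference estimate — can be sketched on one-dimensional depth profiles. The profiles below are synthetic sigmoid fall-offs, not PT-PET data, and the range difference is estimated by the depth at which each profile drops to half of its plateau value, a simple proxy for the dedicated range-calculation algorithm used in the study.

```python
# Sketch: Pearson correlation and a simple range-difference estimate between
# a measured and a reference activity depth profile (synthetic sigmoids).
import numpy as np

depth = np.linspace(0, 150, 601)                     # depth, mm

def profile(fall_off_mm):
    """Plateau followed by a sigmoid distal fall-off at the given depth."""
    return 1.0 / (1.0 + np.exp((depth - fall_off_mm) / 2.0))

reference = profile(100.0)
measured  = profile(103.0) + np.random.default_rng(0).normal(0, 0.01, depth.size)

pcc = np.corrcoef(reference, measured)[0, 1]         # Pearson correlation coefficient

def half_fall_depth(p):
    """Depth where the profile first drops below half of its plateau value."""
    return depth[np.argmax(p < 0.5 * p[:50].mean())]

range_diff = half_fall_depth(measured) - half_fall_depth(reference)
print(f"PCC = {pcc:.3f}, estimated range difference = {range_diff:.1f} mm")
```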

  17. Accuracy and Feasibility of Video Analysis for Assessing Hamstring Flexibility and Validity of the Sit-and-Reach Test

    ERIC Educational Resources Information Center

    Mier, Constance M.

    2011-01-01

    The accuracy of video analysis of the passive straight-leg raise test (PSLR) and the validity of the sit-and-reach test (SR) were tested in 60 men and women. Computer software measured static hip-joint flexion accurately. High within-session reliability of the PSLR was demonstrated (R greater than 0.97). Test-retest (separate days) reliability for…

  18. The Accuracy of Recidivism Risk Assessments for Sexual Offenders: A Meta-Analysis of 118 Prediction Studies

    ERIC Educational Resources Information Center

    Hanson, R. Karl; Morton-Bourgon, Kelly E.

    2009-01-01

    This review compared the accuracy of various approaches to the prediction of recidivism among sexual offenders. On the basis of a meta-analysis of 536 findings drawn from 118 distinct samples (45,398 sexual offenders, 16 countries), empirically derived actuarial measures were more accurate than unstructured professional judgment for all outcomes…

  19. Accuracy and Feasibility of Video Analysis for Assessing Hamstring Flexibility and Validity of the Sit-and-Reach Test

    ERIC Educational Resources Information Center

    Mier, Constance M.

    2011-01-01

    The accuracy of video analysis of the passive straight-leg raise test (PSLR) and the validity of the sit-and-reach test (SR) were tested in 60 men and women. Computer software measured static hip-joint flexion accurately. High within-session reliability of the PSLR was demonstrated (R greater than 0.97). Test-retest (separate days) reliability for…

  20. Does navigated transcranial stimulation increase the accuracy of tractography? A prospective clinical trial based on intraoperative motor evoked potential monitoring during deep brain stimulation.

    PubMed

    Forster, Marie-Therese; Hoecker, Alexander Claudius; Kang, Jun-Suk; Quick, Johanna; Seifert, Volker; Hattingen, Elke; Hilker, Rüdiger; Weise, Lutz Martin

    2015-06-01

Tractography based on diffusion tensor imaging has become a popular tool for delineating white matter tracts for neurosurgical procedures. To explore whether navigated transcranial magnetic stimulation (nTMS) might increase the accuracy of fiber tracking. Tractography was performed according to both anatomic delineation of the motor cortex (n = 14) and nTMS results (n = 9). After implantation of the definitive electrode, stimulation via the electrode was performed, defining a stimulation threshold for eliciting motor evoked potentials of arm and leg muscles recorded during deep brain stimulation surgery. This threshold was correlated with the shortest distance between the active electrode contact and both fiber tracks. Results were evaluated by correlation to motor evoked potential monitoring during deep brain stimulation, a surgical procedure causing hardly any brain shift. Distances to fiber tracks clearly correlated with motor evoked potential thresholds. Tracks based on nTMS had a higher predictive value than tracks based on anatomic motor cortex definition (P < .001 and P = .005, respectively). However, target site, hemisphere, and active electrode contact did not influence this correlation. The implementation of tractography based on nTMS increases the accuracy of fiber tracking. Moreover, this combination of methods has the potential to become a supplemental tool for guiding electrode implantation.

  1. An investigation of the role of image properties in influencing the accuracy of remote sensing change detection analysis

    NASA Astrophysics Data System (ADS)

    Almutairi, Abdullah

    The purpose of this study was to examine the influence of image properties on the accuracy of remote sensing change detection methods. Spectral class separability, radiometric normalization and image band correlation were evaluated through experiments with simulated data. The experimental results were then evaluated as a tool for predicting the relative accuracy of change detection results obtained from Landsat TM satellite image pairs of three U.S. cities: Las Vegas, Nevada; Phoenix, Arizona; and Atlanta, Georgia. The change detection methods used were post-classification comparison, direct classification, image differencing, principal component analysis, and change vector analysis. Results of the simulated experiments confirmed that the relative accuracy of the change detection methods varied with changes in image properties. For the class separability experiments, post-classification comparison, direct classification, image differencing, and PCA with a large number of the principal components, were found generally to have higher accuracies than CVA and PCA with a small number of the principal components. For classes with very good separability, image differencing is an excellent method; for classes with poor spectral separability, image differencing was found to have the lowest accuracy. The influence of the error in radiometric normalization on the accuracy of change detection techniques varied greatly with different degrees of class separability. This can be seen particularly well with image differencing, which showed the highest sensitivity to large changes in radiometric error and very poor class separability. Image differencing and PCA were also found to be more sensitive to band correlation. The classification of the real change detection data from the Landsat pairs showed complex and varying patterns, depending on whether complete (mapping all unchanged and changed transitions) or partial (grouping all unchanged transitions into a single class) change
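
    Two of the change detection methods evaluated — image differencing and change vector analysis — reduce to simple per-pixel operations on a co-registered image pair. A minimal sketch on synthetic two-band images; the thresholds and band count are arbitrary illustrative choices.

```python
# Sketch: image differencing and change vector analysis on a co-registered pair.
# Synthetic two-band images; thresholds are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
bands, h, w = 2, 100, 100
date1 = rng.normal(0.3, 0.05, (bands, h, w))
date2 = date1 + rng.normal(0.0, 0.02, (bands, h, w))   # second date: same scene plus noise
date2[:, 40:60, 40:60] += 0.2                           # simulate a changed patch

# Image differencing: threshold the absolute difference of a single band.
diff_change = np.abs(date2[0] - date1[0]) > 0.1

# Change vector analysis: threshold the magnitude of the multi-band difference vector.
cv_magnitude = np.sqrt(np.sum((date2 - date1) ** 2, axis=0))
cva_change = cv_magnitude > 0.15

print(f"differencing flags {diff_change.mean():.1%} of pixels as changed")
print(f"CVA flags {cva_change.mean():.1%} of pixels as changed")
```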

  2. Development of Serum Marker Models to Increase Diagnostic Accuracy of Advanced Fibrosis in Nonalcoholic Fatty Liver Disease: The New LINKI Algorithm Compared with Established Algorithms.

    PubMed

    Lykiardopoulos, Byron; Hagström, Hannes; Fredrikson, Mats; Ignatova, Simone; Stål, Per; Hultcrantz, Rolf; Ekstedt, Mattias; Kechagias, Stergios

    2016-01-01

Detection of advanced fibrosis (F3-F4) in nonalcoholic fatty liver disease (NAFLD) is important for ascertaining prognosis. Serum markers have been proposed as alternatives to biopsy. We attempted to develop a novel algorithm for detection of advanced fibrosis based on a more efficient combination of serological markers and to compare this with established algorithms. We included 158 patients with biopsy-proven NAFLD. Of these, 38 had advanced fibrosis. The following fibrosis algorithms were calculated: NAFLD fibrosis score, BARD, NIKEI, NASH-CRN regression score, APRI, FIB-4, King's score, GUCI, Lok index, Forns score, and ELF. The study population was randomly divided into a training and a validation group. A multiple logistic regression analysis using bootstrapping methods was applied to the training group. Among the many variables analyzed, age, fasting glucose, hyaluronic acid, and AST were included, and a model (LINKI-1) for predicting advanced fibrosis was created. Moreover, these variables were combined with platelet count in a mathematical way exaggerating the opposing effects, and alternative models (LINKI-2) were also created. Models were compared using the area under the receiver operator characteristic curve (AUROC). Of the established algorithms, FIB-4 and King's score had the best diagnostic accuracy, with AUROCs of 0.84 and 0.83, respectively. Higher accuracy was achieved with the novel LINKI algorithms: the AUROC in the total cohort was 0.91 for LINKI-1 and 0.89 for the LINKI-2 models. The LINKI algorithms for detection of advanced fibrosis in NAFLD showed better accuracy than established algorithms and should be validated in further studies including larger cohorts.
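
    The model-building procedure described — multiple logistic regression on a training split, compared by the area under the ROC curve — can be sketched with scikit-learn. The predictors below mirror the variables named in the abstract (age, fasting glucose, hyaluronic acid, AST), but the data are randomly generated, the bootstrapped variable selection of the original study is not reproduced, and this is not the LINKI model itself.

```python
# Sketch: logistic regression for advanced fibrosis with a train/validation split
# and AUROC evaluation. Data are randomly generated; this is not the LINKI model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 158
X = np.column_stack([
    rng.normal(55, 12, n),     # age, years
    rng.normal(6.0, 1.5, n),   # fasting glucose, mmol/L
    rng.normal(60, 30, n),     # hyaluronic acid, ng/mL
    rng.normal(45, 20, n),     # AST, U/L
])
# Synthetic outcome loosely driven by the predictors (illustration only).
logit = -9.5 + 0.05 * X[:, 0] + 0.4 * X[:, 1] + 0.03 * X[:, 2] + 0.03 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auroc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
print(f"validation AUROC = {auroc:.2f}")
```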

  3. Development of Serum Marker Models to Increase Diagnostic Accuracy of Advanced Fibrosis in Nonalcoholic Fatty Liver Disease: The New LINKI Algorithm Compared with Established Algorithms

    PubMed Central

    Lykiardopoulos, Byron; Hagström, Hannes; Fredrikson, Mats; Ignatova, Simone; Stål, Per; Hultcrantz, Rolf; Ekstedt, Mattias

    2016-01-01

    Background and Aim Detection of advanced fibrosis (F3-F4) in nonalcoholic fatty liver disease (NAFLD) is important for ascertaining prognosis. Serum markers have been proposed as alternatives to biopsy. We attempted to develop a novel algorithm for detection of advanced fibrosis based on a more efficient combination of serological markers and to compare this with established algorithms. Methods We included 158 patients with biopsy-proven NAFLD. Of these, 38 had advanced fibrosis. The following fibrosis algorithms were calculated: NAFLD fibrosis score, BARD, NIKEI, NASH-CRN regression score, APRI, FIB-4, King's score, GUCI, Lok index, Forns score, and ELF. The study population was randomly divided into a training and a validation group. A multiple logistic regression analysis using bootstrapping methods was applied to the training group. Among the many variables analyzed, age, fasting glucose, hyaluronic acid, and AST were included, and a model (LINKI-1) for predicting advanced fibrosis was created. Moreover, these variables were combined with platelet count in a mathematical way exaggerating the opposing effects, and alternative models (LINKI-2) were also created. Models were compared using the area under the receiver operating characteristic curve (AUROC). Results Of the established algorithms, FIB-4 and King's score had the best diagnostic accuracy, with AUROCs of 0.84 and 0.83, respectively. Higher accuracy was achieved with the novel LINKI algorithms: in the total cohort, the AUROC was 0.91 for LINKI-1 and 0.89 for the LINKI-2 models. Conclusion The LINKI algorithms for detection of advanced fibrosis in NAFLD showed better accuracy than established algorithms and should be validated in further studies including larger cohorts. PMID:27936091

  4. The effect of spatial resolution on decoding accuracy in fMRI multivariate pattern analysis.

    PubMed

    Gardumi, Anna; Ivanov, Dimo; Hausfeld, Lars; Valente, Giancarlo; Formisano, Elia; Uludağ, Kâmil

    2016-05-15

    Multivariate pattern analysis (MVPA) in fMRI has been used to extract information from distributed cortical activation patterns, which may go undetected in conventional univariate analysis. However, little is known about the physical and physiological underpinnings of MVPA in fMRI as well as about the effect of spatial smoothing on its performance. Several studies have addressed these issues, but their investigation was limited to the visual cortex at 3T with conflicting results. Here, we used ultra-high field (7T) fMRI to investigate the effect of spatial resolution and smoothing on decoding of speech content (vowels) and speaker identity from auditory cortical responses. To that end, we acquired high-resolution (1.1mm isotropic) fMRI data and additionally reconstructed them at 2.2 and 3.3mm in-plane spatial resolutions from the original k-space data. Furthermore, the data at each resolution were spatially smoothed with different 3D Gaussian kernel sizes (i.e. no smoothing or 1.1, 2.2, 3.3, 4.4, or 8.8mm kernels). For all spatial resolutions and smoothing kernels, we demonstrate the feasibility of decoding speech content (vowel) and speaker identity at 7T using support vector machine (SVM) MVPA. In addition, we found that high spatial frequencies are informative for vowel decoding and that the relative contribution of high and low spatial frequencies is different across the two decoding tasks. Moderate smoothing (up to 2.2mm) improved the accuracies for both decoding of vowels and speakers, possibly due to reduction of noise (e.g. residual motion artifacts or instrument noise) while still preserving information at high spatial frequency. In summary, our results show that - even with the same stimuli and within the same brain areas - the optimal spatial resolution for MVPA in fMRI depends on the specific decoding task of interest. Copyright © 2016 Elsevier Inc. All rights reserved.
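
    The following sketch mimics the decoding analysis in spirit: toy 3-D activation patterns are smoothed with Gaussian kernels of increasing width and classified with a linear SVM under cross-validation. The data, the signal strength and the mapping from millimetre kernels to voxel sigmas are all assumptions for illustration, not the study's 7T data or parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, shape = 60, (8, 8, 8)            # toy "activation patterns"
labels = rng.integers(0, 2, n_trials)      # e.g. two vowel classes
patterns = rng.normal(size=(n_trials,) + shape)
patterns[labels == 1, :4, :4, :4] += 0.3   # weak class-specific signal in one corner

for sigma_mm, sigma_vox in [(0.0, 0.0), (2.2, 1.0), (4.4, 2.0)]:
    smoothed = np.stack([gaussian_filter(p, sigma_vox) if sigma_vox else p
                         for p in patterns])
    X = smoothed.reshape(n_trials, -1)
    acc = cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean()
    print(f"kernel ~{sigma_mm} mm: decoding accuracy {acc:.2f}")
```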

  5. Multinational assessment of accuracy of equations for predicting risk of kidney failure: a meta-analysis

    PubMed Central

    Tangri, Navdeep; Grams, Morgan E.; Levey, Andrew S.; Coresh, Josef; Appel, Lawrence; Astor, Brad C.; Chodick, Gabriel; Collins, Allan J.; Djurdjev, Ognjenka; Elley, C. Raina; Evans, Marie; Garg, Amit X.; Hallan, Stein I.; Inker, Lesley; Ito, Sadayoshi; Jee, Sun Ha; Kovesdy, Csaba P.; Kronenberg, Florian; Lambers Heerspink, Hiddo J.; Marks, Angharad; Nadkarni, Girish N.; Navaneethan, Sankar D.; Nelson, Robert G.; Titze, Stephanie; Sarnak, Mark J.; Stengel, Benedicte; Woodward, Mark; Iseki, Kunitoshi

    2016-01-01

    Importance Identifying patients at risk of chronic kidney disease (CKD) progression may facilitate more optimal nephrology care. Kidney failure risk equations (KFREs) were previously developed and validated in two Canadian cohorts. Validation in other regions and in CKD populations not under the care of a nephrologist is needed. Objective To evaluate the accuracy of the KFREs across different geographic regions and patient populations through individual-participant data meta-analysis. Data Sources Thirty-one cohorts, including 721,357 participants with CKD Stages 3–5 in over 30 countries spanning 4 continents, were studied. These cohorts collected data from 1982 through 2014. Study Selection Cohorts participating in the CKD Prognosis Consortium with data on end-stage renal disease. Data Extraction and Synthesis Data were obtained and statistical analyses were performed between July 2012 and June 2015. Using the risk factors from the original KFREs, cohort-specific hazard ratios were estimated, and combined in meta-analysis to form new “pooled” KFREs. Original and pooled equation performance was compared, and the need for regional calibration factors was assessed. Main Outcome and Measure Kidney failure (treatment by dialysis or kidney transplantation). Results During a median follow-up of 4 years, 23,829 cases of kidney failure were observed. The original KFREs achieved excellent discrimination (ability to differentiate those who developed kidney failure from those who did not) across all cohorts (overall C statistic, 0.90 (95% CI 0.89–0.92) at 2 years and 0.88 (95% CI 0.86–0.90) at 5 years); discrimination in subgroups by age, race, and diabetes status was similar. There was no improvement with the pooled equations. Calibration (the difference between observed and predicted risk) was adequate in North American cohorts, but the original KFREs overestimated risk in some non-North American cohorts. Addition of a calibration factor that lowered the baseline

  6. Test Form Accuracy.

    ERIC Educational Resources Information Center

    Wise, Lauress

    As high-stakes use of tests increases, it becomes vital that test developers and test users communicate clearly about the accuracy and limitations of the scores generated by a test after it is assembled and used. A procedure is described for portraying the accuracy of test scores. It can be used in setting accuracy targets during form construction…

  7. Limitations and strategies to improve measurement accuracy in differential pulse-width pair Brillouin optical time-domain analysis sensing.

    PubMed

    Minardo, Aldo; Bernini, Romeo; Zeni, Luigi

    2013-05-01

    In this work, we analyze the effects of Brillouin gain and Brillouin frequency drifts on the accuracy of the differential pulse-width pair Brillouin optical time-domain analysis (DPP-BOTDA). In particular, we demonstrate numerically that the differential gain is highly sensitive to variations in the Brillouin gain and/or Brillouin shift occurring during the acquisition process, especially when operating with a small pulse pair duration difference. We also propose and demonstrate experimentally a method to compensate for these drifts and consequently improve measurement accuracy.

  8. Analysis of the Ship Ops Model’s Accuracy in Predicting U.S. Naval Ship Operating Cost

    DTIC Science & Technology

    2003-06-01

    [Scanned report front-matter (form fields, standard disclaimer, and blank-page markers) not reproduced. Recoverable excerpt: Evaluation of Model Accuracy using backcast, 1997-2002.]
    Year    SF        SU       SR        SO       Total
    1997    $24,654   $4,315   $12,748   $6,626   $48,343
    1998    $29,890   $5,853   $15,300   $9,046   ...

  9. Analysis: The Accuracy and Efficacy of the Dexcom G4 Platinum Continuous Glucose Monitoring System.

    PubMed

    van Beers, Cornelis A J; DeVries, J H

    2015-04-27

    In this issue of Journal of Diabetes Science and Technology, Nakamura and Balo report on the accuracy and efficacy of the Dexcom G4 Platinum Continuous Glucose Monitoring System. The authors demonstrate good overall performance of this real-time continuous glucose monitoring (RT-CGM) system, although accuracy data of the next generation RT-CGM system, the G4AP, are already available. Also, now that MARDs seem to move to single-digit numbers, the question arises of how low we need to go with accuracy. Results of the study also showed a reduction in time spent in hypoglycemia, although the clinical relevance should be questioned. To date, few trials have demonstrated a reduction of severe hypoglycemia. Conventional RT-CGM, without threshold suspension or closing the loop, might be insufficient in preventing severe hypoglycemia.

  10. Application of Spectral Accuracy to Improve the Identification of Organic Compounds in Environmental Analysis.

    PubMed

    Eysseric, Emmanuel; Barry, Killian; Beaudry, Francis; Houde, Magali; Gagnon, Christian; Segura, Pedro A

    2017-09-19

    Correct identification of a chemical substance in environmental samples based only on accurate mass measurements can be difficult, especially for molecules >300 Da. Here, the application of spectral accuracy, a tool that compares isotope patterns for molecular formula generation, is presented as a complementary technique to assist in the identification process of organic micropollutants and their transformation products in surface water. A set of nine common contaminants (five antibiotics, an herbicide, a beta-blocker, an antidepressant, and an antineoplastic) frequently found in surface water were spiked in methanol and surface water extracts at two different concentrations (80 and 300 μg L(-1)). They were then injected into three different mass analyzers (triple quadrupole, quadrupole-time-of-flight, and quadrupole-orbitrap) to study the impact of matrix composition, analyte concentration, and mass resolution on the correct identification of molecular formulas using spectral accuracy. High spectral accuracy and ranking of the correct molecular formula were in many cases compound-specific due principally to conditions affecting signal intensity such as matrix effects and concentration. However, in general, results showed that higher concentrations and higher resolutions favored ranking the correct formula in the top 10. Using spectral accuracy and mass accuracy it was possible to reduce the number of possible molecular formulas for organic compounds of relatively high molecular mass (e.g., between 400 and 900 Da) to fewer than 10, and in some cases it was possible to unambiguously assign one specific molecular formula to an experimental isotopic pattern. This study confirmed that spectral accuracy can be used as a complementary diagnostic technique to improve confidence levels for the identification of organic contaminants under environmental conditions.
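
    A minimal sketch of the idea behind spectral accuracy follows: a measured isotope pattern is compared against the theoretical pattern of a candidate formula, and a normalized-difference score close to 100% supports that formula. The scoring function and the intensity values are simplified stand-ins, not the vendor-specific metric used in the paper.

```python
import numpy as np

def spectral_accuracy(measured, theoretical):
    """Toy similarity score between measured and theoretical isotope patterns.

    Both inputs are relative intensities of the isotopologue peaks (M, M+1, M+2, ...).
    This simple normalized-difference score is only a stand-in for the formal
    spectral accuracy metric referenced in the abstract.
    """
    m = np.asarray(measured, float)
    t = np.asarray(theoretical, float)
    m, t = m / m.sum(), t / t.sum()
    return 100.0 * (1.0 - 0.5 * np.abs(m - t).sum())

# theoretical pattern of a small organic ion vs. a noisy measurement (illustrative values)
theo = [100.0, 22.0, 3.1]
meas = [100.0, 20.5, 3.6]
print(f"spectral accuracy ~ {spectral_accuracy(meas, theo):.1f}%")
```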

  11. Analysis of accuracy of digital elevation models created from captured data by digital photogrammetry method

    NASA Astrophysics Data System (ADS)

    Hudec, P.

    2011-12-01

    A digital elevation model (DEM) is an important part of many geoinformatic applications. For the creation of DEM, spatial data collected by geodetic measurements in the field, photogrammetric processing of aerial survey photographs, laser scanning and secondary sources (analogue maps) are used. It is very important from a user's point of view to know the vertical accuracy of a DEM. The article describes the verification of the vertical accuracy of a DEM for the region of Medzibodrožie, which was created using digital photogrammetry for the purposes of water resources management and modeling and resolving flood cases based on geodetic measurements in the field.
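
    Vertical DEM accuracy checks of the kind described above usually reduce to comparing interpolated DEM heights with independent geodetic check points; a minimal sketch with made-up heights is shown below.

```python
import numpy as np

def dem_vertical_accuracy(dem_heights, checkpoint_heights):
    """RMSE, mean error and maximum absolute error of DEM heights against check points."""
    diff = np.asarray(dem_heights, float) - np.asarray(checkpoint_heights, float)
    return {"rmse_m": float(np.sqrt(np.mean(diff**2))),
            "mean_error_m": float(diff.mean()),
            "max_abs_error_m": float(np.abs(diff).max())}

# illustrative values only: DEM heights vs. geodetic field measurements (m)
print(dem_vertical_accuracy([101.2, 98.7, 103.4, 99.9], [101.0, 98.9, 103.1, 100.2]))
```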

  12. Effect of transportation and storage using sorbent tubes of exhaled breath samples on diagnostic accuracy of electronic nose analysis.

    PubMed

    van der Schee, M P; Fens, N; Brinkman, P; Bos, L D J; Angelo, M D; Nijsen, T M E; Raabe, R; Knobel, H H; Vink, T J; Sterk, P J

    2013-03-01

    Many (multi-centre) breath-analysis studies require transport and storage of samples. We aimed to test the effect of transportation and storage of exhaled breath samples in sorbent tubes on the diagnostic accuracy of eNose and GC-MS analysis. As a reference standard for diagnostic accuracy, breath samples of asthmatic patients and healthy controls were analysed by three eNose devices. Samples were analysed by GC-MS and eNose after 1, 7 and 14 days of transportation and storage using sorbent tubes. The diagnostic accuracy for eNose and GC-MS after storage was compared to the reference standard. As a validation, the stability of 15 compounds known to be related to asthma, abundant in breath, or related to sampling and analysis was assessed. The reference test discriminated between asthma patients and healthy controls with a median AUC (range) of 0.77 (0.72-0.76). Similar accuracies were achieved at t1 (AUC eNose 0.78; GC-MS 0.84), t7 (AUC eNose 0.76; GC-MS 0.79) and t14 (AUC eNose 0.83; GC-MS 0.84). The GC-MS analysis of compounds showed an adequate stability for all 15 compounds during the 14-day period. Short-term transportation and storage of breath samples in sorbent tubes does not influence the diagnostic accuracy for discrimination between asthma and health by eNose and GC-MS.

  13. Correlation Analysis Combined with a Floating Reference Measurement to Improve the Prediction Accuracy of Glucose in Scattering Media.

    PubMed

    Min, Xiaolin; Liu, Rong; Fu, Bo; Xu, Kexin

    2017-09-01

    Noninvasive sensing of blood glucose based on near-infrared (NIR) spectroscopy is a research hotspot in the biomedical field. However, its accuracy is severely limited by the weak specific signal of glucose and the strong background variations caused by other constituents in the blood, the measuring instrument, and the environment. In this paper, special source-detector distances, defined as the floating reference position, are used to conduct relative measurements and correct for background variations. These floating reference positions are chosen so that the diffuse reflectance is not sensitive to the change in glucose concentration due to the combined effects of absorption and scattering. Nine 10% intralipid samples with glucose concentrations in the range of 1000-5000 mg dL(-1) at an interval of 500 mg dL(-1) were prepared. Using a custom-built, continuously moving, spatially resolving, double-fiber measurement system with a superluminescent diode (SLD) as the light source, the diffuse reflectance of intralipid samples containing glucose under different source-detector distances (0.2-5 mm, with intervals of 0.2 mm) were collected. Then, a correlation analysis between the spectra and the glucose concentration was carried out to determine the floating reference position and the optimal measuring position. The signal in the floating reference position was used to correct the background variation because it contains the same systematic drift and interference as the signal in the optimal measuring position. The results showed that the correlation between the diffuse reflectance and the glucose concentration was increased significantly compared with traditional correction by subtracting the nearest spectrum of pure 10% intralipid solution. The correlation between the diffuse reflectance and the concentration of glucose is significantly increased, which indicated that the combination of the correlation analysis and a floating reference is able to eliminate
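
    The floating-reference idea can be illustrated with a toy correlation scan: for each source-detector distance, correlate the diffuse reflectance with glucose concentration, take the distance where the correlation vanishes as the reference position and the distance where it is strongest as the measuring position. The reflectance model below is an invented linear stand-in, not the intralipid measurements of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
glucose = np.arange(1000, 5001, 500, dtype=float)   # mg/dL, nine levels as in the phantom study
distances = np.arange(0.2, 5.01, 0.2)               # mm source-detector separations

# toy reflectance model: sensitivity to glucose changes sign near 2 mm (assumption for illustration)
sensitivity = (distances - 2.0) * 1e-4
reflectance = 1.0 + np.outer(glucose, sensitivity) + rng.normal(0, 5e-4, (glucose.size, distances.size))

# correlation between reflectance and glucose at each distance
r = np.array([np.corrcoef(glucose, reflectance[:, j])[0, 1] for j in range(distances.size)])
print(f"floating reference position ~ {distances[np.argmin(np.abs(r))]:.1f} mm")
print(f"optimal measuring position ~ {distances[np.argmax(np.abs(r))]:.1f} mm")
```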

  14. Diagnostic accuracy of clinical characteristics for identifying CT abnormality after minor brain injury: a systematic review and meta-analysis.

    PubMed

    Pandor, Abdullah; Harnan, Susan; Goodacre, Steve; Pickering, Alastair; Fitzgerald, Patrick; Rees, Angie

    2012-03-20

    Clinical features can be used to identify which patients with minor brain injury need CT scanning. A systematic review and meta-analysis was undertaken to estimate the value of these characteristics for diagnosing intracranial injury (including the need for neurosurgery) in adults, children, and infants. Potentially relevant studies were identified through electronic searches of several key databases, including MEDLINE, from inception to March 2010. Cohort studies of patients with minor brain injury (Glasgow Coma Score [GCS], 13-15) were selected if they reported data on the diagnostic accuracy of individual clinical characteristics for intracranial or neurosurgical injury. Where applicable, meta-analysis was used to estimate pooled sensitivity, specificity and likelihood ratios. Data were extracted from 71 studies (with cohort sizes ranging from 39 to 31,694 patients). Depressed or basal skull fractures were the most useful clinical characteristics for the prediction of intracranial injury in both adults and children (positive likelihood ratio [PLR], >10). Other useful characteristics included focal neurological deficit, post-traumatic seizure (PLR >5), persistent vomiting, and coagulopathy (PLR 2 to 5). Characteristics that had limited diagnostic value included loss of consciousness and headache in adults and scalp hematoma and scalp laceration in children. Limited studies were undertaken in children and only a few studies reported data for neurosurgical injuries. In conclusion, this review identifies clinical characteristics that indicate increased risk of intracranial injury and the need for CT scanning. Other characteristics, such as headache in adults and scalp laceration or hematoma in children, do not reliably indicate increased risk.
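
    The pooled indices reported above derive from standard 2x2-table quantities; the sketch below computes sensitivity, specificity and the positive likelihood ratio from illustrative counts (not data from the review).

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Sensitivity, specificity and positive likelihood ratio from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    plr = sens / (1.0 - spec)
    return sens, spec, plr

# illustrative counts for one clinical sign vs. CT-confirmed intracranial injury
sens, spec, plr = diagnostic_indices(tp=40, fp=30, fn=10, tn=920)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, PLR {plr:.1f}")
```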

  15. Accuracy and postoperative assessment of pedicle screw placement during scoliosis surgery with computer-assisted navigation: a meta-analysis.

    PubMed

    Tian, Wei; Zeng, Cheng; An, Yan; Wang, Chao; Liu, Yajun; Li, Jianing

    2017-03-01

    Accurate insertion of pedicle screws in scoliosis patients is a challenge for surgeons. Computer-assisted navigation techniques might help improve the accuracy of screw placement, thereby avoiding complications. Thus, the objective of the present work is to compare the accuracy and postoperative assessment of pedicle screw placement in scoliosis patients using a computer-assisted navigation technique and using a conventional free-hand method. A search of the PubMed, Cochrane, and Web of Science databases was executed. In vivo comparative studies that assessed the accuracy and postoperative evaluation of pedicle screw placement in scoliosis patients with or without navigation techniques were chosen and analyzed. The accuracy of pedicle screw insertion was significantly increased when using the navigation system, although the average operative time and correction rate were not significantly different from those with non-navigated surgery. The navigation technique improves the accuracy of pedicle screw placement during scoliosis surgery without prolonging the operative time or decreasing the deformity correction effect. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Accuracy of an autocalibrated pulse contour analysis in cardiac surgery patients: a bi-center clinical trial.

    PubMed

    Broch, Ole; Carbonell, Jose; Ferrando, Carlos; Metzner, Malte; Carstens, Arne; Albrecht, Martin; Gruenewald, Matthias; Höcker, Jan; Soro, Marina; Steinfath, Markus; Renner, Jochen; Bein, Berthold

    2015-11-26

    Less-invasive and easy to install monitoring systems for continuous estimation of cardiac index (CI) have gained increasing interest, especially in cardiac surgery patients who often exhibit abrupt haemodynamic changes. The aim of the present study was to compare the accuracy of CI by a new semi-invasive monitoring system with transpulmonary thermodilution before and after cardiopulmonary bypass (CPB). Sixty-five patients (41 Germany, 24 Spain) scheduled for elective coronary surgery were studied before and after CPB, respectively. Measurements included CI obtained by transpulmonary thermodilution (CITPTD) and autocalibrated semi-invasive pulse contour analysis (CIPFX). Percentage changes of CI were also calculated. There was only a poor correlation between CITPTD and CIPFX both before (r² = 0.34, p < 0.0001) and after (r² = 0.31, p < 0.0001) CPB, with a percentage error (PE) of 62 and 49 %, respectively. Four quadrant plots revealed a concordance rate over 90 % indicating an acceptable correlation of trends between CITPTD and CIPFX before (concordance: 93 %) and after (concordance: 94 %) CPB. In contrast, polar plot analysis showed poor trending before and an acceptable trending ability of changes in CI after CPB. Semi-invasive CI by autocalibrated pulse contour analysis showed a poor ability to estimate CI compared with transpulmonary thermodilution. Furthermore, the new semi-invasive device revealed an acceptable trending ability for haemodynamic changes only after CPB. ClinicalTrials.gov: NCT02312505 Date: 12.03.2012.
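
    The percentage error quoted above is conventionally the Critchley-Critchley criterion, 1.96 times the standard deviation of the bias divided by the mean cardiac index; a small sketch with invented paired readings follows.

```python
import numpy as np

def percentage_error(reference_ci, test_ci):
    """Critchley-Critchley percentage error between two cardiac index methods.

    PE = 1.96 * SD of the bias / mean of the paired reference and test values.
    """
    reference_ci = np.asarray(reference_ci, float)
    test_ci = np.asarray(test_ci, float)
    bias = test_ci - reference_ci
    mean_ci = np.mean((reference_ci + test_ci) / 2.0)
    return 100.0 * 1.96 * bias.std(ddof=1) / mean_ci

# illustrative paired cardiac index readings (L/min/m^2), not trial data
tptd = [2.4, 2.8, 3.1, 2.2, 3.5, 2.9]
pfx = [2.1, 3.2, 2.6, 2.5, 3.9, 2.4]
print(f"percentage error ~ {percentage_error(tptd, pfx):.0f}%")
```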

  17. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…

  18. Comparative analysis of Worldview-2 and Landsat 8 for coastal saltmarsh mapping accuracy assessment

    NASA Astrophysics Data System (ADS)

    Rasel, Sikdar M. M.; Chang, Hsing-Chung; Diti, Israt Jahan; Ralph, Tim; Saintilan, Neil

    2016-05-01

    Coastal saltmarshes and their constituent components and processes are of scientific interest due to their ecological functions and services. However, the heterogeneity and seasonal dynamics of the coastal wetland system make it challenging to map saltmarshes with remotely sensed data. This study selected four important saltmarsh species, Phragmites australis, Sporobolus virginicus, Ficinia nodosa and Schoenoplectus sp., as well as a mangrove and a pine tree species, Avicennia and Casuarina sp., respectively. High spatial resolution Worldview-2 data and coarse spatial resolution Landsat 8 imagery were selected for this study. Among the selected vegetation types, some patches were fragmented and close to the spatial resolution of the Worldview-2 data, while some patches were larger than the 30 m resolution of the Landsat 8 data. This study aims to test the effectiveness of different classifiers for imagery with various spatial and spectral resolutions. Three classification algorithms, Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM) and Artificial Neural Network (ANN), were tested and their mapping accuracies compared for the results derived from both satellite images. For the Worldview-2 data, SVM gave the highest overall accuracy (92.12%, kappa = 0.90), followed by ANN (90.82%, kappa = 0.89) and MLC (90.55%, kappa = 0.88). For the Landsat 8 data, MLC (82.04%) showed the highest classification accuracy compared to SVM (77.31%) and ANN (75.23%). The producer's accuracies of the classification results are also presented in the paper.
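
    Overall accuracy and kappa of the kind reported above come straight from the classification confusion matrix; the sketch below computes both for an illustrative three-class matrix (not the study's error matrices).

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a classification confusion matrix."""
    c = np.asarray(confusion, float)
    n = c.sum()
    po = np.trace(c) / n                                # observed agreement
    pe = np.sum(c.sum(axis=0) * c.sum(axis=1)) / n**2   # chance agreement
    return po, (po - pe) / (1.0 - pe)

# illustrative 3-class confusion matrix (rows = reference, columns = mapped)
cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 5, 44]]
oa, kappa = overall_accuracy_and_kappa(cm)
print(f"overall accuracy {oa:.2%}, kappa {kappa:.2f}")
```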

  19. The Push for More Challenging Texts: An Analysis of Early Readers' Rate, Accuracy, and Comprehension

    ERIC Educational Resources Information Center

    Amendum, Steven J.; Conradi, Kristin; Liebfreund, Meghan D.

    2016-01-01

    The purpose of the study was to examine the relationship between the challenge level of text and early readers' reading comprehension. This relationship was also examined with consideration to students' word recognition accuracy and reading rate. Participants included 636 students, in Grades 1-3, in a southeastern state. Results suggest that…

  20. An analysis of the accuracy of a parameter optimization. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Baram, Y.

    1974-01-01

    The numerical operations involved in a currently used optimization technique are discussed and analyzed with special attention to the numerical accuracy. Alternative methods for deriving linear system transfer functions, finding the relationships between the transfer function coefficients and the design parameters, and solving a matrix equation are presented for more accurate and cost effective solutions.

  1. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…

  2. An Information-Processing Analysis of Children's Accuracy in Predicting the Appearance of Rotated Stimuli.

    ERIC Educational Resources Information Center

    Rosser, Rosemary A.; And Others

    1984-01-01

    The ability of 40 children four and five years of age to discriminate reflections and rotations of visual stimuli was examined in a kinetic imagery task. Results revealed that prediction accuracy was associated with the existence of orientation markers on the stimuli, as well as age, sex, type of discrimination, and several interactions among the…

  3. Theoretical study of precision and accuracy of strain analysis by nano-beam electron diffraction.

    PubMed

    Mahr, Christoph; Müller-Caspary, Knut; Grieb, Tim; Schowalter, Marco; Mehrtens, Thorsten; Krause, Florian F; Zillmann, Dennis; Rosenauer, Andreas

    2015-11-01

    Measurement of lattice strain is important to characterize semiconductor nanostructures. As strain has large influence on the electronic band structure, methods for the measurement of strain with high precision, accuracy and spatial resolution in a large field of view are mandatory. In this paper we present a theoretical study of precision and accuracy of measurement of strain by convergent nano-beam electron diffraction. It is found that the accuracy of the evaluation suffers from halos in the diffraction pattern caused by a variation of strain within the area covered by the focussed electron beam. This effect, which is expected to be strong at sharp interfaces between materials with different lattice plane distances, will be discussed for convergent-beam electron diffraction patterns using a conventional probe and for patterns formed by a precessing electron beam. Furthermore, we discuss approaches to optimize the accuracy of strain measured at interfaces. The study is based on the evaluation of diffraction patterns simulated for different realistic structures that have been investigated experimentally in former publications. These simulations account for thermal diffuse scattering using the frozen-lattice approach and the modulation-transfer function of the image-recording system. The influence of Poisson noise is also investigated.

  4. A Diagnostic Accuracy Meta-analysis of CT and MRI for the Evaluation of Small Bowel Crohn Disease.

    PubMed

    Liu, Wenhong; Liu, Jincai; Xiao, Wenlian; Luo, Guanghua

    2017-10-01

    This study aimed to evaluate the diagnostic accuracy of magnetic resonance imaging (MRI) and computed tomography (CT) in assessing small bowel (SB) Crohn disease (CD). We systematically searched PubMed, Elsevier, ScienceDirect, Karger, Web of Science, Wiley Online Library, and Springer for studies in which CT or MRI were evaluated to assess SB CD. Bivariate random effect meta-analytic methods were used to estimate pooled sensitivity, specificity, and receiver operating characteristic curves. Diagnostic odds ratios (DORs) in a per-patient-based analysis were estimated. The area under the receiver operating characteristic curve was also calculated to measure the diagnostic accuracy. Twenty-one studies involving 913 patients were included in this meta-analysis. There was no significant difference observed between modalities. The diagnostic performances (lnDOR) for CT and MRI also showed no significant difference. Subgroup analysis was performed for MR imaging (MR enteroclysis, MR enterography, and CT enterography). The diagnostic performances (lnDOR) for MR enteroclysis, MR enterography, and CT enterography did not show a significant difference among them. No significant difference was found between these techniques. Deeks funnel plot asymmetry test for publication bias showed that no significant publication bias was observed in this analysis. This meta-analysis suggests that both MRI and CT have high diagnostic accuracy in detecting SB CD. MRI has the potential to be the first-line radiation-free modality for SB CD imaging. Copyright © 2017. Published by Elsevier Inc.

  5. Taking time to feel our body: Steady increases in heartbeat perception accuracy and decreases in alexithymia over 9 months of contemplative mental training.

    PubMed

    Bornemann, Boris; Singer, Tania

    2017-03-01

    The ability to accurately perceive signals from the body has been shown to be important for physical and psychological health as well as understanding one's emotions. Despite the importance of this skill, often indexed by heartbeat perception accuracy (HBPa), little is known about its malleability. Here, we investigated whether contemplative mental practice can increase HBPa. In the context of a 9-month mental training study, the ReSource Project, two matched cohorts (n = 77 and n = 79) underwent three training modules of 3 months' duration that targeted attentional and interoceptive abilities (Presence module), socio-affective (Affect module), and socio-cognitive (Perspective module) abilities. A third cohort (n = 78) underwent 3 months of practice (Affect module) and a retest control group (n = 84) did not undergo any training. HBPa was measured with a heartbeat tracking task before and after each training module. Emotional awareness was measured by the Toronto Alexithymia Scale (TAS). Participants with TAS scores > 60 at screening were excluded. HBPa was found to increase steadily over the training, with significant and small- to medium-sized effects emerging after 6 months (Cohen's d = .173) and 9 months (d = .273) of mental training. Changes in HBPa were concomitant with and predictive of changes in emotional awareness. Our results suggest that HBPa can indeed be trained through intensive contemplative practice. The effect takes longer than the 8 weeks of typical mindfulness courses to reach meaningful magnitude. These increments in interoceptive accuracy and the related improvements in emotional awareness point to opportunities for improving physical and psychological health through contemplative mental training.
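
    Heartbeat perception accuracy in the Schandry task is typically scored as one minus the relative counting error, averaged over intervals; a minimal sketch with invented counts is shown below.

```python
import numpy as np

def heartbeat_perception_accuracy(recorded, counted):
    """Schandry-style heartbeat perception accuracy, averaged over counting intervals.

    recorded: actual heartbeats per interval (e.g. from ECG); counted: participant counts.
    """
    recorded = np.asarray(recorded, float)
    counted = np.asarray(counted, float)
    return np.mean(1.0 - np.abs(recorded - counted) / recorded)

# illustrative counts for three counting intervals (not study data)
print(f"HBPa = {heartbeat_perception_accuracy([28, 39, 51], [24, 35, 49]):.2f}")
```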

  6. Accuracy Analysis of a Robotic Radionuclide Inspection and Mapping System for Surface Contamination

    SciTech Connect

    Mauer, Georg F.; Kawa, Chris

    2008-01-15

    The mapping of localized regions of radionuclide contamination in a building can be a time-consuming and costly task. Humans moving hand-held radiation detectors over the target areas are subject to fatigue. A contamination map based on manual surveys can contain significant operator-induced inaccuracies. A Fanuc M16i light industrial robot has been configured for installation on a mobile aerial work platform, such as a tall forklift. When positioned in front of a wall or floor surface, the robot can map the radiation levels over a surface area of up to 3 m by 3 m. The robot's end effector is a commercial alpha-beta radiation sensor, augmented with range and collision avoidance sensors to ensure operational safety as well as to maintain a constant gap between surface and radiation sensors. The accuracy and repeatability of the robotically conducted contamination surveys are directly influenced by the sensors and other hardware employed. This paper presents an in-depth analysis of various non-contact sensors for gap measurement, and the means to compensate for predicted systematic errors that arise during the area survey scans. The range sensor should maintain a constant gap between the radiation counter and the surface being inspected. The inspection robot scans the wall surface horizontally, moving down at predefined vertical intervals after each scan in a meandering pattern. A number of non-contact range sensors can be employed for the measurement of the gap between the robot end effector and the wall. The nominal gap width was specified as 10 mm, with variations during a single scan not to exceed ±2 mm. Unfinished masonry or concrete walls typically exhibit irregularities, such as holes, gaps, or indentations in mortar joints. These irregularities can be sufficiently large to indicate a change of the wall contour. The responses of different sensor types to the wall irregularities vary, depending on their underlying principles of operation. We explored

  7. Spatio-Temporal Analysis of the Accuracy of Tropical Multisatellite Precipitation Analysis 3B42 Precipitation Data in Mid-High Latitudes of China

    PubMed Central

    Cai, Yancong; Jin, Changjie; Wang, Anzhi; Guan, Dexin; Wu, Jiabing; Yuan, Fenghui; Xu, Leilei

    2015-01-01

    Satellite-based precipitation data have contributed greatly to quantitatively forecasting precipitation, and provide a potential alternative source for precipitation data allowing researchers to better understand patterns of precipitation over ungauged basins. However, the absence of calibration satellite data creates considerable uncertainties for the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 product over high latitude areas beyond the TRMM satellites' latitude band (38°NS). This study attempts to statistically assess TMPA V7 data over the region beyond 40°NS using data obtained from numerous weather stations in 1998–2012. Comparative analysis at three timescales (daily, monthly and annual scale) indicates that adoption of a monthly adjustment significantly improved correlation at a larger timescale, increasing from 0.63 to 0.95; TMPA data always exhibit a slight overestimation that is most serious at a daily scale (the absolute bias is 103.54%). Moreover, the performance of TMPA data varies across all seasons. Generally, TMPA data perform best in summer, but worst in winter, which is likely to be associated with the effects of snow/ice-covered surfaces and shortcomings of precipitation retrieval algorithms. Temporal and spatial analysis of accuracy indices suggests that the performance of TMPA data has gradually improved and has benefited from upgrades; the data are more reliable in humid areas than in arid regions. Special attention should be paid to its application in arid areas and in winter with poor scores of accuracy indices. Also, it is clear that the calibration can significantly improve precipitation estimates; the overestimation by TMPA in the TRMM-covered area is about a third as much as that in the no-TRMM area for monthly and annual precipitation. The systematic evaluation of TMPA over mid-high latitudes provides a broader understanding of satellite-based precipitation estimates, and these data are

  8. Spatio-temporal analysis of the accuracy of tropical multisatellite precipitation analysis 3B42 precipitation data in mid-high latitudes of China.

    PubMed

    Cai, Yancong; Jin, Changjie; Wang, Anzhi; Guan, Dexin; Wu, Jiabing; Yuan, Fenghui; Xu, Leilei

    2015-01-01

    Satellite-based precipitation data have contributed greatly to quantitatively forecasting precipitation, and provide a potential alternative source for precipitation data allowing researchers to better understand patterns of precipitation over ungauged basins. However, the absence of calibration satellite data creates considerable uncertainties for the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 product over high latitude areas beyond the TRMM satellites' latitude band (38°NS). This study attempts to statistically assess TMPA V7 data over the region beyond 40°NS using data obtained from numerous weather stations in 1998-2012. Comparative analysis at three timescales (daily, monthly and annual scale) indicates that adoption of a monthly adjustment significantly improved correlation at a larger timescale, increasing from 0.63 to 0.95; TMPA data always exhibit a slight overestimation that is most serious at a daily scale (the absolute bias is 103.54%). Moreover, the performance of TMPA data varies across all seasons. Generally, TMPA data perform best in summer, but worst in winter, which is likely to be associated with the effects of snow/ice-covered surfaces and shortcomings of precipitation retrieval algorithms. Temporal and spatial analysis of accuracy indices suggests that the performance of TMPA data has gradually improved and has benefited from upgrades; the data are more reliable in humid areas than in arid regions. Special attention should be paid to its application in arid areas and in winter with poor scores of accuracy indices. Also, it is clear that the calibration can significantly improve precipitation estimates; the overestimation by TMPA in the TRMM-covered area is about a third as much as that in the no-TRMM area for monthly and annual precipitation. The systematic evaluation of TMPA over mid-high latitudes provides a broader understanding of satellite-based precipitation estimates, and these data are
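
    Validation statistics of the kind used in these two records (relative bias and correlation between satellite and gauge precipitation) can be sketched as follows, with invented station totals standing in for the TMPA and gauge series.

```python
import numpy as np

def satellite_gauge_skill(satellite_mm, gauge_mm):
    """Relative bias (%) and Pearson correlation of satellite vs. gauge precipitation."""
    s = np.asarray(satellite_mm, float)
    g = np.asarray(gauge_mm, float)
    bias_pct = 100.0 * (s.sum() - g.sum()) / g.sum()
    r = np.corrcoef(s, g)[0, 1]
    return bias_pct, r

# illustrative monthly totals (mm) at one station
sat = [12.0, 30.5, 80.2, 140.9, 160.3, 90.7, 20.1]
obs = [10.5, 28.0, 75.0, 150.2, 148.8, 95.5, 18.0]
bias, r = satellite_gauge_skill(sat, obs)
print(f"relative bias {bias:+.1f}%, correlation {r:.2f}")
```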

  9. Accuracy analysis of continuous deformation monitoring using BeiDou Navigation Satellite System at middle and high latitudes in China

    NASA Astrophysics Data System (ADS)

    Jiang, Weiping; Xi, Ruijie; Chen, Hua; Xiao, Yugang

    2017-02-01

    As the BeiDou Navigation Satellite System (BDS) is now operational across the whole Asia-Pacific region, a new GNSS system with a different satellite orbit structure will become available for deformation monitoring in the future. In practice, GNSS deformation monitoring data are always processed at a regular interval to form displacement time series for deformation analysis; the interval can be neither too long, from a timeliness perspective, nor too short, from the perspective of the precision of the determined displacements. In this paper, two experimental platforms were designed, with one being at mid-latitude and another at higher latitude in China. BDS data processing software was also developed for investigating the accuracy of continuous deformation monitoring using current in-orbit BDS satellites. Data over 20 days at both platforms were obtained and were processed every 2, 4 and 6 h to generate 3 displacement time series for comparison. The results show that with the current in-orbit BDS satellites, in the mid-latitude area it is easy to achieve an accuracy of 1 mm in the horizontal component and 2-3 mm in the vertical component; the accuracy could be further improved to approximately 1 mm in both horizontal and vertical directions when combined BDS/GPS measurements are employed. At higher latitude, however, the results are not as good as expected due to poor satellite geometry, and even the 6 h solutions could only achieve accuracies of 4-6 and 6-10 mm in the horizontal and vertical components, respectively, which implies that it may not be applicable to very high-precision deformation monitoring at high latitude using the current BDS. With the integration of BDS and GPS observations, however, in 4-h sessions, an accuracy of 2 mm in the horizontal component and 4 mm in the vertical component can be achieved, which would be an optimal choice for high-accuracy structural deformation monitoring at high latitude.

  10. The analysis of measurement accuracy of the parallel binocular stereo vision system

    NASA Astrophysics Data System (ADS)

    Yu, Huan; Xing, Tingwen; Jia, Xin

    2016-09-01

    The parallel binocular stereo vision system is a special form of binocular vision system. In order to simulate the observation state of the human eyes, the two cameras used to obtain images of the target scene are placed parallel to each other. This paper built a triangular geometric model, analyzed the structure parameters of the parallel binocular stereo vision system and the correlations between them, and discussed the influences of the baseline distance B between the two cameras, the focal length f, the angle of view ω and other structural parameters on the accuracy of measurement. This paper used Matlab software to test the error function of the parallel binocular stereo vision system under different structure parameters, and the simulation results showed the range of structure parameters for which errors were small, thereby improving the accuracy of the parallel binocular stereo vision system.
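
    For a rectified parallel rig, depth follows from Z = f*B/d, and propagating a disparity error dd gives dZ ~ Z^2 * dd / (f*B), which is why measurement accuracy degrades with distance and improves with a longer baseline or focal length; a small sketch with assumed rig parameters follows.

```python
def stereo_depth_and_error(f_px, baseline_m, disparity_px, disparity_err_px=0.5):
    """Depth and first-order depth error for a parallel (rectified) stereo rig.

    Z = f*B/d; propagating a disparity error dd gives dZ ~ Z^2 * dd / (f*B).
    """
    z = f_px * baseline_m / disparity_px
    dz = z**2 * disparity_err_px / (f_px * baseline_m)
    return z, dz

# illustrative parameters: 1200 px focal length, 0.12 m baseline, 40 px disparity
z, dz = stereo_depth_and_error(1200.0, 0.12, 40.0)
print(f"depth {z:.2f} m, depth uncertainty ±{dz:.3f} m")
```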

  11. DEM extraction and its accuracy analysis with ground-based SAR interferometry

    NASA Astrophysics Data System (ADS)

    Dong, J.; Yue, J. P.; Li, L. H.

    2014-03-01

    Two altimetry models extracting DEM (Digital Elevation Model) with the GBSAR (Ground-Based Synthetic Aperture Radar) technology are studied and their accuracies are analyzed in detail. The approximate and improved altimetry models of GBSAR were derived from the spaceborne radar altimetry based on the principles of the GBSAR technology. The error caused by the parallel ray approximation in the approximate model was analyzed quantitatively, and the results show that the errors cannot be ignored for the ground-based radar system. For the improved altimetry model, the elevation error expression can be acquired by simulating and analyzing the error propagation coefficients of baseline length, wavelength, differential phase and range distance in the mathematical model. By analyzing the elevation error with the baseline and range distance, the results show that the improved altimetry model is suitable for high-precision DEM and the accuracy can be improved by adjusting baseline and shortening slant distance.

  12. Analysis on accuracy improvement of rotor-stator rubbing localization based on acoustic emission beamforming method.

    PubMed

    He, Tian; Xiao, Denghong; Pan, Qiang; Liu, Xiandong; Shan, Yingchun

    2014-01-01

    This paper attempts to introduce an improved acoustic emission (AE) beamforming method to localize rotor-stator rubbing faults in rotating machinery. To investigate the propagation characteristics of acoustic emission signals in the casing shell plate of rotating machinery, plate wave theory for a thin plate is used. A simulation is conducted, and its result shows that the localization accuracy of beamforming depends on multi-mode propagation, dispersion, velocity and array dimension. In order to reduce the effect of propagation characteristics on the source localization, an AE signal pre-processing method is introduced by combining plate wave theory and the wavelet packet transform. A revised localization velocity is also presented to reduce the effect of array size. The accuracy of rubbing localization based on beamforming and on the improved method of the present paper is compared using a rubbing test carried out on a test table of rotating machinery. The results indicate that the improved method can localize the rub fault effectively.
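
    A toy delay-and-sum localization on a 1-D sensor line illustrates the beamforming principle used above: each channel is shifted back by the candidate travel time and the energy of the coherent sum is maximized over candidate positions. The wave speed, geometry and synthetic burst are assumptions; real casing plates involve multi-mode, dispersive propagation, which is exactly what the paper's pre-processing addresses.

```python
import numpy as np

fs, c = 1_000_000.0, 3000.0                    # sample rate (Hz) and assumed plate-wave speed (m/s)
sensors = np.array([0.0, 0.05, 0.10, 0.15])    # linear AE sensor positions on the casing (m)
true_src = 0.07                                # hypothetical rub location (m)

# synthesize one AE burst arriving at each sensor after its propagation delay
t = np.arange(0, 0.001, 1.0 / fs)
burst = lambda t0: np.exp(-((t - t0) * 2e4) ** 2)
signals = np.array([burst(2e-4 + abs(s - true_src) / c) for s in sensors])

# delay-and-sum scan: shift each channel back by the candidate travel time and sum the energy
candidates = np.linspace(0.0, 0.15, 151)
energy = []
for x in candidates:
    shifted = [np.roll(sig, -int(round(abs(s - x) / c * fs))) for sig, s in zip(signals, sensors)]
    energy.append(float(np.sum(np.sum(shifted, axis=0) ** 2)))

print(f"estimated source position: {candidates[int(np.argmax(energy))]:.3f} m")
```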

  13. Georeferencing Accuracy Analysis of a Single WORLDVIEW-3 Image Collected Over Milan

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Roncoroni, F.; Brumana, R.; Previtali, M.

    2016-06-01

    The use of rational functions has become a standard for very high-resolution satellite imagery (VHRSI). On the other hand, the overall geolocalization accuracy via direct georeferencing from on board navigation components is much worse than image ground sampling distance (predicted < 3.5 m CE90 for WorldView-3, whereas GSD = 0.31 m for panchromatic images at nadir). This paper presents the georeferencing accuracy results obtained from a single WorldView-3 image processed with a bias compensated RPC camera model. Orientation results for an image collected over Milan are illustrated and discussed for both direct and indirect georeferencing strategies as well as different bias correction parameters estimated from a set of ground control points. Results highlight that the use of a correction based on two shift parameters is optimal for the considered dataset.

  14. Analysis of "Accuracy evaluation of five blood glucose monitoring systems: the North American comparator trial".

    PubMed

    Fournier, Paul A

    2013-09-01

    In an article in Journal of Diabetes Science and Technology, Halldorsdottir and coauthors examined the accuracy of five blood glucose monitoring systems (BGMSs) in a study sponsored by the manufacturer of the BGMS CONTOUR NEXT EZ (EZ) and found that this BGMS was the most accurate one. However, their findings must be viewed critically given that one of the BGMSs (ACCU-CHEK Aviva) was not compared against the reference measurement specified by its manufacturer, thus making it likely that it performed suboptimally. Also, the accuracy of the glucose-oxidase-based ONE TOUCH Ultra2 and TRUEtrack BGMS is likely to have been underestimated because of the expected low oxygen level in the glycolysed blood samples used to test the performance of these BGMSs under hypoglycemic conditions. In conclusion, although this study shows that EZ is an accurate BGMS, comparisons between this and other BGMSs should be interpreted with caution.

  15. Accuracy aspects of stereo side-looking radar. [analysis of its visual perception and binocular vision

    NASA Technical Reports Server (NTRS)

    Leberl, F. W.

    1979-01-01

    The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.

  16. Accuracy analysis for DSM and orthoimages derived from SPOT HRS stereo data using direct georeferencing

    NASA Astrophysics Data System (ADS)

    Reinartz, Peter; Müller, Rupert; Lehner, Manfred; Schroeder, Manfred

    During the HRS (High Resolution Stereo) Scientific Assessment Program the French space agency CNES delivered data sets from the HRS camera system with high precision ancillary data. Two test data sets from this program were evaluated: one is located in Germany, the other in Spain. The first goal was to derive orthoimages and digital surface models (DSM) from the along track stereo data by applying the rigorous model with direct georeferencing and without ground control points (GCPs). For the derivation of DSM, the stereo processing software developed at DLR for the MOMS-2P three-line stereo camera was used. As a first step, the interior and exterior orientation of the camera, delivered as ancillary data from positioning and attitude systems, were extracted. A dense image matching, using nearly all pixels as kernel centers, provided the parallaxes. The quality of the stereo tie points was controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection led to points in object space, which were subsequently interpolated to a DSM in a regular grid. DEM filtering methods were also applied and evaluations carried out differentiating between accuracies in forest and other areas. Additionally, orthoimages were generated from the images of the two stereo looking directions. The orthoimage and DSM accuracy was determined by using GCPs and available reference DEMs of superior accuracy (DEM derived from laser data and/or classical airborne photogrammetry). As expected, the results obtained without using GCPs showed a bias in the order of 5-20 m to the reference data for all three coordinates. By image matching it could be shown that the two independently derived orthoimages exhibit a very constant shift behavior. In a second step, a few GCPs (3-4) were used to calculate boresight alignment angles, introduced into the direct georeferencing process of each image independently. This method improved the absolute

  17. Real time hybrid simulation with online model updating: An analysis of accuracy

    NASA Astrophysics Data System (ADS)

    Ou, Ge; Dyke, Shirley J.; Prakash, Arun

    2017-02-01

    In conventional hybrid simulation (HS) and real time hybrid simulation (RTHS) applications, the information exchanged between the experimental substructure and numerical substructure is typically restricted to the interface boundary conditions (force, displacement, acceleration, etc.). With additional demands being placed on RTHS and recent advances in recursive system identification techniques, an opportunity arises to improve the fidelity by extracting information from the experimental substructure. Online model updating algorithms enable the numerical model of components (herein named the target model) that are similar to the physical specimen to be modified accordingly. This manuscript demonstrates the power of integrating a model updating algorithm into RTHS (RTHSMU) and explores the possible challenges of this approach through a practical simulation. Two Bouc-Wen models with varying levels of complexity are used as target models to validate the concept and evaluate the performance of this approach. The constrained unscented Kalman filter (CUKF) is selected for use in the model updating algorithm. The accuracy of RTHSMU is evaluated through an estimation output error indicator, a model updating output error indicator, and a system identification error indicator. The results illustrate that, under applicable constraints, by integrating model updating into RTHS, the global response accuracy can be improved when the target model is unknown. A discussion on model updating parameter sensitivity to updating accuracy is also presented to provide guidance for potential users.

  18. Fiber-optical sensor with miniaturized probe head and nanometer accuracy based on spatially modulated low-coherence interferogram analysis.

    PubMed

    Depiereux, Frank; Lehmann, Peter; Pfeifer, Tilo; Schmitt, Robert

    2007-06-10

    Fiber-optical sensors have some crucial advantages compared with rigid optical systems. They allow miniaturization and flexibility of system setups. Nevertheless, optical principles such as low-coherence interferometry can be realized by use of fiber optics. We developed and realized an approach for a fiber-optical sensor, which is based on the analysis of spatially modulated low-coherence interferograms. The system presented consists of three units, a miniaturized sensing probe, a broadband fiber-coupled light source, and an adapted Michelson interferometer, which is used as an optical receiver. Furthermore, the signal processing procedure, which was developed for the interferogram analysis in order to achieve nanometer measurement accuracy, is discussed. A system prototype has been validated thoroughly in different experiments. The results approve the accuracy of the sensor.

  19. Influence of Spatial Resolution in Three-dimensional Cine Phase Contrast Magnetic Resonance Imaging on the Accuracy of Hemodynamic Analysis.

    PubMed

    Fukuyama, Atsushi; Isoda, Haruo; Morita, Kento; Mori, Marika; Watanabe, Tomoya; Ishiguro, Kenta; Komori, Yoshiaki; Kosugi, Takafumi

    2017-10-10

    We aim to elucidate the effect of spatial resolution of three-dimensional cine phase contrast magnetic resonance (3D cine PC MR) imaging on the accuracy of the blood flow analysis, and examine the optimal setting for spatial resolution using flow phantoms. The flow phantom has five types of acrylic pipes that represent human blood vessels (inner diameters: 15, 12, 9, 6, and 3 mm). The pipes were fixed with 1% agarose containing 0.025 mol/L gadolinium contrast agent. A blood-mimicking fluid with human blood property values was circulated through the pipes at a steady flow. Magnetic resonance (MR) images (three-directional phase images with speed information and magnitude images for information of shape) were acquired using the 3-Tesla MR system and receiving coil. Temporal changes in spatially-averaged velocity and maximum velocity were calculated using hemodynamic analysis software. We calculated the error rates of the flow velocities based on the volume flow rates measured with a flowmeter and examined measurement accuracy. When the acrylic pipe was the size of the thoracicoabdominal or cervical artery and the ratio of pixel size for the pipe was set at 30% or lower, spatially-averaged velocity measurements were highly accurate. When the pixel size ratio was set at 10% or lower, maximum velocity could be measured with high accuracy. It was difficult to accurately measure maximum velocity of the 3-mm pipe, which was the size of an intracranial major artery, but the error for spatially-averaged velocity was 20% or less. Flow velocity measurement accuracy of 3D cine PC MR imaging for pipes with inner sizes equivalent to vessels in the cervical and thoracicoabdominal arteries is good. The flow velocity accuracy for the pipe with a 3-mm-diameter that is equivalent to major intracranial arteries is poor for maximum velocity, but it is relatively good for spatially-averaged velocity.

  20. Multivariate meta-analysis with an increasing number of parameters.

    PubMed

    Boca, Simina M; Pfeiffer, Ruth M; Sampson, Joshua N

    2017-02-14

    Meta-analysis can average estimates of multiple parameters, such as a treatment's effect on multiple outcomes, across studies. Univariate meta-analysis (UVMA) considers each parameter individually, while multivariate meta-analysis (MVMA) considers the parameters jointly and accounts for the correlation between their estimates. The performance of MVMA and UVMA has been extensively compared in scenarios with two parameters. Our objective is to compare the performance of MVMA and UVMA as the number of parameters, p, increases. Specifically, we show that (i) for fixed-effect (FE) meta-analysis, the benefit from using MVMA can substantially increase as p increases; (ii) for random effects (RE) meta-analysis, the benefit from MVMA can increase as p increases, but the potential improvement is modest in the presence of high between-study variability and the actual improvement is further reduced by the need to estimate an increasingly large between study covariance matrix; and (iii) when there is little to no between-study variability, the loss of efficiency due to choosing RE MVMA over FE MVMA increases as p increases. We demonstrate these three features through theory, simulation, and a meta-analysis of risk factors for non-Hodgkin lymphoma.
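
    A compact way to see the difference between UVMA and fixed-effect MVMA is generalized least squares pooling with the full within-study covariance versus inverse-variance pooling of each parameter alone. The sketch below uses two toy studies whose within-study correlations are deliberately chosen with opposite signs so the MVMA gain is visible; it covers only the fixed-effect case, not the random-effects estimators discussed in the paper.

```python
import numpy as np

def fe_mvma(estimates, covariances):
    """Fixed-effect multivariate meta-analysis of p parameters by generalized least squares."""
    precisions = [np.linalg.inv(S) for S in covariances]
    pooled_cov = np.linalg.inv(sum(precisions))
    pooled = pooled_cov @ sum(P @ np.asarray(y, float) for P, y in zip(precisions, estimates))
    return pooled, pooled_cov

# two parameters per study; within-study correlations differ in sign (illustrative values only)
ests = [[0.30, 0.10], [0.45, 0.25]]
covs = [np.array([[0.04, 0.036], [0.036, 0.04]]),
        np.array([[0.04, -0.036], [-0.036, 0.04]])]

pooled, pooled_cov = fe_mvma(ests, covs)
uvma_var = 1.0 / sum(1.0 / S[0, 0] for S in covs)   # UVMA ignores the off-diagonal terms
print("pooled estimates:", pooled.round(3))
print("variance of parameter 1: MVMA", round(pooled_cov[0, 0], 4), "vs UVMA", round(uvma_var, 4))
```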

  1. An evaluation of the accuracy and speed of metagenome analysis tools

    PubMed Central

    Lindgreen, Stinus; Adair, Karen L.; Gardner, Paul P.

    2016-01-01

    Metagenome studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments from terrestrial and aquatic ecosystems to human skin and gut. With the advent of high-throughput sequencing platforms, the use of large scale shotgun sequencing approaches is now commonplace. However, a thorough independent benchmark comparing state-of-the-art metagenome analysis tools is lacking. Here, we present a benchmark where the most widely used tools are tested on complex, realistic data sets. Our results clearly show that the most widely used tools are not necessarily the most accurate, that the most accurate tool is not necessarily the most time consuming, and that there is a high degree of variability between available tools. These findings are important as the conclusions of any metagenomics study are affected by errors in the predicted community composition and functional capacity. Data sets and results are freely available from http://www.ucbioinformatics.org/metabenchmark.html PMID:26778510

  2. Administration of gonadotropin-releasing hormone agonist on Day 5 increases luteal blood flow and improves pregnancy prediction accuracy on Day 14 in recipient Holstein cows

    PubMed Central

    KANAZAWA, Tomomi; SEKI, Motohide; ISHIYAMA, Keiki; ARASEKI, Masao; IZAIKE, Yoshiaki; TAKAHASHI, Toru

    2017-01-01

    This study assessed the effects of gonadotropin-releasing hormone (GnRH) treatment on Day 5 (Day 0 = estrus) on luteal blood flow and accuracy of pregnancy prediction in recipient cows. On Day 5, 120 lactating Holstein cows were randomly assigned to a control group (n = 63) or GnRH group treated with 100 μg of GnRH agonist (n = 57). On Days 3, 5, 7, and 14, each cow underwent ultrasound examination to measure the blood flow area (BFA) and time-averaged maximum velocity (TAMV) at the spiral arteries at the base of the corpus luteum using color Doppler ultrasonography. Cows with a corpus luteum diameter ≥ 20 mm (n = 120) received embryo transfers on Day 7. The BFA values in the GnRH group were significantly higher than those in the control group on Days 7 and 14. TAMV did not differ between these groups. According to receiver operating characteristic analyses to predict pregnancy, a BFA cutoff of 0.52 cm² yielded the highest sensitivity (83.3%) and specificity (90.5%) on Day 7, and BFA and TAMV values of 0.94 cm² and 44.93 cm/s, respectively, yielded the highest sensitivity (97.1%) and specificity (100%) on Day 14 in the GnRH group. The areas under the curve for the paired BFA and TAMV in the GnRH group were 0.058 higher than those in the control group (0.996 and 0.938, respectively; P < 0.05). In conclusion, GnRH treatment on Day 5 increased the luteal BFA in recipient cows on Days 7 and 14, and improved the accuracy of pregnancy prediction on Day 14. PMID:28552886
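
    The cutoffs reported above come from receiver operating characteristic analyses; the sketch below shows one common way such a cutoff is chosen, namely the threshold that maximizes sensitivity + specificity - 1 (the Youden index). The blood flow area values and pregnancy outcomes are hypothetical placeholders, not the study data.

      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      # Hypothetical Day-7 blood flow areas (cm^2) and outcomes (1 = pregnant).
      bfa = np.array([0.35, 0.48, 0.55, 0.60, 0.44, 0.70, 0.51, 0.39, 0.66, 0.58])
      pregnant = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 1])

      fpr, tpr, thresholds = roc_curve(pregnant, bfa)
      youden = tpr - fpr                       # sensitivity + specificity - 1
      best = int(np.argmax(youden))

      print("AUC:", roc_auc_score(pregnant, bfa))
      print("cutoff:", thresholds[best], "sens:", tpr[best], "spec:", 1 - fpr[best])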

  3. Administration of gonadotropin-releasing hormone agonist on Day 5 increases luteal blood flow and improves pregnancy prediction accuracy on Day 14 in recipient Holstein cows.

    PubMed

    Kanazawa, Tomomi; Seki, Motohide; Ishiyama, Keiki; Araseki, Masao; Izaike, Yoshiaki; Takahashi, Toru

    2017-08-19

    This study assessed the effects of gonadotropin-releasing hormone (GnRH) treatment on Day 5 (Day 0 = estrus) on luteal blood flow and accuracy of pregnancy prediction in recipient cows. On Day 5, 120 lactating Holstein cows were randomly assigned to a control group (n = 63) or GnRH group treated with 100 μg of GnRH agonist (n = 57). On Days 3, 5, 7, and 14, each cow underwent ultrasound examination to measure the blood flow area (BFA) and time-averaged maximum velocity (TAMV) at the spiral arteries at the base of the corpus luteum using color Doppler ultrasonography. Cows with a corpus luteum diameter ≥ 20 mm (n = 120) received embryo transfers on Day 7. The BFA values in the GnRH group were significantly higher than those in the control group on Days 7 and 14. TAMV did not differ between these groups. According to receiver operating characteristic analyses to predict pregnancy, a BFA cutoff of 0.52 cm² yielded the highest sensitivity (83.3%) and specificity (90.5%) on Day 7, and BFA and TAMV values of 0.94 cm² and 44.93 cm/s, respectively, yielded the highest sensitivity (97.1%) and specificity (100%) on Day 14 in the GnRH group. The areas under the curve for the paired BFA and TAMV in the GnRH group were 0.058 higher than those in the control group (0.996 and 0.938, respectively; P < 0.05). In conclusion, GnRH treatment on Day 5 increased the luteal BFA in recipient cows on Days 7 and 14, and improved the accuracy of pregnancy prediction on Day 14.

  4. Spectrophotometric analysis of color changes in teeth incinerated at increasing temperatures.

    PubMed

    Rubio, Leticia; Sioli, Jose Manuel; Suarez, Juan; Gaitan, Maria Jesus; Martin-de-las-Heras, Stella

    2015-07-01

    Color changes produced by histological alterations in burned teeth can provide conclusive forensic information on the temperature of exposure. The objective was to correlate heat-induced color changes in incinerated teeth with increases in temperature (to 1200°C). Spectrophotometry was used to measure lightness, chromaticity (a* and b*), whiteness, and yellowness in 80 teeth heated at temperatures of 100, 200, 400, 600, 800, 1000, or 1200°C for 60 min. Chromaticity a* was reduced at 100°C and lightness at 200 and 400°C, while chromaticity b* and yellowness were reduced at 400 and 600°C. Higher temperatures (800, 1000, and 1200°C) produced progressive increases in lightness and whiteness but reductions in chromaticity b* and yellowness. The accuracy of color values to determine the temperature of exposure was determined by Receiver Operating Characteristic analysis. High accuracy was shown by lightness, chromaticity b* and yellowness values for temperatures between 800° and 1200°C, by whiteness for temperatures of 1000° and 1200°C, and by lightness for temperatures of 200° and 400°C, with sensitivity and specificity values ranging from 90% to 100%. According to these results, colorimetric analysis of incinerated teeth can be used to estimate the temperature of exposure with high accuracy, with lightness being the most useful variable.

  5. [Accuracy analysis of computer tomography imaging for medical modeling purposes on the example of Siemens Sensation 10 scanner].

    PubMed

    Miechowicz, Sławomir; Urbanik, Andrzej; Chrzan, Robert; Grochowska, Anna

    2010-01-01

    A medical model is a material model of a human body part, used for better visualization or surgery planning. It may be produced by a Rapid Prototyping method, based on data obtained during medical imaging (computed tomography, CT; magnetic resonance, MR). An important problem is to ensure the proper spatial accuracy of the model, which is influenced by the imaging accuracy of CT and MR scanners. The aim of the study is the accuracy analysis of CT imaging for medical modeling purposes, using the Siemens Sensation 10 scanner as an example. A physical pattern (a phantom in the form of a grating) was produced using a stereolithography technique. The phantom was measured with a Leitz PMM 12106 Coordinate Measuring Machine to account for production process inaccuracy, and then examined using the Siemens Sensation 10 CT scanner. The phantom measurement error distribution was determined from the data obtained. The maximal measurement error was ±0.87 mm when both phantom production inaccuracy and CT imaging inaccuracy were considered, and did not exceed 0.28 mm for CT imaging inaccuracy alone. The CT acquisition process is itself a source of measurement errors, so to ensure high quality of medical models produced by Rapid Prototyping methods, it is necessary to perform accuracy measurements for every CT scanner used to obtain data serving as the basis for model production.

  6. Analysis of high accuracy, quantitative proteomics data in the MaxQB database.

    PubMed

    Schaab, Christoph; Geiger, Tamar; Stoehr, Gabriele; Cox, Juergen; Mann, Matthias

    2012-03-01

    MS-based proteomics generates rapidly increasing amounts of precise and quantitative information. Analysis of individual proteomic experiments has made great strides, but the crucial ability to compare and store information across different proteome measurements still presents many challenges. For example, it has been difficult to avoid contamination of databases with low quality peptide identifications, to control for the inflation in false positive identifications when combining data sets, and to integrate quantitative data. Although, for example, the contamination with low quality identifications has been addressed by joint analysis of deposited raw data in some public repositories, we reasoned that there should be a role for a database specifically designed for high resolution and quantitative data. Here we describe a novel database termed MaxQB that stores and displays collections of large proteomics projects and allows joint analysis and comparison. We demonstrate the analysis tools of MaxQB using proteome data of 11 different human cell lines and 28 mouse tissues. The database-wide false discovery rate is controlled by adjusting the project specific cutoff scores for the combined data sets. The 11 cell line proteomes together identify proteins expressed from more than half of all human genes. For each protein of interest, expression levels estimated by label-free quantification can be visualized across the cell lines. Similarly, the expression rank order and estimated amount of each protein within each proteome are plotted. We used MaxQB to calculate the signal reproducibility of the detected peptides for the same proteins across different proteomes. Spearman rank correlation between peptide intensity and detection probability of identified proteins was greater than 0.8 for 64% of the proteome, whereas a minority of proteins have negative correlation. This information can be used to pinpoint false protein identifications, independently of peptide database

  7. Accuracy Analysis of Anisotropic Yield Functions based on the Root-Mean Square Error

    SciTech Connect

    Huh, Hoon; Lou, Yanshan; Bae, Gihyun; Lee, Changsoo

    2010-06-15

    This paper evaluates the accuracy of popular anisotropic yield functions based on the root-mean square error (RMSE) of the yield stresses and the R-values. The yield functions include the Hill48, Yld89, Yld91, Yld96, Yld2000-2d, BBC2000 and Yld2000-18p yield criteria. Two kinds of steel and five kinds of aluminum alloy are selected for the accuracy evaluation. The anisotropic coefficients in the yield functions are computed from the experimental data. After the error functions are constructed, the downhill simplex method is utilized for parameter evaluation of each yield function except the Hill48 and Yld89 yield functions. The yield stresses and the R-values at every 15 deg. from the rolling direction (RD), and the yield stress and R-value at the equibiaxial tension condition, are predicted from each yield function. The predicted yield stresses and R-values are then compared with the experimental data. The root-mean square errors (RMSE) are computed to quantitatively evaluate each yield function. The RMSEs are calculated for the yield stresses and the R-values separately because the differences in the yield stresses are much smaller than the differences in the R-values. The RMSEs of the different yield functions are compared for each material. The Hill48 and Yld89 yield functions are the worst choices for describing the yield stress anisotropy, while the Yld91 yield function is the poorest choice for modeling the R-value directionality. The Yld2000-2d and BBC2000 yield functions have the same accuracy in modeling both the yield stress anisotropy and the R-value anisotropy. The best choice is the Yld2000-18p yield function, which accurately describes the yield stress and R-value directionalities of sheet metals.
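
    A minimal sketch of the error measure described above, with placeholder numbers rather than the paper's measurements: the RMSEs of the predicted yield stresses and R-values (at 15-degree increments from the rolling direction plus the equibiaxial point) are computed separately, since the normalized yield stresses vary far less than the R-values.

      import numpy as np

      def rmse(predicted, experimental):
          """Root-mean-square error between model predictions and experiments."""
          p, e = np.asarray(predicted, float), np.asarray(experimental, float)
          return float(np.sqrt(np.mean((p - e) ** 2)))

      # Normalized yield stresses and R-values at 0, 15, ..., 90 deg from RD,
      # followed by the equibiaxial point (illustrative values only).
      stress_exp  = [1.00, 0.99, 0.98, 0.97, 0.98, 0.99, 1.00, 1.01]
      stress_pred = [1.00, 0.98, 0.97, 0.97, 0.99, 1.00, 1.01, 1.00]
      r_exp  = [1.80, 1.60, 1.45, 1.50, 1.70, 2.00, 2.20, 0.95]
      r_pred = [1.75, 1.55, 1.50, 1.55, 1.65, 1.90, 2.30, 1.00]

      print("RMSE, yield stress:", rmse(stress_pred, stress_exp))
      print("RMSE, R-value:     ", rmse(r_pred, r_exp))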

  8. Analysis and enhancement of 3D shape accuracy in a single-shot LIDAR sensor

    NASA Astrophysics Data System (ADS)

    Han, Munhyun; Choi, Gudong; Song, Minhyup; Seo, Hongseok; Mheen, Bongki

    2017-02-01

    Timing jitter is of prime importance to accuracy in the widespread use of Light Detection and Ranging (LiDAR) technology for real-time, high-resolution three-dimensional (3D) image sensors, especially for the detection of relatively small objects in applications such as fully automated car navigation and military surveillance. To assess the timing accuracy, that is, the accuracy of the distance or three-dimensional shape, the standard deviation method can be used in Time-of-Flight (ToF) LiDAR technology. While most timing jitter analyses are based on a fiber network or open space at relatively short range, more accurate analyses under the long-range free-space conditions of a 3D image sensor are required for extended LiDAR-related applications. In this paper, utilizing a Single-Shot LiDAR System (SSLs) model with a 400 MHz wideband InGaAs Avalanche Photodiode and a 1550 nm MOPA fiber laser with a 2 ns full width at half maximum pulse, we analyzed the timing jitter of the implemented SSLs to characterize the measurement results. Additionally, we report the enhanced resolution and precision obtained for the given SSLs by applying spline interpolation to the measured results and by multiple-shot averaging (MSA). Finally, by adapting the proposed method to an implemented high-resolution 3D LiDAR prototype, called the STUD LiDAR prototype, which can be understood as a kind of SSLs because it has a single source and a single detector, we observed and analyzed the 3D resolution enhancement.

  9. Using a Structural Root System Model to Evaluate and Improve the Accuracy of Root Image Analysis Pipelines.

    PubMed

    Lobet, Guillaume; Koevoets, Iko T; Noll, Manuel; Meyer, Patrick E; Tocquin, Pierre; Pagès, Loïc; Périlleux, Claire

    2017-01-01

    Root system analysis is a complex task, often performed with fully automated image analysis pipelines. However, the outcome is rarely verified against ground-truth data, which might lead to underestimated biases. We have used a root model, ArchiSimple, to create a large and diverse library of ground-truth root system images (10,000 images). For each image, three levels of noise were created. This library was used to evaluate the accuracy and usefulness of several image descriptors classically used in root image analysis software. Our analysis highlighted that the accuracy of the different traits is strongly dependent on the quality of the images and the type, size, and complexity of the root systems analyzed. Our study also demonstrated that machine learning algorithms can be trained on a synthetic library to improve the estimation of several root system traits. Overall, our analysis is a call to caution when using automatic root image analysis tools. If a thorough calibration is not performed on the dataset of interest, unexpected errors might arise, especially for large and complex root images. To facilitate such calibration, both the image library and the different codes used in the study have been made available to the community.

  10. The accuracy of presepsin (sCD14-ST) for the diagnosis of sepsis in adults: a meta-analysis.

    PubMed

    Zhang, Xin; Liu, Dan; Liu, You-Ning; Wang, Rui; Xie, Li-Xin

    2015-09-11

    The early diagnosis of sepsis remains a challenge. Recently, soluble cluster of differentiation 14 subtype (sCD14-ST), also known as presepsin, has been identified as a potential biomarker of sepsis. We performed a meta-analysis to assess the diagnostic accuracy of presepsin for sepsis in patients with systemic inflammation. We systematically searched the PubMed, Embase, Web of Knowledge and Cochrane databases. Studies were included if they assessed the diagnostic accuracy of presepsin for sepsis in adult patients with systemic inflammatory response syndrome (SIRS). Furthermore, a 2 × 2 contingency table was constructed based on these results. Two authors independently judged the studies and extracted the data. The diagnostic accuracy of presepsin in sepsis was calculated using a bivariate meta-analysis model. The Q-test and I² index were used to test the heterogeneity. Eight studies involving a total of 1,815 patients were included in the present study. The pooled sensitivity, specificity, diagnostic odds ratio, positive likelihood ratio and negative likelihood ratio were 0.86 (95% CI: 0.79-0.91), 0.78 (95% CI: 0.68-0.85), 22 (95% CI: 10-48), 3.8 (95% CI: 2.6-5.7), and 0.18 (95% CI: 0.11-0.28), respectively. The area under the summary receiver operator characteristic curve was 0.89 (95% CI: 0.86-0.92). Meta-regression analysis revealed that consecutive patient selection, sample size and setting significantly accounted for the heterogeneity of sensitivity. Our findings suggest that presepsin exhibits very good diagnostic accuracy (AUC = 0.89) for the diagnosis of sepsis. Nevertheless, an overall assessment of all the clinical indexes for sepsis diagnosis and continual re-evaluation of presepsin during the course of the disease are needed.
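
    The per-study quantities behind these pooled estimates all derive from a 2 × 2 contingency table; below is a minimal sketch with illustrative counts (the actual pooling used a bivariate random-effects model, which is not reproduced here).

      def diagnostic_metrics(tp, fp, fn, tn):
          """Summary accuracy measures from a single 2 x 2 contingency table."""
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          lr_pos = sens / (1.0 - spec)
          lr_neg = (1.0 - sens) / spec
          return {"sensitivity": sens, "specificity": spec,
                  "LR+": lr_pos, "LR-": lr_neg, "DOR": lr_pos / lr_neg}

      # Illustrative single study: 86 TP, 22 FP, 14 FN, 78 TN.
      print(diagnostic_metrics(86, 22, 14, 78))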

  11. Meta-analysis of the accuracy of tools used for binary classification when the primary studies employ different references.

    PubMed

    Botella, Juan; Huang, Huiling; Suero, Manuel

    2015-09-01

    The quality of tools used in binary classification is evaluated by studies that assess the accuracy of the classification. The empirical evidence is summarized in 2 × 2 contingency tables. These provide the joint frequencies between the true status of a sample and the classification made by the test. The accuracy of the test is better estimated in a meta-analysis that synthesizes the results of a set of primary studies. The true status is determined by a reference that ideally is a gold standard, which means that it is error free. However, in psychology, it is rare that all the primary studies have employed the same reference, and often they have used an imperfect reference with suboptimal accuracy instead of an actual gold standard. An imperfect reference biases both the estimates of the accuracy of the test and the empirical prevalence of the target status in the primary studies. We discuss several strategies for meta-analysis when different references are employed. Special attention is paid to the simplest case, where the meta-analyst has 1 group of primary studies using a reference that can be considered a gold standard and a 2nd group of primary studies using an imperfect reference. A procedure is recommended in which the frequencies from the primary studies with the imperfect reference are corrected prior to the meta-analysis itself. Then, a hierarchical meta-analytic model is fitted. An example with actual data from SCOFF (Sick-Control-One-Fat-Food; Hill, Reid, Morgan, & Lacey, 2010; Morgan, Reid, & Lacey, 1999), a simple but efficient test for detecting eating disorders, is described.

  12. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    NASA Astrophysics Data System (ADS)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and polar motion (PM). In this study, a detailed comparison was made between real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method, and simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The obtained results show that the proposed method provides better accuracy at different prediction lengths.

  13. Automation, Operation, and Data Analysis in the Cryogenic, High Accuracy, Refraction Measuring System (CHARMS)

    NASA Technical Reports Server (NTRS)

    Frey, Bradley; Leviton, Douglas

    2005-01-01

    The Cryogenic High Accuracy Refraction Measuring System (CHARMS) at NASA's Goddard Space Flight Center has been enhanced in a number of ways in the last year to allow the system to accurately collect refracted beam deviation readings automatically over a range of temperatures from 15 K to well beyond room temperature with high sampling density in both wavelength and temperature. The engineering details which make this possible are presented. The methods by which the most accurate angular measurements are made and the corresponding data reduction methods used to reduce thousands of observed angles to a handful of refractive index values are also discussed.
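
    The abstract refers to reducing thousands of observed deviation angles to a handful of refractive index values. As a hedged illustration only (the paper's own reduction formulas are not spelled out here), a classical minimum-deviation prism measurement converts a prism apex angle and a measured minimum deviation angle into an index; averaging many observed angles before applying the relation is the natural way to exploit a large number of readings.

      import math

      def index_from_min_deviation(apex_deg, deviation_deg):
          """Refractive index from the classical minimum-deviation relation
          n = sin((A + D_min) / 2) / sin(A / 2) for a prism of apex angle A."""
          a = math.radians(apex_deg)
          d = math.radians(deviation_deg)
          return math.sin((a + d) / 2.0) / math.sin(a / 2.0)

      # Illustrative numbers: a 30-degree prism with a 15.7-degree minimum
      # deviation corresponds to an index of roughly 1.50.
      print(index_from_min_deviation(30.0, 15.7))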

  14. Automation, Operation, and Data Analysis in the Cryogenic, High Accuracy, Refraction Measuring System (CHARMS)

    NASA Technical Reports Server (NTRS)

    Frey, Bradley J.; Leviton, Douglas B.

    2005-01-01

    The Cryogenic High Accuracy Refraction Measuring System (CHARMS) at NASA's Goddard Space Flight Center has been enhanced in a number of ways in the last year to allow the system to accurately collect refracted beam deviation readings automatically over a range of temperatures from 15 K to well beyond room temperature with high sampling density in both wavelength and temperature. The engineering details which make this possible are presented. The methods by which the most accurate angular measurements are made and the corresponding data reduction methods used to reduce thousands of observed angles to a handful of refractive index values are also discussed.

  15. Analysis Article: Accuracy of the DIDGET Glucose Meter in Children and Young Adults with Diabetes

    PubMed Central

    Kim, Sarah

    2011-01-01

    Diabetes is one of the most common chronic diseases among American children. Although studies show that intensive management, including frequent glucose testing, improves diabetes control, this is difficult to accomplish. Bayer's DIDGET® glucose meter system pairs with a popular handheld video game system and couples good blood glucose testing habits with video-game-based rewards. In this issue, Deeb and colleagues performed a study demonstrating the accuracy of the DIDGET meter, a critical asset to this novel product designed to alleviate some of the challenges of managing pediatric diabetes. PMID:22027311

  16. Accuracy of Fecal Immunochemical Tests for Colorectal Cancer: Systematic Review and Meta-analysis

    PubMed Central

    Lee, Jeffrey K.; Liles, Elizabeth G.; Bent, Stephen; Levin, Theodore R.; Corley, Douglas A.

    2014-01-01

    Background: Performance characteristics of fecal immunochemical tests (FITs) to screen for colorectal cancer (CRC) have been inconsistent. Purpose: To synthesize data about the diagnostic accuracy of FITs for CRC and identify factors affecting its performance characteristics. Data Sources: Online databases, including MEDLINE and EMBASE, and bibliographies of included studies from 1996 to 2013. Study Selection: All studies evaluating the diagnostic accuracy of FITs for CRC in asymptomatic, average-risk adults. Data Extraction: Two reviewers independently extracted data and critiqued study quality. Data Synthesis: Nineteen eligible studies were included and meta-analyzed. The pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio of FITs for CRC were 0.79 (95% CI, 0.69 to 0.86), 0.94 (CI, 0.92 to 0.95), 13.10 (CI, 10.49 to 16.35), and 0.23 (CI, 0.15 to 0.33), respectively, with an overall diagnostic accuracy of 95% (CI, 93% to 97%). There was substantial heterogeneity between studies in both the pooled sensitivity and specificity estimates. Stratifying by cutoff value for a positive test result or removal of discontinued FIT brands resulted in homogeneous sensitivity estimates. Sensitivity for CRC improved with lower assay cutoff values for a positive test result (for example, 0.89 [CI, 0.80 to 0.95] at a cutoff value less than 20 μg/g vs. 0.70 [CI, 0.55 to 0.81] at cutoff values of 20 to 50 μg/g) but with a corresponding decrease in specificity. A single-sample FIT had similar sensitivity and specificity as several samples, independent of FIT brand. Limitations: Only English-language articles were included. Lack of data prevented complete subgroup analyses by FIT brand. Conclusion: Fecal immunochemical tests are moderately sensitive, are highly specific, and have high overall diagnostic accuracy for detecting CRC. Diagnostic performance of FITs depends on the cutoff value for a positive test result. Primary Funding Source: National Institute

  17. Automation, Operation, and Data Analysis in the Cryogenic, High Accuracy, Refraction Measuring System (CHARMS)

    NASA Technical Reports Server (NTRS)

    Frey, Bradley; Leviton, Douglas

    2005-01-01

    The Cryogenic High Accuracy Refraction Measuring System (CHARMS) at NASA's Goddard Space Flight Center has been enhanced in a number of ways in the last year to allow the system to accurately collect refracted beam deviation readings automatically over a range of temperatures from 15 K to well beyond room temperature with high sampling density in both wavelength and temperature. The engineering details which make this possible are presented. The methods by which the most accurate angular measurements are made and the corresponding data reduction methods used to reduce thousands of observed angles to a handful of refractive index values are also discussed.

  18. Preliminary Analysis of Ground-Based Orbit Determination Accuracy for the Wide Field Infrared Survey Telescope (WFIRST)

    NASA Technical Reports Server (NTRS)

    Sease, Bradley; Myers, Jessica; Lorah, John; Webster, Cassandra

    2017-01-01

    The Wide Field Infrared Survey Telescope is a 2.4-meter telescope planned for launch to the Sun-Earth L2 point in 2026. This paper details a preliminary study of the achievable accuracy for WFIRST from ground-based orbit determination routines. The analysis here is divided into two segments. First, a linear covariance analysis of early mission and routine operations provides an estimate of the tracking schedule required to meet mission requirements. Second, a simulated operations scenario gives insight into the expected behavior of a daily Extended Kalman Filter orbit estimate over the first mission year given a variety of potential momentum unloading schemes.

  19. Preliminary Analysis of Ground-based Orbit Determination Accuracy for the Wide Field Infrared Survey Telescope (WFIRST)

    NASA Technical Reports Server (NTRS)

    Sease, Brad

    2017-01-01

    The Wide Field Infrared Survey Telescope is a 2.4-meter telescope planned for launch to the Sun-Earth L2 point in 2026. This paper details a preliminary study of the achievable accuracy for WFIRST from ground-based orbit determination routines. The analysis here is divided into two segments. First, a linear covariance analysis of early mission and routine operations provides an estimate of the tracking schedule required to meet mission requirements. Second, a simulated operations scenario gives insight into the expected behavior of a daily Extended Kalman Filter orbit estimate over the first mission year given a variety of potential momentum unloading schemes.

  20. Commercial test kits for detection of Lyme borreliosis: a meta-analysis of test accuracy

    PubMed Central

    Cook, Michael J; Puri, Basant K

    2016-01-01

    The clinical diagnosis of Lyme borreliosis can be supported by various test methodologies; test kits are available from many manufacturers. Literature searches were carried out to identify studies that reported characteristics of the test kits. Of 50 searched studies, 18 were included where the tests were commercially available and samples were proven to be positive using serology testing, evidence of an erythema migrans rash, and/or culture. Additional requirements were a test specificity of ≥85% and publication in the last 20 years. The weighted mean sensitivity for all tests and for all samples was 59.5%. Individual study means varied from 30.6% to 86.2%. Sensitivity for each test technology varied from 62.4% for Western blot kits, and 62.3% for enzyme-linked immunosorbent assay tests, to 53.9% for synthetic C6 peptide ELISA tests and 53.7% when the two-tier methodology was used. Test sensitivity increased as dissemination of the pathogen affected different organs; however, the absence of data on the time from infection to serological testing and the lack of standard definitions for “early” and “late” disease prevented analysis of test sensitivity versus time of infection. The lack of standardization of the definitions of disease stage and the possibility of retrospective selection bias prevented clear evaluation of test sensitivity by “stage”. The sensitivity for samples classified as acute disease was 35.4%, with a corresponding sensitivity of 64.5% for samples from patients defined as convalescent. Regression analysis demonstrated an improvement of 4% in test sensitivity over the 20-year study period. The studies did not provide data to indicate the sensitivity of tests used in a clinical setting since the effect of recent use of antibiotics or steroids or other factors affecting antibody response was not factored in. The tests were developed for only specific Borrelia species; sensitivities for other species could not be calculated. PMID

  1. [Accuracy analysis on a sort of polarized measurement in remote sensing].

    PubMed

    Chen, Li-gang; Hong, Jin; Qiao, Yan-li; Sun, Xiao-bing; Wang, Yuan-jun

    2008-10-01

    The angular error of the polarizer in polarimetric measurement is an important element affecting the measurement accuracy of the degree of polarization, so the angular error of the polarizer should be considered in high-accuracy quantitative polarization remote sensing. A simulation study shows that the polarimetric measurement error depends on the polarization state (polarization angle or degree of polarization) of the incident light in a specific polarization measurement system. In the measurement mode with polarizer settings (0 degrees, 60 degrees, 120 degrees), there is a maximum error of polarization measurement at the 0 degree or 180 degree polarization angle and a minimum error at the 30, 90, and 150 degree polarization angles; in the measurement mode with polarizer settings (0 degrees, 45 degrees, 90 degrees), there is a maximum error of polarization measurement near the 45 degree polarization angle and a minimum error at the 0, 90, and 135 degree polarization angles. A larger degree of polarization of the incident light generally contributes to a bigger measurement error, except for incident light at several particular polarization angles. The polarization measurement may therefore be evaluated by the average degree of polarization of linearly polarized light introduced in this paper. It is indicated that the measurement mode with polarizer settings (0 degrees, 60 degrees, 120 degrees) is better than that with (0 degrees, 45 degrees, 90 degrees).
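
    A minimal sketch of the (0 degrees, 60 degrees, 120 degrees) mode discussed above, with illustrative numbers: the three transmitted intensities determine the linear Stokes parameters, from which the degree of linear polarization follows, and perturbing one analyzer position shows how an angular error of the polarizer propagates into the measured degree of polarization.

      import numpy as np

      def intensity(theta_deg, s0, s1, s2):
          """Ideal transmitted intensity behind a linear polarizer at angle theta."""
          t = np.radians(theta_deg)
          return 0.5 * (s0 + s1 * np.cos(2 * t) + s2 * np.sin(2 * t))

      def stokes_from_0_60_120(i0, i60, i120):
          """Linear Stokes parameters from intensities at 0, 60 and 120 degrees."""
          s0 = 2.0 / 3.0 * (i0 + i60 + i120)
          s1 = 2.0 / 3.0 * (2.0 * i0 - i60 - i120)
          s2 = 2.0 / np.sqrt(3.0) * (i60 - i120)
          return s0, s1, s2

      def dolp(s0, s1, s2):
          """Degree of linear polarization."""
          return np.hypot(s1, s2) / s0

      # Incident light: degree of polarization 0.6, polarization angle 20 degrees.
      true = (1.0, 0.6 * np.cos(np.radians(40.0)), 0.6 * np.sin(np.radians(40.0)))

      for err in (0.0, 1.0):  # angular error (degrees) of the 0-degree polarizer
          i0 = intensity(0.0 + err, *true)
          i60, i120 = intensity(60.0, *true), intensity(120.0, *true)
          print(err, dolp(*stokes_from_0_60_120(i0, i60, i120)))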

  2. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.

    PubMed

    Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by a standard method for full crowns with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was vertically determined in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique.

  3. Accuracy of pathological diagnosis of mesothelioma cases in Japan: clinicopathological analysis of 382 cases.

    PubMed

    Takeshima, Yukio; Inai, Kouki; Amatya, Vishwa Jeet; Gemba, Kenichi; Aoe, Keisuke; Fujimoto, Nobukazu; Kato, Katsuya; Kishimoto, Takumi

    2009-11-01

    Incidences of mesothelioma are on the rise in Japan. However, the accurate frequency of mesothelioma occurrence is still unknown. The aim of this study is to clarify the accuracy of pathological diagnosis of mesothelioma. Among the 2742 mesothelioma death cases extracted from the document "Vital Statistics of Japan" for 2003-2005, pathological materials were obtained for 382 cases. After these materials were reviewed and immunohistochemical analyses were conducted, mesothelioma was diagnosed by discussions based on clinical and radiological information. Sixty-five cases (17.0%) were categorized as "definitely not/unlikely" mesotheliomas, and 273 cases (71.5%) were categorized as "probable/definite" mesotheliomas. The percentage of "probable/definite" pleural and peritoneal mesothelioma cases in males was 74.3% and 87.5%, respectively, and that of pleural cases in females was 59.2%; however, the percentage of "probable/definite" peritoneal cases in females was only 22.2%. These results suggest that the diagnostic accuracy of mesothelioma is relatively low in females and in cases of peritoneal and sarcomatoid subtype mesotheliomas; furthermore, approximately 15% of cases of deaths due to mesothelioma in Japan are diagnostically suspicious.

  4. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    PubMed Central

    Jamshidy, Ladan; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by a standard method for full crowns with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was vertically determined in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique. PMID:28003824

  5. Meta-analysis for diagnostic accuracy studies: a new statistical model using beta-binomial distributions and bivariate copulas.

    PubMed

    Kuss, Oliver; Hoyer, Annika; Solms, Alexander

    2014-01-15

    There are still challenges when meta-analyzing data from studies on diagnostic accuracy. This is mainly due to the bivariate nature of the response, where information on sensitivity and specificity must be summarized while accounting for their correlation within a single trial. In this paper, we propose a new statistical model for the meta-analysis of diagnostic accuracy studies. This model uses beta-binomial distributions for the marginal numbers of true positives and true negatives and links these margins by a bivariate copula distribution. The new model comes with all the features of the current standard model, a bivariate logistic regression model with random effects, but has the additional advantages of a closed likelihood function and a larger flexibility for the correlation structure of sensitivity and specificity. In a simulation study, which compares three copula models and two implementations of the standard model, the Plackett and the Gauss copula rarely perform worse but frequently perform better than the standard model. We use an example from a meta-analysis to judge the diagnostic accuracy of telomerase (a urinary tumor marker) for the diagnosis of primary bladder cancer for illustration.

  6. The accuracy comparison between ARFIMA and singular spectrum analysis for forecasting the sales volume of motorcycle in Indonesia

    NASA Astrophysics Data System (ADS)

    Sitohang, Yosep Oktavianus; Darmawan, Gumgum

    2017-08-01

    This research attempts to compare two forecasting models in time series analysis for predicting the sales volume of motorcycles in Indonesia. The first forecasting model used in this paper is Autoregressive Fractionally Integrated Moving Average (ARFIMA). ARFIMA can handle non-stationary data and has better forecasting accuracy than ARIMA on long memory data. This is because the fractional difference parameter can explain correlation structure in data that has short memory, long memory, and even both structures simultaneously. The second forecasting model is singular spectrum analysis (SSA). The advantage of the technique is that it is able to decompose time series data into the classic components, i.e. trend, cyclical, seasonal and noise components. This makes the forecasting accuracy of this technique significantly better. Furthermore, SSA is a model-free technique, so it is likely to have a very wide range of application. Selection of the best model is based on the lowest MAPE value. Based on the calculation, the best ARFIMA model is ARFIMA(3, d = 0.63, 0), with a MAPE value of 22.95 percent. For SSA, with a window length of 53 and 4 groups of reconstructed data, the resulting MAPE value is 13.57 percent. Based on these results, it is concluded that SSA produces better forecasting accuracy.
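
    Model selection above rests on the mean absolute percentage error; below is a minimal sketch of that criterion, with placeholder monthly sales figures and forecasts rather than the actual data.

      import numpy as np

      def mape(actual, forecast):
          """Mean absolute percentage error, in percent."""
          a = np.asarray(actual, dtype=float)
          f = np.asarray(forecast, dtype=float)
          return float(np.mean(np.abs((a - f) / a)) * 100.0)

      # Placeholder monthly sales (thousands of units) and two competing forecasts.
      actual          = [520, 560, 610, 580, 640, 700]
      forecast_arfima = [480, 590, 560, 610, 700, 630]
      forecast_ssa    = [505, 545, 630, 600, 620, 680]

      print("ARFIMA MAPE:", mape(actual, forecast_arfima))
      print("SSA MAPE:   ", mape(actual, forecast_ssa))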

  7. Systematic Review and Meta-Analysis of Studies Evaluating Diagnostic Test Accuracy: A Practical Review for Clinical Researchers-Part I. General Guidance and Tips

    PubMed Central

    Kim, Kyung Won; Lee, Juneyoung; Choi, Sang Hyun; Huh, Jimi

    2015-01-01

    In the field of diagnostic test accuracy (DTA), the use of systematic review and meta-analyses is steadily increasing. By means of objective evaluation of all available primary studies, these two processes generate an evidence-based systematic summary regarding a specific research topic. The methodology for systematic review and meta-analysis in DTA studies differs from that in therapeutic/interventional studies, and its content is still evolving. Here we review the overall process from a practical standpoint, which may serve as a reference for those who implement these methods. PMID:26576106

  8. An analysis of the accuracy and cost-effectiveness of a cropland inventory utilizing remote sensing techniques

    NASA Technical Reports Server (NTRS)

    Jensen, J. R.; Tinney, L. R.; Estes, J. E.

    1975-01-01

    Cropland inventories utilizing high altitude and Landsat imagery were conducted in Kern County, California. It was found that in terms of the overall mean relative and absolute inventory accuracies, a Landsat multidate analysis yielded the most optimum results, i.e., 98% accuracy. The 1:125,000 CIR high altitude inventory is a serious alternative which can be very accurate (97% or more) if imagery is available for a specific study area. The operational remote sensing cropland inventories documented in this study are considered cost-effective. When compared to conventional survey costs of $62-66 per 10,000 acres, the Landsat and high-altitude inventories required only 3-5% of this amount, i.e., $1.97-2.98.

  9. A preliminary analysis of human factors affecting the recognition accuracy of a discrete word recognizer for C3 systems

    NASA Astrophysics Data System (ADS)

    Yellen, H. W.

    1983-03-01

    Literature pertaining to Voice Recognition abounds with information relevant to the assessment of transitory speech recognition devices. In the past, engineering requirements have dictated the path this technology followed. But other factors do exist that influence recognition accuracy. This thesis explores the impact of Human Factors on the successful recognition of speech, principally addressing the differences or variability among users. A Threshold Technology T-600 was used with a 100-utterance vocabulary to test 44 subjects. A statistical analysis was conducted on 5 generic categories of Human Factors: Occupational, Operational, Psychological, Physiological and Personal. How the equipment is trained and the experience level of the speaker were found to be key characteristics influencing recognition accuracy. To a lesser extent, computer experience, time of week, accent, vital capacity and rate of air flow, and speaker cooperativeness and anxiety were found to affect overall error rates.

  10. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2015-08-11

    A bivariate copula mixed model has been recently proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we use trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to the data, and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection asymmetric tail dependence, and their computational feasibility despite their three dimensionality.

  11. Empowering Multi-Cohort Gene Expression Analysis to Increase Reproducibility

    PubMed Central

    Haynes, Winston A; Vallania, Francesco; Liu, Charles; Bongen, Erika; Tomczak, Aurelie; Andres-Terrè, Marta; Lofgren, Shane; Tam, Andrew; Deisseroth, Cole A; Li, Matthew D; Sweeney, Timothy E

    2016-01-01

    A major contributor to the scientific reproducibility crisis has been that the results from homogeneous, single-center studies do not generalize to heterogeneous, real world populations. Multi-cohort gene expression analysis has helped to increase reproducibility by aggregating data from diverse populations into a single analysis. To make the multi-cohort analysis process more feasible, we have assembled an analysis pipeline which implements rigorously studied meta-analysis best practices. We have compiled and made publicly available the results of our own multi-cohort gene expression analysis of 103 diseases, spanning 615 studies and 36,915 samples, through a novel and interactive web application. As a result, we have made both the process of and the results from multi-cohort gene expression analysis more approachable for non-technical users. PMID:27896970

  12. Cost analysis can help a group practice increase revenues.

    PubMed

    Migliore, Sherry

    2002-02-01

    Undertaking a cost analysis to determine the cost of providing specific services can help group practices negotiate increased payment and identify areas for cost reduction. An OB/GYN practice in Pennsylvania undertook a cost analysis using the resource-based relative value system. Using data from the cost analysis, the practice was able to negotiate increased payment for some of its services. The practice also was able to target some of its fixed costs for reduction. Another result of the analysis was that the practice was able to focus marketing efforts on some of its most profitable, elective services, thereby increasing revenues. In addition, the practice was able to reduce the provision of unprofitable services.

  13. Accuracy of high b-value diffusion-weighted MRI for prostate cancer detection: a meta-analysis.

    PubMed

    Godley, Keith Craig; Syer, Tom Joseph; Toms, Andoni Paul; Smith, Toby Oliver; Johnson, Glyn; Cameron, Donnie; Malcolm, Paul Napier

    2017-01-01

    Background: The diagnostic accuracy of diffusion-weighted imaging (DWI) to detect prostate cancer is well-established. DWI provides visual as well as quantitative means of detecting tumor, the apparent diffusion coefficient (ADC). Recently, higher b-values have been used to improve DWI's diagnostic performance. Purpose: To determine the diagnostic performance of high b-value DWI at detecting prostate cancer and whether quantifying ADC improves accuracy. Material and Methods: A comprehensive literature search of published and unpublished databases was performed. Eligible studies had histopathologically proven prostate cancer, DWI sequences using b-values ≥ 1000 s/mm², at least ten patients, and data for creating a 2 × 2 table. Study quality was assessed with QUADAS-2 (Quality Assessment of diagnostic Accuracy Studies). Sensitivity and specificity were calculated and tests for statistical heterogeneity and threshold effect performed. Results were plotted on a summary receiver operating characteristic curve (sROC) and the area under the curve (AUC) determined the diagnostic performance of high b-value DWI. Results: Ten studies met eligibility criteria with 13 subsets of data available for analysis, including 522 patients. Pooled sensitivity and specificity were 0.59 (95% confidence interval [CI], 0.57-0.61) and 0.92 (95% CI, 0.91-0.92), respectively, and the sROC AUC was 0.92. Subgroup analysis showed a statistically significant (P = 0.03) improvement in accuracy when using tumor visual assessment rather than ADC. Conclusion: High b-value DWI gives good diagnostic performance for prostate cancer detection, and visual assessment of tumor diffusion is significantly more accurate than ROI measurements of ADC.

  14. Accuracy Maximization Analysis for Sensory-Perceptual Tasks: Computational Improvements, Filter Robustness, and Coding Advantages for Scaled Additive Noise

    PubMed Central

    Burge, Johannes

    2017-01-01

    Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA’s compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA’s unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of psychophysical and

  15. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characteristics of the ionosphere delay estimated with precise point positioning are analyzed in this paper. The estimation, interpolation, and application of the ionosphere delay are studied based on the processing of 24-h data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the receiver hardware delay bias, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When the satellite difference is used, the hardware delay bias can be canceled, and the interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.
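
    A small numeric sketch of the satellite-differencing step described above, with made-up numbers: a receiver-side hardware delay bias shifts every satellite's estimated ionosphere delay by the same amount, so differencing the estimates between two satellites cancels the bias while preserving the between-satellite delay difference.

      # Illustrative slant ionosphere delays (metres) for two satellites and a
      # common receiver hardware delay bias absorbed into the estimates.
      true_delay = {"G05": 3.42, "G12": 2.87}
      receiver_bias = 0.65

      estimated = {sat: d + receiver_bias for sat, d in true_delay.items()}

      # Undifferenced estimates carry the bias ...
      print(estimated["G05"] - true_delay["G05"])      # ~0.65 (the bias)

      # ... but the satellite difference is bias-free.
      print(estimated["G05"] - estimated["G12"],       # ~0.55
            true_delay["G05"] - true_delay["G12"])     # ~0.55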

  16. [Situational low self-esteem in pregnant women: an analysis of accuracy].

    PubMed

    Cavalcante, Joyce Carolle Bezerra; de Sousa, Vanessa Emille Carvalho; Lopes, Marcos Venícios de Oliveira

    2012-01-01

    To investigate the accuracy of defining characteristics of Situational low self-esteem we developed a cross-sectional study, with 52 pregnant women assisted in a family centre. The NANDA-I taxonomy was used as well as the Rosenberg's scale. The diagnosis was present in 32.7% of the sample and all characteristics presented statistical significance, except "Reports verbally situational challenge to its own value". The characteristics "Indecisive behavior" and "Helplessness expressions" had 82.35% of sensitivity. On the other hand, the characteristics "Expression of feelings of worthlessness" and "Reports verbally situational challenge to its own value" were the more specific, with 94.29% of specificity. These results can contribute with the nursing practice because the identification of accurate characteristics is essential to a secure inference.

  17. The modified equation approach to the stability and accuracy analysis of finite-difference methods

    NASA Technical Reports Server (NTRS)

    Warming, R. F.; Hyett, B. J.

    1974-01-01

    The stability and accuracy of finite-difference approximations to simple linear partial differential equations are analyzed by studying the modified partial differential equation. Aside from round-off error, the modified equation represents the actual partial differential equation solved when a numerical solution is computed using a finite-difference equation. The modified equation is derived by first expanding each term of a difference scheme in a Taylor series and then eliminating time derivatives higher than first order by certain algebraic manipulations. The connection between 'heuristic' stability theory based on the modified equation approach and the von Neumann (Fourier) method is established. In addition to the determination of necessary and sufficient conditions for computational stability, a truncated version of the modified equation can be used to gain insight into the nature of both dissipative and dispersive errors.
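
    As a worked illustration of the procedure (a standard textbook case chosen for brevity, not one of the paper's own examples), consider the first-order upwind scheme for the linear advection equation with constant speed a > 0. Expanding each term of the scheme in a Taylor series and eliminating time derivatives higher than first order yields a modified equation whose leading right-hand-side term is a numerical diffusion that governs the dissipative error:

      \[
      \frac{u_j^{n+1} - u_j^{n}}{\Delta t} + a \, \frac{u_j^{n} - u_{j-1}^{n}}{\Delta x} = 0,
      \qquad \nu = \frac{a \, \Delta t}{\Delta x},
      \]
      \[
      u_t + a \, u_x = \frac{a \, \Delta x}{2} \left( 1 - \nu \right) u_{xx} + \text{higher-order terms}.
      \]

    The scheme thus behaves like an advection-diffusion equation: the numerical diffusion coefficient is non-negative, and the heuristic stability argument recovers the familiar CFL condition 0 ≤ ν ≤ 1.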

  18. An Autoclavable Steerable Cannula Manual Deployment Device: Design and Accuracy Analysis.

    PubMed

    Burgner, Jessica; Swaney, Philip J; Bruns, Trevor L; Clark, Marlena S; Rucker, D Caleb; Burdette, E Clif; Webster, Robert J

    2012-12-01

    Accessing a specific, predefined location identified in medical images is a common interventional task for biopsies and drug or therapy delivery. While conventional surgical needles provide little steerability, concentric tube continuum devices enable steering through curved trajectories. These devices are usually developed as robotic systems. However, manual actuation of concentric tube devices is particularly useful for initial transfer into the clinic since the Food and Drug Administration (FDA) and Institutional Review Board (IRB) approval process of manually operated devices is simple compared to their motorized counterparts. In this paper, we present a manual actuation device for the deployment of steerable cannulas. The design focuses on compactness, modularity, usability, and sterilizability. Further, the kinematic mapping from joint space to Cartesian space is detailed for an example concentric tube device. Assessment of the device's accuracy was performed in free space, as well as in an image-guided surgery setting, using tracked 2D ultrasound.

  19. Accuracy Analysis for Digitized Sunspot Hand-drawing Records of Purple Mountain Observatory

    NASA Astrophysics Data System (ADS)

    Li, R. Y.; Zhou, T. H.; Ji, K. F.

    2016-05-01

    Sunspots are the most significant features on the solar disk and the earliest record of solar activity. They have been systematically observed for about 400 years, since the invention of the telescope. The long-term evolution of solar activity, especially the 11-year solar cycle, has largely been obtained from these data. In recent years, the historical hand-drawing records of sunspots have been processed digitally for permanent preservation and computer processing. Since hand-drawing records of sunspots will eventually be replaced by CCD images, it is necessary to evaluate the accuracy of the hand-drawing records by comparing them with CCD images. In this study, 189 digital hand-drawing records of sunspots observed by Purple Mountain Observatory in 2011 are analyzed. The results include: (1) the scanner scale difference between the horizontal and vertical directions is 0.2%; (2) the ring of the Sun on the recording paper is not a perfect circle, and the diameter in the east-west direction is 1% shorter than that in the north-south direction; (3) the orientation error of the recording paper can reach up to 0.5 degree in scanning. After comparing the sunspot positions of the hand-drawing records with simultaneous SDO/HMI (Solar Dynamics Observatory/Helioseismic and Magnetic Imager) continuum images by an overlapping method, we find that the accuracy of the sunspot positions in the hand-drawing records is about 7 arcsec. Roughly 3% of the hand-drawn sunspots have no counterpart that can be found in the SDO/HMI images.

  20. A bibliometric analysis of evaluative medical education studies: characteristics and indexing accuracy.

    PubMed

    Sampson, Margaret; Horsley, Tanya; Doja, Asif

    2013-03-01

    To determine the characteristics of medical education studies published in general and internal medicine (GIM) and medical education journals, and to analyze the accuracy of their indexing. The authors identified the five GIM and five medical education journals that published the most articles indexed in MEDLINE as medical education during January 2001 to January 2010. They searched Ovid MEDLINE for evaluative medical education studies published in these journals during this period and classified them as quantitative or qualitative studies according to MEDLINE indexing. They also examined themes and learner levels targeted. Using a random sample of records, they assessed the accuracy of study-type indexing. Of 4,418 records retrieved, 3,853 (87.2%) were from medical education journals and 565 (12.8%) were from GIM journals. Qualitative studies and program evaluations were more prevalent within medical education journals, whereas GIM journals published a higher proportion of clinical trials and systematic reviews (χ² = 74.28, df = 3, P < .001). Medical education journals had a concentration of studies targeting medical students, whereas GIM journals had a concentration targeting residents; themes were similar. The authors confirmed that 170 (56.7%) of the 300 sampled articles were correctly classified in MEDLINE as evaluative studies. The majority of the identified evaluative studies were published in medical education journals, confirming the integrity of medical education as a specialty. Findings concerning the study types published in medical education versus GIM journals are important for medical education researchers who seek to publish outside the field's specialty journals.

  1. Accuracy assessment of satellite altimetry over central East Antarctica by kinematic GNSS and crossover analysis

    NASA Astrophysics Data System (ADS)

    Schröder, Ludwig; Richter, Andreas; Fedorov, Denis; Knöfel, Christoph; Ewert, Heiko; Dietrich, Reinhard; Matveev, Aleksey Yu.; Scheinert, Mirko; Lukin, Valery

    2014-05-01

    Satellite altimetry is a unique technique to observe the contribution of the Antarctic ice sheet to global sea-level change. To fulfill the high quality requirements for its application, the respective products need to be validated against independent data like ground-based measurements. Kinematic GNSS provides a powerful method to acquire precise height information along the track of a vehicle. Within a collaboration of TU Dresden and Russian partners during the Russian Antarctic Expeditions in the seasons from 2001 to 2013 we recorded several such profiles in the region of the subglacial Lake Vostok, East Antarctica. After 2006 these datasets also include observations along seven continental traverses with a length of about 1600 km each between the Antarctic coast and the Russian research station Vostok (78° 28' S, 106° 50' E). After discussing some specific issues concerning the processing of the kinematic GNSS profiles under the particular conditions of the interior of the Antarctic ice sheet, we will show their application for the validation of NASA's laser altimeter satellite mission ICESat and of ESA's ice mission CryoSat-2. Analysing the height differences at crossover points, we gain clear insights into the height regime at the subglacial Lake Vostok. Thus, these profiles as well as the remarkably flat lake surface itself can be used to investigate the accuracy and possible error influences of these missions. We will show how the transmit-pulse reference selection correction (Gaussian vs. centroid, G-C) released in January 2013 helped to further improve the release R633 ICESat data and discuss the height offsets and other effects of the CryoSat-2 radar data. In conclusion we show that only a combination of laser and radar altimetry can provide both high precision and good spatial coverage. An independent validation with ground-based observations is crucial for a thorough accuracy assessment.
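
    A minimal sketch of a crossover-style comparison is given below: near-coincident points of an altimetry track and a kinematic GNSS profile are paired with a KD-tree and their height differences are summarized. The coordinate system, matching tolerance, and simulated 0.15 m bias are assumptions for illustration; a rigorous crossover analysis would interpolate both tracks to the exact intersection points.

```python
import numpy as np
from scipy.spatial import cKDTree

def crossover_stats(track_a, track_b, max_dist=50.0):
    """Height-difference statistics between two tracks at near-coincident points.

    track_a, track_b: arrays of shape (n, 3) with columns (x, y, h) in metres,
    e.g. altimetry footprints and a kinematic GNSS profile in a polar
    stereographic projection. Points of track_b lying within max_dist metres of
    a track_a point are treated as coincident."""
    tree = cKDTree(track_b[:, :2])
    dist, idx = tree.query(track_a[:, :2], distance_upper_bound=max_dist)
    hit = np.isfinite(dist)                          # unmatched points get dist = inf
    dh = track_a[hit, 2] - track_b[idx[hit], 2]
    return dh.mean(), dh.std(), hit.sum()

# Synthetic example: altimetry heights biased by +0.15 m relative to GNSS.
rng = np.random.default_rng(0)
gnss = np.column_stack([rng.uniform(0, 1e4, 500), rng.uniform(0, 1e4, 500),
                        3500.0 + rng.normal(0, 0.05, 500)])
alt = gnss.copy()
alt[:, 2] += 0.15 + rng.normal(0, 0.10, 500)

bias, spread, n = crossover_stats(alt, gnss)
print(f"mean dh = {bias:.2f} m, std = {spread:.2f} m, n = {n}")
```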

  2. The use of fractal analysis and photometry to estimate the accuracy of bulbar redness grading scales.

    PubMed

    Schulze, Marc M; Hutchings, Natalie; Simpson, Trefford L

    2008-04-01

    To use physical attributes of redness to determine the accuracy of four bulbar redness grading scales, and to cross-calibrate the scales based on these physical measures. Two image-processing metrics, fractal dimension (D) and percentage of pixel coverage (% PC), as well as photometric chromaticity were selected as physical measures, to describe and compare grades of bulbar redness among the McMonnies/Chapman-Davies scale, the Efron Scale, the Institute for Eye Research scale, and a validated scale developed at the Centre for Contact Lens Research. Two sets of images were prepared by using image processing: The first included multiple segments covering the largest possible region of interest (ROI) within the bulbar conjunctiva in the original images; the second contained modified scale images that were matched in size and resolution across scales, and a single, equally-sized ROI. To measure photometric chromaticity, the original scale images were displayed on a computer monitor, and multiple conjunctival segments were analyzed. Pearson correlation coefficients between each set of image metrics and the reference image grades were calculated to determine the accuracy of the scales. Correlations were high between reference image grades and all sets of objective metrics (all Pearson's r ≥ 0.88, P
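
    The two image-processing metrics are standard and can be sketched compactly. The following assumes a binary vessel mask of the region of interest and estimates the box-counting fractal dimension D and the percentage of pixel coverage (% PC); the random mask and the box sizes are illustrative only.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension D of a binary mask (e.g. thresholded
    conjunctival vessels) by box counting: count occupied boxes N(s) for each
    box size s and fit log N(s) against log s."""
    counts = []
    for s in box_sizes:
        h, w = mask.shape
        trimmed = mask[:h - h % s, :w - w % s]        # tile exactly into s-by-s boxes
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

def percent_pixel_coverage(mask):
    """% PC: percentage of the region of interest covered by 'vessel' pixels."""
    return 100.0 * mask.mean()

# Illustrative random mask standing in for a segmented conjunctival ROI.
rng = np.random.default_rng(1)
mask = rng.random((256, 256)) < 0.08
print(f"D = {box_counting_dimension(mask):.2f}, %PC = {percent_pixel_coverage(mask):.1f}")
```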

  3. Time Efficiency and Diagnostic Accuracy of New Automated Myocardial Perfusion Analysis Software in 320-Row CT Cardiac Imaging

    PubMed Central

    Rief, Matthias; Stenzel, Fabian; Kranz, Anisha; Schlattmann, Peter

    2013-01-01

    Objective We aimed to evaluate the time efficiency and diagnostic accuracy of automated myocardial computed tomography perfusion (CTP) image analysis software. Materials and Methods 320-row CTP was performed in 30 patients, and analyses were conducted independently by three different blinded readers by the use of two recent software releases (version 4.6 and novel version 4.71GR001, Toshiba, Tokyo, Japan). Analysis times were compared, and automated epi- and endocardial contour detection was subjectively rated in five categories (excellent, good, fair, poor and very poor). As semi-quantitative perfusion parameters, myocardial attenuation and transmural perfusion ratio (TPR) were calculated for each myocardial segment and agreement was tested by using the intraclass correlation coefficient (ICC). Conventional coronary angiography served as reference standard. Results The analysis time was significantly reduced with the novel automated software version as compared with the former release (Reader 1: 43:08 ± 11:39 min vs. 09:47 ± 04:51 min, Reader 2: 42:07 ± 06:44 min vs. 09:42 ± 02:50 min and Reader 3: 21:38 ± 3:44 min vs. 07:34 ± 02:12 min; p < 0.001 for all). Epi- and endocardial contour detection for the novel software was rated to be significantly better (p < 0.001) than with the former software. ICCs demonstrated strong agreement (≥ 0.75) for myocardial attenuation in 93% and for TPR in 82%. Diagnostic accuracy for the two software versions was not significantly different (p = 0.169) as compared with conventional coronary angiography. Conclusion The novel automated CTP analysis software offers enhanced time efficiency with an improvement by a factor of about four, while maintaining diagnostic accuracy. PMID:23323027
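
    As a sketch of the semi-quantitative perfusion measure mentioned above, the snippet below computes a transmural perfusion ratio for one myocardial segment, assuming TPR is taken as mean subendocardial attenuation divided by mean subepicardial attenuation; the HU values are invented for illustration.

```python
import numpy as np

def transmural_perfusion_ratio(endo_hu, epi_hu):
    """TPR for one myocardial segment: mean subendocardial attenuation divided
    by mean subepicardial attenuation (both in Hounsfield units). Values well
    below 1 would suggest a relative subendocardial perfusion deficit."""
    return np.mean(endo_hu) / np.mean(epi_hu)

# Invented attenuation samples (HU) for a single segment.
endo = [92, 95, 88, 90]
epi = [118, 121, 115, 119]
print(f"TPR = {transmural_perfusion_ratio(endo, epi):.2f}")
```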

  4. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    PubMed

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured as well as a loss of information intrinsic to band selection and use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application for the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples
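
    A simplified version of the classification chain (dimensionality reduction by LDA followed by a Random Forest on the reduced bands) could look like the sketch below, here with synthetic spectra in place of real hyperspectral pixels and with the integral-image texture features omitted.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic "hyperspectral pixels": 200 bands, three infection classes
# (healthy, infected, severely diseased) with a weak class-dependent shift.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 200))
y = rng.integers(0, 3, size=3000)
X += y[:, None] * 0.05

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# LDA reduces the spectral dimension to n_classes - 1 descriptive bands; the
# Random Forest is then trained on this low-dimensional representation.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```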

  5. Increasing the Accuracy of pH Measurements in Estuarine and Brackish Water: the Need for an Improved Pitzer Model for TRIS in Seawater

    NASA Astrophysics Data System (ADS)

    Turner, D. R.; Gallego, J.

    2016-02-01

    pH measurements in seawater on the total scale are calibrated against TRIS-seawater buffers. The current state of the art for definition of pH in these buffers uses Harned cell measurements. However, while these measurements have only been made in the salinity range 20 - 40, pH measurements in estuarine and brackish waters at lower salinities require the assignment of pH values to low salinity TRIS buffers. The assigned pH values of TRIS-seawater buffers at salinities less than 20 are currently based on (i) interpolation with measurements made in pure TRIS solutions at low ionic strength, and/or (ii) on measurements made in sodium chloride solutions. An additional problem is that as the salinity decreases, the composition of the buffer departs ever more from standard seawater, in order to maintain a minimum concentration of TRIS in the buffer. An alternative approach is to develop a Pitzer model for TRIS in seawater that is sufficiently accurate to reproduce both the Harned cell data in the salinity range 20 - 40, and the low ionic strength TRIS data. This approach would also allow the increasing proportion of TRIS at lower salinities to be treated effectively. In order to test this approach, the MIAMI model for the seawater electrolyte, together with published Pitzer parameters for TRIS, has been used to calculate the Harned cell potentials for TRIS-seawater buffers at 25°C in the salinity range 20 - 40. Comparison with experimental measurements shows significant and consistent offsets. We examine the extent to which this offset can be reduced by re-evaluation of TRIS parameters using updated data on the sodium chloride / water system, and identify priorities for new measurements that would yield increased accuracy of the Pitzer model for TRIS-seawater buffers.
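
    The Harned cell measurement underlying the pH assignment can be summarized in one equation, E = E° − (RT/F) ln(m_H γ_H m_Cl γ_Cl); the sketch below evaluates it with placeholder activity coefficients standing in for Pitzer-model output. All numerical inputs are illustrative only.

```python
import math

R = 8.314462618     # gas constant, J mol^-1 K^-1
F = 96485.33212     # Faraday constant, C mol^-1

def harned_cell_emf(e_standard, m_h, m_cl, gamma_h, gamma_cl, t_kelvin=298.15):
    """EMF of the Harned cell Pt | H2 | buffer, Cl- | AgCl | Ag at 1 atm H2:
        E = E0 - (R T / F) * ln(m_H+ * gamma_H+ * m_Cl- * gamma_Cl-)
    with molalities in mol kg^-1. In practice gamma_H+ and gamma_Cl- would be
    supplied by the Pitzer model of the TRIS-seawater buffer; here they are
    placeholders."""
    return e_standard - (R * t_kelvin / F) * math.log(m_h * gamma_h * m_cl * gamma_cl)

# Illustrative numbers only (E0 of the Ag/AgCl electrode at 25 degC is ~0.2224 V).
print(f"E = {harned_cell_emf(0.2224, m_h=1e-8, m_cl=0.5, gamma_h=0.7, gamma_cl=0.65):.4f} V")
```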

  6. Theoretical study of the accuracy of the pulse method, frontal analysis, and frontal analysis by characteristic points for the determination of single component adsorption isotherms

    SciTech Connect

    Kaczmarski, Krzysztof; Guiochon, Georges A

    2009-01-01

    The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N = 500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
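
    As an example of the final fitting step, the sketch below fits the Langmuir model q = q_s·b·c/(1 + b·c) to a handful of hypothetical isotherm data points of the kind produced by FA or PM; the bi-Langmuir and Moreau models would be fitted the same way with their own expressions.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qs, b):
    """Langmuir isotherm: q = qs * b * c / (1 + b * c)."""
    return qs * b * c / (1.0 + b * c)

# Hypothetical isotherm data points such as FA or PM might deliver.
c = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])    # mobile-phase concentration
q = np.array([0.8, 3.7, 6.6, 11.5, 19.8, 26.9])  # stationary-phase concentration

popt, pcov = curve_fit(langmuir, c, q, p0=(30.0, 0.1))
perr = np.sqrt(np.diag(pcov))
print(f"qs = {popt[0]:.1f} +/- {perr[0]:.1f},  b = {popt[1]:.3f} +/- {perr[1]:.3f}")
```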

  7. Effectiveness of slow motion video compared to real time video in improving the accuracy and consistency of subjective gait analysis in dogs.

    PubMed

    Lane, D M; Hill, S A; Huntingford, J L; Lafuente, P; Wall, R; Jones, K A

    2015-01-01

    Objective measures of canine gait quality via force plates, pressure mats or kinematic analysis are considered superior to subjective gait assessment (SGA). Despite research demonstrating that SGA does not accurately detect subtle lameness, it remains the most commonly performed diagnostic test for detecting lameness in dogs. This is largely because the financial, temporal and spatial requirements for existing objective gait analysis equipment make this technology impractical for use in general practice. The utility of slow motion video as a potential tool to augment SGA is currently untested. To evaluate a more accessible way to overcome the limitations of SGA, a slow motion video study was undertaken. Three experienced veterinarians reviewed video footage of 30 dogs, 15 with a diagnosis of primary limb lameness based on history and physical examination, and 15 with no indication of limb lameness based on history and physical examination. Four different videos were made for each dog, demonstrating each dog walking and trotting in real time, and then again walking and trotting in 50% slow motion. For each video, the veterinary raters assessed both the degree of lameness, and which limb(s) they felt represented the source of the lameness. Spearman's rho, Cramer's V, and t-tests were performed to determine if slow motion video increased either the accuracy or consistency of raters' SGA relative to real time video. Raters demonstrated no significant increase in consistency or accuracy in their SGA of slow motion video relative to real time video. Based on these findings, slow motion video does not increase the consistency or accuracy of SGA values. Further research is required to determine if slow motion video will benefit SGA in other ways.
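
    The consistency/accuracy comparison rests on standard rank and paired statistics; a minimal sketch with invented grades for one rater is shown below (the Cramer's V analysis of limb identification is omitted).

```python
import numpy as np
from scipy import stats

# Invented lameness grades (0-4) assigned by one rater to the same 30 dogs from
# real-time footage and from 50% slow-motion footage.
rng = np.random.default_rng(0)
real_time = rng.integers(0, 5, size=30)
slow_motion = np.clip(real_time + rng.integers(-1, 2, size=30), 0, 4)

rho, p_rho = stats.spearmanr(real_time, slow_motion)   # agreement between conditions
t, p_t = stats.ttest_rel(real_time, slow_motion)       # systematic shift in grading
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f}); paired t = {t:.2f} (p = {p_t:.3f})")
```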

  8. Effectiveness of slow motion video compared to real time video in improving the accuracy and consistency of subjective gait analysis in dogs

    PubMed Central

    Lane, D.M.; Hill, S.A.; Huntingford, J.L.; Lafuente, P.; Wall, R.; Jones, K.A.

    2015-01-01

    Objective measures of canine gait quality via force plates, pressure mats or kinematic analysis are considered superior to subjective gait assessment (SGA). Despite research demonstrating that SGA does not accurately detect subtle lameness, it remains the most commonly performed diagnostic test for detecting lameness in dogs. This is largely because the financial, temporal and spatial requirements for existing objective gait analysis equipment make this technology impractical for use in general practice. The utility of slow motion video as a potential tool to augment SGA is currently untested. To evaluate a more accessible way to overcome the limitations of SGA, a slow motion video study was undertaken. Three experienced veterinarians reviewed video footage of 30 dogs, 15 with a diagnosis of primary limb lameness based on history and physical examination, and 15 with no indication of limb lameness based on history and physical examination. Four different videos were made for each dog, demonstrating each dog walking and trotting in real time, and then again walking and trotting in 50% slow motion. For each video, the veterinary raters assessed both the degree of lameness, and which limb(s) they felt represented the source of the lameness. Spearman’s rho, Cramer’s V, and t-tests were performed to determine if slow motion video increased either the accuracy or consistency of raters’ SGA relative to real time video. Raters demonstrated no significant increase in consistency or accuracy in their SGA of slow motion video relative to real time video. Based on these findings, slow motion video does not increase the consistency or accuracy of SGA values. Further research is required to determine if slow motion video will benefit SGA in other ways. PMID:26623383

  9. Accuracy of bronchoalveolar lavage enzyme-linked immunospot assay to diagnose smear-negative tuberculosis: a meta-analysis.

    PubMed

    Li, Zhenzhen; Qin, Wenzhe; Li, Lei; Wu, Qin; Chen, Xuerong

    2015-01-01

    While the bronchoalveolar lavage enzyme-linked immunospot assay (BAL-ELISPOT) shows promise for diagnosing smear-negative tuberculosis, its accuracy remains controversial. We meta-analyzed the available evidence to obtain a clearer understanding of the diagnostic accuracy. Studies of the diagnostic performance of ELISPOT on smear-negative tuberculosis were identified through systematic searches of the PubMed and EMBASE databases. Pooled data on sensitivity, specificity and other measures of accuracy were meta-analyzed using a random-effects model. Summary receiver operating characteristic curves were used to assess overall test performance. A total of 7 studies were included in the meta-analysis. Diagnostic performance was as follows: sensitivity, 0.89 (95% CI 0.84 to 0.93); specificity, 0.78 (95% CI 0.74 to 0.81); positive likelihood ratio, 4.2 (95% CI 2.42 to 7.28); negative likelihood ratio, 0.14 (95% CI 0.06 to 0.33); diagnostic odds ratio, 36.16 (95% CI 9.70 to 134.73); and area under the curve, 0.9605 (SEM 0.0247). Available evidence suggests that BAL-ELISPOT may perform better than blood-ELISPOT for both screening and confirming a diagnosis of smear-negative tuberculosis. Nevertheless, BAL-ELISPOT should not be used alone but rather in parallel with clinical manifestations and conventional tests to ensure reliable diagnosis.
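
    The reported likelihood ratios and diagnostic odds ratio follow from sensitivity and specificity; the sketch below shows the relationships using the pooled point estimates from the abstract, noting that the meta-analysis pools each measure separately, so the published DOR does not equal LR+/LR− of the pooled pair exactly.

```python
def likelihood_ratios(sens, spec):
    """Positive/negative likelihood ratios and the diagnostic odds ratio implied
    by a single sensitivity/specificity pair."""
    lr_pos = sens / (1.0 - spec)
    lr_neg = (1.0 - sens) / spec
    return lr_pos, lr_neg, lr_pos / lr_neg

# Pooled point estimates quoted in the abstract; because each summary measure is
# pooled separately across studies, the published DOR (36.16) differs from the
# value implied by the pooled sensitivity and specificity alone.
lr_pos, lr_neg, dor = likelihood_ratios(0.89, 0.78)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}, implied DOR = {dor:.1f}")
```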

  10. Analysis of prostate cancer localization toward improved diagnostic accuracy of transperineal prostate biopsy.

    PubMed

    Sakamoto, Yoshiro; Fukaya, Kaori; Haraoka, Masaki; Kitamura, Kosuke; Toyonaga, Yoichiro; Tanaka, Michio; Horie, Shigeo

    2014-09-01

    Delineating the precise localization of prostate cancer is important in improving the diagnostic accuracy of prostate biopsy. In Juntendo University Nerima Hospital, initial 12-core or repeat 16-core biopsies were performed using a transrectal ultrasound guided transperineal prostate biopsy method. We step-sectioned prostates from radical prostatectomy specimens at 5-mm intervals from the urethra to the urinary bladder and designated five regions: the (1) Apex, (2) Apex-Mid, (3) Mid, (4) Mid-Base, and (5) Base. We then mapped prostate cancer localization on eight zones around the urethra for each of those regions. Prostate cancer was detected in 93 of 121 cases (76.9%) in the Apex, in 115 cases (95.0%) in the Apex-Mid, in 101 cases (83.5%) in the Mid, in 71 cases (58.7%) in the Mid-Base, and in 23 cases (19.0%) in the Base. In 99.2% of all cases, prostate cancers were detected from the Apex to Mid regions. For this reason, transperineal prostate biopsies have routinely been prioritized in the Apex, Apex-Mid, and Mid regions, while the Base region of the prostate was considered to be of lesser importance. Our analyses of prostate cancer localization revealed a higher rate of cancer in the posterior portion of the Apex, antero-medial and postero-medial portion of the Apex-Mid and antero-medial and postero-lateral portion of the Mid. The transperineal prostate biopsies performed in our institute had a sensitivity of 70.9%, a specificity of 96.6%, a positive predictive value (PPV) of 92.2% and a negative predictive value (NPV) of 85.5%. The concordance of prostate cancer between prostatectomy specimens and biopsies is comparatively favorable. According to our study, the diagnostic accuracy of transperineal prostate biopsy can be improved in our institute by including the anterior portion of the Apex-Mid and Mid regions in the 12-core biopsy or 16-core biopsy, such that a 4-core biopsy of the anterior portion is included.
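
    The closing accuracy figures are ordinary 2×2-table measures; the sketch below computes them from hypothetical counts chosen only to roughly reproduce the reported percentages (the underlying table is not given in the abstract).

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of biopsy results
    against the prostatectomy reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts chosen to roughly match the reported 70.9% / 96.6% /
# 92.2% / 85.5%; the abstract does not give the underlying table.
print(diagnostic_measures(tp=350, fp=30, fn=144, tn=850))
```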

  11. Stereo image pair's construction and accuracy analysis based on MMT soft-baseline

    NASA Astrophysics Data System (ADS)

    Ou, Jianliang; Zhang, Jinhua; Cao, Bin; Bao, Feng; Wang, Weian

    2009-10-01

    Land-based mobile mapping technology (MMT) can collect spatial and attribute data with high efficiency, and the data can meet the accuracy requirements of 1:2000-scale or even higher-accuracy topographic mapping without ground control points. In mobile mapping, direct georeferencing by the integration of GPS and INS/DR provides the mapping sensor with the platform's continuous position and attitude during the field survey; a stereo image pair of a target is then used to calculate its 3D global coordinates. In current MMT, however, the image stereo is generally defined by the solid baseline between CCD cameras that are rigidly connected to each other and calibrated in a high-precision control field, and the baseline length is quite short (less than 2 meters). This causes problems when observing distant targets or large buildings from different viewpoints, and it also limits the use of the huge number of MMT measurable images. This paper presents stereo image pair construction under a soft-baseline condition, in which the pair is formed from images acquired at different times and platform positions but overlapping the same target, so the baseline accuracy is somewhat lower than that of the solid baseline. We first give a brief introduction to the Tongji Geo-Informatics MMT, analyze the stereo image pair from the solid baseline, and then construct MMT measurable images into soft-baseline stereo pairs and express the corresponding mathematical model. In the experimental part, we compare the 3D solution of a concrete target obtained with a total station, with MMT solid-baseline stereo, and with soft-baseline surveying. The calculation shows that the target 3D coordinates obtained in the soft-baseline solution have the same precision as the solid-baseline one and also meet the topographic mapping requirement of scale 1:2000. Finally, the paper discusses some important influences of changes in the angle between the two observation bundles in photogrammetric forward intersection and of the component deviations of the survey error.
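
    The 3D solution for a target observed from two platform positions is a photogrammetric forward intersection; a minimal numeric sketch (least-squares closest point of two observation rays, with invented camera centres and an invented target) is given below.

```python
import numpy as np

def forward_intersection(c1, d1, c2, d2):
    """Photogrammetric forward intersection: least-squares 3D point closest to
    two observation rays, each given by a projection centre c and a direction d.
    Returns the midpoint of the shortest connecting segment and its length."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    a12 = d1 @ d2
    denom = 1.0 - a12**2                      # -> 0 for (near-)parallel rays
    t1 = ((b @ d1) - a12 * (b @ d2)) / denom
    t2 = (a12 * (b @ d1) - (b @ d2)) / denom
    p1, p2 = c1 + t1 * d1, c2 + t2 * d2
    return 0.5 * (p1 + p2), np.linalg.norm(p1 - p2)

# Invented soft-baseline pair: two platform positions observing the same target.
c1 = np.array([0.0, 0.0, 2.0])
c2 = np.array([25.0, 5.0, 2.1])
target = np.array([40.0, 60.0, 12.0])
point, gap = forward_intersection(c1, target - c1, c2, target - c2)
print(point, gap)
```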

  12. High-accuracy and long-range Brillouin optical time-domain analysis sensor based on the combination of pulse prepump technique and complementary coding

    NASA Astrophysics Data System (ADS)

    Sun, Qiao; Tu, Xiaobo; Lu, Yang; Sun, Shilin; Meng, Zhou

    2016-06-01

    A Brillouin optical time-domain analysis (BOTDA) sensor that combines the conventional complementary coding with the pulse prepump technique for high-accuracy and long-range distributed sensing is implemented and analyzed. The employment of the complementary coding provides an enhanced signal-to-noise ratio (SNR) of the sensing system and an extended sensing distance, and the measurement time is also reduced compared with a BOTDA sensor using linear coding. The combination of pulse prepump technique enables the establishment of a preactivated acoustic field in each pump pulse of the complementary codeword, which ensures measurements of high spatial resolution and high frequency accuracy. The feasibility of the prepumped complementary coding is analyzed theoretically and experimentally. The experiments are carried out beyond 50-km single-mode fiber, and experimental results show the capabilities of the proposed scheme to achieve 1-m spatial resolution with temperature and strain resolutions equal to ˜1.6°C and ˜32 μɛ, and 2-m spatial resolution with temperature and strain resolutions equal to ˜0.3°C and ˜6 μɛ, respectively. A longer sensing distance with the same spatial resolution and measurement accuracy can be achieved through increasing the code length of the prepumped complementary code.
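
    The complementary-coding part can be illustrated with a Golay pair: the sum of the autocorrelations of the two codes is a delta function, so correlating each coded trace with its own code and summing recovers the single-pulse response with an SNR gain. The fibre response, code length, and noise level below are invented for illustration, and the prepump shaping of each pulse is not modelled.

```python
import numpy as np

def golay_pair(order):
    """Binary (+1/-1) Golay complementary pair of length 2**order. The sum of
    the aperiodic autocorrelations of the two codes is a delta function."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(order):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def decode(trace_a, trace_b, code_a, code_b):
    """Recover the single-pulse response from the two coded traces by
    correlating each trace with its own code and summing."""
    n = len(code_a)
    rec = (np.correlate(trace_a, code_a, mode="full")[n - 1:]
           + np.correlate(trace_b, code_b, mode="full")[n - 1:])
    return rec / (2.0 * n)

# Simulated single-pulse response along the fibre with a localized "hot spot",
# probed with a length-64 complementary pair plus additive noise.
rng = np.random.default_rng(0)
h = np.exp(-np.arange(400) / 150.0)
h[200:230] += 0.3
a, b = golay_pair(6)
trace_a = np.convolve(a, h) + rng.normal(0, 0.05, len(h) + len(a) - 1)
trace_b = np.convolve(b, h) + rng.normal(0, 0.05, len(h) + len(b) - 1)
h_hat = decode(trace_a, trace_b, a, b)[:len(h)]
print("max reconstruction error:", np.abs(h_hat - h).max())
```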

  13. Accuracy of Presurgical Functional MR Imaging for Language Mapping of Brain Tumors: A Systematic Review and Meta-Analysis.

    PubMed

    Weng, Hsu-Huei; Noll, Kyle R; Johnson, Jason M; Prabhu, Sujit S; Tsai, Yuan-Hsiung; Chang, Sheng-Wei; Huang, Yen-Chu; Lee, Jiann-Der; Yang, Jen-Tsung; Yang, Cheng-Ta; Tsai, Ying-Huang; Yang, Chun-Yuh; Hazle, John D; Schomer, Donald F; Liu, Ho-Ling

    2017-10-04

    Purpose To compare functional magnetic resonance (MR) imaging for language mapping (hereafter, language functional MR imaging) with direct cortical stimulation (DCS) in patients with brain tumors and to assess factors associated with its accuracy. Materials and Methods PubMed/MEDLINE and related databases were searched for research articles published between January 2000 and September 2016. Findings were pooled by using bivariate random-effects and hierarchic summary receiver operating characteristic curve models. Meta-regression and subgroup analyses were performed to evaluate whether publication year, functional MR imaging paradigm, magnetic field strength, statistical threshold, and analysis software affected classification accuracy. Results Ten articles with a total of 214 patients were included in the analysis. On a per-patient basis, the pooled sensitivity and specificity of functional MR imaging was 44% (95% confidence interval [CI]: 14%, 78%) and 80% (95% CI: 54%, 93%), respectively. On a per-tag basis (ie, each DCS stimulation site or "tag" was considered a separate data point across all patients), the pooled sensitivity and specificity were 67% (95% CI: 51%, 80%) and 55% (95% CI: 25%, 82%), respectively. The per-tag analysis showed significantly higher sensitivity for studies with shorter functional MR imaging session times (P = .03) and relaxed statistical threshold (P = .05). Significantly higher specificity was found when expressive language task (P = .02), longer functional MR imaging session times (P < .01), visual presentation of stimuli (P = .04), and stringent statistical threshold (P = .01) were used. Conclusion Results of this study showed moderate accuracy of language functional MR imaging when compared with intraoperative DCS, and the included studies displayed significant methodologic heterogeneity. (©) RSNA, 2017 Online supplemental material is available for this article.

  14. Clinicopathological Significance and Diagnostic Accuracy of c-MET Expression by Immunohistochemistry in Gastric Cancer: A Meta-Analysis

    PubMed Central

    Pyo, Jung-Soo; Kang, Guhyun

    2016-01-01

    Purpose The aim of the present study was to elucidate the clinicopathological significance and diagnostic accuracy of immunohistochemistry (IHC) for determining the mesenchymal-epithelial transition factor (c-MET) expression in patients with gastric cancer (GC). Materials and Methods The present meta-analysis investigated the correlation between c-MET expression as determined by IHC and the clinicopathological parameters in 8,395 GC patients from 37 studies that satisfied the eligibility criteria. In addition, a concordance analysis was performed between c-MET expression as determined by IHC and c-MET amplification, and the diagnostic test accuracy was reviewed. Results The estimated rate of c-MET overexpression was 0.403 (95% confidence interval [CI], 0.327~0.484) and it was significantly correlated with male patients, poor differentiation, lymph node metastasis, higher TNM stage, and human epidermal growth factor receptor 2 (HER2) positivity in IHC analysis. There was a significant correlation between c-MET expression and worse overall survival rate (hazard ratio, 1.588; 95% CI, 1.266~1.992). The concordance rates between c-MET expression and c-MET amplification were 0.967 (95% CI, 0.916~0.987) and 0.270 (95% CI, 0.173~0.395) for cases with non-overexpressed and overexpressed c-MET, respectively. In the diagnostic test accuracy review, the pooled sensitivity and specificity were 0.56 (95% CI, 0.50~0.63) and 0.79 (95% CI, 0.77~0.81), respectively. Conclusions The c-MET overexpression as determined by IHC was significantly correlated with aggressive tumor behavior and positive IHC status for HER2 in patients with GC. In addition, the c-MET expression status could be useful in the screening of c-MET amplification in patients with GC. PMID:27752391

  15. A Meta-Analysis of Typhoid Diagnostic Accuracy Studies: A Recommendation to Adopt a Standardized Composite Reference

    PubMed Central

    Storey, Helen L.; Huang, Ying; Crudder, Chris; Golden, Allison; de los Santos, Tala; Hawkins, Kenneth

    2015-01-01

    Novel typhoid diagnostics currently under development have the potential to improve clinical care, surveillance, and the disease burden estimates that support vaccine introduction. Blood culture is most often used as the reference method to evaluate the accuracy of new typhoid tests; however, it is recognized to be an imperfect gold standard. If no single gold standard test exists, use of a composite reference standard (CRS) can improve estimation of diagnostic accuracy. Numerous studies have used a CRS to evaluate new typhoid diagnostics; however, there is no consensus on an appropriate CRS. In order to evaluate existing tests for use as a reference test or inclusion in a CRS, we performed a systematic review of the typhoid literature to include all index/reference test combinations observed. We described the landscape of comparisons performed, showed results of a meta-analysis on the accuracy of the more common combinations, and evaluated sources of variability based on study quality. This wide-ranging meta-analysis suggests that no single test has sufficiently good performance but some existing diagnostics may be useful as part of a CRS. Additionally, based on findings from the meta-analysis and a constructed numerical example demonstrating the use of CRS, we proposed necessary criteria and potential components of a typhoid CRS to guide future recommendations. Agreement and adoption by all investigators of a standardized CRS is requisite, and would improve comparison of new diagnostics across independent studies, leading to the identification of a better reference test and improved confidence in prevalence estimates. PMID:26566275
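
    A composite reference standard is simply a classification rule over several imperfect tests; the sketch below shows one possible, purely illustrative rule and how an index test would be scored against it.

```python
def composite_reference(blood_culture, pcr, serology):
    """One possible composite reference standard for typhoid: positive if blood
    culture is positive, or if both PCR and serology are positive. The rule is
    purely illustrative; the paper argues for a standardized, agreed rule."""
    return blood_culture or (pcr and serology)

# Scoring a hypothetical index test against the CRS for a few subjects.
subjects = [
    # (index_test, blood_culture, pcr, serology)
    (True, True, False, False),
    (True, False, True, True),
    (False, False, False, True),
    (False, False, False, False),
]
tp = sum(idx and composite_reference(bc, pcr, ser) for idx, bc, pcr, ser in subjects)
fn = sum((not idx) and composite_reference(bc, pcr, ser) for idx, bc, pcr, ser in subjects)
print(f"index test vs CRS: {tp} true positives, {fn} false negatives")
```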

  16. Comparing the Classification Accuracy among Nonparametric, Parametric Discriminant Analysis and Logistic Regression Methods.

    ERIC Educational Resources Information Center

    Ferrer, Alvaro J. Arce; Wang, Lin

    This study compared the classification performance among parametric discriminant analysis, nonparametric discriminant analysis, and logistic regression in a two-group classification application. Field data from an organizational survey were analyzed and bootstrapped for additional exploration. The data were observed to depart from multivariate…
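
    A present-day analogue of such a comparison could look like the sketch below: linear discriminant analysis and logistic regression evaluated on bootstrapped resamples of a synthetic two-group dataset (the nonparametric discriminant analysis arm is omitted, and nothing here reproduces the original survey data).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.utils import resample

# Synthetic two-group data standing in for the organizational survey.
X, y = make_classification(n_samples=400, n_features=10, n_informative=5,
                           random_state=0)

models = {"LDA": LinearDiscriminantAnalysis(),
          "logistic regression": LogisticRegression(max_iter=1000)}

for name, model in models.items():
    accs = []
    for b in range(100):                       # bootstrap resamples of the dataset
        Xb, yb = resample(X, y, random_state=b)
        accs.append(cross_val_score(model, Xb, yb, cv=5).mean())
    accs = np.array(accs)
    print(f"{name}: mean accuracy {accs.mean():.3f} (bootstrap SD {accs.std():.3f})")
```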

  17. Analysis of Studies That Compare the Dose Accuracy of Prefilled Insulin Pens

    PubMed Central

    Crowe, Daniel

    2009-01-01

    Multiple industry-sponsored studies have been published about the accuracy of insulin delivery using prefilled insulin pens. Although the study by Weise and colleagues in this issue of Journal of Diabetes Science and Technology found that the Novo Nordisk device was slightly more accurate than the Sanofi-Aventis device, one can anticipate a Sanofi-Aventis-funded study that may find the opposite, because no internationally acceptable, publicly funded, unbiased scientific organization exists to perform head-to-head comparisons of drugs and devices in an evidence-based manner. Currently, clinicians are left on their own to determine whether a study on this topic was conducted in an accurate, unbiased manner. A fundamental redesign of clinical research could reinvigorate and revolutionize the process by which innovations travel from the bench to the bedside. As we anticipate changes in healthcare in the future, it is imperative that the approval and postmarketing surveillance process is revised to support the practice of true evidence-based medicine. PMID:20046659

  18. Accuracy in certification of cause of death in a tertiary care hospital--a retrospective analysis.

    PubMed

    Dash, Shreemanta Kumar; Behera, Basanta Kumar; Patro, Shubhransu

    2014-05-01

    Every physician is duty bound to issue a "Cause of Death" certificate in the unfortunate event of the death of his/her patient. Incomplete and inaccurate entries in these certificates pose difficulty in obtaining reliable information on causes of mortality, lead to faulty public health surveillance, and hinder research. This study aims to evaluate the completeness and accuracy of the Medical Certification of Cause of Death in our Institute and to formulate a strategy to improve the quality of reporting of cause of death. During the period from January 2012 to December 2012, a total of 151 certificates of cause of death were issued by the faculty members of various departments. The maximum number of death certificates was issued for patients at the extremes of age: <10 years (n = 42, 27.82%) and >60 years (n = 46, 30.46%). The inadequacies observed were as follows: 40 (26.49%) certificates stated an inaccurate cause of death; the interval between onset and the terminal event was missing in 94 (62.25%) cases; in 68 (45.03%) cases the seal with the physician's registration number was not available on the certificate; the antecedent and underlying causes of death were incomplete in 35 (23.18%) and 84 (55.63%) cases, respectively; abbreviations were used in 66 (43.71%) cases; and the handwriting was illegible in 79 (52.32%) cases.

  19. Accuracy of magnetic resonance in deeply infiltrating endometriosis: a systematic review and meta-analysis.

    PubMed

    Medeiros, Lídia Rossi; Rosa, Maria Inês; Silva, Bruno Rosa; Reis, Maria Eduarda; Simon, Carla Sasso; Dondossola, Eduardo Ronconi; da Cunha Filho, João Sabino

    2015-03-01

    To estimate the accuracy of pelvic magnetic resonance imaging (MRI) in the diagnosis of deeply infiltrating endometriosis (DIE). A comprehensive search of the Medline, Pubmed, Lilacs, Scopus, Embase, Cochrane Central Register of Controlled Trials (CENTRAL), Biomed Central, and ISI Web of Science databases was conducted from January 1990 to December 2013. The medical subject headings (MeSHs) and text words "deep endometriosis", "deeply infiltrating endometriosis", "DIE", "magnetic resonance", and "MRI" were searched. Studies that compared the parameters of pelvic MRIs with those of paraffin-embedded sections for the diagnosis of DIE were included. Twenty studies were analyzed, which included 1,819 women. Pooled sensitivity and specificity were calculated across eight subgroups: for all sites, these were 0.83 and 0.90, respectively; for the bladder, 0.64 and 0.98, respectively; for the intestine, 0.84 and 0.97, respectively; for the pouch of Douglas, 0.89 and 0.94, respectively; for the rectosigmoid, 0.83 and 0.88, respectively; for the rectovaginal, 0.77 and 0.95, respectively; for the uterosacral ligaments, 0.85 and 0.80, respectively; and for the vagina and the posterior vaginal fornix, 0.82 and 0.82, respectively. In summary, pelvic MRI is a useful preoperative test for predicting the diagnosis of multiple sites of deep infiltrating endometriosis.

  20. Accuracy analysis of the Null-Screen method for the evaluation of flat heliostats

    NASA Astrophysics Data System (ADS)

    Cebrian-Xochihuila, P.; Huerta-Carranza, O.; Díaz-Uribe, R.

    2016-04-01

    In this work we develop an algorithm to determine the accuracy of the Null-Screen Method, used for testing flat heliostats employed as solar concentrators in a central tower configuration. We simulate the i