Science.gov

Sample records for "analysis increases accuracy"

  1. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements used to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on the combustion of naturally occurring materials, thereby improving analytical accuracy.
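
    The delta values referred to in this record are conventionally expressed relative to a reference isotope ratio. A minimal Python sketch of that arithmetic follows; the cylinder ratio is an invented example, and R_VPDB is the commonly cited 13C/12C ratio of the VPDB standard.

```python
# Minimal sketch: converting a measured isotope ratio into a delta value (per mil).
# The cylinder ratio is an invented example; R_VPDB is the commonly cited
# 13C/12C ratio of the VPDB reference standard.
R_VPDB = 0.0111802

def delta_per_mil(r_sample: float, r_reference: float = R_VPDB) -> float:
    """delta = (R_sample / R_reference - 1) * 1000, in per mil."""
    return (r_sample / r_reference - 1.0) * 1000.0

# Example: a CO2 cylinder whose measured 13C/12C ratio is 0.011085
print(round(delta_per_mil(0.011085), 2))   # approximately -8.5 per mil
```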

  2. Increasing Accuracy and Increasing Tension in H0

    NASA Astrophysics Data System (ADS)

    Freedman, Wendy L.

    2017-01-01

    The Hubble Constant, H0, provides a measure of the current expansion rate of the universe. In recent decades, there has been a huge increase in the accuracy with which extragalactic distances, and hence H0, can be measured. While the historical factor-of-two uncertainty in H0 has been resolved, a new discrepancy has arisen between the values of H0 measured in the local universe, and that estimated from cosmic microwave background measurements, assuming a Lambda cold dark matter model. I will review the advances that have led to the increase in accuracy in measurements of H0, as well as describe exciting future prospects with the James Webb Space Telescope (JWST) and Gaia, which will make it feasible to measure extragalactic distances at percent-level accuracy in the next decade.

  3. Reporting Data with "Over-the-Counter" Data Analysis Supports Increases Educators' Analysis Accuracy

    ERIC Educational Resources Information Center

    Rankin, Jenny Grant

    2013-01-01

    There is extensive research on the benefits of making data-informed decisions to improve learning, but these benefits rely on the data being effectively interpreted. Despite educators' above-average intellect and education levels, there is evidence many educators routinely misinterpret student data. Data analysis problems persist even at districts…

  4. Post-transcriptional knowledge in pathway analysis increases the accuracy of phenotypes classification

    PubMed Central

    Alaimo, Salvatore; Giugno, Rosalba; Acunzo, Mario; Veneziano, Dario; Ferro, Alfredo; Pulvirenti, Alfredo

    2016-01-01

    Motivation: Prediction of phenotypes from high-dimensional data is a crucial task in precision biology and medicine. Many technologies employ genomic biomarkers to characterize phenotypes. However, such elements are not sufficient to explain the underlying biology. To improve this, pathway analysis techniques have been proposed. Nevertheless, such methods have shown lack of accuracy in phenotypes classification. Results: Here we propose a novel methodology called MITHrIL (Mirna enrIched paTHway Impact anaLysis) for the analysis of signaling pathways, which extends the work of Tarca et al., 2009. MITHrIL augments pathways with missing regulatory elements, such as microRNAs, and their interactions with genes. The method takes as input the expression values of genes and/or microRNAs and returns a list of pathways sorted according to their degree of deregulation, together with the corresponding statistical significance (p-values). Our analysis shows that MITHrIL outperforms its competitors even in the worst case. In addition, our method is able to correctly classify sets of tumor samples drawn from TCGA. Availability: MITHrIL is freely available at the following URL: http://alpha.dmi.unict.it/mithril/ PMID:27275538
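
    As a rough illustration of the perturbation-propagation idea that impact analysis (Tarca et al., 2009) is built on, and which MITHrIL extends with miRNA-gene edges, here is a toy sketch. The miniature pathway and log fold-changes are invented; this is not the MITHrIL implementation.

```python
import numpy as np

# Toy sketch of perturbation propagation in pathway impact analysis.
# The miniature pathway and the log fold-changes are invented.
genes = ["miR-21", "PTEN", "AKT1", "FOXO3"]
dE = np.array([1.5, -1.0, 0.8, -0.6])      # measured log fold-changes (invented)

# beta[i, j] = signed weight of edge j -> i (+1 activation, -1 inhibition),
# already divided by the number of downstream targets of j (here always 1).
beta = np.array([
    [ 0.0,  0.0,  0.0, 0.0],   # nothing regulates miR-21 in this toy pathway
    [-1.0,  0.0,  0.0, 0.0],   # miR-21 represses PTEN
    [ 0.0, -1.0,  0.0, 0.0],   # PTEN inhibits AKT1
    [ 0.0,  0.0, -1.0, 0.0],   # AKT1 inhibits FOXO3
])

# Perturbation factors satisfy PF = dE + beta @ PF  =>  (I - beta) PF = dE
PF = np.linalg.solve(np.eye(len(genes)) - beta, dE)
Acc = PF - dE                               # accumulated (propagated) perturbation

for g, pf, acc in zip(genes, PF, Acc):
    print(f"{g:7s} PF = {pf:+.2f}   Acc = {acc:+.2f}")
```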

  5. Increasing Deception Detection Accuracy with Strategic Questioning

    ERIC Educational Resources Information Center

    Levine, Timothy R.; Shaw, Allison; Shulman, Hillary C.

    2010-01-01

    One explanation for the finding of slightly above-chance accuracy in detecting deception experiments is limited variance in sender transparency. The current study sought to increase accuracy by increasing variance in sender transparency with strategic interrogative questioning. Participants (total N = 128) observed cheaters and noncheaters who…

  6. Joint analysis of psychiatric disorders increases accuracy of risk prediction for schizophrenia, bipolar disorder, and major depressive disorder.

    PubMed

    Maier, Robert; Moser, Gerhard; Chen, Guo-Bo; Ripke, Stephan; Coryell, William; Potash, James B; Scheftner, William A; Shi, Jianxin; Weissman, Myrna M; Hultman, Christina M; Landén, Mikael; Levinson, Douglas F; Kendler, Kenneth S; Smoller, Jordan W; Wray, Naomi R; Lee, S Hong

    2015-02-05

    Genetic risk prediction has several potential applications in medical research and clinical practice and could be used, for example, to stratify a heterogeneous population of patients by their predicted genetic risk. However, for polygenic traits, such as psychiatric disorders, the accuracy of risk prediction is low. Here we use a multivariate linear mixed model and apply multi-trait genomic best linear unbiased prediction for genetic risk prediction. This method exploits correlations between disorders and simultaneously evaluates individual risk for each disorder. We show that the multivariate approach significantly increases the prediction accuracy for schizophrenia, bipolar disorder, and major depressive disorder in the discovery as well as in independent validation datasets. By grouping SNPs based on genome annotation and fitting multiple random effects, we show that the prediction accuracy could be further improved. The gain in prediction accuracy of the multivariate approach is equivalent to an increase in sample size of 34% for schizophrenia, 68% for bipolar disorder, and 76% for major depressive disorders using single trait models. Because our approach can be readily applied to any number of GWAS datasets of correlated traits, it is a flexible and powerful tool to maximize prediction accuracy. With current sample size, risk predictors are not useful in a clinical setting but already are a valuable research tool, for example in experimental designs comparing cases with high and low polygenic risk.
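
    A simulated, non-authoritative sketch of the multi-trait genomic BLUP idea with known (co)variance components follows; it uses the textbook BLUP formula rather than the authors' software, and the sample sizes, heritabilities, and genetic correlation are invented.

```python
import numpy as np

# Simulated sketch of multi-trait genomic BLUP with known (co)variance
# components: u_hat = Cov(u, y) V^{-1} y. All parameters are invented.
rng = np.random.default_rng(0)
n, m = 200, 500                                   # individuals, SNPs

X = rng.binomial(2, 0.5, size=(n, m)).astype(float)
X -= X.mean(axis=0)
K = X @ X.T / m                                   # genomic relationship matrix

G0 = np.array([[0.5, 0.3], [0.3, 0.5]])           # genetic (co)variance of 2 traits
R0 = np.array([[0.5, 0.0], [0.0, 0.5]])           # residual (co)variance

# Simulate correlated genetic values and phenotypes (trait 1 block, then trait 2 block)
Lg = np.linalg.cholesky(np.kron(G0, K) + 1e-8 * np.eye(2 * n))
u = Lg @ rng.standard_normal(2 * n)
e = np.linalg.cholesky(np.kron(R0, np.eye(n))) @ rng.standard_normal(2 * n)
y = u + e

# Multi-trait BLUP uses both traits' records; single-trait BLUP uses trait 1 only
V = np.kron(G0, K) + np.kron(R0, np.eye(n))
u_hat_multi = np.kron(G0, K) @ np.linalg.solve(V, y)

V1 = G0[0, 0] * K + R0[0, 0] * np.eye(n)
u_hat_single = G0[0, 0] * K @ np.linalg.solve(V1, y[:n])

r_multi = np.corrcoef(u[:n], u_hat_multi[:n])[0, 1]
r_single = np.corrcoef(u[:n], u_hat_single)[0, 1]
print(f"accuracy for trait 1 genetic values: single = {r_single:.3f}, multi = {r_multi:.3f}")
```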

  7. Joint Analysis of Psychiatric Disorders Increases Accuracy of Risk Prediction for Schizophrenia, Bipolar Disorder, and Major Depressive Disorder

    PubMed Central

    Maier, Robert; Moser, Gerhard; Chen, Guo-Bo; Ripke, Stephan; Absher, Devin; Agartz, Ingrid; Akil, Huda; Amin, Farooq; Andreassen, Ole A.; Anjorin, Adebayo; Anney, Richard; Arking, Dan E.; Asherson, Philip; Azevedo, Maria H.; Backlund, Lena; Badner, Judith A.; Bailey, Anthony J.; Banaschewski, Tobias; Barchas, Jack D.; Barnes, Michael R.; Barrett, Thomas B.; Bass, Nicholas; Battaglia, Agatino; Bauer, Michael; Bayés, Mònica; Bellivier, Frank; Bergen, Sarah E.; Berrettini, Wade; Betancur, Catalina; Bettecken, Thomas; Biederman, Joseph; Binder, Elisabeth B.; Black, Donald W.; Blackwood, Douglas H.R.; Bloss, Cinnamon S.; Boehnke, Michael; Boomsma, Dorret I.; Breen, Gerome; Breuer, René; Bruggeman, Richard; Buccola, Nancy G.; Buitelaar, Jan K.; Bunney, William E.; Buxbaum, Joseph D.; Byerley, William F.; Caesar, Sian; Cahn, Wiepke; Cantor, Rita M.; Casas, Miguel; Chakravarti, Aravinda; Chambert, Kimberly; Choudhury, Khalid; Cichon, Sven; Cloninger, C. Robert; Collier, David A.; Cook, Edwin H.; Coon, Hilary; Cormand, Bru; Cormican, Paul; Corvin, Aiden; Coryell, William H.; Craddock, Nicholas; Craig, David W.; Craig, Ian W.; Crosbie, Jennifer; Cuccaro, Michael L.; Curtis, David; Czamara, Darina; Daly, Mark J.; Datta, Susmita; Dawson, Geraldine; Day, Richard; De Geus, Eco J.; Degenhardt, Franziska; Devlin, Bernie; Djurovic, Srdjan; Donohoe, Gary J.; Doyle, Alysa E.; Duan, Jubao; Dudbridge, Frank; Duketis, Eftichia; Ebstein, Richard P.; Edenberg, Howard J.; Elia, Josephine; Ennis, Sean; Etain, Bruno; Fanous, Ayman; Faraone, Stephen V.; Farmer, Anne E.; Ferrier, I. Nicol; Flickinger, Matthew; Fombonne, Eric; Foroud, Tatiana; Frank, Josef; Franke, Barbara; Fraser, Christine; Freedman, Robert; Freimer, Nelson B.; Freitag, Christine M.; Friedl, Marion; Frisén, Louise; Gallagher, Louise; Gejman, Pablo V.; Georgieva, Lyudmila; Gershon, Elliot S.; Geschwind, Daniel H.; Giegling, Ina; Gill, Michael; Gordon, Scott D.; Gordon-Smith, Katherine; Green, Elaine K.; Greenwood, Tiffany A.; Grice, Dorothy E.; Gross, Magdalena; Grozeva, Detelina; Guan, Weihua; Gurling, Hugh; De Haan, Lieuwe; Haines, Jonathan L.; Hakonarson, Hakon; Hallmayer, Joachim; Hamilton, Steven P.; Hamshere, Marian L.; Hansen, Thomas F.; Hartmann, Annette M.; Hautzinger, Martin; Heath, Andrew C.; Henders, Anjali K.; Herms, Stefan; Hickie, Ian B.; Hipolito, Maria; Hoefels, Susanne; Holmans, Peter A.; Holsboer, Florian; Hoogendijk, Witte J.; Hottenga, Jouke-Jan; Hultman, Christina M.; Hus, Vanessa; Ingason, Andrés; Ising, Marcus; Jamain, Stéphane; Jones, Ian; Jones, Lisa; Kähler, Anna K.; Kahn, René S.; Kandaswamy, Radhika; Keller, Matthew C.; Kelsoe, John R.; Kendler, Kenneth S.; Kennedy, James L.; Kenny, Elaine; Kent, Lindsey; Kim, Yunjung; Kirov, George K.; Klauck, Sabine M.; Klei, Lambertus; Knowles, James A.; Kohli, Martin A.; Koller, Daniel L.; Konte, Bettina; Korszun, Ania; Krabbendam, Lydia; Krasucki, Robert; Kuntsi, Jonna; Kwan, Phoenix; Landén, Mikael; Långström, Niklas; Lathrop, Mark; Lawrence, Jacob; Lawson, William B.; Leboyer, Marion; Ledbetter, David H.; Lee, Phil H.; Lencz, Todd; Lesch, Klaus-Peter; Levinson, Douglas F.; Lewis, Cathryn M.; Li, Jun; Lichtenstein, Paul; Lieberman, Jeffrey A.; Lin, Dan-Yu; Linszen, Don H.; Liu, Chunyu; Lohoff, Falk W.; Loo, Sandra K.; Lord, Catherine; Lowe, Jennifer K.; Lucae, Susanne; MacIntyre, Donald J.; Madden, Pamela A.F.; Maestrini, Elena; Magnusson, Patrik K.E.; Mahon, Pamela B.; Maier, Wolfgang; Malhotra, Anil K.; Mane, Shrikant M.; Martin, Christa L.; Martin, Nicholas G.; Mattheisen, 
Manuel; Matthews, Keith; Mattingsdal, Morten; McCarroll, Steven A.; McGhee, Kevin A.; McGough, James J.; McGrath, Patrick J.; McGuffin, Peter; McInnis, Melvin G.; McIntosh, Andrew; McKinney, Rebecca; McLean, Alan W.; McMahon, Francis J.; McMahon, William M.; McQuillin, Andrew; Medeiros, Helena; Medland, Sarah E.; Meier, Sandra; Melle, Ingrid; Meng, Fan; Meyer, Jobst; Middeldorp, Christel M.; Middleton, Lefkos; Milanova, Vihra; Miranda, Ana; Monaco, Anthony P.; Montgomery, Grant W.; Moran, Jennifer L.; Moreno-De-Luca, Daniel; Morken, Gunnar; Morris, Derek W.; Morrow, Eric M.; Moskvina, Valentina; Mowry, Bryan J.; Muglia, Pierandrea; Mühleisen, Thomas W.; Müller-Myhsok, Bertram; Murtha, Michael; Myers, Richard M.; Myin-Germeys, Inez; Neale, Benjamin M.; Nelson, Stan F.; Nievergelt, Caroline M.; Nikolov, Ivan; Nimgaonkar, Vishwajit; Nolen, Willem A.; Nöthen, Markus M.; Nurnberger, John I.; Nwulia, Evaristus A.; Nyholt, Dale R.; O’Donovan, Michael C.; O’Dushlaine, Colm; Oades, Robert D.; Olincy, Ann; Oliveira, Guiomar; Olsen, Line; Ophoff, Roel A.; Osby, Urban; Owen, Michael J.; Palotie, Aarno; Parr, Jeremy R.; Paterson, Andrew D.; Pato, Carlos N.; Pato, Michele T.; Penninx, Brenda W.; Pergadia, Michele L.; Pericak-Vance, Margaret A.; Perlis, Roy H.; Pickard, Benjamin S.; Pimm, Jonathan; Piven, Joseph; Posthuma, Danielle; Potash, James B.; Poustka, Fritz; Propping, Peter; Purcell, Shaun M.; Puri, Vinay; Quested, Digby J.; Quinn, Emma M.; Ramos-Quiroga, Josep Antoni; Rasmussen, Henrik B.; Raychaudhuri, Soumya; Rehnström, Karola; Reif, Andreas; Ribasés, Marta; Rice, John P.; Rietschel, Marcella; Ripke, Stephan; Roeder, Kathryn; Roeyers, Herbert; Rossin, Lizzy; Rothenberger, Aribert; Rouleau, Guy; Ruderfer, Douglas; Rujescu, Dan; Sanders, Alan R.; Sanders, Stephan J.; Santangelo, Susan L.; Schachar, Russell; Schalling, Martin; Schatzberg, Alan F.; Scheftner, William A.; Schellenberg, Gerard D.; Scherer, Stephen W.; Schork, Nicholas J.; Schulze, Thomas G.; Schumacher, Johannes; Schwarz, Markus; Scolnick, Edward; Scott, Laura J.; Sergeant, Joseph A.; Shi, Jianxin; Shilling, Paul D.; Shyn, Stanley I.; Silverman, Jeremy M.; Sklar, Pamela; Slager, Susan L.; Smalley, Susan L.; Smit, Johannes H.; Smith, Erin N.; Smoller, Jordan W.; Sonuga-Barke, Edmund J.S.; St Clair, David; State, Matthew; Steffens, Michael; Steinhausen, Hans-Christoph; Strauss, John S.; Strohmaier, Jana; Stroup, T. Scott; Sullivan, Patrick F.; Sutcliffe, James; Szatmari, Peter; Szelinger, Szabocls; Thapar, Anita; Thirumalai, Srinivasa; Thompson, Robert C.; Todorov, Alexandre A.; Tozzi, Federica; Treutlein, Jens; Tzeng, Jung-Ying; Uhr, Manfred; van den Oord, Edwin J.C.G.; Van Grootheest, Gerard; Van Os, Jim; Vicente, Astrid M.; Vieland, Veronica J.; Vincent, John B.; Visscher, Peter M.; Walsh, Christopher A.; Wassink, Thomas H.; Watson, Stanley J.; Weiss, Lauren A.; Weissman, Myrna M.; Werge, Thomas; Wienker, Thomas F.; Wiersma, Durk; Wijsman, Ellen M.; Willemsen, Gonneke; Williams, Nigel; Willsey, A. Jeremy; Witt, Stephanie H.; Wray, Naomi R.; Xu, Wei; Young, Allan H.; Yu, Timothy W.; Zammit, Stanley; Zandi, Peter P.; Zhang, Peng; Zitman, Frans G.; Zöllner, Sebastian; Coryell, William; Potash, James B.; Scheftner, William A.; Shi, Jianxin; Weissman, Myrna M.; Hultman, Christina M.; Landén, Mikael; Levinson, Douglas F.; Kendler, Kenneth S.; Smoller, Jordan W.; Wray, Naomi R.; Lee, S. Hong

    2015-01-01

    Genetic risk prediction has several potential applications in medical research and clinical practice and could be used, for example, to stratify a heterogeneous population of patients by their predicted genetic risk. However, for polygenic traits, such as psychiatric disorders, the accuracy of risk prediction is low. Here we use a multivariate linear mixed model and apply multi-trait genomic best linear unbiased prediction for genetic risk prediction. This method exploits correlations between disorders and simultaneously evaluates individual risk for each disorder. We show that the multivariate approach significantly increases the prediction accuracy for schizophrenia, bipolar disorder, and major depressive disorder in the discovery as well as in independent validation datasets. By grouping SNPs based on genome annotation and fitting multiple random effects, we show that the prediction accuracy could be further improved. The gain in prediction accuracy of the multivariate approach is equivalent to an increase in sample size of 34% for schizophrenia, 68% for bipolar disorder, and 76% for major depressive disorders using single trait models. Because our approach can be readily applied to any number of GWAS datasets of correlated traits, it is a flexible and powerful tool to maximize prediction accuracy. With current sample size, risk predictors are not useful in a clinical setting but already are a valuable research tool, for example in experimental designs comparing cases with high and low polygenic risk. PMID:25640677

  8. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number

  9. Final Technical Report: Increasing Prediction Accuracy.

    SciTech Connect

    King, Bruce Hardison; Hansen, Clifford; Stein, Joshua

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.
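
    A back-of-the-envelope sketch of the kind of prediction a PV performance model makes, and of how irradiance bias propagates into predicted energy (and hence project risk), is shown below; all values are invented placeholders.

```python
# Back-of-the-envelope sketch of a PV energy prediction and how irradiance bias
# propagates into predicted energy. All values are invented placeholders.
poa_insolation_kwh_m2 = 1800.0     # annual plane-of-array insolation
dc_capacity_kw = 100.0             # nameplate DC capacity
performance_ratio = 0.80           # lumps temperature, soiling, wiring, inverter losses

# Energy (kWh) = capacity (kW) * insolation (kWh/m2) / reference irradiance (1 kW/m2) * PR
annual_energy_kwh = dc_capacity_kw * poa_insolation_kwh_m2 / 1.0 * performance_ratio
print(f"predicted annual energy: {annual_energy_kwh:,.0f} kWh")

# A 2% irradiance bias becomes a 2% energy (and revenue) bias, which is one
# reason reduced model uncertainty translates into lower project risk.
print(f"with a +2% irradiance bias: {annual_energy_kwh * 1.02:,.0f} kWh")
```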

  10. Process Analysis Via Accuracy Control

    DTIC Science & Technology

    1982-02-01

    February 1982. Process Analysis Via Accuracy Control. The National Shipbuilding Research Program, U.S. Department of Transportation, Maritime Administration. Examples contained in Appendix C include how "A/C" process analysis leads to design improvement and how a change in sequence can…

  11. Combining Multiple Gyroscope Outputs for Increased Accuracy

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    2003-01-01

    A proposed method of processing the outputs of multiple gyroscopes to increase the accuracy of rate (that is, angular-velocity) readings has been developed theoretically and demonstrated by computer simulation. Although the method is applicable, in principle, to any gyroscopes, it is intended especially for application to gyroscopes that are parts of microelectromechanical systems (MEMS). The method is based on the concept that the collective performance of multiple, relatively inexpensive, nominally identical devices can be better than that of one of the devices considered by itself. The method would make it possible to synthesize the readings of a single, more accurate gyroscope (a virtual gyroscope) from the outputs of a large number of microscopic gyroscopes fabricated together on a single MEMS chip. The big advantage would be that the combination of the MEMS gyroscope array and the processing circuitry needed to implement the method would be smaller, lighter in weight, and less power-hungry, relative to a conventional gyroscope of equal accuracy. The method (see figure) is one of combining and filtering the digitized outputs of multiple gyroscopes to obtain minimum-variance estimates of rate. In the combining-and-filtering operations, measurement data from the gyroscopes would be weighted and smoothed with respect to each other according to the gain matrix of a minimum- variance filter. According to Kalman-filter theory, the gain matrix of the minimum-variance filter is uniquely specified by the filter covariance, which propagates according to a matrix Riccati equation. The present method incorporates an exact analytical solution of this equation.
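
    The simplest static version of the combining idea, minimum-variance weighting of independent gyro readings, can be sketched as below. The full method described in this record uses a Kalman filter whose gain follows a matrix Riccati equation and also handles correlated drift; the noise figures here are invented.

```python
import numpy as np

# Static minimum-variance combination of independent gyro readings
# (w_i proportional to 1/sigma_i^2). Noise figures are invented.
rng = np.random.default_rng(1)

true_rate = 0.5                                  # deg/s, constant for the demo
sigmas = np.array([0.8, 1.0, 1.2, 0.9])          # per-gyro noise std devs
n_samples = 10_000

readings = true_rate + rng.standard_normal((n_samples, sigmas.size)) * sigmas

w = 1.0 / sigmas**2
w /= w.sum()                                     # minimum-variance weights
fused = readings @ w                             # the "virtual gyroscope" output

print("per-gyro noise std :", readings.std(axis=0).round(3))
print("fused noise std    :", round(float(fused.std()), 3))
print("theoretical fused  :", round(1.0 / np.sqrt((1.0 / sigmas**2).sum()), 3))
```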

  12. Portable, high intensity isotopic neutron source provides increased experimental accuracy

    NASA Technical Reports Server (NTRS)

    Mohr, W. C.; Stewart, D. C.; Wahlgren, M. A.

    1968-01-01

    Small, portable, high-intensity isotopic neutron source combines twelve curium-americium-beryllium sources. This high intensity of neutrons, with a flux that slowly decreases at a known rate, provides increased experimental accuracy.

  13. Increase in error threshold for quasispecies by heterogeneous replication accuracy

    NASA Astrophysics Data System (ADS)

    Aoki, Kazuhiro; Furusawa, Mitsuru

    2003-09-01

    In this paper we investigate the error threshold for quasispecies with heterogeneous replication accuracy. We show that the coexistence of error-free and error-prone polymerases can greatly increase the error threshold without a catastrophic loss of genetic information. We also show that the error threshold is influenced by the number of replicores. Our research suggests that quasispecies with heterogeneous replication accuracy can reduce the genetic cost of selective evolution while still producing a variety of mutants.
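
    For context, the classical single-fidelity error-threshold relation that this work generalizes can be evaluated directly; the superiority and fidelity values below are illustrative, not taken from the paper.

```python
import math

# Classical single-fidelity error-threshold relation (Eigen): with per-base
# copying fidelity q and master-sequence superiority sigma, information is
# maintained only while q**L > 1/sigma, i.e. roughly L_max ~ ln(sigma) / (1 - q).
sigma = 10.0
for q in (0.999, 0.9999, 0.99999):
    L_max = math.log(sigma) / (1.0 - q)
    print(f"fidelity q = {q}: maintainable genome length ~ {L_max:,.0f} bases")
```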

  14. Increasing shape modelling accuracy by adjusting for subject positioning: an application to the analysis of radiographic proximal femur symmetry using data from the Osteoarthritis Initiative.

    PubMed

    Lindner, C; Wallis, G A; Cootes, T F

    2014-04-01

    …adjusting for subject positioning increases the accuracy of predicting the shape of the contra-lateral hip.

  15. Modeling Linkage Disequilibrium Increases Accuracy of Polygenic Risk Scores.

    PubMed

    Vilhjálmsson, Bjarni J; Yang, Jian; Finucane, Hilary K; Gusev, Alexander; Lindström, Sara; Ripke, Stephan; Genovese, Giulio; Loh, Po-Ru; Bhatia, Gaurav; Do, Ron; Hayeck, Tristan; Won, Hong-Hee; Kathiresan, Sekar; Pato, Michele; Pato, Carlos; Tamimi, Rulla; Stahl, Eli; Zaitlen, Noah; Pasaniuc, Bogdan; Belbin, Gillian; Kenny, Eimear E; Schierup, Mikkel H; De Jager, Philip; Patsopoulos, Nikolaos A; McCarroll, Steve; Daly, Mark; Purcell, Shaun; Chasman, Daniel; Neale, Benjamin; Goddard, Michael; Visscher, Peter M; Kraft, Peter; Patterson, Nick; Price, Alkes L

    2015-10-01

    Polygenic risk scores have shown great promise in predicting complex disease risk and will become more accurate as training sample sizes increase. The standard approach for calculating risk scores involves linkage disequilibrium (LD)-based marker pruning and applying a p value threshold to association statistics, but this discards information and can reduce predictive accuracy. We introduce LDpred, a method that infers the posterior mean effect size of each marker by using a prior on effect sizes and LD information from an external reference panel. Theory and simulations show that LDpred outperforms the approach of pruning followed by thresholding, particularly at large sample sizes. Accordingly, predicted R(2) increased from 20.1% to 25.3% in a large schizophrenia dataset and from 9.8% to 12.0% in a large multiple sclerosis dataset. A similar relative improvement in accuracy was observed for three additional large disease datasets and for non-European schizophrenia samples. The advantage of LDpred over existing methods will grow as sample sizes increase.
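
    The infinitesimal-prior special case of this approach has a closed-form posterior mean, sketched below with simulated stand-ins for the reference panel, GWAS effects, and parameters; the full LDpred method uses a point-normal prior with Gibbs sampling.

```python
import numpy as np

# Sketch of the infinitesimal-model special case (LDpred-inf), which has the
# closed form beta_post = (D + (M / (N * h2)) I)^{-1} beta_marginal, with D an
# LD correlation matrix from a reference panel. All data are simulated stand-ins.
rng = np.random.default_rng(2)

n_ref, M, N, h2 = 500, 50, 10_000, 0.3        # panel size, SNPs, GWAS N, heritability
G = rng.binomial(2, 0.3, size=(n_ref, M)).astype(float)
G = (G - G.mean(axis=0)) / G.std(axis=0)
D = np.corrcoef(G, rowvar=False)              # LD matrix estimated from the panel

beta_marginal = rng.standard_normal(M) * 0.05 # stand-in GWAS marginal effects

shrink = M / (N * h2)
beta_post = np.linalg.solve(D + shrink * np.eye(M), beta_marginal)

# A polygenic score for a new (standardized) genotype vector is just a dot product
g_new = (rng.binomial(2, 0.3, size=M) - 2 * 0.3) / np.sqrt(2 * 0.3 * 0.7)
print("polygenic score:", round(float(g_new @ beta_post), 4))
```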

  16. Modeling Linkage Disequilibrium Increases Accuracy of Polygenic Risk Scores

    PubMed Central

    Vilhjálmsson, Bjarni J.; Yang, Jian; Finucane, Hilary K.; Gusev, Alexander; Lindström, Sara; Ripke, Stephan; Genovese, Giulio; Loh, Po-Ru; Bhatia, Gaurav; Do, Ron; Hayeck, Tristan; Won, Hong-Hee; Ripke, Stephan; Neale, Benjamin M.; Corvin, Aiden; Walters, James T.R.; Farh, Kai-How; Holmans, Peter A.; Lee, Phil; Bulik-Sullivan, Brendan; Collier, David A.; Huang, Hailiang; Pers, Tune H.; Agartz, Ingrid; Agerbo, Esben; Albus, Margot; Alexander, Madeline; Amin, Farooq; Bacanu, Silviu A.; Begemann, Martin; Belliveau, Richard A.; Bene, Judit; Bergen, Sarah E.; Bevilacqua, Elizabeth; Bigdeli, Tim B.; Black, Donald W.; Bruggeman, Richard; Buccola, Nancy G.; Buckner, Randy L.; Byerley, William; Cahn, Wiepke; Cai, Guiqing; Campion, Dominique; Cantor, Rita M.; Carr, Vaughan J.; Carrera, Noa; Catts, Stanley V.; Chambert, Kimberly D.; Chan, Raymond C.K.; Chen, Ronald Y.L.; Chen, Eric Y.H.; Cheng, Wei; Cheung, Eric F.C.; Chong, Siow Ann; Cloninger, C. Robert; Cohen, David; Cohen, Nadine; Cormican, Paul; Craddock, Nick; Crowley, James J.; Curtis, David; Davidson, Michael; Davis, Kenneth L.; Degenhardt, Franziska; Del Favero, Jurgen; DeLisi, Lynn E.; Demontis, Ditte; Dikeos, Dimitris; Dinan, Timothy; Djurovic, Srdjan; Donohoe, Gary; Drapeau, Elodie; Duan, Jubao; Dudbridge, Frank; Durmishi, Naser; Eichhammer, Peter; Eriksson, Johan; Escott-Price, Valentina; Essioux, Laurent; Fanous, Ayman H.; Farrell, Martilias S.; Frank, Josef; Franke, Lude; Freedman, Robert; Freimer, Nelson B.; Friedl, Marion; Friedman, Joseph I.; Fromer, Menachem; Genovese, Giulio; Georgieva, Lyudmila; Gershon, Elliot S.; Giegling, Ina; Giusti-Rodrguez, Paola; Godard, Stephanie; Goldstein, Jacqueline I.; Golimbet, Vera; Gopal, Srihari; Gratten, Jacob; Grove, Jakob; de Haan, Lieuwe; Hammer, Christian; Hamshere, Marian L.; Hansen, Mark; Hansen, Thomas; Haroutunian, Vahram; Hartmann, Annette M.; Henskens, Frans A.; Herms, Stefan; Hirschhorn, Joel N.; Hoffmann, Per; Hofman, Andrea; Hollegaard, Mads V.; Hougaard, David M.; Ikeda, Masashi; Joa, Inge; Julia, Antonio; Kahn, Rene S.; Kalaydjieva, Luba; Karachanak-Yankova, Sena; Karjalainen, Juha; Kavanagh, David; Keller, Matthew C.; Kelly, Brian J.; Kennedy, James L.; Khrunin, Andrey; Kim, Yunjung; Klovins, Janis; Knowles, James A.; Konte, Bettina; Kucinskas, Vaidutis; Kucinskiene, Zita Ausrele; Kuzelova-Ptackova, Hana; Kahler, Anna K.; Laurent, Claudine; Keong, Jimmy Lee Chee; Lee, S. Hong; Legge, Sophie E.; Lerer, Bernard; Li, Miaoxin; Li, Tao; Liang, Kung-Yee; Lieberman, Jeffrey; Limborska, Svetlana; Loughland, Carmel M.; Lubinski, Jan; Lnnqvist, Jouko; Macek, Milan; Magnusson, Patrik K.E.; Maher, Brion S.; Maier, Wolfgang; Mallet, Jacques; Marsal, Sara; Mattheisen, Manuel; Mattingsdal, Morten; McCarley, Robert W.; McDonald, Colm; McIntosh, Andrew M.; Meier, Sandra; Meijer, Carin J.; Melegh, Bela; Melle, Ingrid; Mesholam-Gately, Raquelle I.; Metspalu, Andres; Michie, Patricia T.; Milani, Lili; Milanova, Vihra; Mokrab, Younes; Morris, Derek W.; Mors, Ole; Mortensen, Preben B.; Murphy, Kieran C.; Murray, Robin M.; Myin-Germeys, Inez; Mller-Myhsok, Bertram; Nelis, Mari; Nenadic, Igor; Nertney, Deborah A.; Nestadt, Gerald; Nicodemus, Kristin K.; Nikitina-Zake, Liene; Nisenbaum, Laura; Nordin, Annelie; O’Callaghan, Eadbhard; O’Dushlaine, Colm; O’Neill, F. 
Anthony; Oh, Sang-Yun; Olincy, Ann; Olsen, Line; Van Os, Jim; Pantelis, Christos; Papadimitriou, George N.; Papiol, Sergi; Parkhomenko, Elena; Pato, Michele T.; Paunio, Tiina; Pejovic-Milovancevic, Milica; Perkins, Diana O.; Pietilinen, Olli; Pimm, Jonathan; Pocklington, Andrew J.; Powell, John; Price, Alkes; Pulver, Ann E.; Purcell, Shaun M.; Quested, Digby; Rasmussen, Henrik B.; Reichenberg, Abraham; Reimers, Mark A.; Richards, Alexander L.; Roffman, Joshua L.; Roussos, Panos; Ruderfer, Douglas M.; Salomaa, Veikko; Sanders, Alan R.; Schall, Ulrich; Schubert, Christian R.; Schulze, Thomas G.; Schwab, Sibylle G.; Scolnick, Edward M.; Scott, Rodney J.; Seidman, Larry J.; Shi, Jianxin; Sigurdsson, Engilbert; Silagadze, Teimuraz; Silverman, Jeremy M.; Sim, Kang; Slominsky, Petr; Smoller, Jordan W.; So, Hon-Cheong; Spencer, Chris C.A.; Stahl, Eli A.; Stefansson, Hreinn; Steinberg, Stacy; Stogmann, Elisabeth; Straub, Richard E.; Strengman, Eric; Strohmaier, Jana; Stroup, T. Scott; Subramaniam, Mythily; Suvisaari, Jaana; Svrakic, Dragan M.; Szatkiewicz, Jin P.; Sderman, Erik; Thirumalai, Srinivas; Toncheva, Draga; Tooney, Paul A.; Tosato, Sarah; Veijola, Juha; Waddington, John; Walsh, Dermot; Wang, Dai; Wang, Qiang; Webb, Bradley T.; Weiser, Mark; Wildenauer, Dieter B.; Williams, Nigel M.; Williams, Stephanie; Witt, Stephanie H.; Wolen, Aaron R.; Wong, Emily H.M.; Wormley, Brandon K.; Wu, Jing Qin; Xi, Hualin Simon; Zai, Clement C.; Zheng, Xuebin; Zimprich, Fritz; Wray, Naomi R.; Stefansson, Kari; Visscher, Peter M.; Adolfsson, Rolf; Andreassen, Ole A.; Blackwood, Douglas H.R.; Bramon, Elvira; Buxbaum, Joseph D.; Børglum, Anders D.; Cichon, Sven; Darvasi, Ariel; Domenici, Enrico; Ehrenreich, Hannelore; Esko, Tonu; Gejman, Pablo V.; Gill, Michael; Gurling, Hugh; Hultman, Christina M.; Iwata, Nakao; Jablensky, Assen V.; Jonsson, Erik G.; Kendler, Kenneth S.; Kirov, George; Knight, Jo; Lencz, Todd; Levinson, Douglas F.; Li, Qingqin S.; Liu, Jianjun; Malhotra, Anil K.; McCarroll, Steven A.; McQuillin, Andrew; Moran, Jennifer L.; Mortensen, Preben B.; Mowry, Bryan J.; Nthen, Markus M.; Ophoff, Roel A.; Owen, Michael J.; Palotie, Aarno; Pato, Carlos N.; Petryshen, Tracey L.; Posthuma, Danielle; Rietschel, Marcella; Riley, Brien P.; Rujescu, Dan; Sham, Pak C.; Sklar, Pamela; St. Clair, David; Weinberger, Daniel R.; Wendland, Jens R.; Werge, Thomas; Daly, Mark J.; Sullivan, Patrick F.; O’Donovan, Michael C.; Kraft, Peter; Hunter, David J.; Adank, Muriel; Ahsan, Habibul; Aittomäki, Kristiina; Baglietto, Laura; Berndt, Sonja; Blomquist, Carl; Canzian, Federico; Chang-Claude, Jenny; Chanock, Stephen J.; Crisponi, Laura; Czene, Kamila; Dahmen, Norbert; Silva, Isabel dos Santos; Easton, Douglas; Eliassen, A. 
Heather; Figueroa, Jonine; Fletcher, Olivia; Garcia-Closas, Montserrat; Gaudet, Mia M.; Gibson, Lorna; Haiman, Christopher A.; Hall, Per; Hazra, Aditi; Hein, Rebecca; Henderson, Brian E.; Hofman, Albert; Hopper, John L.; Irwanto, Astrid; Johansson, Mattias; Kaaks, Rudolf; Kibriya, Muhammad G.; Lichtner, Peter; Lindström, Sara; Liu, Jianjun; Lund, Eiliv; Makalic, Enes; Meindl, Alfons; Meijers-Heijboer, Hanne; Müller-Myhsok, Bertram; Muranen, Taru A.; Nevanlinna, Heli; Peeters, Petra H.; Peto, Julian; Prentice, Ross L.; Rahman, Nazneen; Sánchez, María José; Schmidt, Daniel F.; Schmutzler, Rita K.; Southey, Melissa C.; Tamimi, Rulla; Travis, Ruth; Turnbull, Clare; Uitterlinden, Andre G.; van der Luijt, Rob B.; Waisfisz, Quinten; Wang, Zhaoming; Whittemore, Alice S.; Yang, Rose; Zheng, Wei; Kathiresan, Sekar; Pato, Michele; Pato, Carlos; Tamimi, Rulla; Stahl, Eli; Zaitlen, Noah; Pasaniuc, Bogdan; Belbin, Gillian; Kenny, Eimear E.; Schierup, Mikkel H.; De Jager, Philip; Patsopoulos, Nikolaos A.; McCarroll, Steve; Daly, Mark; Purcell, Shaun; Chasman, Daniel; Neale, Benjamin; Goddard, Michael; Visscher, Peter M.; Kraft, Peter; Patterson, Nick; Price, Alkes L.

    2015-01-01

    Polygenic risk scores have shown great promise in predicting complex disease risk and will become more accurate as training sample sizes increase. The standard approach for calculating risk scores involves linkage disequilibrium (LD)-based marker pruning and applying a p value threshold to association statistics, but this discards information and can reduce predictive accuracy. We introduce LDpred, a method that infers the posterior mean effect size of each marker by using a prior on effect sizes and LD information from an external reference panel. Theory and simulations show that LDpred outperforms the approach of pruning followed by thresholding, particularly at large sample sizes. Accordingly, predicted R2 increased from 20.1% to 25.3% in a large schizophrenia dataset and from 9.8% to 12.0% in a large multiple sclerosis dataset. A similar relative improvement in accuracy was observed for three additional large disease datasets and for non-European schizophrenia samples. The advantage of LDpred over existing methods will grow as sample sizes increase. PMID:26430803

  17. Using Transponders on the Moon to Increase Accuracy of GPS

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin; Chui, Talso

    2008-01-01

    It has been proposed to place laser or radio transponders at suitably chosen locations on the Moon to increase the accuracy achievable using the Global Positioning System (GPS) or other satellite-based positioning system. The accuracy of GPS position measurements depends on the accuracy of determination of the ephemerides of the GPS satellites. These ephemerides are determined by means of ranging to and from Earth-based stations and consistency checks among the satellites. Unfortunately, ranging to and from Earth is subject to errors caused by atmospheric effects, notably including unpredictable variations in refraction. The proposal is based on exploitation of the fact that ranging between a GPS satellite and another object outside the atmosphere is not subject to error-inducing atmospheric effects. The Moon is such an object and is a convenient place for a ranging station. The ephemeris of the Moon is well known and, unlike a GPS satellite, the Moon is massive enough that its orbit is not measurably affected by the solar wind and solar radiation. According to the proposal, each GPS satellite would repeatedly send a short laser or radio pulse toward the Moon and the transponder(s) would respond by sending back a pulse and delay information. The GPS satellite could then compute its distance from the known position(s) of the transponder(s) on the Moon. Because the same hemisphere of the Moon faces the Earth continuously, any transponders placed there would remain continuously or nearly continuously accessible to GPS satellites, and so only a relatively small number of transponders would be needed to provide continuous coverage. Assuming that the transponders would depend on solar power, it would be desirable to use at least two transponders, placed at diametrically opposite points on the edges of the Moon disk as seen from Earth, so that all or most of the time, at least one of them would be in sunlight.
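
    The two-way ranging described here reduces to a time-of-flight calculation; a sketch with invented numbers (and no relativistic or geometric corrections) follows.

```python
# Two-way ranging arithmetic: time a pulse to the lunar transponder and back,
# subtract the transponder's reported delay, convert to distance. Numbers are invented.
C = 299_792_458.0                    # speed of light, m/s

round_trip_s = 2.562                 # measured round-trip time (invented)
transponder_delay_s = 150e-9         # internal delay reported by the transponder

one_way_s = (round_trip_s - transponder_delay_s) / 2.0
print(f"satellite-to-transponder range: {C * one_way_s / 1000.0:,.1f} km")

# Each 10 ns of uncorrected timing error is ~1.5 m of range error, which is why
# an all-vacuum path (no unpredictable atmospheric refraction) is attractive.
print(f"range error per 10 ns of timing error: {C * 10e-9 / 2:.2f} m")
```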

  18. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational OD produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multiplate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  19. Likable co-witnesses increase eyewitness accuracy and decrease suggestibility.

    PubMed

    Kieckhaefer, Jenna M; Wright, Daniel B

    2015-01-01

    This study examines the impact of likability on memory accuracy and memory conformity between two previously unacquainted individuals. After viewing a crime, eyewitnesses often talk to one another and may find each other likable or dislikable. One hundred twenty-seven undergraduate students arrived at the laboratory with an unknown confederate and were assigned to a likability condition (i.e., control, likable or dislikable). Together, the pair viewed pictures and was then tested on their memory for those pictures in such a way that the participant knew the confederate's response. Thus, the participant's response could be influenced both by his or her own memory and by the answers of the confederate. Participants in the likable condition were more accurate and less influenced by the confederate, compared with the other conditions. Results are discussed in relation to research that shows people are more influenced by friends than strangers and in relation to establishing positive rapport in forensic interviewing.

  20. Interspecies translation of disease networks increases robustness and predictive accuracy.

    PubMed

    Anvar, Seyed Yahya; Tucker, Allan; Vinciotti, Veronica; Venema, Andrea; van Ommen, Gert-Jan B; van der Maarel, Silvere M; Raz, Vered; 't Hoen, Peter A C

    2011-11-01

    Gene regulatory networks give important insights into the mechanisms underlying physiology and pathophysiology. The derivation of gene regulatory networks from high-throughput expression data via machine learning strategies is problematic as the reliability of these models is often compromised by limited and highly variable samples, heterogeneity in transcript isoforms, noise, and other artifacts. Here, we develop a novel algorithm, dubbed Dandelion, in which we construct and train intraspecies Bayesian networks that are translated and assessed on independent test sets from other species in a reiterative procedure. The interspecies disease networks are subjected to multi-layers of analysis and evaluation, leading to the identification of the most consistent relationships within the network structure. In this study, we demonstrate the performance of our algorithms on datasets from animal models of oculopharyngeal muscular dystrophy (OPMD) and patient materials. We show that the interspecies network of genes coding for the proteasome provide highly accurate predictions on gene expression levels and disease phenotype. Moreover, the cross-species translation increases the stability and robustness of these networks. Unlike existing modeling approaches, our algorithms do not require assumptions on notoriously difficult one-to-one mapping of protein orthologues or alternative transcripts and can deal with missing data. We show that the identified key components of the OPMD disease network can be confirmed in an unseen and independent disease model. This study presents a state-of-the-art strategy in constructing interspecies disease networks that provide crucial information on regulatory relationships among genes, leading to better understanding of the disease molecular mechanisms.

  1. Creativity in gifted identification: increasing accuracy and diversity.

    PubMed

    Luria, Sarah R; O'Brien, Rebecca L; Kaufman, James C

    2016-08-01

    Many federal definitions and popular theories of giftedness specify creativity as a core component. Nevertheless, states rely primarily on measures of intelligence for giftedness identification. As minority and culturally diverse students continue to be underrepresented in gifted programs, it is reasonable to ask if increasing the prominence of creativity in gifted identification may help increase balance and equity. In this paper, we explore both layperson and psychometric conceptions of bias and suggest that adding creativity measures to the identification process alleviates both perceptions and the presence of bias. We recognize, however, the logistic and measurement-related challenges to including creativity assessments.

  2. Are the surgeon's movements repeatable? An analysis of the feasibility and expediency of implementing support procedures guiding the surgical tools and increasing motion accuracy during the performance of stereotypical movements by the surgeon.

    PubMed

    Podsędkowski, Leszek Robert; Moll, Jacek; Moll, Maciej; Frącczak, Łukasz

    2014-03-01

    The developments in surgical robotics suggest that it will be possible to entrust surgical robots with a wider range of tasks. So far, it has not been possible to automate the surgery procedures related to soft tissue. Thus, the objective of the conducted studies was to confirm the hypothesis that the surgery telemanipulator can be equipped with certain routines supporting the surgeon in leading the surgical tools and increasing motion accuracy during stereotypical movements. As the first step in facilitating the surgery, an algorithm will be developed which will concurrently provide automation and allow the surgeon to maintain full control over the slave robot. The algorithm will assist the surgeon in performing typical movement sequences. This kind of support must, however, be preceded by determining the reference points for accurately defining the position of the stitched tissue. It is in relation to these points that the tool's trajectory will be created, along which the master manipulator will guide the surgeon's hand. The paper presents the first stage, concerning the selection of movements for which the support algorithm will be used. The work also contains an analysis of surgical movement repeatability. The suturing movement was investigated in detail by experimental research in order to determine motion repeatability and verify the position of the stitched tissue. Tool trajectory was determined by a motion capture stereovision system. The study has demonstrated that the suturing movement could be considered as repeatable; however, the trajectories performed by different surgeons exhibit some individual characteristics.

  3. Using improvement science methods to increase accuracy of surgical consents.

    PubMed

    Mercurio, Patti; Shaffer Ellis, Andrea; Schoettker, Pamela J; Stone, Raymond; Lenk, Mary Anne; Ryckman, Frederick C

    2014-07-01

    The surgical consent serves as a key link in preventing breakdowns in communication that could lead to wrong-patient, wrong-site, or wrong-procedure events. We conducted a quality improvement initiative at a large, urban pediatric academic medical center to reliably increase the percentage of informed consents for surgical and medical procedures with accurate safety data information at the first point of perioperative contact. Improvement activities focused on awareness, education, standardization, real-time feedback and failure identification, and transparency. A total of 54,082 consent forms from 13 surgical divisions were reviewed between May 18, 2011, and November 30, 2012. Between May 2011 and June 2012, the percentage of consents without safety errors increased from a median of 95.4% to 99.7%. Since July 2012, the median has decreased slightly but has remained stable at 99.4%. Our results suggest that effective safety checks allow discovery and prevention of errors.

  4. Analysis of initial orbit determination accuracy

    NASA Astrophysics Data System (ADS)

    Vananti, Alessandro; Schildknecht, Thomas

    The Astronomical Institute of the University of Bern (AIUB) is conducting several search campaigns for orbital debris. The debris objects are discovered during systematic survey observations. In general only a short observation arc, or tracklet, is available for most of these objects. From this discovery tracklet a first orbit determination is computed in order to be able to find the object again in subsequent follow-up observations. The additional observations are used in the orbit improvement process to obtain accurate orbits to be included in a catalogue. In this paper, the accuracy of the initial orbit determination is analyzed. This depends on a number of factors: tracklet length, number of observations, type of orbit, astrometric error, and observation geometry. The latter is characterized by both the position of the object along its orbit and the location of the observing station. Different positions involve different distances from the target object and a different observing angle with respect to its orbital plane and trajectory. The present analysis aims at optimizing the geometry of the discovery observations depending on the considered orbit.

  5. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limitations of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios for these factors are evaluated according to the normal operating conditions of power grid measurement. Based on the evaluation and simulation, the phase angle and frequency errors caused by each factor are calculated and discussed.
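
    One commonly cited instrument-side error source, time-synchronization error, maps directly into phase-angle error; the short sketch below uses illustrative values, not the paper's scenarios.

```python
# A timing error dt in the synchronization source appears directly as a
# phase-angle error of 360 * f * dt degrees. Values are illustrative.
f_nominal_hz = 60.0
for dt_us in (0.1, 1.0, 10.0):
    phase_err_deg = 360.0 * f_nominal_hz * dt_us * 1e-6
    print(f"timing error {dt_us:5.1f} us -> phase-angle error {phase_err_deg:.4f} deg")
```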

  6. Dust trajectory sensor: accuracy and data analysis.

    PubMed

    Xie, J; Sternovsky, Z; Grün, E; Auer, S; Duncan, N; Drake, K; Le, H; Horanyi, M; Srama, R

    2011-10-01

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Grün, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008)] and a method of triggering was developed [S. Auer, G. Lawrence, E. Grün, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010)]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1° in direction.
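
    A heavily simplified sketch of the timing idea behind such a sensor is shown below, estimating speed from charge-peak times at two electrode planes a known distance apart; the geometry and times are invented, and the real DTS analysis fits the full induced-charge waveforms to recover the velocity vector.

```python
# Simplified timing sketch: if the induced-charge signal peaks when the grain
# crosses two wire planes a known distance apart, speed follows from the time
# difference. Geometry and times are invented.
plane_separation_m = 0.10
t_plane1_s, t_plane2_s = 1.20e-5, 4.05e-5      # charge-peak times at the two planes

speed_km_s = plane_separation_m / (t_plane2_s - t_plane1_s) / 1000.0
print(f"estimated dust speed: {speed_km_s:.2f} km/s")
```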

  7. Dust trajectory sensor: Accuracy and data analysis

    SciTech Connect

    Xie, J.; Horanyi, M.; Sternovsky, Z.; Gruen, E.; Duncan, N.; Drake, K.; Le, H.; Auer, S.; Srama, R.

    2011-10-15

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Gruen, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008)] and a method of triggering was developed [S. Auer, G. Lawrence, E. Gruen, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010)]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1 deg. in direction.

  8. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis

    NASA Astrophysics Data System (ADS)

    Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen

    2016-05-01

    Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.
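
    A minimal sketch of patch-level scoring with slide-level triage, the general workflow behind this kind of analysis, is given below; the tiny network, the 0.5 threshold, and the random "slide" are invented placeholders, not the authors' model.

```python
import torch
import torch.nn as nn

# Patch-level scoring with slide-level triage. Architecture and data are invented.
class TinyPatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        return torch.sigmoid(self.classifier(self.features(x).flatten(1)))

model = TinyPatchCNN().eval()
patches = torch.rand(64, 3, 128, 128)          # 64 tiles from one (fake) slide
with torch.no_grad():
    tumor_prob = model(patches).squeeze(1)

# Triage rule: a slide is excluded from review only if no tile looks suspicious.
print("max tile probability:", float(tumor_prob.max()))
print("slide flagged for review:", bool(tumor_prob.max() > 0.5))
```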

  9. Accuracy of thick-target micro-PIXE analysis

    NASA Astrophysics Data System (ADS)

    Campbell, J. L.; Teesdale, W. J.; Wang, J.-X.

    1990-04-01

    The accuracy attainable in micro-PIXE analysis is assessed in terms of the X-ray production model and its assumptions, physical realities of the specimen, the necessary data base, and techniques of standardization. NTIS reference materials are analyzed to provide the experimental tests of accuracy.

  10. Assessing Predictive Accuracy in Discriminant Analysis.

    ERIC Educational Resources Information Center

    Huberty, Carl J.; And Others

    1987-01-01

    Three estimates of the probabilities of correct classification in predictive discriminant analysis were computed using mathematical formulas, resubstitution, and external analyses: (1) optimal hit rate; (2) actual hit rate; and (3) expected actual hit rate. Methods were compared using Monte Carlo sampling from two data sets. (Author/GDC)
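
    The contrast between an optimistic resubstitution hit rate and an externally estimated one can be sketched with off-the-shelf tools, as below; the data are synthetic and this is not the Monte Carlo design used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Resubstitution (classify the training data itself) vs an external estimate
# (leave-one-out) of the hit rate for a linear discriminant classifier.
X, y = make_classification(n_samples=120, n_features=5, n_informative=3,
                           random_state=0)
lda = LinearDiscriminantAnalysis()

resub_hit_rate = lda.fit(X, y).score(X, y)
loo_hit_rate = cross_val_score(lda, X, y, cv=LeaveOneOut()).mean()

print(f"resubstitution hit rate: {resub_hit_rate:.3f}")
print(f"leave-one-out hit rate : {loo_hit_rate:.3f}")
```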

  11. Noninvasive Glucose Monitoring: Increasing Accuracy by Combination of Multi-Technology and Multi-Sensors

    PubMed Central

    Harman-Boehm, Ilana; Gal, Avner; Raykhman, Alexander M.; Naidis, Eugene; Mayzel, Yulia

    2010-01-01

    Background: The main concern in noninvasive (NI) glucose monitoring methods is to achieve high accuracy results despite the fact that no direct blood or interstitial fluid glucose measurement is performed. An alternative approach to increase the accuracy of NI glucose measurement was previously suggested through a combination of three NI methods: ultrasonic, electromagnetic, and thermal. This paper provides further explanation about the nature of the implemented technologies, and multi-sensors are presented, as well as a detailed elaboration on the novel algorithm for data analysis. Methods: Clinical trials were performed on two different days. During the first day, calibration and six subsequent measurements were performed. During the second day, a “full day” session of about 10 hours took place. During the trial, type 1 and 2 diabetes patients were calibrated and evaluated with GlucoTrack® glucose monitor against HemoCue® (Glucose 201+). Results: A total of 91 subjects were tested during the trial period. Clarke error grid (CEG) analysis shows 96% of the readings (on both days 1 and 2) fall in the clinically accepted A and B zones, of which 60% are within zone A. The absolute relative differences (ARDs) yield mean and median values of 22.4% and 15.9%, respectively. The CEG for day 2 of the trial shows 96% of the points in zones A and B, with 57% of the values in zone A. Mean and median ARD values for the readings on day 2 are 23.4% and 16.5%, respectively. The intervals between day 1 (calibration and measurements) and day 2 (measurements only) were 1–22 days, with a median of 6 days. Conclusions: The presented methodology shows that increased accuracy was indeed achieved by combining multi-technology and multi-sensors. The approach of integration contributes to increasing the signal-to-noise ratio (glucose to other contributors). A combination of several technologies allows compensation of a possible aberration in one modality by the others, while multi
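
    The absolute relative difference (ARD) statistic reported in this record is straightforward to compute; a sketch with invented paired readings follows.

```python
import numpy as np

# Absolute relative difference (ARD) of a noninvasive reading against the
# reference device: |NI - reference| / reference. The paired readings are invented.
reference = np.array([110.0, 150.0, 95.0, 210.0, 130.0])    # mg/dL (reference)
ni_device = np.array([118.0, 138.0, 104.0, 180.0, 152.0])   # mg/dL (noninvasive)

ard = np.abs(ni_device - reference) / reference * 100.0
print(f"mean ARD  : {ard.mean():.1f}%")
print(f"median ARD: {np.median(ard):.1f}%")
```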

  12. Digital templating in total hip arthroplasty: Additional anteroposterior hip view increases the accuracy

    PubMed Central

    Stigler, Sophia K; Müller, Franz J; Pfaud, Sebastian; Zellner, Michael; Füchtmeier, Bernd

    2017-01-01

    AIM: To analyze whether planning total hip arthroplasty (THA) with an additional anteroposterior hip view increases the accuracy of preoperative planning in THA. METHODS: We conducted prospective digital planning in 100 consecutive patients: 50 of these procedures were planned using pelvic overview only (first group), and the other 50 procedures were planned using pelvic overview plus antero-posterior (a.p.) hip view (second group). The planning and the procedure for each patient were performed exclusively by the senior surgeon. Fifty procedures with retrospective analogue planning were used as the control group (group zero). After the procedure, the planning was compared with the eventually implanted components (cup and stem). For statistical analysis, the χ2 test was used for nominal variables and the t test was used for comparison of continuous variables. RESULTS: Preoperative planning with an additional a.p. hip view (second group) significantly increased the exact component correlation compared to pelvic overview only (first group) for both the acetabular cup and the femoral stem (76% cup and 66% stem vs 54% cup and 32% stem). When considering planning ± 1 size, the accuracy in the second group was 96% (48 of 50 patients) for the cup and 94% (47 of 50 patients) for the stem. In the analogue control group (group zero), an exact correlation was observed in only one-third of the cases. CONCLUSION: Digital THA planning performed by the operating surgeon and based on an additional a.p. hip view significantly increases the correlation between preoperative planning and eventual implant sizes. PMID:28144576
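
    The reported cup comparison (54% vs 76% exact matches, i.e., 27/50 vs 38/50) can be checked with the χ2 test named in the methods; in the sketch below the contingency table is reconstructed from those percentages.

```python
from scipy.stats import chi2_contingency

# Chi-square check of the reported cup comparison: 27/50 exact matches with
# pelvic overview only vs 38/50 with the additional a.p. hip view.
table = [[27, 50 - 27],   # overview only: exact, not exact
         [38, 50 - 38]]   # overview + a.p. hip view: exact, not exact
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```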

  13. Study on Increasing the Accuracy of Classification Based on Ant Colony algorithm

    NASA Astrophysics Data System (ADS)

    Yu, M.; Chen, D.-W.; Dai, C.-Y.; Li, Z.-L.

    2013-05-01

    GIS applications advance the ability to analyze remote sensing imagery, and the classification of remote sensing images is the primary information source for GIS in land use and cover change (LUCC) studies. Increasing classification accuracy is therefore an important topic in remote sensing research, and the two main routes to it are adding features and developing new classification methods. The ant colony algorithm, an agent-based method from nature-inspired computation, represents a uniform intelligent computation mode; its application to remote sensing image classification is a relatively new use of swarm intelligence. Studying the applicability of the ant colony algorithm with additional features, and exploring its advantages and performance, is therefore of considerable significance. The study takes the outskirts of Fuzhou, Fujian Province, an area with complicated land use, as the study area. A multi-source database was built integrating spectral information (TM1-5, TM7, NDVI, NDBI), topographic characteristics (DEM, slope, aspect), and textural information (mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, correlation). Classification rules based on the different characteristics are discovered from the samples by the ant colony algorithm, and a classification test is performed using these rules. The accuracies are compared with those of the traditional maximum likelihood method, the C4.5 algorithm, and rough set classification. The study showed that the accuracy of classification based on the ant colony algorithm is higher than that of the other methods. In addition, near-term land use and cover changes in Fuzhou are studied and mapped using remote sensing based on the ant colony algorithm.

  14. The Combination of Cyst Fluid Carcinoembryonic Antigen, Cytology and Viscosity Increases the Diagnostic Accuracy of Mucinous Pancreatic Cysts

    PubMed Central

    Oh, Se Hun; Lee, Jong Kyun; Lee, Kyu Taek; Lee, Kwang Hyuck; Woo, Young Sik; Noh, Dong Hyo

    2017-01-01

    Background/Aims The objective of this study was to investigate the value of cyst fluid carcinoembryonic antigen (CEA) in combination with cytology and viscosity for the differential diagnosis of pancreatic cysts. Methods We retrospectively reviewed our data for patients who underwent endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) and cyst fluid analysis. We investigated the sensitivity, specificity and accuracy of the combination of cyst fluid CEA, cytology and viscosity testing. Results A total of 177 patients underwent EUS-FNA and cyst fluid analysis. Of these, 48 subjects were histologically and clinically confirmed to have pancreatic cysts and were therefore included in the analysis. Receiver operating characteristic curve analysis demonstrated that the optimal cutoff value of cyst fluid CEA for differentiating mucinous versus nonmucinous cystic lesions was 48.6 ng/mL. The accuracy of cyst fluid CEA (39/48, 81.3%) was greater than the accuracy of cytology (23/45, 51.1%) or the string sign (33/47, 70.2%). Cyst fluid CEA in combination with cytology and string sign assessment exhibited the highest accuracy (45/48, 93.8%). Conclusions Cyst fluid CEA was the most useful single test for identifying mucinous pancreatic cysts. The addition of cytology and string sign assessment to cyst fluid CEA increased the overall accuracy for the diagnosis of mucinous pancreatic cysts. PMID:27609484
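
    The gain from combining tests can be reproduced with a simple decision rule, for example calling a cyst mucinous if any of the three tests is positive. The sketch below computes sensitivity, specificity and accuracy for such a combination; the "any positive" rule and the boolean arrays are illustrative assumptions, not the study's data or its exact combination scheme.

```python
import numpy as np

def diagnostic_metrics(predicted, truth):
    """Sensitivity, specificity and accuracy from boolean predictions vs. reference truth."""
    tp = np.sum(predicted & truth)
    tn = np.sum(~predicted & ~truth)
    fp = np.sum(predicted & ~truth)
    fn = np.sum(~predicted & truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Illustrative per-patient results (True = positive / mucinous).
rng = np.random.default_rng(0)
truth = rng.random(48) < 0.6                 # reference diagnosis
cea = truth & (rng.random(48) < 0.85) | (~truth & (rng.random(48) < 0.1))
cytology = truth & (rng.random(48) < 0.5)
string_sign = truth & (rng.random(48) < 0.7)

combined = cea | cytology | string_sign      # positive if any single test is positive
print(diagnostic_metrics(cea, truth))
print(diagnostic_metrics(combined, truth))
```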

  15. The effect of increased ambient lighting on detection accuracy in uniform and anatomical backgrounds

    NASA Astrophysics Data System (ADS)

    Pollard, Benjamin J.; Chawla, Amarpreet S.; Hashimoto, Noriyuki; Samei, Ehsan

    2008-03-01

    Under typical dark conditions found in reading rooms, a reader's pupils will contract and dilate as the visual focus intermittently shifts between the high luminance monitor and the darker background wall, resulting in increased visual fatigue and the degradation of diagnostic performance. A controlled increase of ambient lighting may, however, minimize these visual adjustments and potentially improve reader comfort and accuracy. This paper details results from two psychophysical studies designed to determine the effect of a controlled ambient lighting increase on observer detection of subtle objects and lesions viewed on a DICOM-calibrated medical-grade LCD. The first study examined the effect of increased ambient lighting on detection of subtle objects embedded within a uniform background, while the second study examined observer detection performance of subtle cancerous lesions in mammograms and chest radiographs. In both studies, observers were presented with images under a dark room condition (1 lux) and an increased room illuminance level (50 lux) for which the luminance level of the diffusely reflected light from the background wall was approximately equal to that of the displayed image. The display was calibrated to an effective luminance ratio of 409 for both lighting conditions. Observer detection performance under each room illuminance condition was then compared. Identification of subtle objects embedded within the uniform background improved from 59% to 67%, while detection time decreased slightly with additional illuminance. An ROC analysis of the anatomical image results revealed that observer AUC values remained constant while detection time decreased under increased illuminance. The results provide evidence that an ambient lighting increase may be possible without compromising diagnostic efficacy.

  16. [Design and accuracy analysis of upper slicing system of MSCT].

    PubMed

    Jiang, Rongjian

    2013-05-01

    The upper slicing system comprises the main components of the optical system in MSCT. This paper focuses on the design of the upper slicing system and its accuracy analysis, with the aim of improving imaging accuracy. The errors in slice thickness and ray center introduced by the bearings, the screw and the control system were analyzed and tested. The measured accumulated error is less than 1 μm, and the measured absolute error is less than 10 μm. Improving the accuracy of the upper slicing system contributes to the selection of appropriate treatment methods and to the success rate of treatment.

  17. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given, along with a listing of the computer program written to implement these techniques. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is presented, together with the resulting matrices from the mapping effort of the San Juan National Forest. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is given, as is a proposed method for determining the reliability of change detection between two maps of the same area produced at different times.
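
    Accuracy assessment of classified maps is typically summarized with an error (confusion) matrix, overall accuracy and the Kappa statistic. Below is a minimal sketch of those discrete multivariate measures, assuming matrix rows are map classes and columns are reference classes; the example matrix is illustrative, not from the San Juan National Forest mapping effort.

```python
import numpy as np

def overall_accuracy(cm):
    """Fraction of correctly classified samples (diagonal sum over grand total)."""
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Cohen's Kappa for an error matrix (rows: map labels, columns: reference labels)."""
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative 3-class error matrix from an accuracy-assessment sample.
cm = np.array([[50,  5,  2],
               [ 4, 40,  6],
               [ 1,  3, 39]])
print(overall_accuracy(cm), kappa(cm))
```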

  18. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis

    PubMed Central

    Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen - van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen

    2016-01-01

    Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging. PMID:27212078

  19. The National Shipbuilding Research Program. Process Analysis Via Accuracy Control

    DTIC Science & Technology

    1985-08-01

    Process Analysis Via Accuracy Control. U.S. Department of Transportation, Maritime Administration, in cooperation with Todd Pacific Shipyards. Report date: August 1985. Accuracy control (A/C) findings may be addressed by improving lighting, retraining workers, or other such approaches; this product of A/C is called process or method analysis. Process analysis involves a

  20. Integrating conventional classifiers with a GIS expert system to increase the accuracy of invasive species mapping

    NASA Astrophysics Data System (ADS)

    Masocha, Mhosisi; Skidmore, Andrew K.

    2011-06-01

    Mapping the cover of invasive species using remotely sensed data alone is challenging, because many invaders occur as mid-level canopy species or as subtle understorey species and therefore contribute little to the spectral signatures captured by passive remote sensing devices. In this study, two common non-parametric classifiers namely, the neural network and support vector machine were used to map four cover classes of the invasive shrub Lantana camara in a protected game reserve and the adjacent area under communal land management in Zimbabwe. These classifiers were each combined with a geographic information system (GIS) expert system, in order to test whether the new hybrid classifiers yielded significantly more accurate invasive species cover maps than the single classifiers. The neural network, when used on its own, mapped the cover of L. camara with an overall accuracy of 71% and a Kappa index of agreement of 0.61. When the neural network was combined with an expert system, the overall accuracy and Kappa index of agreement significantly increased to 83% and 0.77, respectively. Similarly, the support vector machine achieved an overall accuracy of 64% with a Kappa index of agreement of 0.52, whereas the hybrid support vector machine and expert system classifier achieved a significantly higher overall accuracy of 76% and a Kappa index of agreement of 0.67. These results suggest that integrating conventional image classifiers with an expert system increases the accuracy of invasive species mapping.
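
    As a schematic sketch of the hybrid idea only (not the classifiers or rules used in the study), the snippet below applies simple GIS-derived expert rules to override a classifier's output wherever ancillary layers make the predicted cover class implausible; the rule thresholds, layer names and class codes are entirely hypothetical.

```python
import numpy as np

def apply_expert_rules(class_map, elevation, dist_to_river, lantana_class=3, other_class=0):
    """Override classifier output where ancillary GIS layers make the class implausible.

    Hypothetical rule: the invasive shrub is not expected above 1500 m elevation
    or more than 2000 m from drainage lines.
    """
    refined = class_map.copy()
    implausible = (class_map == lantana_class) & ((elevation > 1500) | (dist_to_river > 2000))
    refined[implausible] = other_class
    return refined

# Illustrative inputs: classifier output plus two GIS layers on the same grid.
rng = np.random.default_rng(1)
class_map = rng.integers(0, 4, (200, 200))
elevation = rng.uniform(300, 2000, (200, 200))
dist_to_river = rng.uniform(0, 5000, (200, 200))

refined = apply_expert_rules(class_map, elevation, dist_to_river)
print("pixels changed:", int((refined != class_map).sum()))
```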

  1. Smoothness-Increasing Accuracy-Conserving (SIAC) Filters for Post-Processing Unstructured Discontinuous Galerkin Fields

    DTIC Science & Technology

    2015-08-27

    The goals of this effort are to define, investigate, and address the technical obstacles inherent in visualization of data derived from high-order discontinuous Galerkin methods, and to post-process such data to levels of smoothness so that commonly used visualization tools can be used appropriately, accurately, and efficiently. Keywords: smoothness-increasing accuracy-conserving (SIAC) filters, visualization, high-order discontinuous Galerkin methods.

  2. Using Self-Monitoring to Increase Attending to Task and Academic Accuracy in Children with Autism

    ERIC Educational Resources Information Center

    Holifield, Cassandra; Goodman, Janet; Hazelkorn, Michael; Heflin, L. Juane

    2010-01-01

    This study was conducted to investigate the effectiveness of a self-monitoring procedure on increasing attending to task and academic accuracy in two elementary students with autism in their self-contained classroom. A multiple baseline across participants in two academic subject areas was used to assess the effectiveness of the intervention. Both…

  3. Bilingual Language Assessment: A Meta-Analysis of Diagnostic Accuracy

    ERIC Educational Resources Information Center

    Dollaghan, Christine A.; Horner, Elizabeth A.

    2011-01-01

    Purpose: To describe quality indicators for appraising studies of diagnostic accuracy and to report a meta-analysis of measures for diagnosing language impairment (LI) in bilingual Spanish-English U.S. children. Method: The authors searched electronically and by hand to locate peer-reviewed English-language publications meeting inclusion criteria;…

  4. Neutron electric dipole moment and possibilities of increasing accuracy of experiments

    SciTech Connect

    Serebrov, A. P. Kolomenskiy, E. A.; Pirozhkov, A. N.; Krasnoshchekova, I. A.; Vasiliev, A. V.; Polyushkin, A. O.; Lasakov, M. S.; Murashkin, A. N.; Solovey, V. A.; Fomin, A. K.; Shoka, I. V.; Zherebtsov, O. M.; Aleksandrov, E. B.; Dmitriev, S. P.; Dovator, N. A.; Geltenbort, P.; Ivanov, S. N.; Zimmer, O.

    2016-01-15

    The paper reports the results of an experiment searching for the neutron electric dipole moment (EDM), performed at the ILL reactor (Grenoble, France). The double-chamber magnetic resonance spectrometer of the Petersburg Nuclear Physics Institute (PNPI), with prolonged holding of ultracold neutrons, was used. Sources of possible systematic errors are analyzed, and their influence on the measurement results is estimated. The ways and prospects of increasing the accuracy of the experiment are discussed.

  5. Method for improving accuracy in full evaporation headspace analysis.

    PubMed

    Xie, Wei-Qi; Chai, Xin-Sheng

    2017-03-21

    We report a new headspace analytical method in which multiple headspace extraction is incorporated with the full evaporation technique. The pressure uncertainty caused by changes in the solid content of the samples has a great impact on measurement accuracy in conventional full evaporation headspace analysis. The results (using an ethanol solution as the model sample) showed that the present technique effectively minimizes this problem. The proposed full evaporation multiple headspace extraction technique is also automated and practical, and could greatly broaden the applications of full-evaporation-based headspace analysis.

  6. Mesoscale modelling methodology based on nudging to increase accuracy in WRA

    NASA Astrophysics Data System (ADS)

    Mylonas Dirdiris, Markos; Barbouchi, Sami; Hermmann, Hugo

    2016-04-01

    Offshore wind energy has recently become a rapidly growing renewable energy resource worldwide, with several offshore wind projects in different planning stages. Despite this, a better understanding of the atmospheric interactions within the marine atmospheric boundary layer (MABL) is needed in order to contribute to better energy capture and cost-effectiveness. Observational nudging has recently attracted attention as an innovative method to increase the accuracy of wind flow modelling. This particular study focuses on the observational nudging capability of the Weather Research and Forecasting (WRF) model and on ways the uncertainty of wind flow modelling in wind resource assessment (WRA) can be reduced. Finally, an alternative way to calculate the model uncertainty is pinpointed. Approach: the WRF mesoscale model will be nudged with observations from FINO3 at three different heights. The model simulations with and without observational nudging will be verified against FINO1 measurement data at 100 m. In order to evaluate the observational nudging capability of WRF, two ways to derive the model uncertainty will be described: one global uncertainty, and an uncertainty per wind speed bin derived using the recommended practice of the IEA in order to link the model uncertainty to a wind energy production uncertainty. This study assesses the observational data assimilation capability of the WRF model within the same vertical gridded atmospheric column. The principal aim is to investigate whether having observations up to one height can improve the simulation at a higher vertical level. The study uses objective analysis with a Cressman-scheme interpolation to interpolate the observations in time and in space (keeping the horizontal component constant) to the gridded analysis. The WRF model core then incorporates the interpolated variables into the "first guess" to develop a nudged simulation. Consequently, WRF with and without
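
    The Cressman scheme mentioned above weights each observation by w = (R² − r²)/(R² + r²) inside an influence radius R. The sketch below shows that weighting applied in time only, which is a simplified, hedged illustration of the objective-analysis step; the observation times, values and radius are hypothetical, not FINO3 data or the study's settings.

```python
import numpy as np

def cressman_weights(r, radius):
    """Cressman weight (R^2 - r^2) / (R^2 + r^2) for distances r within the influence radius."""
    w = (radius**2 - r**2) / (radius**2 + r**2)
    return np.where(r < radius, w, 0.0)

def cressman_interpolate(target_times, obs_times, obs_values, radius):
    """Weighted average of observations around each target (analysis) time."""
    analysis = np.full(target_times.shape, np.nan, dtype=float)
    for i, t in enumerate(target_times):
        w = cressman_weights(np.abs(obs_times - t), radius)
        if w.sum() > 0:
            analysis[i] = np.sum(w * obs_values) / w.sum()
    return analysis

# Hypothetical 10-minute wind-speed observations interpolated to hourly analysis times.
rng = np.random.default_rng(2)
obs_times = np.arange(0, 24, 1 / 6.0)                     # hours
obs_values = 8 + 2 * np.sin(obs_times / 3) + rng.normal(0, 0.3, obs_times.size)
model_times = np.arange(0, 24, 1.0)
print(cressman_interpolate(model_times, obs_times, obs_values, radius=1.0))
```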

  7. Increased genomic prediction accuracy in wheat breeding through spatial adjustment of field trial data.

    PubMed

    Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav

    2013-12-09

    In crop breeding, interest in predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation; therefore, different models were tested to correct the trends observed in the field. A mixed model using moving means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are effective for predicting traits, and that correction of spatial variation is a crucial ingredient for increasing prediction accuracy in genomic selection models.
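
    One simple way to picture a moving-means spatial adjustment is to average the phenotypes of neighbouring plots and remove that local trend before fitting the genomic model. The sketch below is an illustration under that assumption, not the mixed model actually fitted in the study; the window size and the additive correction are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def moving_mean_adjust(yield_grid, window=5):
    """Adjust plot phenotypes by the local (moving-window) field mean.

    The moving mean captures smooth spatial trends (soil, water gradients);
    subtracting it and restoring the global mean removes that trend.
    """
    local_mean = uniform_filter(yield_grid, size=window, mode="nearest")
    return yield_grid - local_mean + yield_grid.mean()

# Illustrative 20 x 20 grid of plot yields with a smooth spatial gradient plus noise.
rng = np.random.default_rng(3)
rows, cols = np.meshgrid(np.arange(20), np.arange(20), indexing="ij")
yields = 5.0 + 0.1 * rows + 0.05 * cols + rng.normal(0, 0.3, (20, 20))

adjusted = moving_mean_adjust(yields)
print(yields.std(), adjusted.std())   # the adjustment removes most of the spatial variance
```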

  8. Increased Throwing Accuracy Improves Children's Catching Performance in a Ball-Catching Task from the Movement Assessment Battery (MABC-2).

    PubMed

    Dirksen, Tim; De Lussanet, Marc H E; Zentgraf, Karen; Slupinski, Lena; Wagner, Heiko

    2016-01-01

    The Movement Assessment Battery for Children (MABC-2) is a functional test for identifying deficits in the motor performance of children. The test contains a ball-catching task that requires the children to catch a self-thrown ball with one hand. As the task can be executed with a variety of different catching strategies, it is assumed that the task success can also vary considerably. Even though it is not clear whether the performance merely depends on the catching skills or also to some extent on the throwing skills, the MABC-2 takes into account only the movement outcome. Therefore, the purpose of the current study was to examine (1) to what extent the throwing accuracy has an effect on the children's catching performance and (2) to what extent the throwing accuracy influences their choice of catching strategy. In line with the test manual, the children's catching performance was quantified on the basis of the number of correctly caught balls. The throwing accuracy and the catching strategy were quantified by applying a kinematic analysis to the ball's trajectory and the hand movements. Based on linear regression analyses, we then investigated the relation between throwing accuracy, catching performance and catching strategy. The results show that an increased throwing accuracy is significantly correlated with an increased catching performance. Moreover, a higher throwing accuracy is significantly correlated with a longer duration of the hand on the ball's parabola, which indicates that throwing the ball more accurately could enable the children to effectively reduce the requirements on temporal precision. As the children's catching performance and their choice of catching strategy in the ball-catching task of the MABC-2 are substantially determined by their throwing accuracy, the test evaluation should not be based on the movement outcome alone, but should also take into account the children's throwing performance. Our findings could be of particular value for the

  10. High frequency rTMS over the left parietal lobule increases non-word reading accuracy.

    PubMed

    Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano

    2012-09-01

    Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe (IPL), which operates in phonological computation. This study aimed to clarify the specific contribution of the IPL and STG to reading aloud and to evaluate the possibility of modulating healthy participants' task performance using high frequency repetitive TMS (hf-rTMS). The main finding is that hf-rTMS over the left IPL improves non-word reading accuracy (fewer errors), whereas hf-rTMS over the right STG selectively decreases text-reading accuracy (more errors). These results confirm the prevalent role of the left IPL in grapheme-to-phoneme conversion. The non-word reading improvement after left-IPL stimulation provides a direct link between left IPL activation and advantages in sublexical procedures, which are mainly involved in non-word reading. The results also indicate the specific involvement of the STG in reading morphologically complex words and in processing the representation of the text. The text reading impairment after stimulation of the right STG can be interpreted in light of an inhibitory influence on the homologous area. In sum, the data document that hf-rTMS is effective in modulating the reading accuracy of expert readers and that the modulation is task related and site specific. These findings suggest new perspectives for the treatment of reading disorders.

  11. Increasing accuracy and precision of digital image correlation through pattern optimization

    NASA Astrophysics Data System (ADS)

    Bomarito, G. F.; Hochhalter, J. D.; Ruggles, T. J.; Cannon, A. H.

    2017-04-01

    The accuracy and precision of digital image correlation (DIC) is based on three primary components: image acquisition, image analysis, and the subject of the image. Focus on the third component, the image subject, has been relatively limited and primarily concerned with comparing pseudo-random surface patterns. In the current work, a strategy is proposed for the creation of optimal DIC patterns. In this strategy, a pattern quality metric is developed as a combination of quality metrics from the literature rather than optimization based on any single one of them. In this way, optimization produces a pattern which balances the benefits of multiple quality metrics. Specifically, sum of square of subset intensity gradients (SSSIG) was found to be the metric most strongly correlated to DIC accuracy and thus is the main component of the newly proposed pattern quality metric. A term related to the secondary auto-correlation peak height is also part of the proposed quality metric which effectively acts as a constraint upon SSSIG ensuring that a regular (e.g., checkerboard-type) pattern is not achieved. The combined pattern quality metric is used to generate a pattern that was on average 11.6% more accurate than a randomly generated pattern in a suite of numerical experiments. Furthermore, physical experiments were performed which confirm that there is indeed improvement of a similar magnitude in DIC measurements for the optimized pattern compared to a random pattern.
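
    The SSSIG metric referenced above is the sum of the squared intensity gradients over a subset; larger values generally correlate with lower DIC matching error. Below is a minimal sketch of that computation, assuming a grayscale speckle image stored as a NumPy array and a square subset; the random "pattern" is a placeholder, not an optimized or experimental pattern.

```python
import numpy as np

def sssig(image, x0, y0, subset_size):
    """Sum of square of subset intensity gradients (SSSIG) for a square subset.

    Gradients are taken with central differences; a larger SSSIG usually means
    the subset can be matched more reliably in digital image correlation.
    """
    half = subset_size // 2
    subset = image[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1].astype(np.float64)
    gy, gx = np.gradient(subset)
    return float(np.sum(gx**2) + np.sum(gy**2))

# Illustrative pattern: random grayscale values standing in for a speckle image.
rng = np.random.default_rng(4)
pattern = rng.random((256, 256))
print(sssig(pattern, x0=128, y0=128, subset_size=31))
```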

  12. Low-cost commodity depth sensor comparison and accuracy analysis

    NASA Astrophysics Data System (ADS)

    Breuer, Timo; Bodensteiner, Christoph; Arens, Michael

    2014-10-01

    Low cost depth sensors have been a huge success in the field of computer vision and robotics, providing depth images even in untextured environments. The same characteristic applies to the Kinect V2, a time-of-flight camera with high lateral resolution. In order to assess advantages of the new sensor over its predecessor for standard applications, we provide an analysis of measurement noise, accuracy and other error sources with the Kinect V2. We examined the raw sensor data by using an open source driver. Further insights on the sensor design and examples of processing techniques are given to completely exploit the unrestricted access to the device.

  13. Accuracy analysis of pointing control system of solar power station

    NASA Technical Reports Server (NTRS)

    Hung, J. C.; Peebles, P. Z., Jr.

    1978-01-01

    The first-phase effort concentrated on defining the minimum basic functions that the retrodirective array must perform, identifying circuits that are capable of satisfying the basic functions, and looking at some of the error sources in the system and how they affect accuracy. The initial effort also examined three methods for generating torques for mechanical antenna control, performed a rough analysis of the flexible body characteristics of the solar collector, and defined a control system configuration for mechanical pointing control of the array.

  14. The effectiveness of FE model for increasing accuracy in stretch forming simulation of aircraft skin panels

    NASA Astrophysics Data System (ADS)

    Kono, A.; Yamada, T.; Takahashi, S.

    2013-12-01

    In the aerospace industry, stretch forming has been used to form the outer surface parts of aircraft, which are called skin panels. Empirical methods have been used to correct the springback by measuring the formed panels. However, such methods are impractical and cost prohibitive. Therefore, there is a need to develop simulation technologies to predict the springback caused by stretch forming [1]. This paper reports the results of a study on the influences of the modeling conditions and parameters on the accuracy of an FE analysis simulating the stretch forming of aircraft skin panels. The effects of the mesh aspect ratio, convergence criteria, and integration points are investigated, and better simulation conditions and parameters are proposed.

  15. Analysis of deformable image registration accuracy using computational modeling

    SciTech Connect

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations have been quantitatively evaluated using numerical phantoms. The results show that parameter
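
    The benchmarking described above amounts to comparing a registration-derived displacement field against the finite-element reference field voxel by voxel. Below is a minimal sketch of that error computation, assuming both fields are NumPy arrays of shape (nz, ny, nx, 3) in millimetres with an optional organ mask; the random fields are placeholders, not the phantoms used in the study.

```python
import numpy as np

def registration_error(dvf_test, dvf_reference, mask=None):
    """Mean and maximum displacement error (mm) between two displacement fields.

    Both fields have shape (nz, ny, nx, 3); the optional boolean mask restricts
    the statistics to a region of interest (e.g., the lung).
    """
    error = np.linalg.norm(dvf_test - dvf_reference, axis=-1)
    if mask is not None:
        error = error[mask]
    return float(error.mean()), float(error.max())

# Illustrative fields: a reference deformation and a slightly perturbed "registration" result.
rng = np.random.default_rng(5)
reference = rng.normal(0, 2.0, (40, 64, 64, 3))
test = reference + rng.normal(0, 0.5, reference.shape)
lung_mask = np.zeros((40, 64, 64), dtype=bool)
lung_mask[10:30, 16:48, 16:48] = True

print(registration_error(test, reference, lung_mask))
```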

  16. Molecular Isotopic Distribution Analysis (MIDAs) with Adjustable Mass Accuracy

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo

    2014-01-01

    In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.
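
    A coarse-grained (nominal-mass) isotopic distribution of the kind computed by polynomial methods can be obtained by repeatedly convolving per-element isotope abundance vectors. The sketch below illustrates that convolution idea only, not MIDAs's actual implementation; the abundance values are approximate natural abundances and the truncation depth is an arbitrary choice.

```python
import numpy as np

# Nominal-mass isotope abundance vectors, indexed by extra neutrons (approximate natural abundances).
ELEMENTS = {
    "C": [0.9893, 0.0107],
    "H": [0.99988, 0.00012],
    "N": [0.99636, 0.00364],
    "O": [0.99757, 0.00038, 0.00205],
}

def isotopic_distribution(formula, keep=8):
    """Coarse-grained isotopic distribution by repeated convolution of element abundances."""
    dist = np.array([1.0])
    for element, count in formula.items():
        for _ in range(count):
            dist = np.convolve(dist, ELEMENTS[element])
        dist = dist[:keep + 1]          # drop the negligible high-mass tail
    return dist / dist.sum()

# Example: glucose, C6H12O6 (probabilities for M+0, M+1, M+2, ...).
print(isotopic_distribution({"C": 6, "H": 12, "O": 6}))
```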

  17. Convective Weather Forecast Accuracy Analysis at Center and Sector Levels

    NASA Technical Reports Server (NTRS)

    Wang, Yao; Sridhar, Banavar

    2010-01-01

    This paper presents a detailed convective forecast accuracy analysis at the center and sector levels. The study aims to provide more meaningful forecast verification measures to the aviation community, as well as to obtain useful information leading to improvements in weather translation capacity models. In general, the vast majority of forecast verification efforts over past decades have been based on the calculation of traditional standard verification measure scores over forecast and observation data analyses mapped onto grids. These verification measures, based on binary classification, have been applied in quality assurance of weather forecast products at the national level for many years. Our research focuses on forecasts at the center and sector levels. We first calculate the standard forecast verification measure scores for en-route air traffic centers and sectors, followed by forecast validation analysis and related verification measures for weather intensities and locations at the center and sector levels. An approach to improve the prediction of sector weather coverage by multiple sector forecasts is then developed. The weather severity assessment was carried out using the correlations between forecast and actual weather observation airspace coverage. The weather forecast accuracy on horizontal location was assessed by examining the forecast errors. The improvement in prediction of weather coverage was determined by the correlation between actual sector weather coverage and the prediction. The analysis used observed and forecasted Convective Weather Avoidance Model (CWAM) data collected from June to September 2007. CWAM zero-minute forecast data with aircraft avoidance probabilities of 60% and 80% are used as the actual weather observation. All forecast measurements are based on 30-minute, 60-minute, 90-minute, and 120-minute forecasts with the same avoidance probabilities. The forecast accuracy analysis for times under one hour showed that the errors in
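
    The standard verification scores mentioned above derive from a 2 × 2 contingency table of forecast versus observed weather at each grid cell or sector. Below is a minimal sketch of the usual measures, assuming boolean forecast and observation grids; the random grids are illustrative, not CWAM data.

```python
import numpy as np

def verification_scores(forecast, observed):
    """Probability of detection, false alarm ratio and critical success index."""
    hits = np.sum(forecast & observed)
    misses = np.sum(~forecast & observed)
    false_alarms = np.sum(forecast & ~observed)
    return {
        "POD": hits / (hits + misses),
        "FAR": false_alarms / (hits + false_alarms),
        "CSI": hits / (hits + misses + false_alarms),
    }

# Illustrative grids: observed convection and a forecast shifted slightly east plus extra cells.
rng = np.random.default_rng(6)
observed = rng.random((100, 100)) > 0.9
forecast = np.roll(observed, shift=2, axis=1) | (rng.random((100, 100)) > 0.97)

print(verification_scores(forecast, observed))
```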

  18. There's a Bug in Your Ear!: Using Technology to Increase the Accuracy of DTT Implementation

    ERIC Educational Resources Information Center

    McKinney, Tracy; Vasquez, Eleazar, III.

    2014-01-01

    Many professionals have successfully implemented discrete trial teaching in the past. However, there have not been extensive studies examining the accuracy of discrete trial teaching implementation. This study investigated the use of Bug in Ear feedback on the accuracy of discrete trial teaching implementation among two pre-service teachers…

  19. Investigation of smoothness-increasing accuracy-conserving filters for improving streamline integration through discontinuous fields.

    PubMed

    Steffen, Michael; Curtis, Sean; Kirby, Robert M; Ryan, Jennifer K

    2008-01-01

    Streamline integration of fields produced by computational fluid mechanics simulations is a commonly used tool for the investigation and analysis of fluid flow phenomena. Integration is often accomplished through the application of ordinary differential equation (ODE) integrators--integrators whose error characteristics are predicated on the smoothness of the field through which the streamline is being integrated--smoothness which is not available at the inter-element level of finite volume and finite element data. Adaptive error control techniques are often used to ameliorate the challenge posed by inter-element discontinuities. As the root of the difficulties is the discontinuous nature of the data, we present a complementary approach of applying smoothness-enhancing accuracy-conserving filters to the data prior to streamline integration. We investigate whether such an approach applied to uniform quadrilateral discontinuous Galerkin (high-order finite volume) data can be used to augment current adaptive error control approaches. We discuss and demonstrate through numerical example the computational trade-offs exhibited when one applies such a strategy.

  20. Accuracy Analysis on Large Blocks of High Resolution Images

    NASA Technical Reports Server (NTRS)

    Passini, Richardo M.

    2007-01-01

    Although high-frequency attitude effects are removed at the time of basic image generation, low-frequency attitude (yaw) effects are still present in the form of affinity/angular affinity. They are effectively removed by additional parameters. Bundle block adjustment based on properly weighted ephemeris/attitude quaternions (BBABEQ) is not enough to remove the systematic effects. Moreover, due to the narrow FOV of the HRSI, position and attitude are highly correlated, making it almost impossible to separate and remove their systematic effects without extending the geometric model (self-calibration). The systematic effects become evident in the increase of accuracy (in terms of RMSE at GCPs) for looser and more relaxed ground control, at the expense of large and strong block deformation with large residuals at check points. Systematic errors are then freely distributed and their effects propagate over the whole block.

  1. Oxytocin increases bias, but not accuracy, in face recognition line-ups.

    PubMed

    Bate, Sarah; Bennetts, Rachel; Parris, Benjamin A; Bindemann, Markus; Udale, Robert; Bussunt, Amanda

    2015-07-01

    Previous work indicates that intranasal inhalation of oxytocin improves face recognition skills, raising the possibility that it may be used in security settings. However, it is unclear whether oxytocin directly acts upon the core face-processing system itself or indirectly improves face recognition via affective or social salience mechanisms. In a double-blind procedure, 60 participants received either an oxytocin or placebo nasal spray before completing the One-in-Ten task-a standardized test of unfamiliar face recognition containing target-present and target-absent line-ups. Participants in the oxytocin condition outperformed those in the placebo condition on target-present trials, yet were more likely to make false-positive errors on target-absent trials. Signal detection analyses indicated that oxytocin induced a more liberal response bias, rather than increasing accuracy per se. These findings support a social salience account of the effects of oxytocin on face recognition and indicate that oxytocin may impede face recognition in certain scenarios.
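
    The signal-detection analysis reported above separates sensitivity (d′) from response bias (criterion c), both computed from the hit rate and false-alarm rate. Below is a minimal sketch of that calculation; the rates are illustrative placeholders, not the study's values.

```python
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection sensitivity d' = z(H) - z(F) and criterion c = -(z(H) + z(F)) / 2."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -(z_hit + z_fa) / 2

# Illustrative rates: a group with more hits but also more false positives shows a more
# liberal criterion (more negative c) rather than higher sensitivity.
print("group A:", dprime_and_criterion(hit_rate=0.60, fa_rate=0.20))
print("group B:", dprime_and_criterion(hit_rate=0.72, fa_rate=0.35))
```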

  2. Geographic stacking: Decision fusion to increase global land cover map accuracy

    NASA Astrophysics Data System (ADS)

    Clinton, Nicholas; Yu, Le; Gong, Peng

    2015-05-01

    The combination of multiple classifier outputs is an established sub-discipline in data mining, referred to as "stacking," "ensemble classification," or "meta-learning." Here we describe how stacking of geographically allocated classifications can create a map composite of higher accuracy than any of the individual classifiers. We used both voting algorithms and trainable classifiers with a set of validation data to combine individual land cover maps. We describe the generality of this setup in terms of existing algorithms and accuracy assessment procedures. This method has the advantage of not requiring posterior probabilities or levels of support for predicted class labels. We demonstrate the technique using Landsat-based, 30-meter land cover maps, the highest-resolution, globally available product of this kind. We used globally distributed validation samples to composite the maps and compute accuracy. We show that geographic stacking can improve individual map accuracy by up to 6.6%. The voting methods can also achieve higher accuracy than the best of the input classifications. Accuracies from different classifiers, input data, and output types are compared. The results are illustrated on a Landsat scene in California, USA. The compositing technique described here has broad applicability in remote sensing based map production and geographic classification.
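
    The simplest voting rule for geographic stacking is a per-pixel plurality vote over co-registered input maps. Below is a minimal sketch of that rule; the three random "maps" stand in for co-registered land-cover classifications and are not the Landsat products described above.

```python
import numpy as np

def majority_vote(maps, n_classes):
    """Per-pixel plurality vote over a stack of co-registered classification maps."""
    stack = np.stack(maps)                       # shape: (n_maps, rows, cols)
    counts = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)                 # ties resolved in favour of the lowest class id

# Illustrative co-registered land-cover maps from three different classifiers.
rng = np.random.default_rng(7)
maps = [rng.integers(0, 5, (300, 300)) for _ in range(3)]
composite = majority_vote(maps, n_classes=5)
print(composite.shape, composite.dtype)
```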

  3. Nationwide forestry applications program. Analysis of forest classification accuracy

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Mead, R. A.; Oderwald, R. G.; Heinen, J. (Principal Investigator)

    1981-01-01

    The development of LANDSAT classification accuracy assessment techniques and of a computerized system for assessing wildlife habitat from land cover maps is considered. A literature review on accuracy assessment techniques and an explanation of the techniques developed under both projects are included, along with listings of the computer programs. The presentations and discussions at the National Working Conference on LANDSAT Classification Accuracy are summarized. Two symposium papers published on the results of this project are appended.

  4. Unconscious Reward Cues Increase Invested Effort, but Do Not Change Speed-Accuracy Tradeoffs

    ERIC Educational Resources Information Center

    Bijleveld, Erik; Custers, Ruud; Aarts, Henk

    2010-01-01

    While both conscious and unconscious reward cues enhance effort to work on a task, previous research also suggests that conscious rewards may additionally affect speed-accuracy tradeoffs. Based on this idea, two experiments explored whether reward cues that are presented above (supraliminal) or below (subliminal) the threshold of conscious…

  5. Accuracy of clinical techniques for evaluating lower limb sensorimotor functions associated with increased fall risk

    PubMed Central

    Donaghy, Alex; DeMott, Trina; Allet, Lara; Kim, Hogene; Ashton-Miller, James; Richardson, James K.

    2015-01-01

    Background In prior work, laboratory-based measures of hip motor function and ankle proprioceptive precision were critical to maintaining unipedal stance and were associated with fall/fall-related injury risk. However, the optimal clinical evaluation techniques for predicting these measures are unknown. Objective To evaluate the diagnostic accuracy of common clinical maneuvers in predicting laboratory-based measures of frontal plane hip rate of torque development (HipRTD) and ankle proprioceptive thresholds (AnkPRO) associated with increased fall risk. Design Prospective, observational study. Setting Biomechanical research laboratory. Participants Forty-one older subjects (age 69.1 ± 8.3 years), 25 with varying degrees of diabetic distal symmetric polyneuropathy and 16 without. Assessments Clinical hip strength was evaluated by manual muscle testing (MMT) and lateral plank time (LPT), defined as the number of seconds the laterally lying subject could lift the hips from the support surface. Foot/ankle evaluation included the Achilles reflex, and vibratory, proprioceptive, monofilament, and pinprick sensations at the great toe. Main Outcome Measures HipRTD, abduction and adduction, using a custom whole-body dynamometer. AnkPRO determined with subjects standing, using a foot cradle system and a staircase series of 100 frontal plane rotational stimuli. Results Pearson correlation coefficients (r) and receiver operating characteristic (ROC) curves revealed that LPT correlated more strongly with HipRTD (r/p = .61/<.001 and .67/<.001 for abductor/adductor, respectively) than did hip abductor MMT (r/p = .31/.044). Subjects with greater vibratory and proprioceptive sensation, and intact Achilles reflexes, monofilament, and pin sensation, had more precise AnkPRO. An LPT of < 12 seconds yielded a sensitivity/specificity of 91%/80% for identifying HipRTD < .25 (body size in Newton-meters), and vibratory perception of < 8 seconds yielded a sensitivity/specificity of 94%/80% for the identification of AnkPRO > 1

  6. Accuracy Analysis of a Box-wing Theoretical SRP Model

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui

    2016-07-01

    For the Beidou satellite navigation system (BDS), a high-accuracy solar radiation pressure (SRP) model is necessary for high-precision applications, especially with the establishment of the global BDS in the future, and the accuracy of the BDS broadcast ephemeris needs to be improved. Therefore, a box-wing theoretical SRP model with a fine structural description, including a conical shadow factor for the Earth and Moon, was established. We verified this SRP model using the GPS Block IIF satellites; the calculation was done with data from the PRN 1, 24, 25 and 27 satellites. The results show that the physical SRP model achieves higher accuracy in precise orbit determination (POD) and orbit prediction for the GPS IIF satellites than the Bern empirical model, with a 3D RMS of the orbit of about 20 centimeters. The POD accuracy for both models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day and 7-day orbit predictions; the longer the prediction arc length, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day and 7-day arc lengths are 0.4 m, 2.0 m and 10.0 m, respectively, compared with 0.9 m, 5.5 m and 30 m with the Bern empirical model. We applied this approach to the BDS and derived an SRP model for the Beidou satellites, and then tested the model with one month of Beidou data (for testing only). Initial results show that the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that obtained with our empirical force model, which only estimates forces in the along-track and across-track directions and a y-bias, but the orbit overlap and SLR observation evaluations show some improvement. The remaining empirical force is reduced significantly for the present Beidou constellation.

  7. A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming

    SciTech Connect

    Meier, Horst; Laurischkat, Roman; Zhu Junhong

    2011-01-17

    One main influence on the dimensional accuracy in robot based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematic structure results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate for these deviations offline, a model based approach has been developed, consisting of a finite element approach to simulate the sheet forming and a multi body system modeling the compliant robot structure. This paper describes the implementation and experimental verification of the multi body system model and its included compensation method.

  8. Increasing accuracy in the assessment of motion sickness: A construct methodology

    NASA Technical Reports Server (NTRS)

    Stout, Cynthia S.; Cowings, Patricia S.

    1993-01-01

    The purpose is to introduce a new methodology that should improve the accuracy of the assessment of motion sickness. This construct methodology utilizes both subjective reports of motion sickness and objective measures of physiological correlates to assess motion sickness. Current techniques and methods used in the framework of a construct methodology are inadequate. Current assessment techniques for diagnosing motion sickness and space motion sickness are reviewed, and attention is called to the problems with the current methods. Further, principles of psychophysiology that when applied will probably resolve some of these problems are described in detail.

  9. VA Health Care: Improvements Needed in Monitoring Antidepressant Use for Major Depressive Disorder and in Increasing Accuracy of Suicide Data

    DTIC Science & Technology

    2014-11-01

    VA Health Care: Improvements Needed in Monitoring Antidepressant Use for Major Depressive Disorder and in Increasing Accuracy of Suicide Data. Why GAO Did This Study: In 2013, VA estimated that about 1.5 million

  10. Dual specimens increase the diagnostic accuracy and reduce the reaction duration of rapid urease test

    PubMed Central

    Hsu, Wen-Hung; Wang, Sophie SW; Kuo, Chao-Hung; Chen, Chiao-Yun; Chang, Ching-Wen; Hu, Huang-Ming; Wang, Jaw-Yuan; Yang, Yuan-Chieh; Lin, Yu-Chun; Wang, Wen-Ming; Wu, Deng-Chyang; Wu, Ming-Tsang; Kuo, Fu-Chen

    2010-01-01

    AIM: To evaluate the influence of multiple samplings during esophagogastroduodenoscopy (EGD) on the accuracy of the rapid urease test, and the validity of newly developed rapid urease tests, HelicotecUT plus test and HelicotecUT test, CLO test and ProntoDry test. METHODS: A total of 355 patients undergoing EGD for dyspepsia were included. Their Helicobacter pylori (H. pylori) treatment status was either naïve or eradicated. Six biopsy specimens from antrum and gastric body, respectively, were obtained during EGD. Single antral specimens and dual (antrum + body) specimens were compared. Infection status of H. pylori was evaluated by three different tests: culture, histology, and four different commercially available rapid urease tests (RUTs)-including the newly developed HelicotecUT plus test and HelicotecUT test, and established CLO test and ProntoDry test. H. pylori status was defined as positive when the culture was positive or if there were concordant positive results among histology, CLO test and ProntoDry test. RESULTS: When dual specimens were applied, sensitivity was enhanced and RUT reaction time was significantly reduced, regardless of their treatment status. Thirty minutes were enough to achieve an agreeable positive rate in all the RUTs. Both newly developed RUTs showed comparable sensitivity, specificity and accuracy to the established RUTs, regardless of patient treatment status, RUT reaction duration, and EGD biopsy sites. CONCLUSION: Combination of antrum and body biopsy specimens greatly enhances the sensitivity of rapid urease test and reduces the reaction duration to 30 min. PMID:20556840

  11. Accuracy improvement of quantitative analysis by spatial confinement in laser-induced breakdown spectroscopy.

    PubMed

    Guo, L B; Hao, Z Q; Shen, M; Xiong, W; He, X N; Xie, Z Q; Gao, M; Li, X Y; Zeng, X Y; Lu, Y F

    2013-07-29

    To improve the accuracy of quantitative analysis in laser-induced breakdown spectroscopy, the plasma produced by a Nd:YAG laser from steel targets was confined by a cavity. A number of elements with low concentrations, such as vanadium (V), chromium (Cr), and manganese (Mn), in the steel samples were investigated. After the optimization of the cavity dimension and laser fluence, significant enhancement factors of 4.2, 3.1, and 2.87 in the emission intensity of V, Cr, and Mn lines, respectively, were achieved at a laser fluence of 42.9 J/cm(2) using a hemispherical cavity (diameter: 5 mm). More importantly, the correlation coefficient of the V I 440.85/Fe I 438.35 nm was increased from 0.946 (without the cavity) to 0.981 (with the cavity); and similar results for Cr I 425.43/Fe I 425.08 nm and Mn I 476.64/Fe I 492.05 nm were also obtained. Therefore, it was demonstrated that the accuracy of quantitative analysis with low concentration elements in steel samples was improved, because the plasma became uniform with spatial confinement. The results of this study provide a new pathway for improving the accuracy of quantitative analysis of LIBS.
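
    The correlation coefficients quoted above come from calibration curves relating the analyte/Fe line-intensity ratio to known analyte concentrations. Below is a minimal sketch of that calculation; the concentration values and intensity ratios are illustrative stand-ins, not the study's measurements.

```python
import numpy as np

def calibration_r(concentrations, intensity_ratios):
    """Pearson correlation coefficient of a linear calibration curve."""
    return float(np.corrcoef(concentrations, intensity_ratios)[0, 1])

# Illustrative V concentrations (wt%) and V I 440.85 nm / Fe I 438.35 nm intensity ratios.
rng = np.random.default_rng(8)
conc = np.array([0.05, 0.10, 0.20, 0.35, 0.50])
ratio_no_cavity = 0.8 * conc + rng.normal(0, 0.03, conc.size)   # noisier, non-uniform plasma
ratio_cavity = 0.8 * conc + rng.normal(0, 0.01, conc.size)      # confined, more uniform plasma

print("without cavity:", calibration_r(conc, ratio_no_cavity))
print("with cavity   :", calibration_r(conc, ratio_cavity))
```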

  12. Radiometric and Geometric Accuracy Analysis of Rasat Pan Imagery

    NASA Astrophysics Data System (ADS)

    Kocaman, S.; Yalcin, I.; Guler, M.

    2016-06-01

    RASAT is the second Turkish Earth observation satellite, launched in 2011. It operates on the pushbroom principle and acquires panchromatic and MS images with 7.5 m and 15 m resolutions, respectively; the swath width of the sensor is 30 km. The main aim of this study is to analyse the radiometric and geometric quality of RASAT images, and a systematic validation approach for RASAT imagery and its products is being applied. A RASAT image pair acquired over Kesan city in Edirne province of Turkey is used for the investigations. The raw RASAT data (L0) are processed by the Turkish Space Agency (TUBITAK-UZAY) to produce higher-level image products, including radiometrically processed (L1), georeferenced (L2) and orthorectified (L3) data, as well as pansharpened images. The image quality assessments include visual inspections and noise, MTF and histogram analyses. The geometric accuracy assessment results are only preliminary, and the assessment is performed using the raw images. The geometric accuracy potential is investigated using 3D ground control points extracted from road intersections, which were measured manually in stereo from aerial images with 20 cm resolution and accuracy. The initial results of the study, obtained using one RASAT panchromatic image pair, are presented in this paper.

  13. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different than theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
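
    The two discretization approaches compared above can be written down directly for a one-dimensional Gaussian kernel: the cell-center method samples the density at cell midpoints, while cell integration takes the CDF difference across each cell. The sketch below is an illustration under those assumptions, not the study's implementation; it simply shows that the two discretizations diverge as σ shrinks relative to the cell size.

```python
import numpy as np
from scipy.stats import norm

def gaussian_kernel_cell_center(sigma, half_width):
    """Sample the Gaussian density at cell centers (cell size = 1), then normalize."""
    x = np.arange(-half_width, half_width + 1)
    k = norm.pdf(x, scale=sigma)
    return k / k.sum()

def gaussian_kernel_cell_integrated(sigma, half_width):
    """Integrate the Gaussian density over each unit cell via CDF differences."""
    edges = np.arange(-half_width - 0.5, half_width + 1.5)
    k = np.diff(norm.cdf(edges, scale=sigma))
    return k / k.sum()

# For small sigma the two discretizations diverge noticeably, as reported above.
for sigma in (0.15, 0.5, 2.0):
    cc = gaussian_kernel_cell_center(sigma, half_width=4)
    ci = gaussian_kernel_cell_integrated(sigma, half_width=4)
    print(f"sigma={sigma}: max |difference| = {np.abs(cc - ci).max():.4f}")
```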

  14. Utility of an Algorithm to Increase the Accuracy of Medication History in an Obstetrical Setting

    PubMed Central

    Corbel, Aline; Baud, David; Chaouch, Aziz; Beney, Johnny; Csajka, Chantal; Panchaud, Alice

    2016-01-01

    Background In an obstetrical setting, inaccurate medication histories at hospital admission may result in failure to identify potentially harmful treatments for patients and/or their fetus(es). Methods This prospective study was conducted to assess average concordance rates between (1) a medication list obtained with a one-page structured medication history algorithm developed for the obstetrical setting and (2) the medication list reported in medical records and obtained by open-ended questions based on standard procedures. Both lists were converted into concordance rate using a best possible medication history approach as the reference (information obtained by patients, prescribers and community pharmacists’ interviews). Results The algorithm-based method obtained a higher average concordance rate than the standard method, with respectively 90.2% [CI95% 85.8–94.3] versus 24.6% [CI95%15.3–34.4] concordance rates (p<0.01). Conclusion Our algorithm-based method strongly enhanced the accuracy of the medication history in our obstetric population, without using substantial resources. Its implementation is an effective first step to the medication reconciliation process, which has been recognized as a very important component of patients’ drug safety. PMID:26999743

  15. Increasing accuracy and throughput in large-scale microsatellite fingerprinting of cacao field germplasm collections

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Microsatellite-based DNA fingerprinting has been increasingly applied in crop genebank management. However, efficiency and cost-saving remain a major challenge for large-scale genotyping, even when a medium- or high-throughput genotyping facility is available. In this study we report on increasing the

  16. Combining cow and bull reference populations to increase accuracy of genomic prediction and genome-wide association studies.

    PubMed

    Calus, M P L; de Haas, Y; Veerkamp, R F

    2013-10-01

    Genomic selection holds the promise to be particularly beneficial for traits that are difficult or expensive to measure, such that access to phenotypes on large daughter groups of bulls is limited. Instead, cow reference populations can be generated, potentially supplemented with existing information from the same or (highly) correlated traits available on bull reference populations. The objective of this study, therefore, was to develop a model to perform genomic predictions and genome-wide association studies based on a combined cow and bull reference data set, with the accuracy of the phenotypes differing between the cow and bull genomic selection reference populations. The developed bivariate Bayesian stochastic search variable selection model allowed for an unbalanced design by imputing residuals in the residual updating scheme for all missing records. The performance of this model is demonstrated on a real data example, where the analyzed trait, being milk fat or protein yield, was either measured only on a cow or a bull reference population, or recorded on both. Our results were that the developed bivariate Bayesian stochastic search variable selection model was able to analyze 2 traits, even though animals had measurements on only 1 of 2 traits. The Bayesian stochastic search variable selection model yielded consistently higher accuracy for fat yield compared with a model without variable selection, both for the univariate and bivariate analyses, whereas the accuracy of both models was very similar for protein yield. The bivariate model identified several additional quantitative trait loci peaks compared with the single-trait models on either trait. In addition, the bivariate models showed a marginal increase in accuracy of genomic predictions for the cow traits (0.01-0.05), although a greater increase in accuracy is expected as the size of the bull population increases. Our results emphasize that the chosen value of priors in Bayesian genomic prediction

  17. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new
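
    As an illustrative sketch (not the authors' model), the snippet below implements a standard two-tier Lorenz '96 system in Python and emulates reduced precision by rounding the small-scale tier to 16-bit floats after every step; the parameter values, the forward-Euler integrator and the use of two rather than three tiers are simplifying assumptions.

        import numpy as np

        # Two-tier Lorenz '96 with the small-scale tier emulated in half precision.
        # K, J, F, h, b, c are common textbook defaults, not the paper's settings.
        K, J = 8, 32
        F, h, b, c = 10.0, 1.0, 10.0, 10.0

        def tendencies(X, Y):
            """Standard two-tier Lorenz '96 tendencies."""
            Ysum = Y.reshape(K, J).sum(axis=1)
            dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2))
                  - X + F - (h * c / b) * Ysum)
            dY = (c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2))
                  - c * Y + (h * c / b) * np.repeat(X, J))
            return dX, dY

        def step(X, Y, dt=0.001, reduced=True):
            """One forward-Euler step; the small-scale tier is round-tripped
            through float16 to emulate 16-bit 'inexact' hardware."""
            dX, dY = tendencies(X, Y)
            X, Y = X + dt * dX, Y + dt * dY
            if reduced:
                Y = Y.astype(np.float16).astype(np.float64)
            return X, Y

        rng = np.random.default_rng(0)
        X, Y = rng.standard_normal(K), 0.1 * rng.standard_normal(K * J)
        Xr, Yr = X.copy(), Y.copy()
        for _ in range(2000):
            X, Y = step(X, Y, reduced=False)
            Xr, Yr = step(Xr, Yr, reduced=True)
        print("RMS divergence of large-scale variables:", np.sqrt(np.mean((X - Xr) ** 2)))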

  18. A structured interview guide for global impressions: increasing reliability and scoring accuracy for CNS trials

    PubMed Central

    2013-01-01

    Background The clinical global impression of severity (CGI-S) scale is a frequently used rating instrument for the assessment of global severity of illness in Central Nervous System (CNS) trials. Although scoring guidelines have been proposed to anchor these scores, the collection of sufficient documentation to support the derived score is not part of any standardized interview procedure. It is self-evident that the absence of a standardized, documentary format can affect inter-rater reliability and may adversely affect the accuracy of the resulting data. Method We developed a structured interview guide for global impressions (SIGGI) and evaluated the instrument in a 2-visit study of ambulatory patients with Major Depressive Disorder (MDD) or schizophrenia. Blinded, site-independent raters listened to audio-recorded SIGGI interviews administered by site-based CGI raters. We compared SIGGI-derived CGI-S scores between the two separate site-based raters and the site-independent raters. Results We found significant intraclass correlations (p = 0.001) on all SIGGI-derived CGI-S scores between two separate site-based CGI raters with each other (r = 0.768) and with a blinded, site-independent rater (r = 0.748 and r = 0.706, respectively), and significant Pearson's correlations between CGI-S scores and all MADRS validity comparisons for MDD and PANSS comparisons for schizophrenia (p = 0.001 in all cases). Compared to site-based raters, the site-independent raters gave identical “dual” CGI-S scores to 67.6% and 68.2% of subjects at visit 1 and 77.1% at visit 2. Conclusion We suggest that the SIGGI may improve the inter-rater reliability and scoring precision of the CGI-S and have broad applicability in CNS clinical trials. PMID:23369692
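
    The abstract reports intraclass correlations between raters; as a minimal sketch of how such an agreement statistic can be computed, the function below implements the generic Shrout-Fleiss ICC(2,1) (two-way random effects, absolute agreement, single rater). The exact ICC variant used in the study is not stated in the abstract, and the example ratings are hypothetical.

        import numpy as np

        def icc_2_1(scores):
            """ICC(2,1): two-way random effects, absolute agreement, single rater.
            `scores` is an (n_subjects, k_raters) array of CGI-S ratings."""
            scores = np.asarray(scores, dtype=float)
            n, k = scores.shape
            grand = scores.mean()
            row_means = scores.mean(axis=1)     # per-subject means
            col_means = scores.mean(axis=0)     # per-rater means
            msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects mean square
            msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters mean square
            sse = np.sum((scores - row_means[:, None] - col_means[None, :] + grand) ** 2)
            mse = sse / ((n - 1) * (k - 1))                        # residual mean square
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        # Hypothetical severity ratings from two site-based raters and one
        # site-independent rater for six subjects.
        ratings = np.array([[4, 4, 3],
                            [5, 5, 5],
                            [3, 4, 3],
                            [6, 6, 5],
                            [2, 2, 2],
                            [4, 5, 4]])
        print(round(float(icc_2_1(ratings)), 3))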

  19. Increased Accuracy in the Measurement of the Dielectric Constant of Seawater at 1.413 GHz

    NASA Technical Reports Server (NTRS)

    Zhou, Y.; Lang R.; Drego, C.; Utku, C.; LeVine, D.

    2012-01-01

    This paper describes the latest results for the measurements of the dielectric constant at 1.413 GHz by using a resonant cavity technique. The purpose of these measurements is to develop an accurate relationship for the dependence of the dielectric constant of sea water on temperature and salinity, which is needed by the Aquarius inversion algorithm to retrieve salinity. Aquarius is the major instrument on the Aquarius/SAC-D observatory, a NASA/CONAE satellite mission launched in June of 2011 with the primary mission of measuring global sea surface salinity to an accuracy of 0.2 psu. Aquarius measures salinity with a 1.413 GHz radiometer and uses a scatterometer to compensate for the effects of surface roughness. The core part of the seawater dielectric constant measurement system is a brass microwave cavity that is resonant at 1.413 GHz. The seawater is introduced into the cavity through a capillary glass tube having an inner diameter of 0.1 mm. The change of resonance frequency and the cavity Q value are used to determine the real and imaginary parts of the dielectric constant of the seawater introduced into the thin tube. Measurements are automated with the help of software developed at the George Washington University. In this talk, new results from measurements made since September 2010 will be presented for salinities of 30, 35 and 38 psu over a temperature range of 0 C to 35 C in intervals of 5 C. These measurements are more accurate than earlier measurements made in 2008 because of a new method for measuring the calibration constant using methanol. In addition, the variance of repeated seawater measurements has been reduced by letting the system stabilize overnight between temperature changes. The new results are compared to the Klein-Swift and Meissner-Wentz model functions. The importance of an accurate model function will be illustrated by using these model functions to invert the Aquarius brightness temperature to get the salinity values. The salinity values
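
    For readers unfamiliar with the resonant-cavity method, the sketch below shows the generic small-perturbation relations often used to turn the measured resonance-frequency shift and Q change into a complex dielectric constant. The lumped calibration constants a_cal and b_cal, and all numerical values, are hypothetical stand-ins; the paper's own methanol calibration and corrections are not reproduced here.

        def dielectric_from_cavity(f_empty, q_empty, f_sample, q_sample, a_cal, b_cal):
            """Small-perturbation estimate of the complex dielectric constant from
            the shift in resonance frequency and the change in cavity Q.
            a_cal and b_cal lump the cavity-to-sample volume ratio together with
            empirical calibration constants (assumed determined with a reference
            liquid such as methanol)."""
            eps_real = 1.0 + a_cal * (f_empty - f_sample) / f_sample
            eps_imag = b_cal * (1.0 / q_sample - 1.0 / q_empty)
            return complex(eps_real, eps_imag)

        # Illustrative numbers only (not measured values from the paper):
        print(dielectric_from_cavity(f_empty=1.4135e9, q_empty=4800,
                                     f_sample=1.4131e9, q_sample=950,
                                     a_cal=2.6e5, b_cal=7.0e4))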

  20. Repeating a Monologue under Increasing Time Pressure: Effects on Fluency, Complexity, and Accuracy

    ERIC Educational Resources Information Center

    Thai, Chau; Boers, Frank

    2016-01-01

    Studies have shown that learners' task performance improves when they have the opportunity to repeat the task. Conditions for task repetition vary, however. In the 4/3/2 activity, learners repeat a monologue under increasing time pressure. The purpose is to foster fluency, but it has been suggested in the literature that it also benefits other…

  1. High Frequency rTMS over the Left Parietal Lobule Increases Non-Word Reading Accuracy

    ERIC Educational Resources Information Center

    Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano

    2012-01-01

    Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe…

  2. Increasing the accuracy and automation of fractional vegetation cover estimation from digital photographs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The use of automated methods to estimate canopy cover (CC) from digital photographs has increased in recent years given its potential to produce accurate, fast and inexpensive CC measurements. Wide acceptance has been delayed because of the limitations of these methods. This work introduces a novel ...

  3. Tissue probability map constrained CLASSIC for increased accuracy and robustness in serial image segmentation

    NASA Astrophysics Data System (ADS)

    Xue, Zhong; Shen, Dinggang; Wong, Stephen T. C.

    2009-02-01

    Traditional fuzzy clustering algorithms have been successfully applied in MR image segmentation for quantitative morphological analysis. However, the clustering results might be biased due to the variability of tissue intensities and anatomical structures. For example, clustering-based algorithms tend to over-segment white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation for longitudinal study of human brains. The tissue probability maps consist of segmentation priors obtained from a population and reflect the probability of different tissue types. More accurate image segmentation can be achieved by using these segmentation priors in the clustering algorithm. Experimental results of both simulated longitudinal MR brain data and the Alzheimer's Disease Neuroimaging Initiative (ADNI) data using the new serial image segmentation algorithm in the framework of CLASSIC show more accurate and robust longitudinal measures.
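
    As a rough illustration of constraining a fuzzy clustering with tissue probability maps, the sketch below weights standard fuzzy c-means memberships by per-voxel prior probabilities. It is a generic prior-weighted FCM on 1-D intensities, not the CLASSIC framework itself, and the toy data and flat priors are assumptions for demonstration.

        import numpy as np

        def prior_weighted_fcm(x, priors, n_iter=50, m=2.0, eps=1e-9):
            """Fuzzy c-means on 1-D intensities x (n_voxels,) with tissue
            probability maps `priors` (n_voxels, n_classes) weighting the
            memberships.  Generic sketch, not the CLASSIC algorithm."""
            n, k = priors.shape
            centers = np.linspace(x.min(), x.max(), k)          # crude initialisation
            for _ in range(n_iter):
                d2 = (x[:, None] - centers[None, :]) ** 2 + eps  # squared distances
                u = priors * d2 ** (-1.0 / (m - 1.0))            # prior-weighted memberships
                u /= u.sum(axis=1, keepdims=True)
                um = u ** m
                centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
            return u.argmax(axis=1), centers

        # Toy example: three "tissues" with overlapping intensities and flat priors.
        rng = np.random.default_rng(1)
        x = np.concatenate([rng.normal(mu, 8, 500) for mu in (40, 100, 160)])
        priors = np.full((x.size, 3), 1.0 / 3.0)
        labels, centers = prior_weighted_fcm(x, priors)
        print("estimated class centers:", np.sort(centers).round(1))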

  4. Increased ephemeris accuracy using attitude-dependent aerodynamic force coefficients for inertially stabilized spacecraft

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Baker, David F.

    1991-01-01

    The FREEMAC program used to generate the aerodynamic coefficients is described, along with associated routines that allow the results to be used in other software. These capabilities are applied in two numerical examples to the short-term orbit prediction of the Gamma Ray Observatory (GRO) and Hubble Space Telescope (HST) spacecraft. Predictions using attitude-dependent aerodynamic coefficients were made with a modified version of the PC-based Ephemeris Generation Program (EPHGEN) and were compared to definitive orbit solutions obtained from actual tracking data. The numerical results show improvement in the predicted semi-major axis and along-track positions that would seem to be worth the added computational effort. Finally, other orbit and attitude analysis applications are noted that could profit from using FREEMAC-calculated aerodynamic coefficients, including orbital lifetime studies, orbit determination methods, attitude dynamics simulators, and spacecraft control system component sizing.

  5. Bayesian approach increases accuracy when selecting cowpea genotypes with high adaptability and phenotypic stability.

    PubMed

    Barroso, L M A; Teodoro, P E; Nascimento, M; Torres, F E; Dos Santos, A; Corrêa, A M; Sagrilo, E; Corrêa, C C G; Silva, F A; Ceccon, G

    2016-03-11

    This study aimed to verify that a Bayesian approach could be used for the selection of upright cowpea genotypes with high adaptability and phenotypic stability, and the study also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 17 upright cowpea genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian approach was effective for selection of upright cowpea genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions.
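
    For context, the snippet below sketches the classical (frequentist) Eberhart and Russell analysis that the Bayesian model builds on: each genotype's yield is regressed on an environmental index, with the slope measuring adaptability and the deviation from regression measuring stability. The yield matrix is hypothetical, and the Bayesian machinery of the paper (priors, Bayes factors) is not reproduced.

        import numpy as np

        def eberhart_russell(yields):
            """Classical Eberhart & Russell stability analysis for a
            genotype-by-environment yield matrix (genotypes in rows,
            environments in columns); point estimates only."""
            yields = np.asarray(yields, float)
            g, e = yields.shape
            env_index = yields.mean(axis=0) - yields.mean()   # environmental index I_j
            results = []
            for i in range(g):
                b = np.sum(yields[i] * env_index) / np.sum(env_index ** 2)  # adaptability slope
                fitted = yields[i].mean() + b * env_index
                s2d = np.sum((yields[i] - fitted) ** 2) / (e - 2)           # deviation from regression
                results.append((yields[i].mean(), b, s2d))
            return results

        # Hypothetical grain yields (kg/ha) for 3 genotypes across 6 trials:
        yields = [[1500, 1650, 1400, 1800, 1550, 1700],
                  [1300, 1900, 1100, 2100, 1250, 2000],
                  [1600, 1620, 1580, 1660, 1610, 1640]]
        for mean, b, s2d in eberhart_russell(yields):
            print(f"mean={mean:7.1f}  slope b={b:5.2f}  deviation s2d={s2d:9.1f}")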

  6. Accuracy of the Parallel Analysis Procedure with Polychoric Correlations

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Li, Feiming; Bandalos, Deborah

    2009-01-01

    The purpose of this study was to investigate the application of the parallel analysis (PA) method for choosing the number of factors in component analysis for situations in which data are dichotomous or ordinal. Although polychoric correlations are sometimes used as input for component analyses, the random data matrices generated for use in PA…

  7. Medication Harmony: A Framework to Save Time, Improve Accuracy and Increase Patient Activation

    PubMed Central

    Pandolfe, Frank; Crotty, Bradley H; Safran, Charles

    2016-01-01

    Incompletely reconciled medication lists contribute to prescribing errors and adverse drug events. Providers expend time and effort at every point of patient contact attempting to curate a best possible medication list, and yet often the list is incomplete or inaccurate. We propose a framework that builds upon the existing infrastructure of a health information exchange (HIE), centralizes data and encourages patient activation. The solution is a constantly accessible, singular, patient-adjudicated medication list that incorporates useful information and features into the list itself. We aim to decrease medication errors across transitions of care, increase awareness of potential drug-drug interactions, improve patient knowledge and self-efficacy regarding medications, decrease polypharmacy, improve prescribing safety and ultimately decrease cost to the health-care system. PMID:28269955

  8. Medication Harmony: A Framework to Save Time, Improve Accuracy and Increase Patient Activation.

    PubMed

    Pandolfe, Frank; Crotty, Bradley H; Safran, Charles

    2016-01-01

    Incompletely reconciled medication lists contribute to prescribing errors and adverse drug events. Providers expend time and effort at every point of patient contact attempting to curate a best possible medication list, and yet often the list is incomplete or inaccurate. We propose a framework that builds upon the existing infrastructure of a health information exchange (HIE), centralizes data and encourages patient activation. The solution is a constantly accessible, singular, patient-adjudicated medication list that incorporates useful information and features into the list itself. We aim to decrease medication errors across transitions of care, increase awareness of potential drug-drug interactions, improve patient knowledge and self-efficacy regarding medications, decrease polypharmacy, improve prescribing safety and ultimately decrease cost to the health-care system.

  9. Corner-corrected diagonal-norm summation-by-parts operators for the first derivative with increased order of accuracy

    NASA Astrophysics Data System (ADS)

    Del Rey Fernández, David C.; Boom, Pieter D.; Zingg, David W.

    2017-02-01

    Combined with simultaneous approximation terms, summation-by-parts (SBP) operators offer a versatile and efficient methodology that leads to consistent, conservative, and provably stable discretizations. However, diagonal-norm operators with a repeating interior-point operator that have thus far been constructed suffer from a loss of accuracy. While on the interior, these operators are of degree 2p, at a number of nodes near the boundaries, they are of degree p, and therefore of global degree p - meaning the highest degree monomial for which the operators are exact at all nodes. This implies that for hyperbolic problems and operators of degree greater than unity they lead to solutions with a global order of accuracy lower than the degree of the interior-point operator. In this paper, we develop a procedure to construct diagonal-norm first-derivative SBP operators that are of degree 2p at all nodes and therefore can lead to solutions of hyperbolic problems of order 2 p + 1. This is accomplished by adding nonzero entries in the upper-right and lower-left corners of SBP operator matrices with a repeating interior-point operator. This modification necessitates treating these new operators as elements, where mesh refinement is accomplished by increasing the number of elements in the mesh rather than increasing the number of nodes. The significant improvements in accuracy of this new family, for the same repeating interior-point operator, are demonstrated in the context of the linear convection equation.
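
    To make the summation-by-parts terminology concrete, the sketch below builds the classical second-order diagonal-norm SBP first-derivative operator (the kind of repeating interior-point operator the paper improves on, not its new corner-corrected family), verifies the SBP property Q + Q^T = B, and shows the boundary-closure accuracy loss on monomials.

        import numpy as np

        def classic_sbp_first_derivative(n, h):
            """Classical second-order diagonal-norm SBP first-derivative operator
            (interior operator of degree 2, boundary closures of degree 1)."""
            H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
            Q = np.zeros((n, n))
            for i in range(1, n - 1):
                Q[i, i - 1], Q[i, i + 1] = -0.5, 0.5
            Q[0, 0], Q[0, 1] = -0.5, 0.5
            Q[-1, -2], Q[-1, -1] = -0.5, 0.5
            return np.linalg.inv(H) @ Q, H, Q

        n, h = 21, 1.0 / 20
        D, H, Q = classic_sbp_first_derivative(n, h)
        x = np.linspace(0.0, 1.0, n)

        # SBP property: Q + Q^T must equal diag(-1, 0, ..., 0, 1).
        B = np.zeros((n, n)); B[0, 0], B[-1, -1] = -1.0, 1.0
        print("SBP property holds:", np.allclose(Q + Q.T, B))

        # Degree check: exact for x^0 and x^1 everywhere, but the boundary closure
        # is only first-order accurate for x^2 (the accuracy loss discussed above).
        for p in range(3):
            exact = p * x ** max(p - 1, 0) if p > 0 else np.zeros(n)
            err = D @ x ** p - exact
            print(f"max error for x^{p}:", float(np.abs(err).max()))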

  10. Evaluation of precision and accuracy of selenium measurements in biological materials using neutron activation analysis

    SciTech Connect

    Greenberg, R.R.

    1988-01-01

    In recent years, the accurate determination of selenium in biological materials has become increasingly important in view of the essential nature of this element for human nutrition and its possible role as a protective agent against cancer. Unfortunately, the accurate determination of selenium in biological materials is often difficult for most analytical techniques for a variety of reasons, including interferences, complicated selenium chemistry due to the presence of this element in multiple oxidation states and in a variety of different organic species, stability and resistance to destruction of some of these organo-selenium species during acid dissolution, volatility of some selenium compounds, and potential for contamination. Neutron activation analysis (NAA) can be one of the best analytical techniques for selenium determinations in biological materials for a number of reasons. Currently, precision at the 1% level (1s) and overall accuracy at the 1 to 2% level (95% confidence interval) can be attained at the U.S. National Bureau of Standards (NBS) for selenium determinations in biological materials when counting statistics are not limiting (using the {sup 75}Se isotope). An example of this level of precision and accuracy is summarized. Achieving this level of accuracy, however, requires strict attention to all sources of systematic error. Precise and accurate results can also be obtained after radiochemical separations.

  11. Increase of Readability and Accuracy of 3d Models Using Fusion of Close Range Photogrammetry and Laser Scanning

    NASA Astrophysics Data System (ADS)

    Gašparović, M.; Malarić, I.

    2012-07-01

    The development of laser scanning technology has opened a new page in geodesy and enabled an entirely new way of presenting data. Products obtained by the method of laser scanning are used in many sciences, as well as in archaeology. It should be noted that 3D models of archaeological artefacts obtained by laser scanning are fully measurable, written in 1:1 scale and have high accuracy. On the other hand, the texture and RGB values of the surface of the object obtained by a laser scanner have lower resolution and poorer radiometric characteristics in relation to textures captured with a digital camera. The goal of this research and of this paper is to increase the accuracy and readability of the 3D model using textures obtained with a digital camera. Laser scanning was performed with a high-accuracy triangulation scanner, Vivid 9i (Konica Minolta), while a Nikon D90 digital camera with a fixed 20 mm lens was used for the photogrammetric recording. It is important to stress that the a posteriori accuracy of the global registration of the point clouds, expressed as a standard deviation, was ±0.136 mm, while the average distance was only ±0.080 mm. The research also showed that projecting high-quality textures onto the model increases its readability. Recording archaeological artefacts and producing photorealistic 3D models greatly contributes to archaeology as a science and accelerates the processing and reconstruction of the findings. It also allows the presentation of findings to the general public, not just to the experts.

  12. Coupled Loads Analysis Accuracy from the Space Vehicle Perspective

    NASA Astrophysics Data System (ADS)

    Dickens, J. M.; Wittbrodt, M. J.; Gate, M. M.; Li, L. H.; Stroeve, A.

    2001-01-01

    Coupled loads analysis (CLA) consists of performing a structural response analysis, usually a time-history response analysis, with reduced dynamic models typically provided by two different companies to obtain the coupled response of a launch vehicle and space vehicle to the launching and staging events required to place the space vehicle into orbit. The CLA is performed by the launch vehicle contractor with a reduced dynamics mathematical model that is coupled to the launch vehicle, or booster, model to determine the coupled loads for each substructure. Recently, the booster and space vehicle contractors have been from different countries. Due to the language differences and governmental restrictions, verification of the CLA is much more difficult than when working with launch vehicle and space vehicle contractors of the same country. This becomes exceedingly clear when the CLA results do not seem to pass intuitive judgement. Presented in the sequel are three checks that a space vehicle contractor can perform on the results of a coupled loads analysis to partially verify the analysis.

  13. Using Meta-Analysis to Inform the Design of Subsequent Studies of Diagnostic Test Accuracy

    ERIC Educational Resources Information Center

    Hinchliffe, Sally R.; Crowther, Michael J.; Phillips, Robert S.; Sutton, Alex J.

    2013-01-01

    An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test; particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial…

  14. Tourniquet Test for Dengue Diagnosis: Systematic Review and Meta-analysis of Diagnostic Test Accuracy

    PubMed Central

    Reid, Hamish; Thomas, Emma; Foster, Charlie; Darton, Thomas C.

    2016-01-01

    Background Dengue fever is a ubiquitous arboviral infection in tropical and sub-tropical regions, whose incidence has increased over recent decades. In the absence of a rapid point of care test, the clinical diagnosis of dengue is complex. The World Health Organisation has outlined diagnostic criteria for making the diagnosis of dengue infection, which includes the use of the tourniquet test (TT). Purpose To assess the quality of the evidence supporting the use of the TT and perform a diagnostic accuracy meta-analysis comparing the TT to antibody response measured by ELISA. Data Sources A comprehensive literature search was conducted in the following databases to April, 2016: MEDLINE (PubMed), EMBASE, Cochrane Central Register of Controlled Trials, BIOSIS, Web of Science, SCOPUS. Study Selection Studies comparing the diagnostic accuracy of the tourniquet test with ELISA for the diagnosis of dengue were included. Data Extraction Two independent authors extracted data using a standardized form. Data Synthesis A total of 16 studies with 28,739 participants were included in the meta-analysis. Pooled sensitivity for dengue diagnosis by TT was 58% (95% Confidence Interval (CI), 43%-71%) and the specificity was 71% (95% CI, 60%-80%). In the subgroup analysis sensitivity for non-severe dengue diagnosis was 55% (95% CI, 52%-59%) and the specificity was 63% (95% CI, 60%-66%), whilst sensitivity for dengue hemorrhagic fever diagnosis was 62% (95% CI, 53%-71%) and the specificity was 60% (95% CI, 48%-70%). Receiver-operator characteristics demonstrated a test accuracy (AUC) of 0.70 (95% CI, 0.66–0.74). Conclusion The tourniquet test is widely used in resource poor settings despite currently available evidence demonstrating only a marginal benefit in making a diagnosis of dengue infection alone. Registration The protocol for this systematic review was registered at PROSPERO: CRD42015020323. PMID:27486661
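
    The pooled sensitivities and specificities above come from a meta-analytic model; as a simplified, hypothetical illustration of the pooling step, the function below applies univariate DerSimonian-Laird random-effects pooling of per-study proportions on the logit scale. The paper's actual analysis is bivariate (sensitivity and specificity modelled jointly), which this sketch does not attempt, and the study counts shown are invented.

        import numpy as np

        def pool_logit_random_effects(events, totals):
            """DerSimonian-Laird random-effects pooling of proportions
            (e.g., per-study sensitivities) on the logit scale."""
            events = np.asarray(events, float)
            totals = np.asarray(totals, float)
            p = (events + 0.5) / (totals + 1.0)                       # continuity-corrected proportions
            y = np.log(p / (1 - p))                                   # logits
            v = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)  # logit variances
            w = 1.0 / v
            y_fixed = np.sum(w * y) / np.sum(w)
            q = np.sum(w * (y - y_fixed) ** 2)                        # Cochran's Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - (len(y) - 1)) / c)                   # between-study variance
            w_star = 1.0 / (v + tau2)
            y_re = np.sum(w_star * y) / np.sum(w_star)
            se = np.sqrt(1.0 / np.sum(w_star))
            expit = lambda t: 1.0 / (1.0 + np.exp(-t))
            return float(expit(y_re)), (float(expit(y_re - 1.96 * se)),
                                        float(expit(y_re + 1.96 * se)))

        # Hypothetical per-study true positives and dengue-positive totals:
        tp    = [40, 55, 120, 33, 210]
        n_pos = [70, 90, 200, 60, 380]
        print(pool_logit_random_effects(tp, n_pos))   # pooled sensitivity and 95% CI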

  15. The urine dipstick test useful to rule out infections. A meta-analysis of the accuracy

    PubMed Central

    Devillé, Walter LJM; Yzermans, Joris C; van Duijn, Nico P; Bezemer, P Dick; van der Windt, Daniëlle AWM; Bouter, Lex M

    2004-01-01

    Background Many studies have evaluated the accuracy of dipstick tests as rapid detectors of bacteriuria and urinary tract infections (UTI). The lack of an adequate explanation for the heterogeneity of the dipstick accuracy stimulates an ongoing debate. The objective of the present meta-analysis was to summarise the available evidence on the diagnostic accuracy of the urine dipstick test, taking into account various pre-defined potential sources of heterogeneity. Methods Literature from 1990 through 1999 was searched in Medline and Embase, and by reference tracking. Selected publications should be concerned with the diagnosis of bacteriuria or urinary tract infections, investigate the use of dipstick tests for nitrites and/or leukocyte esterase, and present empirical data. A checklist was used to assess methodological quality. Results 70 publications were included. Accuracy of nitrites was high in pregnant women (Diagnostic Odds Ratio = 165) and elderly people (DOR = 108). Positive predictive values were ≥80% in elderly and in family medicine. Accuracy of leukocyte-esterase was high in studies in urology patients (DOR = 276). Sensitivities were highest in family medicine (86%). Negative predictive values were high in both tests in all patient groups and settings, except for in family medicine. The combination of both test results showed an important increase in sensitivity. Accuracy was high in studies in urology patients (DOR = 52), in children (DOR = 46), and if clinical information was present (DOR = 28). Sensitivity was highest in studies carried out in family medicine (90%). Predictive values of combinations of positive test results were low in all other situations. Conclusions Overall, this review demonstrates that the urine dipstick test alone seems to be useful in all populations to exclude the presence of infection if the results of both nitrites and leukocyte-esterase are negative. Sensitivities of the combination of both tests vary between 68 and 88% in

  16. Spatial and temporal analysis on the distribution of active radio-frequency identification (RFID) tracking accuracy with the Kriging method.

    PubMed

    Liu, Xin; Shannon, Jeremy; Voun, Howard; Truijens, Martijn; Chi, Hung-Lin; Wang, Xiangyu

    2014-10-29

    Radio frequency identification (RFID) technology has already been applied in a number of areas to facilitate the tracking process. However, the insufficient tracking accuracy of RFID is one of the problems that impedes its wider application. Previous studies focus on examining the accuracy of discrete points RFID, thereby leaving the tracking accuracy of the areas between the observed points unpredictable. In this study, spatial and temporal analysis is applied to interpolate the continuous distribution of RFID tracking accuracy based on the Kriging method. An implementation trial has been conducted in the loading and docking area in front of a warehouse to validate this approach. The results show that the weak signal area can be easily identified by the approach developed in the study. The optimum distance between two RFID readers and the effect of the sudden removal of readers are also presented by analysing the spatial and temporal variation of RFID tracking accuracy. This study reveals the correlation between the testing time and the stability of RFID tracking accuracy. Experimental results show that the proposed approach can be used to assist the RFID system setup process to increase tracking accuracy.
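
    As a rough sketch of interpolating a tracking-accuracy surface from discrete test points, the example below fits a Gaussian-process regressor with an RBF kernel, which behaves like kriging with a Gaussian variogram, and then flags the weakest predicted area. The read-rate values, coordinates and kernel settings are hypothetical, and scikit-learn's GP regressor is used as a stand-in for the paper's kriging implementation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Hypothetical RFID read rates (%) measured at discrete test points
        # (x, y in metres) on a loading-dock floor; all values are invented.
        points = np.array([[0, 0], [0, 5], [5, 0], [5, 5], [2.5, 2.5],
                           [1, 4], [4, 1], [3, 4], [4, 3], [2, 1]], dtype=float)
        read_rate = np.array([92, 88, 90, 60, 75, 85, 86, 68, 65, 89], dtype=float)

        # GP regression with an RBF kernel plus a noise term, used here as a
        # stand-in for ordinary kriging with a Gaussian variogram.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(1.0),
                                      normalize_y=True)
        gp.fit(points, read_rate)

        # Interpolate the accuracy surface on a grid and flag the weak-signal area.
        gx, gy = np.meshgrid(np.linspace(0, 5, 26), np.linspace(0, 5, 26))
        grid = np.column_stack([gx.ravel(), gy.ravel()])
        pred = gp.predict(grid).reshape(gx.shape)
        print("lowest predicted read rate: %.1f%% near (x=%.1f m, y=%.1f m)"
              % (pred.min(), gx.ravel()[pred.argmin()], gy.ravel()[pred.argmin()]))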

  17. Spatial and Temporal Analysis on the Distribution of Active Radio-Frequency Identification (RFID) Tracking Accuracy with the Kriging Method

    PubMed Central

    Liu, Xin; Shannon, Jeremy; Voun, Howard; Truijens, Martijn; Chi, Hung-Lin; Wang, Xiangyu

    2014-01-01

    Radio frequency identification (RFID) technology has already been applied in a number of areas to facilitate the tracking process. However, the insufficient tracking accuracy of RFID is one of the problems that impedes its wider application. Previous studies focus on examining the accuracy of discrete points RFID, thereby leaving the tracking accuracy of the areas between the observed points unpredictable. In this study, spatial and temporal analysis is applied to interpolate the continuous distribution of RFID tracking accuracy based on the Kriging method. An implementation trial has been conducted in the loading and docking area in front of a warehouse to validate this approach. The results show that the weak signal area can be easily identified by the approach developed in the study. The optimum distance between two RFID readers and the effect of the sudden removal of readers are also presented by analysing the spatial and temporal variation of RFID tracking accuracy. This study reveals the correlation between the testing time and the stability of RFID tracking accuracy. Experimental results show that the proposed approach can be used to assist the RFID system setup process to increase tracking accuracy. PMID:25356648

  18. Knowledge and accuracy of perceived personal risk in underserved women who are at increased risk of breast cancer.

    PubMed

    Cyrus-David, Mfon S

    2010-12-01

    The state of knowledge and personal risk perception among women who are underserved or racial minorities at increased risk of breast cancer (BC) who may be eligible for chemoprevention is limited. The BC knowledge and accuracy of perceived personal risk of a cross-sectional study population of such women residing in the greater Houston Texas area were assessed. The majority had below average knowledge scores and perceived risk inaccurately. The lesser educated were also less knowledgeable. Educational interventions targeted towards this population would enhance their knowledge of BC and empower them to make informed decisions about BC chemoprevention.

  19. Finite element analysis accuracy of the GTC commissioning instrument structure

    NASA Astrophysics Data System (ADS)

    Farah, Alejandro; Godoy, Javier; Velazquez, F.; Espejo, Carlos; Cuevas, Salvador; Bringas, Vicente; Manzo, A.; del Llano, L.; Sanchez, J. L.; Chavoya, Armando; Devaney, Nicholas; Castro, Javier; Cavaller, Luis

    2003-02-01

    Under a contract with the GRANTECAN, the Commissioning Instrument (CI) is a project developed by a team of Mexican scientists and engineers from the Instrumentation Department of the Astronomy Institute at the UNAM and the CIDESI Engineering Center. The CI will verify the Gran Telescopio Canarias (GTC) performance during the commissioning phase between First Light and Day One. The design phase is now completed and the project is currently in the manufacturing phase. The CI's main goal is to measure the telescope image quality. To obtain a stable high resolution image, the mechanical structures should be as rigid as possible. This paper describes the several steps of the conceptual design and the Finite Element Analysis (FEA) for the CI mechanical structures. A variety of models were proposed. The FEA was used to evaluate the displacements, shape modes, weight, and thermal expansion of each model. A set of indicators was compared using decision matrices. The best-performing models were subjected to a re-optimization stage. By applying the same decision method, a CI Structure Model was proposed. The FEA results complied with all of the instrument's specifications. Displacement values and vibration frequencies are reported.

  20. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    PubMed Central

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-01-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505

  1. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    NASA Astrophysics Data System (ADS)

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-05-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis.

  2. Navigation accuracy analysis for the Halley flyby phase of a dual comet mission using ion drive

    NASA Technical Reports Server (NTRS)

    Wood, L. J.; Hast, S. L.

    1980-01-01

    A dual comet (Halley Flyby/Tempel 2 Rendezvous) mission, making use of the solar electric propulsion system, is under consideration for a 1985 launch. This paper presents navigation accuracy analysis results for the Halley flyby phase of this mission. Orbit determination and guidance accuracies are presented for the baseline navigation strategy, along with the results of a number of sensitivity studies involving parameters such as data frequencies, data accuracies, ion drive thrust vector errors, comet ephemeris uncertainties, time lags associated with data processing and command sequence generation, probe release time, and navigation coast arc duration.

  3. Acousto-optical pulsar processor frequency scale calibration for increase accuracy measurement of time of arrival radioemission impulses

    NASA Astrophysics Data System (ADS)

    Esepkina, Nelli A.; Lavrov, Aleksandr P.; Molodyakov, Sergey A.

    2006-04-01

    The acousto-optical processor (AOP) is based on an acousto-optical spectrum analyzer with a CCD photodetector operating in a special pipeline mode (shift-and-add mode), which allows spectral components of the input signal to be added with a controlled time delay directly in the CCD photodetector. The proposed AOP was successfully used on the radio telescope RT-64 (Kalyazin Radio Astronomy Observatory, FIAN) for the observation of pulsars at 1.4 GHz in a 45 MHz bandwidth. Calibration of the AOP frequency scale increases the accuracy of measuring the time of arrival of radio emission pulses. Experimental results on the operation of the AOP on RT-64, together with radio emission pulse profiles for the pulsar PSR 1937+21, are presented.

  4. Analysis of proctor marking accuracy in a computer-aided personalized system of instruction course.

    PubMed

    Martin, Toby L; Pear, Joseph J; Martin, Garry L

    2002-01-01

    In a computer-aided version of Keller's personalized system of instruction (CAPSI), students within a course were assigned by a computer to be proctors for tests. Archived data from a CAPSI-taught behavior modification course were analyzed to assess proctor accuracy in marking answers as correct or incorrect. Overall accuracy was increased by having each test marked independently by two proctors, and was higher on incorrect answers when the degree of incorrectness was larger.

  5. Long-term deflections of reinforced concrete elements: accuracy analysis of predictions by different methods

    NASA Astrophysics Data System (ADS)

    Gribniak, Viktor; Bacinskas, Darius; Kacianauskas, Rimantas; Kaklauskas, Gintaris; Torres, Lluis

    2013-08-01

    Long-term deflection response of reinforced concrete flexural members is influenced by the interaction of complex physical phenomena, such as concrete creep, shrinkage and cracking, which makes their prediction difficult. A number of approaches are proposed by design codes with different degrees of simplification and accuracy. This paper statistically investigates the accuracy of long-term deflection predictions made by some of the most widely used design codes (Eurocode 2, ACI 318, ACI 435, and the new Russian code SP 52-101) and a numerical technique proposed by the authors. The accuracy is analyzed using test data of 322 reinforced concrete members from 27 test programs reported in the literature. The predictions of each technique are discussed, and a comparative analysis is made showing the influence of different parameters, such as sustained loading duration, compressive strength of concrete, loading intensity and reinforcement ratio, on the prediction accuracy.

  6. Geolocation and Pointing Accuracy Analysis for the WindSat Sensor

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.

    2006-01-01

    Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets, resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.

  7. Accuracy of mucocutaneous leishmaniasis diagnosis using polymerase chain reaction: systematic literature review and meta-analysis

    PubMed Central

    Gomes, Ciro Martins; Mazin, Suleimy Cristina; dos Santos, Elisa Raphael; Cesetti, Mariana Vicente; Bächtold, Guilherme Albergaria Brízida; Cordeiro, João Henrique de Freitas; Theodoro, Fabrício Claudino Estrela Terra; Damasco, Fabiana dos Santos; Carranza, Sebastián Andrés Vernal; Santos, Adriana de Oliveira; Roselino, Ana Maria; Sampaio, Raimunda Nonata Ribeiro

    2015-01-01

    The diagnosis of mucocutaneous leishmaniasis (MCL) is hampered by the absence of a gold standard. An accurate diagnosis is essential because of the high toxicity of the medications for the disease. This study aimed to assess the ability of polymerase chain reaction (PCR) to identify MCL and to compare these results with clinical research recently published by the authors. A systematic literature review based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA Statement was performed using comprehensive search criteria and communication with the authors. A meta-analysis considering the estimates of the univariate and bivariate models was performed. Specificity near 100% was common among the papers. The primary reason for accuracy differences was sensitivity. The meta-analysis, which was only possible for PCR samples of lesion fragments, revealed a sensitivity of 71% [95% confidence interval (CI) = 0.59; 0.81] and a specificity of 93% (95% CI = 0.83; 0.98) in the bivariate model. The search for measures that could increase the sensitivity of PCR should be encouraged. The quality of the collected material and the optimisation of the amplification of genetic material should be prioritised. PMID:25946238

  8. Modifications to the accuracy assessment analysis routine MLTCRP to produce an output file

    NASA Technical Reports Server (NTRS)

    Carnes, J. G.

    1978-01-01

    Modifications are described that were made to the analysis program MLTCRP in the accuracy assessment software system to produce a disk output file. The output files produced by this modified program are used to aggregate data for regions greater than a single segment.

  9. Accuracy analysis of the space shuttle solid rocket motor profile measuring device

    NASA Technical Reports Server (NTRS)

    Estler, W. Tyler

    1989-01-01

    The Profile Measuring Device (PMD) was developed at the George C. Marshall Space Flight Center following the loss of the Space Shuttle Challenger. It is a rotating gauge used to measure the absolute diameters of mating features of redesigned Solid Rocket Motor field joints. Diameter tolerances of these features are typically ±0.005 inches, and it is required that the PMD absolute measurement uncertainty be within this tolerance. In this analysis, the absolute accuracy of these measurements was found to be ±0.00375 inches, worst case, with a potential accuracy of ±0.0021 inches achievable by improved temperature control.

  10. Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    A new computational analysis tool, the downscaling test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using downscaling tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to any order scheme.

  11. Orbit Determination Accuracy Analysis of the Magnetospheric Multiscale Mission During Perigee Raise

    NASA Technical Reports Server (NTRS)

    Pachura, Daniel A.; Vavrina, Matthew A.; Carpenter, J. R.; Wright, Cinnamon A.

    2014-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) will provide orbit determination and prediction support for the Magnetospheric Multiscale (MMS) mission during the mission's commissioning period. The spacecraft will launch into a highly elliptical Earth orbit in 2015. Starting approximately four days after launch, a series of five large perigee-raising maneuvers will be executed near apogee on a nearly every-other-orbit cadence. This perigee-raise operations concept requires a high-accuracy estimate of the orbital state within one orbit following the maneuver for performance evaluation and a high-accuracy orbit prediction to correctly plan and execute the next maneuver in the sequence. During early mission design, a linear covariance analysis method was used to study orbit determination and prediction accuracy for this perigee-raising campaign. This paper provides a higher fidelity Monte Carlo analysis using the operational COTS extended Kalman filter implementation that was performed to validate the linear covariance analysis estimates and to better characterize orbit determination performance for actively maneuvering spacecraft in a highly elliptical orbit. The study finds that the COTS extended Kalman filter tool converges on accurate definitive orbit solutions quickly, but prediction accuracy through orbits with very low altitude perigees is degraded by the unpredictability of atmospheric density variation.

  12. Orbit Determination Accuracy Analysis of the Magnetospheric Multiscale Mission During Perigee Raise

    NASA Technical Reports Server (NTRS)

    Pachura, Daniel A.; Vavrina, Matthew A.; Carpenter, J. Russell; Wright, Cinnamon A.

    2014-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) will provide orbit determination and prediction support for the Magnetospheric Multiscale (MMS) mission during the mission's commissioning period. The spacecraft will launch into a highly elliptical Earth orbit in 2015. Starting approximately four days after launch, a series of five large perigee-raising maneuvers will be executed near apogee on a nearly every-other-orbit cadence. This perigee-raise operations concept requires a high-accuracy estimate of the orbital state within one orbit following the maneuver for performance evaluation and a high-accuracy orbit prediction to correctly plan and execute the next maneuver in the sequence. During early mission design, a linear covariance analysis method was used to study orbit determination and prediction accuracy for this perigee-raising campaign. This paper provides a higher fidelity Monte Carlo analysis using the operational COTS extended Kalman filter implementation that was performed to validate the linear covariance analysis estimates and to better characterize orbit determination performance for actively maneuvering spacecraft in a highly elliptical orbit. The study finds that the COTS extended Kalman filter tool converges on accurate definitive orbit solutions quickly, but prediction accuracy through orbits with very low altitude perigees is degraded by the unpredictability of atmospheric density variation.

  13. A meta-analysis of confidence and judgment accuracy in clinical decision making.

    PubMed

    Miller, Deborah J; Spengler, Elliot S; Spengler, Paul M

    2015-10-01

    The overconfidence bias occurs when clinicians overestimate the accuracy of their clinical judgments. This bias is thought to be robust leading to an almost universal recommendation by clinical judgment scholars for clinicians to temper their confidence in clinical decision making. An extension of the Meta-Analysis of Clinical Judgment (Spengler et al., 2009) project, the authors synthesized over 40 years of research from 36 studies, from 1970 to 2011, in which the confidence ratings of 1,485 clinicians were assessed in relation to the accuracy of their judgments about mental health (e.g., diagnostic decision making, violence risk assessment, prediction of treatment failure) or psychological issues (e.g., personality assessment). Using a random effects model a small but statistically significant effect (r = .15; CI = .06, .24) was found showing that confidence is better calibrated with accuracy than previously assumed. Approximately 50% of the total variance between studies was due to heterogeneity and not to chance. Mixed effects and meta-regression moderator analyses revealed that confidence is calibrated with accuracy least when there are repeated judgments, and more when there are higher base rate problems, when decisions are made with written materials, and for earlier published studies. Sensitivity analyses indicate a bias toward publishing smaller sample studies with smaller or negative confidence-accuracy effects. Implications for clinical judgment research and for counseling psychology training and practice are discussed.

  14. Accuracy of Pseudo-Inverse Covariance Learning--A Random Matrix Theory Analysis.

    PubMed

    Hoyle, David C

    2011-07-01

    For many learning problems, estimates of the inverse population covariance are required and often obtained by inverting the sample covariance matrix. Increasingly for modern scientific data sets, the number of sample points is less than the number of features and so the sample covariance is not invertible. In such circumstances, the Moore-Penrose pseudo-inverse sample covariance matrix, constructed from the eigenvectors corresponding to nonzero sample covariance eigenvalues, is often used as an approximation to the inverse population covariance matrix. The reconstruction error of the pseudo-inverse sample covariance matrix in estimating the true inverse covariance can be quantified via the Frobenius norm of the difference between the two. The reconstruction error is dominated by the smallest nonzero sample covariance eigenvalues and diverges as the sample size becomes comparable to the number of features. For high-dimensional data, we use random matrix theory techniques and results to study the reconstruction error for a wide class of population covariance matrices. We also show how bagging and random subspace methods can result in a reduction in the reconstruction error and can be combined to improve the accuracy of classifiers that utilize the pseudo-inverse sample covariance matrix. We test our analysis on both simulated and benchmark data sets.
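
    A minimal numerical illustration of the effect analysed above: as the number of samples n approaches the number of features d, the smallest nonzero sample-covariance eigenvalues shrink and the Frobenius-norm error of the pseudo-inverse blows up. The identity population covariance and the specific sizes are simplifying assumptions, not the paper's setup.

        import numpy as np

        rng = np.random.default_rng(0)
        d = 200
        true_inverse = np.eye(d)                 # population covariance = identity

        for n in (50, 150, 190, 400):
            x = rng.standard_normal((n, d))      # n samples of d features
            sample_cov = np.cov(x, rowvar=False)
            pinv_cov = np.linalg.pinv(sample_cov)        # Moore-Penrose pseudo-inverse
            err = np.linalg.norm(pinv_cov - true_inverse, ord="fro")
            print(f"n={n:4d}  Frobenius reconstruction error = {err:10.2f}")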

  15. Understanding the accuracy of parental perceptions of child physical activity: a mixed methods analysis

    PubMed Central

    Kesten, Joanna M.; Jago, Russell; Sebire, Simon J.; Edwards, Mark J.; Pool, Laura; Zahra, Jesmond; Thompson, Janice L.

    2016-01-01

    Background Interventions to increase children’s physical activity (PA) have achieved limited success. This may be attributed to inaccurate parental perceptions of their children’s PA and a lack of recognition of a need to change activity levels. Methods Fifty-three parents participated in semi-structured interviews to determine perceptions of child PA. Perceptions were compared to children’s measured MVPA (classified as meeting or not meeting UK guidelines) to produce three categories: “accurate”, “over-estimate”, “under-estimate”. Deductive content analysis was performed to understand the accuracy of parental perceptions. Results All parents of children meeting the PA guidelines accurately perceived their child’s PA; whilst the majority of parents whose child did not meet the guidelines overestimated their PA. Most parents were unconcerned about their child’s PA level, viewing them as naturally active and willing to be active. Qualitative explanations for perceptions of insufficient activity included children having health problems and preferences for inactive pursuits, and parents having difficulty facilitating PA in poor weather and not always observing their child’s PA level. Social comparisons also influenced parental perceptions. Conclusions Strategies to improve parental awareness of child PA are needed. Perceptions of child PA may be informed by child “busyness”, being unaware of activity levels, and social comparisons. PMID:25872227

  16. Accuracy of bite mark analysis from food substances: A comparative study

    PubMed Central

    Daniel, M. Jonathan; Pazhani, Ambiga

    2015-01-01

    Aims and Objectives: The aims and objectives of the study were to compare the accuracy of bite mark analysis from three different food substances (apple, cheese and chocolate) using two techniques (the manual docking procedure and the computer-assisted overlay generation technique), and to compare the accuracy of the two techniques for bite mark analysis on food substances. Materials and Methods: The individuals who participated in the study were made to bite on three food substances: apple, cheese, and chocolate. Dentate individuals were included in the study. Edentulous individuals and individuals having a missing anterior tooth were excluded from the study. The dental casts of the individual were applied to the positive cast of the bitten food substance to determine docking or matching. Then, computer-generated overlays were compared with the bite mark pattern on the foodstuff. Results: The results were tabulated; the comparison of bite mark analysis on the three different food substances was analyzed by the Kruskal-Wallis ANOVA test, and the comparison of the two techniques was analyzed by Spearman's rho correlation coefficient. Conclusion: On comparing the bite mark analysis from the three food substances (apple, cheese and chocolate), the accuracy was found to be greater for chocolate and cheese than for apple. PMID:26816463

  17. Development of a Class of Smoothness-Increasing-Accuracy-Conserving (SIAC) Methods for Post-Processing Discontinuous Galerkin Solutions

    DTIC Science & Technology

    2013-07-01

    the theoretical extensions, pointwise error estimates demonstrating that higher-order accuracy of order 2k+2 –[d/2] is indeed achieved in the L∞-norm...estimates to the entire domain were also done. This was a significant extension as pointwise error estimates will be more useful for quantifying

  18. Accuracy and repeatability of Roentgen stereophotogrammetric analysis (RSA) for measuring knee laxity in longitudinal studies.

    PubMed

    Fleming, B C; Peura, G D; Abate, J A; Beynnon, B D

    2001-10-01

    Roentgen stereophotogrammetric analysis (RSA) can be used to assess temporal changes in anterior-posterior (A-P) knee laxity. However, the accuracy and precision of RSA is dependent on many factors and should be independently evaluated for a particular application. The objective of this study was to evaluate the use of RSA for measuring A-P knee laxity. The specific aims were to assess the variation or "noise" inherent to RSA, to determine the reproducibility of RSA for repeated A-P laxity testing, and to assess the accuracy of these measurements. Two experiments were performed. The first experiment utilized three rigid models of the tibiofemoral joint to assess the noise and to compare digitization errors of two independent examiners. No differences were found in the kinematic outputs of the RSA due to examiner, repeated trials, or the model used. In a second experiment, A-P laxity values between the A-P shear load limits of +/-60 N of five cadaver goat knees were measured to assess the error associated with repeated testing. The RSA laxity values were also compared to those obtained from a custom designed linkage system. The mean A-P laxity values with the knee at 30, 60, and 90 degrees of flexion for the ACL-intact goat knee (+/-95% confidence interval) were 0.8 (+/-0.25), 0.9 (+/-0.29), and 0.4 (+/-0.22) mm, respectively. In the ACL-deficient knee, the A-P laxity values increased by an order of magnitude to 8.8 (+/-1.39), 7.6 (+/-1.32), and 3.1 (+/-1.20) mm, respectively. No significant differences were found between the A-P laxity values measured by RSA and the independent measurement technique. A highly significant linear relationship (r² = 0.83) was also found between these techniques. This study suggests that the RSA method is an accurate and precise means to measure A-P knee laxity for repeated testing over time.

  19. Analysis of machining accuracy during free form surface milling simulation for different milling strategies

    NASA Astrophysics Data System (ADS)

    Matras, A.; Kowalczyk, R.

    2014-11-01

    The analysis results of machining accuracy after free form surface milling simulations (based on machining EN AW-7075 alloy) for different machining strategies (Level Z, Radial, Square, Circular) are presented in this work. The milling simulations were performed using CAD/CAM Esprit software. The accuracy of the obtained allowance is defined as the difference between the theoretical surface of the workpiece (the surface designed in CAD software) and the machined surface after a milling simulation. The difference between the two surfaces describes the roughness that results from the tool shape being mapped onto the machined surface. The accuracy of the remaining allowance directly indicates the surface quality after finish machining. The described methodology of using CAD/CAM software can reduce the design time of the machining process for free form surface milling on a 5-axis CNC milling machine, by omitting the need to produce the part on a milling machine in order to measure the machining accuracy for the selected strategies and cutting data.

  20. Assembly accuracy analysis for small components with a planar surface in large-scale metrology

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Huang, Peng; Li, Jiangxiong; Ke, Yinglin; Yang, Bingru; Maropoulos, Paul G.

    2016-04-01

    Large-scale mechanical products, such as aircraft and rockets, consist of large numbers of small components, which introduce additional difficulty for assembly accuracy and error estimation. Planar surfaces, as key product characteristics, are usually utilised for positioning small components in the assembly process. This paper focuses on assembly accuracy analysis of small components with planar surfaces in large-volume products. To evaluate the accuracy of the assembly system, an error propagation model for measurement error and fixture error is proposed, based on the assumption that all errors are normally distributed. In this model, a general coordinate vector is adopted to represent the position of the components. The error transmission functions are simplified into a linear model, and the coordinates of the reference points are composed of a theoretical value and a random error. The installation of a Head-Up Display is taken as an example to analyse the assembly error of small components based on the propagation model. The result shows that the final coordination accuracy is mainly determined by the measurement error of the planar surface of the small components. To reduce the uncertainty of the plane measurement, an evaluation index of the measurement strategy is presented. This index reflects the distribution of the sampling point set and can be calculated from an inertia moment matrix. Finally, a practical application is introduced to validate the evaluation index.
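    The linearized error propagation described above amounts to mapping input covariances through a Jacobian; a minimal sketch, with an illustrative Jacobian and error budgets rather than the paper's model, is:

      import numpy as np

      def propagate_covariance(J, cov_in):
          # Propagate an input covariance through a linearized model y = J x.
          return J @ cov_in @ J.T

      # Illustrative: component pose (x, y, theta) driven by four reference coordinates.
      J = np.array([[1.0, 0.0, 0.5, 0.0],
                    [0.0, 1.0, 0.0, 0.5],
                    [0.0, 0.0, 0.1, -0.1]])

      # Independent measurement and fixture error covariances (variances in mm^2).
      cov_measurement = np.diag([0.02**2] * 4)
      cov_fixture = np.diag([0.05**2] * 4)

      # For independent, normally distributed error sources the covariances add.
      cov_pose = propagate_covariance(J, cov_measurement + cov_fixture)
      print("1-sigma pose errors:", np.sqrt(np.diag(cov_pose)))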

  1. Analysis and Improvement of Geo-Referencing Accuracy in Long Term Global AVHRR Data

    NASA Astrophysics Data System (ADS)

    Khlopenkov, K.; Minnis, P.

    2011-12-01

    Precise geolocation is one of the fundamental requirements for generating a high-quality Advanced Very High Resolution Radiometer (AVHRR) Satellite Climate Data Record (SCDR) at 1-km spatial resolution for climate applications. The Global Climate Observing System (GCOS) and the Committee on Earth Observing Satellites (CEOS) identified the requirement for the geolocation accuracy of satellite data for climate applications as 1/3 field-of-view (FOV). This requirement for the AVHRR series on the National Oceanic and Atmospheric Administration (NOAA) platforms cannot be met without implementing ground control point (GCP) correction, especially for historical data, because of the limited accuracy of orbit models and uncertainty in the satellite attitude angles. This work presents a new analysis of the geo-referencing accuracy of global AVHRR data that uses automated image matching at pre-selected GCP locations. As the reference image, we have been using clear-sky monthly composite imagery derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) MOD09 dataset at 250-m resolution. The image matching technique is applicable not only to processing the daytime observations from the optical solar bands, but also to the nighttime imagery by using the long-wave thermal channels. The method includes ortho-rectification to correct for surface elevation and achieves sub-pixel accuracy in both the along-scan and along-track directions. The produced image displacement map is then used to derive a correction to the satellite clock error and the attitude angles. The statistics and patterns of these corrections have been analyzed for different NOAA polar-orbiting satellites using the HRPT, LAC, and GAC data sets. The application of the developed processing system showed that the algorithm achieves better than 1/3 FOV geolocation accuracy for most AVHRR 1-km scenes. It has a high efficiency rate (over 97%) for global AVHRR data from NOAA-6 through NOAA-19.
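    The automated sub-pixel matching at a GCP chip can be illustrated by a generic normalized cross-correlation search with parabolic refinement of the correlation peak; this is a hedged sketch of the general technique, not the authors' implementation:

      import numpy as np

      def ncc(a, b):
          # Normalized cross-correlation of two equally sized windows.
          a = a - a.mean()
          b = b - b.mean()
          denom = np.sqrt((a * a).sum() * (b * b).sum())
          return (a * b).sum() / denom if denom > 0 else 0.0

      def match_subpixel(reference, chip, search=5):
          # Return the (dy, dx) shift of `chip` inside `reference`.
          # Assumes `reference` is larger than `chip` by at least `search`
          # pixels on every side.
          h, w = chip.shape
          cy, cx = (reference.shape[0] - h) // 2, (reference.shape[1] - w) // 2
          scores = np.full((2 * search + 1, 2 * search + 1), -1.0)
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  win = reference[cy + dy:cy + dy + h, cx + dx:cx + dx + w]
                  scores[dy + search, dx + search] = ncc(win, chip)
          iy, ix = np.unravel_index(scores.argmax(), scores.shape)

          def parabolic(sm1, s0, sp1):
              denom = sm1 - 2 * s0 + sp1
              return 0.0 if denom == 0 else 0.5 * (sm1 - sp1) / denom

          # Sub-pixel refinement along each axis (skipped at the search border).
          dy_sub = parabolic(*scores[iy - 1:iy + 2, ix]) if 0 < iy < 2 * search else 0.0
          dx_sub = parabolic(*scores[iy, ix - 1:ix + 2]) if 0 < ix < 2 * search else 0.0
          return iy - search + dy_sub, ix - search + dx_sub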

  2. Accuracy and repeatability of an optical motion analysis system for measuring small deformations of biological tissues.

    PubMed

    Liu, Helen; Holt, Cathy; Evans, Sam

    2007-01-01

    Optical motion analysis techniques have been widely used in biomechanics for measuring large-scale motions such as gait, but have not yet been significantly explored for measuring smaller movements such as tooth displacements under load. In principle, very accurate measurements could be possible, and this could provide a valuable tool in many engineering applications. The aim of this study was to evaluate the accuracy and repeatability of the Qualisys ProReflex-MCU120 system when measuring small displacements, as a step towards measuring tooth displacements to characterise the properties of the periodontal ligament. Accuracy and repeatability of the system were evaluated using a wedge comparator with a resolution of 0.25 microm to provide measured marker displacements in three orthogonal directions. The marker was moved in ten steps in each direction, for each of seven step sizes (0.5, 1, 2, 3, 5, 10, and 20 microm), repeated five times. Spherical and diamond markers were tested. The system accuracy (i.e. the percentage of the maximum absolute error over the measurement range), in the 20-200 microm range, was +/-1.17%, +/-1.67% and +/-1.31% for the diamond marker in the x, y and z directions, while the system accuracy for the spherical marker was +/-1.81%, +/-2.37% and +/-1.39%. The system repeatability (i.e. the maximum standard deviation in the measurement range), measured on different days and under different light intensities and temperatures, with step-up and then step-down measurements carried out five times for the same step size, was +/-1.7, +/-2.3 and +/-1.9 microm for the diamond marker, and +/-2.6, +/-3.9 and +/-1.9 microm for the spherical marker in the x, y and z directions, respectively. These results demonstrate that the system provides sufficient accuracy for measuring tooth displacements and could potentially be useful in many other applications.
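    The two reported figures of merit can be reproduced from raw step data in a few lines; a minimal sketch with invented readings (microns), assuming five repeats over the 20-200 micron range, is:

      import numpy as np

      reference = np.array([20.0, 40.0, 60.0, 100.0, 200.0])      # true positions, um
      measured = np.array([[20.3, 40.1, 59.6, 100.8, 199.2],      # five repeated runs
                           [19.8, 40.4, 60.2, 101.1, 198.7],
                           [20.1, 39.7, 60.5, 100.4, 199.9],
                           [20.2, 40.2, 59.8, 100.6, 199.5],
                           [19.9, 40.0, 60.1, 100.9, 199.1]])

      span = reference.max() - reference.min()
      # Accuracy: maximum absolute error expressed as a percentage of the range.
      accuracy_pct = np.max(np.abs(measured - reference)) / span * 100.0
      # Repeatability: largest standard deviation across repeats at any position.
      repeatability = np.max(measured.std(axis=0, ddof=1))

      print(f"accuracy +/-{accuracy_pct:.2f}% of range, repeatability +/-{repeatability:.2f} um")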

  3. Accuracy analysis of CryoSat-2 SARIn mode data over Antarctica

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Bamber, Jonathan; Cheng, Xiao

    2015-04-01

    In 2010, CryoSat-2 was launched, carrying a unique satellite radar altimetry (SRA) instrument called the SAR/Interferometric Radar Altimeter (SIRAL), with the aim of measuring and monitoring sea ice, ice sheets and mountain glaciers. The novel SAR Interferometric mode (SARInM) of CryoSat-2 is designed to improve the accuracy, resolution and geolocation of height measurements over the steeper margins of ice sheets and ice caps. Over these areas, it employs the synthetic aperture radar (SAR) capability to reduce the size of the footprint to effectively 450 m along track and ~1 km across track, a concept implemented from an airborne prototype originally termed a delay-Doppler altimeter. Additionally, CryoSat-2 uses the phase difference between its two antennas to estimate the surface slope in the across-track direction and to identify the point of closest approach directly. The phase difference is 2π for a surface slope of approximately 1 deg. If the slope is above this threshold, the tracked surface in the returned waveform may not be the point of closest approach, causing an error in the slope correction. For this reason, the analysis was limited to slopes of 1 deg or less in this study. We used the extensive coverage of Antarctica provided by the ICESat laser altimeter mission between 2003 and 2009 to assess the accuracy of the SARInM data. We corrected for changes in elevation due to the interval between the acquisition of the ICESat and CryoSat-2 data (from July 2010 to December 2013). Two methods were used: (1) each ICESat point was compared with a DEM derived from CryoSat-2 data (Point-to-DEM; PtoDEM), and (2) each ICESat point was compared with a CryoSat-2 point directly (Point-to-Point; PtoP). For PtoDEM, CryoSat-2 elevations were interpolated onto a regular 1 km polar stereographic grid with a standard parallel of 71°S, using ordinary kriging. For PtoP, the maximum distance between a CryoSat-2 point location and an ICESat point location was set to 35 m. For the areas with slopes less than 0.2 deg, the
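    The PtoP comparison reduces to a nearest-neighbour search with a 35 m distance cap; a hedged sketch using hypothetical projected coordinates (not the mission data) is:

      import numpy as np
      from scipy.spatial import cKDTree

      def ptop_differences(cryosat_xy, cryosat_h, icesat_xy, icesat_h, max_dist=35.0):
          # Pair each ICESat point with its nearest CryoSat-2 point and keep
          # only pairs closer than `max_dist` metres.
          tree = cKDTree(cryosat_xy)
          dist, idx = tree.query(icesat_xy, k=1)
          keep = dist <= max_dist
          # Elevation difference, CryoSat-2 minus ICESat, for the matched pairs.
          return cryosat_h[idx[keep]] - icesat_h[keep]

      # Illustrative usage with random points (replace with real projected data).
      rng = np.random.default_rng(0)
      cs_xy = rng.uniform(0, 1000, size=(500, 2))
      cs_h = rng.normal(1500, 5, size=500)
      is_xy = rng.uniform(0, 1000, size=(200, 2))
      is_h = rng.normal(1500, 5, size=200)
      dh = ptop_differences(cs_xy, cs_h, is_xy, is_h)
      print(f"n={dh.size}, mean dh={dh.mean():.2f} m, std={dh.std():.2f} m")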

  4. Novel Resistance Measurement Method: Analysis of Accuracy and Thermal Dependence with Applications in Fiber Materials.

    PubMed

    Casans, Silvia; Rosado-Muñoz, Alfredo; Iakymchuk, Taras

    2016-12-14

    Material resistance is important since different physicochemical properties can be extracted from it. This work describes a novel resistance measurement method valid for a wide range of resistance values up to 100 GΩ at a low powered, small sized, digitally controlled and wireless communicated device. The analog and digital circuits of the design are described, analysing the main error sources affecting the accuracy. Accuracy and extended uncertainty are obtained for a pattern decade box, showing a maximum of 1% accuracy for temperatures below 30 ∘C in the range from 1 MΩ to 100 GΩ. Thermal analysis showed stability up to 50 ∘C for values below 10 GΩ and systematic deviations for higher values. Power supply Vi applied to the measurement probes is also analysed, showing no differences in case of the pattern decade box, except for resistance values above 10 GΩ and temperatures above 35 ∘C. To evaluate the circuit behaviour under fiber materials, an 11-day drying process in timber from four species (Oregon pine-Pseudotsuga menziesii, cedar-Cedrus atlantica, ash-Fraxinus excelsior, chestnut-Castanea sativa) was monitored. Results show that the circuit, as expected, provides different resistance values (they need individual conversion curves) for different species and the same ambient conditions. Additionally, it was found that, contrary to the decade box analysis, Vi affects the resistance value due to material properties. In summary, the proposed circuit is able to accurately measure material resistance that can be further related to material properties.

  5. Novel Resistance Measurement Method: Analysis of Accuracy and Thermal Dependence with Applications in Fiber Materials

    PubMed Central

    Casans, Silvia; Rosado-Muñoz, Alfredo; Iakymchuk, Taras

    2016-01-01

    Material resistance is important since different physicochemical properties can be extracted from it. This work describes a novel resistance measurement method valid for a wide range of resistance values up to 100 GΩ at a low powered, small sized, digitally controlled and wireless communicated device. The analog and digital circuits of the design are described, analysing the main error sources affecting the accuracy. Accuracy and extended uncertainty are obtained for a pattern decade box, showing a maximum of 1% accuracy for temperatures below 30 ∘C in the range from 1 MΩ to 100 GΩ. Thermal analysis showed stability up to 50 ∘C for values below 10 GΩ and systematic deviations for higher values. Power supply Vi applied to the measurement probes is also analysed, showing no differences in case of the pattern decade box, except for resistance values above 10 GΩ and temperatures above 35 ∘C. To evaluate the circuit behaviour under fiber materials, an 11-day drying process in timber from four species (Oregon pine-Pseudotsuga menziesii, cedar-Cedrus atlantica, ash-Fraxinus excelsior, chestnut-Castanea sativa) was monitored. Results show that the circuit, as expected, provides different resistance values (they need individual conversion curves) for different species and the same ambient conditions. Additionally, it was found that, contrary to the decade box analysis, Vi affects the resistance value due to material properties. In summary, the proposed circuit is able to accurately measure material resistance that can be further related to material properties. PMID:27983652

  6. Local crystal structure analysis with 10-pm accuracy using scanning transmission electron microscopy.

    PubMed

    Saito, Mitsuhiro; Kimoto, Koji; Nagai, Takuro; Fukushima, Shun; Akahoshi, Daisuke; Kuwahara, Hideki; Matsui, Yoshio; Ishizuka, Kazuo

    2009-06-01

    We demonstrate local crystal structure analysis based on annular dark-field (ADF) imaging in scanning transmission electron microscopy (STEM). Using a stabilized STEM instrument and customized software, we realize elemental discrimination and atom-position determination with 10-pm-order accuracy, which can reveal major cation displacements associated with a variety of material properties, e.g. ferroelectricity and colossal magnetoresistivity. A-site ordered/disordered perovskite manganites Tb(0.5)Ba(0.5)MnO(3) are analysed; A-site ordering and a Mn-site displacement of 12 pm are detected in each specific atomic column. This method can be applied to practical and advanced materials, e.g. strongly correlated electron materials.

  7. Menu label accuracy at a university's foodservices. An exploratory recipe nutrition analysis.

    PubMed

    Feldman, Charles; Murray, Douglas; Chavarria, Stephanie; Zhao, Hang

    2015-09-01

    The increase in the weight of American adults and children has been positively associated with the prevalence of food-away-from-home consumption. The objective was to assess the accuracy of the claimed nutritional information for foods purchased from contracted foodservices located on the campus of an institution of higher education. Fifty popular food items were randomly collected from five main dining outlets on a selected campus in the northeastern United States. The sampling was repeated three times on separate occasions for an aggregate total of 150 food samples. The samples were then weighed and assessed for nutrient composition (protein, cholesterol, fiber, carbohydrates, total fat, calories, sugar, and sodium) using nutrient analysis software. Results were compared with the foodservices' published nutrition information. Comparisons of the claimed and measured values were performed using the paired-sample t-test; descriptive statistics were used as well. Among the nine reported values, six (total fat, sodium, protein, fiber, cholesterol, and weight) had more than 10% positive average discrepancies between measured and claimed values. Statistical significance of the variance was obtained in four of the eight categories of nutrient content: total fat, sodium, protein, and cholesterol (P < .05). Significance was also reached in the variance of actual portion weight compared with the published claims (P < .001). Significant differences in portion size (weight), total fat, sodium, protein, and cholesterol were found between the sampled values and the foodservices' published claims. The findings from this study raise the concern that if the actual nutritional information does not accurately reflect the declared values on menus, conclusions, decisions and actions based on the posted information may not be valid.

  8. Accuracy analysis of height difference models derived from terrestrial laser scanning point clouds

    NASA Astrophysics Data System (ADS)

    Glira, Philipp; Briese, Christian; Pfeifer, Norbert; Dusik, Jana; Hilger, Ludwig; Neugirg, Fabian; Baewert, Henning

    2014-05-01

    In many research areas the temporal development of the Earth's surface topography is investigated for geomorphological analysis (e.g. landslide monitoring). Terrestrial laser scanning (TLS) is often used for this purpose, as it allows a fast and detailed 3D reconstruction of the sampled object. The temporal development of the Earth's surface is usually investigated on the basis of rasterized data, i.e. digital terrain models (DTM). The difference between two DTMs - the difference model - should preferably correspond to the terrain height changes that occurred between the measurement campaigns. In reality, these height differences can be influenced by numerous potential error sources. The height accuracy of each raster cell is affected primarily by (a) the measurement accuracy of the deployed TLS, (b) the terrain topography (e.g. roughness), (c) the registration accuracy, (d) the georeferencing accuracy and (e) the raster interpolation method. Thus, in this contribution, height differences are treated as stochastic variables in order to estimate their precision. For an accurate estimation of the height difference precision, detailed knowledge of the whole processing pipeline (from the raw point clouds to the final difference model) is essential. In this study, the height difference precision is first estimated by rigorous error propagation. As the main result, for each raster cell of the difference model a corresponding height error is estimated, forming an error map. A statistical hypothesis test is presented in order to judge the significance of a height difference. Furthermore, in order to assess the effect of single factors on the final height difference precision, multivariate statistical methods are applied. This analysis allows the deduction of a simple error propagation model, neglecting error sources with small impact on the final precision. The proposed method is demonstrated by means of TLS data acquired at the Gepatschferner (Tyrol, Austria). This study was carried
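    The per-cell hypothesis test described above can be sketched as a simple two-sided z-test on the difference model, assuming independent, normally distributed height errors per epoch (all rasters below are synthetic placeholders):

      import numpy as np
      from scipy.stats import norm

      def significant_change(dtm_t1, dtm_t2, sigma_t1, sigma_t2, alpha=0.05):
          # Return the height difference and a mask of significantly changed cells.
          dh = dtm_t2 - dtm_t1
          # Errors of the two epochs are assumed independent, so variances add.
          sigma_dh = np.sqrt(sigma_t1**2 + sigma_t2**2)
          z_crit = norm.ppf(1.0 - alpha / 2.0)
          return dh, np.abs(dh) > z_crit * sigma_dh

      # Illustrative 100 x 100 rasters with 2 cm and 3 cm per-cell errors.
      rng = np.random.default_rng(1)
      t1 = rng.normal(0.0, 0.02, (100, 100))
      t2 = t1 + rng.normal(0.0, 0.03, (100, 100))
      t2[40:60, 40:60] += 0.20  # simulated erosion/deposition patch
      dh, mask = significant_change(t1, t2, np.full_like(t1, 0.02), np.full_like(t1, 0.03))
      print(f"{mask.mean() * 100:.1f}% of cells changed significantly")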

  9. A newly developed peripheral anterior chamber depth analysis system: principle, accuracy, and reproducibility

    PubMed Central

    Kashiwagi, K; Kashiwagi, F; Toda, Y; Osada, K; Tsumura, T; Tsukahara, S

    2004-01-01

    Aim: To develop a new, non-contact system for measuring anterior chamber depth (ACD) quantitatively, and to investigate its accuracy as well as its interobserver and intraobserver reproducibility. Methods: The system scanned the ACD from the optical axis to the limbus in approximately 0.5 second and took 21 consecutive slit lamp images at 0.4 mm intervals. An installed computer program automatically evaluated the ACD, central corneal thickness (CT), and corneal radius of curvature (CRC) instantly. A dummy eye was used for investigating measurement accuracy. The effects of CT and CRC on the measurement results were examined using a computer simulation model to minimise measurement errors. Three examiners measured the ACD in 10 normal eyes, and interobserver and intraobserver reproducibility was analysed. Results: The ACD values measured by this system were very similar to the theoretical values. An increase in CRC and a decrease in CT decreased the measured ACD, and vice versa. Data calibration using the evaluated CT and CRC successfully reduced measurement errors. Intraobserver and interobserver variations were small. Their coefficient of variation values were 7.4% (SD 2.3%) and 6.7% (0.7%), and these values tended to increase with distance from the optical axis. Conclusion: The current system can measure ACD with high accuracy as well as high intraobserver and interobserver reproducibility. It has potential use in measuring ACD quantitatively and in screening subjects with narrow angles. PMID:15258020

  10. On the increase of geometric accuracy with the help of stiffening elements for robot-based incremental sheet metal forming

    NASA Astrophysics Data System (ADS)

    Thyssen, Lars; Seim, Patrick; Störkle, Denis D.; Kuhlenkötter, Bernd

    2016-10-01

    This paper describes new developments in an incremental, robot-based sheet metal forming process ('Roboforming') for the production of sheet metal components in small lot sizes and as prototypes. Incremental sheet forming (ISF) offers high geometrical form flexibility without the need for part-dependent tools. To transfer ISF to industrial applications, it is necessary to address the remaining constraints, e.g. the low geometrical accuracy. In particular, the subsequent deformation resulting from the interaction of differently shaped elements causes geometrical deviations, which limit the scope of formable parts. The impact of the resulting forming forces varies according to the shape of the individual elements. The paper therefore proposes and examines a new approach to stabilizing the geometrical accuracy, without losing the universal approach of Roboforming, by inserting stiffening elements. These elements, with varying cross-sections and orientations at the initial area, must be examined for their stabilizing or subsequently distorting impact. In particular, the different impacts of the subsequent forming of stiffness features, in contrast to direct forming, are studied in detail.

  11. Accuracy Analysis and Validation of the Mars Science Laboratory (MSL) Robotic Arm

    NASA Technical Reports Server (NTRS)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) Curiosity Rover is currently exploring the surface of Mars with a suite of tools and instruments mounted to the end of a five degree-of-freedom robotic arm. To verify and meet a set of end-to-end system level accuracy requirements, a detailed positioning uncertainty model of the arm was developed and exercised over the arm operational workspace. Error sources at each link in the arm kinematic chain were estimated and their effects propagated to the tool frames. A rigorous test and measurement program was developed and implemented to collect data to characterize and calibrate the kinematic and stiffness parameters of the arm. Numerous absolute and relative accuracy and repeatability requirements were validated with a combination of analysis and test data extrapolated to the Mars gravity and thermal environment. Initial results of arm accuracy and repeatability on Mars demonstrate the effectiveness of the modeling and test program as the rover continues to explore the foothills of Mount Sharp.
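    One common way to exercise such a positioning uncertainty model is Monte Carlo propagation of per-joint errors through the forward kinematics; the sketch below uses a simplified planar five-joint chain with invented link lengths and error budgets, not the MSL arm model:

      import numpy as np

      LINKS = np.array([1.0, 0.8, 0.6, 0.2, 0.1])               # link lengths, m (illustrative)
      JOINT_SIGMA = np.deg2rad([0.05, 0.05, 0.05, 0.1, 0.1])    # 1-sigma joint errors, rad

      def tool_position(angles):
          # Forward kinematics of a planar serial chain: (x, y) of the tool frame.
          cum = np.cumsum(angles)
          return np.array([np.sum(LINKS * np.cos(cum)), np.sum(LINKS * np.sin(cum))])

      def monte_carlo_tool_error(nominal_angles, n=20000, seed=0):
          rng = np.random.default_rng(seed)
          nominal = tool_position(nominal_angles)
          samples = rng.normal(nominal_angles, JOINT_SIGMA, size=(n, len(LINKS)))
          errors = np.array([tool_position(s) - nominal for s in samples])
          return np.linalg.norm(errors, axis=1)

      err = monte_carlo_tool_error(np.deg2rad([30, -20, 45, 10, 0]))
      print(f"mean tool error {err.mean()*1000:.2f} mm, 99th percentile {np.percentile(err, 99)*1000:.2f} mm")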

  12. Future dedicated Venus-SGG flight mission: Accuracy assessment and performance analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Houtse; Zhong, Min; Yun, Meijuan

    2016-01-01

    This study concentrates principally on the systematic requirements analysis for the future dedicated Venus-SGG (spacecraft gravity gradiometry) flight mission in China, in respect of the matching measurement accuracies of the spacecraft-based scientific instruments and the orbital parameters of the spacecraft. Firstly, we created and proved the single and combined analytical error models of the cumulative Venusian geoid height influenced by the gravity gradient error of the spacecraft-borne atom-interferometer gravity gradiometer (AIGG) and by the orbital position and velocity errors tracked by the deep space network (DSN) on Earth stations. Secondly, weighing the advantages and disadvantages among the electrostatically suspended gravity gradiometer, the superconducting gravity gradiometer and the AIGG, the ultra-high-precision spacecraft-borne AIGG is well suited to making a significant contribution to globally mapping the Venusian gravitational field and modeling the geoid with unprecedented accuracy and spatial resolution. Finally, the future dedicated Venus-SGG spacecraft should adopt the optimal matching accuracy indices, consisting of 3 × 10^-13 /s^2 in gravity gradient, 10 m in orbital position and 8 × 10^-4 m/s in orbital velocity, and the preferred orbital parameters, comprising an orbital altitude of 300 ± 50 km, an observation time of 60 months and a sampling interval of 1 s.

  13. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  14. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  15. Increasing the Accuracy of Reading Decoding Skills Exhibited by Hearing-Impaired Students with the Use of a Sound/Letter Unit Instructional Approach.

    ERIC Educational Resources Information Center

    Becker, Katharine E.

    This practicum was designed to increase the accuracy of reading decoding skills exhibited by five elementary and intermediate level hearing-impaired students in a mainstream setting. Subjects were fitted with appropriate amplification to optimize their residual hearing but were performing below their grade-level placement in the areas of word…

  16. Accuracy of surface tension measurement from drop shapes: the role of image analysis.

    PubMed

    Kalantarian, Ali; Saad, Sameh M I; Neumann, A Wilhelm

    2013-11-01

    Axisymmetric Drop Shape Analysis (ADSA) has been extensively used for surface tension measurement. In essence, ADSA works by matching a theoretical profile of the drop to the extracted experimental profile, taking surface tension as an adjustable parameter. Of the three main building blocks of ADSA, i.e. edge detection, the numerical integration of the Laplace equation for generating theoretical curves, and the optimization procedure, only edge detection (which extracts the drop profile line from the drop image) needs extensive study; for the purpose of this article, the numerical integration of the Laplace equation and the optimization procedure require only minor effort. It is the aim of this paper to investigate how far the surface tension accuracy of drop shape techniques can be pushed by fine tuning and optimizing edge detection strategies for a given drop image. Two different aspects of edge detection are pursued here: sub-pixel resolution and pixel resolution. The effect of two sub-pixel resolution strategies, i.e. spline and sigmoid, on the accuracy of surface tension measurement is investigated. It is found that the number of pixel points used in the fitting procedure of the sub-pixel resolution techniques is crucial, and that its value should be determined based on the contrast of the image, i.e. the gray-level difference between the drop and the background. On the pixel resolution side, two suitable and reliable edge detectors, i.e. Canny and SUSAN, are explored, and the effect of the user-specified parameters of the edge detector on the accuracy of surface tension measurement is scrutinized. Based on the contrast of the image, an optimum value of the user-specified parameter of the SUSAN edge detector is suggested. Overall, an accuracy of 0.01 mJ/m(2) is achievable for surface tension determination by careful fine tuning of the edge detection algorithms.
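    The sigmoid sub-pixel strategy mentioned above amounts to fitting a smooth step to the gray levels across the edge and taking its midpoint; a self-contained sketch with a synthetic intensity profile (not a real drop image) is:

      import numpy as np
      from scipy.optimize import curve_fit

      def sigmoid(x, lo, hi, x0, width):
          # Logistic step from background level `lo` to drop level `hi`.
          return lo + (hi - lo) / (1.0 + np.exp(-(x - x0) / width))

      # Synthetic 1-D intensity profile perpendicular to the drop edge.
      x = np.arange(20, dtype=float)
      true_edge = 9.37
      rng = np.random.default_rng(2)
      profile = sigmoid(x, 30.0, 200.0, true_edge, 0.8) + rng.normal(0, 2.0, x.size)

      # Initial guesses from the data; the fitted x0 is the sub-pixel edge position.
      p0 = [profile.min(), profile.max(), x.mean(), 1.0]
      params, _ = curve_fit(sigmoid, x, profile, p0=p0)
      print(f"estimated edge at {params[2]:.3f} px (true {true_edge} px)")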

  17. A critical analysis of the accuracy of several numerical techniques for combustion kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1993-01-01

    A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
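    The kind of accuracy comparison described here can be reproduced with any modern stiff integrator by measuring the error of a loose-tolerance run against a tight-tolerance reference; the sketch below uses SciPy's BDF solver on the Robertson kinetics problem purely as a stand-in for the report's combustion test cases:

      import numpy as np
      from scipy.integrate import solve_ivp

      def robertson(t, y):
          # Classic stiff three-species kinetics system (stand-in problem).
          y1, y2, y3 = y
          return [-0.04 * y1 + 1.0e4 * y2 * y3,
                  0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2**2,
                  3.0e7 * y2**2]

      y0 = [1.0, 0.0, 0.0]
      t_eval = np.logspace(-5, 4, 50)

      reference = solve_ivp(robertson, (0, 1e4), y0, method="BDF",
                            rtol=1e-10, atol=1e-12, t_eval=t_eval)
      trial = solve_ivp(robertson, (0, 1e4), y0, method="BDF",
                        rtol=1e-5, atol=1e-8, t_eval=t_eval)

      # Root-mean-square error over species and output times, in the spirit of
      # the mean integrated RMS error used to compare the codes.
      rms = np.sqrt(np.mean((trial.y - reference.y) ** 2))
      print(f"RMS error vs reference: {rms:.2e}")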

  18. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to implement lunar surface sampling and to return the samples to the Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a camera rotating platform. Optical images of the sampling area can be obtained by PCAM in the form of two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images. The lunar terrain can then be reconstructed based on photogrammetry. The installation parameters of PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied for this work. Research contents such as the observation program and the specific solution methods for the installation parameters are then introduced. The parametric solution accuracy is analyzed according to observations obtained in the PCAM scientific validation experiment, which is used to test the authenticity of the PCAM detection process, ground data processing methods, product quality and so on. Analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images by less than 1 pixel. The measurement methods and parameter accuracy studied in this paper therefore meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  19. Design and accuracy analysis of a metamorphic CNC flame cutting machine for ship manufacturing

    NASA Astrophysics Data System (ADS)

    Hu, Shenghai; Zhang, Manhui; Zhang, Baoping; Chen, Xi; Yu, Wei

    2016-09-01

    Current research on processing large fabrication holes on complex spatial curved surfaces mainly focuses on the design of CNC flame cutting machines for ship hulls in ship manufacturing. However, existing machines cannot meet the requirements of continuous cutting with variable pass conditions because of their fixed configuration, and cannot realize high-precision processing because the accuracy theory has not been studied adequately. This paper deals with the structural design and accuracy prediction technology of novel machine tools to solve the problem of continuous and high-precision cutting. The required variable-trajectory and variable-pose kinematic characteristics of the non-contact cutting tool are determined, and a metamorphic CNC flame cutting machine designed using the metamorphic principle is presented. To analyze the kinematic accuracy of the machine, models of joint clearances, manufacturing tolerances and errors in the input variables, and error models considering their combined effects, are derived based on screw theory after establishing the ideal kinematic models. Numerical simulations, a processing experiment and a trajectory tracking experiment are conducted for an eccentric hole with bevels on a cylindrical surface. The results for the cutting pass contour and the kinematic error interval, in which the position error ranges from -0.975 mm to +0.628 mm and the orientation error from -0.01 rad to +0.01 rad, indicate that the developed machine can complete the cutting process continuously and effectively, and that the established kinematic error models are effective, although the interval is within a 'large' range. The work also shows the matching property between the metamorphic principle and variable working tasks, and the mapping correlation between the original design parameters and the kinematic errors of the machines. This research develops a metamorphic CNC flame cutting machine and establishes kinematic error models for the accuracy analysis of machine tools.

  20. The accuracy of approximate solutions in the analysis of fracture of composites

    NASA Technical Reports Server (NTRS)

    Goree, J. G.

    1985-01-01

    This paper concerns the accuracy of three related mathematical models (developed by Hedgepeth, Eringen and Sendeckyj and Jones) used in the stress analysis and in fracture studies of continuous-fiber composites. These models have particular application in the investigation of fiber and matrix stresses in unidirectional composites in the region near a crack tip. The interest in such models is motivated by the desire to be able to simplify the equations of elasticity to the point that they can be solved in a relatively easy manner.

  1. Accuracy and sensitivity analysis of the conical null-screen based corneal topographer

    NASA Astrophysics Data System (ADS)

    Cossio-Guerrero, Cesar; Campos-García, Manuel

    2016-09-01

    In every optical testing method, the time taken to process data, the precision of the results and the sensitivity are among the most relevant aspects to take into account when the viability of its implementation is under consideration. An accuracy and sensitivity analysis of a topographer based on a conical null-screen with a semi-radial distribution of targets is presented. In addition, we propose a custom evaluation algorithm to reduce the time needed to calculate the normals to the corneal surface. Finally, we perform some corneal topographic measurements.

  2. Methodology issues concerning the accuracy of kinematic data collection and analysis using the ariel performance analysis system

    NASA Technical Reports Server (NTRS)

    Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)

    1992-01-01

    Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics as well as mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g. force plate data collection) as well as digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to evaluate the accuracy impact due to a single axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.

  3. Diagnostic accuracy of the International HIV Dementia Scale and HIV Dementia Scale: A meta-analysis.

    PubMed

    Hu, Xueying; Zhou, Yang; Long, Jianxiong; Feng, Qiming; Wang, Rensheng; Su, Li; Zhao, Tingting; Wei, Bo

    2012-10-01

    The aim of this study was to assess the diagnostic accuracy of the International HIV Dementia Scale (IHDS) and the HIV Dementia Scale (HDS) for the diagnosis of HIV-associated neurocognitive disorders (HAND). A comprehensive and systematic search was carried out in the PubMed and EMBASE databases. Sensitivity, specificity, Q(*)-values, summary receiver operating characteristic curves and other measures of the accuracy of the IHDS and HDS in the diagnosis of HAND were summarized. Summary receiver operating characteristic (SROC) curve analysis of the HAND data demonstrated a pooled sensitivity of 0.90 [95% confidence interval (CI), 0.88-0.91] and an overall specificity of 0.96 (95% CI, 0.95-0.97) for the IHDS; the Q(*)-value for the IHDS was 0.9195 and the diagnostic odds ratio (DOR) was 162.28 (95% CI, 91.82-286.81). The HDS had an overall sensitivity of 0.39 (95% CI, 0.34-0.43) and a specificity of 0.90 (95% CI, 0.89-0.91); the Q(*)-value for the HDS was 0.6321 and the DOR was 5.81 (95% CI, 3.64-9.82). There was significant heterogeneity among the studies that reported the IHDS and HDS. This meta-analysis has shown that the IHDS and HDS may offer high diagnostic accuracy for the detection of HAND in primary health care and resource-limited settings. The IHDS and HDS may require reformed neuropsychological characterization of impairments in accordance with regional culture and language in future international studies.
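    The summary statistics reported in such meta-analyses follow directly from 2x2 tables; a minimal sketch for one hypothetical study (illustrative counts only), including a Woolf-type confidence interval for the diagnostic odds ratio, is:

      import math

      tp, fn, fp, tn = 90, 10, 8, 192   # hypothetical screening-vs-reference counts

      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      dor = (tp * tn) / (fp * fn)       # diagnostic odds ratio

      # 95% CI of the DOR on the log scale (Woolf method).
      se_log_dor = math.sqrt(1 / tp + 1 / fn + 1 / fp + 1 / tn)
      ci_low = math.exp(math.log(dor) - 1.96 * se_log_dor)
      ci_high = math.exp(math.log(dor) + 1.96 * se_log_dor)

      print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
      print(f"DOR={dor:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")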

  4. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins.

    PubMed

    Afanasyev, Vsevolod; Buldyrev, Sergey V; Dunn, Michael J; Robst, Jeremy; Preston, Mark; Bremner, Steve F; Briggs, Dirk R; Brown, Ruth; Adlard, Stacey; Peat, Helen J

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.

  5. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins

    PubMed Central

    Afanasyev, Vsevolod; Buldyrev, Sergey V.; Dunn, Michael J.; Robst, Jeremy; Preston, Mark; Bremner, Steve F.; Briggs, Dirk R.; Brown, Ruth; Adlard, Stacey; Peat, Helen J.

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge’s accurate performance and demonstrates how its design is a significant improvement on existing systems. PMID:25894763

  6. Influence of R wave analysis upon diagnostic accuracy of exercise testing in women.

    PubMed Central

    Ilsley, C; Canepa-Anson, R; Westgate, C; Webb, S; Rickards, A; Poole-Wilson, P

    1982-01-01

    Exercise electrocardiography in women with chest pain is associated with a high incidence of false positive ST segment depression. The recent observation that changes in R wave amplitude during exercise can also be used diagnostically may improve the value of stress testing in women. The results of 12-lead treadmill exercise testing and coronary angiography were reviewed in 62 women, mean age 51 years, presenting with "angina" without previous myocardial infarction. These were compared with exercise results in 14 healthy asymptomatic volunteers with a mean age of 26 years. In addition to conventional ST analysis, R wave amplitude changes during exercise, measured in leads II, III, aVF, and V4 to V6, were examined. While the sensitivity and specificity of ST and R wave changes were similar at about 67%, their combined interpretation was helpful. If both ST and R wave criteria were negative, the predictive accuracy for normal coronary angiography was 94% (17/18). Alternatively, in tests showing both ST depression and an abnormal R wave response, coronary angiography was always abnormal (13/13). None of the normal volunteers developed ST segment depression and 93% (13/14) had a normal R wave response. Although stress test interpretation in women is difficult, R wave analysis is a useful adjunct to ST change and can improve the predictive accuracy of the test in a significant number of patients. PMID:7093085

  7. Increasing Transparency Through a Multiverse Analysis.

    PubMed

    Steegen, Sara; Tuerlinckx, Francis; Gelman, Andrew; Vanpaemel, Wolf

    2016-09-01

    Empirical research inevitably includes constructing a data set by processing raw data into a form ready for statistical analysis. Data processing often involves choices among several reasonable options for excluding, transforming, and coding data. We suggest that instead of performing only one analysis, researchers could perform a multiverse analysis, which involves performing all analyses across the whole set of alternatively processed data sets corresponding to a large set of reasonable scenarios. Using an example focusing on the effect of fertility on religiosity and political attitudes, we show that analyzing a single data set can be misleading and propose a multiverse analysis as an alternative practice. A multiverse analysis offers an idea of how much the conclusions change because of arbitrary choices in data construction and gives pointers as to which choices are most consequential in the fragility of the result.

  8. Integrated CFD and Controls Analysis Interface for High Accuracy Liquid Propellant Slosh Predictions

    NASA Technical Reports Server (NTRS)

    Marsell, Brandon; Griffin, David; Schallhorn, Paul; Roth, Jacob

    2012-01-01

    Coupling computational fluid dynamics (CFD) with a controls analysis tool elegantly allows for high accuracy predictions of the interaction between sloshing liquid propellants and the control system of a launch vehicle. Instead of relying on mechanical analogs which are not valid during all stages of flight, this method allows for a direct link between the vehicle dynamic environments calculated by the solver in the controls analysis tool to the fluid flow equations solved by the CFD code. This paper describes such a coupling methodology, presents the results of a series of test cases, and compares said results against equivalent results from extensively validated tools. The coupling methodology, described herein, has proven to be highly accurate in a variety of different cases.

  9. High Accuracy Liquid Propellant Slosh Predictions Using an Integrated CFD and Controls Analysis Interface

    NASA Technical Reports Server (NTRS)

    Marsell, Brandon; Griffin, David; Schallhorn, Dr. Paul; Roth, Jacob

    2012-01-01

    Coupling computational fluid dynamics (CFD) with a controls analysis tool elegantly allows for high accuracy predictions of the interaction between sloshing liquid propellants and the control system of a launch vehicle. Instead of relying on mechanical analogs which are not valid during all stages of flight, this method allows for a direct link between the vehicle dynamic environments calculated by the solver in the controls analysis tool to the fluid flow equations solved by the CFD code. This paper describes such a coupling methodology, presents the results of a series of test cases, and compares said results against equivalent results from extensively validated tools. The coupling methodology, described herein, has proven to be highly accurate in a variety of different cases.

  10. A Comparative Accuracy Analysis of Classification Methods in Determination of Cultivated Lands with Spot 5 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    kaya, S.; Alganci, U.; Sertel, E.; Ustundag, B.

    2013-12-01

    Cultivated land determination and area estimation are important tasks for agricultural management. The derived information is mostly used in agricultural policies and precision agriculture, specifically in yield estimation, irrigation and fertilization management, and verification of farmers' declarations. The use of satellite imagery for crop type identification and area estimation has been common for two decades due to its capability of monitoring large areas, rapid data acquisition and spectral response to crop properties. With the launch of high and very high spatial resolution optical satellites in the last decade, such analyses have gained importance as they provide information at a large scale. With increasing spatial resolution of satellite images, the image classification methods used to derive information from them have become important as the spectral heterogeneity within land objects increases. In this research, pixel-based classification with the maximum likelihood algorithm and object-based classification with the nearest neighbor algorithm were applied to 2.5 m resolution SPOT 5 satellite images from 2012 in order to investigate the accuracy of these methods in the determination of cotton- and corn-planted lands and their area estimation. The study area was selected in Sanliurfa Province, located in Southeastern Turkey, which contributes to Turkey's agricultural production in a major way. Classification results were compared in terms of crop type identification using
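    The pixel-based maximum likelihood approach assigns each pixel to the class whose fitted multivariate Gaussian gives the highest likelihood; a generic sketch with synthetic band values (not SPOT 5 data or the software actually used in the study) is:

      import numpy as np
      from scipy.stats import multivariate_normal

      def train(samples_per_class):
          # Fit a mean vector and covariance matrix per class from training pixels.
          return [(s.mean(axis=0), np.cov(s, rowvar=False)) for s in samples_per_class]

      def classify(pixels, models):
          # Assign each pixel (n_pixels x n_bands) to the most likely class.
          loglik = np.column_stack([
              multivariate_normal.logpdf(pixels, mean=mu, cov=cov)
              for mu, cov in models
          ])
          return loglik.argmax(axis=1)

      rng = np.random.default_rng(3)
      cotton = rng.normal([60, 90, 40], 5, size=(200, 3))   # synthetic band signatures
      corn = rng.normal([80, 70, 55], 5, size=(200, 3))
      models = train([cotton, corn])
      test = rng.normal([60, 90, 40], 5, size=(50, 3))
      print("predicted class counts:", np.bincount(classify(test, models)))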

  11. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

    Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Because of the low peak intensity, the accuracy and precision of such analyses rely critically on background measurements and on the accuracy of any pertinent peak interference corrections. A linear regression between two points selected at appropriate off-peak positions is the classical approach to background characterization in microprobe analysis. However, this approach does not allow an accurate assessment of background curvature (usually exponential). Moreover, if present, background interferences can dramatically affect the results if underestimated or ignored. The acquisition of a quantitative WDS scan over the spectral region of interest is still a valuable option for determining the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity because the analyst has to select areas of the scan that appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that allows acquiring up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. The overall efficiency is improved because no systematic WDS scan needs to be acquired in order to check for the presence of possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite
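    The core of such a multi-point background scheme is a curved (here exponential) background regression with statistical rejection of outlying background points; the sketch below is a generic illustration with synthetic positions and counts, not the MPB implementation itself:

      import numpy as np
      from scipy.optimize import curve_fit

      def background(x, a, b):
          return a * np.exp(b * x)

      def fit_background(positions, counts, n_iter=3, clip=2.5):
          # Iteratively fit the exponential background and exclude measurements
          # lying more than `clip` standard deviations from the fit (e.g. points
          # sitting on an unrecognized interference).
          mask = np.ones(positions.size, dtype=bool)
          params = (counts.mean(), 0.0)
          for _ in range(n_iter):
              params, _ = curve_fit(background, positions[mask], counts[mask],
                                    p0=params, maxfev=10000)
              resid = counts - background(positions, *params)
              sigma = resid[mask].std()
              mask = np.abs(resid) < clip * sigma
          return params, mask

      rng = np.random.default_rng(4)
      pos = np.linspace(-0.05, 0.05, 12)             # off-peak spectrometer positions
      cts = background(pos, 120.0, -6.0) + rng.normal(0, 2.0, pos.size)
      cts[3] += 25.0                                  # one point on an interference
      params, used = fit_background(pos, cts)
      print(f"background at the peak position: {background(0.0, *params):.1f} counts")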

  12. Analysis of Bradley Fighting Vehicle Gunnery with Emphasis on Factors Affecting First-Round Accuracy of the 25-mm Gun

    DTIC Science & Technology

    1987-12-01

    ARI Research Note 87-67, final report (January - December 1985): Analysis of Bradley Fighting Vehicle gunnery with emphasis on factors affecting first-round accuracy of the 25-mm gun, covering zeroing procedures, preliminary gunnery, and analysis of the problems and potential improvements in gunnery effectiveness.

  13. Accuracy analysis of a mobile tracking system for angular position determination of flying targets

    NASA Astrophysics Data System (ADS)

    Walther, Andreas; Buske, Ivo; Riede, Wolfgang

    2016-10-01

    Lasers are attracting increasing interest in remote sensing applications. In order to deliver as much of the available laser power as possible onto a flying object, the subsystems of a beam control system have to operate together precisely. One important subsystem is responsible for determining the target's angular position. Here, we focus on an optical system for precisely measuring the angular position of flying objects. We designed this subunit of a beam control system exclusively from readily available commercial off-the-shelf components. Two industrial cameras were used for angle measurement and for guiding the system to the position of the flying object. Both cameras are mounted on a modified astronomical mount with high-precision angle encoders. To achieve high accuracy, we temporally synchronize the acquisition of the angle from the pan-tilt unit with the exposure of the camera. For this purpose, an FPGA-based readout device for the rotary encoders was designed and implemented. Additionally, we determined and evaluated the influence of lens distortion on the measurement. We investigated various scenarios to determine the accuracy and the limitations of our system for angular position determination of flying targets. Performance tests were carried out indoors and outdoors at our test sites. A target can be mounted on a fast-moving linear stage. The position of this linear stage is continuously read out by a high-resolution encoder, so we know the target's position with a dynamic accuracy in the range of a few μm. With this setup we evaluated the spatial resolution of our tracking system. We showed that the presented system can determine the angular position of fast flying objects with an uncertainty of only 2 μrad RMS. With this mobile tracking system for angular position determination of flying targets, we have designed an accurate, cost-efficient platform for further developments.

  14. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-01-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method, are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.
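
    For context, the conventional DLT reconstruction that the review critiques can be sketched as follows: given the 11 calibrated DLT coefficients of two or more cameras and the digitized image coordinates of a marker, the 3-D position is obtained by linear least squares. Coefficient ordering conventions vary between implementations; this sketch assumes the common L1-L11 ordering and, as the review points out, contains no refraction correction.

        import numpy as np

        def dlt_reconstruct(dlt_params, image_points):
            """Least-squares 3-D reconstruction from two or more cameras using the
            standard 11-parameter DLT (no refraction correction).
            dlt_params: one length-11 coefficient vector [L1..L11] per camera.
            image_points: one (u, v) observation per camera."""
            A, b = [], []
            for L, (u, v) in zip(dlt_params, image_points):
                A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
                A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
                b.append(u - L[3])
                b.append(v - L[7])
            xyz, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
            return xyz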

  16. Accuracy and repeatability of the gait analysis by the WalkinSense system.

    PubMed

    de Castro, Marcelo P; Meucci, Marco; Soares, Denise P; Fonseca, Pedro; Borgonovo-Santos, Márcio; Sousa, Filipa; Machado, Leandro; Vilas-Boas, João Paulo

    2014-01-01

    WalkinSense is a new device designed to monitor walking. The aim of this study was to measure the accuracy and repeatability of the gait analysis performed by the WalkinSense system. Descriptions of values recorded by WalkinSense depicting typical gait in adults are also presented. A bench experiment using the Trublu calibration device was conducted to statically test the WalkinSense. Following this, a dynamic test was carried out overlapping the WalkinSense and the Pedar insoles in 40 healthy participants during walking. Pressure peak, pressure peak time, pressure-time integral, and mean pressure at eight-foot regions were calculated. In the bench experiments, the repeatability (i) among the WalkinSense sensors (within), (ii) between two WalkinSense devices, and (iii) between the WalkinSense and the Trublu devices was excellent. In the dynamic tests, the repeatability of the WalkinSense (i) between stances in the same trial (within-trial) and (ii) between trials was also excellent (ICC > 0.90). When the eight-foot regions were analyzed separately, the within-trial and between-trials repeatability was good-to-excellent in 88% (ICC > 0.80) of the data and fair in 11%. In short, the data suggest that the WalkinSense has good-to-excellent levels of accuracy and repeatability for plantar pressure variables.
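
    Repeatability in this study is summarized with intraclass correlation coefficients. The abstract does not state which ICC form was used, so the sketch below shows one common choice, ICC(2,1) (two-way random effects, absolute agreement, single measure) computed from the ANOVA mean squares; the example data are hypothetical.

        import numpy as np

        def icc_2_1(data):
            """ICC(2,1): two-way random effects, absolute agreement, single measure
            (Shrout & Fleiss 1979) for an n-subjects x k-trials/raters matrix."""
            data = np.asarray(data, dtype=float)
            n, k = data.shape
            grand = data.mean()
            ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
            ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
            ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
            ms_r = ss_rows / (n - 1)
            ms_c = ss_cols / (k - 1)
            ms_e = ss_err / ((n - 1) * (k - 1))
            return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

        # e.g. hypothetical peak pressures (kPa) for 4 participants x 3 trials
        print(icc_2_1([[210, 215, 212], [180, 178, 182], [240, 238, 241], [200, 205, 199]]))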

  17. Diagnostic test accuracy of glutamate dehydrogenase for Clostridium difficile: Systematic review and meta-analysis

    PubMed Central

    Arimoto, Jun; Horita, Nobuyuki; Kato, Shingo; Fuyuki, Akiko; Higurashi, Takuma; Ohkubo, Hidenori; Endo, Hiroki; Takashi, Nonaka; Kaneko, Takeshi; Nakajima, Atsushi

    2016-01-01

    We performed this systematic review and meta-analysis to assess the diagnostic accuracy of detecting glutamate dehydrogenase (GDH) for Clostridium difficile infection (CDI) based on the hierarchical model. Two investigators electronically searched four databases. Reference tests were stool cell cytotoxicity neutralization assay (CCNA) and stool toxigenic culture (TC). To assess the overall accuracy, we calculated the diagnostic odds ratio (DOR) using a DerSimonian-Laird random-effects model and the area under the hierarchical summary receiver operating characteristic curve (AUC) using Holling’s proportional hazards models. The summary estimates of sensitivity and specificity were obtained using the bivariate model. Based on 42 reports comprising 3055 reference-positive comparisons and 26188 reference-negative comparisons, the DOR was 115 (95%CI: 77–172, I2 = 12.0%) and the AUC was 0.970 (95%CI: 0.958–0.982). The summary estimates of sensitivity and specificity were 0.911 (95%CI: 0.871–0.940) and 0.912 (95%CI: 0.892–0.928), respectively. The positive and negative likelihood ratios were 10.4 (95%CI 8.4–12.7) and 0.098 (95%CI 0.066–0.142), respectively. Detecting GDH for the diagnosis of CDI had both high sensitivity and specificity. Considering its low cost and prevalence, it is appropriate as a screening test for CDI. PMID:27418431
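
    The summary statistics reported above are linked by simple identities: the likelihood ratios follow from summary sensitivity and specificity, and the DOR is their ratio. The sketch below illustrates those point-estimate relationships only (no confidence-interval propagation); the pooled DOR of 115 in the abstract comes from a separate random-effects model, so it differs slightly from the value implied by the summary sensitivity and specificity.

        def diagnostic_summary(sens, spec):
            """Derive the diagnostic odds ratio and likelihood ratios from summary
            sensitivity and specificity (point estimates only)."""
            lr_pos = sens / (1.0 - spec)
            lr_neg = (1.0 - sens) / spec
            dor = lr_pos / lr_neg
            return dor, lr_pos, lr_neg

        # summary estimates reported in the abstract
        dor, lr_pos, lr_neg = diagnostic_summary(0.911, 0.912)
        print(round(dor), round(lr_pos, 1), round(lr_neg, 3))   # ~106, ~10.4, ~0.098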

  18. Modern global models of the Earth's gravity field: analysis of their accuracy and resolution

    NASA Astrophysics Data System (ADS)

    Ganagina, Irina; Karpik, Alexander; Kanushin, Vadim; Goldobin, Denis; Kosareva, Alexandra; Kosarev, Nikolay; Mazurova, Elena

    2015-04-01

    Introduction: Accurate knowledge of the fine structure of the Earth's gravity field extends opportunities in geodynamic problem-solving and high-precision navigation. In the course of our investigations, we analyzed the resolution and accuracy of 33 modern global models of the Earth's gravity field, among them 23 combined models and 10 satellite-only models obtained from the results of the GOCE, GRACE, and CHAMP satellite gravity missions. The Earth's geopotential model data in terms of normalized spherical harmonic coefficients were taken from the website of the International Centre for Global Earth Models (ICGEM) in Potsdam. Theory: Accuracy and resolution estimation of global Earth's gravity field models is based on the analysis of degree variances of geopotential coefficients and their errors. For the analyzed models, we obtained the dependences of gravity anomaly approximation errors on the degree of the spherical harmonic expansion of the geopotential, the relative errors of the geopotential's spherical harmonic coefficients, the degree variances of the geopotential coefficients, and the error variances of potential coefficients obtained from gravity anomalies. Delphi 7-based software developed by the authors was used for the analysis of the global Earth's gravity field models. Experience: The results of the investigations show that the spherical harmonic coefficients of all models matched. Diagrams of degree variances for spherical harmonic coefficients and their errors bring us to the conclusion that the degree variances of most models are equal to their error variances at degrees lower than those declared by the developers. The accuracy of the normalized spherical harmonic coefficients of the geopotential models is estimated as 10^-9. This value characterizes the inherent errors of the models and the differences between coefficients of the various models, as well as the poorly predicted instability of the geopotential, and the resolution. Furthermore, we compared the gravity anomalies computed by models with those
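
    The central quantity in such assessments is the degree variance of the fully normalized coefficients and of their formal errors; where the error degree variances reach the signal degree variances, the model's effective resolution ends. A minimal sketch, assuming the coefficients and their standard deviations have already been read from an ICGEM file into [n, m]-indexed arrays (array layout and names are illustrative):

        import numpy as np

        def degree_variances(C, S, sigC, sigS):
            """Signal and error degree variances of fully normalized spherical
            harmonic coefficients. C, S, sigC, sigS are 2-D arrays indexed [n, m]
            (entries with m > n are zero), starting the sums at degree 2."""
            n_max = C.shape[0] - 1
            signal, error = [], []
            for n in range(2, n_max + 1):
                m = np.arange(0, n + 1)
                signal.append(np.sum(C[n, m] ** 2 + S[n, m] ** 2))
                error.append(np.sum(sigC[n, m] ** 2 + sigS[n, m] ** 2))
            return np.array(signal), np.array(error)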

  19. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which are calculated as in the previous method. Generally, a small number of arithmetic operations, which results in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
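
    The ADI- and LOD-FDTD schemes compared here are implicit variants of the basic FDTD update; writing them out in full is beyond a short example, but the explicit one-dimensional Yee update they are designed to improve upon (by removing the Courant limit on the time step) can be sketched in a few lines, in normalized units and with purely illustrative grid and source parameters:

        import numpy as np

        # Minimal explicit 1-D Yee-grid FDTD loop in normalized units (illustrative
        # only): the ADI and LOD variants replace this conditionally stable explicit
        # update with implicit sub-steps so the time step is not Courant-limited.
        nz, n_steps = 400, 600
        courant = 1.0                      # c*dt/dz; must be <= 1 for the explicit scheme
        ez = np.zeros(nz)                  # E field, normalized
        hy = np.zeros(nz - 1)              # H field, staggered half a cell
        for step in range(n_steps):
            hy += courant * (ez[1:] - ez[:-1])
            ez[1:-1] += courant * (hy[1:] - hy[:-1])
            ez[nz // 4] += np.exp(-((step - 60) / 20.0) ** 2)   # soft Gaussian source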

  20. Accuracy in interpersonal expectations: a reflection-construction analysis of current and classic research.

    PubMed

    Jussim, L

    1993-12-01

    Research and theory on interpersonal expectations have been dominated by a strong social constructivist perspective arguing that expectancies are often inaccurate and a major force in the creation of social reality. The reflection-construction model is an attempt to examine these strong claims conceptually and empirically. This model assumes that social perception includes both constructivist phenomena and accuracy. When this model is used as a framework for interpreting research on teacher expectations and on the role of stereotypes in person perception, it shows that interpersonal expectancies are often accurate, and usually lead only to relatively small biases and self-fulfilling prophecies. The model also is used to interpret research on expectancies that has provided some of the foundations for the strong constructivist perspective. This reflection-construction analysis shows that even those studies strongly suggest that people's expectations generally will be highly accurate.

  1. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    PubMed

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) given by doctors from all dynasties, and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is defined, and it is the same as the traditional method. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. Also, it is proposed that Fengshi (GB 31) should be located through the integration of the simple method and body surface anatomical marks.

  2. Stringent mating-type-regulated auxotrophy increases the accuracy of systematic genetic interaction screens with Saccharomyces cerevisiae mutant arrays.

    PubMed

    Singh, Indira; Pass, Rebecca; Togay, Sine Ozmen; Rodgers, John W; Hartman, John L

    2009-01-01

    A genomic collection of haploid Saccharomyces cerevisiae deletion strains provides a unique resource for systematic analysis of gene interactions. Double-mutant haploid strains can be constructed by the synthetic genetic array (SGA) method, wherein a query mutation is introduced by mating to mutant arrays, selection of diploid double mutants, induction of meiosis, and selection of recombinant haploid double-mutant progeny. The mechanism of haploid selection is mating-type-regulated auxotrophy (MRA), by which prototrophy is restricted to a particular haploid genotype generated only as a result of meiosis. MRA escape leads to false-negative genetic interaction results because postmeiotic haploids that are supposed to be under negative selection instead proliferate and mate, forming diploids that are heterozygous at interacting loci, masking phenotypes that would be observed in a pure haploid double-mutant culture. This work identified factors that reduce MRA escape, including insertion of terminator and repressor sequences upstream of the MRA cassette, deletion of silent mating-type loci, and utilization of alpha-type instead of a-type MRA. Modifications engineered to reduce haploid MRA escape reduced false negative results in SGA-type analysis, resulting in >95% sensitivity for detecting gene-gene interactions.

  3. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

    It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that - if remaining unaddressed - limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 ion Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to be able to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup exhibits a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes the more significant the larger the relative contribution of secondary electrons to the m/z 47 signal is and the higher the flux rate of CO2 into the ion source is set. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards. The

  4. The Accuracy of Computerized Adaptive Testing in Heterogeneous Populations: A Mixture Item-Response Theory Analysis

    PubMed Central

    Kopec, Jacek A.; Wu, Amery D.; Zumbo, Bruno D.

    2016-01-01

    Background Computerized adaptive testing (CAT) utilizes latent variable measurement model parameters that are typically assumed to be equivalently applicable to all people. Biased latent variable scores may be obtained in samples that are heterogeneous with respect to a specified measurement model. We examined the implications of sample heterogeneity with respect to CAT-predicted patient-reported outcomes (PRO) scores for the measurement of pain. Methods A latent variable mixture modeling (LVMM) analysis was conducted using data collected from a heterogeneous sample of people in British Columbia, Canada, who were administered the 36 pain domain items of the CAT-5D-QOL. The fitted LVMM was then used to produce data for a simulation analysis. We evaluated bias by comparing the referent PRO scores of the LVMM with PRO scores predicted by a “conventional” CAT (ignoring heterogeneity) and a LVMM-based “mixture” CAT (accommodating heterogeneity). Results The LVMM analysis indicated support for three latent classes with class proportions of 0.25, 0.30 and 0.45, which suggests that the sample was heterogeneous. The simulation analyses revealed differences between the referent PRO scores and the PRO scores produced by the “conventional” CAT. The “mixture” CAT produced PRO scores that were nearly equivalent to the referent scores. Conclusion Bias in PRO scores based on latent variable models may result when population heterogeneity is ignored. Improved accuracy could be obtained by using CATs that are parameterized using LVMM. PMID:26930348

  5. Computer-aided analysis of star shot films for high-accuracy radiation therapy treatment units.

    PubMed

    Depuydt, Tom; Penne, Rudi; Verellen, Dirk; Hrbacek, Jan; Lang, Stephanie; Leysen, Katrien; Vandevondel, Iwein; Poels, Kenneth; Reynders, Truus; Gevaert, Thierry; Duchateau, Michael; Tournel, Koen; Boussaer, Marlies; Cosentino, Dorian; Garibaldi, Cristina; Solberg, Timothy; De Ridder, Mark

    2012-05-21

    As mechanical stability of radiation therapy treatment devices has gone beyond sub-millimeter levels, there is a rising demand for simple yet highly accurate measurement techniques to support the routine quality control of these devices. A combination of using high-resolution radiosensitive film and computer-aided analysis could provide an answer. One generally known technique is the acquisition of star shot films to determine the mechanical stability of rotations of gantries and the therapeutic beam. With computer-aided analysis, mechanical performance can be quantified as a radiation isocenter radius size. In this work, computer-aided analysis of star shot film is further refined by applying an analytical solution for the smallest intersecting circle problem, in contrast to the gradient optimization approaches used until today. An algorithm is presented and subjected to a performance test using two different types of radiosensitive film, the Kodak EDR2 radiographic film and the ISP EBT2 radiochromic film. Artificial star shots with a priori known radiation isocenter size are used to determine the systematic errors introduced by the digitization of the film and the computer analysis. The estimated uncertainty on the isocenter size measurement with the presented technique was 0.04 mm (2σ) and 0.06 mm (2σ) for radiographic and radiochromic films, respectively. As an application of the technique, a study was conducted to compare the mechanical stability of O-ring gantry systems with C-arm-based gantries. In total ten systems of five different institutions were included in this study and star shots were acquired for gantry, collimator, ring, couch rotations and gantry wobble. It was not possible to draw general conclusions about differences in mechanical performance between O-ring and C-arm gantry systems, mainly due to differences in the beam-MLC alignment procedure accuracy. Nevertheless, the best performing O-ring system in this study, a BrainLab/MHI Vero system
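
    The paper's refinement is an analytical solution of the smallest intersecting circle problem; as a simpler, illustrative alternative (not the authors' algorithm), the radiation isocenter can be approximated as the least-squares point minimizing the squared distances to the reconstructed beam central-axis lines, with the largest residual distance serving as a radius estimate. The coordinates below are hypothetical and in metres.

        import numpy as np

        def isocenter_from_lines(points, directions):
            """Least-squares estimate of the point closest to a set of 2-D beam
            central-axis lines (each given by a point and a direction), plus the
            maximum point-to-line distance as a simple isocenter-radius proxy."""
            A = np.zeros((2, 2))
            b = np.zeros(2)
            lines = []
            for p, d in zip(points, directions):
                p = np.asarray(p, float)
                d = np.asarray(d, float) / np.linalg.norm(d)
                P = np.eye(2) - np.outer(d, d)      # projector onto the line normal
                A += P
                b += P @ p
                lines.append((p, P))
            x = np.linalg.solve(A, b)
            radius = max(np.sqrt((x - p) @ P @ (x - p)) for p, P in lines)
            return x, radius

        # three hypothetical beam axes crossing near the origin
        center, radius = isocenter_from_lines(
            [(0.0, 0.02e-3), (0.01e-3, 0.0), (-0.02e-3, 0.01e-3)],
            [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)])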

  6. Computer-aided analysis of star shot films for high-accuracy radiation therapy treatment units

    NASA Astrophysics Data System (ADS)

    Depuydt, Tom; Penne, Rudi; Verellen, Dirk; Hrbacek, Jan; Lang, Stephanie; Leysen, Katrien; Vandevondel, Iwein; Poels, Kenneth; Reynders, Truus; Gevaert, Thierry; Duchateau, Michael; Tournel, Koen; Boussaer, Marlies; Cosentino, Dorian; Garibaldi, Cristina; Solberg, Timothy; De Ridder, Mark

    2012-05-01

    As mechanical stability of radiation therapy treatment devices has gone beyond sub-millimeter levels, there is a rising demand for simple yet highly accurate measurement techniques to support the routine quality control of these devices. A combination of using high-resolution radiosensitive film and computer-aided analysis could provide an answer. One generally known technique is the acquisition of star shot films to determine the mechanical stability of rotations of gantries and the therapeutic beam. With computer-aided analysis, mechanical performance can be quantified as a radiation isocenter radius size. In this work, computer-aided analysis of star shot film is further refined by applying an analytical solution for the smallest intersecting circle problem, in contrast to the gradient optimization approaches used until today. An algorithm is presented and subjected to a performance test using two different types of radiosensitive film, the Kodak EDR2 radiographic film and the ISP EBT2 radiochromic film. Artificial star shots with a priori known radiation isocenter size are used to determine the systematic errors introduced by the digitization of the film and the computer analysis. The estimated uncertainty on the isocenter size measurement with the presented technique was 0.04 mm (2σ) and 0.06 mm (2σ) for radiographic and radiochromic films, respectively. As an application of the technique, a study was conducted to compare the mechanical stability of O-ring gantry systems with C-arm-based gantries. In total ten systems of five different institutions were included in this study and star shots were acquired for gantry, collimator, ring, couch rotations and gantry wobble. It was not possible to draw general conclusions about differences in mechanical performance between O-ring and C-arm gantry systems, mainly due to differences in the beam-MLC alignment procedure accuracy. Nevertheless, the best performing O-ring system in this study, a BrainLab/MHI Vero system

  7. Summary of Glaucoma Diagnostic Testing Accuracy: An Evidence-Based Meta-Analysis

    PubMed Central

    Ahmed, Saad; Khan, Zainab; Si, Francie; Mao, Alex; Pan, Irene; Yazdi, Fatemeh; Tsertsvadze, Alexander; Hutnik, Cindy; Moher, David; Tingey, David; Trope, Graham E.; Damji, Karim F.; Tarride, Jean-Eric; Goeree, Ron; Hodge, William

    2016-01-01

    Background New glaucoma diagnostic technologies are penetrating clinical care and are changing rapidly. Having a systematic review of these technologies will help clinicians and decision makers and help identify gaps that need to be addressed. This systematic review studied five glaucoma technologies compared to the gold standard of white on white perimetry for glaucoma detection. Methods OVID® interface: MEDLINE® (In-Process & Other Non-Indexed Citations), EMBASE®, BIOSIS Previews®, CINAHL®, PubMed, and the Cochrane Library were searched. A gray literature search was also performed. A technical expert panel, information specialists, systematic review method experts and biostatisticians were involved. A PRISMA flow diagram was created and a random-effects meta-analysis was performed. Results A total of 2,474 articles were screened. The greatest accuracy was found with frequency doubling technology (FDT) (diagnostic odds ratio (DOR): 57.7) followed by blue on yellow perimetry (DOR: 46.7), optical coherence tomography (OCT) (DOR: 41.8), GDx (DOR: 32.4) and Heidelberg retina tomography (HRT) (DOR: 17.8). Of greatest concern is that tests for heterogeneity were all above 50%, indicating that cutoffs used in these newer technologies were highly variable and not uniform across studies. Conclusions Glaucoma content experts need to establish uniform cutoffs for these newer technologies, so that studies that compare these technologies can be interpreted more uniformly. Nevertheless, synthesized data at this time demonstrate that amongst the newest technologies, OCT has the highest glaucoma diagnostic accuracy followed by GDx and then HRT. PMID:27540437
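
    A random-effects meta-analysis of diagnostic odds ratios of the kind reported here is commonly performed on the log-DOR scale with DerSimonian-Laird weighting. The sketch below shows that pooling step on hypothetical study-level values; it is not the review's actual dataset and not necessarily its exact estimator.

        import numpy as np

        def dersimonian_laird(y, v):
            """Pool per-study effects y (e.g. log diagnostic odds ratios) with
            within-study variances v using the DerSimonian-Laird random-effects model.
            Returns the pooled effect, its standard error, and the I^2 statistic."""
            y, v = np.asarray(y, float), np.asarray(v, float)
            w = 1.0 / v
            y_fixed = np.sum(w * y) / np.sum(w)
            q = np.sum(w * (y - y_fixed) ** 2)
            df = len(y) - 1
            tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
            w_star = 1.0 / (v + tau2)
            y_pooled = np.sum(w_star * y) / np.sum(w_star)
            se = np.sqrt(1.0 / np.sum(w_star))
            i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
            return y_pooled, se, i2

        # hypothetical log-DORs and within-study variances for three studies
        print(dersimonian_laird([3.2, 4.1, 2.7], [0.10, 0.25, 0.15]))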

  8. Issues of model accuracy and uncertainty evaluation in the context of multi-model analysis

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Foglia, L.; Mehl, S.; Burlando, P.

    2009-12-01

    Thorough consideration of alternative conceptual models is an important and often neglected step in the study of many natural systems, including groundwater systems. This means that many modelling efforts are less useful for system management than they could be because they exclude alternatives considered important by some stakeholders, which makes them more vulnerable to criticism. Important steps include identifying reasonable alternative models and possibly using model discrimination criteria and associated model averaging to improve predictions and measures of prediction uncertainty. Here we use the computer code MMA (Multi-Model Analysis) to: (1) manage the model discrimination statistics produced by many alternative models, (2) manage predictions, and (3) calculate measures of prediction uncertainty. Steps (1) to (3) also assist in understanding the physical processes most important to model fit and predictions of interest. We focus on the ability of a groundwater model constructed using MODFLOW to predict heads and flows in the Maggia Valley, Southern Switzerland, where connections between groundwater, surface water and ecology are of interest. Sixty-four alternative models were designed deterministically and differ in how the river, recharge, bedrock topography, and hydraulic conductivity are characterized. None of the models correctly represent heads and flows in the Northern and Southern parts of the valley simultaneously. A cross-validation experiment was conducted to compare model discrimination results with the ability of the models to predict eight heads and three flows to the stream along three reaches midway along the valley where ecological consequences and, therefore, model accuracy are of great concern. Results suggest: (1) Model averaging appears to have improved prediction accuracy in the problem considered. (2) The most significant model improvements occurred with introduction of spatially distributed recharge and improved bedrock topography. (3) The
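
    MMA supports several model discrimination criteria and uses them to weight alternative models. As a minimal illustration of the averaging step, the sketch below uses Akaike weights from AICc, one common choice that is not necessarily the criterion used in this study; the criterion values and head predictions are hypothetical.

        import numpy as np

        def model_average(aicc, predictions):
            """Akaike-weight model averaging: convert per-model AICc values into
            weights and return the weighted-average prediction and the weights."""
            aicc = np.asarray(aicc, float)
            delta = aicc - aicc.min()
            w = np.exp(-0.5 * delta)
            w /= w.sum()
            return np.dot(w, np.asarray(predictions, float)), w

        # hypothetical AICc values and head predictions (m) from three alternative models
        avg_head, weights = model_average([152.3, 149.8, 151.1], [412.6, 410.9, 411.7])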

  9. Analysis of the Accuracy of Ballistic Descent from a Circular Circumterrestrial Orbit

    NASA Astrophysics Data System (ADS)

    Sikharulidze, Yu. G.; Korchagin, A. N.

    2002-01-01

    The problem of transporting the results of experiments and observations to Earth arises every so often in space research. Its simplest and low-cost solution is the employment of a small ballistic reentry spacecraft. Such a spacecraft has no system for controlling the descent trajectory in the atmosphere. This can result in a large spread of landing points, which makes it difficult to search for the spacecraft and, very often, to ensure a safe landing. In this work, the choice of a compromise flight scheme is considered, which includes the optimum braking maneuver, adequate conditions of entry into the atmosphere with limited heating and overload, and also the possibility of landing within the limits of a circle with a radius of 12.5 km. The following disturbing factors were taken into account in the analysis of the accuracy of landing: the errors of the braking impulse execution, the variations of the atmosphere density and the wind, the error of the specification of the ballistic coefficient of the reentry spacecraft, and a displacement of its center of mass from the symmetry axis. It is demonstrated that the optimum maneuver assures the maximum absolute value of the reentry angle and the insensitivity of the trajectory of descent with respect to small errors of orientation of the braking engine in the plane of the orbit. It is also demonstrated that the possible error of the landing point due to the error of specification of the ballistic coefficient does not depend (in the linear approximation) upon its value and depends only upon the reentry angle and the accuracy of specification of this coefficient. A guided parachute with an aerodynamic efficiency of about two should be used on the last leg of the reentry trajectory. This will allow one to land in a prescribed range and to produce adequate conditions for the interception of the reentry spacecraft by a helicopter in order to prevent a rough landing.

  10. Integrating Landsat and California pesticide exposure estimation at aggregated analysis scales: Accuracy assessment of rurality

    NASA Astrophysics Data System (ADS)

    Vopham, Trang Minh

    Pesticide exposure estimation in epidemiologic studies can be constrained to analysis scales commonly available for cancer data - census tracts and ZIP codes. Research goals included (1) demonstrating the feasibility of modifying an existing geographic information system (GIS) pesticide exposure method using California Pesticide Use Reports (PURs) and land use surveys to incorporate Landsat remote sensing and to accommodate aggregated analysis scales, and (2) assessing the accuracy of two rurality metrics (quality of geographic area being rural), Rural-Urban Commuting Area (RUCA) codes and the U.S. Census Bureau urban-rural system, as surrogates for pesticide exposure when compared to the GIS gold standard. Segments, derived from 1985 Landsat NDVI images, were classified using a crop signature library (CSL) created from 1990 Landsat NDVI images via a sum of squared differences (SSD) measure. Organochlorine, organophosphate, and carbamate Kern County PUR applications (1974-1990) were matched to crop fields using a modified three-tier approach. Annual pesticide application rates (lb/ac), and sensitivity and specificity of each rurality metric were calculated. The CSL (75 land use classes) classified 19,752 segments [median SSD 0.06 NDVI]. Of the 148,671 PUR records included in the analysis, Landsat contributed 3,750 (2.5%) additional tier matches. ZIP Code Tabulation Area (ZCTA) rates ranged between 0 and 1.36 lb/ac and census tract rates between 0 and 1.57 lb/ac. Rurality was a mediocre pesticide exposure surrogate; higher rates were observed among urban areal units. ZCTA-level RUCA codes offered greater specificity (39.1-60%) and sensitivity (25-42.9%). The U.S. Census Bureau metric offered greater specificity (92.9-97.5%) at the census tract level; sensitivity was low (≤6%). The feasibility of incorporating Landsat into a modified three-tier GIS approach was demonstrated. Rurality accuracy is affected by rurality metric, areal aggregation, pesticide chemical
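
    The segment classification step described above assigns each Landsat-derived segment to the crop signature with the smallest sum of squared differences (SSD) in NDVI. A minimal sketch of that nearest-signature rule, with hypothetical NDVI vectors and class names:

        import numpy as np

        def classify_segments(segment_ndvi, signature_library):
            """Assign each segment's NDVI feature vector to the crop signature with
            the smallest sum of squared differences (SSD); returns class labels and
            the corresponding SSD values. Names and shapes are illustrative."""
            segs = np.asarray(segment_ndvi, float)            # (n_segments, n_dates)
            labels = list(signature_library.keys())
            sigs = np.asarray([signature_library[k] for k in labels], float)
            ssd = ((segs[:, None, :] - sigs[None, :, :]) ** 2).sum(axis=2)
            best = ssd.argmin(axis=1)
            return [labels[i] for i in best], ssd[np.arange(len(segs)), best]

        # two hypothetical segments, two hypothetical crop signatures (3 NDVI dates)
        classes, scores = classify_segments(
            [[0.31, 0.62, 0.45], [0.12, 0.15, 0.11]],
            {"cotton": [0.30, 0.65, 0.40], "fallow": [0.10, 0.12, 0.10]})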

  11. Accuracy in Rietveld quantitative phase analysis: a comparative study of strictly monochromatic Mo and Cu radiations.

    PubMed

    León-Reina, L; García-Maté, M; Álvarez-Pinazo, G; Santacruz, I; Vallcorba, O; De la Torre, A G; Aranda, M A G

    2016-06-01

    This study reports 78 Rietveld quantitative phase analyses using Cu Kα1, Mo Kα1 and synchrotron radiations. Synchrotron powder diffraction has been used to validate the most challenging analyses. From the results for three series with increasing contents of an analyte (an inorganic crystalline phase, an organic crystalline phase and a glass), it is inferred that Rietveld analyses from high-energy Mo Kα1 radiation have slightly better accuracies than those obtained from Cu Kα1 radiation. This behaviour has been established from the results of the calibration graphics obtained through the spiking method and also from Kullback-Leibler distance statistic studies. This outcome is explained, in spite of the lower diffraction power for Mo radiation when compared to Cu radiation, as arising because of the larger volume tested with Mo and also because higher energy allows one to record patterns with fewer systematic errors. The limit of detection (LoD) and limit of quantification (LoQ) have also been established for the studied series. For similar recording times, the LoDs in Cu patterns, ∼0.2 wt%, are slightly lower than those derived from Mo patterns, ∼0.3 wt%. The LoQ for a well crystallized inorganic phase using laboratory powder diffraction was established to be close to 0.10 wt% in stable fits with good precision. However, the accuracy of these analyses was poor with relative errors near to 100%. Only contents higher than 1.0 wt% yielded analyses with relative errors lower than 20%.
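
    Limits of detection and quantification of the kind reported here are often derived from a spiking-method calibration line. The sketch below uses the common 3.3·σ/slope and 10·σ/slope convention on hypothetical spiked versus Rietveld-determined contents; this is a generic illustration, not necessarily the exact procedure followed in the paper.

        import numpy as np

        def lod_loq_from_calibration(spiked_wt_pct, measured_wt_pct):
            """Fit a straight calibration line to spiking-method data and derive
            detection/quantification limits from the residual standard deviation,
            using the common 3.3*sigma/slope and 10*sigma/slope convention."""
            x = np.asarray(spiked_wt_pct, float)
            y = np.asarray(measured_wt_pct, float)
            slope, intercept = np.polyfit(x, y, 1)
            resid = y - (slope * x + intercept)
            sigma = resid.std(ddof=2)            # two fitted parameters
            return 3.3 * sigma / slope, 10.0 * sigma / slope

        # hypothetical spiked vs. determined contents (wt%)
        lod, loq = lod_loq_from_calibration([0.1, 0.5, 1.0, 2.0, 5.0],
                                            [0.14, 0.48, 1.05, 1.96, 5.07])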

  12. The requirements for the future e-beam mask writer: statistical analysis of pattern accuracy

    NASA Astrophysics Data System (ADS)

    Lee, Sang Hee; Choi, Jin; Kim, Hee Bom; Kim, Byung Gook; Cho, Han-Ku

    2011-11-01

    As semiconductor features shrink in size and pitch, extreme control of CD uniformity, MTT and image placement is needed for mask fabrication with e-beam lithography. Among the many sources of CD and image placement error, the error resulting from the e-beam mask writer has become more important than before. CD and positioning error from the e-beam mask writer is mainly related to imperfections in e-beam deflection accuracy in the optical system and to charging and contamination of the column. To avoid these errors, the e-beam mask writer should be designed with these effects taken into account. However, writing speed is given the highest priority in machine design, because the e-beam shot count increases rapidly due to design shrink and aggressive OPC. The increased shot count can create a pattern shift problem, a statistical issue resulting from the e-beam deflection error and the total shot count in the layout, and it also affects CD and image placement quality. In this report, a statistical analysis of CD and image placement error caused by e-beam shot position error is presented. It is estimated for various writing conditions, including the intrinsic e-beam positioning error of the VSB writer. From the simulation study, the e-beam shot position accuracy required to avoid the pattern shift problem at the 22 nm node and beyond is estimated, taking the total shot count into account. The required local CD uniformity is also calculated for various e-beam writing conditions. The image placement error is likewise simulated for various conditions, including e-beam writing field position error. Consequently, the requirements for the future e-beam mask writer and the writing conditions are discussed. Finally, in terms of e-beam shot noise, the LER caused by exposure dose and shot position error is studied for future e-beam mask writing at the 22 nm node and beyond.

  13. Treatment planning using MRI data: an analysis of the dose calculation accuracy for different treatment regions

    PubMed Central

    2010-01-01

    Background Because of superior soft tissue contrast, the use of magnetic resonance imaging (MRI) as a complement to computed tomography (CT) in the target definition procedure for radiotherapy is increasing. To keep the workflow simple and cost effective and to reduce patient dose, it is natural to strive for a treatment planning procedure based entirely on MRI. In the present study, we investigate the dose calculation accuracy for different treatment regions when using bulk density assignments on MRI data and compare it to treatment planning that uses CT data. Methods MR and CT data were collected retrospectively for 40 patients with prostate, lung, head and neck, or brain cancers. Comparisons were made between calculations on CT data with and without inhomogeneity corrections and on MRI or CT data with bulk density assignments. The bulk densities were assigned using manual segmentation of tissue, bone, lung, and air cavities. Results The deviations between calculations on CT data with inhomogeneity correction and on bulk density assigned MR data were small. The maximum difference in the number of monitor units required to reach the prescribed dose was 1.6%. This result also includes effects of possible geometrical distortions. Conclusions The dose calculation accuracy at the investigated treatment sites is not significantly compromised when using MRI data when adequate bulk density assignments are made. With respect to treatment planning, MRI can replace CT in all steps of the treatment workflow, reducing the radiation exposure to the patient, removing any systematic registration errors that may occur when combining MR and CT, and decreasing time and cost for the extra CT investigation. PMID:20591179

  14. Accuracy in Rietveld quantitative phase analysis: a comparative study of strictly monochromatic Mo and Cu radiations

    PubMed Central

    León-Reina, L.; García-Maté, M.; Álvarez-Pinazo, G.; Santacruz, I.; Vallcorba, O.; De la Torre, A. G.; Aranda, M. A. G.

    2016-01-01

    This study reports 78 Rietveld quantitative phase analyses using Cu Kα1, Mo Kα1 and synchrotron radiations. Synchrotron powder diffraction has been used to validate the most challenging analyses. From the results for three series with increasing contents of an analyte (an inorganic crystalline phase, an organic crystalline phase and a glass), it is inferred that Rietveld analyses from high-energy Mo Kα1 radiation have slightly better accuracies than those obtained from Cu Kα1 radiation. This behaviour has been established from the results of the calibration graphics obtained through the spiking method and also from Kullback–Leibler distance statistic studies. This outcome is explained, in spite of the lower diffraction power for Mo radiation when compared to Cu radiation, as arising because of the larger volume tested with Mo and also because higher energy allows one to record patterns with fewer systematic errors. The limit of detection (LoD) and limit of quantification (LoQ) have also been established for the studied series. For similar recording times, the LoDs in Cu patterns, ∼0.2 wt%, are slightly lower than those derived from Mo patterns, ∼0.3 wt%. The LoQ for a well crystallized inorganic phase using laboratory powder diffraction was established to be close to 0.10 wt% in stable fits with good precision. However, the accuracy of these analyses was poor with relative errors near to 100%. Only contents higher than 1.0 wt% yielded analyses with relative errors lower than 20%. PMID:27275132

  15. Evaluation of factors affecting the accuracy of impressions using quantitative surface analysis.

    PubMed

    Lee, I K; DeLong, R; Pintado, M R; Malik, R

    1995-01-01

    Impression material goes from a plastic to an elastic state during setting. Movement of the impression and excessive seating pressure during this transition can cause distortion in the impressions. The purpose of this study is to determine if the impression distortion is related to movement during setting or to distortion of the putty phase in the two-step impression technique. A master model of a maxillary quadrant of teeth was impressed using four different procedures: 1) one-step technique without movement (1S-NM); 2) one-step technique with movement (1S-M); 3) two-step technique without movement (2S-NM); and 4) two-step technique with movement (2S-M). An artificial oral environment and surface analysis technique of the Minnesota Dental Research Center for Biomaterials and Biomechanics were used to produce the impressions and measure their accuracy. A digitized image of the first premolar of the master model was aligned with a digitized image of the first premolar of each epoxy model using AnSur. The root mean squared difference (RMS) between the aligned images is a measure of the distortion. The corresponding RMS values for the different methods were: 1S-NM = 23.7 +/- 9.21; 1S-M = 20.4 +/- 3.9; 2S-NM = 20.5 +/- 7.7; 2S-M = 21.3 +/- 4.4. Statistical analysis using a two-way analysis of variance showed no difference at the 0.05 level of significance. Pairwise comparison using the Tukey method showed that neither technique (one-step vs two-step) nor movement is a significant factor. These results showed that low seating pressure will not cause any greater distortions in the two-step impression technique than in the one-step technique, and minor movement during the setting of the impression material will not cause distortion.

  16. Concordance analysis and diagnostic test accuracy review of IDH1 immunohistochemistry in glioblastoma.

    PubMed

    Pyo, Jung-Soo; Kim, Nae Yu; Kim, Roy Hyun Jai; Kang, Guhyun

    2016-10-01

    The study investigated isocitrate dehydrogenase (IDH) 1 immunohistochemistry (IHC) positive rate and concordance rate between IDH1 IHC and molecular test in glioblastoma. The current study included 1360 glioblastoma cases from sixteen eligible studies. Meta-analysis, including subgroup analysis by antibody clones and cut-off values, for IDH1 IHC positive rate was conducted. In addition, we performed a concordance analysis and diagnostic test accuracy review between IDH1 IHC and molecular tests. The estimated rates of IDH1 IHC were 0.106 [95 % confidence interval (CI) 0.085-0.132]. The IDH1 IHC positive rate of primary and secondary glioblastomas was 0.049 (95 % CI 0.023-0.99) and 0.729 (95 % CI 0.477-0.889), respectively. The overall concordance rate between IDH1 IHC and molecular test was 0.947 (95 % CI 0.878-0.978). In IDH1 IHC-positive and negative subgroups, the concordance rate was 0.842 (95 % CI 0.591-0.952) and 0.982 (95 % CI 0.941-0.995), respectively. The pooled sensitivity and specificity for IDH1 IHC were 1.00 (95 % CI 0.82-1.00) and 0.99 (95 % CI 0.96-1.00), respectively. IDH1 IHC is an accurate test for IDH1 mutation in glioblastoma patients. Further cumulative studies for evaluation criteria of IDH1 IHC will determine how to best apply this approach in daily practice.

  17. Measuring Speech Recognition Proficiency: A Psychometric Analysis of Speed and Accuracy

    ERIC Educational Resources Information Center

    Rader, Martha H.; Bailey, Glenn A.; Kurth, Linda A.

    2008-01-01

    This study examined the validity of various measures of speed and accuracy for assessing proficiency in speech recognition. The study specifically compared two different word-count indices for speed and accuracy (the 5-stroke word and the 1.4-syllable standard word) on a timing administered to 114 speech recognition students measured at 1-, 2-,…

  18. Accuracy analysis of direct georeferenced UAV images utilising low-cost navigation sensors

    NASA Astrophysics Data System (ADS)

    Briese, Christian; Wieser, Martin; Verhoeven, Geert; Glira, Philipp; Doneus, Michael; Pfeifer, Norbert

    2014-05-01

    control points should be used to improve the estimated values, especially to decrease the amount of systematic errors. For the bundle block adjustment, the calibration of the camera and its temporal stability must additionally be determined. This contribution presents, next to the theory, a practical study on the accuracy analysis of directly georeferenced UAV imagery based on low-cost navigation sensors. The analysis was carried out within the research project ARAP (automated (ortho)rectification of archaeological aerial photographs). The utilized UAS consists of the airplane "MAJA", manufactured by "Bormatec" (length: 1.2 m, wingspan: 2.2 m) equipped with the autopilot "ArduPilot Mega 2.5". For image acquisition the camera "Ricoh GR Digital IV" is utilised. The autopilot includes a GNSS receiver capable of DGPS (EGNOS), an inertial measurement system (INS), a barometer, and a magnetometer. In the study the achieved accuracies for the estimated position and orientation of the images are presented. The paper concludes with a summary of the remaining error sources and their possible corrections by applying further improvements to the utilised equipment and the direct georeferencing process.

  19. Analysis of Measurement Accuracy for Craniovertebral Junction Pathology : Most Reliable Method for Cephalometric Analysis

    PubMed Central

    Lee, Ho Jin; Kim, Il Sup; Kwon, Jae Yeol; Lee, Sang Won

    2013-01-01

    Objective This study was designed to determine the most reliable cephalometric measurement technique in the normal population and in patients with basilar invagination (BI). Methods Twenty-two lateral radiographs of BI patients and 25 lateral cervical radiographs of an age- and sex-matched normal population were selected and measured on two separate occasions by three spine surgeons using six different measurements. Statistical analysis including the intraclass correlation coefficient (ICC) was carried out using the SPSS software (V. 12.0). Results The Redlund-Johnell and Modified (M)-Ranawat methods had the highest ICC scores in both the normal and BI groups in the inter-observer study. The M-Ranawat method (0.83) had the highest ICC score in the normal group, and the Redlund-Johnell method (0.80) had the highest ICC score in the BI group in the intra-observer test. The McGregor line had the lowest ICC score and a poor ICC grade in both groups in the intra-observer study. Generally, measurement methods using the odontoid process did not produce consistent results due to inter- and intra-observer differences in determining the position of the odontoid tip. The opisthion and the caudal point of the occipital midline curve are somewhat ambiguous landmarks, which induce variable ICC scores. Conclusion In contrast to other studies, the Ranawat method had a lower ICC score in the inter-observer study. The C2 end-plate and C1 arch can be the most reliable anatomical landmarks. PMID:24294449

  20. The Efficacy of Written Corrective Feedback in Improving L2 Written Accuracy: A Meta-Analysis

    ERIC Educational Resources Information Center

    Kang, EunYoung; Han, Zhaohong

    2015-01-01

    Written corrective feedback has been subject to increasing attention in recent years, in part because of the conceptual controversy surrounding it and in part because of its ubiquitous practice. This study takes a meta-analytic approach to synthesizing extant empirical research, including 21 primary studies. Guiding the analysis are two questions:…

  1. SU-E-J-37: Feasibility of Utilizing Carbon Fiducials to Increase Localization Accuracy of Lumpectomy Cavity for Partial Breast Irradiation

    SciTech Connect

    Zhang, Y; Hieken, T; Mutter, R; Park, S; Yan, E; Brinkmann, D; Pafundi, D

    2015-06-15

    Purpose To investigate the feasibility of utilizing carbon fiducials to increase localization accuracy of the lumpectomy cavity for partial breast irradiation (PBI). Methods Carbon fiducials were placed intraoperatively in the lumpectomy cavity following resection of breast cancer in 11 patients. The patients were scheduled to receive whole breast irradiation (WBI) with a boost or 3D-conformal PBI. WBI patients were initially set up to skin tattoos using lasers, followed by orthogonal kV on-board-imaging (OBI) matching to bone per clinical practice. Cone beam CT (CBCT) was acquired weekly for offline review. For the boost component of WBI and PBI, patients were set up with lasers, followed by OBI matching to fiducials, with final alignment by CBCT matching to fiducials. Using carbon fiducials as a surrogate for the lumpectomy cavity and CBCT matching to fiducials as the gold standard, setup uncertainties to lasers, OBI bone, OBI fiducials, and CBCT breast were compared. Results Minimal imaging artifacts were introduced by fiducials on the planning CT and CBCT. The fiducials were sufficiently visible on OBI for online localization. The mean magnitude and standard deviation of setup errors were 8.4 mm ± 5.3 mm (n=84), 7.3 mm ± 3.7 mm (n=87), 2.2 mm ± 1.6 mm (n=40) and 4.8 mm ± 2.6 mm (n=87), for lasers, OBI bone, OBI fiducials and CBCT breast tissue, respectively. Significant migration occurred in one of 39 implanted fiducials in a patient with a large postoperative seroma. Conclusion OBI carbon fiducial-based setup can improve localization accuracy with minimal imaging artifacts. With increased localization accuracy, setup uncertainties can be reduced from 8 mm using OBI bone matching to 3 mm using OBI fiducial matching for PBI treatment. This work demonstrates the feasibility of utilizing carbon fiducials to increase localization accuracy to the lumpectomy cavity for PBI. This may be particularly attractive for localization in the setting of proton therapy and other scenarios

  2. Accuracy of ionospheric models used in GNSS and SBAS: methodology and analysis

    NASA Astrophysics Data System (ADS)

    Rovira-Garcia, A.; Juan, J. M.; Sanz, J.; González-Casado, G.; Ibáñez, D.

    2016-03-01

    The characterization of the accuracy of ionospheric models currently used in global navigation satellite systems (GNSSs) is a long-standing issue. The characterization remains a challenging problem owing to the lack of sufficiently accurate slant ionospheric determinations to be used as a reference. The present study proposes a methodology based on the comparison of the predictions of any ionospheric model with actual unambiguous carrier-phase measurements from a global distribution of permanent receivers. The differences are separated as hardware delays (a receiver constant plus a satellite constant) per day. The present study was conducted for the entire year of 2014, i.e. during the last solar cycle maximum. The ionospheric models assessed are the operational models broadcast by the global positioning system (GPS) and Galileo constellations, the satellite-based augmentation system (SBAS) (i.e. European Geostationary Navigation Overlay System (EGNOS) and wide area augmentation system (WAAS)), a number of post-process global ionospheric maps (GIMs) from different International GNSS Service (IGS) analysis centres (ACs) and, finally, a more sophisticated GIM computed by the research group of Astronomy and GEomatics (gAGE). Ionospheric models based on GNSS data and represented on a grid (IGS GIMs or SBAS) correct about 85 % of the total slant ionospheric delay, whereas the models broadcasted in the navigation messages of GPS and Galileo only account for about 70 %. Our gAGE GIM is shown to correct 95 % of the delay. The proposed methodology appears to be a useful tool to improve current ionospheric models.
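
    The quantity being compared here, the slant ionospheric delay, is related to TEC through the first-order expression 40.3·STEC/f². The sketch below converts a vertical TEC value from a grid model to a slant delay using a thin single-layer mapping function; the layer height and Earth radius are typical assumed values, not parameters from the paper.

        import math

        def slant_iono_delay_m(vtec_tecu, freq_hz, zenith_deg, h_ion_m=450e3, r_earth_m=6371e3):
            """First-order slant ionospheric delay (metres) from vertical TEC,
            using the thin single-layer mapping function (illustrative sketch)."""
            z = math.radians(zenith_deg)
            sin_zp = r_earth_m / (r_earth_m + h_ion_m) * math.sin(z)
            mapping = 1.0 / math.sqrt(1.0 - sin_zp ** 2)
            stec = vtec_tecu * 1e16 * mapping          # 1 TECU = 1e16 electrons/m^2
            return 40.3 * stec / freq_hz ** 2

        # e.g. 25 TECU vertical TEC, GPS L1, satellite at 60 deg zenith angle (assumed values)
        print(slant_iono_delay_m(25.0, 1575.42e6, 60.0))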

  3. Analysis of Current Position Determination Accuracy in Natural Resources Canada Precise Point Positioning Service

    NASA Astrophysics Data System (ADS)

    Krzan, Grzegorz; Dawidowicz, Karol; Krzysztof, Świaţek

    2013-09-01

    Precise Point Positioning (PPP) is a technique used to determine high-precision positions with a single GNSS receiver. Unlike DGPS or RTK, satellite observations processed with the PPP technique are not differenced; therefore, they require parameter models, such as satellite clock and orbit corrections, to be used in data processing. Apart from explaining the theory of the PPP technique, this paper describes the available web-based online services used in the post-processing of observation results. The results obtained by post-processing satellite observations at three points with different environmental conditions using the CSRS-PPP service are presented as the results of the experiment. This study examines the effect of the duration of the measurement session on the results and compares the results obtained by processing GPS-only observations with those obtained from combined GPS and GLONASS observations. It also presents an analysis of the position determination accuracy using one and two measurement frequencies.
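
    Dual-frequency PPP processing typically removes the first-order ionospheric delay with the ionosphere-free combination of the two frequencies. A minimal sketch with the GPS L1/L2 frequencies (the combination applies equally to carrier phases expressed in metres); the pseudorange values are hypothetical.

        def iono_free(p1, p2, f1=1575.42e6, f2=1227.60e6):
            """Dual-frequency ionosphere-free combination of two pseudoranges (or
            carrier phases in metres); removes the first-order ionospheric delay
            at the cost of amplified measurement noise."""
            return (f1 ** 2 * p1 - f2 ** 2 * p2) / (f1 ** 2 - f2 ** 2)

        # hypothetical L1/L2 pseudoranges (m) to the same satellite
        p_if = iono_free(22123456.78, 22123459.34)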

  4. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
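
    A minimal sketch of the PCA step described here, acting as a pseudo-linear filter on the concurrent multi-sensor outputs; the reduced features would then be mapped to position, e.g. by an ANN, which is omitted. The sample data and array shapes are placeholders.

        import numpy as np

        def pca_reduce(sensor_outputs, n_components=3):
            """Project concurrent multi-sensor field measurements onto their leading
            principal components (SVD-based PCA); returns the reduced features,
            the mean, and the principal directions."""
            X = np.asarray(sensor_outputs, float)        # shape (n_samples, n_sensors)
            mean = X.mean(axis=0)
            _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
            components = vt[:n_components]               # principal directions
            return (X - mean) @ components.T, mean, components

        # e.g. 500 training samples from a hypothetical 9-sensor array
        rng = np.random.default_rng(0)
        features, mean, comps = pca_reduce(rng.normal(size=(500, 9)), n_components=3)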

  5. An analysis of the accuracy of wearable sensors for classifying the causes of falls in humans.

    PubMed

    Aziz, Omar; Robinovitch, Stephen N

    2011-12-01

    Falls are the number one cause of injury in older adults. Wearable sensors, typically consisting of accelerometers and/or gyroscopes, represent a promising technology for preventing and mitigating the effects of falls. At present, the goal of such "ambulatory fall monitors" is to detect the occurrence of a fall and alert care providers to this event. Future systems may also provide information on the causes and circumstances of falls, to aid clinical diagnosis and targeting of interventions. As a first step towards this goal, the objective of the current study was to develop and evaluate the accuracy of a wearable sensor system for determining the causes of falls. Sixteen young adults participated in experimental trials involving falls due to slips, trips, and "other" causes of imbalance. Three-dimensional acceleration data acquired during the falling trials were input to a linear discriminant analysis technique. This routine achieved 96% sensitivity and 98% specificity in distinguishing the causes of falls using acceleration data from three markers (left ankle, right ankle, and sternum). In contrast, a single marker provided 54% sensitivity and two markers provided 89% sensitivity. These results indicate the utility of a three-node accelerometer array for distinguishing the cause of falls.
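
    A minimal sketch of this kind of classification pipeline using scikit-learn's linear discriminant analysis; the feature matrix and class labels below are random placeholders standing in for windowed accelerometer features and the slip/trip/other labels, so the output accuracy is meaningless and only the workflow is illustrated.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        # Placeholder feature matrix: one row per fall trial, columns standing in for
        # summary statistics of 3-D acceleration from three sensors (ankles, sternum).
        rng = np.random.default_rng(1)
        X = rng.normal(size=(48, 18))                 # placeholder features
        y = np.repeat([0, 1, 2], 16)                  # 0=slip, 1=trip, 2=other (placeholder)

        clf = LinearDiscriminantAnalysis()
        scores = cross_val_score(clf, X, y, cv=5)     # cross-validated accuracy
        print(scores.mean())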

  6. The reliability, validity, and accuracy of self-reported absenteeism from work: a meta-analysis.

    PubMed

    Johns, Gary; Miraglia, Mariella

    2015-01-01

    Because of a variety of access limitations, self-reported absenteeism from work is often employed in research concerning health, organizational behavior, and economics, and it is ubiquitous in large scale population surveys in these domains. Several well established cognitive and social-motivational biases suggest that self-reports of absence will exhibit convergent validity with records-based measures but that people will tend to underreport the behavior. We used meta-analysis to summarize the reliability, validity, and accuracy of absence self-reports. The results suggested that self-reports of absenteeism offer adequate test-retest reliability and that they exhibit reasonably good rank order convergence with organizational records. However, people have a decided tendency to underreport their absenteeism, although such underreporting has decreased over time. Also, self-reports were more accurate when sickness absence rather than absence for any reason was probed. It is concluded that self-reported absenteeism might serve as a valid measure in some correlational research designs. However, when accurate knowledge of absolute absenteeism levels is essential, the tendency to underreport could result in flawed policy decisions.

  7. The psychology of intelligence analysis: drivers of prediction accuracy in world politics.

    PubMed

    Mellers, Barbara; Stone, Eric; Atanasov, Pavel; Rohrbaugh, Nick; Metz, S Emlen; Ungar, Lyle; Bishop, Michael M; Horowitz, Michael; Merkle, Ed; Tetlock, Philip

    2015-03-01

    This article extends psychological methods and concepts into a domain that is as profoundly consequential as it is poorly understood: intelligence analysis. We report findings from a geopolitical forecasting tournament that assessed the accuracy of more than 150,000 forecasts of 743 participants on 199 events occurring over 2 years. Participants were above average in intelligence and political knowledge relative to the general population. Individual differences in performance emerged, and forecasting skills were surprisingly consistent over time. Key predictors were (a) dispositional variables of cognitive ability, political knowledge, and open-mindedness; (b) situational variables of training in probabilistic reasoning and participation in collaborative teams that shared information and discussed rationales (Mellers, Ungar, et al., 2014); and (c) behavioral variables of deliberation time and frequency of belief updating. We developed a profile of the best forecasters; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness. They had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments. Last but not least, they viewed forecasting as a skill that required deliberate practice, sustained effort, and constant monitoring of current affairs.

  8. A novel method for crosstalk analysis of biological networks: improving accuracy of pathway annotation

    PubMed Central

    Ogris, Christoph; Guala, Dimitri; Helleday, Thomas; Sonnhammer, Erik L. L.

    2017-01-01

    Analyzing gene expression patterns is a mainstay to gain functional insights of biological systems. A plethora of tools exist to identify significant enrichment of pathways for a set of differentially expressed genes. Most tools analyze gene overlap between gene sets and are therefore severely hampered by the current state of pathway annotation, yet at the same time they run a high risk of false assignments. A way to improve both true positive and false positive rates (FPRs) is to use a functional association network and instead look for enrichment of network connections between gene sets. We present a new network crosstalk analysis method BinoX that determines the statistical significance of network link enrichment or depletion between gene sets, using the binomial distribution. This is a much more appropriate statistical model than previous methods have employed, and as a result BinoX yields substantially better true positive and FPRs than was possible before. A number of benchmarks were performed to assess the accuracy of BinoX and competing methods. We demonstrate examples of how BinoX finds many biologically meaningful pathway annotations for gene sets from cancer and other diseases, which are not found by other methods. BinoX is available at http://sonnhammer.org/BinoX. PMID:27664219
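    BinoX tests whether the number of network links between two gene sets is larger than expected under a binomial background model. The sketch below illustrates that idea with SciPy's binomial test; the link counts and background density are made-up values and this is not the authors' implementation.

    ```python
    # Hedged sketch: binomial test for enrichment of functional-association links
    # between two gene sets, loosely following the idea behind BinoX.
    from scipy.stats import binomtest

    n = 2000        # number of possible gene pairs between set A and set B (assumed)
    k = 180         # observed links between the sets in the network (assumed)
    p0 = 0.06       # background link density of the network (assumed)

    result = binomtest(k, n, p0, alternative="greater")   # enrichment test
    print("links observed:", k, "expected:", n * p0, "p-value:", result.pvalue)
    ```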

  9. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis

    PubMed Central

    Foong, Shaohui; Sun, Zhenglong

    2016-01-01

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA-assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system are experimentally evaluated on a linear actuator, with a significantly more expensive optical encoder as a comparison. PMID:27529253
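    The core pattern in this record is PCA compression of multi-sensor readings followed by a neural-network field-to-position map. Below is a minimal sketch of that pattern, assuming scikit-learn; the sensor response model, noise level, and network size are illustrative assumptions, not the published system.

    ```python
    # Hedged sketch: PCA as a pseudo-linear filter on a 9-sensor field reading,
    # followed by an ANN regression to actuator position. Synthetic data only.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    positions = rng.uniform(0, 100, size=(500, 1))                        # position (mm)
    fields = np.hstack([np.sin(positions / (10 + i)) for i in range(9)])  # 9 sensor outputs
    fields += 0.01 * rng.normal(size=fields.shape)                        # Gaussian noise

    model = make_pipeline(PCA(n_components=3),                # reduce 9 -> 3 dimensions
                          MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000))
    model.fit(fields, positions.ravel())
    print("training R^2:", model.score(fields, positions.ravel()))
    ```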

  10. Objective analysis of the Gulf Stream thermal front: methods and accuracy. Technical report

    SciTech Connect

    Tracey, K.L.; Friedlander, A.I.; Watts, R.

    1987-12-01

    The objective-analysis (OA) technique was adapted by Watts and Tracey in order to map the thermal frontal zone of the Gulf Stream. Here, the authors test the robustness of the adapted OA technique to the selection of four control parameters: mean field, standard deviation field, correlation function, and decimation time. Output OA maps of the thermocline depth are most affected by the choice of mean field, with the most-realistic results produced using a time-averaged mean. The choice of the space-time correlation function has a large influence on the size of the estimated error fields, which are associated with the OA maps. The smallest errors occur using the analytic function based on 4 years of inverted echo sounder data collected in the same region of the Gulf Stream. Variations in the selection of the standard deviation field and decimation time have little effect on the output OA maps. Accuracy of the output OA maps is determined by comparing them with independent measurements of the thermal field. Two cases are evaluated: standard maps and high-temporal-resolution maps, with decimation times of 2 days and 1 day, respectively. Standard deviations (STD) between the standard maps at the 15% estimated error level and the XBTs (AXBTs) are determined to be 47-53 m. Comparisons of the high-temporal-resolution maps at the 20% error level with the XBTs (AXBTs) give STD differences of 47 m.

  11. Accuracy of a remote quantitative image analysis in the whole slide images.

    PubMed

    Słodkowska, Janina; Markiewicz, Tomasz; Grala, Bartłomiej; Kozłowski, Wojciech; Papierz, Wielisław; Pleskacz, Katarzyna; Murawski, Piotr

    2011-03-30

    The rationale for choosing a remote quantitative method to support a diagnostic decision requires empirical studies and knowledge of scenarios, including valid telepathology standards. Tumours of the central nervous system [CNS] are graded on the basis of morphological features and the Ki-67 labelling index [Ki-67 LI]. Various methods have been applied for Ki-67 LI estimation. Recently we introduced the Computerized Analysis of Medical Images [CAMI] software for automated Ki-67 LI counting in digital images. The aim of our study was to explore the accuracy and reliability of a remote assessment of Ki-67 LI with the CAMI software applied to whole slide images [WSI]. The WSIs, representing CNS tumours (18 meningiomas and 10 oligodendrogliomas), were stored on the server of the Warsaw University of Technology. The digital copies of entire glass slides were created automatically by the Aperio ScanScope CS with a 20x or 40x objective. Aperio's ImageScope software provided functionality for remote viewing of the WSIs. The Ki-67 LI assessment was carried out within 2 out of 20 selected fields of view (40x objective) representing the highest labelling areas in each WSI. The Ki-67 LI counting was performed by 3 methods: 1) manual reading in the light microscope (LM), 2) automated counting with the CAMI software on digital images (DI), and 3) remote quantitation on the WSIs (WSI method). The quality of the WSIs and the technical efficiency of the on-line system were analysed. A comparative statistical analysis was performed for the results obtained by the 3 methods of Ki-67 LI counting. The preliminary analysis showed that in 18% of WSIs the Ki-67 LI results differed from those obtained by the other 2 counting methods when the quality of the glass slides was below the standard range. The results of our investigations indicate that the remote automated Ki-67 LI analysis performed with the CAMI algorithm on the whole slide images of meningiomas and

  12. An Original Stepwise Multilevel Logistic Regression Analysis of Discriminatory Accuracy: The Case of Neighbourhoods and Health

    PubMed Central

    Wagner, Philippe; Ghith, Nermin; Leckie, George

    2016-01-01

    Background and Aim Many multilevel logistic regression analyses of “neighbourhood and health” focus on interpreting measures of associations (e.g., odds ratio, OR). In contrast, multilevel analysis of variance is rarely considered. We propose an original stepwise analytical approach that distinguishes between “specific” (measures of association) and “general” (measures of variance) contextual effects. Using two empirical examples, we illustrate the methodology, interpret the results and discuss the implications of this kind of analysis in public health. Methods We analyse 43,291 individuals residing in 218 neighbourhoods in the city of Malmö, Sweden in 2006. We study two individual outcomes (psychotropic drug use and choice of private vs. public general practitioner, GP) for which the relative importance of neighbourhood as a source of individual variation differs substantially. In Step 1 of the analysis, we evaluate the OR and the area under the receiver operating characteristic curve (AUC) for individual-level covariates (i.e., age, sex and individual low income). In Step 2, we assess general contextual effects using the AUC. Finally, in Step 3 the OR for a specific neighbourhood characteristic (i.e., neighbourhood income) is interpreted jointly with the proportional change in variance (PCV) and the proportion of ORs in the opposite direction (POOR) statistics. Results For both outcomes, information on individual characteristics (Step 1) provided low discriminatory accuracy (AUC = 0.616 for psychotropic drugs and 0.600 for choosing a private GP). Accounting for neighbourhood of residence (Step 2) only improved the AUC for choosing a private GP (+0.295 units). High neighbourhood income (Step 3) was strongly associated with choosing a private GP (OR = 3.50), but the PCV was only 11% and the POOR 33%. Conclusion Applying an innovative stepwise multilevel analysis, we observed that, in Malmö, the neighbourhood context per se had a negligible
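    Steps 1 and 2 of the approach above compare the AUC of a model with individual covariates only against one that also accounts for neighbourhood membership. The sketch below reproduces that comparison on simulated data, assuming scikit-learn; the covariates, cluster effects, and sample sizes are invented for illustration and are not the Malmö analysis.

    ```python
    # Hedged sketch: discriminatory accuracy (AUC) of an individual-level model
    # vs. a model that also encodes neighbourhood membership. Simulated data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    n, n_hoods = 5000, 50
    age = rng.uniform(20, 80, n)
    low_income = rng.binomial(1, 0.3, n)
    hood = rng.integers(0, n_hoods, n)
    hood_effect = rng.normal(0, 1.0, n_hoods)[hood]          # general contextual effect
    p = 1 / (1 + np.exp(-(-2 + 0.01 * age + 0.5 * low_income + hood_effect)))
    y = rng.binomial(1, p)                                   # binary outcome

    X1 = np.column_stack([age, low_income])                  # Step 1: individual covariates
    X2 = np.column_stack([X1, np.eye(n_hoods)[hood]])        # Step 2: + neighbourhood dummies

    for name, X in [("individual only", X1), ("+ neighbourhood", X2)]:
        probs = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
        print(name, "AUC = %.3f" % roc_auc_score(y, probs))
    ```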

  13. On the automaticity and flexibility of covert attention: a speed-accuracy trade-off analysis.

    PubMed

    Giordano, Anna Marie; McElree, Brian; Carrasco, Marisa

    2009-03-31

    Exogenous covert attention improves discriminability and accelerates the rate of visual information processing (M. Carrasco & B. McElree, 2001). Here we investigated and compared the effects of both endogenous (sustained) and exogenous (transient) covert attention. Specifically, we directed attention via spatial cues and evaluated the automaticity and flexibility of exogenous and endogenous attention by manipulating cue validity in conjunction with a response-signal speed-accuracy trade-off (SAT) procedure, which provides conjoint measures of discriminability and information accrual. To investigate whether discriminability and rate of information processing differ as a function of cue validity (chance to 100%), we compared how both types of attention affect performance while keeping experimental conditions constant. With endogenous attention, both the observed benefits (valid-cue) and the costs (invalid-cue) increased with cue validity. However, with exogenous attention, the benefits and costs in both discriminability and processing speed were similar across cue validity conditions. These results provide compelling time-course evidence that whereas endogenous attention can be flexibly allocated according to cue validity, exogenous attention is automatic and unaffected by cue validity.

  14. The improvement of OPC accuracy and stability by the model parameters' analysis and optimization

    NASA Astrophysics Data System (ADS)

    Chung, No-Young; Choi, Woon-Hyuk; Lee, Sung-Ho; Kim, Sung-Il; Lee, Sun-Yong

    2007-10-01

    The OPC model is very critical for sub-45 nm devices because the Critical Dimension Uniformity (CDU) requirement is so tight to meet device performance and process window latitude at the production level. An OPC model is generally composed of an optical model and a resist model. Each of them has physical terms, which can be calculated without any wafer data, and empirical terms, which are fitted to real wafer data during optical and resist modeling. The empirical terms usually govern OPC accuracy, but they are likely to be overestimated when fitted to the wafer data, and such overestimation driven by a small cost function can deteriorate OPC stability. Several physical terms used to be set to ideal values for the optical properties, or were not considered at all, because those parameters did not have a critical impact on OPC accuracy; at the low-k1 process node, however, these parameters need to be applied in OPC modeling. Currently, real optical parameters such as the laser bandwidth, source map, and pupil polarization (including phase and intensity differences) are starting to be measured instead of assuming ideal values, and those measured values are used for OPC modeling. These measured values can improve model accuracy and stability; on the other hand, without careful handling they can make the OPC model overcorrect the process proximity errors. The laser bandwidth, source map, pupil polarization, and focus centering for the optical modeling are analyzed, and the sample data weighting scheme and resist model terms are investigated as well. The image blurring caused by the actual laser bandwidth in the exposure system is modeled, and the modeling result shows that extraction of the 2D patterns is necessary to obtain a reasonable result, due to the 2D patterns' measurement noise in the SEM. The source map data from the exposure machine show considerable horizontal and vertical intensity differences, and this phenomenon must come from the measurement noise

  15. Development of Serum Marker Models to Increase Diagnostic Accuracy of Advanced Fibrosis in Nonalcoholic Fatty Liver Disease: The New LINKI Algorithm Compared with Established Algorithms

    PubMed Central

    Lykiardopoulos, Byron; Hagström, Hannes; Fredrikson, Mats; Ignatova, Simone; Stål, Per; Hultcrantz, Rolf; Ekstedt, Mattias

    2016-01-01

    Background and Aim Detection of advanced fibrosis (F3-F4) in nonalcoholic fatty liver disease (NAFLD) is important for ascertaining prognosis. Serum markers have been proposed as alternatives to biopsy. We attempted to develop a novel algorithm for detection of advanced fibrosis based on a more efficient combination of serological markers and to compare this with established algorithms. Methods We included 158 patients with biopsy-proven NAFLD. Of these, 38 had advanced fibrosis. The following fibrosis algorithms were calculated: NAFLD fibrosis score, BARD, NIKEI, NASH-CRN regression score, APRI, FIB-4, King's score, GUCI, Lok index, Forns score, and ELF. The study population was randomly divided into a training group and a validation group. A multiple logistic regression analysis using bootstrapping methods was applied to the training group. Among the many variables analyzed, age, fasting glucose, hyaluronic acid, and AST were included, and a model (LINKI-1) for predicting advanced fibrosis was created. Moreover, these variables were combined with platelet count in a mathematical way exaggerating the opposing effects, and alternative models (LINKI-2) were also created. Models were compared using the area under the receiver operator characteristic curve (AUROC). Results Of the established algorithms, FIB-4 and King's score had the best diagnostic accuracy, with AUROCs of 0.84 and 0.83, respectively. Higher accuracy was achieved with the novel LINKI algorithms: the AUROC in the total cohort was 0.91 for LINKI-1 and 0.89 for the LINKI-2 models. Conclusion The LINKI algorithms for detection of advanced fibrosis in NAFLD showed better accuracy than established algorithms and should be validated in further studies including larger cohorts. PMID:27936091
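    The derivation step described above is, in outline, a bootstrapped logistic regression judged by AUROC. The sketch below shows that outline with scikit-learn on simulated serum markers; the variable distributions, bootstrap count, and outcome prevalence are assumptions, and this is not the published LINKI formula.

    ```python
    # Hedged sketch: logistic regression with bootstrap resampling of the
    # coefficient estimates, evaluated by AUROC. Simulated placeholder data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.utils import resample

    rng = np.random.default_rng(3)
    n = 158
    X = np.column_stack([rng.normal(55, 12, n),      # age
                         rng.normal(6.0, 1.5, n),    # fasting glucose
                         rng.normal(60, 30, n),      # hyaluronic acid
                         rng.normal(40, 20, n)])     # AST
    y = rng.binomial(1, 0.24, n)                     # advanced fibrosis yes/no

    coefs = []
    for _ in range(200):                             # bootstrap the coefficients
        Xb, yb = resample(X, y)
        coefs.append(LogisticRegression(max_iter=1000).fit(Xb, yb).coef_[0])
    print("bootstrap coefficient means:", np.round(np.mean(coefs, axis=0), 3))

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("apparent AUROC: %.2f" % roc_auc_score(y, model.predict_proba(X)[:, 1]))
    ```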

  16. The influence of uncertainties of attitude sensors on attitude determination accuracy by linear covariance analysis

    NASA Astrophysics Data System (ADS)

    Blomqvist, J.; Fullmer, R.

    2010-04-01

    The idea that linear covariance techniques can be used to predict the accuracy of attitude determination systems and assist in their design is investigated. Using each sensor's estimated parameter accuracy, the total standard deviation of the attitude determination resulting from these uncertainties can be calculated as a simple root-sum-square of the attitude standard deviations attributable to the respective uncertainties. MATLAB M-functions using this technique are written in order to provide a tool for estimating the attitude determination accuracy of a small spacecraft and to identify major contributors to the attitude determination uncertainty. This tool is applied to a satellite dynamics truth model developed to quantify the effects of sensor uncertainties on this particular spacecraft's attitude determination accuracy. The result of this study is the standard deviation of the attitude determination as a function of the sensor uncertainties.
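    The root-sum-square combination mentioned above is simple enough to show directly. The sketch below uses invented error contributions (the sensor names and arcsecond values are placeholders, not numbers from the cited study).

    ```python
    # Hedged sketch: root-sum-square combination of attitude-error contributions
    # from individual sensor uncertainties. All values are illustrative.
    import math

    # 1-sigma attitude error (arcsec) attributable to each sensor uncertainty,
    # e.g. from separate linear-covariance runs (assumed values).
    contributions = {"star tracker noise": 12.0,
                     "gyro bias": 8.5,
                     "gyro angle random walk": 5.0,
                     "misalignment": 3.0}

    total_sigma = math.sqrt(sum(v ** 2 for v in contributions.values()))
    print("combined 1-sigma attitude error: %.1f arcsec" % total_sigma)
    ```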

  17. Diagnostic accuracy of a bayesian latent group analysis for the detection of malingering-related poor effort.

    PubMed

    Ortega, Alonso; Labrenz, Stephan; Markowitsch, Hans J; Piefke, Martina

    2013-01-01

    In the last decade, different statistical techniques have been introduced to improve assessment of malingering-related poor effort. In this context, we have recently shown preliminary evidence that a Bayesian latent group model may help to optimize classification accuracy using a simulation research design. In the present study, we conducted two analyses. Firstly, we evaluated how accurately this Bayesian approach can distinguish between participants answering in an honest way (honest response group) and participants feigning cognitive impairment (experimental malingering group). Secondly, we tested the accuracy of our model in the differentiation between patients who had real cognitive deficits (cognitively impaired group) and participants who belonged to the experimental malingering group. All Bayesian analyses were conducted using the raw scores of a visual recognition forced-choice task (2AFC), the Test of Memory Malingering (TOMM, Trial 2), and the Word Memory Test (WMT, primary effort subtests). The first analysis showed 100% accuracy for the Bayesian model in distinguishing participants of both groups with all effort measures. The second analysis showed outstanding overall accuracy of the Bayesian model when estimates were obtained from the 2AFC and the TOMM raw scores. Diagnostic accuracy of the Bayesian model diminished when using the WMT total raw scores. Despite this, overall diagnostic accuracy can still be considered excellent. The most plausible explanation for this decrement is the low performance in verbal recognition and fluency tasks of some patients of the cognitively impaired group. Additionally, the Bayesian model provides individual estimates, p(z_i | D), of examinees' effort levels. In conclusion, both high classification accuracy levels and Bayesian individual estimates of effort may be very useful for clinicians when assessing for effort in medico-legal settings.

  18. The Accuracy of Recidivism Risk Assessments for Sexual Offenders: A Meta-Analysis of 118 Prediction Studies

    ERIC Educational Resources Information Center

    Hanson, R. Karl; Morton-Bourgon, Kelly E.

    2009-01-01

    This review compared the accuracy of various approaches to the prediction of recidivism among sexual offenders. On the basis of a meta-analysis of 536 findings drawn from 118 distinct samples (45,398 sexual offenders, 16 countries), empirically derived actuarial measures were more accurate than unstructured professional judgment for all outcomes…

  19. Accuracy and Feasibility of Video Analysis for Assessing Hamstring Flexibility and Validity of the Sit-and-Reach Test

    ERIC Educational Resources Information Center

    Mier, Constance M.

    2011-01-01

    The accuracy of video analysis of the passive straight-leg raise test (PSLR) and the validity of the sit-and-reach test (SR) were tested in 60 men and women. Computer software measured static hip-joint flexion accurately. High within-session reliability of the PSLR was demonstrated (R greater than 0.97). Test-retest (separate days) reliability for…

  20. Multinational assessment of accuracy of equations for predicting risk of kidney failure: a meta-analysis

    PubMed Central

    Tangri, Navdeep; Grams, Morgan E.; Levey, Andrew S.; Coresh, Josef; Appel, Lawrence; Astor, Brad C.; Chodick, Gabriel; Collins, Allan J.; Djurdjev, Ognjenka; Elley, C. Raina; Evans, Marie; Garg, Amit X.; Hallan, Stein I.; Inker, Lesley; Ito, Sadayoshi; Jee, Sun Ha; Kovesdy, Csaba P.; Kronenberg, Florian; Lambers Heerspink, Hiddo J.; Marks, Angharad; Nadkarni, Girish N.; Navaneethan, Sankar D.; Nelson, Robert G.; Titze, Stephanie; Sarnak, Mark J.; Stengel, Benedicte; Woodward, Mark; Iseki, Kunitoshi

    2016-01-01

    Importance Identifying patients at risk of chronic kidney disease (CKD) progression may facilitate more optimal nephrology care. Kidney failure risk equations (KFREs) were previously developed and validated in two Canadian cohorts. Validation in other regions and in CKD populations not under the care of a nephrologist is needed. Objective To evaluate the accuracy of the KFREs across different geographic regions and patient populations through individual-participant data meta-analysis. Data Sources Thirty-one cohorts, including 721,357 participants with CKD Stages 3–5 in over 30 countries spanning 4 continents, were studied. These cohorts collected data from 1982 through 2014. Study Selection Cohorts participating in the CKD Prognosis Consortium with data on end-stage renal disease. Data Extraction and Synthesis Data were obtained and statistical analyses were performed between July 2012 and June 2015. Using the risk factors from the original KFREs, cohort-specific hazard ratios were estimated, and combined in meta-analysis to form new “pooled” KFREs. Original and pooled equation performance was compared, and the need for regional calibration factors was assessed. Main Outcome and Measure Kidney failure (treatment by dialysis or kidney transplantation). Results During a median follow-up of 4 years, 23,829 cases of kidney failure were observed. The original KFREs achieved excellent discrimination (ability to differentiate those who developed kidney failure from those who did not) across all cohorts (overall C statistic, 0.90 (95% CI 0.89–0.92) at 2 years and 0.88 (95% CI 0.86–0.90) at 5 years); discrimination in subgroups by age, race, and diabetes status was similar. There was no improvement with the pooled equations. Calibration (the difference between observed and predicted risk) was adequate in North American cohorts, but the original KFREs overestimated risk in some non-North American cohorts. Addition of a calibration factor that lowered the baseline

  1. Analysis of the Ship Ops Model’s Accuracy in Predicting U.S. Naval Ship Operating Cost

    DTIC Science & Technology

    2003-06-01

    [Abstract not fully recoverable; the record text consists of report cover-page fragments. Recoverable content: Evaluation of Model Accuracy using backcast, 1997-2002. 1997: SF $24,654, SU $4,315, SR $12,748, SO $6,626, Total $48,343; 1998: SF $29,890, SU $5,853, SR $15,300, SO $9,046 (remainder truncated).]

  2. Limitations and strategies to improve measurement accuracy in differential pulse-width pair Brillouin optical time-domain analysis sensing.

    PubMed

    Minardo, Aldo; Bernini, Romeo; Zeni, Luigi

    2013-05-01

    In this work, we analyze the effects of Brillouin gain and Brillouin frequency drifts on the accuracy of the differential pulse-width pair Brillouin optical time-domain analysis (DPP-BOTDA). In particular, we demonstrate numerically that the differential gain is highly sensitive to variations in the Brillouin gain and/or Brillouin shift occurring during the acquisition process, especially when operating with a small pulse pair duration difference. We also propose and demonstrate experimentally a method to compensate for these drifts and consequently improve measurement accuracy.

  3. Taking time to feel our body: Steady increases in heartbeat perception accuracy and decreases in alexithymia over 9 months of contemplative mental training.

    PubMed

    Bornemann, Boris; Singer, Tania

    2017-03-01

    The ability to accurately perceive signals from the body has been shown to be important for physical and psychological health as well as understanding one's emotions. Despite the importance of this skill, often indexed by heartbeat perception accuracy (HBPa), little is known about its malleability. Here, we investigated whether contemplative mental practice can increase HBPa. In the context of a 9-month mental training study, the ReSource Project, two matched cohorts (n = 77 and n = 79) underwent three training modules of 3 months' duration that targeted attentional and interoceptive abilities (Presence module), socio-affective (Affect module), and socio-cognitive (Perspective module) abilities. A third cohort (n = 78) underwent 3 months of practice (Affect module) and a retest control group (n = 84) did not undergo any training. HBPa was measured with a heartbeat tracking task before and after each training module. Emotional awareness was measured by the Toronto Alexithymia Scale (TAS). Participants with TAS scores > 60 at screening were excluded. HBPa was found to increase steadily over the training, with significant and small- to medium-sized effects emerging after 6 months (Cohen's d = .173) and 9 months (d = .273) of mental training. Changes in HBPa were concomitant with and predictive of changes in emotional awareness. Our results suggest that HBPa can indeed be trained through intensive contemplative practice. The effect takes longer than the 8 weeks of typical mindfulness courses to reach meaningful magnitude. These increments in interoceptive accuracy and the related improvements in emotional awareness point to opportunities for improving physical and psychological health through contemplative mental training.

  4. Analysis: The Accuracy and Efficacy of the Dexcom G4 Platinum Continuous Glucose Monitoring System.

    PubMed

    van Beers, Cornelis A J; DeVries, J H

    2015-04-27

    In this issue of Journal of Diabetes Science and Technology, Nakamura and Balo report on the accuracy and efficacy of the Dexcom G4 Platinum Continuous Glucose Monitoring System. The authors demonstrate good overall performance of this real-time continuous glucose monitoring (RT-CGM) system, although accuracy data for the next-generation RT-CGM system, the G4AP, are already available. Also, now that MARDs seem to be moving to single-digit numbers, the question arises of how low we need to go with accuracy. The results of the study also showed a reduction in time spent in hypoglycemia, although the clinical relevance should be questioned. To date, few trials have demonstrated a reduction of severe hypoglycemia. Conventional RT-CGM, without threshold suspension or closing the loop, might be insufficient to prevent severe hypoglycemia.

  5. Analysis of accuracy of digital elevation models created from captured data by digital photogrammetry method

    NASA Astrophysics Data System (ADS)

    Hudec, P.

    2011-12-01

    A digital elevation model (DEM) is an important part of many geoinformatic applications. For the creation of DEM, spatial data collected by geodetic measurements in the field, photogrammetric processing of aerial survey photographs, laser scanning and secondary sources (analogue maps) are used. It is very important from a user's point of view to know the vertical accuracy of a DEM. The article describes the verification of the vertical accuracy of a DEM for the region of Medzibodrožie, which was created using digital photogrammetry for the purposes of water resources management and modeling and resolving flood cases based on geodetic measurements in the field.

  6. Effect of transportation and storage using sorbent tubes of exhaled breath samples on diagnostic accuracy of electronic nose analysis.

    PubMed

    van der Schee, M P; Fens, N; Brinkman, P; Bos, L D J; Angelo, M D; Nijsen, T M E; Raabe, R; Knobel, H H; Vink, T J; Sterk, P J

    2013-03-01

    Many (multi-centre) breath-analysis studies require transport and storage of samples. We aimed to test the effect of transporting and storing exhaled breath samples in sorbent tubes on the diagnostic accuracy of eNose and GC-MS analysis. As a reference standard for diagnostic accuracy, breath samples of asthmatic patients and healthy controls were analysed by three eNose devices. Samples were analysed by GC-MS and eNose after 1, 7 and 14 days of transportation and storage in sorbent tubes, and the diagnostic accuracy of eNose and GC-MS after storage was compared to the reference standard. As a validation, the stability of 15 compounds known to be related to asthma, abundant in breath, or related to sampling and analysis was assessed. The reference test discriminated between asthma patients and healthy controls with a median AUC (range) of 0.77 (0.72-0.76). Similar accuracies were achieved at t1 (AUC eNose 0.78; GC-MS 0.84), t7 (AUC eNose 0.76; GC-MS 0.79) and t14 (AUC eNose 0.83; GC-MS 0.84). GC-MS analysis showed adequate stability for all 15 compounds over the 14-day period. Short-term transportation and storage of breath samples in sorbent tubes does not influence the diagnostic accuracy of eNose and GC-MS for discriminating between asthma and health.

  7. Theoretical study of precision and accuracy of strain analysis by nano-beam electron diffraction.

    PubMed

    Mahr, Christoph; Müller-Caspary, Knut; Grieb, Tim; Schowalter, Marco; Mehrtens, Thorsten; Krause, Florian F; Zillmann, Dennis; Rosenauer, Andreas

    2015-11-01

    Measurement of lattice strain is important to characterize semiconductor nanostructures. As strain has large influence on the electronic band structure, methods for the measurement of strain with high precision, accuracy and spatial resolution in a large field of view are mandatory. In this paper we present a theoretical study of precision and accuracy of measurement of strain by convergent nano-beam electron diffraction. It is found that the accuracy of the evaluation suffers from halos in the diffraction pattern caused by a variation of strain within the area covered by the focussed electron beam. This effect, which is expected to be strong at sharp interfaces between materials with different lattice plane distances, will be discussed for convergent-beam electron diffraction patterns using a conventional probe and for patterns formed by a precessing electron beam. Furthermore, we discuss approaches to optimize the accuracy of strain measured at interfaces. The study is based on the evaluation of diffraction patterns simulated for different realistic structures that have been investigated experimentally in former publications. These simulations account for thermal diffuse scattering using the frozen-lattice approach and the modulation-transfer function of the image-recording system. The influence of Poisson noise is also investigated.

  8. An analysis of the accuracy of a parameter optimization. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Baram, Y.

    1974-01-01

    The numerical operations involved in a currently used optimization technique are discussed and analyzed with special attention to the numerical accuracy. Alternative methods for deriving linear system transfer functions, finding the relationships between the transfer function coefficients and the design parameters, and solving a matrix equation are presented for more accurate and cost effective solutions.

  9. An Information-Processing Analysis of Children's Accuracy in Predicting the Appearance of Rotated Stimuli.

    ERIC Educational Resources Information Center

    Rosser, Rosemary A.; And Others

    1984-01-01

    The ability of 40 children four and five years of age to discriminate reflections and rotations of visual stimuli was examined in a kinetic imagery task. Results revealed that prediction accuracy was associated with the existence of orientation markers on the stimuli, as well as age, sex, type of discrimination, and several interactions among the…

  10. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…

  11. Comparative analysis of Worldview-2 and Landsat 8 for coastal saltmarsh mapping accuracy assessment

    NASA Astrophysics Data System (ADS)

    Rasel, Sikdar M. M.; Chang, Hsing-Chung; Diti, Israt Jahan; Ralph, Tim; Saintilan, Neil

    2016-05-01

    Coastal saltmarshes and their constituent components and processes are of scientific interest due to their ecological functions and services. However, the heterogeneity and seasonal dynamics of the coastal wetland system make it challenging to map saltmarshes with remotely sensed data. This study selected four important saltmarsh species (Phragmites australis, Sporobolus virginicus, Ficinia nodosa and Schoenoplectus sp.) as well as a mangrove and a pine tree species (Avicennia and Casuarina sp., respectively). High spatial resolution Worldview-2 data and coarse spatial resolution Landsat 8 imagery were selected for this study. Among the selected vegetation types, some patches were fragmented and close to the spatial resolution of the Worldview-2 data, while some patches were larger than the 30 m resolution of the Landsat 8 data. This study aims to test the effectiveness of different classifiers for imagery with various spatial and spectral resolutions. Three classification algorithms, Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM) and Artificial Neural Network (ANN), were tested and compared in terms of the mapping accuracy of the results derived from both satellite images. For the Worldview-2 data, SVM gave the highest overall accuracy (92.12%, kappa = 0.90), followed by ANN (90.82%, kappa = 0.89) and MLC (90.55%, kappa = 0.88). For the Landsat 8 data, MLC (82.04%) showed the highest classification accuracy compared to SVM (77.31%) and ANN (75.23%). The producer's accuracies of the classification results are also presented in the paper.
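    Overall accuracy, kappa, and producer's accuracy are all simple functions of the classification confusion matrix. The sketch below shows those computations; the 6-class confusion matrix is made up for illustration and is not from the study's results.

    ```python
    # Hedged sketch: accuracy metrics from a land-cover confusion matrix.
    import numpy as np

    # rows = reference class, columns = mapped class (6 vegetation classes assumed)
    cm = np.array([[50,  2,  1,  0,  0,  0],
                   [ 3, 45,  2,  1,  0,  0],
                   [ 1,  2, 47,  2,  0,  0],
                   [ 0,  1,  3, 44,  1,  0],
                   [ 0,  0,  0,  1, 48,  2],
                   [ 0,  0,  0,  0,  3, 46]])

    n = cm.sum()
    overall_accuracy = np.trace(cm) / n
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    kappa = (overall_accuracy - expected) / (1 - expected)
    producer_accuracy = np.diag(cm) / cm.sum(axis=1)              # per-class recall

    print("overall accuracy %.2f%%, kappa %.2f" % (100 * overall_accuracy, kappa))
    print("producer's accuracy:", np.round(producer_accuracy, 2))
    ```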

  12. Accuracy Analysis of a Robotic Radionuclide Inspection and Mapping System for Surface Contamination

    SciTech Connect

    Mauer, Georg F.; Kawa, Chris

    2008-01-15

    The mapping of localized regions of radionuclide contamination in a building can be a time consuming and costly task. Humans moving hand-held radiation detectors over the target areas are subject to fatigue. A contamination map based on manual surveys can contain significant operator-induced inaccuracies. A Fanuc M16i light industrial robot has been configured for installation on a mobile aerial work platform, such as a tall forklift. When positioned in front of a wall or floor surface, the robot can map the radiation levels over a surface area of up to 3 m by 3 m. The robot's end effector is a commercial alpha-beta radiation sensor, augmented with range and collision avoidance sensors to ensure operational safety as well as to maintain a constant gap between surface and radiation sensors. The accuracy and repeatability of the robotically conducted contamination surveys are directly influenced by the sensors and other hardware employed. This paper presents an in-depth analysis of various non-contact sensors for gap measurement, and the means to compensate for predicted systematic errors that arise during the area survey scans. The range sensor should maintain a constant gap between the radiation counter and the surface being inspected. The inspection robot scans the wall surface horizontally, moving down at predefined vertical intervals after each scan in a meandering pattern. A number of non-contact range sensors can be employed for the measurement of the gap between the robot end effector and the wall. The nominal gap width was specified as 10 mm, with variations during a single scan not to exceed ±2 mm. Unfinished masonry or concrete walls typically exhibit irregularities, such as holes, gaps, or indentations in mortar joints. These irregularities can be sufficiently large to indicate a change of the wall contour. The responses of different sensor types to the wall irregularities vary, depending on their underlying principles of operation. We explored

  13. Multivariate meta-analysis with an increasing number of parameters.

    PubMed

    Boca, Simina M; Pfeiffer, Ruth M; Sampson, Joshua N

    2017-02-14

    Meta-analysis can average estimates of multiple parameters, such as a treatment's effect on multiple outcomes, across studies. Univariate meta-analysis (UVMA) considers each parameter individually, while multivariate meta-analysis (MVMA) considers the parameters jointly and accounts for the correlation between their estimates. The performance of MVMA and UVMA has been extensively compared in scenarios with two parameters. Our objective is to compare the performance of MVMA and UVMA as the number of parameters, p, increases. Specifically, we show that (i) for fixed-effect (FE) meta-analysis, the benefit from using MVMA can substantially increase as p increases; (ii) for random effects (RE) meta-analysis, the benefit from MVMA can increase as p increases, but the potential improvement is modest in the presence of high between-study variability and the actual improvement is further reduced by the need to estimate an increasingly large between study covariance matrix; and (iii) when there is little to no between-study variability, the loss of efficiency due to choosing RE MVMA over FE MVMA increases as p increases. We demonstrate these three features through theory, simulation, and a meta-analysis of risk factors for non-Hodgkin lymphoma.
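    The fixed-effect machinery compared above can be illustrated directly: univariate pooling weights each parameter by its inverse variance, while multivariate pooling weights each study by its inverse covariance matrix. The sketch below uses simulated study estimates and covariances as placeholders; it is not the authors' analysis of the lymphoma data.

    ```python
    # Hedged sketch: fixed-effect univariate vs multivariate meta-analysis of
    # p = 3 correlated parameters across 10 simulated studies.
    import numpy as np

    rng = np.random.default_rng(4)
    p, n_studies = 3, 10
    true_effect = np.array([0.2, 0.4, 0.6])
    rho = 0.7                                               # within-study correlation
    # Each study has its own within-study covariance matrix (randomly scaled).
    S_list = [s * (rho * np.ones((p, p)) + (1 - rho) * np.eye(p))
              for s in rng.uniform(0.02, 0.10, n_studies)]
    estimates = np.array([rng.multivariate_normal(true_effect, S) for S in S_list])

    # Univariate FE: per-parameter inverse-variance weighting, ignoring correlation.
    variances = np.array([np.diag(S) for S in S_list])
    uvma = (estimates / variances).sum(axis=0) / (1 / variances).sum(axis=0)

    # Multivariate FE: weight each study by its inverse covariance matrix.
    W_sum = sum(np.linalg.inv(S) for S in S_list)
    b_sum = sum(np.linalg.inv(S) @ est for S, est in zip(S_list, estimates))
    mvma = np.linalg.solve(W_sum, b_sum)

    print("UVMA pooled estimates:", np.round(uvma, 3))
    print("MVMA pooled estimates:", np.round(mvma, 3))
    ```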

  14. Accuracy analysis of continuous deformation monitoring using BeiDou Navigation Satellite System at middle and high latitudes in China

    NASA Astrophysics Data System (ADS)

    Jiang, Weiping; Xi, Ruijie; Chen, Hua; Xiao, Yugang

    2017-02-01

    As the BeiDou Navigation Satellite System (BDS) is now operational across the whole Asia-Pacific region, a new GNSS system with a different satellite orbit structure will become available for deformation monitoring in the future. At the same time, GNSS deformation monitoring data are usually processed at a regular interval to form displacement time series for deformation analysis, and this interval can be neither too long, from a timeliness perspective, nor too short, from the perspective of the precision of the determined displacements. In this paper, two experimental platforms were designed, one at mid-latitude and another at higher latitude in China, and BDS data processing software was developed to investigate the accuracy of continuous deformation monitoring using the current in-orbit BDS satellites. Data over 20 days at both platforms were obtained and processed every 2, 4 and 6 h to generate three displacement time series for comparison. The results show that with the current in-orbit BDS satellites, in the mid-latitude area it is easy to achieve an accuracy of 1 mm in the horizontal component and 2-3 mm in the vertical component; the accuracy can be further improved to approximately 1 mm in both horizontal and vertical directions when combined BDS/GPS measurements are employed. At higher latitude, however, the results are not as good as expected due to poor satellite geometry: even the 6 h solutions could only achieve accuracies of 4-6 and 6-10 mm in the horizontal and vertical components, respectively, which implies that the current BDS may not be applicable to very high-precision deformation monitoring at high latitude. With the integration of BDS and GPS observations, however, 4 h sessions achieve an accuracy of 2 mm in the horizontal component and 4 mm in the vertical component, which would be an optimal choice for high-accuracy structural deformation monitoring at high latitude.

  15. Spatio-temporal analysis of the accuracy of tropical multisatellite precipitation analysis 3B42 precipitation data in mid-high latitudes of China.

    PubMed

    Cai, Yancong; Jin, Changjie; Wang, Anzhi; Guan, Dexin; Wu, Jiabing; Yuan, Fenghui; Xu, Leilei

    2015-01-01

    Satellite-based precipitation data have contributed greatly to quantitative precipitation forecasting and provide a potential alternative source of precipitation data, allowing researchers to better understand patterns of precipitation over ungauged basins. However, the absence of calibration satellite data creates considerable uncertainties for the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 product over high latitude areas beyond the TRMM satellites' latitude band (38°NS). This study attempts to statistically assess TMPA V7 data over the region beyond 40°NS using data obtained from numerous weather stations in 1998-2012. Comparative analysis at three timescales (daily, monthly and annual) indicates that adoption of a monthly adjustment significantly improved correlation at larger timescales, increasing it from 0.63 to 0.95; TMPA data always exhibit a slight overestimation that is most serious at the daily scale (the absolute bias is 103.54%). Moreover, the performance of TMPA data varies across seasons. Generally, TMPA data perform best in summer but worst in winter, which is likely to be associated with the effects of snow/ice-covered surfaces and shortcomings of the precipitation retrieval algorithms. Temporal and spatial analysis of accuracy indices suggests that the performance of TMPA data has gradually improved and has benefited from upgrades; the data are more reliable in humid areas than in arid regions. Special attention should be paid to its application in arid areas and in winter, where accuracy indices score poorly. Also, it is clear that the calibration can significantly improve precipitation estimates: the overestimation by TMPA in the TRMM-covered area is about a third of that in the non-TRMM area for monthly and annual precipitation. The systematic evaluation of TMPA over mid-high latitudes provides a broader understanding of satellite-based precipitation estimates, and these data are

  16. Georeferencing Accuracy Analysis of a Single WORLDVIEW-3 Image Collected Over Milan

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Roncoroni, F.; Brumana, R.; Previtali, M.

    2016-06-01

    The use of rational functions has become a standard for very high-resolution satellite imagery (VHRSI). On the other hand, the overall geolocalization accuracy via direct georeferencing from on board navigation components is much worse than image ground sampling distance (predicted < 3.5 m CE90 for WorldView-3, whereas GSD = 0.31 m for panchromatic images at nadir). This paper presents the georeferencing accuracy results obtained from a single WorldView-3 image processed with a bias compensated RPC camera model. Orientation results for an image collected over Milan are illustrated and discussed for both direct and indirect georeferencing strategies as well as different bias correction parameters estimated from a set of ground control points. Results highlight that the use of a correction based on two shift parameters is optimal for the considered dataset.
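    The correction found optimal above is a two-parameter (shift-only) bias compensation in image space, estimated from ground control points. The sketch below shows that estimation as an ordinary least-squares fit; the GCP residuals and bias values are simulated placeholders, not the authors' WorldView-3 processing.

    ```python
    # Hedged sketch: estimating a shift-only RPC bias correction (line, sample)
    # from GCP residuals by least squares. Simulated residuals only.
    import numpy as np

    rng = np.random.default_rng(5)
    n_gcp = 8
    # Residuals (pixels) between RPC-projected GCPs and their measured image
    # positions: a constant bias plus measurement noise (assumed values).
    true_bias = np.array([4.2, -2.7])                    # line, sample bias in pixels
    residuals = true_bias + 0.3 * rng.normal(size=(n_gcp, 2))

    # Shift-only model: the design matrix is a column of ones per coordinate,
    # so the least-squares estimate reduces to the mean residual.
    A = np.ones((n_gcp, 1))
    bias_line, _, _, _ = np.linalg.lstsq(A, residuals[:, 0], rcond=None)
    bias_sample, _, _, _ = np.linalg.lstsq(A, residuals[:, 1], rcond=None)
    print("estimated bias (line, sample): %.2f, %.2f px" % (bias_line[0], bias_sample[0]))
    ```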

  17. DEM extraction and its accuracy analysis with ground-based SAR interferometry

    NASA Astrophysics Data System (ADS)

    Dong, J.; Yue, J. P.; Li, L. H.

    2014-03-01

    Two altimetry models for extracting a DEM (Digital Elevation Model) with GBSAR (Ground-Based Synthetic Aperture Radar) technology are studied and their accuracies analyzed in detail. The approximate and improved GBSAR altimetry models were derived from spaceborne radar altimetry based on the principles of the GBSAR technique. The error caused by the parallel-ray approximation in the approximate model was analyzed quantitatively, and the results show that this error cannot be ignored for a ground-based radar system. For the improved altimetry model, the elevation error expression can be obtained by simulating and analyzing the error propagation coefficients of baseline length, wavelength, differential phase and range distance in the mathematical model. Analysis of the elevation error as a function of baseline and range distance shows that the improved altimetry model is suitable for high-precision DEM generation and that accuracy can be improved by adjusting the baseline and shortening the slant range.

  18. Analysis on accuracy improvement of rotor-stator rubbing localization based on acoustic emission beamforming method.

    PubMed

    He, Tian; Xiao, Denghong; Pan, Qiang; Liu, Xiandong; Shan, Yingchun

    2014-01-01

    This paper introduces an improved acoustic emission (AE) beamforming method to localize rotor-stator rubbing faults in rotating machinery. To investigate the propagation characteristics of acoustic emission signals in the casing shell plate of rotating machinery, plate wave theory is applied to a thin plate. A simulation is conducted, and its results show that the localization accuracy of beamforming depends on multiple wave modes, dispersion, velocity and array dimensions. In order to reduce the effect of these propagation characteristics on source localization, an AE signal pre-processing method is introduced that combines plate wave theory and the wavelet packet transform, and a revised localization velocity is presented to reduce the effect of array size. The accuracy of rubbing localization based on standard beamforming and on the improved method of the present paper is compared in a rubbing test carried out on a rotating machinery test table. The results indicate that the improved method can localize the rub fault effectively.

  19. The analysis of measurement accuracy of the parallel binocular stereo vision system

    NASA Astrophysics Data System (ADS)

    Yu, Huan; Xing, Tingwen; Jia, Xin

    2016-09-01

    Parallel binocular stereo vision is a special form of binocular vision in which, to mimic the observation state of human eyes, the two cameras used to obtain images of the target scene are placed parallel to each other. This paper builds a triangular geometric model, analyzes the structural parameters of a parallel binocular stereo vision system and the correlations between them, and discusses the influence of the baseline distance B between the two cameras, the focal length f, the angle of view ω and other structural parameters on measurement accuracy. Matlab software was used to evaluate the error function of the parallel binocular stereo vision system under different structural parameters, and the simulation results showed the ranges of structural parameters for which errors were small, thereby improving the accuracy of the parallel binocular stereo vision system.
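    For a parallel rig, the triangulation relation is Z = f B / d, and a first-order expansion gives the depth error caused by a disparity error, dZ ≈ Z² dd / (f B). The sketch below evaluates both; the focal length, baseline, and disparity values are illustrative assumptions, not parameters from the paper.

    ```python
    # Hedged sketch: depth and depth-error sensitivity for a parallel stereo rig.
    def depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
        """Depth from focal length (pixels), baseline (m), and disparity (pixels)."""
        return f_px * baseline_m / disparity_px

    def depth_error(f_px: float, baseline_m: float, z_m: float, d_err_px: float) -> float:
        """First-order depth error caused by a disparity error of d_err_px."""
        return z_m ** 2 / (f_px * baseline_m) * d_err_px

    f_px, B = 1200.0, 0.12                      # assumed focal length and baseline
    for d in (60.0, 30.0, 15.0):                # decreasing disparity = increasing range
        z = depth(f_px, B, d)
        print("disparity %5.1f px -> depth %5.2f m, error %.3f m per 0.5 px"
              % (d, z, depth_error(f_px, B, z, 0.5)))
    ```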

  20. Analysis of "Accuracy evaluation of five blood glucose monitoring systems: the North American comparator trial".

    PubMed

    Fournier, Paul A

    2013-09-01

    In an article in Journal of Diabetes Science and Technology, Halldorsdottir and coauthors examined the accuracy of five blood glucose monitoring systems (BGMSs) in a study sponsored by the manufacturer of the BGMS CONTOUR NEXT EZ (EZ) and found that this BGMS was the most accurate one. However, their findings must be viewed critically given that one of the BGMSs (ACCU-CHEK Aviva) was not compared against the reference measurement specified by its manufacturer, thus making it likely that it performed suboptimally. Also, the accuracy of the glucose-oxidase-based ONE TOUCH Ultra2 and TRUEtrack BGMS is likely to have been underestimated because of the expected low oxygen level in the glycolysed blood samples used to test the performance of these BGMSs under hypoglycemic conditions. In conclusion, although this study shows that EZ is an accurate BGMS, comparisons between this and other BGMSs should be interpreted with caution.

  1. Accuracy analysis for DSM and orthoimages derived from SPOT HRS stereo data using direct georeferencing

    NASA Astrophysics Data System (ADS)

    Reinartz, Peter; Müller, Rupert; Lehner, Manfred; Schroeder, Manfred

    During the HRS (High Resolution Stereo) Scientific Assessment Program, the French space agency CNES delivered data sets from the HRS camera system with high precision ancillary data. Two test data sets from this program were evaluated: one is located in Germany, the other in Spain. The first goal was to derive orthoimages and digital surface models (DSM) from the along-track stereo data by applying the rigorous model with direct georeferencing and without ground control points (GCPs). For the derivation of the DSM, the stereo processing software developed at DLR for the MOMS-2P three-line stereo camera was used. As a first step, the interior and exterior orientation of the camera, delivered as ancillary data from the positioning and attitude systems, were extracted. Dense image matching, using nearly all pixels as kernel centers, provided the parallaxes. The quality of the stereo tie points was controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection leads to points in object space which are subsequently interpolated to a DSM in a regular grid. DEM filtering methods were also applied, and evaluations carried out differentiating between accuracies in forest and other areas. Additionally, orthoimages were generated from the images of the two stereo looking directions. The orthoimage and DSM accuracy was determined by using GCPs and available reference DEMs of superior accuracy (DEM derived from laser data and/or classical airborne photogrammetry). As expected, the results obtained without using GCPs showed a bias on the order of 5-20 m relative to the reference data for all three coordinates. By image matching it could be shown that the two independently derived orthoimages exhibit a very constant shift behavior. In a second step, a few GCPs (3-4) were used to calculate boresight alignment angles, introduced into the direct georeferencing process of each image independently. This method improved the absolute

  2. Accuracy aspects of stereo side-looking radar. [analysis of its visual perception and binocular vision

    NASA Technical Reports Server (NTRS)

    Leberl, F. W.

    1979-01-01

    The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.

  3. Real time hybrid simulation with online model updating: An analysis of accuracy

    NASA Astrophysics Data System (ADS)

    Ou, Ge; Dyke, Shirley J.; Prakash, Arun

    2017-02-01

    In conventional hybrid simulation (HS) and real time hybrid simulation (RTHS) applications, the information exchanged between the experimental substructure and numerical substructure is typically restricted to the interface boundary conditions (force, displacement, acceleration, etc.). With additional demands being placed on RTHS and recent advances in recursive system identification techniques, an opportunity arises to improve the fidelity by extracting information from the experimental substructure. Online model updating algorithms enable the numerical models of components that are similar to the physical specimen (herein named the target model) to be modified accordingly. This manuscript demonstrates the power of integrating a model updating algorithm into RTHS (RTHSMU) and explores the possible challenges of this approach through a practical simulation. Two Bouc-Wen models with varying levels of complexity are used as target models to validate the concept and evaluate the performance of this approach. The constrained unscented Kalman filter (CUKF) is selected for use in the model updating algorithm. The accuracy of RTHSMU is evaluated through an estimation output error indicator, a model updating output error indicator, and a system identification error indicator. The results illustrate that, under applicable constraints, by integrating model updating into RTHS, the global response accuracy can be improved when the target model is unknown. A discussion on the sensitivity of updating accuracy to the model updating parameters is also presented to provide guidance for potential users.

  4. Fiber-optical sensor with miniaturized probe head and nanometer accuracy based on spatially modulated low-coherence interferogram analysis.

    PubMed

    Depiereux, Frank; Lehmann, Peter; Pfeifer, Tilo; Schmitt, Robert

    2007-06-10

    Fiber-optical sensors have some crucial advantages compared with rigid optical systems. They allow miniaturization and flexibility of system setups. Nevertheless, optical principles such as low-coherence interferometry can be realized by use of fiber optics. We developed and realized an approach for a fiber-optical sensor, which is based on the analysis of spatially modulated low-coherence interferograms. The system presented consists of three units, a miniaturized sensing probe, a broadband fiber-coupled light source, and an adapted Michelson interferometer, which is used as an optical receiver. Furthermore, the signal processing procedure, which was developed for the interferogram analysis in order to achieve nanometer measurement accuracy, is discussed. A system prototype has been validated thoroughly in different experiments. The results approve the accuracy of the sensor.

  5. Spectrophotometric analysis of color changes in teeth incinerated at increasing temperatures.

    PubMed

    Rubio, Leticia; Sioli, Jose Manuel; Suarez, Juan; Gaitan, Maria Jesus; Martin-de-las-Heras, Stella

    2015-07-01

    Color changes produced by histological alterations in burned teeth can provide conclusive forensic information on the temperature of exposure. The objective was to correlate heat-induced color changes in incinerated teeth with increases in temperature (to 1200°C). Spectrophotometry was used to measure lightness, chromaticity (a* and b*), whiteness, and yellowness in 80 teeth heated at temperatures of 100, 200, 400, 600, 800, 1000, or 1200°C for 60 min. Chromaticity a* was reduced at 100°C and lightness at 200 and 400°C, while chromaticity b* and yellowness were reduced at 400 and 600°C. Higher temperatures (800, 1000, and 1200°C) produced progressive increases in lightness and whiteness but reductions in chromaticity b* and yellowness. The accuracy of color values for determining the temperature of exposure was assessed by receiver operating characteristic (ROC) analysis. High accuracy was shown by lightness, chromaticity b* and yellowness values for temperatures between 800° and 1200°C, by whiteness for temperatures of 1000° and 1200°C, and by lightness for temperatures of 200° and 400°C, with sensitivity and specificity values ranging from 90% to 100%. According to these results, colorimetric analysis of incinerated teeth can be used to estimate the temperature of exposure with high accuracy, with lightness being the most useful variable.
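    The ROC-based accuracy assessment above amounts to asking how well a single colour variable separates teeth burned above and below a temperature threshold, and what sensitivity and specificity a cutoff achieves. The sketch below reproduces that logic on simulated lightness values with scikit-learn; the group means, spreads, and sample sizes are invented for illustration, not the study's measurements.

    ```python
    # Hedged sketch: ROC analysis of a colour variable (e.g. lightness L*) for
    # discriminating high- vs low-temperature exposure. Simulated data only.
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(6)
    high_temp = rng.normal(75, 5, 40)        # L* of teeth heated >= 800 C (assumed)
    low_temp = rng.normal(55, 8, 40)         # L* of teeth heated < 800 C (assumed)
    lightness = np.concatenate([high_temp, low_temp])
    label = np.concatenate([np.ones(40), np.zeros(40)])

    fpr, tpr, thresholds = roc_curve(label, lightness)
    best = np.argmax(tpr - fpr)              # Youden-optimal cutoff
    print("AUC = %.2f" % roc_auc_score(label, lightness))
    print("cutoff L* = %.1f: sensitivity %.0f%%, specificity %.0f%%"
          % (thresholds[best], 100 * tpr[best], 100 * (1 - fpr[best])))
    ```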

  6. [Accuracy analysis of computer tomography imaging for medical modeling purposes on the example of Siemens Sensation 10 scanner].

    PubMed

    Miechowicz, Sławomir; Urbanik, Andrzej; Chrzan, Robert; Grochowska, Anna

    2010-01-01

    A medical model is a physical model of a human body part, used for better visualization or surgery planning. It may be produced by Rapid Prototyping methods, based on data obtained during medical imaging (computed tomography, CT; magnetic resonance, MR). An important problem is to provide proper spatial accuracy of the model, which is influenced by the imaging accuracy of the CT and MR scanners. The aim of the study is an accuracy analysis of CT imaging for medical modeling purposes, using the example of a Siemens Sensation 10 scanner. Using the stereolithography technique, a physical pattern (a phantom in the form of a grating) was produced. The phantom was measured with a Leitz PMM 12106 coordinate measuring machine to account for production process inaccuracy. The phantom was then examined using the Siemens Sensation 10 CT scanner, and the phantom measurement error distribution was determined from the data obtained. The maximal measurement error, considering both phantom production inaccuracy and CT imaging inaccuracy, was +/- 0.87 mm, while the error attributable to CT imaging alone did not exceed 0.28 mm. The CT acquisition process is itself a source of measurement errors, so to ensure high quality of medical models produced by Rapid Prototyping methods, it is necessary to perform accuracy measurements for every CT scanner used to obtain the data on which model production is based.

  7. Analysis Article: Accuracy of the DIDGET Glucose Meter in Children and Young Adults with Diabetes

    PubMed Central

    Kim, Sarah

    2011-01-01

    Diabetes is one of the most common chronic diseases among American children. Although studies show that intensive management, including frequent glucose testing, improves diabetes control, this is difficult to accomplish. Bayer's DIDGET® glucose meter system pairs with a popular handheld video game system and couples good blood glucose testing habits with video-game-based rewards. In this issue, Deeb and colleagues performed a study demonstrating the accuracy of the DIDGET meter, a critical asset to this novel product designed to alleviate some of the challenges of managing pediatric diabetes. PMID:22027311

  8. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    NASA Astrophysics Data System (ADS)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and Pole motion (PM). In this study, a detailed comparison was made of real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method with simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The results obtained show that the proposed method provides better accuracy at different prediction lengths.

  9. Automation, Operation, and Data Analysis in the Cryogenic, High Accuracy, Refraction Measuring System (CHARMS)

    NASA Technical Reports Server (NTRS)

    Frey, Bradley; Leviton, Douglas

    2005-01-01

    The Cryogenic High Accuracy Refraction Measuring System (CHARMS) at NASA's Goddard Space Flight Center has been enhanced in a number of ways in the last year to allow the system to accurately collect refracted beam deviation readings automatically over a range of temperatures from 15 K to well beyond room temperature with high sampling density in both wavelength and temperature. The engineering details which make this possible are presented. The methods by which the most accurate angular measurements are made and the corresponding data reduction methods used to reduce thousands of observed angles to a handful of refractive index values are also discussed.

  10. Automation, Operation, and Data Analysis in the Cryogenic, High Accuracy, Refraction Measuring System (CHARMS)

    NASA Technical Reports Server (NTRS)

    Frey, Bradley J.; Leviton, Douglas B.

    2005-01-01

    The Cryogenic High Accuracy Refraction Measuring System (CHARMS) at NASA's Goddard Space Flight Center has been enhanced in a number of ways in the last year to allow the system to accurately collect refracted beam deviation readings automatically over a range of temperatures from 15 K to well beyond room temperature with high sampling density in both wavelength and temperature. The engineering details which make this possible are presented. The methods by which the most accurate angular measurements are made and the corresponding data reduction methods used to reduce thousands of observed angles to a handful of refractive index values are also discussed.

  11. Commercial test kits for detection of Lyme borreliosis: a meta-analysis of test accuracy

    PubMed Central

    Cook, Michael J; Puri, Basant K

    2016-01-01

    The clinical diagnosis of Lyme borreliosis can be supported by various test methodologies; test kits are available from many manufacturers. Literature searches were carried out to identify studies that reported characteristics of the test kits. Of 50 searched studies, 18 were included where the tests were commercially available and samples were proven to be positive using serology testing, evidence of an erythema migrans rash, and/or culture. Additional requirements were a test specificity of ≥85% and publication in the last 20 years. The weighted mean sensitivity for all tests and for all samples was 59.5%. Individual study means varied from 30.6% to 86.2%. Sensitivity for each test technology varied from 62.4% for Western blot kits, and 62.3% for enzyme-linked immunosorbent assay tests, to 53.9% for synthetic C6 peptide ELISA tests and 53.7% when the two-tier methodology was used. Test sensitivity increased as dissemination of the pathogen affected different organs; however, the absence of data on the time from infection to serological testing and the lack of standard definitions for “early” and “late” disease prevented analysis of test sensitivity versus time of infection. The lack of standardization of the definitions of disease stage and the possibility of retrospective selection bias prevented clear evaluation of test sensitivity by “stage”. The sensitivity for samples classified as acute disease was 35.4%, with a corresponding sensitivity of 64.5% for samples from patients defined as convalescent. Regression analysis demonstrated an improvement of 4% in test sensitivity over the 20-year study period. The studies did not provide data to indicate the sensitivity of tests used in a clinical setting since the effect of recent use of antibiotics or steroids or other factors affecting antibody response was not factored in. The tests were developed for only specific Borrelia species; sensitivities for other species could not be calculated. PMID
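    The weighted mean sensitivity reported above can in principle be reproduced by weighting each study's sensitivity by its number of proven-positive samples; the sketch below uses invented counts, and the sample-size weighting is an assumption, since the review's exact weights may differ.

```python
# Minimal sketch (hypothetical study counts): a sample-size weighted mean
# sensitivity is the total true positives divided by the total number of
# proven-positive samples across studies.
import numpy as np

true_pos = np.array([25, 40, 110, 18])   # positives detected per study
n_pos    = np.array([40, 70, 160, 30])   # proven-positive samples per study

per_study = true_pos / n_pos
weighted_mean = true_pos.sum() / n_pos.sum()
print("per-study sensitivities:", np.round(per_study, 3))
print("sample-size weighted mean sensitivity:", round(weighted_mean, 3))
```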

  12. Accuracy of pathological diagnosis of mesothelioma cases in Japan: clinicopathological analysis of 382 cases.

    PubMed

    Takeshima, Yukio; Inai, Kouki; Amatya, Vishwa Jeet; Gemba, Kenichi; Aoe, Keisuke; Fujimoto, Nobukazu; Kato, Katsuya; Kishimoto, Takumi

    2009-11-01

    Incidences of mesothelioma are on the rise in Japan. However, the accurate frequency of mesothelioma occurrence is still unknown. The aim of this study is to clarify the accuracy of pathological diagnosis of mesothelioma. Among the 2742 mesothelioma death cases extracted from the document "Vital Statistics of Japan" for 2003-2005, pathological materials were obtained for 382 cases. After these materials were reviewed and immunohistochemical analyses were conducted, mesothelioma was diagnosed by discussions based on clinical and radiological information. Sixty-five cases (17.0%) were categorized as "definitely not/unlikely" mesotheliomas, and 273 cases (71.5%) were categorized as "probable/definite" mesotheliomas. The percentage of "probable/definite" pleural and peritoneal mesothelioma cases in males was 74.3% and 87.5%, respectively, and that of pleural cases in females was 59.2%; however, the percentage of "probable/definite" peritoneal cases in females was only 22.2%. These results suggest that the diagnostic accuracy of mesothelioma is relatively low in females and in cases of peritoneal and sarcomatoid subtype mesotheliomas; furthermore, approximately 15% of cases of deaths due to mesothelioma in Japan are diagnostically suspicious.

  13. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.

    PubMed

    Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of the one- and two-stage impression techniques. Materials and Methods. A laboratory-made resin model of the first molar was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid-buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and overall. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique.

  14. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    PubMed Central

    Jamshidy, Ladan; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of the one- and two-stage impression techniques. Materials and Methods. A laboratory-made resin model of the first molar was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid-buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and overall. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique. PMID:28003824

  15. Empowering Multi-Cohort Gene Expression Analysis to Increase Reproducibility

    PubMed Central

    Haynes, Winston A; Vallania, Francesco; Liu, Charles; Bongen, Erika; Tomczak, Aurelie; Andres-Terrè, Marta; Lofgren, Shane; Tam, Andrew; Deisseroth, Cole A; Li, Matthew D; Sweeney, Timothy E

    2016-01-01

    A major contributor to the scientific reproducibility crisis has been that the results from homogeneous, single-center studies do not generalize to heterogeneous, real world populations. Multi-cohort gene expression analysis has helped to increase reproducibility by aggregating data from diverse populations into a single analysis. To make the multi-cohort analysis process more feasible, we have assembled an analysis pipeline which implements rigorously studied meta-analysis best practices. We have compiled and made publicly available the results of our own multi-cohort gene expression analysis of 103 diseases, spanning 615 studies and 36,915 samples, through a novel and interactive web application. As a result, we have made both the process of and the results from multi-cohort gene expression analysis more approachable for non-technical users. PMID:27896970

  16. Cost analysis can help a group practice increase revenues.

    PubMed

    Migliore, Sherry

    2002-02-01

    Undertaking a cost analysis to determine the cost of providing specific services can help group practices negotiate increased payment and identify areas for cost reduction. An OB/GYN practice in Pennsylvania undertook a cost analysis using the resource-based relative value system. Using data from the cost analysis, the practice was able to negotiate increased payment for some of its services. The practice also was able to target some of its fixed costs for reduction. Another result of the analysis was that the practice was able to focus marketing efforts on some of its most profitable, elective services, thereby increasing revenues. In addition, the practice was able to reduce the provision of unprofitable services.

  17. Meta-analysis for diagnostic accuracy studies: a new statistical model using beta-binomial distributions and bivariate copulas.

    PubMed

    Kuss, Oliver; Hoyer, Annika; Solms, Alexander

    2014-01-15

    There are still challenges when meta-analyzing data from studies on diagnostic accuracy. This is mainly due to the bivariate nature of the response, where information on sensitivity and specificity must be summarized while accounting for their correlation within a single trial. In this paper, we propose a new statistical model for the meta-analysis of diagnostic accuracy studies. This model uses beta-binomial distributions for the marginal numbers of true positives and true negatives and links these margins by a bivariate copula distribution. The new model comes with all the features of the current standard model, a bivariate logistic regression model with random effects, but has the additional advantages of a closed likelihood function and a larger flexibility for the correlation structure of sensitivity and specificity. In a simulation study, which compares three copula models and two implementations of the standard model, the Plackett and the Gauss copulas rarely perform worse and frequently perform better than the standard model. For illustration, we use an example from a meta-analysis judging the diagnostic accuracy of telomerase (a urinary tumor marker) for the diagnosis of primary bladder cancer.
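    To make the marginal part of such a model concrete, the sketch below fits only the beta-binomial margin for the true positives by maximum likelihood with SciPy; the copula linking the sensitivity and specificity margins (e.g. a Plackett or Gauss copula) is omitted for brevity, and all counts are invented.

```python
# Minimal sketch (hypothetical data): the marginal building block of the model,
# a beta-binomial for the number of true positives per study, fitted by maximum
# likelihood. The bivariate copula linkage of the two margins is not shown.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom

tp        = np.array([28, 45, 90, 17, 60])    # true positives per study
n_disease = np.array([35, 60, 120, 25, 80])   # diseased subjects per study

def neg_loglik(params):
    a, b = np.exp(params)                     # enforce positivity via log-parameters
    return -np.sum(betabinom.logpmf(tp, n_disease, a, b))

fit = minimize(neg_loglik, x0=np.log([2.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print(f"beta-binomial margin: a = {a_hat:.2f}, b = {b_hat:.2f}, "
      f"pooled sensitivity = {a_hat / (a_hat + b_hat):.3f}")
```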

  18. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2015-08-11

    A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we call on trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to the data, and it makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite their three-dimensionality.

  19. A preliminary analysis of human factors affecting the recognition accuracy of a discrete word recognizer for C3 systems

    NASA Astrophysics Data System (ADS)

    Yellen, H. W.

    1983-03-01

    Literature pertaining to Voice Recognition abounds with information relevant to the assessment of transitory speech recognition devices. In the past, engineering requirements have dictated the path this technology followed. But other factors do exist that influence recognition accuracy. This thesis explores the impact of Human Factors on the successful recognition of speech, principally addressing the differences or variability among users. A Threshold Technology T-600 was used with a 100-utterance vocabulary to test 44 subjects. A statistical analysis was conducted on 5 generic categories of Human Factors: Occupational, Operational, Psychological, Physiological, and Personal. How the equipment is trained and the experience level of the speaker were found to be the key characteristics influencing recognition accuracy. To a lesser extent, computer experience, time of week, accent, vital capacity and rate of air flow, speaker cooperativeness, and anxiety were found to affect overall error rates.

  20. Accuracy Maximization Analysis for Sensory-Perceptual Tasks: Computational Improvements, Filter Robustness, and Coding Advantages for Scaled Additive Noise

    PubMed Central

    Burge, Johannes

    2017-01-01

    Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA’s compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA’s unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of psychophysical and

  1. The modified equation approach to the stability and accuracy analysis of finite-difference methods

    NASA Technical Reports Server (NTRS)

    Warming, R. F.; Hyett, B. J.

    1974-01-01

    The stability and accuracy of finite-difference approximations to simple linear partial differential equations are analyzed by studying the modified partial differential equation. Aside from round-off error, the modified equation represents the actual partial differential equation solved when a numerical solution is computed using a finite-difference equation. The modified equation is derived by first expanding each term of a difference scheme in a Taylor series and then eliminating time derivatives higher than first order by certain algebraic manipulations. The connection between 'heuristic' stability theory based on the modified equation approach and the von Neumann (Fourier) method is established. In addition to the determination of necessary and sufficient conditions for computational stability, a truncated version of the modified equation can be used to gain insight into the nature of both dissipative and dispersive errors.
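    For readers unfamiliar with the technique, the standard textbook example below (not reproduced from the paper) shows the modified equation of the first-order upwind scheme for the linear advection equation and how its leading term recovers the usual stability condition heuristically.

```latex
% Standard textbook illustration (not from the paper): first-order upwind
% scheme for the linear advection equation u_t + a u_x = 0 with a > 0,
\[
\frac{u_j^{n+1}-u_j^{n}}{\Delta t} + a\,\frac{u_j^{n}-u_{j-1}^{n}}{\Delta x} = 0,
\qquad \nu \equiv \frac{a\,\Delta t}{\Delta x}.
\]
% Expanding every term in a Taylor series about (x_j, t_n) and eliminating all
% time derivatives of order higher than one gives the modified equation
\[
u_t + a\,u_x = \frac{a\,\Delta x}{2}\,(1-\nu)\,u_{xx} + O(\Delta x^{2}),
\]
% whose leading term acts as a dissipation with non-negative coefficient only
% for 0 <= nu <= 1, so the heuristic stability condition reproduces the usual
% CFL/von Neumann result for this scheme.
```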

  2. An Autoclavable Steerable Cannula Manual Deployment Device: Design and Accuracy Analysis.

    PubMed

    Burgner, Jessica; Swaney, Philip J; Bruns, Trevor L; Clark, Marlena S; Rucker, D Caleb; Burdette, E Clif; Webster, Robert J

    2012-12-01

    Accessing a specific, predefined location identified in medical images is a common interventional task for biopsies and drug or therapy delivery. While conventional surgical needles provide little steerability, concentric tube continuum devices enable steering through curved trajectories. These devices are usually developed as robotic systems. However, manual actuation of concentric tube devices is particularly useful for initial transfer into the clinic since the Food and Drug Administration (FDA) and Institutional Review Board (IRB) approval process of manually operated devices is simple compared to their motorized counterparts. In this paper, we present a manual actuation device for the deployment of steerable cannulas. The design focuses on compactness, modularity, usability, and sterilizability. Further, the kinematic mapping from joint space to Cartesian space is detailed for an example concentric tube device. Assessment of the device's accuracy was performed in free space, as well as in an image-guided surgery setting, using tracked 2D ultrasound.

  3. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characterization of the ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation, and application of the ionosphere delay are studied based on the processing of 24-h data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the receiver hardware delay bias, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When the satellite difference is used, the hardware delay bias can be canceled, and the interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.
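    A schematic of why between-satellite differencing removes the receiver hardware delay bias is sketched below; the notation is assumed for illustration and is not taken from the paper.

```latex
% Schematic only; the notation below is an assumption made for illustration.
% PPP-estimated slant ionosphere delay for receiver r and satellite s:
\[
\tilde I_r^{\,s} = I_r^{\,s} + b_r + b^{\,s},
\]
% with I the true slant delay, b_r the receiver hardware (code) bias and b^s
% the satellite bias. Differencing two satellites tracked by the same receiver
% cancels b_r:
\[
\tilde I_r^{\,s_1} - \tilde I_r^{\,s_2}
  = \bigl(I_r^{\,s_1} - I_r^{\,s_2}\bigr) + \bigl(b^{\,s_1} - b^{\,s_2}\bigr),
\]
% so the receiver-dependent bias no longer distorts the interpolated
% satellite-differenced delays; satellite biases must still be corrected or
% absorbed elsewhere.
```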

  4. Accuracy Analysis for Digitized Sunspot Hand-drawing Records of Purple Mountain Observatory

    NASA Astrophysics Data System (ADS)

    Li, R. Y.; Zhou, T. H.; Ji, K. F.

    2016-05-01

    The sunspot is the most significant feature on the solar disk and the earliest record of solar activity. Sunspots have been systematically observed for about 400 years, since the invention of the telescope. The long-term evolution of solar activity, especially the 11-year solar cycle, has to a great extent been obtained from these data. In recent years, the historical hand-drawing records of sunspots have been digitized for permanent preservation and computer processing. Since the hand-drawing records will eventually be replaced by CCD images, it is necessary to evaluate the accuracy of the hand-drawing records by comparing them with CCD images. In this study, 189 digitized hand-drawing records of sunspots observed by Purple Mountain Observatory in 2011 are analyzed. The results include: (1) the scanner scale difference between the horizontal and vertical directions is 0.2%; (2) the ring of the Sun on the recording paper is not a perfect circle, and the diameter in the east-west direction is 1% shorter than that in the north-south direction; (3) the orientation error of the recording paper in scanning can reach up to 0.5 degree. After comparing the sunspot positions of the hand-drawing records with simultaneous SDO/HMI (Solar Dynamics Observatory/Helioseismic and Magnetic Imager) continuum images by an overlapping method, we find that the accuracy of sunspot position in the hand-drawing records is about 7 arcsec. Roughly 3% of the hand-drawn sunspots have no corresponding sunspots in the SDO/HMI images.

  5. Accuracy assessment of satellite altimetry over central East Antarctica by kinematic GNSS and crossover analysis

    NASA Astrophysics Data System (ADS)

    Schröder, Ludwig; Richter, Andreas; Fedorov, Denis; Knöfel, Christoph; Ewert, Heiko; Dietrich, Reinhard; Matveev, Aleksey Yu.; Scheinert, Mirko; Lukin, Valery

    2014-05-01

    Satellite altimetry is a unique technique to observe the contribution of the Antarctic ice sheet to global sea-level change. To fulfill the high quality requirements for its application, the respective products need to be validated against independent data like ground-based measurements. Kinematic GNSS provides a powerful method to acquire precise height information along the track of a vehicle. Within a collaboration of TU Dresden and Russian partners during the Russian Antarctic Expeditions in the seasons from 2001 to 2013, we recorded several such profiles in the region of the subglacial Lake Vostok, East Antarctica. After 2006 these datasets also include observations along seven continental traverses with a length of about 1600 km each between the Antarctic coast and the Russian research station Vostok (78° 28' S, 106° 50' E). After discussing some special issues concerning the processing of the kinematic GNSS profiles under the very special conditions of the interior of the Antarctic ice sheet, we will show their application for the validation of NASA's laser altimeter satellite mission ICESat and of ESA's ice mission CryoSat-2. Analysing the height differences at crossover points, we gain clear insights into the height regime of subglacial Lake Vostok. Thus, these profiles as well as the remarkably flat lake surface itself can be used to investigate the accuracy and possible error influences of these missions. We will show how the transmit-pulse reference selection correction (Gaussian vs. centroid, G-C) released in January 2013 helped to further improve the release R633 ICESat data and discuss the height offsets and other effects of the CryoSat-2 radar data. In conclusion, we show that only a combination of laser and radar altimetry can provide both high precision and good spatial coverage. An independent validation with ground-based observations is crucial for a thorough accuracy assessment.

  6. Time Efficiency and Diagnostic Accuracy of New Automated Myocardial Perfusion Analysis Software in 320-Row CT Cardiac Imaging

    PubMed Central

    Rief, Matthias; Stenzel, Fabian; Kranz, Anisha; Schlattmann, Peter

    2013-01-01

    Objective We aimed to evaluate the time efficiency and diagnostic accuracy of automated myocardial computed tomography perfusion (CTP) image analysis software. Materials and Methods 320-row CTP was performed in 30 patients, and analyses were conducted independently by three different blinded readers by the use of two recent software releases (version 4.6 and novel version 4.71GR001, Toshiba, Tokyo, Japan). Analysis times were compared, and automated epi- and endocardial contour detection was subjectively rated in five categories (excellent, good, fair, poor and very poor). As semi-quantitative perfusion parameters, myocardial attenuation and transmural perfusion ratio (TPR) were calculated for each myocardial segment and agreement was tested by using the intraclass correlation coefficient (ICC). Conventional coronary angiography served as reference standard. Results The analysis time was significantly reduced with the novel automated software version as compared with the former release (Reader 1: 43:08 ± 11:39 min vs. 09:47 ± 04:51 min, Reader 2: 42:07 ± 06:44 min vs. 09:42 ± 02:50 min and Reader 3: 21:38 ± 3:44 min vs. 07:34 ± 02:12 min; p < 0.001 for all). Epi- and endocardial contour detection for the novel software was rated to be significantly better (p < 0.001) than with the former software. ICCs demonstrated strong agreement (≥ 0.75) for myocardial attenuation in 93% and for TPR in 82%. Diagnostic accuracy for the two software versions was not significantly different (p = 0.169) as compared with conventional coronary angiography. Conclusion The novel automated CTP analysis software offers enhanced time efficiency with an improvement by a factor of about four, while maintaining diagnostic accuracy. PMID:23323027
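    As a rough illustration of the semi-quantitative transmural perfusion ratio (TPR) mentioned above, the sketch below uses one common definition (segmental subendocardial attenuation divided by the mean subepicardial attenuation) with invented Hounsfield-unit values; both the exact definition used by the software and the cutoff shown are assumptions, not taken from the study.

```python
# Minimal sketch (hypothetical attenuation values): a transmural perfusion
# ratio per myocardial segment under one commonly used definition. The 0.99
# cutoff is a commonly cited value, assumed here for illustration only.
import numpy as np

subendo = np.array([92.0, 88.5, 75.0, 96.2])   # HU per segment, inner (subendocardial) layer
subepi  = np.array([90.1, 91.4, 89.7, 93.0])   # HU per segment, outer (subepicardial) layer

tpr = subendo / subepi.mean()
print("TPR per segment:", np.round(tpr, 2))
print("possibly hypoperfused segments (TPR < 0.99):", np.where(tpr < 0.99)[0])
```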

  7. Effectiveness of slow motion video compared to real time video in improving the accuracy and consistency of subjective gait analysis in dogs.

    PubMed

    Lane, D M; Hill, S A; Huntingford, J L; Lafuente, P; Wall, R; Jones, K A

    2015-01-01

    Objective measures of canine gait quality via force plates, pressure mats or kinematic analysis are considered superior to subjective gait assessment (SGA). Despite research demonstrating that SGA does not accurately detect subtle lameness, it remains the most commonly performed diagnostic test for detecting lameness in dogs. This is largely because the financial, temporal and spatial requirements for existing objective gait analysis equipment make this technology impractical for use in general practice. The utility of slow motion video as a potential tool to augment SGA is currently untested. To evaluate a more accessible way to overcome the limitations of SGA, a slow motion video study was undertaken. Three experienced veterinarians reviewed video footage of 30 dogs, 15 with a diagnosis of primary limb lameness based on history and physical examination, and 15 with no indication of limb lameness based on history and physical examination. Four different videos were made for each dog, demonstrating each dog walking and trotting in real time, and then again walking and trotting in 50% slow motion. For each video, the veterinary raters assessed both the degree of lameness and which limb(s) they felt represented the source of the lameness. Spearman's rho, Cramer's V, and t-tests were performed to determine if slow motion video increased either the accuracy or consistency of raters' SGA relative to real time video. Raters demonstrated no significant increase in consistency or accuracy in their SGA of slow motion video relative to real time video. Based on these findings, slow motion video does not increase the consistency or accuracy of SGA values. Further research is required to determine if slow motion video will benefit SGA in other ways.

  8. Effectiveness of slow motion video compared to real time video in improving the accuracy and consistency of subjective gait analysis in dogs

    PubMed Central

    Lane, D.M.; Hill, S.A.; Huntingford, J.L.; Lafuente, P.; Wall, R.; Jones, K.A.

    2015-01-01

    Objective measures of canine gait quality via force plates, pressure mats or kinematic analysis are considered superior to subjective gait assessment (SGA). Despite research demonstrating that SGA does not accurately detect subtle lameness, it remains the most commonly performed diagnostic test for detecting lameness in dogs. This is largely because the financial, temporal and spatial requirements for existing objective gait analysis equipment make this technology impractical for use in general practice. The utility of slow motion video as a potential tool to augment SGA is currently untested. To evaluate a more accessible way to overcome the limitations of SGA, a slow motion video study was undertaken. Three experienced veterinarians reviewed video footage of 30 dogs, 15 with a diagnosis of primary limb lameness based on history and physical examination, and 15 with no indication of limb lameness based on history and physical examination. Four different videos were made for each dog, demonstrating each dog walking and trotting in real time, and then again walking and trotting in 50% slow motion. For each video, the veterinary raters assessed both the degree of lameness and which limb(s) they felt represented the source of the lameness. Spearman’s rho, Cramer’s V, and t-tests were performed to determine if slow motion video increased either the accuracy or consistency of raters’ SGA relative to real time video. Raters demonstrated no significant increase in consistency or accuracy in their SGA of slow motion video relative to real time video. Based on these findings, slow motion video does not increase the consistency or accuracy of SGA values. Further research is required to determine if slow motion video will benefit SGA in other ways. PMID:26623383

  9. Theoretical study of the accuracy of the pulse method, frontal analysis, and frontal analysis by characteristic points for the determination of single component adsorption isotherms

    SciTech Connect

    Kaczmarski, Krzysztof; Guiochon, Georges A

    2009-01-01

    The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventional procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N = 500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.
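    Once FA, FACP, or PM data points have been obtained, they are fitted to an isotherm model; the sketch below (synthetic data, SciPy assumed) fits the simplest of the three models considered, the Langmuir isotherm, as an illustration of that fitting step.

```python
# Minimal sketch (synthetic placeholder data, not from the study): fitting a
# Langmuir isotherm q = qs*b*C / (1 + b*C) to adsorption data points such as
# those produced by frontal analysis.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qs, b):
    return qs * b * c / (1.0 + b * c)

conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])   # mobile-phase concentration, g/L
q    = np.array([0.9, 3.9, 7.0, 11.2, 18.1, 22.4])  # stationary-phase concentration, g/L

(qs_hat, b_hat), cov = curve_fit(langmuir, conc, q, p0=[30.0, 0.3])
perr = np.sqrt(np.diag(cov))                         # 1-sigma parameter uncertainties
print(f"qs = {qs_hat:.1f} +/- {perr[0]:.1f} g/L, b = {b_hat:.3f} +/- {perr[1]:.3f} L/g")
```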

  10. Comparing the Classification Accuracy among Nonparametric, Parametric Discriminant Analysis and Logistic Regression Methods.

    ERIC Educational Resources Information Center

    Ferrer, Alvaro J. Arce; Wang, Lin

    This study compared the classification performance among parametric discriminant analysis, nonparametric discriminant analysis, and logistic regression in a two-group classification application. Field data from an organizational survey were analyzed and bootstrapped for additional exploration. The data were observed to depart from multivariate…

  11. Clinicopathological Significance and Diagnostic Accuracy of c-MET Expression by Immunohistochemistry in Gastric Cancer: A Meta-Analysis

    PubMed Central

    Pyo, Jung-Soo; Kang, Guhyun

    2016-01-01

    Purpose The aim of the present study was to elucidate the clinicopathological significance and diagnostic accuracy of immunohistochemistry (IHC) for determining the mesenchymal epidermal transition (c-MET) expression in patients with gastric cancer (GC). Materials and Methods The present meta-analysis investigated the correlation between c-MET expression as determined by IHC and the clinicopathological parameters in 8,395 GC patients from 37 studies that satisfied the eligibility criteria. In addition, a concordance analysis was performed between c-MET expression as determined by IHC and c-MET amplification, and the diagnostic test accuracy was reviewed. Results The estimated rate of c-MET overexpression was 0.403 (95% confidence interval [CI], 0.327~0.484) and it was significantly correlated with male patients, poor differentiation, lymph node metastasis, higher TNM stage, and human epidermal growth factor receptor 2 (HER2) positivity in IHC analysis. There was a significant correlation between c-MET expression and worse overall survival rate (hazard ratio, 1.588; 95% CI, 1.266~1.992). The concordance rates between c-MET expression and c-MET amplification were 0.967 (95% CI, 0.916~0.987) and 0.270 (95% CI, 0.173~0.395) for cases with non-overexpressed and overexpressed c-MET, respectively. In the diagnostic test accuracy review, the pooled sensitivity and specificity were 0.56 (95% CI, 0.50~0.63) and 0.79 (95% CI, 0.77~0.81), respectively. Conclusions The c-MET overexpression as determined by IHC was significantly correlated with aggressive tumor behavior and positive IHC status for HER2 in patients with GC. In addition, the c-MET expression status could be useful in the screening of c-MET amplification in patients with GC. PMID:27752391

  12. High-accuracy and long-range Brillouin optical time-domain analysis sensor based on the combination of pulse prepump technique and complementary coding

    NASA Astrophysics Data System (ADS)

    Sun, Qiao; Tu, Xiaobo; Lu, Yang; Sun, Shilin; Meng, Zhou

    2016-06-01

    A Brillouin optical time-domain analysis (BOTDA) sensor that combines the conventional complementary coding with the pulse prepump technique for high-accuracy and long-range distributed sensing is implemented and analyzed. The employment of the complementary coding provides an enhanced signal-to-noise ratio (SNR) of the sensing system and an extended sensing distance, and the measurement time is also reduced compared with a BOTDA sensor using linear coding. The combination of pulse prepump technique enables the establishment of a preactivated acoustic field in each pump pulse of the complementary codeword, which ensures measurements of high spatial resolution and high frequency accuracy. The feasibility of the prepumped complementary coding is analyzed theoretically and experimentally. The experiments are carried out beyond 50-km single-mode fiber, and experimental results show the capabilities of the proposed scheme to achieve 1-m spatial resolution with temperature and strain resolutions equal to ˜1.6°C and ˜32 μɛ, and 2-m spatial resolution with temperature and strain resolutions equal to ˜0.3°C and ˜6 μɛ, respectively. A longer sensing distance with the same spatial resolution and measurement accuracy can be achieved through increasing the code length of the prepumped complementary code.

  13. Accuracy analysis of the Null-Screen method for the evaluation of flat heliostats

    NASA Astrophysics Data System (ADS)

    Cebrian-Xochihuila, P.; Huerta-Carranza, O.; Díaz-Uribe, R.

    2016-04-01

    In this work we develop an algorithm to determine the accuracy of the Null-Screen Method, used for testing flat heliostats employed as solar concentrators in a central tower configuration. We simulate the image obtained on a CCD camera when an ordered distribution of points is displayed on a Null-Screen perpendicular to the heliostat under test. The deformations present in the heliostat are represented as a cosine function of position with different periods and amplitudes. As a resolution criterion, a deformation of the mirror can be detected when the differences in position between the spots on the image plane for the deformed surface, as compared with those obtained for an ideally flat heliostat, are equal to one pixel. For a 6.4 μm pixel size and 18 mm focal length, the minimum deformation we can measure in the heliostat corresponds to an amplitude of 122 μm for a period of 1 m; this is equivalent to 0.8 mrad in slope. This result depends on the particular configuration used during the test and the size of the heliostat.
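    The quoted slope figure follows directly from the sinusoidal deformation model; a worked check is given below.

```latex
% Worked check of the quoted figure: for a sinusoidal deformation
% z(x) = A cos(2*pi*x / T), the maximum slope is
\[
\left|\frac{dz}{dx}\right|_{\max} = \frac{2\pi A}{T}
  = \frac{2\pi \times 122\ \mu\mathrm{m}}{1\ \mathrm{m}}
  \approx 7.7\times 10^{-4}\ \mathrm{rad} \approx 0.8\ \mathrm{mrad},
\]
% consistent with the 0.8 mrad slope sensitivity reported for the one-pixel
% resolution criterion.
```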

  14. Accuracy in certification of cause of death in a tertiary care hospital--a retrospective analysis.

    PubMed

    Dash, Shreemanta Kumar; Behera, Basanta Kumar; Patro, Shubhransu

    2014-05-01

    Every physician is duty bound to issue a "Cause of Death" certificate in the unfortunate event of the death of his/her patient. Incomplete and inaccurate entries in these certificates pose difficulty in obtaining reliable information pertaining to causes of mortality, lead to faulty public health surveillance, and hinder research. This study intends to evaluate the completeness and accuracy of Medical Certification of Cause of Death in our Institute and to formulate a strategy to improve the quality of reporting of cause of death. During the period from January 2012 to December 2012, a total of 151 certificates of cause of death were issued by the faculty members of various departments. The maximum number of death certificates was issued for patients at the extremes of age: <10 years (n = 42, 27.82%) and >60 years (n = 46, 30.46%). The various inadequacies observed were as follows: 40 (26.49%) cases had an inaccurate cause of death, the interval between onset and the terminal event was missing in 94 (62.25%) cases, in 68 (45.03%) cases the seal with the registration number of the physician was not available on the certificate, an incomplete antecedent and underlying cause of death was found in 35 (23.18%) and 84 (55.63%) cases respectively, abbreviations were used in 66 (43.71%) cases, and the handwriting was illegible in 79 (52.32%) cases.

  15. High-Accuracy Analysis of Compton Scattering in Chiral EFT: Proton and Neutron Polarisabilities

    NASA Astrophysics Data System (ADS)

    Griesshammer, Harald W.; Phillips, Daniel R.; McGovern, Judith A.

    2013-10-01

    Compton scattering from protons and neutrons provides important insight into the structure of the nucleon. A new extraction of the static electric and magnetic dipole polarisabilities αE1 and βM1 of the proton and neutron from all published elastic data below 300 MeV in Chiral Effective Field Theory shows that, within the statistics-dominated errors, the proton and neutron polarisabilities are identical, i.e. no isospin-breaking effects of the pion cloud are seen. Particular attention is paid to the precision and accuracy of each data set, and to an estimate of residual theoretical uncertainties. ChiEFT is ideal for that purpose since it provides a model-independent estimate of higher-order corrections and encodes the correct low-energy dynamics of QCD, including, for the few-nucleon systems used to extract neutron polarisabilities, consistent nuclear currents, rescattering effects, and wave functions. It therefore automatically respects the low-energy theorems for photon-nucleus scattering. The Δ(1232) as an active degree of freedom is essential to realise the full power of the world's Compton data. Its parameters are constrained in the resonance region. A brief outlook is provided on what kind of future experiments can improve the database. Supported in part by UK STFC, DOE, NSF, and the Sino-German CRC 110.

  16. A New High-Accuracy Analysis of Compton Scattering in Chiral EFT: Neutron Polarisabilities

    NASA Astrophysics Data System (ADS)

    Griesshammer, Harald W.; McGovern, Judith A.; Phillips, Daniel R.

    2015-04-01

    Low-energy Compton scattering tests the symmetries and interaction strengths of a target's internal degrees of freedom in the electric and magnetic fields of a real, external photon. In the single-nucleon sector, information is often compressed into the static scalar dipole polarisabilities which are experimentally not directly accessible but encode information on the pion cloud and the Δ(1232) excitation. The interaction of the photon with the charged pion-exchange also provides a conceptually clean probe of few-nucleon binding. After demonstrating the statistical consistency of the world's γd dataset including the new data from the MAX-IV collaboration described in the preceding talk, we present a new extraction of the neutron polarisabilities in Chiral Effective Field Theory: αn = [11.55 +/- 1.25(stat) +/- 0.2(BSR) +/- 0.8(th)] and βn = [3.65 -/+ 1.25(stat) +/- 0.2(BSR) -/+ 0.8(th)], in 10^-4 fm^3, with χ2 = 45.2 for 44 degrees of freedom. The new data reduced the statistical uncertainties by 30%. We discuss data accuracy and consistency, the role of the Δ(1232), and an estimate of residual theoretical uncertainties. Within statistical and systematic errors, proton and neutron polarisabilities remain identical. Supported in part by UK STFC and US DOE.

  17. Wave-front analysis with high accuracy by use of a double-grating lateral shearing interferometer.

    PubMed

    Leibbrandt, G W; Harbers, G; Kunst, P J

    1996-11-01

    A phase-stepped double-grating lateral shearing interferometer to be used for wave-front analysis is presented. The resulting interference patterns are analyzed with a differential Zernike polynomial matrix-inversion method. Possible error sources are analyzed in the design stage, and it is shown that the inaccuracy can be kept within 2-5 mλ rms. The apparatus was tested and evaluated in practice. Comparison with a phase-stepped Twyman-Green interferometer demonstrates that the accuracy of the two methods is comparable. Lateral shearing interferometry scores better on reproducibility, owing to the stability and robustness of the method.

  18. An analysis of the accuracy of an initial value representation surface hopping wave function in the interaction and asymptotic regions.

    PubMed

    Sergeev, Alexey; Herman, Michael F

    2006-07-14

    The behavior of an initial value representation surface hopping wave function is examined. Since this method is an initial value representation for the semiclassical solution of the time independent Schrodinger equation for nonadiabatic problems, it has computational advantages over the primitive surface hopping wave function. The primitive wave function has been shown to provide transition probabilities that accurately compare with quantum results for model problems. The analysis presented in this work shows that the multistate initial value representation surface hopping wave function should approach the primitive result in asymptotic regions and provide transition probabilities with the same level of accuracy for scattering problems as the primitive method.

  19. Analysis of accuracy of approximate, simultaneous, nonlinear confidence intervals on hydraulic heads in analytical and numerical test cases

    USGS Publications Warehouse

    Hill, M.C.

    1989-01-01

    Inaccuracies in parameter values, parameterization, stresses, and boundary conditions of analytical solutions and numerical models of groundwater flow produce errors in simulated hydraulic heads. These errors can be quantified in terms of approximate, simultaneous, nonlinear confidence intervals presented in the literature. Approximate confidence intervals can be applied in both error and sensitivity analysis and can be used prior to calibration or when calibration was accomplished by trial and error. The method is expanded for use in numerical problems, and the accuracy of the approximate intervals is evaluated using Monte Carlo runs. Four test cases are reported. -from Author

  20. [The innovative approach to improvement of accuracy of cytological analysis: application of intrascopic technology].

    PubMed

    Men'shikov, V V

    2014-10-01

    The article presents information on the Cell-CT™ device, developed by American specialists to perform cytological analysis of 3D images of cells using computer tomography with visible photons rather than x-ray beams, since the density of a single cell is insufficient for x-ray analysis. The isotropic resolution of the Cell-CT™ permits obtaining high-contrast images of cell structures in the 200 nm range with a clear separation of signal from noise. The device makes it possible to apply the mathematical apparatus of computer tomography to the analysis and classification of cells.

  1. Flux analysis in plant metabolic networks: increasing throughput and coverage.

    PubMed

    Junker, Björn H

    2014-04-01

    Quantitative information about metabolic networks has been mainly obtained at the level of metabolite contents, transcript abundance, and enzyme activities. However, the active process of metabolism is represented by the flow of matter through the pathways. These metabolic fluxes can be predicted by Flux Balance Analysis or determined experimentally by (13)C-Metabolic Flux Analysis. These relatively complicated and time-consuming methods have recently seen significant improvements at the level of coverage and throughput. Metabolic models have developed from single cell models into whole-organism dynamic models. Advances in lab automation and data handling have significantly increased the throughput of flux measurements. This review summarizes advances to increase coverage and throughput of metabolic flux analysis in plants.

  2. Electron Microprobe Analysis of Hf in Zircon: Suggestions for Improved Accuracy of a Difficult Measurement

    NASA Astrophysics Data System (ADS)

    Fournelle, J.; Hanchar, J. M.

    2013-12-01

    It is not commonly recognized as such, but the accurate measurement of Hf in zircon is not a trivial analytical issue. This is important to assess because Hf is often used as an internal standard for trace element analyses of zircon by LA-ICPMS. The issues pertaining to accuracy revolve around: (1) whether the Hf Ma or the La line is used; (2) what accelerating voltage is applied if Zr La is also measured; and (3) what standard for Hf is used. Weidenbach et al.'s (2004) study of the 91500 zircon demonstrated the spread (in accuracy) of possible EPMA values for six EPMA labs, 2 of which used Hf Ma, 3 used Hf La, and one used Hf Lb, with standards including HfO2, a ZrO2-HfO2 compound, Hf metal, and hafnon. Weidenbach et al. used the ID-TIMS value as the correct value (0.695 wt.% Hf), and not one of the EPMA labs came close to that value (3 were low and 3 were high). Those data suggest: (1) that there is a systematic underestimation error of the 0.695 wt% Hf (ID-TIMS) value if Hf Ma is used, most likely an issue with the matrix correction, as the analytical lines and absorption edges of Zr La, Si Ka, and Hf Ma are rather tightly packed in the electromagnetic spectrum, and mass absorption coefficients are easily in error (e.g., Donovan's determination of the MAC of Hf by Si Ka of 5061 differs from the typically used Henke value of 5449 (Donovan et al., 2002)); and (2) for utilization of the Hf La line, however, the second order Zr Ka line interferes with Hf La if the accelerating voltage is greater than 17.99 keV. If this higher keV is used and differential mode PHA is applied, only a portion of the interference is removed (e.g., removal of escape peaks), causing an overestimation of Hf content. Unfortunately, it is virtually impossible to apply an interference correction in this case, as it is impossible to locate an Hf-free Zr probe standard. We have examined many of the combinations used by those six EPMA labs and concluded that the optimal EPMA is done with Hf
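    The origin of the 17.99 keV threshold can be made explicit with standard wavelength-dispersive reasoning; the derivation below uses approximate, well-known line energies and is added for illustration, not taken from the abstract.

```latex
% Added illustration (approximate, well-known line energies; not from the
% abstract): with the Bragg condition n*lambda = 2 d sin(theta), the n = 2
% reflection of Zr K-alpha (E ~ 15.77 keV, lambda ~ 0.786 A) diffracts at the
% same spectrometer setting as a first-order line of wavelength
\[
2\,\lambda_{\mathrm{Zr}\,K\alpha} \approx 1.57\ \text{\AA}
\quad\Longleftrightarrow\quad E \approx 7.89\ \mathrm{keV},
\]
% which practically coincides with Hf L-alpha (about 7.90 keV). Because Zr
% K-alpha is excited only above the Zr K absorption edge (about 18.0 keV), the
% interference appears only when the accelerating voltage exceeds ~17.99 keV.
```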

  3. Type I Error Inflation in the Traditional By-Participant Analysis to Metamemory Accuracy: A Generalized Mixed-Effects Model Perspective

    ERIC Educational Resources Information Center

    Murayama, Kou; Sakaki, Michiko; Yan, Veronica X.; Smith, Garry M.

    2014-01-01

    In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, where metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are…
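    The by-participant approach referred to above is typically implemented by computing a Goodman-Kruskal gamma between judgments and memory outcomes for each participant and then analysing the per-participant values; the sketch below (invented data) shows that computation, as the baseline the article contrasts with a generalized mixed-effects model.

```python
# Minimal sketch (hypothetical data): by-participant Goodman-Kruskal gamma
# between metacognitive judgments and recall outcomes.
import numpy as np

def goodman_kruskal_gamma(judgments, outcomes):
    """Gamma = (C - D) / (C + D) over all item pairs; ties are ignored."""
    conc = disc = 0
    n = len(judgments)
    for i in range(n):
        for j in range(i + 1, n):
            prod = (judgments[i] - judgments[j]) * (outcomes[i] - outcomes[j])
            if prod > 0:
                conc += 1
            elif prod < 0:
                disc += 1
    return (conc - disc) / (conc + disc) if (conc + disc) else np.nan

# one tuple per participant: judgments of learning (0-100) and recall (0/1)
participants = [
    (np.array([20, 55, 80, 40, 90]), np.array([0, 1, 1, 0, 1])),
    (np.array([10, 70, 60, 30, 85]), np.array([0, 1, 0, 1, 1])),
]
gammas = [goodman_kruskal_gamma(j, o) for j, o in participants]
print("per-participant gammas:", np.round(gammas, 2), "mean:", round(float(np.mean(gammas)), 2))
```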

  4. Accuracy of qualitative analysis for assessment of skilled baseball pitching technique.

    PubMed

    Nicholls, Rochelle; Fleisig, Glenn; Elliott, Bruce; Lyman, Stephen; Osinski, Edmund

    2003-07-01

    Baseball pitching must be performed with correct technique if injuries are to be avoided and performance maximized. High-speed video analysis is accepted as the most accurate and objective method for evaluation of baseball pitching mechanics. The aim of this research was to develop an equivalent qualitative analysis method for use with standard video equipment. A qualitative analysis protocol (QAP) was developed for 24 kinematic variables identified as important to pitching performance. Twenty male baseball pitchers were videotaped using 60 Hz camcorders, and their technique evaluated using the QAP, by two independent raters. Each pitcher was also assessed using a 6-camera 200 Hz Motion Analysis system (MAS). Four QAP variables (22%) showed significant similarity with MAS results. Inter-rater reliability showed agreement on 33% of QAP variables. It was concluded that a complete and accurate profile of an athlete's pitching mechanics cannot be made using the QAP in its current form, but it is possible such simple forms of biomechanical analysis could yield accurate results before 3-D methods become obligatory.

  5. Accuracy of visual inspection with acetic acid and with Lugol's iodine for cervical cancer screening: Meta-analysis.

    PubMed

    Qiao, Liang; Li, Bo; Long, Mei; Wang, Xiao; Wang, Anrong; Zhang, Guonan

    2015-09-01

    The aim of this review was to provide an updated summary estimation of the accuracy of visual inspection with acetic acid (VIA) and with Lugol's iodine (VILI) in detecting cervical cancer and precancer. Studies on VIA/VILI accuracy were eligible in which VIA/VILI was performed on asymptomatic women who all underwent confirmatory testing of histology, combination of colposcopy and histology, or combination of multiple screening tests, colposcopy and histology, to detect cervical intraepithelial neoplasia grade 2 or worse (CIN2+ or CIN3+). A bivariate model was fitted to estimate the accuracy of VIA/VILI and provide estimates of heterogeneity. Subgroup analysis was used to investigate the source of heterogeneity. A total of 29 studies on VIA and 19 studies on VILI were included finally in the meta-analysis. The summary sensitivity and specificity of VIA for CIN2+ were 73.2% (95%CI: 66.5-80.0%) and 86.7% (95%CI: 82.9-90.4%), respectively, and those for VILI were 88.1% (95%CI: 81.5-94.7%) and 85.9% (95%CI: 81.7-90.0%), respectively. VIA and VILI were both more sensitive in detecting more severe outcome, although there was a slight loss in specificity. Apparent heterogeneity existed in sensitivity and specificity for both VIA and VILI. High sensitivity of both VIA and VILI for CIN2+ was found when a combination of colposcopy and histology was used as disease confirmation. VIA, VILI, even a combination of them in parallel, could be good options for cervical screening in low-resource settings. Significant differences in sensitivity between different gold standards might provide a proxy for optimization of ongoing cervical cancer screening programs.

  6. The influence of accuracy, grid size, and interpolation method on the hydrological analysis of LiDAR derived dems: Seneca Nation of Indians, Irving NY

    NASA Astrophysics Data System (ADS)

    Clarkson, Brian W.

    Light Detection and Ranging (LiDAR) derived Digital Elevation Models (DEMs) provide accurate, high resolution digital surfaces for precise topographic analysis. The following study investigates the accuracy of LiDAR derived DEMs by calculating the Root Mean Square Error (RMSE) of multiple interpolation methods with grid cells ranging from 0.5 to 10 meters. A raster cell with smaller dimensions will drastically increase the amount of detail represented in the DEM by increasing the number of elevation values across the study area. Increased horizontal resolutions have raised the accuracy of the interpolated surfaces and the contours generated from the digitized landscapes. As the raster grid cells decrease in size, the level of detail of hydrological processes will significantly improve compared to coarser resolutions, including the publicly available National Elevation Datasets (NEDs). Utilizing the LiDAR derived DEM with the lowest RMSE as the 'ground truth', watershed boundaries were delineated for a sub-basin of the Clear Creek Watershed within the territory of the Seneca Nation of Indians located in Southern Erie County, NY. An investigation of the watershed area and boundary location revealed considerable differences when comparing the results of applying different interpolation methods to DEM datasets of different horizontal resolutions. Stream networks coupled with watersheds were used to calculate peak flow values for the 10-meter NEDs and LiDAR derived DEMs.
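    The RMSE criterion used to rank interpolation methods and grid sizes is straightforward to compute; the sketch below uses hypothetical checkpoint elevations for illustration.

```python
# Minimal sketch (hypothetical checkpoints): RMSE of an interpolated DEM
# surface against independently surveyed checkpoint elevations.
import numpy as np

surveyed      = np.array([312.42, 298.10, 305.77, 320.05, 315.33])  # checkpoint elevations, m
dem_elevation = np.array([312.55, 297.88, 305.91, 320.40, 315.10])  # DEM values at the same points, m

residuals = dem_elevation - surveyed
rmse = np.sqrt(np.mean(residuals ** 2))
print(f"RMSE = {rmse:.3f} m  (mean error = {residuals.mean():+.3f} m)")
```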

  7. DEM generated from InSAR in mountainous terrain and its accuracy analysis

    NASA Astrophysics Data System (ADS)

    Hu, Hongbing; Zhan, Yulan

    2011-02-01

    A Digital Elevation Model (DEM) derived from survey data is accurate but very expensive and time-consuming to produce. In recent years, remote sensing techniques including Synthetic Aperture Radar Interferometry (InSAR) have been developed as a powerful method to derive high-precision DEMs, especially in mountainous or deeply forested areas. The purpose of this paper is to illustrate the principle of InSAR and show the results of a case study in Gejiu city, Yunnan province, China. The accuracy of the DEM derived from InSAR (abbreviated as InSAR-DEM) is also evaluated by comparing it with a DEM generated from a topographic map at the scale of 1:50000 (abbreviated as TOP-DEM). The results show that: (1) for the overall precision of the whole selected area, obtained by subtracting the InSAR-DEM from the TOP-DEM, the maximum, minimum, RMSE, and mean of the difference between the two DEMs are 203 m, -188 m, 26.9 m, and 5.7 m, respectively. (2) The topographic trend represented by the two DEMs is consistent, even though the TOP-DEM is finer than the InSAR-DEM, especially at the valleys. (3) Contour maps with intervals of 100 m and 50 m converted from the InSAR-DEM and TOP-DEM, respectively, show a consistent relief trend. The contours from the TOP-DEM are smoother than those from the InSAR-DEM, while the contours from the InSAR-DEM contain more islands than those from the TOP-DEM. (4) Coherence has a great influence on the precision of the InSAR-DEM: the precision in low-coherence areas approaches 100 m, while that in high-coherence areas can reach the meter level. (5) The relief trend of 6 profiles represented by the InSAR-DEM and TOP-DEM is consistent, with only small differences in detail. The InSAR-DEM displays undulations in relatively flat areas, including water surfaces, which reflects the influence of the flat-earth effect on InSAR to a certain extent.

  8. Effects of a Training Package to Improve the Accuracy of Descriptive Analysis Data Recording

    ERIC Educational Resources Information Center

    Mayer, Kimberly L.; DiGennaro Reed, Florence D.

    2013-01-01

    Functional behavior assessment is an important precursor to developing interventions to address a problem behavior. Descriptive analysis, a type of functional behavior assessment, is effective in informing intervention design only if the gathered data accurately capture relevant events and behaviors. We investigated a training procedure to improve…

  9. Factors Related to Sight-Reading Accuracy: A Meta-Analysis

    ERIC Educational Resources Information Center

    Mishra, Jennifer

    2014-01-01

    The purpose of this meta-analysis was to determine the extent of the overall relationship between previously tested variables and sight-reading. An exhaustive survey of the available research literature was conducted resulting in 92 research studies that reported correlations between sight-reading and another variable. Variables ("n" =…

  10. Accuracy of Teachers' Judgments of Students' Academic Achievement: A Meta-Analysis

    ERIC Educational Resources Information Center

    Südkamp, Anna; Kaiser, Johanna; Möller, Jens

    2012-01-01

    This meta-analysis summarizes empirical results on the correspondence between teachers' judgments of students' academic achievement and students' actual academic achievement. The article further investigates theoretically and methodologically relevant moderators of the correlation between the two measures. Overall, 75 studies reporting…

  11. Accuracy of two forms of infrared image analysis of the masticatory muscles in the diagnosis of myogenous temporomandibular disorder.

    PubMed

    Rodrigues-Bigaton, Delaine; Dibai-Filho, Almir Vieira; Packer, Amanda Carine; Costa, Ana Cláudia de Souza; de Castro, Ester Moreira

    2014-01-01

    The aim of the present study was to assess the accuracy of two forms of infrared image analysis (area and extension) of the masseter and anterior temporalis muscles in the diagnosis of myogenous temporomandibular disorder (TMD). A cross-sectional study was carried out involving 104 female volunteers from the university community. Following the application of the Research Diagnostic Criteria for Temporomandibular Disorders, the volunteers were divided into a TMD group (n = 52) and control group (n = 52), and evaluated using infrared thermography. The area and extension of the masseter and anterior temporalis muscles were measured on the images. The receiver operating characteristic (ROC) curve was used to determine diagnostic accuracy (area under the curve), best cutoff point, sensitivity and specificity. A significant difference in skin temperature between groups was only found in the measurement of the area of the left anterior temporalis muscle (p = 0.011). The area under the ROC curve was less than the reference values for all muscles evaluated in the analyses of area and extension. Thus, neither method of infrared thermography tested for the quantification of the masseter and anterior temporalis muscles (analysis of area and extension) is consistent with the RDC/TMD for the diagnosis of myogenous TMD in women.
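
    A minimal sketch of the ROC analysis described above (area under the curve, best cutoff via the Youden index, and the corresponding sensitivity and specificity); the temperature values and group labels are hypothetical:

    ```python
    import numpy as np

    def roc_analysis(labels, scores):
        """ROC analysis of a continuous marker: AUC plus the cutoff that maximizes
        the Youden index (sensitivity + specificity - 1)."""
        labels = np.asarray(labels).astype(bool)
        scores = np.asarray(scores, dtype=float)
        thresholds = np.unique(scores)[::-1]          # candidate cutoffs, descending
        sens = np.array([np.mean(scores[labels] >= t) for t in thresholds])
        spec = np.array([np.mean(scores[~labels] < t) for t in thresholds])
        fpr = np.concatenate(([0.0], 1 - spec, [1.0]))
        tpr = np.concatenate(([0.0], sens, [1.0]))
        auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)   # trapezoidal rule
        best = np.argmax(sens + spec - 1)
        return auc, thresholds[best], sens[best], spec[best]

    # Hypothetical skin temperatures (deg C): 0 = control, 1 = myogenous TMD.
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    temperature = np.array([33.1, 33.4, 33.0, 33.6, 33.9, 34.2, 33.5, 34.4])
    auc, cutoff, se, sp = roc_analysis(group, temperature)
    print(f"AUC={auc:.2f}, cutoff={cutoff:.1f} C, sensitivity={se:.2f}, specificity={sp:.2f}")
    ```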

  12. Accuracy and feasibility of video analysis for assessing hamstring flexibility and validity of the sit-and-reach test.

    PubMed

    Mier, Constance M

    2011-12-01

    The accuracy of video analysis of the passive straight-leg raise test (PSLR) and the validity of the sit-and-reach test (SR) were tested in 60 men and women. Computer software measured static hip-joint flexion accurately. High within-session reliability of the PSLR was demonstrated (R > .97). Test-retest (separate days) reliability for SR was high in men (R = .97) and women (R = .98), and moderate for PSLR in men (R = .79) and women (R = .89). SR validity (PSLR as criterion) was higher in women (Day 1, r = .69; Day 2, r = .81) than men (Day 1, r = .64; Day 2, r = .66). In conclusion, video analysis is accurate and feasible for assessing static joint angles, the PSLR and SR tests are very reliable methods for assessing flexibility, and the validity of the SR for hamstring flexibility was moderate in women and low in men.

  13. Accuracy and usefulness of the AVOXimeter 4000 as routine analysis of carboxyhemoglobin.

    PubMed

    Fujihara, Junko; Kinoshita, Hiroshi; Tanaka, Naoko; Yasuda, Toshihiro; Takeshita, Haruo

    2013-07-01

    The measurement of blood carboxyhemoglobin (CO-Hb) is important for determining the cause of death. The AVOXimeter 4000 (AVOX), a portable CO-oximeter, has the advantages of a low purchase price and operating cost, ease of operation, and rapid results. Little information is available on the usefulness of the AVOX for forensic samples, and a previous study investigated only six samples. Therefore, in this study, we confirmed the usefulness of the AVOX by comparing its results with data previously obtained using the double-wavelength spectrophotometric method in autopsies. Regression analysis was performed between CO-Hb levels measured by the AVOX and those measured by the conventional double-wavelength spectrophotometric method in postmortem blood samples, and a significant correlation was observed. These results suggest that the AVOX is useful for analyzing postmortem blood, is suitable for routine forensic analysis, and can be applied at the crime scene.

  14. Optimizing statistical classification accuracy of satellite remotely sensed imagery for supporting fast flood hydrological analysis

    NASA Astrophysics Data System (ADS)

    Alexakis, Dimitrios; Agapiou, Athos; Hadjimitsis, Diofantos; Retalis, Adrianos

    2012-06-01

    The aim of this study is to improve the classification results of multispectral satellite imagery for supporting flood risk assessment in a catchment area in Cyprus. For this purpose, precipitation and ground spectroradiometric data were collected and analyzed with innovative statistical analysis methods. Samples of regolith and construction material were collected in situ and examined in the spectroscopy laboratory for their spectral response under consecutively different conditions of humidity. Moreover, reflectance values were extracted from the same targets using Landsat TM/ETM+ images for dry and humid time periods, using archived meteorological data. The comparison of the results showed that the spectral responses of all specimens were less correlated in cases of substantial humidity, both in the laboratory and in the satellite images. These results were validated by applying different classification algorithms (ISODATA, maximum likelihood, object-based, maximum entropy) to satellite images acquired during time periods when precipitation had been recorded.

  15. High Accuracy, High Energy He-ERD Analysis of H, D, and T

    SciTech Connect

    Browning, James F.; Langley, Robert A.; Doyle, Barney L.; Banks, James C.; Wampler, William R.

    1999-07-22

    A new analysis technique using high-energy helium ions for the simultaneous elastic recoil detection of all three hydrogen isotopes in metal hydride systems, extending to depths of several μm, is presented. Analysis shows that it is possible to separate each hydrogen isotope in a heavy matrix such as erbium to depths of 5 μm using incident 11.48 MeV ⁴He²⁺ ions with a detection system composed of a range foil and a ΔE-E telescope detector. Newly measured cross sections for the elastic recoil scattering of ⁴He²⁺ ions from protons and deuterons are presented in the energy range 10 to 11.75 MeV for the laboratory recoil angle of 30°.

  16. Software for Real-Time Analysis of Subsonic Test Shot Accuracy

    DTIC Science & Technology

    2014-03-01

    used the C++ programming language, the Open Source Computer Vision (OpenCV®) software library, and Microsoft Windows® Application Programming...video for comparison through OpenCV image analysis tools. Based on the comparison, the software then computed the coordinates of each shot relative to...DWB researchers wanted to use the Open Source Computer Vision (OpenCV) software library for capturing and analyzing frames of video. OpenCV contains

  17. Quotation accuracy in medical journal articles—a systematic review and meta-analysis

    PubMed Central

    Jergas, Hannah

    2015-01-01

    Background. Quotations and references are an indispensable element of scientific communication. They should support what authors claim or provide important background information for readers. Studies indicate, however, that quotations not serving their purpose—quotation errors—may be prevalent. Methods. We carried out a systematic review, meta-analysis and meta-regression of quotation errors, taking account of differences between studies in error ascertainment. Results. Out of 559 studies screened, we included 28 in the main analysis, and estimated major, minor and total quotation error rates of 11.9% (95% CI [8.4, 16.6]), 11.5% [8.3, 15.7], and 25.4% [19.5, 32.4], respectively. While heterogeneity was substantial, even the lowest estimate of total quotation errors was considerable (6.7%). Indirect references accounted for less than one sixth of all quotation problems. The findings remained robust in a number of sensitivity and subgroup analyses (including risk of bias analysis) and in meta-regression. There was no indication of publication bias. Conclusions. Readers of medical journal articles should be aware of the fact that quotation errors are common. Measures against quotation errors include spot checks by editors and reviewers, correct placement of citations in the text, and declarations by authors that they have checked cited material. Future research should elucidate if and to what degree quotation errors are detrimental to scientific progress. PMID:26528420
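
    A minimal sketch of random-effects pooling of proportions on the logit scale (DerSimonian-Laird), the general approach behind pooled error-rate estimates such as those above; the per-study counts are hypothetical and the paper's exact model may differ:

    ```python
    import numpy as np

    def pool_proportions_dl(events, totals):
        """Random-effects (DerSimonian-Laird) pooling of proportions on the logit scale."""
        events = np.asarray(events, dtype=float)
        totals = np.asarray(totals, dtype=float)
        p = events / totals
        y = np.log(p / (1 - p))                     # logit-transformed error rates
        v = 1 / events + 1 / (totals - events)      # approximate within-study variances
        w = 1 / v
        y_fixed = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - y_fixed) ** 2)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - (len(y) - 1)) / c)     # between-study variance
        w_re = 1 / (v + tau2)
        y_re = np.sum(w_re * y) / np.sum(w_re)
        se = np.sqrt(1 / np.sum(w_re))
        inv_logit = lambda x: 1 / (1 + np.exp(-x))
        return inv_logit(y_re), inv_logit(y_re - 1.96 * se), inv_logit(y_re + 1.96 * se)

    # Hypothetical per-study counts: quotations with errors / quotations checked.
    errors  = np.array([12, 30, 25, 8, 40])
    checked = np.array([100, 120, 150, 60, 200])
    pooled, lo, hi = pool_proportions_dl(errors, checked)
    print(f"Pooled error rate: {pooled:.1%} (95% CI {lo:.1%} to {hi:.1%})")
    ```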

  18. Improving Accuracy and Temporal Resolution of Learning Curve Estimation for within- and across-Session Analysis

    PubMed Central

    Tabelow, Karsten; König, Reinhard; Polzehl, Jörg

    2016-01-01

    Estimation of learning curves is ubiquitously based on proportions of correct responses within moving trial windows. Thereby, it is tacitly assumed that learning performance is constant within the moving windows, which, however, is often not the case. In the present study we demonstrate that violations of this assumption lead to systematic errors in the analysis of learning curves, and we explored the dependency of these errors on window size, different statistical models, and learning phase. To reduce these errors in the analysis of single-subject data as well as on the population level, we propose adequate statistical methods for the estimation of learning curves and the construction of confidence intervals, trial by trial. Applied to data from an avoidance learning experiment with rodents, these methods revealed performance changes occurring at multiple time scales within and across training sessions which were otherwise obscured in the conventional analysis. Our work shows that the proper assessment of the behavioral dynamics of learning at high temporal resolution can shed new light on specific learning processes and, thus, allows existing learning concepts to be refined. It further disambiguates the interpretation of neurophysiological signal changes recorded during training in relation to learning. PMID:27303809
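
    For context, the conventional moving-window estimate whose assumptions the study questions can be sketched as follows; the trial data are simulated, and this is not the authors' improved method:

    ```python
    import numpy as np

    def moving_window_learning_curve(correct, window=20):
        """Conventional learning-curve estimate: proportion of correct responses
        within a moving trial window (assumes constant performance per window)."""
        correct = np.asarray(correct, dtype=float)
        kernel = np.ones(window) / window
        return np.convolve(correct, kernel, mode="valid")

    # Hypothetical session of 200 trials in which performance improves over time.
    rng = np.random.default_rng(0)
    p_correct = np.linspace(0.2, 0.9, 200)          # true (non-constant) learning curve
    responses = rng.random(200) < p_correct         # 1 = correct avoidance response
    curve = moving_window_learning_curve(responses, window=20)
    print(np.round(curve[:5], 2), "...", np.round(curve[-5:], 2))
    ```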

  19. Studies on accuracy of trichothecene multitoxin analysis using stable isotope dilution assays.

    PubMed

    Asam, S; Rychlik, M

    2007-12-01

    Critical parameters in mycotoxin analysis were examined using stable isotope-labelled trichothecenes. Sample weight was downsized to 1 g without losing precision when sufficiently homogenized samples were taken for analysis. Complete extraction of trichothecenes could be achieved with a solvent mixture of acetonitrile + water (84+16; v+v) even without the use of stable isotope-labelled standards. However, particularly for the analysis of deoxynivalenol, the absolute amount of water in the solvent volume used for extraction appeared critical. Depending on the matrix, a low water amount resulted in quantitative values that were too low when no stable isotope-labelled standards were applied to correct for incomplete extraction. In this case the extraction volume had to be at least 10 ml per 1 g of sample when acetonitrile + water (84+16; v+v) was used as the extraction solvent. Losses during sample preparation using two different clean-up columns were not observed. On the contrary, matrix suppression in the ESI interface of the LC-MS equipment was found to be a serious problem. Depending on the matrix, the latter effect resulted in considerably lower values for trichothecenes when no stable isotope-labelled standards were used to counterbalance this suppression.

  20. Accuracy and precision of minimally-invasive cardiac output monitoring in children: a systematic review and meta-analysis.

    PubMed

    Suehiro, Koichi; Joosten, Alexandre; Murphy, Linda Suk-Ling; Desebbe, Olivier; Alexander, Brenton; Kim, Sang-Hyun; Cannesson, Maxime

    2016-10-01

    Several minimally-invasive technologies are available for cardiac output (CO) measurement in children, but the accuracy and precision of these devices have not yet been evaluated in a systematic review and meta-analysis. We conducted a comprehensive search of the medical literature in PubMed, Cochrane Library of Clinical Trials, Scopus, and Web of Science from inception to June 2014, assessing the accuracy and precision of all minimally-invasive CO monitoring systems used in children when compared with CO monitoring reference methods. Pooled mean bias, standard deviation, and mean percentage error of included studies were calculated using a random-effects model. The inter-study heterogeneity was also assessed using an I² statistic. A total of 20 studies (624 patients) were included. The overall random-effects pooled bias and mean percentage error were 0.13 ± 0.44 l min⁻¹ and 29.1%, respectively. Significant inter-study heterogeneity was detected (P < 0.0001, I² = 98.3%). In the sub-analysis by device, electrical cardiometry showed the smallest bias (-0.03 l min⁻¹) and lowest percentage error (23.6%). Significant residual heterogeneity remained after conducting sensitivity and subgroup analyses based on the various study characteristics. By meta-regression analysis, we found no independent effects of study characteristics on the weighted mean difference between reference and tested methods. Although the pooled bias was small, the mean pooled percentage error was in the gray zone of clinical applicability. In the sub-group analysis, electrical cardiometry was the device that provided the most accurate measurement. However, a high heterogeneity between studies was found, likely due to a wide range of study characteristics.
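
    A minimal sketch of how bias and percentage error are typically computed for a single method-comparison study (percentage error taken as 1.96 times the SD of the bias divided by the mean reference CO); the paired differences and reference mean below are hypothetical:

    ```python
    import numpy as np

    def percentage_error(bias_values, reference_mean_co):
        """Percentage error as commonly defined in CO method-comparison studies:
        1.96 x SD of the bias divided by the mean CO of the reference method."""
        bias_values = np.asarray(bias_values, dtype=float)
        return 1.96 * np.std(bias_values, ddof=1) / reference_mean_co * 100

    # Hypothetical paired differences (test minus reference CO, l/min) and reference mean CO.
    bias = np.array([0.2, -0.1, 0.4, 0.0, -0.3, 0.25, 0.1, -0.2])
    print(f"Mean bias: {np.mean(bias):.2f} l/min")
    print(f"Percentage error: {percentage_error(bias, reference_mean_co=3.5):.1f} %")
    ```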

  1. Precision and accuracy in the quantitative analysis of biological samples by accelerator mass spectrometry: application in microdose absolute bioavailability studies.

    PubMed

    Gao, Lan; Li, Jing; Kasserra, Claudia; Song, Qi; Arjomand, Ali; Hesk, David; Chowdhury, Swapan K

    2011-07-15

    Determination of the pharmacokinetics and absolute bioavailability of an experimental compound, SCH 900518, following an 89.7 nCi (100 μg) intravenous (iv) dose of ¹⁴C-SCH 900518 given 2 h after a 200 mg oral administration of nonradiolabeled SCH 900518 to six healthy male subjects is described. The plasma concentration of SCH 900518 was measured using a validated LC-MS/MS system, and accelerator mass spectrometry (AMS) was used for quantitative determination of the plasma ¹⁴C-SCH 900518 concentration. Calibration standards and quality controls were included for every batch of sample analysis by AMS to ensure acceptable assay quality. Plasma ¹⁴C-SCH 900518 concentrations were derived from the regression function established from the calibration standards, rather than directly from isotopic ratios from AMS measurement. The precision and accuracy of quality controls and calibration standards met the requirements of bioanalytical guidance (U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Veterinary Medicine. Guidance for Industry: Bioanalytical Method Validation (ucm070107), May 2001. http://www.fda.gov/downloads/Drugs/GuidanceCompilanceRegulatoryInformation/Guidances/ucm070107.pdf ). The AMS measurement had a linear response range from 0.0159 to 9.07 dpm/mL for plasma ¹⁴C-SCH 900518 concentrations. The CV and accuracy were 3.4-8.5% and 94-108% (82-119% for the lower limit of quantitation (LLOQ)), respectively, with a correlation coefficient of 0.9998. The absolute bioavailability was calculated from the dose-normalized area under the curve of the iv and oral doses after the plasma concentrations were plotted vs the sampling time post oral dose. The mean absolute bioavailability of SCH 900518 was 40.8% (range 16.8-60.6%). The typical accuracy and standard deviation in AMS quantitative analysis of drugs from human plasma samples have been reported for the first time, and the impact of these

  2. Analysis of hot-wire measurements accuracy in turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Dróżdż, Artur; Elsner, Witold

    2015-09-01

    This paper discusses the issue of measuring velocity fluctuations in a turbulent boundary layer using hot-wire probes. The study highlights the problem of spatial resolution, which is essential when measuring small scales in wall-bounded flows. Additionally, attention was paid to the inconsistency in streamwise fluctuation measurements between single- and X-wire probes. To clarify this problem, the energy spectra were calculated using the wavelet transformation. The analysis was performed for a turbulent boundary layer flow characterized by a Reynolds number based on the friction velocity of Reτ ≈ 1000.

  3. The Accuracy of Screw Axis Analysis Using Position Data from Anatomical Motion Studies.

    DTIC Science & Technology

    1980-05-05

    ...analyzed are the hip and the sacro-iliac joint. The bone movements analyzed are the femur moving relative to the left innominate for hip motion, and the...sacrum moving relative to the innominate for the sacro-iliac joint. The cadaver used was a Caucasian male who was 80 years old. The primary cause of

  4. The diagnostic accuracy of pharmacological stress echocardiography for the assessment of coronary artery disease: a meta-analysis

    PubMed Central

    Picano, Eugenio; Molinaro, Sabrina; Pasanisi, Emilio

    2008-01-01

    Background Recent American Heart Association/American College of Cardiology guidelines state that "dobutamine stress echo has substantially higher sensitivity than vasodilator stress echo for detection of coronary artery stenosis", while the European Society of Cardiology guidelines and the European Association of Echocardiography recommendations conclude that "the two tests have very similar applications". Who is right? Aim To evaluate the diagnostic accuracy of dobutamine versus dipyridamole stress echocardiography through an evidence-based approach. Methods From a PubMed search, we identified all papers with coronary angiographic verification and a head-to-head comparison of dobutamine stress echo (40 mcg/kg/min ± atropine) versus dipyridamole stress echo performed with state-of-the-art protocols (either 0.84 mg/kg in 10' plus atropine, or 0.84 mg/kg in 6' without atropine). A total of 5 papers were found. Pooled weighted meta-analysis was performed. Results The 5 analyzed papers recruited 435 patients, 299 with and 136 without angiographically assessed coronary artery disease (quantitatively assessed stenosis > 50%). Dipyridamole and dobutamine showed similar accuracy (87%, 95% confidence intervals, CI, 83–90, vs. 84%, CI, 80–88, p = 0.48), sensitivity (85%, CI 80–89, vs. 86%, CI 78–91, p = 0.81) and specificity (89%, CI 82–94 vs. 86%, CI 75–89, p = 0.15). Conclusion When state-of-the-art protocols are considered, dipyridamole and dobutamine stress echo have similar accuracy, specificity and – most importantly – sensitivity for detection of CAD. European recommendations concluding that "dobutamine and vasodilators (at appropriately high doses) are equally potent ischemic stressors for inducing wall motion abnormalities in presence of a critical coronary artery stenosis" are evidence-based. PMID:18565214

  5. Diagnostic accuracy of exhaled nitric oxide in asthma: a meta-analysis of 4,691 participants

    PubMed Central

    Li, Zhenzhen; Qin, Wenzhe; Li, Lei; Wu, Qin; Wang, Youjuan

    2015-01-01

    Asthma is a common airway inflammatory disease, but current methods for diagnosing it are poor. Here we meta-analyze the available evidence on the ability of exhaled nitric oxide (eNO) to serve as a diagnostic marker of asthma. We systematically searched the PubMed and EMBASE databases, and published data on sensitivity, specificity and other measures of the diagnostic accuracy of eNO in the diagnosis of asthma were meta-analyzed. The methodological quality of each study was assessed by QUADAS-2 (quality assessment for studies of diagnostic accuracy). Statistical analysis was performed employing Meta-Disc 1.4 software and STATA, and the measures of accuracy of eNO in the diagnosis of asthma were pooled using random-effects models. A total of nineteen publications reporting twenty-one case-control studies were identified. Pooled results indicated that eNO showed a diagnostic sensitivity of 0.78 (95% CI 0.76 to 0.80) and a specificity of 0.74 (95% CI 0.72 to 0.76). The PLR was 3.70 (95% CI 2.84 to 4.81) and the NLR was 0.35 (95% CI 0.26 to 0.47). The DOR was 11.37 (95% CI 7.54 to 17.13). Exhaled nitric oxide shows insufficient sensitivity and specificity for diagnosing asthma; eNO measurements may be useful in combination with clinical manifestations and conventional tests such as pulmonary function tests, assessment of bronchodilator response and bronchial challenge tests. PMID:26309503

  6. An improving fringe analysis method based on the accuracy of S-transform profilometry

    NASA Astrophysics Data System (ADS)

    Shen, Qiuju; Chen, Wenjing; Zhong, Min; Su, Xianyu

    2014-07-01

    The S transform, as a simple and popular technique for space/time-frequency analysis, has been introduced into optical three-dimensional surface shape measurement in recent years. Based on the S transform, the S transform “ridge” method (STR) and the S transform filtering method (STF) have been proposed to extract the phase information from a single deformed fringe pattern. This paper focuses on the STR in fringe pattern analysis. In previous research on the STR, a linear constraint, which assumed that the phase could be locally expressed as a first-order Taylor expansion with respect to the x and y directions, was implicitly added. In fact, at least the second-order partial derivatives of the phase in each local area should be taken into account, because they are related to the local curvature of the height distribution of the tested object. Therefore, the traditional STR has larger phase measurement errors in areas where the height of the tested object varies rapidly. This paper proposes an improved STR method in which the phase is approximately expressed as a quadric in each local area. The phase extraction formula based on the quadric is derived, and the phase correction is carried out as well. Both simulations and experiments verify that a more accurate phase map can be obtained by the improved method than by the traditional STR, especially in areas where the height variation is steep.

  7. Kinematic Accuracy Analysis of Lead Screw W Insertion Mechanism with Flexibility

    NASA Astrophysics Data System (ADS)

    He, Hu; Zhang, Lei; Kong, Jiayuan

    According to the actual requirements of W insertion, a set of variable-lead screw W insertion mechanisms was designed, the motion characteristics of the mechanism were analyzed, and a kinematics simulation was carried out with MATLAB. The precision of the mechanism was analyzed with an analytical method, and the error coefficient curve of each component in the mechanism was obtained. Dynamics simulations of the rigid mechanism and of the mechanism with flexibility at different speeds were conducted with ADAMS, and the real-time elastic deformation of the flexible connecting rod was obtained. Taking the influence of the elastic connecting rod into consideration, the output motion error and the elastic deformation of the components increased with the speed of the loom.

  8. Use of model calibration to achieve high accuracy in analysis of computer networks

    DOEpatents

    Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

    2004-05-11

    A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.

  9. The optomechanical analysis of high-accuracy mesh design in optical transmission components

    NASA Astrophysics Data System (ADS)

    Hsu, Ming-Ying; Chang, Shenq-Tsong; Huang, Ting-Ming

    2016-09-01

    This paper presents an optomechanical analysis of thermal effects in refractive optical components using the finite difference method (FDM). For the incident rays passing through the FDM elements, the temperature or the stress along the ray path is estimated by weighting. The weighting introduces some error into the calculated optical path difference (OPD) and brings some high-frequency aberration into the optical simulation; therefore, the mesh design process must consider the optical ray path footprint. The footprints of the incident and emergent rays are associated at the lens surface using Patran software, and those associated footprints are added to the mesh points at the lens surface. The incident rays are separated into several sections, and each section can find its nearest grid point in the lens FDM mesh. Thus, moving the nearest grid point to the incident ray section can reduce the weighting or interpolation error in the OPD calculation. The calculation results allow the thermal or stress effects in optical transmission components to be evaluated more accurately.

  10. Quantifying Vegetation Change in Semiarid Environments: Precision and Accuracy of Spectral Mixture Analysis and the Normalized Difference Vegetation Index

    NASA Technical Reports Server (NTRS)

    Elmore, Andrew J.; Mustard, John F.; Manning, Sara J.

    2000-01-01

    Because in situ techniques for determining vegetation abundance in semiarid regions are labor intensive, they usually are not feasible for regional analyses. Remotely sensed data provide the large spatial scale necessary, but their precision and accuracy in determining vegetation abundance and its change through time have not been quantitatively determined. In this paper, the precision and accuracy of two techniques, Spectral Mixture Analysis (SMA) and the Normalized Difference Vegetation Index (NDVI) applied to Landsat TM data, are assessed quantitatively using high-precision in situ data. In Owens Valley, California, we have 6 years of continuous field data (1991-1996) for 33 sites acquired concurrently with six cloudless Landsat TM images. The multitemporal remotely sensed data were coregistered to within 1 pixel, radiometrically intercalibrated using temporally invariant surface features, and geolocated to within 30 m. These procedures facilitated the accurate location of field-monitoring sites within the remotely sensed data. Formal uncertainties in the registration, radiometric alignment, and modeling were determined. Results show that SMA absolute percent live cover (%LC) estimates are accurate to within ±4.0%LC and estimates of change in live cover have a precision of ±3.8%LC. Furthermore, even when applied to areas of low vegetation cover, the SMA approach correctly determined the sense of change (i.e., positive or negative) in 87% of the samples. SMA results are superior to NDVI, which, although correlated with live cover, is not a quantitative measure and showed the correct sense of change in only 67% of the samples.
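
    For illustration, NDVI and a simple unconstrained two-endmember linear unmixing (the core idea behind SMA) can be sketched as below; the reflectances and endmember spectra are hypothetical, and the study's actual SMA model and calibration are not reproduced here:

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index from NIR and red reflectance."""
        nir, red = np.asarray(nir, float), np.asarray(red, float)
        return (nir - red) / (nir + red)

    def unmix_fractions(pixel_spectrum, endmembers):
        """Linear spectral mixture analysis: least-squares endmember fractions for one
        pixel given an (n_bands x n_endmembers) endmember matrix (no sum-to-one or
        non-negativity constraints in this sketch)."""
        fractions, *_ = np.linalg.lstsq(endmembers, pixel_spectrum, rcond=None)
        return fractions

    # Hypothetical TM-like band reflectances and two endmembers (vegetation, bare soil).
    pixel = np.array([0.05, 0.08, 0.06, 0.30, 0.22, 0.15])
    veg   = np.array([0.03, 0.06, 0.04, 0.50, 0.25, 0.12])
    soil  = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.28])
    E = np.column_stack([veg, soil])

    print("NDVI:", round(float(ndvi(pixel[3], pixel[2])), 3))
    print("Endmember fractions (veg, soil):", np.round(unmix_fractions(pixel, E), 3))
    ```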

  11. Utility of composite reference standards and latent class analysis in evaluating the clinical accuracy of diagnostic tests for pertussis.

    PubMed

    Baughman, Andrew L; Bisgard, Kristine M; Cortese, Margaret M; Thompson, William W; Sanden, Gary N; Strebel, Peter M

    2008-01-01

    Numerous evaluations of the clinical sensitivity and specificity of PCR and serologic assays for Bordetella pertussis have been hampered by the low sensitivity of culture, the gold standard test, which leads to biased accuracy estimates. The bias can be reduced by using statistical approaches such as the composite reference standard (CRS) (e.g., positive if culture or serology positive; negative otherwise) or latent class analysis (LCA), an internal reference standard based on a statistical model. We illustrated the benefits of the CRS and LCA approaches by reanalyzing data from a 1995 to 1996 study of cough illness among 212 patients. The accuracy of PCR in this study was evaluated using three reference standards: culture, CRS, and LCA. Using specimens obtained 0 to 34 days after cough onset, estimates of the sensitivity of PCR obtained using CRS (47%) and LCA (34%) were lower than the culture-based estimate (62%). The CRS and LCA approaches, which utilized more than one diagnostic marker of pertussis, likely produced more accurate reference standards than culture alone. In general, the CRS approach is simple, with a well-defined disease status. LCA requires statistical modeling but incorporates more indicators of disease than CRS. When three or more indicators of pertussis are available, these approaches should be used in evaluations of pertussis diagnostic tests.
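
    A minimal sketch of how a composite reference standard changes the apparent sensitivity and specificity of a test such as PCR; the per-patient results are hypothetical:

    ```python
    import numpy as np

    # Hypothetical per-patient results (1 = positive, 0 = negative).
    pcr      = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
    culture  = np.array([1, 0, 0, 1, 0, 0, 0, 0, 1, 0])
    serology = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])

    # Composite reference standard: positive if culture OR serology is positive.
    crs = (culture | serology).astype(bool)

    def sens_spec(test, reference):
        """Clinical sensitivity and specificity of a test against a reference standard."""
        test, reference = test.astype(bool), reference.astype(bool)
        sens = np.sum(test & reference) / np.sum(reference)
        spec = np.sum(~test & ~reference) / np.sum(~reference)
        return round(sens, 2), round(spec, 2)

    print("PCR vs culture alone:", sens_spec(pcr, culture.astype(bool)))
    print("PCR vs composite reference standard:", sens_spec(pcr, crs))
    ```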

  12. The accuracy of microRNA-210 in diagnosing lung cancer: a systematic review and meta-analysis

    PubMed Central

    Zhang, Chao; Tong, Zhaohui

    2016-01-01

    Studies examining the diagnostic value of microRNA-210 for lung cancer have yielded inconsistent results. Here, we performed a meta-analysis to assess the diagnostic accuracy of microRNA-210 for lung cancer. Nine eligible studies involving 993 patients (554 lung cancer patients and 439 non-cancer patients) were independently identified, and the quality of these studies was assessed according to Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) guidelines. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.66 (95% CI, 0.57 to 0.75), 0.82 (95% CI, 0.72 to 0.89), 3.64 (95% CI, 2.54 to 5.21), 0.41 (95% CI, 0.34 to 0.51) and 8.78 (95% CI, 6.10 to 12.66), respectively. The area under the summary receiver operator characteristic curve was 0.80 (95% CI, 0.76 to 0.83). These results indicated that microRNA-210 had moderate diagnostic value for lung cancer. Additional prospective studies are needed to confirm the diagnostic value of microRNA-210. PMID:27557519

  13. Change perspective to increase diagnostic accuracy of ultrasonography in calcium pyrophosphate dihydrate deposition disease! A new approach: the axial scan of the meniscus.

    PubMed

    Filippou, G; Picerno, V; Adinolfi, A; Di Sabatino, V; Bertoldi, I; Galeazzi, M; Frediani, B

    2015-03-31

    Ultrasonography (US) is a relevant tool in the study of calcium pyrophosphate dihydrate (CPP) deposition disease. However, the differential diagnosis of hyperechoic deposits within the fibrocartilage can be difficult; moreover, US examination is limited by the need for an adequate acoustic window. We describe a US scanning technique that offers a new viewpoint in the study of the knee meniscal structure: a longitudinal scan performed along the long axis of the meniscus. This technique proves to be particularly useful for the identification of CPP deposition, but could also improve the diagnostic utility and accuracy of US in other meniscal pathologies.

  14. Accuracy of intra-operative frozen section analysis of ovarian tumours.

    PubMed

    Gorisek, B; Stare, M Rebolj; Krajnc, I

    2009-01-01

    During operative treatment for ovarian tumours, assistance is frequently required in deciding on malignancy status and the extent of the ensuing procedure. Intra-operative frozen section analysis may be useful, provided there is adequate knowledge of the correlation between frozen-section and permanent histopathological diagnoses at the institution where the operation is undertaken. This retrospective study aimed to determine this correlation. Findings from 131 intra-operative frozen sections were compared with the subsequent diagnoses from permanent histopathological sections for women with benign, borderline and malignant ovarian tumours at the Maribor Teaching Hospital (now the University Clinical Centre Maribor) between 1 January 1993 and 31 December 2001. Frozen-section findings corresponded to histopathological findings in 84.7% of cases, with 15.3% false-negative and no false-positive results. For benign, borderline and malignant ovarian tumours, sensitivity was 100.0%, 76.1% and 89.0%, respectively, and specificity was 90.6%, 90.6% and 100.0%, respectively. The majority of errors occurred in diagnosing mucinous borderline tumours. Precise pre-operative diagnosis is extremely important in the treatment of ovarian tumours.

  15. Analysis of factors affecting the accuracy, reproducibility, and interpretation of microbial community carbon source utilization patterns

    USGS Publications Warehouse

    Haack, S.K.; Garchow, H.; Klug, M.J.; Forney, L.J.

    1995-01-01

    We determined factors that affect responses of bacterial isolates and model bacterial communities to the 95 carbon substrates in Biolog microtiter plates. For isolates and communities of three to six bacterial strains, substrate oxidation rates were typically nonlinear and were delayed by dilution of the inoculum. When inoculum density was controlled, patterns of positive and negative responses exhibited by microbial communities to each of the carbon sources were reproducible. Rates and extents of substrate oxidation by the communities were also reproducible but were not simply the sum of those exhibited by community members when tested separately. Replicates of the same model community clustered when analyzed by principal-components analysis (PCA), and model communities with different compositions were clearly separated on the first PCA axis, which accounted for >60% of the dataset variation. PCA discrimination among different model communities depended on the extent to which specific substrates were oxidized. However, the substrates interpreted by PCA to be most significant in distinguishing the communities changed with reading time, reflecting the nonlinearity of substrate oxidation rates. Although whole-community substrate utilization profiles were reproducible signatures for a given community, the extent of oxidation of specific substrates and the numbers or activities of microorganisms using those substrates in a given community were not correlated. Replicate soil samples varied significantly in the rate and extent of oxidation of seven tested substrates, suggesting microscale heterogeneity in composition of the soil microbial community.
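
    A minimal sketch of principal-components analysis of substrate-utilization profiles via SVD; the Biolog-style responses are simulated, not the study's data:

    ```python
    import numpy as np

    # Hypothetical profiles: rows = community replicates, columns = substrates
    # (colour development at a fixed reading time); two model communities, three replicates each.
    rng = np.random.default_rng(1)
    community_a = rng.normal([0.9, 0.1, 0.6, 0.8, 0.2], 0.05, size=(3, 5))
    community_b = rng.normal([0.2, 0.7, 0.5, 0.1, 0.9], 0.05, size=(3, 5))
    X = np.vstack([community_a, community_b])

    # Principal-components analysis via SVD of the mean-centred data matrix.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U * S                                  # sample coordinates on the PC axes
    explained = S ** 2 / np.sum(S ** 2)             # fraction of variance per axis

    print(f"Variance explained by PC1: {explained[0]:.0%}")
    print("PC1 scores (replicates of A, then B):", np.round(scores[:, 0], 2))
    ```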

  16. Evaluation of accuracy and applicability of protein models: retrospective analysis of biological and biomedical predictions.

    PubMed

    Khan, Sofia; Vihinen, Mauno

    2009-01-01

    In order to study protein function and activity, structural data are required. Since experimental structures are available for just a small fraction of all known protein sequences, computational methods such as protein modelling can provide useful information. Over the last few decades we have predicted, with homology modelling methods, the structures of numerous proteins. In this study we assess the structural quality of the models and the validity of the biological and medical interpretations and predictions made based on them. All the models had correct scaffolding and were ranked at least as correct or good by numerical evaluators, even though the sequence identity with the template was as low as 8%. The biological explanations made based on the models were well in line with experimental structures and other experimental studies. Retrospective analysis of homology models indicates the power of protein modelling when it is carried out carefully, from sequence alignment to model building and refinement. Modelling can be applied to studying and predicting different kinds of biological phenomena and, according to our results, it can be done with success.

  17. Accuracy analysis of a new method to estimate chromatic wavefront error

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Pluzhnik, Eugene; Belikov, Ruslan

    2016-07-01

    An internal coronagraph with an adaptive optical system for wavefront correction for direct imaging of exoplanets is currently being considered for many mission concepts: a dedicated instrument under development for the upcoming WFIRST mission, prime instruments in the large-scale HabEx and LUVOIR mission studies, and smaller-scale missions such as ACESAT. To enable direct imaging of exoplanets with an internal coronagraph, both diffraction and scattered light from the stellar point spread function must be directly suppressed using the coronagraph instrument or corrected in post-processing. Both of these tasks require estimation of the chromatically dependent complex electric field in the focal plane using either the main science camera or the integral field spectrograph (IFS) camera. To date, the most common method to estimate the chromaticity of the complex electric field uses a heterodyne term generated by DM probes and requires a sequence of narrowband filters to increase coherence. We extend this concept to enable estimation from direct broadband images using a well-calibrated broadband response matrix of the DM probes. Our broadband focal plane estimation method can be used with a single broadband filter, providing an alternative to more complicated methods that require several monochromatic channels or a dedicated integral field spectrograph. This capability can also enable low-cost, low-complexity coronagraph missions. We demonstrate the broadband estimation method using full 30% bandwidth broadband input light with an optical simulator featuring a PIAA coronagraph.

  18. Accuracy decline of the Brillouin optical time-domain analysis system induced by self-phase modulation

    NASA Astrophysics Data System (ADS)

    Zhou, Yuqing; Chen, Wei; Meng, Zhou

    2016-09-01

    The Brillouin optical time-domain analysis (BOTDA) system is a distributed optical fiber sensing system based on measurement of the effective Brillouin gain, in which a high-power pump pulse is required to achieve optical time-domain reflectometry (OTDR) spatial localization. For a strictly rectangular pulse, the Brillouin gain spectrum (BGS) maintains a Lorentzian line profile along the sensing fiber, whereas an actual rectangular pulse has power transients such as the rising and falling edges, which broaden or even distort the BGS through the effect of self-phase modulation (SPM) and thereby degrade the measurement accuracy. A model of the effect of pump pulse power transients on the BGS through SPM is established based on the nonlinear Schrödinger equation (NLSE) in regular single-mode fiber (SMF).

  19. SU-E-T-99: An Analysis of the Accuracy of TPS Extrapolation of Commissioning Data

    SciTech Connect

    Alkhatib, H; Oves, S; Gebreamlak, W; Mihailidis, D

    2015-06-15

    Purpose: To investigate discrepancies between measured percent depth dose curves of a linear accelerator at depths beyond the commissioning data and those generated by the treatment planning system (TPS) via extrapolation. Methods: Relative depth doses were measured on an Elekta Synergy™ linac for photon beams of 6 MV and 10 MV. The SSD for all curves was 100 cm and field sizes ranged from 4×4 to 35×35 cm². As most scanning tanks cannot provide depths greater than about 30 cm, percent depth dose measurements, extending to 45 cm depth, were performed in Solid Water™ using a 0.125-cc ionization chamber (PTW model TN31012). The buildup regions of the curves were acquired with a parallel plate chamber (PTW model TN34001). Extrapolated curves were generated by the TPS (Philips Pinnacle³ v. 9.6) by applying beams to CT images of 50 cm of Solid Water™ with the density override set to 1.0 g/cc. Results: The percent difference between the two sets of curves (measured and TPS) was investigated. There is significant discrepancy in the buildup region to a depth of 7 mm. Beyond this depth, the two sets show good agreement. When analyzing the tail end of the curves, we saw percent differences of between 1.2% and 3.2%. The highest disagreement for the 6-MV curves was for the 10×10 cm² field (3%) and for the 10-MV curves it was the 35×35 cm² field (3.2%). Conclusion: A qualitative analysis of the measured data versus PDD curves generated by the TPS shows generally good agreement beyond 1 cm. However, a measurable percent difference was observed when comparing curves at depths beyond those provided by the commissioning data and at depths in the buildup region. Possible explanations for this include inaccuracies in the modeling of the Solid Water™ or drift in beam energy since commissioning. Additionally, closer attention must be paid to measurements in the buildup region.
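
    A minimal sketch of the pointwise percent-difference comparison between measured and TPS-extrapolated PDD values; the depths and dose values below are hypothetical, not the study's measurements:

    ```python
    import numpy as np

    def pdd_percent_difference(measured, tps, depths):
        """Pointwise percent difference between measured PDD values and TPS-generated
        values at matching depths, relative to the measured value."""
        measured, tps = np.asarray(measured, float), np.asarray(tps, float)
        diff = 100.0 * (tps - measured) / measured
        return dict(zip(depths, np.round(diff, 2)))

    # Hypothetical 6-MV, 10x10 cm^2 PDD values (%) at depths beyond the scanned range.
    depths   = [32, 36, 40, 44]          # cm
    measured = [26.4, 21.9, 18.2, 15.1]  # Solid Water + ionization chamber
    tps      = [26.8, 22.5, 18.7, 15.6]  # extrapolated by the TPS
    print(pdd_percent_difference(measured, tps, depths))
    ```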

  20. Diagnostic accuracy of Ber-EP4 for metastatic adenocarcinoma in serous effusions: a meta-analysis.

    PubMed

    Wang, Bo; Li, Diandian; Ou, Xuemei; Yi, Qun; Feng, Yulin

    2014-01-01

    Numerous studies have investigated the utility of Ber-EP4 in differentiating metastatic adenocarcinoma (MAC) from malignant epithelial mesothelioma (MM) and/or reactive mesothelial cells (RM) in serous effusions. However, the results remain controversial. The aim of this study is to determine the overall accuracy of Ber-EP4 in serous effusions for MAC through a meta-analysis of published studies. Publications addressing the accuracy of Ber-EP4 in the diagnosis of MAC were selected from PubMed, Embase and the Cochrane Library. Data from selected studies were pooled to yield summary sensitivity, specificity, positive and negative likelihood ratio (LR), diagnostic odds ratio (DOR), and receiver operating characteristic (SROC) curve. Statistical analysis was performed with Meta-Disc 1.4 and STATA 12.0 software. 29 studies, based on 2646 patients, met the inclusion criteria, and the summary estimates for Ber-EP4 in the diagnosis of MAC were: sensitivity 0.8 (95% CI: 0.78-0.82), specificity 0.94 (95% CI: 0.93-0.96), positive likelihood ratio (PLR) 12.72 (95% CI: 8.66-18.7), negative likelihood ratio (NLR) 0.18 (95% CI: 0.12-0.26) and diagnostic odds ratio 95.05 (95% CI: 57.26-157.77). The SROC curve indicated that the maximum joint sensitivity and specificity (Q-value) was 0.91; the area under the curve was 0.96. Our findings suggest that Ber-EP4 may be a useful diagnostic adjunctive tool for confirming MAC in serous effusions.

  1. Analysis of the lattice Boltzmann Bhatnagar-Gross-Krook no-slip boundary condition: Ways to improve accuracy and stability

    NASA Astrophysics Data System (ADS)

    Verschaeve, Joris C. G.

    2009-09-01

    An analytical and numerical analysis of the no-slip boundary condition at walls at rest for the lattice Boltzmann Bhatnagar-Gross-Krook method is performed. The main result of this analysis is an alternative formulation for the no-slip boundary condition at walls at rest. Numerical experiments assess the accuracy and stability of this formulation for Poiseuille and Womersley flows, flow over a backward facing step, and unsteady flow around a square cylinder. This no-slip boundary condition is compared analytically and numerically to the boundary conditions of Inamuro [Phys. Fluids 7, 2928 (1995)] and Zou and He [Phys. Fluids 9, 1591 (1997)] and it is found that all three make use of the same mechanism for the off-diagonal element of the stress tensor. Mass conservation, however, is only assured by the present one. In addition, our analysis points out which mechanism lies behind the instabilities also observed by Lätt [Phys. Rev. E 77, 056703 (2008)] for this kind of boundary conditions. We present a way to remove these instabilities, allowing one to reach relaxation frequencies considerably closer to 2.

  2. Accuracy of specimen-specific nonlinear finite element analysis for evaluation of radial diaphysis strength in cadaver material.

    PubMed

    Matsuura, Yusuke; Kuniyoshi, Kazuki; Suzuki, Takane; Ogawa, Yasufumi; Sukegawa, Koji; Rokkaku, Tomoyuki; Thoreson, Andrew Ryan; An, Kai-Nan; Takahashi, Kazuhisa

    2015-01-01

    The feasibility of a user-specific finite element model for predicting the in situ strength of the radius after implantation of bone plates for open fracture reduction was established. The effect of metal artifact in CT imaging was characterized. The results were verified against biomechanical test data. Fourteen cadaveric radii were divided into two groups: (1) intact radii for evaluating the accuracy of radial diaphysis strength predictions with finite element analysis and (2) radii with a locking plate affixed for evaluating metal artifact. All bones were imaged with CT. In the plated group, radii were first imaged with the plates affixed (for simulating digital plate removal). They were then subsequently imaged with the locking plates and screws removed (actual plate removal). Fracture strength of the radius diaphysis under axial compression was predicted with a three-dimensional, specimen-specific, nonlinear finite element analysis for both the intact and plated bones (bones with and without the plate captured in the scan). Specimens were then loaded to failure using a universal testing machine to verify the actual fracture load. In the intact group, the physical and predicted fracture loads were strongly correlated. For radii with plates affixed, the physical and predicted (simulated plate removal and actual plate removal) fracture loads were strongly correlated. This study demonstrates that our specimen-specific finite element analysis can accurately predict the strength of the radial diaphysis. The metal artifact from CT imaging was shown to produce an overestimate of strength.

  3. Diagnostic Accuracy of Hepatic Vein Arrival Time Performed with Contrast-Enhanced Ultrasonography for Cirrhosis: A Systematic Review and Meta-Analysis

    PubMed Central

    Kim, Gaeun; Shim, Kwang Yong; Baik, Soon Koo

    2017-01-01

    Background/Aims We identified reports in the literature regarding the diagnostic accuracy of hepatic vein arrival time (HVAT) measured by contrast-enhanced ultrasonography (CEUS) to assess hepatic fibrosis in cirrhosis. Methods The Ovid MEDLINE, Embase, and Cochrane databases were searched for all studies published up to 23 July 2015 that evaluated liver status using CEUS and liver biopsy (LB). The QUADAS-II (quality assessment of diagnostic accuracy studies-II) was applied to assess the internal validity of the diagnostic studies. Selected studies were subjected to a meta-analysis with MetaDisc 1.4 and RevMan 5.3. Results A total of 12 studies including 844 patients with chronic liver disease met our inclusion criteria. The overall summary sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio of the HVAT measured by CEUS for the detection of cirrhosis compared to LB were 0.83 (95% confidence interval [CI], 0.77 to 0.89), 0.75 (95% CI, 0.69 to 0.79), 3.45 (95% CI, 1.60 to 7.43), and 0.28 (95% CI, 0.10 to 0.74), respectively. The summary diagnostic odds ratio (random effects model) was 15.23 (95% CI, 3.07 to 75.47), the summary receiver operator characteristics area under the curve was 0.74 (standard error [SE]=0.14), and the index Q was 0.69 (SE=0.11). Conclusions Based on a systematic review, the measurement of HVAT by CEUS exhibited an increased accuracy and correlation for the detection of cirrhosis. PMID:27538445

  4. Analysis of the influence of system parameters on the measurement accuracy of a high spectral resolution lidar

    NASA Astrophysics Data System (ADS)

    Song, Changbo; Boselli, Antonella; Sannino, Alessia; Zhao, Yiming; Spinelli, Nicola; Wang, Xuan

    2016-10-01

    Atmospheric aerosols play very important roles in climate change and particulate air pollution. Lidars based on elastic scattering have been widely used to measure the aerosol spatial distribution and to retrieve profiles of aerosol optical properties under an assumption about the aerosol extinction-to-backscatter ratio. High Spectral Resolution Lidar (HSRL) is one of the methods that can be used to measure aerosol optical properties without a-priori hypotheses. Compared to Raman lidar, HSRL has the advantage of both day and night measurements and can be adapted to many kinds of carrying platforms. Unlike an ordinary elastic backscatter lidar, HSRL needs to separate the Mie signal scattered by atmospheric aerosol from the Rayleigh signal scattered by atmospheric molecules. Due to the small spectral difference between the Mie and Rayleigh signals, there are three difficulties: firstly, the laser source must have a narrow bandwidth, high energy and a stable center wavelength; secondly, the receiver should have a very narrow spectral filter to separate aerosol scattering from molecular scattering; thirdly, the center wavelength of the receiver must be locked in real time to the laser source. In order to study the influence of system parameters on the measurement accuracy of a high spectral resolution lidar and to optimize their values, a simulation and analysis has been carried out and is presented in this paper. The system parameters analyzed here mainly include the linewidth of the emission laser, the bandwidth of the Fabry-Pérot interferometric filter in the receiver, and the spectral tracking accuracy between the receiver and the laser. At the same time, several environmental factors have been considered, including atmospheric temperature and wind, the pointing accuracy of the platform, the aerosol concentration range, etc. A typical vertical distribution of atmospheric aerosol optical properties is considered and the received signals of the high spectral channels are simulated. From the simulated signals, the

  5. Analysis of Influence of Terrain Relief Roughness on dem Accuracy Generated from LIDAR in the Czech Republic Territory

    NASA Astrophysics Data System (ADS)

    Hubacek, M.; Kovarik, V.; Kratochvil, V.

    2016-06-01

    Digital elevation models are today a common part of geographic information systems and derived applications. The ways in which they are created vary, depending on the extent of the area, the required accuracy, the delivery time, the financial resources and the technologies available. The first model covering the whole territory of the Czech Republic was created as early as the 1980s. Currently, the 5th DEM generation is being finished. Data collection for this model was carried out using airborne laser scanning, which allowed a DEM of a new generation with a precision of up to a decimetre to be created. A model of such precision expands the possibilities of employing the DEM and also offers new opportunities for the use of elevation data, especially in the domain of modelling phenomena that depend on highly accurate data. Examples are precise modelling of hydrological phenomena, studying micro-relief objects, modelling vehicle movement, detecting and describing historical changes of a landscape, designing constructions, etc. Due to the nature of the technology used for collecting data and generating the DEM, it is assumed that the resulting model achieves lower accuracy in areas covered by vegetation and in built-up areas. Therefore, the verification of model accuracy was carried out in five selected areas in Moravia. A network of check points was established using a total station in each area. To determine the reference heights of the check points, known geodetic points whose heights were defined using levelling were used. Up to several thousand points were surveyed in each area. Individual points were selected according to different configurations of the relief, different surface types, and different vegetation coverage. Sets of deviations were obtained by comparing the DEM 5G heights with the reference heights, which was followed by verification of the tested elevation model. The results of the analysis showed that the model generally reaches higher precision than the declared one in

  6. The Feasibility and Accuracy of Sentinel Lymph Node Biopsy in Initially Clinically Node-Negative Breast Cancer after Neoadjuvant Chemotherapy: A Systematic Review and Meta-Analysis

    PubMed Central

    Geng, Chong; Chen, Xiao; Pan, Xiaohua; Li, Jiyu

    2016-01-01

    Background With the increased use of neoadjuvant chemotherapy (NAC) in breast cancer, the timing of sentinel lymph node biopsy (SLNB) has become increasingly important. In this study, we aimed to evaluate the feasibility and accuracy of SLNB for initially clinically node-negative breast cancer after NAC by conducting a systematic review and meta-analysis. Methods We searched PubMed, Embase, and the Cochrane Library from January 1, 1993 to November 30, 2015 for studies on initially clinically node-negative breast cancer patients who underwent SLNB after NAC followed by axillary lymph node dissection (ALND). Results A total of 1,456 patients from 16 studies were included in this review. The pooled identification rate (IR) for SLNB was 96% [95% confidence interval (CI): 95%-97%], and the false negative rate (FNR) was 6% (95% CI: 3%-8%). The pooled sensitivity, negative predictive value (NPV) and accuracy rate (AR) were 94% (95% CI: 92%-97%, I2 = 27.5%), 98% (95% CI: 98%-99%, I2 = 42.7%) and 99% (95% CI: 99%-100%, I2 = 32.6%), respectively. In the subgroup analysis, no significant differences were found in either the IR of an SLNB when different mapping methods were used (P = 0.180) or in the FNR between studies with and without immunohistochemistry (IHC) staining (P = 0.241). Conclusion Based on current evidence, SLNB is technically feasible and accurate enough for axillary staging in initially clinically node-negative breast cancer patients after NAC. PMID:27606623
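
    A minimal sketch of the accuracy metrics pooled above (sensitivity, false negative rate, negative predictive value, and accuracy rate) computed from 2x2 counts of SLNB against ALND as the reference; the counts are hypothetical:

    ```python
    def slnb_accuracy(tp, fn, tn, fp=0):
        """Accuracy metrics for SLNB against ALND as the reference standard.
        tp: node-positive on both; fn: SLNB-negative but ALND-positive;
        tn: node-negative on both; fp is essentially zero by design."""
        sensitivity = tp / (tp + fn)
        fnr = fn / (tp + fn)                   # false negative rate
        npv = tn / (tn + fn)                   # negative predictive value
        accuracy = (tp + tn) / (tp + fn + tn + fp)
        return sensitivity, fnr, npv, accuracy

    # Hypothetical pooled counts across studies.
    sens, fnr, npv, acc = slnb_accuracy(tp=180, fn=11, tn=900)
    print(f"Sensitivity {sens:.0%}, FNR {fnr:.0%}, NPV {npv:.0%}, accuracy {acc:.0%}")
    ```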

  7. At risk or not at risk? A meta-analysis of the prognostic accuracy of psychometric interviews for psychosis prediction

    PubMed Central

    Fusar-Poli, Paolo; Cappucciati, Marco; Rutigliano, Grazia; Schultze-Lutter, Frauke; Bonoldi, Ilaria; Borgwardt, Stefan; Riecher-Rössler, Anita; Addington, Jean; Perkins, Diana; Woods, Scott W; McGlashan, Thomas H; Lee, Jimmy; Klosterkötter, Joachim; Yung, Alison R; McGuire, Philip

    2015-01-01

    An accurate detection of individuals at clinical high risk (CHR) for psychosis is a prerequisite for effective preventive interventions. Several psychometric interviews are available, but their prognostic accuracy is unknown. We conducted a prognostic accuracy meta-analysis of psychometric interviews used to examine referrals to high risk services. The index test was an established CHR psychometric instrument used to identify subjects with and without CHR (CHR+ and CHR−). The reference index was psychosis onset over time in both CHR+ and CHR− subjects. Data were analyzed with MIDAS (STATA13). Area under the curve (AUC), summary receiver operating characteristic curves, quality assessment, likelihood ratios, Fagan's nomogram and probability modified plots were computed. Eleven independent studies were included, with a total of 2,519 help-seeking, predominantly adult subjects (CHR+: N=1,359; CHR−: N=1,160) referred to high risk services. The mean follow-up duration was 38 months. The AUC was excellent (0.90; 95% CI: 0.87-0.93), and comparable to other tests in preventive medicine, suggesting clinical utility in subjects referred to high risk services. Meta-regression analyses revealed an effect for exposure to antipsychotics and no effects for type of instrument, age, gender, follow-up time, sample size, quality assessment, or the proportion of CHR+ subjects in the total sample. Fagan's nomogram indicated a low positive predictive value (5.74%) in the general non-help-seeking population. Despite the clear need to further improve prediction of psychosis, these findings support the use of psychometric prognostic interviews for CHR as clinical tools for an indicated prevention in subjects seeking help at high risk services worldwide. PMID:26407788
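
    A minimal sketch of the calculation behind Fagan's nomogram (post-test probability from pre-test probability and a likelihood ratio), which underlies the contrast above between help-seeking samples and the general population; the likelihood ratio and pre-test risks used here are hypothetical:

    ```python
    def post_test_probability(pre_test_prob, likelihood_ratio):
        """Post-test probability from pre-test probability and a likelihood ratio
        (the calculation behind Fagan's nomogram)."""
        pre_odds = pre_test_prob / (1 - pre_test_prob)
        post_odds = pre_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # Hypothetical numbers: a positive CHR interview (LR+ ~ 5) applied to a
    # help-seeking sample (pre-test risk ~ 15%) versus the general population (~ 1%).
    for prevalence in (0.15, 0.01):
        ppv = post_test_probability(prevalence, 5.0)
        print(f"Pre-test {prevalence:.0%} -> post-test {ppv:.1%}")
    ```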

  8. Methods to increase reproducibility in differential gene expression via meta-analysis

    PubMed Central

    Sweeney, Timothy E.; Haynes, Winston A.; Vallania, Francesco; Ioannidis, John P.; Khatri, Purvesh

    2017-01-01

    Findings from clinical and biological studies are often not reproducible when tested in independent cohorts. Due to the testing of a large number of hypotheses and relatively small sample sizes, results from whole-genome expression studies in particular are often not reproducible. Compared to single-study analysis, gene expression meta-analysis can improve reproducibility by integrating data from multiple studies. However, there are multiple choices in designing and carrying out a meta-analysis, and clear guidelines on best practices are scarce. Here, we hypothesized that studying subsets of very large meta-analyses would allow for systematic identification of best practices to improve reproducibility. We therefore constructed three very large gene expression meta-analyses from clinical samples, and then examined meta-analyses of subsets of the datasets (all combinations of datasets with up to N/2 samples and K/2 datasets) compared to a ‘silver standard’ of differentially expressed genes found in the entire cohort. We tested three random-effects meta-analysis models using this procedure. We found relatively greater reproducibility when more stringent effect size thresholds were combined with relaxed significance thresholds; relatively lower reproducibility when imposing extraneous constraints on residual heterogeneity; and an underestimation of the actual false positive rate by Benjamini–Hochberg correction. In addition, multivariate regression showed that the accuracy of a meta-analysis increased significantly with more included datasets even when controlling for sample size. PMID:27634930
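
    The record flags the behaviour of Benjamini–Hochberg correction; for readers unfamiliar with the procedure, a from-scratch sketch of the BH step-up rule follows (statsmodels' multipletests with method='fdr_bh' offers an equivalent, production-ready implementation).

    ```python
    import numpy as np

    def benjamini_hochberg(pvals, alpha=0.05):
        """Return a boolean mask of p-values rejected at FDR level alpha (BH step-up)."""
        p = np.asarray(pvals)
        m = len(p)
        order = np.argsort(p)
        ranked = p[order]
        thresholds = alpha * (np.arange(1, m + 1) / m)
        below = ranked <= thresholds
        rejected = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below)[0])   # largest i with p_(i) <= (i/m) * alpha
            rejected[order[: k + 1]] = True
        return rejected

    print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.74]))
    ```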

  9. A comparative study of submicron particle sizing platforms: accuracy, precision and resolution analysis of polydisperse particle size distributions.

    PubMed

    Anderson, Will; Kozak, Darby; Coleman, Victoria A; Jämting, Åsa K; Trau, Matt

    2013-09-01

    The particle size distribution (PSD) of a polydisperse or multimodal system can often be difficult to obtain due to the inherent limitations in established measurement techniques. For this reason, the resolution, accuracy and precision of three new and one established, commercially available and fundamentally different particle size analysis platforms were compared by measuring both individual and a mixed sample of monodisperse, sub-micron (220, 330, and 410 nm - nominal modal size) polystyrene particles. The platforms compared were the qNano Tunable Resistive Pulse Sensor, the Nanosight LM10 Particle Tracking Analysis System, the CPS Instruments UHR24000 Disc Centrifuge, and the routinely used Malvern Zetasizer Nano ZS Dynamic Light Scattering system. All measurements were subjected to a peak detection algorithm so that the detected particle populations could be compared to 'reference' Transmission Electron Microscope measurements of the individual particle samples. Only the Tunable Resistive Pulse Sensor and Disc Centrifuge platforms provided the resolution required to resolve all three particle populations present in the mixed 'multimodal' particle sample. In contrast, the light scattering based Particle Tracking Analysis and Dynamic Light Scattering platforms were only able to detect a single population of particles corresponding to either the largest (410 nm) or smallest (220 nm) particles in the multimodal sample, respectively. When the particle sets were measured separately (monomodal) each platform was able to resolve and accurately obtain a mean particle size within 10% of the Transmission Electron Microscope reference values. However, the broadness of the PSD measured in the monomodal samples deviated greatly, with coefficients of variation being ~2-6-fold larger than the TEM measurements across all four platforms. The large variation in the PSDs obtained from these four, fundamentally different platforms, indicates that great care must still be taken in

  10. MicroRNA-155 Hallmarks Promising Accuracy for the Diagnosis of Various Carcinomas: Results from a Meta-Analysis

    PubMed Central

    Wu, Chuancheng; Liu, Qiuyan; Liu, Baoying

    2015-01-01

    Background. Recent studies have shown that microRNAs (miRNAs) have diagnostic value in various cancers. This meta-analysis seeks to summarize the global diagnostic role of miR-155 in patients with a variety of carcinomas. Methods. Eligible studies were retrieved by searching the online databases, and the bivariate meta-analysis model was employed to generate the summary receiver operator characteristic (SROC) curve. Results. A total of 17 studies dealing with various carcinomas were finally included. The results showed that single miR-155 testing allowed for the discrimination between cancer patients and healthy donors with a sensitivity of 0.82 (95% CI: 0.73–0.88) and specificity of 0.77 (95% CI: 0.70–0.83), corresponding to an area under curve (AUC) of 0.85, while a miRNA panel that included miR-155 yielded a sensitivity of 0.76 (95% CI: 0.68–0.82) and specificity of 0.82 (95% CI: 0.77–0.86) in diagnosing cancers. The subgroup analysis showed that the serum-based miR-155 test yielded higher accuracy than the plasma-based assay (the AUC, sensitivity, and specificity were, respectively, 0.87 versus 0.73, 0.78 versus 0.74, and 0.77 versus 0.70). Conclusions. Our data suggest that single miR-155 profiling has a potential to be used as a screening test for various carcinomas, and parallel testing of miR-155 confers an improved specificity compared to single miR-155 analysis. PMID:25918453

  11. Multilateral analysis of increasing collective dose and new ALARA programme.

    PubMed

    Oumi, Tadashi; Morii, Yasuki; Imai, Toshirou

    2011-07-01

    JAPC (The Japan Atomic Power Company) is the only electric power company that operates different types of nuclear reactors in Japan; it operates two BWRs (boiling water reactors), one pressurised water reactor and one gas cooled reactor. JAPC has been conducting various activities aimed at reducing the radiation dose received by workers for over 45 y. Recently, the collective dose resulting from periodic maintenance has increased at each plant because of the replacement of large equipment and the unexpected extension of the outage period. In particular, the collective dose at Tokai-2 is one of the highest among Japanese BWR plants (1), owing to the replacement and strengthening of equipment to meet earthquake-proof requirements. In this study, the authors performed a multilateral analysis of the unacceptably large collective dose and devised a new ALARA programme that includes a 3D dose prediction map and the development of machines to assist workers.

  12. Earthworms increase plant production: a meta-analysis

    PubMed Central

    van Groenigen, Jan Willem; Lubbers, Ingrid M.; Vos, Hannah M. J.; Brown, George G.; De Deyn, Gerlinde B.; van Groenigen, Kees Jan

    2014-01-01

    To meet the challenge of feeding a growing world population with minimal environmental impact, we need comprehensive and quantitative knowledge of ecological factors affecting crop production. Earthworms are among the most important soil dwelling invertebrates. Their activity affects both biotic and abiotic soil properties, in turn affecting plant growth. Yet, studies on the effect of earthworm presence on crop yields have not been quantitatively synthesized. Here we show, using meta-analysis, that on average earthworm presence in agroecosystems leads to a 25% increase in crop yield and a 23% increase in aboveground biomass. The magnitude of these effects depends on presence of crop residue, earthworm density and type and rate of fertilization. The positive effects of earthworms become larger when more residue is returned to the soil, but disappear when soil nitrogen availability is high. This suggests that earthworms stimulate plant growth predominantly through releasing nitrogen locked away in residue and soil organic matter. Our results therefore imply that earthworms are of crucial importance to decrease the yield gap of farmers who can't -or won't- use nitrogen fertilizer. PMID:25219785

  13. Earthworms increase plant production: a meta-analysis.

    PubMed

    van Groenigen, Jan Willem; Lubbers, Ingrid M; Vos, Hannah M J; Brown, George G; De Deyn, Gerlinde B; van Groenigen, Kees Jan

    2014-09-15

    To meet the challenge of feeding a growing world population with minimal environmental impact, we need comprehensive and quantitative knowledge of ecological factors affecting crop production. Earthworms are among the most important soil dwelling invertebrates. Their activity affects both biotic and abiotic soil properties, in turn affecting plant growth. Yet, studies on the effect of earthworm presence on crop yields have not been quantitatively synthesized. Here we show, using meta-analysis, that on average earthworm presence in agroecosystems leads to a 25% increase in crop yield and a 23% increase in aboveground biomass. The magnitude of these effects depends on presence of crop residue, earthworm density and type and rate of fertilization. The positive effects of earthworms become larger when more residue is returned to the soil, but disappear when soil nitrogen availability is high. This suggests that earthworms stimulate plant growth predominantly through releasing nitrogen locked away in residue and soil organic matter. Our results therefore imply that earthworms are of crucial importance to decrease the yield gap of farmers who can't -or won't- use nitrogen fertilizer.

  14. Are the Conventional Commercial Yeast Identification Methods Still Helpful in the Era of New Clinical Microbiology Diagnostics? A Meta-Analysis of Their Accuracy.

    PubMed

    Posteraro, Brunella; Efremov, Ljupcho; Leoncini, Emanuele; Amore, Rosarita; Posteraro, Patrizia; Ricciardi, Walter; Sanguinetti, Maurizio

    2015-08-01

    Accurate identification of pathogenic species is important for early appropriate patient management, but growing diversity of infectious species/strains makes the identification of clinical yeasts increasingly difficult. Among conventional methods that are commercially available, the API ID32C, AuxaColor, and Vitek 2 systems are currently the most used systems in routine clinical microbiology. We performed a systematic review and meta-analysis to estimate and to compare the accuracy of the three systems, in order to assess whether they are still of value for the species-level identification of medically relevant yeasts. After adopting rigorous selection criteria, we included 26 published studies involving Candida and non-Candida yeasts that were tested with the API ID32C (674 isolates), AuxaColor (1,740 isolates), and Vitek 2 (2,853 isolates) systems. The random-effects pooled identification ratios at the species level were 0.89 (95% confidence interval [CI], 0.80 to 0.95) for the API ID32C system, 0.89 (95% CI, 0.83 to 0.93) for the AuxaColor system, and 0.93 (95% CI, 0.89 to 0.96) for the Vitek 2 system (P for heterogeneity, 0.255). Overall, the accuracy of studies using phenotypic analysis-based comparison methods was comparable to that of studies using molecular analysis-based comparison methods. Subanalysis of studies conducted on Candida yeasts showed that the Vitek 2 system was significantly more accurate (pooled ratio, 0.94 [95% CI, 0.85 to 0.99]) than the API ID32C system (pooled ratio, 0.84 [95% CI, 0.61 to 0.99]) and the AuxaColor system (pooled ratio, 0.76 [95% CI, 0.67 to 0.84]) with respect to uncommon species (P for heterogeneity, <0.05). Subanalysis of studies conducted on non-Candida yeasts (i.e., Cryptococcus, Rhodotorula, Saccharomyces, and Trichosporon) revealed pooled identification accuracies of ≥98% for the Vitek 2, API ID32C (excluding Cryptococcus), and AuxaColor (only Rhodotorula) systems, with significant low or null levels of

  15. Are the Conventional Commercial Yeast Identification Methods Still Helpful in the Era of New Clinical Microbiology Diagnostics? A Meta-Analysis of Their Accuracy

    PubMed Central

    Efremov, Ljupcho; Leoncini, Emanuele; Amore, Rosarita; Posteraro, Patrizia; Ricciardi, Walter

    2015-01-01

    Accurate identification of pathogenic species is important for early appropriate patient management, but growing diversity of infectious species/strains makes the identification of clinical yeasts increasingly difficult. Among conventional methods that are commercially available, the API ID32C, AuxaColor, and Vitek 2 systems are currently the most used systems in routine clinical microbiology. We performed a systematic review and meta-analysis to estimate and to compare the accuracy of the three systems, in order to assess whether they are still of value for the species-level identification of medically relevant yeasts. After adopting rigorous selection criteria, we included 26 published studies involving Candida and non-Candida yeasts that were tested with the API ID32C (674 isolates), AuxaColor (1,740 isolates), and Vitek 2 (2,853 isolates) systems. The random-effects pooled identification ratios at the species level were 0.89 (95% confidence interval [CI], 0.80 to 0.95) for the API ID32C system, 0.89 (95% CI, 0.83 to 0.93) for the AuxaColor system, and 0.93 (95% CI, 0.89 to 0.96) for the Vitek 2 system (P for heterogeneity, 0.255). Overall, the accuracy of studies using phenotypic analysis-based comparison methods was comparable to that of studies using molecular analysis-based comparison methods. Subanalysis of studies conducted on Candida yeasts showed that the Vitek 2 system was significantly more accurate (pooled ratio, 0.94 [95% CI, 0.85 to 0.99]) than the API ID32C system (pooled ratio, 0.84 [95% CI, 0.61 to 0.99]) and the AuxaColor system (pooled ratio, 0.76 [95% CI, 0.67 to 0.84]) with respect to uncommon species (P for heterogeneity, <0.05). Subanalysis of studies conducted on non-Candida yeasts (i.e., Cryptococcus, Rhodotorula, Saccharomyces, and Trichosporon) revealed pooled identification accuracies of ≥98% for the Vitek 2, API ID32C (excluding Cryptococcus), and AuxaColor (only Rhodotorula) systems, with significant low or null levels of

  16. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5 nm, it becomes crucial to also include systematic error contributions, which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1 nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy of ~10 nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by a recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  17. Non-linear partial least square regression increases the estimation accuracy of grass nitrogen and phosphorus using in situ hyperspectral and environmental data

    NASA Astrophysics Data System (ADS)

    Ramoelo, A.; Skidmore, A. K.; Cho, M. A.; Mathieu, R.; Heitkönig, I. M. A.; Dudeni-Tlhone, N.; Schlerf, M.; Prins, H. H. T.

    2013-08-01

    Grass nitrogen (N) and phosphorus (P) concentrations are direct indicators of rangeland quality and provide imperative information for sound management of wildlife and livestock. It is challenging to estimate grass N and P concentrations using remote sensing in the savanna ecosystems. These areas are diverse and heterogeneous in soil and plant moisture, soil nutrients, grazing pressures, and human activities. The objective of the study is to test the performance of non-linear partial least squares regression (PLSR) for predicting grass N and P concentrations through integrating in situ hyperspectral remote sensing and environmental variables (climatic, edaphic and topographic). Data were collected along a land use gradient in the greater Kruger National Park region. The data consisted of: (i) in situ-measured hyperspectral spectra, (ii) environmental variables and measured grass N and P concentrations. The hyperspectral variables included published starch, N and protein spectral absorption features, red edge position, narrow-band indices such as simple ratio (SR) and normalized difference vegetation index (NDVI). The results of the non-linear PLSR were compared to those of conventional linear PLSR. Using non-linear PLSR, integrating in situ hyperspectral and environmental variables yielded the highest grass N and P estimation accuracy (R2 = 0.81, root mean square error (RMSE) = 0.08, and R2 = 0.80, RMSE = 0.03, respectively) as compared to using remote sensing variables only, and conventional PLSR. The study demonstrates the importance of an integrated modeling approach for estimating grass quality which is a crucial effort towards effective management and planning of protected and communal savanna ecosystems.
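
    For readers who want a concrete starting point, the sketch below cross-validates an ordinary (linear) PLSR model with scikit-learn on synthetic data; it illustrates the modelling step only and does not reproduce the non-linear PLSR variant or the field dataset used in the study.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(0)
    # Hypothetical predictor matrix: reflectance bands plus environmental covariates
    X = rng.normal(size=(120, 50))
    y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=120)  # stand-in for grass N

    pls = PLSRegression(n_components=5)
    y_hat = cross_val_predict(pls, X, y, cv=10).ravel()
    rmse = mean_squared_error(y, y_hat) ** 0.5
    print(f"R2 = {r2_score(y, y_hat):.2f}, RMSE = {rmse:.2f}")
    ```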

  18. SU-F-BRF-14: Increasing the Accuracy of Dose Calculation On Cone-Beam Imaging Using Deformable Image Registration in the Case of Prostate Translation

    SciTech Connect

    Fillion, O; Gingras, L; Archambault, L

    2014-06-15

    Purpose: Artifacts can reduce the quality of dose re-calculations on CBCT scans during a treatment. The aim of this project is to correct the CBCT images in order to allow for more accurate dose calculations in the case of a translation of the tumor in prostate cancer. Methods: Our approach is to develop strategies based on deformable image registration algorithms using the elastix software (Klein et al., 2010) to register the treatment planning CT onto a daily CBCT scan taken during treatment. Sets of images are provided by a 3D deformable phantom and comprise two CT and two CBCT scans: one of each with the reference anatomy and the others with known deformations (i.e. translations of the prostate). The reference CT is registered onto the deformed CBCT and the deformed CT serves as the control for dose calculation accuracy. The planned treatment used for the evaluation of dose calculation is a 2-Gy fraction prescribed at the location of the reference prostate and assigned to 7 rectangular fields. Results: For a realistic 0.5-cm translation of the prostate, the relative dose discrepancy between the CBCT and the CT control scan at the prostate's centroid is 8.9 ± 0.8 % while the dose discrepancy between the registered CT and the control scan lessens to −2.4 ± 0.8 %. For a 2-cm translation, clinical indices like the V90 and the D100 are more accurate by 0.7 ± 0.3 % and 8.0 ± 0.5 cGy respectively when using registered CT than when using CBCT for dose calculation. Conclusion: The results show that this strategy gives doses in agreement within a few percent with those from calculations on actual CT scans. In the future, various deformations of the phantom anatomy will allow a thorough characterization of the registration strategies needed for more complex anatomies.
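
    The authors used the elastix software for the CT-to-CBCT registration; the sketch below shows a comparable deformable (B-spline) registration with SimpleITK instead, since elastix's own configuration format is not reproduced here. File names, grid size, metric and optimizer settings are all assumptions for illustration, not the authors' setup.

    ```python
    import SimpleITK as sitk

    # Hypothetical file names; the paper's data come from a deformable phantom.
    fixed = sitk.ReadImage("daily_cbct.nii.gz", sitk.sitkFloat32)
    moving = sitk.ReadImage("planning_ct.nii.gz", sitk.sitkFloat32)

    # Coarse B-spline control-point grid defined over the fixed image domain.
    tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
    reg.SetInitialTransform(tx, inPlace=True)

    out_tx = reg.Execute(fixed, moving)
    # Warp the planning CT into the CBCT frame for subsequent dose calculation.
    warped_ct = sitk.Resample(moving, fixed, out_tx, sitk.sitkLinear, -1000.0)
    sitk.WriteImage(warped_ct, "ct_registered_to_cbct.nii.gz")
    ```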

  19. Pooled analysis of the accuracy of five cervical cancer screening tests assessed in eleven studies in Africa and India.

    PubMed

    Arbyn, Marc; Sankaranarayanan, Rengaswamy; Muwonge, Richard; Keita, Namory; Dolo, Amadou; Mbalawa, Charles Gombe; Nouhou, Hassan; Sakande, Boblewende; Wesley, Ramani; Somanathan, Thara; Sharma, Anjali; Shastri, Surendra; Basu, Parthasarathy

    2008-07-01

    Cervical cancer is the main cancer among women in sub-Saharan Africa, India and other parts of the developing world. Evaluation of screening performance of effective, feasible and affordable early detection and management methods is a public health priority. Five screening methods, naked eye visual inspection of the cervix uteri after application of diluted acetic acid (VIA), or Lugol's iodine (VILI) or with a magnifying device (VIAM), the Pap smear and human papillomavirus testing with the high-risk probe of the Hybrid Capture-2 assay (HC2), were evaluated in 11 studies in India and Africa. More than 58,000 women, aged 25-64 years, were tested with 2-5 screening tests and outcome verification was done on all women independent of the screen test results. The outcome was presence or absence of cervical intraepithelial neoplasia (CIN) of different degrees or invasive cervical cancer. Verification was based on colposcopy and histological interpretation of colposcopy-directed biopsies. Negative colposcopy was accepted as a truly negative outcome. VIA showed a sensitivity of 79% (95% CI 73-85%) and 83% (95% CI 77-89%), and a specificity of 85% (95% CI 81-89%) and 84% (95% CI 80-88%) for the outcomes CIN2+ or CIN3+, respectively. VILI was on average 10% more sensitive and equally specific. VIAM showed similar results as VIA. The Pap smear showed lowest sensitivity, even at the lowest cutoff of atypical squamous cells of undetermined significance (57%; 95% CI 38-76%) for CIN2+ but the specificity was rather high (93%; 95% CI 89-97%). The HC2-assay showed a sensitivity for CIN2+ of 62% (95% CI 56-68%) and a specificity of 94% (95% CI 92-95%). Substantial interstudy variation was observed in the accuracy of the visual screening methods. Accuracy of visual methods and cytology increased over time, whereas performance of HC2 was constant. Results of visual tests and colposcopy were highly correlated. This study was the largest ever done that evaluates the cross
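
    All of the accuracy measures pooled in this record reduce, per study, to a 2x2 table of screening result versus reference standard. A minimal sketch with made-up counts (not data from the pooled studies):

    ```python
    def test_accuracy(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from a 2x2 screening table."""
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        return sens, spec, ppv, npv

    # Hypothetical counts for one screening test versus the colposcopy/histology reference
    sens, spec, ppv, npv = test_accuracy(tp=158, fp=1490, fn=42, tn=8310)
    print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.3f} NPV={npv:.3f}")
    ```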

  20. An analysis of the stability and transport of carbon dioxide on Mars and Iapetus: Increasing accuracy via experiments and photometry

    NASA Astrophysics Data System (ADS)

    Blackburn, David Garrison

    Volatile transport of carbon dioxide is most relevant on two planetary bodies in our solar system: Mars and Iapetus. We experimentally measured the sublimation rate of CO2 ice under simulated martian conditions and developed a model based on our experimental results. We experimentally verified that solar irradiance is the primary control for the sublimation of CO2 ice on the martian poles with the amount of radiation striking the surface being controlled by variations in the optical depth, ensuring the formation and sublimation of the seasonal cap. Our model, supported by comparison of MGS-MOC and MRO-HiRISE images, shows that ∼0.4 m is currently being lost from the south perennial cap per martian year. In order to build a similar model for Iapetus, one key parameter was needed: the bolometric Bond albedo. We used photometry of Cassini VIMS observations of Iapetus to produce the first phase integrals calculated directly from solar phase curves of Iapetus for the leading hemisphere and to estimate the phase integrals for the trailing hemisphere. Our phase integrals, which are lower than previous results, have profound implications for the analyses of the energy balance and volatile transport on this icy satellite. We also utilized Cassini VIMS and ISS and Voyager ISS observations of Iapetus to produce the first bolometric Bond albedo map of Iapetus; the average albedo values for the leading and trailing hemispheres are 0.25 ± 0.03 and 0.05 ± 0.01 respectively. On Iapetus, which has no detectable atmosphere, any carbon dioxide sublimating from the dark material, where it was discovered by reflectance spectroscopy, would either escape the body or migrate on ballistic trajectories to a possible polar cold trap. However, through proof by contradiction, we show that if dry ice is the source of the detected signal in the dark material, it produces an impossible scenario where an extensive polar cap is produced along with incorrect temperatures for the dark material at equatorial latitudes. After ruling out surface dry ice as the source, we set upper limits on the amount of CO2 transport that can occur on Iapetus without forming a polar cap.
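
    The bolometric Bond albedo discussed above is conventionally the product of the geometric albedo p and the phase integral q, with q obtained by integrating the normalized phase curve: q = 2 ∫ Φ(α) sin(α) dα. The sketch below evaluates that integral numerically for a made-up phase function; the geometric albedo value is hypothetical, not one of the paper's results.

    ```python
    import numpy as np

    def phase_integral(phase_angles_deg, phase_function):
        """q = 2 * integral of Phi(alpha) * sin(alpha) d(alpha) over 0..180 degrees."""
        a = np.radians(np.asarray(phase_angles_deg, dtype=float))
        f = np.asarray(phase_function, dtype=float) * np.sin(a)
        # trapezoidal rule, written out to avoid depending on a specific NumPy version
        return 2.0 * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a)))

    alpha = np.linspace(0.0, 180.0, 361)
    phi = np.exp(-alpha / 60.0)   # made-up, monotonically fading phase curve (Phi(0) = 1)
    q = phase_integral(alpha, phi)
    p_geometric = 0.05            # hypothetical geometric albedo
    print(f"q = {q:.3f}, Bond albedo = p*q = {p_geometric * q:.3f}")
    ```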

  1. Finite element analysis of transonic flows in cascades: Importance of computational grids in improving accuracy and convergence

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Akay, H. U.

    1981-01-01

    The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.

  2. Diagnostic test accuracy of D-dimer for acute aortic syndrome: systematic review and meta-analysis of 22 studies with 5000 subjects

    PubMed Central

    Watanabe, Hiroki; Horita, Nobuyuki; Shibata, Yuji; Minegishi, Shintaro; Ota, Erika; Kaneko, Takeshi

    2016-01-01

    Diagnostic test accuracy of D-dimer for acute aortic dissection (AAD) has not been evaluated by meta-analysis with the bivariate model methodology. Four databases were electronically searched. We included both case-control and cohort studies that could provide sufficient data concerning both sensitivity and specificity of D-dimer for AAD. Non-English language articles and conference abstracts were allowed. Intramural hematoma and penetrating aortic ulcer were regarded as AAD. Based on 22 eligible articles consisting of 1140 AAD subjects and 3860 non-AAD subjects, the diagnostic odds ratio was 28.5 (95% CI 17.6–46.3, I2 = 17.4%) and the area under curve was 0.946 (95% CI 0.903–0.994). Based on 833 AAD subjects and 1994 non-AAD subjects constituting 12 studies that used the cutoff value of 500 ng/ml, the sensitivity was 0.952 (95% CI 0.901–0.978), the specificity was 0.604 (95% CI 0.485–0.712), the positive likelihood ratio was 2.4 (95% CI 1.8–3.3), and the negative likelihood ratio was 0.079 (95% CI 0.036–0.172). Sensitivity analysis using data from three high-quality studies almost replicated these results. In conclusion, D-dimer has very good overall accuracy. D-dimer <500 ng/ml largely decreases the possibility of AAD. D-dimer >500 ng/ml moderately increases the possibility of AAD. PMID:27230962
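
    The likelihood ratios quoted for the 500 ng/ml cutoff follow directly from the pooled sensitivity and specificity; a minimal sketch is below. Note that the diagnostic odds ratio printed here is the simple point-estimate ratio LR+/LR-, which will not exactly match the pooled DOR from the bivariate model over all 22 studies.

    ```python
    def likelihood_ratios(sensitivity, specificity):
        """Positive/negative likelihood ratios and the diagnostic odds ratio."""
        lr_pos = sensitivity / (1.0 - specificity)
        lr_neg = (1.0 - sensitivity) / specificity
        return lr_pos, lr_neg, lr_pos / lr_neg

    # Point estimates quoted in the record for the 500 ng/ml cutoff
    lr_pos, lr_neg, dor = likelihood_ratios(0.952, 0.604)
    print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.3f}, DOR = {dor:.0f}")
    ```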

  3. Enhancing and evaluating diagnostic accuracy.

    PubMed

    Swets, J A; Getty, D J; Pickett, R M; D'Orsi, C J; Seltzer, S E; McNeil, B J

    1991-01-01

    Techniques that may enhance diagnostic accuracy in clinical settings were tested in the context of mammography. Statistical information about the relevant features among those visible in a mammogram and about their relative importances in the diagnosis of breast cancer was the basis of two decision aids for radiologists: a checklist that guides the radiologist in assigning a scale value to each significant feature of the images of a particular case, and a computer program that merges those scale values optimally to estimate a probability of malignancy. A test set of approximately 150 proven cases (including normals and benign and malignant lesions) was interpreted by six radiologists, first in their usual manner and later with the decision aids. The enhancing effect of these feature-analytic techniques was analyzed across subsets of cases that were restricted progressively to more and more difficult cases, where difficulty was defined in terms of the radiologists' judgements in the standard reading condition. Accuracy in both standard and enhanced conditions decreased regularly and substantially as case difficulty increased, but differentially, such that the enhancement effect grew regularly and substantially. For the most difficult case sets, the observed increases in accuracy translated into an increase of about 0.15 in sensitivity (true-positive proportion) for a selected specificity (true-negative proportion) of 0.85 or a similar increase in specificity for a selected sensitivity of 0.85. That measured accuracy can depend on case-set difficulty to different degrees for two diagnostic approaches has general implications for evaluation in clinical medicine. Comparative, as well as absolute, assessments of diagnostic performances--for example, of alternative imaging techniques--may be distorted by inadequate treatments of this experimental variable. Subset analysis, as defined and illustrated here, can be useful in alleviating the problem.

  4. Statistical Analysis of Atmospheric Forecast Model Accuracy - A Focus on Multiple Atmospheric Variables and Location-Based Analysis

    DTIC Science & Technology

    2014-04-01

    Surface temperature (K) was used as the primary variable for this study. The analysis separated the observation station data into three categories: Valleys, Plains, and Mountains. [The remainder of this record is figure-caption residue; the recoverable captions describe histogram plots comparing the 1 km and 3 km model output with observation data for the Mountain category, with surface temperature (K) on the x-axis.]

  5. A comparative analysis of the accuracy of implant master casts fabricated from two different transfer impression techniques

    PubMed Central

    Patil, Rupali; Kadam, Pankaj; Oswal, Chetan; Patil, Seema; Jajoo, Shweta; Gachake, Arati

    2016-01-01

    Aim: This study evaluated and compared two impression techniques in terms of their dimensional accuracies to reproduce implant positions on working casts. Materials and Methods: A master model was designed to simulate a clinical situation. Impressions were made using four techniques: (1) stock open tray (SOT) technique; (2) stock closed tray (SCT) technique; (3) custom open tray (COT) technique; and (4) custom closed tray (CCT) technique. Reference points on the hexagonal silhouette of the implant on the master model and on the analogs of the obtained master casts were compared after using the four impression techniques. Measurements were made using an optical microscope, capable of recording under 50x magnification. The means and standard deviations of all the groups and subgroups were calculated and statistically analyzed using analysis of variance (ANOVA) and Tukey's test. Results: The open tray impressions showed significantly less variation from the master model and all the techniques studied were comparable. Conclusion: All the techniques studied showed some distortion. COT showed the most accurate results of all the techniques. PMID:27114954

  6. Use of multivariate analysis to improve the accuracy of radionuclide angiography with stress in detecting coronary artery disease in men

    SciTech Connect

    Greenberg, P.S.; Bible, M.; Ellestad, M.H.; Berge, R.; Johnson, K.; Hayes, M.

    1983-01-01

    A multivariate analysis (MVA) system was derived retrospectively from a population of 76 males with coronary artery disease and 18 control subjects. Posterior probabilities were then derived from such a system prospectively in a new male population of 11 subjects with normal coronary arteries and hemodynamics and 63 patients with coronary artery disease. The sensitivity was 84%, compared to 71% for the change in ejection fraction (ΔEF) ≥ 5 criterion (p < 0.01); the specificity was 91% compared to 73% for the ΔEF ≥ 5 criterion (p > 0.05); and the correct classification rate was 85% compared to 72% for the ΔEF ≥ 5 criterion (p < 0.01). The significant variables were: change in EF with exercise, percent maximal heart rate, change in end-diastolic volume (ΔEDV) with exercise, change in R wave, and exercise duration. Application of the multivariate approach to radionuclide imaging with stress, including both exercise and nuclear parameters, significantly improved the diagnostic accuracy of the test and allowed for a probability statement concerning the likelihood of disease.

  7. Accuracy of Self-reported Hypertension, Diabetes, and Hypercholesterolemia: Analysis of a Representative Sample of Korean Older Adults

    PubMed Central

    Chun, Heeran; Kim, Il-Ho; Min, Kyung-Duk

    2015-01-01

    Objectives This study assessed the accuracy of self-reported hypertension, diabetes, and hypercholesterolemia among Korean older adults. Methods Using data from the fourth Korean National Health Examination and Nutrition Survey (KNHANES IV, 2007–2009), we selected 7,270 individuals aged 50 years and older who participated in both a health examination and a health interview survey. Self-reported prevalence of hypertension (HTN), diabetes mellitus (DM), and hypercholesterolemia was compared with measured data (arterial systolic/diastolic blood pressure, fasting glucose, and total cholesterol). Results Agreement between self-reported and measured data was only moderate for hypercholesterolemia (κ, 0.48), even though it was high for HTN (κ, 0.72) and DM (κ, 0.82). Sensitivity was low in hypercholesterolemia (46.7%), but high in HTN and DM (73% and 79.3%, respectively). Multiple analysis shows that predictors for sensitivity differed by disease. People with less education were more likely to exhibit lower sensitivity to HTN and hypercholesterolemia, and people living in rural areas were less sensitive to DM and hypercholesterolemia. Conclusion Caution is needed in interpreting the results of community studies using self-reported data on chronic diseases, especially hypercholesterolemia, among adults aged 50 years and older. PMID:27169009
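
    The agreement figures above are Cohen's kappa values. A minimal sketch of the same computation with scikit-learn, using toy self-reported and measured disease labels rather than KNHANES data:

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical self-reported vs. measured disease status (1 = present, 0 = absent)
    self_report = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
    measured    = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0]
    print(round(cohen_kappa_score(self_report, measured), 2))  # ~0.66 for these toy labels
    ```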

  8. Estimating subsurface water volumes and transit times in Hokkaido river catchments, Japan, using high-accuracy tritium analysis

    NASA Astrophysics Data System (ADS)

    Gusyev, Maksym; Yamazaki, Yusuke; Morgenstern, Uwe; Stewart, Mike; Kashiwaya, Kazuhisa; Hirai, Yasuyuki; Kuribayashi, Daisuke; Sawano, Hisaya

    2015-04-01

    The goal of this study is to estimate subsurface water transit times and volumes in headwater catchments of Hokkaido, Japan, using the New Zealand high-accuracy tritium analysis technique. Transit time provides insights into the subsurface water storage and therefore provides a robust and quick approach to quantifying the subsurface groundwater volume. Our method is based on tritium measurements in river water. Tritium is a component of meteoric water, decays with a half-life of 12.32 years, and is inert in the subsurface after the water enters the groundwater system. Therefore, tritium is ideally suited for characterization of the catchment's responses and can provide information on mean water transit times up to 200 years. Only in recent years has it become possible to use tritium for dating of stream and river water, due to the fading impact of the bomb-tritium from thermo-nuclear weapons testing, and due to improved measurement accuracy for the extremely low natural tritium concentrations. Transit time of the water discharge is one of the most crucial parameters for understanding the response of catchments and estimating subsurface water volume. While many tritium transit time studies have been conducted in New Zealand, only a limited number of tritium studies have been conducted in Japan. In addition, the meteorological, orographic and geological conditions of Hokkaido Island are similar to those in parts of New Zealand, allowing for comparison between these regions. In 2014, three field trips were conducted in Hokkaido in June, July and October to sample river water at river gauging stations operated by the Ministry of Land, Infrastructure, Transport and Tourism (MLIT). These stations have altitudes between 36 m and 860 m MSL and drainage areas between 45 and 377 km2. Each sampled point is located upstream of MLIT dams, with hourly measurements of precipitation and river water levels enabling us to distinguish between the snow melt and baseflow contributions
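
    As a simplest-case illustration of why tritium constrains transit time: with a 12.32-year half-life and the (strong) assumption of piston flow with no mixing, an apparent age follows directly from radioactive decay. Real transit-time estimates fit lumped-parameter mixing models to tritium time series, so the sketch below, with hypothetical concentrations, is only the limiting case.

    ```python
    import math

    HALF_LIFE_YEARS = 12.32  # tritium half-life

    def piston_flow_age(initial_tu, measured_tu):
        """Apparent age (years) assuming pure radioactive decay and no mixing."""
        decay_const = math.log(2) / HALF_LIFE_YEARS
        return math.log(initial_tu / measured_tu) / decay_const

    # Hypothetical tritium concentrations in tritium units (TU)
    print(round(piston_flow_age(initial_tu=2.0, measured_tu=1.2), 1))  # ~9.1 years
    ```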

  9. GALA: group analysis leads to accuracy, a novel approach for solving the inverse problem in exploratory analysis of group MEG recordings

    PubMed Central

    Kozunov, Vladimir V.; Ossadtchi, Alexei

    2015-01-01

    Although MEG/EEG signals are highly variable between subjects, they allow characterization of systematic changes in cortical activity in both space and time. Traditionally a two-step procedure is used. The first step is a transition from sensor to source space by means of solving an ill-posed inverse problem for each subject individually. The second is mapping of cortical regions consistently active across subjects. In practice the first step often leads to a set of active cortical regions whose location and timecourses display a great amount of interindividual variability hindering the subsequent group analysis. We propose Group Analysis Leads to Accuracy (GALA)—a solution that combines the two steps into one. GALA takes advantage of individual variations of cortical geometry and sensor locations. It exploits the ensuing variability in the electromagnetic forward model as a source of additional information. We assume that for different subjects functionally identical cortical regions are located in close proximity and partially overlap and their timecourses are correlated. This relaxed similarity constraint on the inverse solution can be expressed within a probabilistic framework, allowing for an iterative algorithm solving the inverse problem jointly for all subjects. A systematic simulation study showed that GALA, as compared with the standard min-norm approach, improves accuracy of true activity recovery, when accuracy is assessed both in terms of spatial proximity of the estimated and true activations and correct specification of spatial extent of the activated regions. This improvement, obtained without using any noise normalization techniques for either solution, was preserved for a wide range of between-subject variations in both spatial and temporal features of regional activation. The corresponding activation timecourses exhibit significantly higher similarity across subjects. Similar results were obtained for a real MEG dataset of face-specific evoked responses

  10. APOBEC3 deletion increases the risk of breast cancer: a meta-analysis

    PubMed Central

    Sun, Meili; Wang, Shuyun; Zhou, Guanzhou; Sun, Yuping

    2016-01-01

    Recently, a deletion in the human apolipoprotein B mRNA-editing enzyme catalytic polypeptide-like 3 (APOBEC3) gene cluster has been associated with a modest increased risk of breast cancer, but studies yielded inconsistent results. Therefore we performed a meta-analysis to derive a more precise conclusion. Six studies including 18241 subjects were identified by searching PubMed and Embase databases from inception to April 2016. Pooled odds ratios (ORs) and corresponding 95% confidence intervals (CIs) were evaluated under allele contrast, dominant, recessive, homozygous, and heterozygous models. All the analyses suggested a correlation of APOBEC3 deletion with increased breast cancer risk (D vs I: OR = 1.29, 95% CI = 1.23-1.36; D/D+I/D vs I/I: OR = 1.34, 95% CI = 1.26-1.43; D/D vs I/D+ I/I: OR = 1.51, 95% CI = 1.36-1.68; D/D vs I/I: OR = 1.75, 95% CI= 1.56-1.95; I/D vs I/I: OR = 1.28, 95% CI = 1.19-1.36). Stratified analysis by ethnicity showed that the relationship is stronger and more stable in Asians. In summary, our current work indicated that APOBEC3 copy number variations might have a good screening accuracy for breast cancer. PMID:27602762
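
    The pooled effect sizes above are odds ratios with 95% confidence intervals. For a single study, the corresponding calculation from a 2x2 allele table is shown below with hypothetical counts; the meta-analytic pooling itself would additionally weight studies by inverse variance.

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """OR and 95% CI from a 2x2 table: a,b = cases with/without deletion; c,d = controls."""
        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_) - z * se_log_or)
        hi = math.exp(math.log(or_) + z * se_log_or)
        return or_, lo, hi

    # Hypothetical allele counts (D = deletion allele, I = insertion allele)
    print(odds_ratio_ci(a=620, b=1380, c=500, d=1500))
    ```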

  11. Increased Throughput of Proteomics Analysis by Multiplexing High-resolution Tandem Mass Spectra

    PubMed Central

    Ledvina, A. R.; Savitski, M. M.; Zubarev, A. R.; Good, D. M.; Coon, J. J.; Zubarev, R. A.

    2014-01-01

    High-resolution, high-accuracy Fourier-transform mass spectrometry (FTMS) is becoming increasingly attractive due to its specificity. However, the speed of tandem FTMS analysis severely limits the competitive advantage of this approach relative to faster low-resolution quadrupole ion trap MS/MS instruments. Here we demonstrate an entirely FTMS-based analysis method with a 2.5–3.0-fold greater throughput than a conventional FT MS/MS approach. The method consists of accumulating together the MS/MS fragment ions from multiple precursors, with subsequent high-resolution analysis of the mixture. Following acquisition, the multiplexed spectrum is deconvoluted into individual MS/MS datasets, which are separately submitted for peptide identification to a search engine. The method is tested both in silico using a database of MS/MS spectra as well as in situ using a modified LTQ-Orbitrap mass spectrometer. The performance of the method in the experiment was consistent with theoretical expectations. PMID:21913643

  12. Increased throughput of proteomics analysis by multiplexing high-resolution tandem mass spectra.

    PubMed

    Ledvina, A R; Savitski, M M; Zubarev, A R; Good, D M; Coon, J J; Zubarev, R A

    2011-10-15

    High-resolution and high-accuracy Fourier transform mass spectrometry (FTMS) is becoming increasingly attractive due to its specificity. However, the speed of tandem FTMS analysis severely limits the competitive advantage of this approach relative to faster low-resolution quadrupole ion trap MS/MS instruments. Here we demonstrate an entirely FTMS-based analysis method with a 2.5-3.0-fold greater throughput than a conventional FT MS/MS approach. The method consists of accumulating together the MS/MS fragment ions from multiple precursors, with subsequent high-resolution analysis of the mixture. Following acquisition, the multiplexed spectrum is deconvoluted into individual MS/MS spectra, which are then combined into a single concatenated file and submitted for peptide identification to a search engine. The method is tested both in silico using a database of MS/MS spectra as well as in situ using a modified LTQ Orbitrap mass spectrometer. The performance of the method in the experiment was consistent with theoretical expectations.

  13. The regulatory benefits of high levels of affect perception accuracy: a process analysis of reactions to stressors in daily life.

    PubMed

    Robinson, Michael D; Moeller, Sara K; Buchholz, Maria M; Boyd, Ryan L; Troop-Gordon, Wendy

    2012-08-01

    Individuals attuned to affective signals from the environment may possess an advantage in the emotion-regulation realm. In two studies (total n = 151), individual differences in affective perception accuracy were assessed in an objective, performance-based manner. Subsequently, the same individuals completed daily diary protocols in which daily stressor levels were reported as well as problematic states shown to be stress-reactive in previous studies. In both studies, individual differences in affect perception accuracy interacted with daily stressor levels to predict the problematic outcomes. Daily stressors precipitated problematic reactions--whether depressive feelings (study 1) or somatic symptoms (study 2)--at low levels of affect perception accuracy, but did not do so at high levels of affect perception accuracy. The findings support a regulatory view of such perceptual abilities. Implications for understanding emotion regulation processes, emotional intelligence, and individual differences in reactivity are discussed.

  14. X-ray Microscopy as an Approach to Increasing Accuracy and Efficiency of Serial Block-face Imaging for Correlated Light and Electron Microscopy of Biological Specimens

    PubMed Central

    Bushong, Eric A.; Johnson, Donald D.; Kim, Keun-Young; Terada, Masako; Hatori, Megumi; Peltier, Steven T.; Panda, Satchidananda; Merkle, Arno; Ellisman, Mark H.

    2015-01-01

    The recently developed three-dimensional electron microscopic (EM) method of serial block-face scanning electron microscopy (SBEM) has rapidly established itself as a powerful imaging approach. Volume EM imaging with this scanning electron microscopy (SEM) method requires intense staining of biological specimens with heavy metals to allow sufficient back-scatter electron signal and also to render specimens sufficiently conductive to control charging artifacts. These more extreme heavy metal staining protocols render specimens light opaque and make it much more difficult to track and identify regions of interest (ROIs) for the SBEM imaging process than for a typical thin section transmission electron microscopy correlative light and electron microscopy study. We present a strategy employing X-ray microscopy (XRM) both for tracking ROIs and for increasing the efficiency of the workflow used for typical projects undertaken with SBEM. XRM was found to reveal an impressive level of detail in tissue heavily stained for SBEM imaging, allowing for the identification of tissue landmarks that can be subsequently used to guide data collection in the SEM. Furthermore, specific labeling of individual cells using diaminobenzidine is detectable in XRM volumes. We demonstrate that tungsten carbide particles or upconverting nanophosphor particles can be used as fiducial markers to further increase the precision and efficiency of SBEM imaging. PMID:25392009

  15. X-ray microscopy as an approach to increasing accuracy and efficiency of serial block-face imaging for correlated light and electron microscopy of biological specimens.

    PubMed

    Bushong, Eric A; Johnson, Donald D; Kim, Keun-Young; Terada, Masako; Hatori, Megumi; Peltier, Steven T; Panda, Satchidananda; Merkle, Arno; Ellisman, Mark H

    2015-02-01

    The recently developed three-dimensional electron microscopic (EM) method of serial block-face scanning electron microscopy (SBEM) has rapidly established itself as a powerful imaging approach. Volume EM imaging with this scanning electron microscopy (SEM) method requires intense staining of biological specimens with heavy metals to allow sufficient back-scatter electron signal and also to render specimens sufficiently conductive to control charging artifacts. These more extreme heavy metal staining protocols render specimens light opaque and make it much more difficult to track and identify regions of interest (ROIs) for the SBEM imaging process than for a typical thin section transmission electron microscopy correlative light and electron microscopy study. We present a strategy employing X-ray microscopy (XRM) both for tracking ROIs and for increasing the efficiency of the workflow used for typical projects undertaken with SBEM. XRM was found to reveal an impressive level of detail in tissue heavily stained for SBEM imaging, allowing for the identification of tissue landmarks that can be subsequently used to guide data collection in the SEM. Furthermore, specific labeling of individual cells using diaminobenzidine is detectable in XRM volumes. We demonstrate that tungsten carbide particles or upconverting nanophosphor particles can be used as fiducial markers to further increase the precision and efficiency of SBEM imaging.

  16. Increasing Liability Premiums in Obstetrics – Analysis, Effects and Options

    PubMed Central

    Soergel, P.; Schöffski, O.; Hillemanns, P.; Hille-Betz, U.; Kundu, S.

    2015-01-01

    Whenever people act, mistakes are made. In Germany, it is thought that a total of 40 000 cases of malpractice occur per year. In recent years, costs for liability insurance have risen significantly in almost all spheres of medicine as a whole. Liability in the health care sector is founded on the contractual relationship between doctor and patient. Most recently, case law developed over many years has been codified with the Patientsʼ Rights Act. In obstetrics, the focus of liability law is on brain damage caused by hypoxia or ischemia as a result of management errors during birth. The costs per claim are made up of various components together with different shares of damage costs (increased needs, in particular therapy costs and nursing fees, acquisition damage, treatment costs, compensation). In obstetrics in particular, recent focus has been on massively increased liability payments, also accompanied by higher liability premiums. This causes considerable financial burdens on hospitals as well as on midwives and attending physicians. The premiums are so high, especially for midwives and attending physicians, that professional practice becomes uneconomical in some cases. In recent years, these circumstances have also been intensely debated in the public sphere and in politics. However, the focus here is on the occupation of midwife. In 2014, in the GKV-FQWG (Statutory Health Insurance – Quality and Further Development Act), a subsidy towards the occupational liability premium was defined for midwives who only attended a few deliveries. However, to date, a complete solution to the problem has not been found. A birth will never be a fully controllable risk, but in rare cases will always end with injury to the child. The goal must be to minimise this risk, through good education and continuous training, as well as constant critical analysis of oneʼs own activities. Furthermore, it seems sensible, especially in non-clinical Obstetrics, to look at the current

  17. Minimally invasive measurement of cardiac output during surgery and critical care: a meta-analysis of accuracy and precision.

    PubMed

    Peyton, Philip J; Chong, Simon W

    2010-11-01

    When assessing the accuracy and precision of a new technique for cardiac output measurement, the commonly quoted criterion for acceptability of agreement with a reference standard is that the percentage error (95% limits of agreement/mean cardiac output) should be 30% or less. We reviewed published data on four different minimally invasive methods adapted for use during surgery and critical care: pulse contour techniques, esophageal Doppler, partial carbon dioxide rebreathing, and transthoracic bioimpedance, to assess their bias, precision, and percentage error in agreement with thermodilution. An English language literature search identified published papers since 2000 which examined the agreement in adult patients between bolus thermodilution and each method. For each method a meta-analysis was done using studies in which the first measurement point for each patient could be identified, to obtain a pooled mean bias, precision, and percentage error weighted according to the number of measurements in each study. Forty-seven studies were identified as suitable for inclusion: N studies, n measurements: mean weighted bias [precision, percentage error] were: pulse contour N = 24, n = 714: -0.00 l/min [1.22 l/min, 41.3%]; esophageal Doppler N = 2, n = 57: -0.77 l/min [1.07 l/min, 42.1%]; partial carbon dioxide rebreathing N = 8, n = 167: -0.05 l/min [1.12 l/min, 44.5%]; transthoracic bioimpedance N = 13, n = 435: -0.10 l/min [1.14 l/min, 42.9%]. None of the four methods has achieved agreement with bolus thermodilution which meets the expected 30% limits. The relevance in clinical practice of these arbitrary limits should be reassessed.
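
    The 30% acceptability criterion uses the percentage error of Bland-Altman agreement, i.e. the half-width of the 95% limits of agreement divided by mean cardiac output. A minimal sketch with made-up paired measurements; the convention for the denominator (mean of both methods here) varies between papers.

    ```python
    import numpy as np

    def percentage_error(test_co, reference_co):
        """Bland-Altman bias, precision (1.96*SD of differences) and percentage error."""
        test = np.asarray(test_co, dtype=float)
        ref = np.asarray(reference_co, dtype=float)
        diffs = test - ref
        bias = diffs.mean()
        loa_half_width = 1.96 * diffs.std(ddof=1)
        mean_co = np.concatenate([test, ref]).mean()
        return bias, loa_half_width, 100.0 * loa_half_width / mean_co

    # Hypothetical paired cardiac output measurements (l/min)
    test = [4.8, 5.6, 3.9, 6.2, 5.1, 4.4, 7.0, 5.8]
    ref  = [5.1, 5.2, 4.4, 6.8, 4.7, 4.9, 6.3, 5.5]
    bias, precision, pct_err = percentage_error(test, ref)
    print(f"bias={bias:.2f} l/min, precision={precision:.2f} l/min, error={pct_err:.0f}%")
    ```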

  18. A Comparative Analysis of Diagnostic Accuracy of Focused Assessment With Sonography for Trauma Performed by Emergency Medicine and Radiology Residents

    PubMed Central

    Zamani, Majid; Masoumi, Babak; Esmailian, Mehrdad; Habibi, Amin; Khazaei, Mehdi; Mohammadi Esfahani, Mohammad

    2015-01-01

    Background: Focused assessment with sonography in trauma (FAST) is a method for prompt detection of the abdominal free fluid in patients with abdominal trauma. Objectives: This study was conducted to compare the diagnostic accuracy of FAST performed by emergency medicine residents (EMR) and radiology residents (RRs) in detecting peritoneal free fluids. Patients and Methods: Patients triaged in the emergency department with blunt abdominal trauma, high energy trauma, and multiple traumas underwent a FAST examination by EMRs and RRs with the same techniques to obtain the standard views. Ultrasound findings for free fluid in peritoneal cavity for each patient (positive/negative) were compared with the results of computed tomography, operative exploration, or observation as the final outcome. Results: A total of 138 patients were included in the final analysis. Good diagnostic agreement was noted between the results of FAST scans performed by EMRs and RRs (κ = 0.701, P < 0.001), also between the results of EMRs-performed FAST and the final outcome (κ = 0.830, P < 0.0010), and finally between the results of RRs-performed FAST and final outcome (κ = 0.795, P < 0.001). No significant differences were noted between EMRs- and RRs-performed FASTs regarding sensitivity (84.6% vs 84.6%), specificity (98.4% vs 97.6%), positive predictive value (84.6% vs 84.6%), and negative predictive value (98.4% vs 98.4%). Conclusions: Trained EMRs like their fellow RRs have the ability to perform FAST scan with high diagnostic value in patients with blunt abdominal trauma. PMID:26756009

  19. A meta-analysis of the diagnostic accuracy of dengue virus-specific IgA antibody-based tests for detection of dengue infection.

    PubMed

    Alagarasu, K; Walimbe, A M; Jadhav, S M; Deoshatwar, A R

    2016-03-01

    Immunoglobulin A (IgA)-based tests have been evaluated in different studies for their utility in diagnosing dengue infections. In most of the studies, the results were inconclusive because of a small sample size. Hence, a meta-analysis involving nine studies with 2096 samples was performed to assess the diagnostic accuracy of IgA-based tests in diagnosing dengue infections. The analysis was conducted using Meta-Disc software. The results revealed that IgA-based tests had an overall sensitivity, specificity, diagnostic odds ratio, and positive and negative likelihood ratios of 73·9%, 95·2%, 66·7, 22·0 and 0·25, respectively. Significant heterogeneity was observed between the studies. The type of test, infection status and day of sample collection influenced the diagnostic accuracy. The IgA-based diagnostic tests showed a greater accuracy when the samples were collected 4 days after onset of symptoms and for secondary infections. The results suggested that IgA-based tests had a moderate level of accuracy and are diagnostic of the disease. However, negative results cannot be used alone for dengue diagnosis. More prospective studies comparing the diagnostic accuracy of combinations of antigen-based tests with either IgA or IgM are needed and might be useful for suggesting the best strategy for dengue diagnosis.

  20. Performance of alternative strategies for primary cervical cancer screening in sub-Saharan Africa: systematic review and meta-analysis of diagnostic test accuracy studies

    PubMed Central

    Combescure, Christophe; Fokom-Defo, Victoire; Tebeu, Pierre Marie; Vassilakos, Pierre; Kengne, André Pascal; Petignat, Patrick

    2015-01-01

    Objective To assess and compare the accuracy of visual inspection with acetic acid (VIA), visual inspection with Lugol’s iodine (VILI), and human papillomavirus (HPV) testing as alternative standalone methods for primary cervical cancer screening in sub-Saharan Africa. Design Systematic review and meta-analysis of diagnostic test accuracy studies. Data sources Systematic searches of multiple databases including Medline, Embase, and Scopus for studies published between January 1994 and June 2014. Review methods Inclusion criteria for studies were: alternative methods to cytology used as a standalone test for primary screening; study population not at particular risk of cervical cancer (excluding studies focusing on HIV positive women or women with gynaecological symptoms); women screened by nurses; reference test (colposcopy and directed biopsies) performed at least in women with positive screening results. Two reviewers independently screened studies for eligibility and extracted data for inclusion, and evaluated study quality using the quality assessment of diagnostic accuracy studies 2 (QUADAS-2) checklist. Primary outcomes were absolute accuracy measures (sensitivity and specificity) of screening tests to detect cervical intraepithelial neoplasia grade 2 or worse (CIN2+). Results 15 studies of moderate quality were included (n=61 381 for VIA, n=46 435 for VILI, n=11 322 for HPV testing). Prevalence of CIN2+ did not vary by screening test and ranged from 2.3% (95% confidence interval 1.5% to 3.3%) in VILI studies to 4.9% (2.7% to 7.8%) in HPV testing studies. Positivity rates of VILI, VIA, and HPV testing were 16.5% (9.8% to 24.7%), 16.8% (11.0% to 23.6%), and 25.8% (17.4% to 35.3%), respectively. Pooled sensitivity was higher for VILI (95.1%; 90.1% to 97.7%) than VIA (82.4%; 76.3% to 87.3%) in studies where the reference test was performed in all women (P<0.001). Pooled specificity of VILI and VIA were similar (87.2% (78.1% to 92.8%) v 87.4% (77.1% to 93

  1. Bench study of the accuracy of a commercial AED arrhythmia analysis algorithm in the presence of electromagnetic interferences.

    PubMed

    Jekova, Irena; Krasteva, Vessela; Ménétré, Sarah; Stoyanov, Todor; Christov, Ivaylo; Fleischhackl, Roman; Schmid, Johann-Jakob; Didon, Jean-Philippe

    2009-07-01

    This paper presents a bench study on a commercial automated external defibrillator (AED). The objective was to evaluate the performance of the defibrillation advisory system and its robustness against electromagnetic interferences (EMI) with central frequencies of 16.7, 50 and 60 Hz. The shock advisory system uses two 50 and 60 Hz band-pass filters, an adaptive filter to identify and suppress 16.7 Hz interference, and a software technique for arrhythmia analysis based on morphology and frequency ECG parameters. The testing process used noise-free ECG strips from the internationally recognized MIT-VFDB ECG database that were superimposed with simulated EMI artifacts and supplied to the shock advisory system embedded in a real AED. Measurements under special consideration of the allowed variation of EMI frequency (15.7-17.4, 47-52, 58-62 Hz) and amplitude (1 and 8 mV) were performed to optimize external validity. The accuracy was reported using the American Heart Association (AHA) recommendations for arrhythmia analysis performance. In the case of artifact-free signals, the AHA performance goals were exceeded for both sensitivity and specificity: 99% for ventricular fibrillation (VF), 98% for rapid ventricular tachycardia (VT), 90% for slow VT, 100% for normal sinus rhythm, 100% for asystole and 99% for other non-shockable rhythms. In the presence of EMI, the specificity for some non-shockable rhythms (NSR, N) may be affected in specific cases of a low signal-to-noise ratio and extreme frequencies, leading to a drop in specificity of no more than 7 percentage points. The specificity for asystole and the sensitivity for VF and rapid VT in the presence of any kind of simulated 16.7, 50 or 60 Hz EMI artifact were shown to remain equivalent to the performance required for noise-free signals. In conclusion, we proved that the shock advisory system working in a real AED operates accurately according to the AHA recommendations both without artifacts and in the presence of EMI
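
    The robustness testing described above hinges on removing narrow-band mains interference before rhythm analysis. The sketch below is a generic illustration only, assuming a 500 Hz sampling rate and using SciPy's IIR notch filter; it does not reproduce the AED's proprietary filter chain.

        import numpy as np
        from scipy.signal import iirnotch, filtfilt

        fs = 500.0                                              # sampling rate in Hz (assumed)
        t = np.arange(0.0, 10.0, 1.0 / fs)
        ecg_like = 0.8 * np.sin(2 * np.pi * 1.2 * t)            # crude stand-in for an ECG strip
        noisy = ecg_like + 1.0 * np.sin(2 * np.pi * 50.0 * t)   # add a 1 mV, 50 Hz interference

        b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)                 # narrow notch centred on 50 Hz
        cleaned = filtfilt(b, a, noisy)                         # zero-phase filtering of the strip

        print("residual mean squared error vs clean signal:", np.mean((cleaned - ecg_like) ** 2))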

  2. Spontaneous Subarachnoid Hemorrhage: A Systematic Review and Meta-Analysis Describing the Diagnostic Accuracy of History, Physical Exam, Imaging, and Lumbar Puncture with an Exploration of Test Thresholds

    PubMed Central

    Carpenter, Christopher R.; Hussain, Adnan M.; Ward, Michael J.; Zipfel, Gregory J.; Fowler, Susan; Pines, Jesse M.; Sivilotti, Marco L.A.

    2016-01-01

    Background Spontaneous subarachnoid hemorrhage (SAH) is a rare, but serious etiology of headache. The diagnosis of SAH is especially challenging in alert, neurologically intact patients, as missed or delayed diagnosis can be catastrophic. Objectives To perform a diagnostic accuracy systematic review and meta-analysis of history, physical examination, cerebrospinal fluid (CSF) tests, computed tomography (CT), and clinical decision rules for spontaneous SAH. A secondary objective was to delineate probability of disease thresholds for imaging and lumbar puncture (LP). Methods PUBMED, EMBASE, SCOPUS, and research meeting abstracts were searched up to June 2015 for studies of emergency department (ED) patients with acute headache clinically concerning for spontaneous SAH. QUADAS-2 was used to assess study quality and, when appropriate, meta-analysis was conducted using random effects models. Outcomes were sensitivity, specificity, positive (LR+) and negative (LR−) likelihood ratios. To identify test- and treatment-thresholds, we employed the Pauker-Kassirer method with Bernstein test-indication curves using the summary estimates of diagnostic accuracy. Results A total of 5,022 publications were identified, of which 122 underwent full text-review; 22 studies were included (average SAH prevalence 7.5%). Diagnostic studies differed in assessment of history and physical exam findings, CT technology, analytical techniques used to identify xanthochromia, and criterion standards for SAH. Study quality by QUADAS-2 was variable; however, most had a relatively low-risk of biases. A history of neck pain (LR+ 4.1 [95% CI 2.2-7.6]) and neck stiffness on physical exam (LR+ 6.6 [4.0-11.0]) were the individual findings most strongly associated with SAH. Combinations of findings may rule out SAH, yet promising clinical decision rules await external validation. Non-contrast cranial CT within 6 hours of headache onset accurately ruled-in (LR+ 230 [6-8700]) and ruled-out SAH (LR− 0
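
    The likelihood ratios above translate into post-test probabilities through Bayes' rule on the odds scale; the short sketch below shows that arithmetic using the average prevalence and two LR+ values quoted in the abstract. It is only an illustration, not part of the authors' Pauker-Kassirer threshold analysis.

        def post_test_probability(pre_test_prob, likelihood_ratio):
            """Update a pre-test probability with a likelihood ratio (odds form of Bayes' rule)."""
            pre_odds = pre_test_prob / (1.0 - pre_test_prob)
            post_odds = pre_odds * likelihood_ratio
            return post_odds / (1.0 + post_odds)

        prevalence = 0.075                                  # average SAH prevalence reported in the abstract
        print(post_test_probability(prevalence, 230.0))     # early non-contrast CT, LR+ = 230
        print(post_test_probability(prevalence, 6.6))       # neck stiffness, LR+ = 6.6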

  3. Novel Genetic Analysis for Case-Control Genome-Wide Association Studies: Quantification of Power and Genomic Prediction Accuracy

    PubMed Central

    Lee, Sang Hong; Wray, Naomi R.

    2013-01-01

    Genome-wide association studies (GWAS) are routinely conducted for both quantitative and binary (disease) traits. We present two analytical tools for use in the experimental design of GWAS. Firstly, we present calculations that quantify power in a unified framework for a range of scenarios. In this context we consider the utility of quantitative scores (e.g. endophenotypes) that may be available for cases only or for both cases and controls. Secondly, we consider the accuracy of prediction of genetic risk from genome-wide SNPs and derive an expression for genomic prediction accuracy using a liability threshold model for disease traits in a case-control design. The expected values based on our derived equations for both power and prediction accuracy agree well with observed estimates from simulations. PMID:23977056
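
    As a rough, hedged illustration of this kind of power calculation (not the authors' unified framework), the sketch below computes the power of a 1-degree-of-freedom association test at a genome-wide significance threshold via the non-central chi-square distribution; the standardized effect size w and the sample sizes are assumed values.

        from scipy.stats import chi2, ncx2

        def one_df_power(n, w, alpha=5e-8):
            """Power of a 1-df chi-square association test; w is an assumed standardized effect size."""
            ncp = n * w ** 2                        # non-centrality parameter
            crit = chi2.ppf(1.0 - alpha, df=1)      # genome-wide significance threshold
            return ncx2.sf(crit, df=1, nc=ncp)

        for n in (5000, 20000, 100000):
            print(n, round(one_df_power(n, w=0.02), 3))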

  4. The measurement accuracy of passive radon instruments.

    PubMed

    Beck, T R; Foerster, E; Buchröder, H; Schmidt, V; Döring, J

    2014-01-01

    This paper analyses data gathered from interlaboratory comparisons of passive radon instruments over 10 y with respect to measurement accuracy. The measurement accuracy is discussed in terms of the systematic and the random measurement error. The analysis shows that the systematic measurement error of most instruments issued by professional laboratory services can be within ±10% of the true value. A single radon measurement has an additional random measurement error, which can be up to ±15% for high exposures to radon (>2000 kBq h m-3). The random measurement error increases for lower exposures. The analysis applies especially to instruments with solid-state nuclear track detectors and leads to proposed criteria for testing measurement accuracy. Instruments with electrets and charcoal have also been considered, but the limited amount of data permits only a qualitative discussion.

  5. Diagnostic accuracy of refractometer and Brix refractometer to assess failure of passive transfer in calves: protocol for a systematic review and meta-analysis.

    PubMed

    Buczinski, S; Fecteau, G; Chigerwe, M; Vandeweerd, J M

    2016-06-01

    Calves are highly dependent on colostrum (and antibody) intake because they are born agammaglobulinemic. The transfer of passive immunity in calves can be assessed directly by measuring immunoglobulin G (IgG) or indirectly by refractometry or Brix refractometry. The latter are easier to perform routinely in the field. This paper presents a protocol for a systematic review and meta-analysis to assess the diagnostic accuracy of refractometry or Brix refractometry versus IgG measurement as the reference standard test. With this review protocol, we aim to report refractometer and Brix refractometer accuracy in terms of sensitivity and specificity, as well as to quantify the impact of any study characteristic on test accuracy.

  6. An accuracy analysis of the front tracking method and interface capturing methods for the solution of heat transfer problems with phase changes

    NASA Astrophysics Data System (ADS)

    Klimeš, Lubomír; Mauder, Tomáš; Charvát, Pavel; Štétina, Josef

    2016-09-01

    Materials undergoing a phase change have a number of practical and engineering applications. Computer simulation tools are often used to investigate such heat transfer problems with phase changes since they are fast and relatively inexpensive. However, a crucial issue is the accuracy of these simulation tools. Numerical methods from the interface capturing category are frequently applied. Such approaches, however, allow for only approximate tracking of the interface between the phases. The paper presents an accuracy analysis and comparison of two widely used interface capturing methods—the enthalpy and the effective heat capacity methods—with the front tracking algorithm. A paraffin-based phase change material is assumed in the study. Computational results show that the front tracking algorithm provides a significantly higher accuracy level than the considered interface capturing methods.
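
    For readers unfamiliar with the enthalpy method mentioned above, the following is a minimal one-dimensional sketch with an explicit finite-difference update; the material constants are placeholders rather than the paraffin data of the study, and it makes no claim to match the authors' implementation.

        import numpy as np

        # Placeholder material properties (SI units), not the paraffin data from the study.
        rho, c, k, L, Tm = 800.0, 2000.0, 0.2, 200e3, 300.0
        nx, dx = 100, 1e-3                                   # 100 nodes, 1 mm spacing
        dt = 0.4 * rho * c * dx ** 2 / k                     # below the explicit stability limit

        def temperature(H):
            """Map volumetric enthalpy (zero for solid at Tm) to temperature."""
            T = np.where(H < 0.0, Tm + H / (rho * c), Tm)                     # solid region
            return np.where(H > rho * L, Tm + (H - rho * L) / (rho * c), T)   # liquid; mushy stays at Tm

        H = np.full(nx, -rho * c * 20.0)   # start 20 K below the melting point everywhere
        T = temperature(H)

        for _ in range(20000):
            T[0] = Tm + 30.0                                       # hot Dirichlet boundary on the left
            lap = np.zeros(nx)
            lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx ** 2
            H += dt * k * lap                                      # dH/dt = k * d2T/dx2
            T = temperature(H)

        print("fully molten nodes:", int(np.sum(H >= rho * L)), "of", nx)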

  7. Increased Accuracy in Statistical Seasonal Hurricane Forecasting

    NASA Astrophysics Data System (ADS)

    Nateghi, R.; Quiring, S. M.; Guikema, S. D.

    2012-12-01

    Hurricanes are among the costliest and most destructive natural hazards in the U.S. Accurate hurricane forecasts are crucial to optimal preparedness and mitigation decisions in the U.S., where 50 percent of the population lives within 50 miles of the coast. We developed a flexible statistical approach to forecast the annual number of hurricanes in the Atlantic region during the hurricane season. Our model is based on the random forest method and captures the complex relationship between hurricane activity and climatic conditions through careful variable selection, model testing and validation. We used the National Hurricane Center's Best Track hurricane data from 1949-2011 and sixty-one candidate climate descriptors to develop our model. The model includes information prior to the hurricane season, i.e., from the last three months of the previous year (October through December) and the first five months of the current year (January through May). Our forecast errors are substantially lower than those of other leading forecasts, such as that of the National Oceanic and Atmospheric Administration (NOAA).
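
    A hedged sketch of the general approach (random forest regression on preseason climate covariates) is shown below using scikit-learn; the predictors and counts are synthetic placeholders, not the authors' sixty-one candidate descriptors or the Best Track data.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_years = 60
        X = rng.normal(size=(n_years, 5))                 # stand-ins for preseason climate covariates
        lam = np.clip(6.0 + 2.0 * X[:, 0], 0.1, None)     # synthetic link to one covariate
        y = rng.poisson(lam).astype(float)                # synthetic annual hurricane counts

        model = RandomForestRegressor(n_estimators=500, random_state=0)
        scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
        print("cross-validated MAE: %.2f hurricanes" % -scores.mean())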

  8. How Does One Assess the Accuracy of Academic Success Predictors? ROC Analysis Applied to University Entrance Factors

    ERIC Educational Resources Information Center

    Vivo, Juana-Maria; Franco, Manuel

    2008-01-01

    This article attempts to present a novel application of a method of measuring accuracy for academic success predictors that could be used as a standard. This procedure is known as the receiver operating characteristic (ROC) curve, which comes from statistical decision techniques. The statistical prediction techniques provide predictor models and…
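
    A minimal sketch of the ROC procedure described above is shown below using scikit-learn; the entrance score and success outcome are synthetic placeholders, and the cut-off chosen here (maximizing Youden's J) is only one of several possible criteria.

        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(1)
        success = rng.integers(0, 2, size=200)                     # 0 = did not succeed, 1 = succeeded
        score = success + rng.normal(scale=1.0, size=200)          # entrance score, shifted for successes

        fpr, tpr, thresholds = roc_curve(success, score)
        auc = roc_auc_score(success, score)
        cutoff = thresholds[np.argmax(tpr - fpr)]                  # cut-off maximizing Youden's J
        print("AUC = %.2f, suggested cut-off = %.2f" % (auc, cutoff))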

  9. Diagnostic Accuracy of the Child Behavior Checklist Scales for Attention-Deficit Hyperactivity Disorder: A Receiver-Operating Characteristic Analysis.

    ERIC Educational Resources Information Center

    Chen, Wei J.; And Others

    1994-01-01

    Examined diagnostic accuracy of Child Behavior Checklist (CBCL) scales for attention-deficit hyperactivity disorder (ADHD). Estimated 3 logistic regression models in 121 children with and without ADHD, then tested models in cross-validation sample (n=122) and among 219 siblings of samples. In all four groups, CBCL Attention Problems scale had…

  10. The diagnostic accuracy of X-ray arthrography for triangular fibrocartilaginous complex injury: a systematic review and meta-analysis.

    PubMed

    Smith, T O; Drew, B T; Toms, A P; Chojnowski, A J

    2012-11-01

    The purpose of this study was to evaluate the diagnostic test accuracy of X-ray arthrography in the detection of TFCC tear. Both published and unpublished databases were searched from their inception to August 2010. All studies comparing the diagnostic accuracy of X-ray arthrography (index test) to arthroscopy (reference standard) for patients with suspected TFCC tears were included in this review. Twelve studies assessing 430 patients (430 wrists) satisfied the eligibility criteria and were included. X-ray arthrography presented with a pooled sensitivity of 76.2% and specificity of 92.5% for the detection of complete TFCC tear. The triple-compartment injection X-ray arthrography was superior to the single-compartment injection technique. To conclude, the diagnostic test accuracy of X-ray arthrography is limited. Neither the single- nor the triple-compartment injection arthrography method is acceptable, given their reported low sensitivities. Further evaluation of the diagnostic test accuracy of Magnetic Resonance Arthrography and Magnetic Resonance Imaging is therefore warranted.

  11. Classification accuracy analysis of selected land use and land cover products in a portion of West-Central Lower Michigan

    NASA Astrophysics Data System (ADS)

    Ma, Kin Man

    2007-12-01

    Remote sensing satellites have been utilized to characterize and map land cover and its changes since the 1970s. However, uncertainties exist in almost all land use and land cover maps classified from remotely sensed images. In particular, it has been recognized that the spatial mis-registration of land cover maps can affect the true estimates of land use/land cover (LULC) changes. This dissertation addressed the following questions: what are the spatial patterns, magnitudes, and cover-dependencies of classification uncertainty associated with West-Central Lower Michigan's LULC products and how can the adverse effects of spatial misregistration on accuracy assessment be reduced? Two Michigan LULC products were chosen for comparison: 1998 Muskegon River Watershed (MRW) Michigan Resource Information Systems LULC map and a 2001 Integrated Forest Monitoring and Assessment Prescription Project (IFMAP). The 1m resolution 1998 MRW LULC map was derived from U.S. Geological Survey Digital Orthophoto Quarter Quadrangle (USGS DOQQs) color infrared imagery and was used as the reference map, since it has a thematic accuracy of 95%. The IFMAP LULC map was co-registered to a series of selected 1998 USGS DOQQs. The total combined root mean square error (rmse) distance of the georectified 2001 IFMAP was +/-12.20m. A spatial uncertainty buffer of at least 1.5 times the rmse was set at 20m so that polygon core areas would be unaffected by spatial misregistration noise. A new spatial misregistration buffer protocol (SPATIALM_ BUFFER) was developed to limit the effect of spatial misregistration on classification accuracy assessment. Spatial uncertainty buffer zones of 20m were generated around LULC polygons of both datasets. Eight-hundred seventeen (817) stratified random accuracy assessment points (AAPs) were generated across the 1998 MRW map. Classification accuracy and kappa statistics were generated for both the 817 AAPs and 604 AAPs comparisons. For the 817 AAPs comparison, the
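
    The classification accuracy and kappa statistics mentioned above are computed from an error (confusion) matrix; the sketch below shows the standard calculation on an illustrative 3x3 matrix, not the actual MRW/IFMAP assessment data.

        import numpy as np

        # Illustrative 3x3 error matrix: rows = reference classes, columns = mapped classes.
        cm = np.array([[50.0,  4.0,  1.0],
                       [ 6.0, 60.0,  5.0],
                       [ 2.0,  3.0, 40.0]])

        n = cm.sum()
        p_o = np.trace(cm) / n                                   # overall (observed) accuracy
        p_e = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n ** 2   # chance agreement from the marginals
        kappa = (p_o - p_e) / (1.0 - p_e)
        print("overall accuracy = %.3f, kappa = %.3f" % (p_o, kappa))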

  12. How Can We Evaluate the Accuracy of Small Stream Maps? -Focusing on Sampling Method and Statistical Analysis -

    NASA Astrophysics Data System (ADS)

    Park, J.

    2010-12-01

    The Washington State Department of Natural Resources’ (DNR) Forest Practices Habitat Conservation Plan (FPHCP) requires establishment of riparian management zones (RMZs) or equipment limitation zones (ELZs). In order to establish RMZs and ELZs, the DNR is required to update GIS-based stream maps showing the locations of type Ns (Non-fish seasonal) streams as well as type S (Shorelines of the state), type F (Fish habitat), and type Np (Non-fish perennial) streams. While there are few disputes over the positional accuracy of large streams, the representation of small streams such as Ns and small type S or F streams (less than 10’ width) has been considered to need further improvement in positional accuracy. Numerous remotely sensed stream-mapping methods have been developed in the last several decades that use an array of remote sensing data such as aerial photography, satellite optical imagery, and Digital Elevation Model (DEM) topographic data. While the positional accuracy of the final stream map products has been considered essential to determining map quality, the estimation or comparison of the positional accuracy of small stream map products has not been well studied, and is rarely attempted by remotely sensed stream map developers. Assessments of the positional accuracy of stream maps are not addressed properly because it is not easy to acquire field reference data, especially for small streams under the canopy in remote forest areas. More importantly, as of this writing, we are not aware of any prominent method to estimate or compare the positional accuracy of stream maps. Since general positional accuracy assessment methods for remotely sensed map products are designed for at least two-dimensional features, they are not suitable for linear features such as streams. Due to the difficulties inherent in stream features, estimation methods for stream maps' accuracy have not dealt with the positional accuracy itself but the hydrological

  13. Diagnostic Accuracy of a Molecular Drug Susceptibility Testing Method for the Antituberculosis Drug Ethambutol: a Systematic Review and Meta-Analysis

    PubMed Central

    Cheng, Song; Cui, Zhenling; Li, Yuanyuan

    2014-01-01

    Ethambutol (EMB) is a first-line antituberculosis drug; however, drug resistance to EMB has been increasing. Molecular drug susceptibility testing (DST), based on the embB gene, has recently been used for rapid identification of EMB resistance. The aim of this meta-analysis was to establish the accuracy of molecular assay for detecting drug resistance to EMB. PubMed, Embase, and Web of Science were searched according to a written protocol and explicit study selection criteria. Measures of diagnostic accuracy were pooled using a random effects model. A total of 34 studies were included in the meta-analysis. The respective pooled sensitivities and specificities were 0.57 and 0.93 for PCR-DNA sequencing that targeted the embB 306 codon, 0.76 and 0.89 for PCR-DNA sequencing that targeted the embB 306, 406, and 497 codons, 0.64 and 0.70 for detecting Mycobacterium tuberculosis isolates, 0.55 and 0.78 for detecting M. tuberculosis sputum specimens using the GenoType MTBDRsl test, 0.57 and 0.87 for pyrosequencing, and 0.35 and 0.98 for PCR-restriction fragment length polymorphism. The respective pooled sensitivities and specificities were 0.55 and 0.92 when using a lower EMB concentration as the reference standard, 0.67 and 0.73 when using a higher EMB concentration as the reference standard, and 0.60 and 1.0 when using multiple reference standards. PCR-DNA sequencing using multiple sites of the embB gene as detection targets, including embB 306, 406, and 497, can be a rapid method for preliminarily screening for EMB resistance, but it does not fully replace phenotypic DST. Of the reference DST methods examined, the agreement rates were the best using MGIT 960 for molecular DST and using the proportion method on Middlebrook 7H10 media. PMID:24899018

  14. The Effect of Study Design Biases on the Diagnostic Accuracy of Magnetic Resonance Imaging to Detect Silicone Breast Implant Ruptures: A Meta-Analysis

    PubMed Central

    Song, Jae W.; Kim, Hyungjin Myra; Bellfi, Lillian T.; Chung, Kevin C.

    2010-01-01

    Background All silicone breast implant recipients are recommended by the US Food and Drug Administration to undergo serial screening to detect implant rupture with magnetic resonance imaging (MRI). We performed a systematic review of the literature to assess the quality of diagnostic accuracy studies utilizing MRI or ultrasound to detect silicone breast implant rupture and conducted a meta-analysis to examine the effect of study design biases on the estimation of MRI diagnostic accuracy measures. Method Studies investigating the diagnostic accuracy of MRI and ultrasound in evaluating ruptured silicone breast implants were identified using MEDLINE, EMBASE, ISI Web of Science, and Cochrane library databases. Two reviewers independently screened potential studies for inclusion and extracted data. Study design biases were assessed using the QUADAS tool and the STARDS checklist. Meta-analyses estimated the influence of biases on diagnostic odds ratios. Results Among 1175 identified articles, 21 met the inclusion criteria. Most studies using MRI (n= 10 of 16) and ultrasound (n=10 of 13) examined symptomatic subjects. Meta-analyses revealed that MRI studies evaluating symptomatic subjects had 14-fold higher diagnostic accuracy estimates compared to studies using an asymptomatic sample (RDOR 13.8; 95% CI 1.83–104.6) and 2-fold higher diagnostic accuracy estimates compared to studies using a screening sample (RDOR 1.89; 95% CI 0.05–75.7). Conclusion Many of the published studies utilizing MRI or ultrasound to detect silicone breast implant rupture are flawed with methodological biases. These methodological shortcomings may result in overestimated MRI diagnostic accuracy measures and should be interpreted with caution when applying the data to a screening population. PMID:21364405

  15. Analysis of “Accuracy Evaluation of Five Blood Glucose Monitoring Systems: The North American Comparator Trial”

    PubMed Central

    Fournier, Paul A.

    2013-01-01

    In an article in Journal of Diabetes Science and Technology, Halldorsdottir and coauthors examined the accuracy of five blood glucose monitoring systems (BGMSs) in a study sponsored by the manufacturer of the BGMS CONTOUR NEXT EZ (EZ) and found that this BGMS was the most accurate one. However, their findings must be viewed critically given that one of the BGMSs (ACCU-CHEK Aviva) was not compared against the reference measurement specified by its manufacturer, thus making it likely that it performed suboptimally. Also, the accuracy of the glucose-oxidase-based ONE TOUCH Ultra2 and TRUEtrack BGMS is likely to have been underestimated because of the expected low oxygen level in the glycolysed blood samples used to test the performance of these BGMSs under hypoglycemic conditions. In conclusion, although this study shows that EZ is an accurate BGMS, comparisons between this and other BGMSs should be interpreted with caution. PMID:24124958

  16. The analysis of the accuracy of the wheel alignment inspection method on the side-slip plate stand

    NASA Astrophysics Data System (ADS)

    Gajek, A.; Strzępek, P.

    2016-09-01

    The article presents the theoretical basis and the results of an examination of the wheel alignment inspection method on the side-slip plate stand. It is an obligatory test during the periodic technical inspection of a vehicle. The measurement is performed under dynamic conditions. The dependence between the lateral displacement of the plate and the toe-in of the tested wheels has been shown. If the diameter of the wheel rim is known, then the value of the toe-in can be calculated. A comparison of the toe-in measurements on the plate stand and on a four-head device for wheel alignment inspection has been carried out. The accuracy of the measurements and the influence of the test conditions on the plate stand (the way of passing over the plate) were estimated. Conclusions about the accuracy of this method are presented.

  17. Sensitivity and accuracy analysis of CT image in PRISM autocontouring using confusion matrix and ROC/AUC curve methods

    NASA Astrophysics Data System (ADS)

    Yunisa, Regina; Haryanto, Freddy

    2015-09-01

    The research was conducted to evaluate and analyze the results of CT image autocontouring in the Prism TPS using confusion matrix and ROC methods. The study began by processing thoracic CT images in grayscale format with the Prism TPS software. Autocontouring was performed in the spinal cord and right lung regions with appropriate window parameter settings. The average sensitivity, specificity and accuracy for 23 slices of the spinal cord were 0.93, 0.99, and 0.99, respectively. For two slices of the right lung, the average sensitivity, specificity, and accuracy were 0.99, 0.99, and 0.99, respectively. These values are classified as 'Excellent'.
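
    The sensitivity, specificity and accuracy values above come from a pixel-wise comparison of the auto-contour against a reference contour; the sketch below shows that bookkeeping on synthetic binary masks and is not the Prism TPS evaluation itself.

        import numpy as np

        rng = np.random.default_rng(2)
        reference = rng.integers(0, 2, size=(256, 256)).astype(bool)   # manual reference mask (synthetic)
        auto = reference.copy()
        flip = rng.random(reference.shape) < 0.02                      # perturb about 2% of pixels
        auto[flip] = ~auto[flip]

        tp = np.sum(auto & reference)        # true positives
        tn = np.sum(~auto & ~reference)      # true negatives
        fp = np.sum(auto & ~reference)       # false positives
        fn = np.sum(~auto & reference)       # false negatives

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        print(sensitivity, specificity, accuracy)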

  18. Increasing the performance of tritium analysis by electrolytic enrichment.

    PubMed

    Groning, M; Auer, R; Brummer, D; Jaklitsch, M; Sambandam, C; Tanweer, A; Tatzber, H

    2009-06-01

    Several improvements are described for the existing tritium enrichment system at the Isotope Hydrology Laboratory of the International Atomic Energy Agency for processing natural water samples. The improvements include a simple method for pretreatment of electrolytic cells to ensure a high tritium separation factor, an improved design of the exhaust system for explosive gases, and a vacuum distillation line for faster initial preparation of water samples for electrolytic enrichment and for tritium analysis. Achievements included reducing the variation of individual enrichment parameters across all cells to less than 1% and a 50% improvement in the stability of the background mean. This resulted in an improved detection limit of less than 0.4 TU (at 2s), important for future applications of tritium measurement at low concentration levels, and in measurement precisions of ±0.2 TU and ±0.15 TU for liquid scintillation counting and gas proportional counting, respectively.

  19. Accuracy of the ABC/2 score for intracerebral hemorrhage: Systematic review and analysis of MISTIE, CLEAR-IVH, CLEAR III

    PubMed Central

    Webb, Alastair JS; Ullman, Natalie L; Morgan, Tim C; Muschelli, John; Kornbluth, Joshua; Awad, Issam A; Mayo, Stephen; Rosenblum, Michael; Ziai, Wendy; Zuccarrello, Mario; Aldrich, Francois; John, Sayona; Harnof, Sagi; Lopez, George; Broaddus, William C; Wijman, Christine; Vespa, Paul; Bullock, Ross; Haines, Stephen J; Cruz-Flores, Salvador; Tuhrim, Stan; Hill, Michael D; Narayan, Raj; Hanley, Daniel F

    2015-01-01

    Background and Purpose The ABC/2 score estimates intracerebral hemorrhage (ICH) volume, yet validations have been limited by small samples and inappropriate outcome measures. We determined accuracy of the ABC/2 score calculated at a specialized Reading Center (RC-ABC) or local site (site-ABC) versus the reference-standard CT-based planimetry (CTP). Methods In MISTIE-II, CLEAR-IVH and CLEAR-III trials, ICH volume was prospectively calculated by CTP, RC-ABC and site-ABC. Agreement between CTP and ABC/2 was defined as an absolute difference up to 5ml and relative difference within 20%. Determinants of ABC/2 accuracy were assessed by logistic regression. Results In 4369 scans from 507 patients, CTP was more strongly correlated with RC-ABC (r2=0.93) than site-ABC (r2=0.87). Although RC-ABC overestimated CTP-based volume on average (RC-ABC=15.2cm3, CTP=12.7cm3), agreement was reasonable when categorised into mild, moderate and severe ICH (kappa 0.75, p<0.001). This was consistent with overestimation of ICH volume in 6/8 previous studies. Agreement with CTP was greater for RC-ABC (84% within 5ml; 48% of scans within 20%) than for site-ABC (81% within 5ml; 41% within 20%). RC-ABC had moderate accuracy for detecting ≥ 5ml change in CTP volume between consecutive scans (sensitivity 0.76, specificity 0.86) and was more accurate with smaller ICH, thalamic haemorrhage and homogeneous clots. Conclusions ABC/2 scores at local or central sites are sufficiently accurate to categorise ICH volume and assess eligibility for the CLEAR III and MISTIE III studies, and moderately accurate for change in ICH volume. However, accuracy decreases with large, irregular or lobar clots. Clinical Trial Registration MISTIE-II NCT00224770; CLEAR-III NCT00784134; www.clinicaltrials.gov PMID:26243227
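
    As a minimal illustration of the score and the agreement criteria described above, the sketch below computes an ABC/2 volume and evaluates the 5 ml and 20% checks against a planimetric value; the clot diameters are illustrative, and the two criteria are applied separately here since the abstract does not state how they were combined.

        def abc2_volume(a_cm, b_cm, c_cm):
            """A = largest clot diameter, B = diameter perpendicular to A on the same slice,
            C = vertical extent of the clot; all in cm, returned volume in cm^3."""
            return a_cm * b_cm * c_cm / 2.0

        def agreement(abc_vol, planimetric_vol):
            """Return the two agreement checks quoted in the abstract, evaluated separately."""
            within_5ml = abs(abc_vol - planimetric_vol) <= 5.0
            within_20pct = abs(abc_vol - planimetric_vol) <= 0.2 * planimetric_vol
            return within_5ml, within_20pct

        estimate = abc2_volume(4.2, 3.0, 2.4)        # about 15.1 cm^3
        print(estimate, agreement(estimate, 12.7))   # compare against a planimetric 12.7 cm^3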

  20. An Analysis of Debris Orbit Prediction Accuracy from Short-arc Orbit Determination Using Optical and Laser Tracking Data

    NASA Astrophysics Data System (ADS)

    Bennett, J.; Sang, J.; Smith, C.; Zhang, K.

    2014-09-01

    In this paper results are presented from a short-arc orbit determination study using optical and laser tracking data from the Space Debris Tracking System located at Mount Stromlo, Australia. Fifteen low-Earth orbit debris objects were considered in the study, with perigee altitudes in the range 550-850 km. In most cases, a 2-day orbit determination was considered using 2 passes of optical and 2 passes of laser tracking data. A total of 33 1-day and 26 2-day orbit prediction cases were assessed using residuals obtained by comparing the orbit prediction with subsequent tracking data. A comparison was made between the orbit prediction accuracies for 2 orbit determination variants: (1) entire passes are used during the orbit determination process; (2) only 5 seconds is used from the beginning of each pass. Overall, the short-arc orbit determination results in (slightly) worse 1- and 2-day orbit prediction accuracies when compared to using the full observation arcs; however, the savings in tracking load outweigh the reduction in accuracy. If the optical or laser data are left out of the 5-second pass orbit determination process, most cases diverged, which shows the importance of 3-dimensional positioning. Two-line element data were used to constrain the orbit determination process, resulting in better convergence rates, but the resulting orbit prediction accuracy was much worse. The results have important implications for an optical and laser debris tracking network, with potential savings in tracking load. An experimental study will be needed to verify this statement.

  1. On the Accuracy of Flechettes by Dynamic Wind Tunnel Tests, by Theory and Analysis, and by Actual Firings

    DTIC Science & Technology

    1975-01-01

    The accuracy and dispersion of flechettes are investigated 1) by an exploratory firing program, 2) by a supersonic dynamic wind tunnel testing program...Firing Program are summarized in Appendix A. A dynamic wind tunnel testing program was also carried out by the university on various flechette designs

  2. Accuracy Analysis of a Multi-View Stereo Approach for Phenotyping of Tomato Plants at the Organ Level

    PubMed Central

    Rose, Johann Christian; Paulus, Stefan; Kuhlmann, Heiner

    2015-01-01

    Accessing a plant's 3D geometry has become of significant importance for phenotyping during the last few years. Close-up laser scanning is an established method to acquire 3D plant shapes in real time with high detail, but it is stationary and has high investment costs. 3D reconstruction from images using structure from motion (SfM) and multi-view stereo (MVS) is a flexible, cost-effective method, but requires post-processing procedures. The aim of this study is to evaluate the potential measuring accuracy of an SfM- and MVS-based photogrammetric method for the task of organ-level plant phenotyping. For this, reference data are provided by a high-accuracy close-up laser scanner. Using both methods, point clouds of several tomato plants were reconstructed on six consecutive days. The parameters leaf area, main stem height and convex hull of the complete plant were extracted from the 3D point clouds and compared to the reference data regarding accuracy and correlation. These parameters were chosen to reflect the demands of current phenotyping scenarios. The study shows that the photogrammetric approach is highly suitable for the presented monitoring scenario, yielding high correlations to the reference measurements. This cost-effective 3D reconstruction method represents an alternative to an expensive laser scanner in the studied scenarios, with potential for automated procedures. PMID:25919368
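
    The accuracy and correlation comparison described above can be illustrated with a small sketch in which a photogrammetrically derived parameter is compared with the laser-scanner reference via correlation and RMSE; the values below are synthetic placeholders, not the study's measurements.

        import numpy as np

        reference = np.array([120.0, 135.0, 150.0, 170.0, 190.0, 210.0])           # laser-scan leaf areas (cm^2)
        photogrammetric = reference * 0.97 + np.array([2.0, -3.0, 4.0, -2.0, 3.0, -1.0])

        r = np.corrcoef(reference, photogrammetric)[0, 1]
        rmse = np.sqrt(np.mean((photogrammetric - reference) ** 2))
        print("r^2 = %.3f, RMSE = %.1f cm^2" % (r ** 2, rmse))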

  3. Analysis of reliability, accuracy, sensitivity and predictive value of a subjective method to classify facial pattern in adults

    PubMed Central

    Queiroz, Gilberto Vilanova; Rino, José; de Paiva, João Batista; Capelozza, Leopoldino

    2016-01-01

    Introduction: Craniofacial pattern diagnosis is vital in Orthodontics, as it influences decision-making regarding treatment options and prognosis. Capelozza Filho proposed a subjective method for facial classification comprising five patterns: I, II, III, Long Face and Short Face. Objective: To investigate the accuracy of a subjective classification method of facial patterns applied to adults. Methods: A sample consisting of 52 adults was used for this study. Frontal and lateral view photographs were taken with subjects in rest position, including a frontal smile view. Lateral cephalometric radiographs were organized in a PowerPoint® presentation and submitted to 20 raters. Method performance was assessed by examining reproducibility with the kappa test and by calculating accuracy, sensitivity and positive predictive values, for which 70% was set as the critical value. The gold standard of the classification was personally set by the author of the method. Results: Reproducibility was considered moderate (kappa = 0.501), while accuracy, sensitivity and positive predictive values yielded similar results, all below 70%. Conclusions: The subjective method of facial classification employed in the present study still needs to have its morphological criteria improved in order to be used to discriminate between the five facial patterns. PMID:28125141

  4. Comparing the accuracy of video-oculography and the scleral search coil system in human eye movement analysis.

    PubMed

    Imai, Takao; Sekine, Kazunori; Hattori, Kousuke; Takeda, Noriaki; Koizuka, Izumi; Nakamae, Koji; Miura, Katsuyoshi; Fujioka, Hiromu; Kubo, Takeshi

    2005-03-01

    The measurement of eye movements in three dimensions is an important tool to investigate the human vestibular and oculomotor system. The primary methods for three-dimensional eye movement measurement are the scleral search coil system (SSCS) and video-oculography (VOG). In the present study, we compared the accuracy of VOG with that of SSCS using an artificial eye. We then analyzed the Y (pitch) and Z (yaw) components of human eye movements during saccades, smooth pursuit and optokinetic nystagmus, and the X (roll) component of human eye movement during the torsional vestibulo-ocular reflex induced by rotation in normal subjects, using simultaneous VOG and SSCS measures. The coefficients of the linear relationship between the angle of a simulated eyeball and the angle measured by both VOG and SSCS were almost unity, with y-intercepts close to zero, for torsional (X), vertical (Y) and horizontal (Z) movements, indicating that the in vitro accuracy of VOG was similar to that of SSCS. The average difference between VOG and SSCS was 0.56 degrees, 0.78 degrees and 0.18 degrees for the X, Y and Z components of human eye movements, respectively. Both the in vitro and in vivo comparisons demonstrate that VOG has accuracy comparable to SSCS, and is a reliable method for measurement of three-dimensional (3D) human eye movements.

  5. Accuracy analysis of a multi-view stereo approach for phenotyping of tomato plants at the organ level.

    PubMed

    Rose, Johann Christian; Paulus, Stefan; Kuhlmann, Heiner

    2015-04-24

    Accessing a plant's 3D geometry has become of significant importance for phenotyping during the last few years. Close-up laser scanning is an established method to acquire 3D plant shapes in real time with high detail, but it is stationary and has high investment costs. 3D reconstruction from images using structure from motion (SfM) and multi-view stereo (MVS) is a flexible, cost-effective method, but requires post-processing procedures. The aim of this study is to evaluate the potential measuring accuracy of an SfM- and MVS-based photogrammetric method for the task of organ-level plant phenotyping. For this, reference data are provided by a high-accuracy close-up laser scanner. Using both methods, point clouds of several tomato plants were reconstructed on six consecutive days. The parameters leaf area, main stem height and convex hull of the complete plant were extracted from the 3D point clouds and compared to the reference data regarding accuracy and correlation. These parameters were chosen to reflect the demands of current phenotyping scenarios. The study shows that the photogrammetric approach is highly suitable for the presented monitoring scenario, yielding high correlations to the reference measurements. This cost-effective 3D reconstruction method represents an alternative to an expensive laser scanner in the studied scenarios, with potential for automated procedures.

  6. Comparison of the Accuracy of Two Conventional Phenotypic Methods and Two MALDI-TOF MS Systems with That of DNA Sequencing Analysis for Correctly Identifying Clinically Encountered Yeasts

    PubMed Central

    Chao, Qiao-Ting; Lee, Tai-Fen; Teng, Shih-Hua; Peng, Li-Yun; Chen, Ping-Hung; Teng, Lee-Jene; Hsueh, Po-Ren

    2014-01-01

    We assessed the accuracy of species-level identification of two commercially available matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) systems (Bruker Biotyper and Vitek MS) and two conventional phenotypic methods (Phoenix 100 YBC and Vitek 2 Yeast ID) with that of rDNA gene sequencing analysis among 200 clinical isolates of commonly encountered yeasts. The correct identification rates of the 200 yeast isolates to species or complex (Candida parapsilosis complex, C. guilliermondii complex and C. rugosa complex) levels by the Bruker Biotyper, Vitek MS (using the in vitro devices [IVD] database), Phoenix 100 YBC and Vitek 2 Yeast ID (Sabouraud's dextrose agar) systems were 92.5%, 79.5%, 89%, and 74%, respectively. An additional 72 isolates of the C. parapsilosis complex and 18 from the above 200 isolates (30 in each of C. parapsilosis, C. metapsilosis, and C. orthopsilosis) were also evaluated separately. The Bruker Biotyper system could accurately identify all C. parapsilosis complex isolates to the species level. Using the Vitek 2 MS (IVD) system, all C. parapsilosis but none of the C. metapsilosis or C. orthopsilosis isolates could be accurately identified. Among the 89 yeasts misidentified by the Vitek 2 MS (IVD) system, 39 (43.8%), including 27 C. orthopsilosis isolates, could be correctly identified using the Vitek MS Plus SARAMIS database (for research use only). This resulted in an increase in the rate of correct identification of all yeast isolates (87.5%) by Vitek 2 MS. The two species in the C. guilliermondii complex (C. guilliermondii and C. fermentati) were correctly identified by cluster analysis of spectra generated by the Bruker Biotyper system. Based on the results obtained in the current study, MALDI-TOF MS systems present a promising alternative for the routine identification of yeast species, including clinically commonly and rarely encountered yeast species and several species belonging to the C. parapsilosis complex, C. guilliermondii complex

  7. Thiopurine S-methyltransferase testing for averting drug toxicity: a meta-analysis of diagnostic test accuracy

    PubMed Central

    Zur, RM; Roy, LM; Ito, S; Beyene, J; Carew, C; Ungar, WJ

    2016-01-01

    Thiopurine S-methyltransferase (TPMT) deficiency increases the risk of serious adverse events in persons receiving thiopurines. The objective was to synthesize reported sensitivity and specificity of TPMT phenotyping and genotyping using a latent class hierarchical summary receiver operating characteristic meta-analysis. In 27 studies, pooled sensitivity and specificity of phenotyping for deficient individuals was 75.9% (95% credible interval (CrI), 58.3–87.0%) and 98.9% (96.3–100%), respectively. For genotype tests evaluating TPMT*2 and TPMT*3, sensitivity and specificity was 90.4% (79.1–99.4%) and 100.0% (99.9–100%), respectively. For individuals with deficient or intermediate activity, phenotype sensitivity and specificity was 91.3% (86.4–95.5%) and 92.6% (86.5–96.6%), respectively. For genotype tests evaluating TPMT*2 and TPMT*3, sensitivity and specificity was 88.9% (81.6–97.5%) and 99.2% (98.4–99.9%), respectively. Genotyping has higher sensitivity as long as TPMT*2 and TPMT*3 are tested. Both approaches display high specificity. Latent class meta-analysis is a useful method for synthesizing diagnostic test performance data for clinical practice guidelines. PMID:27217052

  8. Diagnostic Accuracy of Transcranial Sonography of the Substantia Nigra in Parkinson’s disease: A Systematic Review and Meta-analysis

    PubMed Central

    Li, Dun-Hui; He, Ya-Chao; Liu, Jun; Chen, Sheng-Di

    2016-01-01

    A large number of articles have reported substantia nigra hyperechogenicity in Parkinson’s disease (PD) and have assessed the diagnostic accuracy of transcranial sonography (TCS); however, the conclusions are discrepant. Consequently, this systematic review and meta-analysis aims to consolidate the available observational studies and provide a comprehensive evaluation of the clinical utility of TCS in PD. In total, 31 studies containing 4,386 participants from 13 countries were included. A random effects model was utilized to pool the effect sizes. Meta-regression and sensitivity analysis were performed to explore potential heterogeneity. The overall diagnostic accuracy of TCS in differentiating PD from normal controls was quite high, with a pooled sensitivity of 0.83 (95% CI: 0.81–0.85) and a pooled specificity of 0.87 (95% CI: 0.85–0.88). The positive likelihood ratio, negative likelihood ratio and diagnostic odds ratio were 6.94 (95% CI: 5.09–9.48), 0.19 (95% CI: 0.16–0.23), and 42.89 (95% CI: 30.03–61.25), respectively. Our systematic review of the literature and meta-analysis suggest that TCS has high diagnostic accuracy in the diagnosis of PD compared with healthy controls. PMID:26878893

  9. Local structure analysis of materials for increased energy efficiency

    NASA Astrophysics Data System (ADS)

    Medling, Scott

    In this dissertation, a wide range of materials which exhibit interesting properties with potential for energy efficiency applications are investigated. The bulk of the research was conducted using the Extended X-ray Absorption Fine Structure (EXAFS) technique. EXAFS is a powerful tool for elucidating the local structure of novel materials, and its advantages are presented in Chapter 2. In Chapter 3, I present details on two new techniques which are used in studies later in this dissertation, but are also promising for other, unrelated studies and, therefore, warrant being discussed generally. I explain the presence of and present a method for subtracting the X-ray Raman background in the fluorescence window when collecting fluorescence EXAFS data of a dilute dopant Z in a Z+1 host. I introduce X-ray magnetic circular dichroism (XMCD) and discuss the process to reduce XMCD data, including the self-absorption corrections for low energy K-edges. In Chapter 4, I present a series of investigations on ZnS:Cu electroluminescent phosphors. Optical microscopy indicates that the emission centers do not degrade uniformly or monotonically; rather, most of the emission centers blink on and off during degradation. The effect of this on various proposed degradation mechanisms is discussed. EXAFS data of ZnS:Cu phosphors ground to enable thinner, lower-voltage devices indicate that grinding preferentially causes damage to the CuS nanoprecipitates, quenching electroluminescence (EL); the conclusion is that smaller particles must be built up from nanoparticles instead. EXAFS data of nanoparticles show that adding a ZnS shell outside a ZnS:Cu core provides significant additional encapsulation of the Cu, increasing photoluminescence and indicating that this may increase EL if devices can be fabricated. Data from extremely dilute (0.02% Cu) ZnS:Cu nanoparticles are presented in order to specifically study the non-precipitate and suggest that the Cu dopant substitutes for Zn and is

  10. Effectiveness of Preanalytic Practices on Contamination and Diagnostic Accuracy of Urine Cultures: a Laboratory Medicine Best Practices Systematic Review and Meta-analysis

    PubMed Central

    Franek, Jacob; Leibach, Elizabeth K.; Weissfeld, Alice S.; Kraft, Colleen S.; Sautter, Robert L.; Baselski, Vickie; Rodahl, Debra; Peterson, Edward J.; Cornish, Nancy E.

    2015-01-01

    SUMMARY Background. Urinary tract infection (UTI) in the United States is the most common bacterial infection, and urine cultures often make up the largest portion of workload for a hospital-based microbiology laboratory. Appropriately managing the factors affecting the preanalytic phase of urine culture contributes significantly to the generation of meaningful culture results that ultimately affect patient diagnosis and management. Urine culture contamination can be reduced with proper techniques for urine collection, preservation, storage, and transport, the major factors affecting the preanalytic phase of urine culture. Objectives. The purposes of this review were to identify and evaluate preanalytic practices associated with urine specimens and to assess their impact on the accuracy of urine culture microbiology. Specific practices included collection methods for men, women, and children; preservation of urine samples in boric acid solutions; and the effect of refrigeration on stored urine. Practice efficacy and effectiveness were measured by two parameters: reduction of urine culture contamination and increased accuracy of patient diagnosis. The CDC Laboratory Medicine Best Practices (LMBP) initiative's systematic review method for assessment of quality improvement (QI) practices was employed. Results were then translated into evidence-based practice guidelines. Search strategy. A search of three electronic bibliographic databases (PubMed, SCOPUS, and CINAHL), as well as hand searching of bibliographies from relevant information sources, for English-language articles published between 1965 and 2014 was conducted. Selection criteria. The search contained the following medical subject headings and key text words: urinary tract infections, UTI, urine/analysis, urine/microbiology, urinalysis, specimen handling, preservation, biological, preservation, boric acid, boric acid/borate, refrigeration, storage, time factors, transportation, transport time, time delay

  11. Spectral Accuracy and Sulfur Counting Capabilities of the LTQ-FT-ICR and the LTQ-Orbitrap XL for Small Molecule Analysis

    NASA Astrophysics Data System (ADS)

    Blake, Samantha L.; Walker, S. Hunter; Muddiman, David C.; Hinks, David; Beck, Keith R.

    2011-12-01

    Color Index Disperse Yellow 42 (DY42), a high-volume disperse dye for polyester, was used to compare the capabilities of the LTQ-Orbitrap XL and the LTQ-FT-ICR with respect to mass measurement accuracy (MMA), spectral accuracy, and sulfur counting. The results of this research will be used in the construction of a dye database for forensic purposes; the additional spectral information will increase the confidence in the identification of unknown dyes found in fibers at crime scenes. Initial LTQ-Orbitrap XL data showed MMAs greater than 3 ppm and poor spectral accuracy. Modification of several Orbitrap installation parameters (e.g., deflector voltage) resulted in a significant improvement of the data. The LTQ-FT-ICR and LTQ-Orbitrap XL (after installation parameters were modified) exhibited MMA ≤ 3 ppm, good spectral accuracy (χ2 values for the isotopic distribution ≤ 2), and were correctly able to ascertain the number of sulfur atoms in the compound at all resolving powers investigated for AGC targets of 5.00 × 105 and 1.00 × 106.

  12. Application of a normalized Nash-Sutcliffe efficiency to improve the accuracy of the Sobol' sensitivity analysis of a hydrological model

    NASA Astrophysics Data System (ADS)

    Nossent, J.; Bauwens, W.

    2012-04-01

    The normalized Nash-Sutcliffe efficiency is defined as NNSE = 1/(2 - NSE), where NSE = 1 - Σ(si - oi)² / Σ(oi - ō)², si is the simulated value on day i, oi is the observed value on day i and ō is the average of the observations. As for the regular NSE, 1 is the optimal value for the NNSE. On the other hand, a value of 0.5 for the NNSE corresponds to a value of 0 for the NSE, whereas the worst NNSE value is 0. As a consequence, the mean value of the scalar inputs for the SA is, for the different variables in our SWAT model, smaller than 0.5 and mostly even less than 0.05, which increases the accuracy of the variance estimates. Besides the introduction of this normalized Nash-Sutcliffe efficiency, our presentation will furthermore provide evidence on the influence of the applied objective function on the outcome of the sensitivity analysis. References: Nossent, J., Elsen, P., Bauwens, W. (2011): Sobol' sensitivity analysis of a complex environmental model. Environmental Modelling & Software 26, 1515-1525. Sobol', I.M. (2001): Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Mathematics and Computers in Simulation 55 (1-3), 271-280. Sobol', I.M. (1990): On sensitivity estimation for nonlinear mathematical models. Matematicheskoe Modelirovanie, 112-118.
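
    A minimal sketch of the NSE and of the normalization described above (NNSE = 1/(2 - NSE), consistent with the stated mapping of NSE = 1 to 1 and NSE = 0 to 0.5) is given below; the observation and simulation series are illustrative.

        import numpy as np

        def nse(simulated, observed):
            """Nash-Sutcliffe efficiency."""
            s = np.asarray(simulated, dtype=float)
            o = np.asarray(observed, dtype=float)
            return 1.0 - np.sum((s - o) ** 2) / np.sum((o - o.mean()) ** 2)

        def nnse(simulated, observed):
            """Normalized NSE: maps NSE = 1 to 1, NSE = 0 to 0.5 and very poor fits towards 0."""
            return 1.0 / (2.0 - nse(simulated, observed))

        observed = [2.1, 3.4, 5.0, 4.2, 3.1]
        simulated = [2.0, 3.6, 4.8, 4.5, 3.0]
        print(nse(simulated, observed), nnse(simulated, observed))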

  13. The Effectiveness of Noninvasive Biomarkers to Predict Hepatitis B-Related Significant Fibrosis and Cirrhosis: A Systematic Review and Meta-Analysis of Diagnostic Test Accuracy

    PubMed Central

    Xu, Xue-Ying; Kong, Hong; Song, Rui-Xiang; Zhai, Yu-Han; Wu, Xiao-Fei; Ai, Wen-Si; Liu, Hong-Bo

    2014-01-01

    Noninvasive biomarkers have been developed to predict hepatitis B virus (HBV)-related fibrosis owing to the significant limitations of liver biopsy. Those biomarkers were initially derived from evaluation of hepatitis C virus (HCV)-related fibrosis, and their accuracy among HBV-infected patients was under constant debate. A systematic review was conducted on records in PubMed, EMBASE and the Cochrane Library electronic databases, up until April 1st, 2013, in order to systematically assess the effectiveness and accuracy of these biomarkers for predicting HBV-related fibrosis. The questionnaire for quality assessment of diagnostic accuracy studies (QUADAS) was used. Out of 115 articles evaluated for eligibility, 79 studies satisfied the pre-determined inclusion criteria for meta-analysis. Eventually, our final data set for the meta-analysis contained 30 studies. The areas under the SROC curve for APRI, FIB-4, and FibroTest of significant fibrosis were 0.77, 0.75, and 0.84, respectively. For cirrhosis, the areas under the SROC curve for APRI, FIB-4 and FibroTest were 0.75, 0.87, and 0.90, respectively. The heterogeneity of FIB-4 and FibroTest were not statistically significant. The heterogeneity of APRI for detecting significant fibrosis was affected by median age (P = 0.0211), and for cirrhosis was affected by etiology (P = 0.0159). Based on the analysis we claim that FibroTest has excellent diagnostic accuracy for identification of HBV-related significant fibrosis and cirrhosis. FIB-4 has modest benefits and may be suitable for wider scope implementation. PMID:24964038

  14. Accuracy of gap analysis habitat models in predicting physical features for wildlife-habitat associations in the southwest U.S.

    USGS Publications Warehouse

    Boykin, K.G.; Thompson, B.C.; Propeck-Gray, S.

    2010-01-01

    Despite widespread and long-standing efforts to model wildlife-habitat associations using remotely sensed and other spatially explicit data, there are relatively few evaluations of the performance of variables included in predictive models relative to actual features on the landscape. As part of the National Gap Analysis Program, we specifically examined physical site features at randomly selected sample locations in the Southwestern U.S. to assess degree of concordance with predicted features used in modeling vertebrate habitat distribution. Our analysis considered hypotheses about relative accuracy with respect to 30 vertebrate species selected to represent the spectrum of habitat generalist to specialist and categorization of site by relative degree of conservation emphasis accorded to the site. Overall comparison of 19 variables observed at 382 sample sites indicated ≥60% concordance for 12 variables. Directly measured or observed variables (slope, soil composition, rock outcrop) generally displayed high concordance, while variables that required judgments regarding descriptive categories (aspect, ecological system, landform) were less concordant. There were no differences detected in concordance among taxa groups, degree of specialization or generalization of selected taxa, or land conservation categorization of sample sites with respect to all sites. We found no support for the hypothesis that accuracy of habitat models is inversely related to degree of taxa specialization when model features for a habitat specialist could be more difficult to represent spatially. Likewise, we did not find support for the hypothesis that physical features will be predicted with higher accuracy on lands with greater dedication to biodiversity conservation than on other lands because of relative differences regarding available information. Accuracy generally was similar (>60%) to that observed for land cover mapping at the ecological system level. These patterns demonstrate

  15. Diagnostic Accuracy of Methylated SEPT9 for Blood-based Colorectal Cancer Detection: A Systematic Review and Meta-Analysis

    PubMed Central

    Nian, Jiayun; Sun, Xu; Ming, SuYang; Yan, Chen; Ma, Yunfei; Feng, Ying; Yang, Lin; Yu, Mingwei; Zhang, Ganlin; Wang, Xiaomin

    2017-01-01

    Objectives: More convenient and effective blood-based methods are believed to increase the uptake of colorectal cancer (CRC) detection. The effectiveness of methylated SEPT9 for CRC detection has been reviewed in the newly published recommendation statement by the US Preventive Services Task Force (USPSTF), but detailed instructions were not provided, which may be a result of insufficient evidence. Therefore, more evidence is needed to assist practitioners to thoroughly understand the utilization of this special marker. Methods: Based on the standard method, a systematic review and meta-analysis was performed. QUADAS-2 was used to assess the methodological quality of studies. Relevant studies were searched and screened from PubMed, Embase and other literature databases up to June 1, 2016. Pooled sensitivity, specificity and diagnostic odds ratio were summarized by a bivariate mixed-effects model, and the area under the curve (AUC) was estimated by a hierarchical summary receiver operating characteristic curve. Results: 25 studies were included for analysis. The pooled sensitivity, specificity and AUC were 0.71, 0.92 and 0.88, respectively. Among the various methods and assays, Epipro Colon 2.0 with the 2/3 algorithm was the most effective in colorectal cancer detection. The positive rate of mSEPT9 was higher in advanced CRC (45% in stage I, 70% in II, 76% in III, 79% in IV) and in tissue with lower differentiation (31% in high, 73% in moderate, 90% in low). However, this marker has a poor ability to identify precancerous lesions according to current evidence. Conclusions: mSEPT9 is a reliable blood-based marker in CRC detection, particularly advanced CRC. Epipro Colon 2.0 with the 2/3 algorithm is currently the optimal method and assay to detect CRC. PMID:28102859

  16. Meta-analysis of time perception and temporal processing in schizophrenia: Differential effects on precision and accuracy.

    PubMed

    Thoenes, Sven; Oberfeld, Daniel

    2017-03-29

    Numerous studies have reported that time perception and temporal processing are impaired in schizophrenia. In a meta-analytical review, we differentiate between time perception (judgments of time intervals) and basic temporal processing (e.g., judgments of temporal order) as well as between effects on accuracy (deviation of estimates from the veridical value) and precision (variability of judgments). In a meta-regression approach, we also included the specific tasks and the different time interval ranges as covariates. We considered 68 publications from the past 65 years, and meta-analyzed data from 957 patients with schizophrenia and 1060 healthy control participants. Independent of tasks and interval durations, our results demonstrate that time perception and basic temporal processing are less precise (more variable) in patients (Hedges' g > 1.00), whereas effects of schizophrenia on accuracy of time perception are rather small and task-dependent. Our review also shows that several aspects, e.g., potential influences of medication, have not yet been investigated in sufficient detail. In conclusion, the results are in accordance with theoretical assumptions and the notion of a more variable internal clock in patients with schizophrenia, but not with a strong effect of schizophrenia on clock speed. The impairment of temporal precision, however, may also be clock-unspecific as part of a general cognitive deficiency in schizophrenia.

  17. UV-visible microscope spectrophotometric polarization and dichroism with increased discrimination power in forensic analysis

    NASA Astrophysics Data System (ADS)

    Purcell, Dale Kevin

    The figures of merit investigated included: 1) wavelength accuracy, 2) wavelength precision, 3) wavelength resolution stability, 4) photometric accuracy, 5) photometric precision, 6) photometric linearity, 7) photometric noise, and 8) short-term baseline stability. In addition, intrinsic instrument polarization effects were investigated to determine the impact of these properties on spectral interpretation and data quality. Finally, a set of recommendations was developed which describes instrument performance characteristics for microscope and spectrometer features and functions, and specific instrument parameters that must be controlled in order to acquire high quality data from an ultraviolet-visible forensic microscope spectrophotometer system for increased discrimination power.

  18. The zero-multipole summation method for estimating electrostatic interactions in molecular dynamics: analysis of the accuracy and application to liquid systems.

    PubMed

    Fukuda, Ikuo; Kamiya, Narutoshi; Nakamura, Haruki

    2014-05-21

    In the preceding paper [I. Fukuda, J. Chem. Phys. 139, 174107 (2013)], the zero-multipole (ZM) summation method was proposed for efficiently evaluating the electrostatic Coulombic interactions of a classical point charge system. The summation takes a simple pairwise form, but prevents the electrically non-neutral multipole states that may artificially be generated by a simple cutoff truncation, which often causes large energetic noises and significant artifacts. The purpose of this paper is to judge the ability of the ZM method by investigating the accuracy, parameter dependencies, and stability in applications to liquid systems. To conduct this, first, the energy-functional error was divided into three terms and each term was analyzed by a theoretical error-bound estimation. This estimation gave us a clear basis for the discussion of the numerical investigations. It also gave a new viewpoint on the relation between the excess energy error and the damping effect of the damping parameter. Second, with the aid of these analyses, the ZM method was evaluated based on molecular dynamics (MD) simulations of two fundamental liquid systems, a molten sodium chloride ion system and a pure water molecule system. In the ion system, the energy accuracy, compared with the Ewald summation, was better for larger values of the multipole moment l, up to l ≲ 3 on average. This accuracy improvement with increasing l is due to the enhancement of the excess-energy accuracy. However, this improvement is fully effective for the total accuracy only if the theoretical moment l is smaller than or equal to a system-intrinsic moment L. The simulation results thus indicate L ∼ 3 in this system, and we observed reduced accuracy for l = 4. We demonstrated the origins of the parameter dependencies appearing in the crossing behavior and the oscillations of the energy error curves. With raising the moment l, we observed that smaller values of the damping parameter provided more accurate results and smoother

  19. Improvement of Predictive Accuracy on Subchannel Analysis Code (NASCA) for Tight-Lattice Rod Bundle Tests - Optimization of UEDA'S Entrainment Model Parameter and Cross Flow Model Parameters

    SciTech Connect

    Hiromasa Chitose; Akitoshi Hotta; Akira Ohnuki; Ken Fujimura

    2006-07-01

    The Reduced-Moderation Water Reactor (RMWR) is being developed at the Japan Atomic Energy Agency, and demonstration of the core heat removal performance is one of the most important issues. However, a full-scale bundle experiment is technically difficult to operate because of the large fuel rod bundle size, which would consume an enormous amount of electricity. Hence, development of an analysis code that can simulate the RMWR core thermal-hydraulic performance with high accuracy is expected. Subchannel analysis is the most powerful technique to resolve the problem. A subchannel analysis code NASCA (Nuclear-reactor Advanced Sub-Channel Analysis code) has been developed to improve capabilities of analyzing transient two-phase flow phenomena, boiling transition (BT) and post BT, and the NASCA code is applicable to the thermal-hydraulic analysis of current BWR fuel. In the present study, the prediction accuracy of the NASCA code has been investigated using the reduced-scale rod bundle test data, and its applicability to the RMWR has been improved by optimizing the mechanistic constitutive models. (authors)

  20. A Balanced Accuracy Fitness Function Leads to Robust Analysis using Grammatical Evolution Neural Networks in the Case of Class Imbalance.

    PubMed

    Hardison, Nicholas E; Fanelli, Theresa J; Dudek, Scott M; Reif, David M; Ritchie, Marylyn D; Motsinger-Reif, Alison A

    2008-01-01

    Grammatical Evolution Neural Networks (GENN) is a computational method designed to detect gene-gene interactions in genetic epidemiology, but has so far only been evaluated in situations with balanced numbers of cases and controls. Real data, however, rarely has such perfectly balanced classes. In the current study, we test the power of GENN to detect interactions in data with a range of class imbalance using two fitness functions (classification error and balanced error), as well as data re-sampling. We show that when using classification error, class imbalance greatly decreases the power of GENN. Re-sampling methods demonstrated improved power, but using balanced accuracy resulted in the highest power. Based on the results of this study, balanced error has replaced classification error in the GENN algorithm.
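
    The sketch below contrasts classification error with balanced error on an imbalanced toy dataset; it illustrates the fitness change described above but is not the GENN implementation, and the labels are synthetic.

    ```python
    # Minimal sketch: classification error vs. balanced error under class imbalance.
    import numpy as np

    def classification_error(y_true, y_pred):
        return float(np.mean(np.asarray(y_true) != np.asarray(y_pred)))

    def balanced_error(y_true, y_pred):
        """Mean of per-class error rates; insensitive to class imbalance."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        return float(np.mean([np.mean(y_pred[y_true == c] != c)
                              for c in np.unique(y_true)]))

    # Imbalanced toy data: a model that always predicts the majority class
    y_true = np.array([0] * 90 + [1] * 10)
    y_pred = np.zeros(100, dtype=int)
    print(classification_error(y_true, y_pred))   # 0.10 -- looks deceptively good
    print(balanced_error(y_true, y_pred))         # 0.50 -- exposes the failure on cases
    ```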

  1. An analysis of approach navigation accuracy and guidance requirements for the grand tour mission to the outer planets

    NASA Technical Reports Server (NTRS)

    Jones, D. W.

    1971-01-01

    The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types was evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.

  2. Impact of dataset diversity on accuracy and sensitivity of parallel factor analysis model of dissolved organic matter fluorescence excitation-emission matrix

    PubMed Central

    Yu, Huarong; Liang, Heng; Qu, Fangshu; Han, Zheng-shuang; Shao, Senlin; Chang, Haiqing; Li, Guibai

    2015-01-01

    Parallel factor (PARAFAC) analysis enables a quantitative analysis of excitation-emission matrix (EEM) data. The impact of spectral variability stemming from a diverse dataset on the representativeness of the PARAFAC model needs to be examined. In this study, samples from a river, effluent of a wastewater treatment plant, and algal secretions were collected and subjected to PARAFAC analysis. PARAFAC models of the global dataset and of the individual datasets were compared. It was found that the peak shift derived from source diversity undermined the accuracy of the global model. The results imply that building a universal PARAFAC model that can be widely applied to fitting new EEMs would be quite difficult, but fitting EEMs to an existing PARAFAC model built for a similar environment would be more realistic. The accuracy of an online monitoring strategy that tracks the fluorescence intensities at the peaks of PARAFAC components was examined by correlating the EEM data with the maximum fluorescence (Fmax) modeled by PARAFAC. For the individual datasets, remarkable correlations were obtained around the peak positions. However, an analysis of cocktail datasets implies that the involvement of foreign components that are spectrally similar to local components would undermine the online monitoring strategy. PMID:25958786
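
    For readers unfamiliar with the model, the sketch below fits a three-way PARAFAC (CP) decomposition by alternating least squares in plain NumPy; it is a toy illustration on random data, not the constrained, validated procedure used for real EEM datasets.

    ```python
    # Minimal sketch of 3-way PARAFAC (CP) by alternating least squares.
    # Toy random data; a real EEM analysis would add non-negativity constraints
    # and model validation (e.g. split-half analysis).
    import numpy as np

    def khatri_rao(a, b):
        """Column-wise Kronecker product of two factor matrices."""
        return np.einsum('ir,jr->ijr', a, b).reshape(-1, a.shape[1])

    def parafac_als(X, rank, n_iter=200, seed=0):
        """Fit X (sample x excitation x emission) ~ sum_r a_r o b_r o c_r."""
        rng = np.random.default_rng(seed)
        I, J, K = X.shape
        A, B, C = (rng.random((n, rank)) for n in (I, J, K))
        X0 = X.reshape(I, J * K)                      # mode-0 unfolding
        X1 = np.moveaxis(X, 1, 0).reshape(J, I * K)   # mode-1 unfolding
        X2 = np.moveaxis(X, 2, 0).reshape(K, I * J)   # mode-2 unfolding
        for _ in range(n_iter):
            A = X0 @ np.linalg.pinv(khatri_rao(B, C)).T
            B = X1 @ np.linalg.pinv(khatri_rao(A, C)).T
            C = X2 @ np.linalg.pinv(khatri_rao(A, B)).T
        return A, B, C

    X = np.random.default_rng(1).random((20, 30, 40))  # toy EEM stack
    A, B, C = parafac_als(X, rank=3)
    print(A.shape, B.shape, C.shape)                   # (20, 3) (30, 3) (40, 3)
    ```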

  3. Comparison of the accuracy of Hybrid Capture II and polymerase chain reaction in detecting clinically important cervical dysplasia: a systematic review and meta-analysis

    PubMed Central

    Luu, Hung N; Dahlstrom, Kristina R; Mullen, Patricia Dolan; VonVille, Helena M; Scheurer, Michael E

    2013-01-01

    The effectiveness of screening programs for cervical cancer has benefited from the inclusion of Human papillomavirus (HPV) DNA assays; which assay to choose, however, is not clear based on previous reviews. Our review addressed the test accuracy of Hybrid Capture II (HCII) and polymerase chain reaction (PCR) assays based on studies with stronger designs and with more clinically relevant outcomes. We searched OvidMedline, PubMed, and the Cochrane Library for English language studies comparing both tests, published 1985–2012, with cervical dysplasia defined by the Bethesda classification. Meta-analysis provided pooled sensitivity, specificity, and 95% confidence intervals (CIs); meta-regression identified sources of heterogeneity. From 29 reports, we found that the pooled sensitivity and specificity to detect high-grade squamous intraepithelial lesion (HSIL) were higher for HCII than for PCR (0.89 [CI: 0.89–0.90] and 0.85 [CI: 0.84–0.86] vs. 0.73 [CI: 0.73–0.74] and 0.62 [CI: 0.62–0.64]). Both assays had higher accuracy for detecting cervical dysplasia in Europe than in Asia-Pacific or North America (diagnostic odds ratio (dOR) = 4.08 [CI: 1.39–11.91] and 4.56 [CI: 1.86–11.17] for HCII vs. 2.66 [CI: 1.16–6.53] and 3.78 [CI: 1.50–9.51] for PCR) and higher accuracy for detecting HSIL than atypical squamous cells of undetermined significance (ASCUS)/low-grade squamous intraepithelial lesion (LSIL) (HCII-dOR = 9.04 [CI: 4.12–19.86] and PCR-dOR = 5.60 [CI: 2.87–10.94]). For HCII, using histology as a gold standard results in higher accuracy than using cytology (dOR = 2.87 [CI: 1.31–6.29]). Based on higher test accuracy, our results support the use of HCII in cervical cancer screening programs. The role of HPV type distribution should be explored to determine the worldwide comparability of HPV test accuracy. PMID:23930214

  4. Added value of cost-utility analysis in simple diagnostic studies of accuracy: (18)F-fluoromethylcholine PET/CT in prostate cancer staging.

    PubMed

    Gerke, Oke; Poulsen, Mads H; Høilund-Carlsen, Poul Flemming

    2015-01-01

    Diagnostic studies of accuracy targeting sensitivity and specificity are commonly done in a paired design in which all modalities are applied in each patient, whereas cost-effectiveness and cost-utility analyses are usually assessed either directly alongside or indirectly by means of stochastic modeling based on larger randomized controlled trials (RCTs). However, the conduct of RCTs is hampered in an environment such as ours, in which technology is rapidly evolving. As such, there is a relatively limited number of RCTs. Therefore, we investigated to what extent paired diagnostic studies of accuracy can also be used to shed light on the economic implications of considering a new diagnostic test. We propose a simple decision tree model-based cost-utility analysis of a diagnostic test compared with the current standard procedure and exemplify this approach with published data from lymph node staging of prostate cancer. Average procedure costs were taken from the Danish Diagnosis Related Groups Tariff in 2013, and life expectancy was estimated for an idealized 60-year-old patient based on prostate cancer stage and prostatectomy or radiation and chemotherapy. Quality-adjusted life-years (QALYs) were deduced from the literature, and an incremental cost-effectiveness ratio (ICER) was used to compare lymph node dissection with respective histopathological examination (reference standard) and (18)F-fluoromethylcholine positron emission tomography/computed tomography (FCH-PET/CT). Lower bounds of sensitivity and specificity of FCH-PET/CT were established at which replacement of the reference standard by FCH-PET/CT comes with a trade-off between worse effectiveness and lower costs. Compared to the reference standard in a diagnostic accuracy study, any imperfection in the accuracy of a diagnostic test implies that replacing the reference standard generates a loss in effectiveness and utility. We conclude that diagnostic studies of accuracy can be put to a more extensive use
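
    A bare-bones version of the ICER comparison is sketched below; the cost and QALY numbers are hypothetical placeholders, not the Danish tariff figures used in the study.

    ```python
    # Minimal sketch of an incremental cost-effectiveness ratio (ICER).
    # All costs and QALY values below are hypothetical.
    def icer(cost_new, qaly_new, cost_ref, qaly_ref):
        return (cost_new - cost_ref) / (qaly_new - qaly_ref)

    # Hypothetical example: cheaper imaging-based staging vs. a surgical
    # reference standard that is slightly more effective.
    cost_ref, qaly_ref = 6500.0, 9.10
    cost_new, qaly_new = 2200.0, 9.02
    # With lower cost and lower effectiveness, the ICER reads as cost saved
    # per QALY forgone when the cheaper test replaces the reference standard.
    print(f"ICER: {icer(cost_new, qaly_new, cost_ref, qaly_ref):.0f} per QALY")
    ```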

  5. Collecting kinematic data on a ski/snowboard track with panning, tilting, and zooming cameras: is there sufficient accuracy for a biomechanical analysis?

    PubMed

    Klous, Miriam; Müller, Erich; Schwameder, Hermann

    2010-10-01

    For biomechanical research in several sports (e.g. skiing and snowboarding), field experiments are essential because these activities are performed over a great distance and in conditions that could not be reproduced in a controlled laboratory environment. High technical standards in kinematic set-up are necessary to achieve the required accuracy for biomechanical analysis. The purpose of this study was to determine the accuracy of the kinematic data collected in a ski and snowboard field experiment. Eight tests generally used in laboratory settings were adapted to field conditions on a skiing slope to determine the error related to motion capture. The calculated photogrammetric errors in the x-, y-, and z-direction were 11 mm, 9 mm, and 13 mm, respectively. The maximum error caused by soft tissue artifacts was 39 mm. These results indicate that accuracy of kinematic data in the described field experiment was comparable to that found in literature for laboratory experiments. It may be concluded that accurate kinematic data collection for skiing and snowboarding can be performed in a field setting and that these results are accurate enough to serve as input data for further analyses.

  6. The accuracy of pain drawing in identifying psychological distress in low back pain—systematic review and meta-analysis of diagnostic studies

    PubMed Central

    Bertozzi, Lucia; Rosso, Anna; Romeo, Antonio; Villafañe, Jorge Hugo; Guccione, Andrew A.; Pillastrini, Paolo; Vanti, Carla

    2015-01-01

    The aim of this systematic review and meta-analysis was to estimate the accuracy of qualitative pain drawings (PDs) in identifying psychological distress in subacute and chronic low back pain (LBP) patients. [Subjects and Methods] Data were obtained from searches of PubMed, EBSCO, Scopus, PsycINFO and ISI Web of Science from their inception to July 2014. Quality assessments of bias and applicability were conducted using the Quality of Diagnostic Accuracy Studies-2 (QUADAS-2). [Results] The summary estimates were: sensitivity=0.45 (95% CI 0.34, 0.61), specificity=0.66 (95% CI 0.53, 0.82), positive likelihood ratio=1.23 (95% CI 0.93, 1.62), negative likelihood ratio=0.84 (95% CI 0.70, 1.01), and diagnostic odds ratio=1.46 (95% CI 0.79, 2.68). The area under the curve was 78% (CI, 57 to 99%). [Conclusion] The results of this systematic review do not show broad and unqualified support for the accuracy of PDs in detecting psychological distress in subacute and chronic LBP. PMID:26644701
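
    The summary statistics reported above all derive from 2x2 diagnostic tables; the sketch below computes the single-study versions from hypothetical counts.

    ```python
    # Minimal sketch of diagnostic accuracy statistics from one 2x2 table
    # (hypothetical counts; the review pools these across studies).
    def diagnostic_metrics(tp, fp, fn, tn):
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        lr_pos = sens / (1 - spec)      # positive likelihood ratio
        lr_neg = (1 - sens) / spec      # negative likelihood ratio
        dor = lr_pos / lr_neg           # diagnostic odds ratio
        return dict(sensitivity=sens, specificity=spec,
                    LR_plus=lr_pos, LR_minus=lr_neg, DOR=dor)

    print(diagnostic_metrics(tp=45, fp=34, fn=55, tn=66))
    ```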

  7. Effects of Recovery Behavior and Strain-Rate Dependence of Stress-Strain Curve on Prediction Accuracy of Thermal Stress Analysis During Casting

    NASA Astrophysics Data System (ADS)

    Motoyama, Yuichi; Shiga, Hidetoshi; Sato, Takeshi; Kambe, Hiroshi; Yoshida, Makoto

    2017-03-01

    Recovery behavior (recovery) and strain-rate dependence of the stress-strain curve (strain-rate dependence) are incorporated into constitutive equations of alloys to predict residual stress and thermal stress during casting. Nevertheless, few studies have systematically investigated the effects of these metallurgical phenomena on the prediction accuracy of thermal stress in a casting. This study compares the thermal stress analysis results with in situ thermal stress measurement results of an Al-Si-Cu specimen during casting. The results underscore the importance for the alloy constitutive equation of incorporating strain-rate dependence to predict thermal stress that develops at high temperatures where the alloy shows strong strain-rate dependence of the stress-strain curve. However, the prediction accuracy of the thermal stress developed at low temperatures did not improve by considering the strain-rate dependence. Incorporating recovery into the constitutive equation improved the accuracy of the simulated thermal stress at low temperatures. Results of comparison implied that the constitutive equation should include strain-rate dependence to simulate defects that develop from thermal stress at high temperatures, such as hot tearing and hot cracking. Recovery should be incorporated into the alloy constitutive equation to predict the casting residual stress and deformation caused by the thermal stress developed mainly in the low temperature range.

  8. Curriculum-based measurement of oral reading (R-CBM): a diagnostic test accuracy meta-analysis of evidence supporting use in universal screening.

    PubMed

    Kilgus, Stephen P; Methe, Scott A; Maggin, Daniel M; Tomasula, Jessica L

    2014-08-01

    A great deal of research over the past decade has examined the appropriateness of curriculum-based measurement of oral reading (R-CBM) in universal screening. Multiple researchers have meta-analyzed available correlational evidence, yielding support for the interpretation of R-CBM as an indicator of general reading proficiency. In contrast, researchers have yet to synthesize diagnostic accuracy evidence, which pertains to the defensibility of the use of R-CBM for screening purposes. The overall purpose of this research was to therefore conduct the first meta-analysis of R-CBM diagnostic accuracy research. A systematic search of the literature resulted in the identification of 34 studies, including 20 peer-reviewed articles, 7 dissertations, and 7 technical reports. Bivariate hierarchical linear models yielded generalized estimates of diagnostic accuracy statistics, which predominantly exceeded standards for acceptable universal screener performance. For instance, when predicting criterion outcomes within a school year (≤9 months), R-CBM sensitivity ranged between .80 and .83 and specificity ranged between .71 and .73. Multiple moderators of R-CBM diagnostic accuracy were identified, including the (a) R-CBM cut score used to define risk, (b) lag in time between R-CBM and criterion test administration, and (c) percentile rank corresponding to the criterion test cut score through which students were identified as either truly at risk or not at risk. Follow-up analyses revealed substantial variability of extracted cut scores within grade and time of year (i.e., fall, winter, and spring). This result called into question the inflexible application of a single cut score across contexts and suggested the potential necessity of local cut scores. Implications for practices, directions for future research, and limitations are discussed.

  9. Diagnostic test accuracy of loop-mediated isothermal amplification assay for Mycobacterium tuberculosis: systematic review and meta-analysis

    PubMed Central

    Nagai, Kenjiro; Horita, Nobuyuki; Yamamoto, Masaki; Tsukahara, Toshinori; Nagakura, Hideyuki; Tashiro, Ken; Shibata, Yuji; Watanabe, Hiroki; Nakashima, Kentaro; Ushio, Ryota; Ikeda, Misako; Narita, Atsuya; Kanai, Akinori; Sato, Takashi; Kaneko, Takeshi

    2016-01-01

    The diagnostic test accuracy of the loop-mediated isothermal amplification (LAMP) assay for culture-proven tuberculosis is unclear. We searched electronic databases for both cohort and case-control studies that provided data to calculate sensitivity and specificity. The index test was any LAMP assay, including both commercialized kits and in-house assays. Culture-proven M. tuberculosis was considered a positive reference test. We included 26 studies on 9330 sputum samples and one study on 315 extra-pulmonary specimens. For sputum samples, 26 studies yielded summary estimates of sensitivity of 89.6% (95% CI 85.6–92.6%), specificity of 94.0% (95% CI 91.0–96.1%), and a diagnostic odds ratio of 145 (95% CI 93–226). Nine studies focusing on Loopamp MTBC yielded summary estimates of sensitivity of 80.9% (95% CI 76.0–85.1%) and specificity of 96.5% (95% CI 94.7–97.7%). Loopamp MTBC had higher sensitivity and lower specificity for smear-positive sputa compared to smear-negative sputa. In-house assays showed higher sensitivity and lower specificity compared to Loopamp MTBC. LAMP promises to be a useful test for the diagnosis of TB; however, there is still a need to make the assay simpler, cheaper and more efficient so that it is competitive with other PCR methods already available. PMID:27958360

  10. Personal Verification/Identification via Analysis of the Peripheral ECG Leads: Influence of the Personal Health Status on the Accuracy

    PubMed Central

    Jekova, Irena; Bortolan, Giovanni

    2015-01-01

    Traditional means for identity validation (PIN codes, passwords), and physiological and behavioral biometric characteristics (fingerprint, iris, and speech) are susceptible to hacker attacks and/or falsification. This paper presents a method for person verification/identification based on correlation of present-to-previous limb ECG leads: lead I (rI), lead II (rII), the first principal ECG component calculated from them (rPCA), and linear and nonlinear combinations of rI, rII, and rPCA. For the verification task, the one-to-one scenario is applied and threshold values for rI, rII, and rPCA and their combinations are derived. The identification task assumes a one-to-many scenario, and the tested subject is identified according to the maximal correlation with a previously recorded ECG in a database. The population-based ECG-ILSA database of 540 patients (147 healthy subjects, 175 patients with cardiac diseases, and 218 with hypertension) has been considered. In addition, a common reference PTB dataset (14 healthy individuals) with a short time interval between the two acquisitions was taken into account. The results on the ECG-ILSA database were satisfactory for healthy people, with no significant decrease for non-healthy patients, demonstrating the robustness of the proposed method. With the PTB database, the method provides an identification accuracy of 92.9% and a verification sensitivity and specificity of 100% and 89.9%. PMID:26568954
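
    The verification step reduces to thresholding a correlation between a present and a previously enrolled lead; the sketch below illustrates that idea with synthetic beat templates and a hypothetical threshold.

    ```python
    # Minimal sketch: accept an identity claim if the correlation between the
    # present and the enrolled single-lead template exceeds a threshold.
    # Signals and the 0.95 threshold are synthetic/hypothetical.
    import numpy as np

    def verify(beat_now, beat_enrolled, threshold=0.95):
        r = float(np.corrcoef(beat_now, beat_enrolled)[0, 1])
        return r >= threshold

    t = np.linspace(0, 1, 500)
    enrolled = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 15 * t)
    same_person = enrolled + 0.05 * np.random.default_rng(0).normal(size=t.size)
    other_person = np.sin(2 * np.pi * 0.9 * t) + 0.3 * np.sin(2 * np.pi * 22 * t)

    print(verify(same_person, enrolled))    # expected True
    print(verify(other_person, enrolled))   # expected False
    ```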

  11. Expected accuracy in a measurement of the CKM angle alpha using a Dalitz plot analysis of B0 ---> rho pi decays in the BTeV project

    SciTech Connect

    Shestermanov, K.E.; Vasiliev, A.N; Butler, J.; Derevschikov, A.A.; Kasper, P.; Kiselev, V.V.; Kravtsov, V.I.; Kubota, Y.; Kutschke, R.; Matulenko, Y.A.; Minaev, N.G.; /Serpukhov, IHEP /Fermilab /Minnesota U. /Syracuse U. /INFN, Milan

    2005-12-01

    A precise measurement of the angle α in the CKM triangle is very important for a complete test of the Standard Model. A theoretically clean method to extract α is provided by B⁰ → ρπ decays. Monte Carlo simulations were performed to obtain the BTeV reconstruction efficiency and to estimate the signal-to-background ratio for these decays. Finally, the time-dependent Dalitz plot analysis, using the isospin amplitude formalism for tree and penguin contributions, was carried out. It was shown that in one year of data taking BTeV could achieve an accuracy on α better than 5°.

  12. Clinical Accuracy of the Respiratory Tumor Tracking System of the CyberKnife: Assessment by Analysis of Log Files

    SciTech Connect

    Hoogeman, Mischa Prevost, Jean-Briac; Nuyttens, Joost; Poell, Johan; Levendag, Peter; Heijmen, Ben

    2009-05-01

    Purpose: To quantify the clinical accuracy of the respiratory motion tracking system of the CyberKnife treatment device. Methods and Materials: Data in log files of 44 lung cancer patients treated with tumor tracking were analyzed. Errors in the correlation model, which relates the internal target motion with the external breathing motion, were quantified. The correlation model error was compared with the geometric error obtained when no respiratory tracking was used. Errors in the prediction method were calculated by subtracting the predicted position from the actual measured position after 192.5 ms (the time lag to prediction in our current system). The prediction error was also measured for a time lag of 115 ms and a new prediction method. Results: The mean correlation model errors were less than 0.3 mm. Standard deviations describing intrafraction variations around the whole-fraction mean error were 0.2 to 1.9 mm for cranio-caudal, 0.1 to 1.9 mm for left-right, and 0.2 to 2.5 mm for anterior-posterior directions. Without the use of respiratory tracking, these variations would have been 0.2 to 8.1 mm, 0.2 to 5.5 mm, and 0.2 to 4.4 mm. The overall mean prediction error was small (0.0 ± 0.0 mm) for all directions. The intrafraction standard deviation ranged from 0.0 to 2.9 mm for a time delay of 192.5 ms but was halved by using the new prediction method. Conclusions: Analyses of the log files of real clinical cases have shown that the geometric error caused by respiratory motion is substantially reduced by the application of respiratory motion tracking.

  13. Theoretical Analysis of the Accuracy and Safety of MRI-Guided Transurethral 3-D Conformal Ultrasound Prostate Therapy

    NASA Astrophysics Data System (ADS)

    Burtnyk, Mathieu; Chopra, Rajiv; Bronskill, Michael

    2009-04-01

    MRI-guided transurethral ultrasound therapy is a promising new approach for the treatment of localized prostate cancer. Several studies have demonstrated the feasibility of producing large regions of thermal coagulation adequate for prostate therapy; however, the quantitative assessment of shaping these regions to complex 3-D human prostate geometries has not been fully explored. This study used numerical simulations and twenty manually-segmented pelvic anatomical models derived from high-quality MR images of prostate cancer patients to evaluate the treatment accuracy and safety of 3-D conformal MRI-guided transurethral ultrasound therapy. The simulations incorporated a rotating multi-element planar dual-frequency ultrasound transducer (seventeen 4×3 mm elements) operating at 4.7/9.7 MHz and 10 W/cm2 maximum acoustic power. Results using a novel feedback control algorithm which modulated the ultrasound frequency, power and device rate of rotation showed that regions of thermal coagulation could be shaped to predefined prostate volumes within 1.0 mm across the vast majority of these glands. Treatment times were typically 30 min and remained below 60 min for large 60 cc prostates. With a rectal cooling temperature of 15°C, the rectal wall did not exceed 30 EM43 in half of the twenty patient models, with only a few 1 mm3 voxels above this threshold in the other cases. At 4.7 MHz, heating of the pelvic bone can become significant when it is located less than 10 mm from the prostate. Numerical simulations show that MRI-guided transurethral ultrasound therapy can thermally coagulate whole prostate glands accurately and safely in 3-D.

  14. Noninvasive identification of left main and triple vessel coronary artery disease: improved accuracy using quantitative analysis of regional myocardial stress distribution and washout of thallium-201

    SciTech Connect

    Maddahi, J.; Abdulla, A.; Garcia, E.V.; Swan, H.J.; Berman, D.S.

    1986-01-01

    The capabilities of visual and quantitative analysis of stress redistribution thallium-201 scintigrams, exercise electrocardiography and exercise blood pressure response were compared for correct identification of extensive coronary disease, defined as left main or triple vessel coronary artery disease, or both (50% or more luminal diameter coronary narrowing), in 105 consecutive patients with suspected coronary artery disease. Extensive disease was present in 56 patients and the remaining 49 had either less extensive coronary artery disease (n = 34) or normal coronary arteriograms (n = 15). Although exercise blood pressure response, exercise electrocardiography and visual thallium-201 analysis were highly specific (98, 88 and 96%, respectively), they were insensitive for identification of patients with extensive disease (14, 45 and 16%, respectively). Quantitative thallium-201 analysis significantly improved the sensitivity of visual thallium-201 analysis for identification of patients with extensive disease (from 16 to 63%, p less than 0.001) without a significant loss of specificity (96 versus 86%, p = NS). Eighteen (64%) of the 28 patients who were misclassified by visual analysis as having less extensive disease were correctly classified as having extensive disease by virtue of quantitative analysis of regional myocardial thallium-201 washout. When the results of quantitative thallium-201 analysis were combined with those of blood pressure and electrocardiographic response to exercise, the sensitivity and specificity for identification of patients with extensive disease was 86 and 76%, respectively, and the highest overall accuracy (0.82) was obtained.

  15. Diagnostic accuracy of level 3 portable sleep tests versus level 1 polysomnography for sleep-disordered breathing: a systematic review and meta-analysis

    PubMed Central

    El Shayeb, Mohamed; Topfer, Leigh-Ann; Stafinski, Tania; Pawluk, Lawrence; Menon, Devidas

    2014-01-01

    Background: Greater awareness of sleep-disordered breathing and rising obesity rates have fueled demand for sleep studies. Sleep testing using level 3 portable devices may expedite diagnosis and reduce the costs associated with level 1 in-laboratory polysomnography. We sought to assess the diagnostic accuracy of level 3 testing compared with level 1 testing and to identify the appropriate patient population for each test. Methods: We conducted a systematic review and meta-analysis of comparative studies of level 3 versus level 1 sleep tests in adults with suspected sleep-disordered breathing. We searched 3 research databases and grey literature sources for studies that reported on diagnostic accuracy parameters or disease management after diagnosis. Two reviewers screened the search results, selected potentially relevant studies and extracted data. We used a bivariate mixed-effects binary regression model to estimate summary diagnostic accuracy parameters. Results: We included 59 studies involving a total of 5026 evaluable patients (mostly patients suspected of having obstructive sleep apnea). Of these, 19 studies were included in the meta-analysis. The estimated area under the receiver operating characteristics curve was high, ranging between 0.85 and 0.99 across different levels of disease severity. Summary sensitivity ranged between 0.79 and 0.97, and summary specificity ranged between 0.60 and 0.93 across different apnea–hypopnea cut-offs. We saw no significant difference in the clinical management parameters between patients who underwent either test to receive their diagnosis. Interpretation: Level 3 portable devices showed good diagnostic performance compared with level 1 sleep tests in adult patients with a high pretest probability of moderate to severe obstructive sleep apnea and no unstable comorbidities. For patients suspected of having other types of sleep-disordered breathing or sleep disorders not related to breathing, level 1 testing remains the

  16. Radiologic-Pathologic Analysis of Contrast-enhanced and Diffusion-weighted MR Imaging in Patients with HCC after TACE: Diagnostic Accuracy of 3D Quantitative Image Analysis

    PubMed Central

    Chapiro, Julius; Wood, Laura D.; Lin, MingDe; Duran, Rafael; Cornish, Toby; Lesage, David; Charu, Vivek; Schernthaner, Rüdiger; Wang, Zhijun; Tacher, Vania; Savic, Lynn Jeanette; Kamel, Ihab R.

    2014-01-01

    Purpose To evaluate the diagnostic performance of three-dimensional (3D) quantitative enhancement-based and diffusion-weighted volumetric magnetic resonance (MR) imaging assessment of hepatocellular carcinoma (HCC) lesions in determining the extent of pathologic tumor necrosis after transarterial chemoembolization (TACE). Materials and Methods This institutional review board–approved retrospective study included 17 patients with HCC who underwent TACE before surgery. Semiautomatic 3D volumetric segmentation of target lesions was performed at the last MR examination before orthotopic liver transplantation or surgical resection. The amount of necrotic tumor tissue on contrast material–enhanced arterial phase MR images and the amount of diffusion-restricted tumor tissue on apparent diffusion coefficient (ADC) maps were expressed as a percentage of the total tumor volume. Visual assessment of the extent of tumor necrosis and tumor response according to European Association for the Study of the Liver (EASL) criteria was performed. Pathologic tumor necrosis was quantified by using slide-by-slide segmentation. Correlation analysis was performed to evaluate the predictive values of the radiologic techniques. Results At histopathologic examination, the mean percentage of tumor necrosis was 70% (range, 10%–100%). Both 3D quantitative techniques demonstrated a strong correlation with tumor necrosis at pathologic examination (R2 = 0.9657 and R2 = 0.9662 for quantitative EASL and quantitative ADC, respectively) and strong intermethod agreement (R2 = 0.9585). Both methods showed a significantly lower discrepancy with pathologically measured necrosis (residual

  17. Assessing the Classification Accuracy of Early Numeracy Curriculum-Based Measures Using Receiver Operating Characteristic Curve Analysis

    ERIC Educational Resources Information Center

    Laracy, Seth D.; Hojnoski, Robin L.; Dever, Bridget V.

    2016-01-01

    Receiver operating characteristic curve (ROC) analysis was used to investigate the ability of early numeracy curriculum-based measures (EN-CBM) administered in preschool to predict performance below the 25th and 40th percentiles on a quantity discrimination measure in kindergarten. Areas under the curve derived from a sample of 279 students ranged…
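
    The sketch below shows the kind of ROC computation involved: area under the curve plus a cut score chosen by the Youden index. The scores and group sizes are synthetic, not EN-CBM data.

    ```python
    # Minimal sketch of ROC analysis for a screening measure where lower
    # scores indicate risk. Scores and labels are synthetic.
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(0)
    at_risk = rng.normal(10, 4, 80)         # hypothetical screener scores
    not_at_risk = rng.normal(20, 5, 200)

    scores = np.concatenate([at_risk, not_at_risk])
    labels = np.concatenate([np.ones(80), np.zeros(200)])   # 1 = at risk

    # Negate scores so that "higher = more likely positive" as ROC expects
    auc = roc_auc_score(labels, -scores)
    fpr, tpr, thresholds = roc_curve(labels, -scores)
    best = np.argmax(tpr - fpr)                              # Youden index
    print(f"AUC = {auc:.2f}, suggested cut score = {-thresholds[best]:.1f}")
    ```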

  18. The detection accuracy of cone beam CT for osseous defects of the temporomandibular joint: a systematic review and meta-analysis

    PubMed Central

    Ma, Ruo-han; Yin, Shuang; Li, Gang

    2016-01-01

    The purpose of this review was to evaluate whether cone-beam computed tomography (CBCT) is reliable for the detection of bone changes of the temporomandibular joint (TMJ). The PubMed, Web of Science, Cochrane Library, ScienceDirect, Embase, Wanfang and CNKI databases were searched for studies published between January 1990 and December 2015. Eight studies (23 experimental research groups) were eventually included for further analysis. The pooled sensitivity was 0.67 and the pooled specificity was 0.87, yielding a relatively large area (0.84) under the receiver operating characteristic (ROC) curve. The related pooled positive likelihood ratio (+LR) and pooled negative likelihood ratio (−LR) were 5.2 and 0.38, respectively. A subgroup analysis was conducted for four subgroups categorized by voxel size (≤0.2; >0.2 and ≤0.3; >0.3 and ≤0.4; >0.4 and ≤0.5 mm), and the “>0.4 and ≤0.5 mm” subgroup had higher pooled sensitivity and pooled specificity than the other groups. The present study demonstrates that CBCT has a relatively high diagnostic accuracy for TMJ bone changes, although its reliability is limited. Voxel size did not play a role in the accuracy of CBCT. PMID:27708375

  19. Comparison of data transformation procedures to enhance topographical accuracy in time-series analysis of the human EEG.

    PubMed

    Hauk, O; Keil, A; Elbert, T; Müller, M M

    2002-01-30

    We describe a methodology to apply current source density (CSD) and minimum norm (MN) estimation as pre-processing tools for time-series analysis of single trial EEG data. The performance of these methods is compared for the case of wavelet time-frequency analysis of simulated gamma-band activity. A reasonable comparison of CSD and MN on the single trial level requires regularization such that the corresponding transformed data sets have similar signal-to-noise ratios (SNRs). For region-of-interest approaches, it should be possible to optimize the SNR for single estimates rather than for the whole distributed solution. An effective implementation of the MN method is described. Simulated data sets were created by modulating the strengths of a radial and a tangential test dipole with wavelets in the frequency range of the gamma band, superimposed with simulated spatially uncorrelated noise. The MN and CSD transformed data sets as well as the average reference (AR) representation were subjected to wavelet frequency-domain analysis, and power spectra were mapped for relevant frequency bands. For both CSD and MN, the influence of noise can be sufficiently suppressed by regularization to yield meaningful information, but only MN represents both radial and tangential dipole sources appropriately as single peaks. Therefore, when relating wavelet power spectrum topographies to their neuronal generators, MN should be preferred.
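
    As a concrete illustration of the wavelet step, the sketch below extracts single-trial 40 Hz power by convolving a synthetic EEG trace with a complex Morlet wavelet; the signal and parameters are invented, and no CSD or MN transform is applied.

    ```python
    # Minimal sketch: gamma-band power via complex Morlet wavelet convolution.
    # Synthetic single-trial signal; parameters are illustrative only.
    import numpy as np

    def morlet_power(signal, fs, freq, n_cycles=7):
        sigma_t = n_cycles / (2 * np.pi * freq)
        t = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # energy normalization
        return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

    fs = 500.0
    t = np.arange(0, 2, 1 / fs)
    eeg = 0.5 * np.random.default_rng(0).normal(size=t.size)
    eeg += np.where((t > 0.8) & (t < 1.2), np.sin(2 * np.pi * 40 * t), 0.0)  # 40 Hz burst

    p40 = morlet_power(eeg, fs, freq=40.0)
    ratio = p40[(t > 0.8) & (t < 1.2)].mean() / p40[t < 0.6].mean()
    print(f"burst/background 40 Hz power ratio: {ratio:.1f}")
    ```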

  20. The accuracy of auditors' and layered voice Analysis (LVA) operators' judgments of truth and deception during police questioning.

    PubMed

    Horvath, Frank; McCloughan, Jamie; Weatherman, Dan; Slowik, Stanley

    2013-03-01

    The purpose of this study was to determine if auditors could identify truthful and deceptive persons in a sample (n = 74) of audio recordings used to assess the effectiveness of layered voice analysis (LVA). The LVA employs an automated algorithm to detect deception, but it was not effective here. There were 31 truthful and 43 deceptive persons in the sample, and two LVA operators averaged 48% correct decisions on truth-tellers and 25% on deceivers. Subsequent to the LVA analysis, the recordings were audited by three interviewers, each independently rendering a decision of truthful or deceptive and indicating their confidence. Auditors' judgments averaged 68% correct decisions on truth-tellers and 71% on deceivers. Auditors' detection rates generally exceeded chance, and there was significantly (p < 0.05) greater confidence on correct than incorrect judgments of deceivers but not of truth-tellers. These results suggest that the success reported for LVA analysis may be due to the operators' judgment.

  1. Radiocarbon dating accuracy improved

    NASA Astrophysics Data System (ADS)

    Scientists have extended the accuracy of carbon-14 (14C) dating by correlating dates older than 8,000 years with uranium-thorium dates that span from 8,000 to 30,000 years before present (ybp, present = 1950). Edouard Bard, Bruno Hamelin, Richard Fairbanks and Alan Zindler, working at Columbia University's Lamont-Doherty Geological Observatory, dated corals from reefs off Barbados using both 14C and uranium-234/thorium-230 by thermal ionization mass spectrometry techniques. They found that the two age data sets deviated in a regular way, allowing the scientists to correlate the two sets of ages. The 14C dates were consistently younger than those determined by uranium-thorium, and the discrepancy increased to about 3,500 years at 20,000 ybp.

  2. Diagnostic test accuracy of anti-glycopeptidolipid-core IgA antibodies for Mycobacterium avium complex pulmonary disease: systematic review and meta-analysis

    PubMed Central

    Shibata, Yuji; Horita, Nobuyuki; Yamamoto, Masaki; Tsukahara, Toshinori; Nagakura, Hideyuki; Tashiro, Ken; Watanabe, Hiroki; Nagai, Kenjiro; Nakashima, Kentaro; Ushio, Ryota; Ikeda, Misako; Narita, Atsuya; Kanai, Akinori; Sato, Takashi; Kaneko, Takeshi

    2016-01-01

    Currently, an anti-glycopeptidolipid (GPL)-core IgA antibody assay kit for diagnosing Mycobacterium avium complex (MAC) is commercially available. We conducted this systematic review and meta-analysis to reveal the precise diagnostic accuracy of anti-GPL-core IgA antibodies for MAC pulmonary disease (MAC-PD). We systematically searched reports that could provide data for both sensitivity and specificity of the anti-GPL-core IgA antibody for clinically diagnosed MAC-PD. Diagnostic test accuracy was estimated using the bivariate model. Of the 257 articles found through the primary search, we finally included 16 reports consisting of 1098 reference-positive subjects and 2270 reference-negative subjects. The diagnostic odds ratio was 24.8 (95% CI 11.6–52.8, I2 = 5.5%) and the area under the hierarchical summary receiver operating characteristic curve was 0.873 (95% CI 0.837–0.913). With a cutoff value of 0.7 U/mL, the summary estimates of sensitivity and specificity were 0.696 (95% CI 0.621–0.761) and 0.906 (95% CI 0.836–0.951), respectively. The positive and negative likelihood ratios were 7.4 (95% CI 4.1–13.8) and 0.34 (95% CI 0.26–0.43), respectively. The demanding clinical diagnostic criteria may be a cause of false positives for the index test. The index test had good overall diagnostic accuracy and was useful for ruling in MAC-PD at this cutoff value. PMID:27373718

  3. Accuracy of three-dimensional measurements obtained from cone beam computed tomography surface-rendered images for cephalometric analysis: influence of patient scanning position.

    PubMed

    Hassan, Bassam; van der Stelt, Paul; Sanderink, Gerard

    2009-04-01

    The aims of this study were to assess the accuracy of linear measurements on three-dimensional (3D) surface-rendered images generated from cone beam computed tomography (CBCT) in comparison with two-dimensional (2D) slices and 2D lateral and postero-anterior (PA) cephalometric projections, and to investigate the influence of patient head position in the scanner on measurement accuracy. Eight dry human skulls were scanned twice using NewTom 3G CBCT in an ideal and a rotated position and the resulting datasets were used to create 3D surface-rendered images, 2D tomographic slices, and 2D lateral and PA projections. Ten linear distances were defined for cephalometric measurements. The physical and radiographic measurements were repeated twice by three independent observers and were compared using repeated measures analysis of variance (P=0.05). The radiographic measurements were also compared between the ideal and the rotated scan positions. The radiographic measurements of the 3D images were closer to the physical measurements than the 2D slices and 2D projection images. No statistically significant difference was found between the ideal and the rotated scan measurements for the 3D images and the 2D tomographic slices. A statistically significant difference (P<0.001) was observed between the ideal and rotated scan positions for the 2D projection images. The findings indicate that measurements based on 3D CBCT surface images are accurate and that small variations in the patient's head position do not influence measurement accuracy.

  4. Evaluation of Accuracy for 2D Elastic-Plastic Analysis by Embedded Force Doublet Model Combined with Automated Delaunay Tessellation

    NASA Astrophysics Data System (ADS)

    Ino, Takuichiro; Hasib, M. D. Abdul; Takase, Toru; Saimoto, Akihide

    2015-03-01

    An embedded force doublet (EFD) model is proposed to express the presence of permanent strain in the body force method (BFM). BFM is known as a boundary-type method for elastic stress analysis based on the principle of superposition. In the EFD model, the permanent strain is replaced by distributed force doublets. In an actual elastic-plastic analysis, the plastic region, whose shape is not known a priori, has to be discretized into elements in which the magnitude of the embedded force doublets is unknown and must be determined numerically. In general, determining the magnitude of the EFD is considerably difficult due to the nonlinear nature of the yield criterion and the plastic constitutive relations. In this study, an appreciable reduction in input data was realized by introducing an automated Delaunay tessellation scheme for discretizing the prospective plastic region. In addition, to improve computational efficiency, the influence coefficients used for determining the magnitude of the EFD are stored in a database. The effectiveness of these two improvements was examined by computing the elastic-plastic problem of an infinite medium with a circular hole subjected to uniform internal pressure. The numerical solution was compared with Nadai's closed-form solution and found to be in good agreement.

  5. JASMINE -- Japan Astrometry Satellite Mission for INfrared Exploration: Data Analysis and Accuracy Assessment with a Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yamada, Y.; Shimokawa, T.; Shinomoto, S.; Yano, T.; Gouda, N.

    2009-09-01

    For the purpose of determining the celestial coordinates of stellar positions, consecutive observational images are laid so that they overlap, using stars that belong to multiple plates as reference points. In the analysis, one has to estimate not only the coordinates of the individual plates, but also the possible expansion and distortion of the frame. This problem reduces to a least-squares fit that can in principle be solved by a huge matrix inversion, which is, however, impracticable. Here, we propose using Kalman filtering to perform the least-squares fit and implement a practical iterative algorithm. We also estimate the errors associated with this iterative method and suggest a design of overlapping plates that minimizes the error.
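
    The key point is that, for a static parameter vector, processing observations one at a time with Kalman measurement updates is algebraically equivalent to the batch least-squares fit while avoiding one huge matrix inversion. The sketch below demonstrates this on synthetic data with a hypothetical two-parameter plate model (offset and scale).

    ```python
    # Minimal sketch: sequential least squares via Kalman measurement updates
    # for a static parameter vector. Data and the plate model are synthetic.
    import numpy as np

    def kalman_update(x, P, H, z, R):
        """One scalar-measurement update of estimate x with covariance P."""
        H = H.reshape(1, -1)
        S = H @ P @ H.T + R               # innovation variance
        K = P @ H.T / S                   # Kalman gain
        x = x + (K * (z - H @ x)).ravel()
        P = P - K @ H @ P
        return x, P

    rng = np.random.default_rng(0)
    true_params = np.array([2.0, -0.5])           # hypothetical plate offset, scale
    x, P = np.zeros(2), np.eye(2) * 100.0         # vague prior
    for _ in range(200):                          # stream of star observations
        H = np.array([1.0, rng.uniform(-1, 1)])   # design row: [1, coordinate]
        z = H @ true_params + rng.normal(scale=0.05)
        x, P = kalman_update(x, P, H, z, R=0.05**2)

    print(np.round(x, 3))   # approaches [2.0, -0.5]
    ```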

  6. Regression Modeling and Meta-Analysis of Diagnostic Accuracy of SNP-Based Pathogenicity Detection Tools for UGT1A1 Gene Mutation

    PubMed Central

    Rahim, Fakher; Galehdari, Hamid; Mohammadi-asl, Javad; Saki, Najmaldin

    2013-01-01

    Aims. This review summarized all available evidence on the accuracy of SNP-based pathogenicity detection tools and introduced a regression model based on functional scores, mutation score, and genomic variation degree. Materials and Methods. A comprehensive search was performed to find all mutations related to Crigler-Najjar syndrome. The pathogenicity prediction was done using SNP-based pathogenicity detection tools including SIFT, PHD-SNP, PolyPhen2, fathmm, Provean, and Mutpred. Overall, 59 different SNPs related to missense mutations in the UGT1A1 gene were reviewed. Results. Comparing the diagnostic OR, our model showed high detection potential (diagnostic OR: 16.71, 95% CI: 3.38–82.69). The highest MCC and ACC belonged to our suggested model (46.8% and 73.3%), followed by SIFT (34.19% and 62.71%). The AUC analysis showed significantly better overall performance of our suggested model compared to the selected SNP-based pathogenicity detection tools (P = 0.046). Conclusion. Our suggested model is comparable to the well-established SNP-based pathogenicity detection tools and can appropriately reflect the role of a disease-associated SNP in both local and global structures. Although the accuracy of our suggested model is not especially high, the functional impact of the pathogenic mutations is highlighted at the protein level, which improves the understanding of the molecular basis of mutation pathogenesis. PMID:23997956
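
    The comparison metrics above (MCC and ACC) are simple functions of a confusion matrix; a minimal sketch with hypothetical counts follows.

    ```python
    # Minimal sketch of the Matthews correlation coefficient (MCC) and
    # accuracy (ACC) from a confusion matrix with hypothetical counts.
    import math

    def mcc_acc(tp, fp, fn, tn):
        acc = (tp + tn) / (tp + fp + fn + tn)
        denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        mcc = (tp * tn - fp * fn) / denom if denom else 0.0
        return mcc, acc

    mcc, acc = mcc_acc(tp=30, fp=8, fn=8, tn=13)
    print(f"MCC = {mcc:.3f}, ACC = {acc:.3f}")
    ```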

  7. Accuracy of imaging parameters in the prediction of lethal pulmonary hypoplasia secondary to mid-trimester prelabor rupture of fetal membranes: a systematic review and meta-analysis.

    PubMed

    van Teeffelen, A S P; Van Der Heijden, J; Oei, S G; Porath, M M; Willekes, C; Opmeer, B; Mol, B W J

    2012-05-01

    In women who have suffered mid-trimester prelabor rupture of membranes (PPROM), prediction of pulmonary hypoplasia is important for optimal management. We performed a systematic review to assess the capacity of imaging parameters to predict pulmonary hypoplasia. We searched for published articles that reported on biometric parameters and allowed the construction of a 2 × 2 table comparing at least one of these parameters with the occurrence of pulmonary hypoplasia. The selected studies were scored on methodological quality, and we calculated the sensitivity and specificity of the tests in the prediction of pulmonary hypoplasia and lethal pulmonary hypoplasia. Overall performance was assessed by summary receiver-operating characteristics (sROC) analyses performed with bivariate meta-analysis. We identified 13 studies that reported on the prediction of lethal pulmonary hypoplasia. The quality of the included studies was poor to mediocre. The estimated sROC curves for the chest circumference/abdominal circumference ratio and other parameters showed limited accuracy in the prediction of pulmonary hypoplasia. In women with mid-trimester PPROM, the available evidence indicates limited accuracy of biometric parameters in the prediction of pulmonary hypoplasia.

  8. Accuracy assessment on the analysis of unbound drug in plasma by comparing traditional centrifugal ultrafiltration with hollow fiber centrifugal ultrafiltration and application in pharmacokinetic study.

    PubMed

    Zhang, Lin; Zhang, Zhi-Qing; Dong, Wei-Chong; Jing, Shao-Jun; Zhang, Jin-Feng; Jiang, Ye

    2013-11-29

    In the present study, the accuracy of the analysis of unbound drug in plasma was assessed by comparing traditional centrifugal ultrafiltration (CF-UF) with hollow fiber centrifugal ultrafiltration (HFCF-UF). We used metformin (MET) as a model drug and studied the influence of centrifugation time, plasma condition and number of freeze-thaw cycles on the ultrafiltrate volume and the related effect on the measurement of MET. Our results demonstrated that ultrafiltrate volume is a crucial factor influencing the measurement accuracy of unbound drug in plasma. For traditional CF-UF, the ultrafiltrate volume cannot be well controlled due to a series of factors. Compared with traditional CF-UF, the ultrafiltrate volume in HFCF-UF can be easily controlled by the inner capacity of the U-shaped hollow fiber inserted into the sample under sufficient centrifugal force and centrifugation time, which contributes to a more accurate measurement. Moreover, the developed HFCF-UF method was successfully applied to real plasma samples and exhibited several advantages, including high precision, an extremely low detection limit and perfect recovery. The HFCF-UF method offers highly satisfactory performance in addition to being simple and fast in pretreatment, characteristics that are consistent with the practical requirements of current scientific research.

  9. Combined behavioral and EEG power analysis in DAI improve accuracy in the assessment of sustained attention deficit.

    PubMed

    Molteni, Erika; Bianchi, Anna Maria; Butti, Michele; Reni, Gianluigi; Zucca, Claudio

    2008-07-01

    In clinical routine, the evaluation of sustained attention is often performed by analyzing the behavioral data collected during specific tests. Such analyses are rarely accompanied by a detailed examination of the subject's simultaneous electroencephalographic (EEG) activity, and particularly its frequency content. In this study, a group of healthy volunteers and a group of patients affected by diffuse axonal injury (DAI) were tested while performing a modified version of the Conners' continuous performance test. A comparative study was carried out between the behavioral and neuropsychological data obtained during the task to investigate neural activation. Spectral power was calculated for each of the recorded EEG signals, taking into account the frequency bands traditionally considered in the literature. A compressed spectral array (a sequence of spectra) was then plotted to highlight temporal modifications in the signal power spectral density, and, finally, an analysis of rhythm variability was carried out. Evaluation of the results thus obtained shows that the two groups exhibited very different cerebral activation dynamics during the ongoing attentional task. Moreover, DAI patients showed mild cortical activation in the prefrontal region, spread equally throughout both brain hemispheres, while controls showed strong predominant activation of the right prefrontal area. Our findings encourage further investigation of the combined employment of tests and EEG recordings during the clinical assessment of sustained attention performance.

  10. Accuracy of the fast multipole boundary element method with quadratic elements in the analysis of 3D porous structures

    NASA Astrophysics Data System (ADS)

    Ptaszny, Jacek

    2015-09-01

    In this work, a fast multipole boundary element method for the 3D elasticity problem was developed by applying the fast multipole algorithm and isoparametric 8-node boundary elements with quadratic shape functions. The problem is described by the boundary integral equation involving the Kelvin solutions. In order to keep the numerical integration error at an appropriate level, an adaptive method with subdivision of boundary elements into subelements, described in the literature, was applied. An extension of the neighbour list of boundary element clusters, corresponding to near-field computations, was proposed in order to reduce the truncation error of the expansions in problems with high stress concentration. The efficiency of the method is illustrated by numerical examples including a solid with a single spherical cavity, solids with two interacting spherical cavities, and numerical homogenization of solids with a cubic arrangement of spherical cavities. All results agree with analytical models available in the literature. The examples show that the method can be applied to the analysis of porous structures.

  11. Quantitative Thin-Film X-ray Microanalysis by STEM/HAADF: Statistical Analysis for Precision and Accuracy Determination

    NASA Astrophysics Data System (ADS)

    Armigliato, Aldo; Balboni, Roberto; Rosa, Rodolfo

    2006-07-01

    Silicon-germanium thin films have been analyzed by EDS microanalysis in a field emission gun scanning transmission electron microscope (FEG-STEM) equipped with a high-angle annular dark-field detector (STEM/HAADF). Several spectra have been acquired in the same homogeneous area of the cross-sectioned sample by drift-corrected linescan acquisitions. The Ge concentrations and the local film thickness have been obtained by using a previously described Monte Carlo based “two tilt angles” method. Although the concentrations are in excellent agreement with the known values, the resulting confidence intervals are not as good as expected from the precision in beam positioning and tilt angle position and readout offered by our state-of-the-art microscope. The Gaussian shape of the SiKα and GeKα X-ray intensities allows one to use the parametric bootstrap method of statistics, whereby it becomes possible to perform the same quantitative analysis in sample regions of different compositions and thicknesses while doing only one measurement at the two angles.
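
    The parametric bootstrap mentioned above can be sketched as follows: resample intensities from fitted Gaussians and recompute the quantity of interest to obtain a confidence interval. The means, standard deviations and the toy ratio statistic below are hypothetical, not the Monte Carlo “two tilt angles” quantification itself.

    ```python
    # Minimal sketch of a parametric bootstrap confidence interval for a
    # toy intensity-ratio statistic. All numbers are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    mu_si, sd_si = 12000.0, 300.0      # assumed Gaussian Si K-alpha counts
    mu_ge, sd_ge = 4000.0, 150.0       # assumed Gaussian Ge K-alpha counts

    def statistic(si, ge):
        return ge / (si + ge)           # toy stand-in for the derived quantity

    boot = [statistic(rng.normal(mu_si, sd_si), rng.normal(mu_ge, sd_ge))
            for _ in range(10000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"95% bootstrap interval: [{lo:.4f}, {hi:.4f}]")
    ```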

  12. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    PubMed

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or no) and peer performance anchor (95%, 55%, or no). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. Accuracy incentive increased anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of anchoring effect, but if anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.

  13. SU-E-T-248: Near Real-Time Analysis of Radiation Delivery and Imaging Accuracy to Ensure Patient Safety

    SciTech Connect

    Wijesooriya, K; Seitter, K; Desai, V; Read, P; Larner, J

    2014-06-01

    Purpose: To develop and optimize an effective software method for comparing planned to delivered control-point machine parameters for all Varian TrueBeam treatments so as to permit (1) assessment of a large patient pool throughout their treatment course to quantify treatment-technique-specific systematic and random uncertainty of observables, (2) quantification of the site-specific daily imaging shifts required for target alignment, and (3) definition of tolerance levels for mechanical and imaging parameters based on the statistical analysis of the gathered data and on the dosimetric impact of variations. Methods: Treatment and imaging log files were directly compared to plan parameters for Eclipse and Pinnacle planned treatments delivered via 3D, IMRT, control-point, RapidArc, and electron techniques. Each control point from all beams/arcs (7984) for all fractions (1940) of all patients treated over six months was analyzed. At each control point, the gantry angle, collimator angle, couch angle, jaw positions, MLC positions, and MU were compared. Additionally, per-treatment isocenter shifts were calculated. Results were analyzed as a whole and in treatment-type subsets (IMRT, 3D, RapidArc) and treatment-site subsets (brain, chest/mediastinum, esophagus, H and N, lung, pelvis, prostate). Results: Daily imaging isocenter shifts from initial external tattoo alignment depended on the treatment site, with < 0.5 cm translational shifts for H and N, brain, and lung SBRT, while pelvis and esophagus shifts were ∼1 cm. Mechanical delivery parameters were within tolerance levels for all sub-beams. The largest variations were for RapidArc plans: gantry angle 0.11 ± 0.12, collimator angle 0.00 ± 0.00, jaw positions 0.48 ± 0.26, MLC leaf positions 0.66 ± 0.08, MU 0.14 ± 0.34. Conclusion: Per-control-point validation reveals deviations between planned and delivered parameters. If used in a near real-time error checking system, patient safety can be improved by equipping the treatment delivery system with additional forcing

  14. Diagnostic accuracy of ultrasound for detecting posterior ligamentous complex injuries of the thoracic and lumbar spine: A systematic review and meta-analysis

    PubMed Central

    Gabriel, Alcalá-Cerra; Ángel, J. Paternina-Caicedo; Juan, J. Gutiérrez-Paternina; Luis, R. Moscote-Salazar; Hernando, R. Alvis-Miranda; Rubén, Sabogal-Barrios

    2013-01-01

    Background: Posterior ligamentous complex injuries of the thoracolumbar (TL) spine represent a major consideration during surgical decision-making. However, X-ray and computed tomography imaging often do not identify those injuries, and sometimes magnetic resonance imaging (MRI) is not available or is contraindicated. Objective: To determine the diagnostic accuracy of ultrasound for detecting posterior ligamentous complex injuries in the TL spine. Materials and Methods: A systematic review was carried out through four international databases and proceedings of scientific meetings. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, and their 95% confidence intervals (CIs) were estimated by using weighted averages according to the sample size of each study. The summary receiver operating characteristic was also estimated. Results: A total of four articles were included in the meta-analysis, yielding a summary estimate: Sensitivity, 0.89 (95% CI, 0.86-0.92); specificity, 1.00 (95% CI, 0.98-1.00); positive likelihood ratio, 224.49 (95% CI, 30.43-1656.26); negative likelihood ratio, 0.11 (95% CI, 0.05-0.19); and diagnostic odds ratio, 2,268.13 (95% CI, 265.84-19,351.24). There was no statistically significant heterogeneity among the results of the included studies. The summary receiver operating characteristic (± standard error) was 0.928 ± 0.047. Conclusion and Recommendation: The present meta-analysis showed that ultrasound has a high accuracy for diagnosing posterior ligamentous complex injuries in patients with flexion distraction, compression, or burst TL fractures. On the basis of the present results, ultrasound may be considered a useful alternative when magnetic resonance imaging (MRI) is unavailable or contraindicated, or when its results are inconclusive. PMID:24381453
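
    The summary measures in this record all derive from each study's 2x2 table. The Python sketch below shows how sensitivity, specificity, likelihood ratios and the diagnostic odds ratio are computed from such tables and how a simple sample-size-weighted pooled estimate (the weighting described in the record) is formed; the study counts are invented for illustration.

      import numpy as np

      # Hypothetical per-study 2x2 counts (TP, FP, FN, TN); illustrative only, not the
      # four studies summarised in the record.
      studies = np.array([
          (45, 2, 5, 60),
          (30, 1, 4, 55),
          (25, 1, 3, 40),
          (50, 2, 6, 70),
      ], dtype=float)

      tp, fp, fn, tn = studies.T
      sens = tp / (tp + fn)
      spec = tn / (tn + fp)
      lr_pos = sens / (1 - spec)
      lr_neg = (1 - sens) / spec
      dor = lr_pos / lr_neg

      # Sample-size-weighted averages, the pooling described in the record (a modern
      # meta-analysis would more often use a bivariate random-effects model).
      w = studies.sum(axis=1) / studies.sum()
      for name, values in [("sensitivity", sens), ("specificity", spec),
                           ("LR+", lr_pos), ("LR-", lr_neg), ("DOR", dor)]:
          print(f"pooled {name}: {np.sum(w * values):.2f}")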

  15. Effect of heart rate on the diagnostic accuracy of 256-slice computed tomography angiography in the detection of coronary artery stenosis: ROC curve analysis

    PubMed Central

    WANG, GANG; WU, YIFEN; ZHANG, ZHENTAO; ZHENG, XIAOLIN; ZHANG, YULAN; LIANG, MANQIU; YUAN, HUANCHU; SHEN, HAIPING; LI, DEWEI

    2016-01-01

    The aim of the present study was to investigate the effect of heart rate (HR) on the diagnostic accuracy of 256-slice computed tomography angiography (CTA) in the detection of coronary artery stenosis. Coronary imaging was performed using a Philips 256-slice spiral CT scanner, and receiver operating characteristic (ROC) curve analysis was conducted to evaluate the diagnostic value of 256-slice CTA for coronary artery stenosis. The HRs of the subjects ranged from 39 to 107 bpm. One hundred patients suspected of coronary heart disease underwent 256-slice CTA examination. The cases were divided into three groups: low HR (HR < 75 bpm), moderate HR (75 ≤ HR < 90 bpm) and high HR (HR ≥ 90 bpm). For the three groups, two observers independently assessed the image quality for all coronary segments on a four-point ordinal scale. An image quality of grades 1–3 was considered diagnostic, while grade 4 was non-diagnostic. A total of 97.76% of the images were diagnostic in the low-HR group, 96.86% in the moderate-HR group and 95.80% in the high-HR group. According to the ROC curve analysis, the specificity of CTA in diagnosing coronary artery stenosis was 98.40, 96.00 and 97.60% in the low-, moderate- and high-HR groups, respectively. In conclusion, 256-slice coronary CTA can be used to clearly show the main segments of the coronary artery and to effectively diagnose coronary artery stenosis. Within the range of HRs investigated, HR was found to have no significant effect on the diagnostic accuracy of 256-slice coronary CTA for coronary artery stenosis. PMID:27168831

  16. Diagnostic accuracy of transient elastography (FibroScan) in detection of esophageal varices in patients with cirrhosis: A meta-analysis

    PubMed Central

    Pu, Ke; Shi, Jing-Hong; Wang, Xu; Tang, Qian; Wang, Xin-Jie; Tang, Kai-Lin; Long, Zhong-Qi; Hu, Xing-Sheng

    2017-01-01

    AIM To investigate the diagnostic accuracy of FibroScan (FS) in detecting esophageal varices (EV) in cirrhotic patients. METHODS Through a systematic literature search of multiple databases, we reviewed 15 studies using endoscopy as a reference standard, with the data necessary to calculate pooled sensitivity (SEN) and specificity (SPE), positive and negative LR, diagnostic odds ratio (DOR) and area under the receiver operating characteristic curve (AUROC). The quality of the studies was rated by the Quality Assessment of Diagnostic Accuracy Studies-2 tool. The clinical utility of FS for EV was evaluated by a Fagan plot. Heterogeneity was explored using meta-regression and subgroup analysis. All statistical analyses were conducted via Stata 12.0, Meta-DiSc 1.4 and RevMan 5. RESULTS In 15 studies (n = 2697), FS detected the presence of EV with a summary sensitivity of 84% (95%CI: 81.0%-86.0%), specificity of 62% (95%CI: 58.0%-66.0%), a positive LR of 2.3 (95%CI: 1.81-2.94), a negative LR of 0.26 (95%CI: 0.19-0.35), a DOR of 9.33 (95%CI: 5.84-14.92) and an AUROC of 0.8262. FS diagnosed the presence of large EV with a pooled SEN of 0.78 (95%CI: 75.0%-81.0%), SPE of 0.76 (95%CI: 73.0%-78.0%), positive and negative LRs of 3.03 (95%CI: 2.38-3.86) and 0.30 (95%CI: 0.23-0.39), respectively, a summary diagnostic OR of 10.69 (95%CI: 6.81-16.78), and an AUROC of 0.8321. Meta-regression and subgroup analysis indicated that etiology could be a potential source of heterogeneity in the diagnosis of the presence of EV. A Deeks' funnel plot suggested a low probability of publication bias. CONCLUSION Using FS to measure liver stiffness cannot provide high accuracy for the size of EV because of the various cutoffs and different etiologies. These limitations preclude widespread use in clinical practice at this time; therefore, the results should be interpreted cautiously given its SEN and SPE. PMID:28127208
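
    The Fagan plot mentioned in the record is a graphical form of Bayes' theorem on the odds scale: pre-test odds times the likelihood ratio gives post-test odds. A minimal Python sketch of that calculation, applied to the pooled likelihood ratios reported above with an assumed 50% pre-test probability (the prevalence is illustrative only):

      def post_test_probability(pre_test_prob, likelihood_ratio):
          # Convert a pre-test probability to a post-test probability via odds,
          # which is the calculation a Fagan nomogram performs graphically.
          pre_odds = pre_test_prob / (1.0 - pre_test_prob)
          post_odds = pre_odds * likelihood_ratio
          return post_odds / (1.0 + post_odds)

      # Pooled LR+ = 2.3 and LR- = 0.26 from the record, assumed 50% pre-test probability.
      for lr in (2.3, 0.26):
          print(f"LR = {lr}: post-test probability = {post_test_probability(0.50, lr):.2f}")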

  17. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  18. Correlative and multivariate analysis of increased radon concentration in underground laboratory.

    PubMed

    Maletić, Dimitrije M; Udovičić, Vladimir I; Banjanac, Radomir M; Joković, Dejan R; Dragić, Aleksandar L; Veselinović, Nikola B; Filipović, Jelena

    2014-11-01

    The results of an analysis, using correlative and multivariate methods developed for data analysis in high-energy physics and implemented in the Toolkit for Multivariate Analysis (TMVA) software package, of the relations between variations in increased radon concentration and climate variables in a shallow underground laboratory are presented. Multivariate regression analysis identified a number of multivariate methods that can give a good evaluation of increased radon concentrations based on climate variables. The use of these multivariate regression methods will enable investigation of the relation of specific climate variables to increased radon concentrations, with the regression methods yielding a 'mapped' underlying functional dependence of radon concentration on a wide spectrum of climate variables.
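
    As a toy illustration of the kind of regression evaluation described here, the Python sketch below fits an ordinary multivariate linear model (one of the simplest regressors one might include in a TMVA-style comparison) to synthetic radon and climate data; all variables, coefficients and noise levels are invented for the example and do not come from the laboratory data.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 500

      # Hypothetical climate variables for a shallow underground laboratory.
      pressure = rng.normal(1000.0, 8.0, n)      # hPa
      temperature = rng.normal(20.0, 4.0, n)     # deg C
      humidity = rng.normal(60.0, 10.0, n)       # %

      # Synthetic radon concentration with an assumed dependence on the climate variables.
      radon = (40.0 - 0.8 * (pressure - 1000.0) + 1.5 * (temperature - 20.0)
               + 0.3 * (humidity - 60.0) + rng.normal(0.0, 5.0, n))

      # Ordinary least-squares multivariate regression and its coefficient of determination.
      X = np.column_stack([np.ones(n), pressure, temperature, humidity])
      coef, *_ = np.linalg.lstsq(X, radon, rcond=None)
      pred = X @ coef
      r2 = 1.0 - np.sum((radon - pred) ** 2) / np.sum((radon - radon.mean()) ** 2)
      print("coefficients:", np.round(coef, 2), " R^2:", round(r2, 3))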

  19. In situ sulfur isotope analysis of sulfide minerals by SIMS: Precision and accuracy, with application to thermometry of ~3.5Ga Pilbara cherts

    USGS Publications Warehouse

    Kozdon, R.; Kita, N.T.; Huberty, J.M.; Fournelle, J.H.; Johnson, C.A.; Valley, J.W.

    2010-01-01

    Secondary ion mass spectrometry (SIMS) measurement of sulfur isotope ratios is a potentially powerful technique for in situ studies in many areas of Earth and planetary science. Tests were performed to evaluate the accuracy and precision of sulfur isotope analysis by SIMS in a set of seven well-characterized, isotopically homogeneous natural sulfide standards. The spot-to-spot and grain-to-grain precision for δ34S is ± 0.3‰ for chalcopyrite and pyrrhotite, and ± 0.2‰ for pyrite (2SD) using a 1.6 nA primary beam that was focused to 10 µm diameter with a Gaussian-beam density distribution. Likewise, multiple δ34S measurements within single grains of sphalerite are within ± 0.3‰. However, between individual sphalerite grains, δ34S varies by up to 3.4‰ and the grain-to-grain precision is poor (± 1.7‰, n = 20). Measured values of δ34S correspond with analysis pit microstructures, ranging from smooth surfaces for grains with high δ34S values to pronounced ripples and terraces in analysis pits from grains featuring low δ34S values. Electron backscatter diffraction (EBSD) shows that individual sphalerite grains are single crystals, whereas crystal orientation varies from grain to grain. The 3.4‰ variation in measured δ34S between individual grains of sphalerite is attributed to changes in instrumental bias caused by different crystal orientations with respect to the incident primary Cs+ beam. High δ34S values in sphalerite correlate with orientations in which the Cs+ beam is parallel to the set of directions from [111] to [110], which are preferred directions for channeling and focusing in diamond-centered cubic crystals. Crystal orientation effects on instrumental bias were further detected in galena. However, as a result of the perfect cleavage along {100}, crushed chips of galena are typically cube-shaped and likely to be preferentially oriented, thus crystal orientation effects on instrumental bias may be obscured. Tests were made to improve the analytical
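
    For reference, the δ34S values discussed in this record are permil deviations of the measured 34S/32S ratio from the V-CDT reference. A one-function Python sketch of the conversion, using a commonly cited reference ratio and a purely illustrative sample ratio:

      # Delta notation for sulfur isotopes: permil deviation from the V-CDT reference.
      R_VCDT = 0.0441626   # commonly cited 34S/32S ratio of V-CDT

      def delta34S(r_sample, r_reference=R_VCDT):
          return (r_sample / r_reference - 1.0) * 1000.0

      print(f"delta34S = {delta34S(0.0442510):+.2f} per mil")   # illustrative sample ratio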

  20. Groves model accuracy study

    NASA Astrophysics Data System (ADS)

    Peterson, Matthew C.

    1991-08-01

    The United States Air Force Environmental Technical Applications Center (USAFETAC) was tasked to review the scientific literature for studies of the Groves Neutral Density Climatology Model and compare the Groves Model with others in the 30-60 km range. The tasking included a request to investigate the merits of comparing accuracy of the Groves Model to rocketsonde data. USAFETAC analysts found the Groves Model to be state of the art for middle-atmospheric climatological models. In reviewing previous comparisons with other models and with space shuttle-derived atmospheric densities, good density vs altitude agreement was found in almost all cases. A simple technique involving comparison of the model with range reference atmospheres was found to be the most economical way to compare the Groves Model with rocketsonde data; an example of this type is provided. The Groves 85 Model is used routinely in USAFETAC's Improved Point Analysis Model (IPAM). To create this model, Dr. Gerald Vann Groves produced tabulations of atmospheric density based on data derived from satellite observations and modified by rocketsonde observations. Neutral Density as presented here refers to the monthly mean density in 10-degree latitude bands as a function of altitude. The Groves 85 Model zonal mean density tabulations are given in their entirety.

  1. Accuracy of non-invasive prenatal testing using cell-free DNA for detection of Down, Edwards and Patau syndromes: a systematic review and meta-analysis

    PubMed Central

    Taylor-Phillips, Sian; Freeman, Karoline; Geppert, Julia; Agbebiyi, Adeola; Uthman, Olalekan A; Madan, Jason; Clarke, Angus; Quenby, Siobhan; Clarke, Aileen

    2016-01-01

    Objective To measure test accuracy of non-invasive prenatal testing (NIPT) for Down, Edwards and Patau syndromes using cell-free fetal DNA and identify factors affecting accuracy. Design Systematic review and meta-analysis of published studies. Data sources PubMed, Ovid Medline, Ovid Embase and the Cochrane Library published from 1997 to 9 February 2015, followed by weekly autoalerts until 1 April 2015. Eligibility criteria for selecting studies English language journal articles describing case–control studies with ≥15 trisomy cases or cohort studies with ≥50 pregnant women who had been given NIPT and a reference standard. Results 41, 37 and 30 studies of 2012 publications retrieved were included in the review for Down, Edwards and Patau syndromes. Quality appraisal identified high risk of bias in included studies, funnel plots showed evidence of publication bias. Pooled sensitivity was 99.3% (95% CI 98.9% to 99.6%) for Down, 97.4% (95.8% to 98.4%) for Edwards, and 97.4% (86.1% to 99.6%) for Patau syndrome. The pooled specificity was 99.9% (99.9% to 100%) for all three trisomies. In 100 000 pregnancies in the general obstetric population we would expect 417, 89 and 40 cases of Downs, Edwards and Patau syndromes to be detected by NIPT, with 94, 154 and 42 false positive results. Sensitivity was lower in twin than singleton pregnancies, reduced by 9% for Down, 28% for Edwards and 22% for Patau syndrome. Pooled sensitivity was also lower in the first trimester of pregnancy, in studies in the general obstetric population, and in cohort studies with consecutive enrolment. Conclusions NIPT using cell-free fetal DNA has very high sensitivity and specificity for Down syndrome, with slightly lower sensitivity for Edwards and Patau syndrome. However, it is not 100% accurate and should not be used as a final diagnosis for positive cases. Trial registration number CRD42014014947. PMID:26781507
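
    The expected numbers of detected and false positive cases quoted in the record follow from the pooled sensitivity and specificity together with an assumed prevalence. A small Python sketch of that arithmetic, using the pooled Down syndrome estimates above and an assumed prevalence of roughly 1 in 240 pregnancies (the prevalence figure is illustrative, not taken from the review):

      def screening_yield(n, prevalence, sensitivity, specificity):
          # Expected true positives and false positives when a test is applied to n people.
          affected = n * prevalence
          unaffected = n - affected
          return affected * sensitivity, unaffected * (1.0 - specificity)

      tp, fp = screening_yield(100_000, 1 / 240, 0.993, 0.999)
      print(f"expected detections ~ {tp:.0f}, expected false positives ~ {fp:.0f}")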

  2. An In-vitro Comparative Stereomicroscopic Analysis and Evaluation of Marginal Accuracy in Porcelain Fused to Metal Copings Fabricated in Two Different Finish Lines Using Variant Die Materials

    PubMed Central

    Sanyal, Pronob K; Gosavi, Siddharth Y; Kore, Abhijeet R

    2017-01-01

    Introduction Limited published information is available about the influence of preparatory designs and die materials on the marginal accuracy of porcelain fused to metal copings made using recently developed die materials. Aim To detect the influence of margin geometries and the dimensional accuracy of contemporary die materials on vertical marginal gaps in porcelain fused to metal copings using a stereomicroscope (three-dimensional analysis). Materials and Methods Two chrome-cobalt alloy models of mandibular first molars prepared to have shoulder and deep chamfer finish lines were CAD/CAM milled. Elastomeric impressions of these models were made in a custom tray, poured in Type IV gypsum (n = 10) and resin-modified gypsum (n = 10), and also packed with epoxy resin (n = 10) as a die material to form a total of 60 samples, 30 in each group (shoulder and deep chamfer). Wax patterns were fabricated and invested, and castings in ceramic alloy were obtained in the traditional manner. These copings were later analyzed on the CAD/CAM models using a stereomicroscope. Results The two margin designs did not differ significantly, whereas the three die materials differed significantly (p < 0.05; two-way ANOVA and Tukey's multiple post hoc tests). Results from this study showed that vertical marginal gaps for copings fabricated on resin-modified gypsum as a die material were within the clinically acceptable range. Conclusion Margin geometries, both shoulder and deep chamfer, have an equal influence on vertical marginal gaps in metal ceramic restorations. Copings fabricated on epoxy resin dies exhibited the highest vertical marginal discrepancy, whereas the lowest value was found for copings constructed on dies fabricated from resin-modified gypsum. PMID:28274033

  3. Accuracy of a Brief Neuropsychological Battery for the Diagnosis of Dementia and Mild Cognitive Impairment: An Analysis of the NEDICES Cohort.

    PubMed

    Serna, Adriana; Contador, Israel; Bermejo-Pareja, Félix; Mitchell, Alex J; Fernández-Calvo, Bernardino; Ramos, Francisco; Villarejo, Alberto; Benito-León, Julián

    2015-01-01

    Early separation of mild cognitive impairment (MCI) from normal aging and mild cases of dementia remains a challenge, especially in the general population. We aimed to analyze the diagnostic accuracy of a brief neuropsychological battery (BNB) in dementia and MCI cases from the Neurological Disorders in Central Spain (NEDICES) population-based cohort study. We screened 3,891 participants into dementia and non-dementia groups using a two-phase procedure: screening (MMSE-37 and Pfeffer-11) and clinical diagnosis by specialists (DSM-IV criteria). We subsequently selected a subsample of dementia (n = 98), MCI (n = 71), and cognitively healthy (n = 123) participants matched on socio-demographic characteristics. The clinical validity of each test of the BNB was determined by the area under the ROC curve. We determined the best combination of tests to classify individuals into the diagnostic groups by logistic regression analyses. The results indicated that the dementia and MCI groups could best be discriminated from the healthy control group on the basis of their scores on the semantic verbal fluency and delayed recall subtests of the BNB. For discriminating the MCI group from the dementia group, immediate recall tasks (stories and pictures) yielded the highest level of accuracy. Probably the most interesting finding is that the verbal fluency task consistently allowed discrimination among the diagnostic groups. Overall, subtests of the BNB are more accurate in differentiating dementia patients than MCI patients from healthy controls. In this population-based sample, a more fine-grained discrimination that includes MCI patients should follow a systematic subtest-wise analysis and decision.
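
    The area under the ROC curve used to judge each subtest can be computed directly from the two groups' scores via the Mann-Whitney statistic: it is the probability that a randomly chosen patient scores higher than a randomly chosen control. A minimal Python sketch with invented "risk scores" (higher meaning more likely dementia); the values are illustrative and not from the NEDICES data:

      import numpy as np

      def roc_auc(scores_cases, scores_controls):
          # AUC via the Mann-Whitney U statistic (assumes no tied scores).
          scores = np.concatenate([scores_cases, scores_controls])
          ranks = np.argsort(np.argsort(scores)) + 1.0
          n_pos, n_neg = len(scores_cases), len(scores_controls)
          r_pos = ranks[:n_pos].sum()
          return (r_pos - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

      dementia = np.array([0.90, 0.80, 0.75, 0.60, 0.85, 0.70])   # hypothetical risk scores
      controls = np.array([0.20, 0.40, 0.65, 0.30, 0.50, 0.35])
      print("AUC:", round(roc_auc(dementia, controls), 2))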

  4. Using Rasch Analysis to Evaluate Accuracy of Individual Activities of Daily Living (ADL) and Instrumental Activities of Daily Living (IADL) for Disability Measurement.

    PubMed

    Friedman, Bruce; Li, Yanen

    2015-01-01

    Our study objectives were to examine the accuracy of individual activities of daily living (ADLs) and instrumental ADLs (IADLs) for disability measurement, and determine whether dependence or difficulty is more useful for disability measurement. We analyzed data from 499 patients with 2+ ADLs or 3+ IADLs who participated in a home visiting nurse intervention study, and whose function had been assessed at study baseline and 22 months. Rasch analysis was used to evaluate accuracy of 24 individual ADL and IADL items. The individual items differed in the amount of information provided in measuring functional disability along the range of disability, providing much more information in (usually) one part of the range. While nearly all of the Item Information Curves (IICs) for the ADL dependence, IADL difficulty, and IADL dependence items were unimodal with one information peak each, the IICs for ADL difficulty exhibited a bimodal pattern with two peaks. Which of the individual items performed better in disability measurement varied by the extent of functional disability (i.e., by how disabled the patients were). The information peaks of most ADLs and many IADLs rise or drop steeply in a relatively short distance. Thus, whether dependence or difficulty is superior often changes very quickly along the disability continuum. There was considerable heterogeneity in which individual items provided the most and the least information at the three points of interest examined across the disability range (-2 SD units, mean, +2 SD units). While the disability region (low, medium, and high disability) for which each individual item provided the most information remained quite stable between baseline and 22 months for ADL difficulty, IADL difficulty, and IADL dependence, relatively large shifts occurred for ADL dependence items. At the disability mean dependence items offered more information for assessment than difficulty. While ADLs also provided more information at -2 and +2 SD
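
    The item information curves discussed here have a simple closed form for dichotomous Rasch items: information equals P(1 - P), so it peaks where person ability matches item difficulty and falls off on either side. A short Python sketch with two hypothetical item difficulties standing in for an "easy" ADL-type item and a "hard" IADL-type item:

      import numpy as np

      def rasch_item_information(theta, difficulty):
          # Item information for a dichotomous Rasch item: I(theta) = P * (1 - P).
          p = 1.0 / (1.0 + np.exp(-(theta - difficulty)))
          return p * (1.0 - p)

      theta = np.linspace(-4.0, 4.0, 81)
      for b in (-1.5, 1.0):                      # hypothetical item difficulties
          info = rasch_item_information(theta, b)
          print(f"difficulty {b:+.1f}: information peaks at theta = {theta[np.argmax(info)]:+.1f}")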

  5. Accuracy of Bolton analysis measured in laser scanned digital models compared with plaster models (gold standard) and cone-beam computer tomography images

    PubMed Central

    Kim, Jooseong

    2016-01-01

    Objective The aim of this study was to compare the accuracy of Bolton analysis obtained from digital models scanned with the Ortho Insight three-dimensional (3D) laser scanner system to those obtained from cone-beam computed tomography (CBCT) images and traditional plaster models. Methods CBCT scans and plaster models were obtained from 50 patients. Plaster models were scanned using the Ortho Insight 3D laser scanner; Bolton ratios were calculated with its software. CBCT scans were imported and analyzed using AVIZO software. Plaster models were measured with a digital caliper. Data were analyzed with descriptive statistics and the intraclass correlation coefficient (ICC). Results Anterior and overall Bolton ratios obtained by the three different modalities exhibited excellent agreement (> 0.970). The mean differences between the scanned digital models and physical models and between the CBCT images and scanned digital models for overall Bolton ratios were 0.41 ± 0.305% and 0.45 ± 0.456%, respectively; for anterior Bolton ratios, 0.59 ± 0.520% and 1.01 ± 0.780%, respectively. ICC results showed that intraexaminer error reliability was generally excellent (> 0.858 for all three diagnostic modalities), with < 1.45% discrepancy in the Bolton analysis. Conclusions Laser scanned digital models are highly accurate compared to physical models and CBCT scans for assessing the spatial relationships of dental arches for orthodontic diagnosis. PMID:26877978
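
    The Bolton analysis itself reduces to two ratios of summed mesiodistal tooth widths (mandibular over maxillary, times 100) for the 12-tooth and 6-tooth segments, with commonly cited norms of roughly 91.3% (overall) and 77.2% (anterior). A minimal Python sketch with hypothetical per-tooth widths in millimetres, listed anterior teeth first:

      def bolton_ratios(mandibular_widths, maxillary_widths):
          # Overall (12-tooth) and anterior (6-tooth) Bolton ratios; the first six
          # entries of each list are assumed to be the anterior (canine-to-canine) teeth.
          overall = 100.0 * sum(mandibular_widths[:12]) / sum(maxillary_widths[:12])
          anterior = 100.0 * sum(mandibular_widths[:6]) / sum(maxillary_widths[:6])
          return overall, anterior

      mand = [5.0, 5.5, 6.5, 5.0, 5.5, 6.5, 7.0, 7.0, 11.0, 7.0, 7.0, 11.0]   # hypothetical
      maxi = [8.5, 6.5, 7.5, 8.5, 6.5, 7.5, 7.0, 6.5, 10.0, 7.0, 6.5, 10.0]   # hypothetical
      overall, anterior = bolton_ratios(mand, maxi)
      print(f"overall Bolton ratio: {overall:.1f}%, anterior Bolton ratio: {anterior:.1f}%")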

  6. Using Global Analysis to Extend the Accuracy and Precision of Binding Measurements with T cell Receptors and Their Peptide/MHC Ligands

    PubMed Central

    Blevins, Sydney J.; Baker, Brian M.

    2017-01-01

    In cellular immunity, clonally distributed T cell receptors (TCRs) engage complexes of peptides bound to major histocompatibility complex proteins (pMHCs). In the interactions of TCRs with pMHCs, regions of restricted and variable diversity align in a structurally complex fashion. Many studies have used mutagenesis to attempt to understand the “roles” played by various interface components in determining TCR recognition properties such as specificity and cross-reactivity. However, these measurements are often complicated or even compromised by the weak affinities TCRs maintain toward pMHC. Here, we demonstrate how global analysis of multiple datasets can be used to significantly extend the accuracy and precision of such TCR binding experiments. Application of this approach should positively impact efforts to understand TCR recognition and facilitate the creation of mutational databases to help engineer TCRs with tuned molecular recognition properties. We also show how global analysis can be used to analyze double mutant cycles in TCR-pMHC interfaces, which can lead to new insights into immune recognition. PMID:28197404
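
    Global analysis in this setting means fitting several binding datasets simultaneously with shared parameters, which stabilises estimates when individual titrations are weak. The Python sketch below shares a single Kd between two hypothetical titration curves while letting each keep its own maximal response; the 1:1 binding model, the data and the starting values are invented for illustration, and a generic least-squares optimiser stands in for the authors' analysis software.

      import numpy as np
      from scipy.optimize import least_squares

      def bound(L, kd, bmax):
          # Simple 1:1 binding model: response at free ligand concentration L.
          return bmax * L / (kd + L)

      # Two hypothetical titrations sharing the same Kd (40 uM) but with different
      # maximal responses; noise added to mimic experimental scatter.
      rng = np.random.default_rng(1)
      L1 = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
      L2 = np.array([2.0, 5.0, 15.0, 50.0, 150.0, 500.0])
      y1 = bound(L1, 40.0, 100.0) + rng.normal(0.0, 2.0, L1.size)
      y2 = bound(L2, 40.0, 60.0) + rng.normal(0.0, 2.0, L2.size)

      def residuals(params):
          kd, bmax1, bmax2 = params          # Kd is shared globally across both datasets
          return np.concatenate([y1 - bound(L1, kd, bmax1),
                                 y2 - bound(L2, kd, bmax2)])

      fit = least_squares(residuals, x0=[10.0, 80.0, 50.0])
      print("globally fitted Kd (uM):", round(fit.x[0], 1))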

  7. Bayesian Meta-Analysis of the Accuracy of a Test for Tuberculous Pleuritis in the Absence of a Gold Standard Reference

    PubMed Central

    Dendukuri, Nandini; Schiller, Ian; Joseph, Lawrence; Pai, Madhukar

    2013-01-01

    Summary Absence of a perfect reference test is an acknowledged source of bias in diagnostic studies. In the case of tuberculous pleuritis, standard reference tests such as smear microscopy, culture and biopsy have poor sensitivity. Yet meta-analyses of new tests for this disease have always assumed the reference standard is perfect, leading to biased estimates of the new test’s accuracy. We describe a method for joint meta-analysis of sensitivity and specificity of the diagnostic test under evaluation, while considering the imperfect nature of the reference standard. We use a Bayesian hierarchical model that takes into account within- and between-study variability. We show how to obtain pooled estimates of sensitivity and specificity, and how to plot a hierarchical summary receiver operating characteristic curve. We describe extensions of the model to situations where multiple reference tests are used, and where index and reference tests are conditionally dependent. The performance of the model is evaluated using simulations and illustrated using data from a meta-analysis of nucleic acid amplification tests (NAATs) for tuberculous pleuritis. The estimate of NAAT specificity was higher and the sensitivity lower compared to a model that assumed that the reference test was perfect. PMID:22568612
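
    A quick simulation makes the central point concrete: judging a new test against an imperfect reference distorts its apparent accuracy. The Python sketch below assumes conditional independence and invented sensitivities and specificities for an index test and a culture-like reference; it illustrates why accuracy computed against the imperfect reference differs from accuracy against the true (latent) disease status, and is not the Bayesian hierarchical model described in the record.

      import numpy as np

      rng = np.random.default_rng(42)
      n, prevalence = 200_000, 0.3
      index_sens, index_spec = 0.90, 0.98   # assumed true accuracy of the index test
      ref_sens, ref_spec = 0.70, 0.99       # assumed imperfect reference (e.g. culture)

      disease = rng.random(n) < prevalence
      index_pos = np.where(disease, rng.random(n) < index_sens, rng.random(n) > index_spec)
      ref_pos = np.where(disease, rng.random(n) < ref_sens, rng.random(n) > ref_spec)

      # "Apparent" accuracy of the index test when the imperfect reference is treated as truth.
      apparent_sens = np.mean(index_pos[ref_pos])
      apparent_spec = np.mean(~index_pos[~ref_pos])
      print(f"apparent sensitivity {apparent_sens:.3f} vs true {index_sens}")
      print(f"apparent specificity {apparent_spec:.3f} vs true {index_spec}")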

  8. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  9. Landsat wildland mapping accuracy

    USGS Publications Warehouse

    Todd, William J.; Gehring, Dale G.; Haman, J. F.

    1980-01-01

    A Landsat-aided classification of ten wildland resource classes was developed for the Shivwits Plateau region of the Lake Mead National Recreation Area. Single stage cluster sampling (without replacement) was used to verify the accuracy of each class.

  10. Characterization of Small Focal Renal Lesions: Diagnostic Accuracy with Single-Phase Contrast-enhanced Dual-Energy CT with Material Attenuation Analysis Compared with Conventional Attenuation Measurements.

    PubMed

    Marin, Daniele; Davis, Drew; Roy Choudhury, Kingshuk; Patel, Bhavik; Gupta, Rajan T; Mileto, Achille; Nelson, Rendon C

    2017-03-28

    Purpose To determine whether single-phase contrast material-enhanced dual-energy material attenuation analysis improves the characterization of small (1-4 cm) renal lesions compared with conventional attenuation measurements by using histopathologic analysis and follow-up imaging as the clinical reference standards. Materials and Methods In this retrospective, HIPAA-compliant, institutional review board-approved study, 136 consecutive patients (95 men and 41 women; mean age, 54 years) with 144 renal lesions (111 benign, 33 malignant) measuring 1-4 cm underwent single-energy unenhanced and contrast-enhanced dual-energy computed tomography (CT) of the abdomen. For each renal lesion, attenuation measurements were obtained; attenuation change of greater than or equal to 15 HU was considered evidence of enhancement. Dual-energy attenuation measurements were also obtained by using iodine-water, water-iodine, calcium-water, and water-calcium material basis pairs. Mean lesion attenuation values and material densities were compared between benign and malignant renal lesions by using the two-sample t test. Diagnostic accuracy of attenuation measurements and dual-energy material densities was assessed and validated by using 10-fold cross-validation to limit the effect of optimistic bias. Results By using cross-validated optimal thresholds at 100% sensitivity, iodine-water material attenuation images significantly improved specificity for differentiating between benign and malignant renal lesions compared with conventional enhancement measurements (93% [103 of 111]; 95% confidence interval: 86%, 97%; vs 81% [90 of 111]; 95% confidence interval: 73%, 88%) (P = .02). Sensitivity with iodine-water and calcium-water material attenuation images was also higher than that with conventional enhancement measurements, although the difference was not statistically significant. Conclusion Contrast-enhanced dual-energy CT with material attenuation analysis improves specificity for

  11. An in-depth evaluation of accuracy and precision in Hg isotopic analysis via pneumatic nebulization and cold vapor generation multi-collector ICP-mass spectrometry.

    PubMed

    Rua-Ibarz, Ana; Bolea-Fernandez, Eduardo; Vanhaecke, Frank

    2016-01-01

    Mercury (Hg) isotopic analysis via multi-collector inductively coupled plasma (ICP)-mass spectrometry (MC-ICP-MS) can provide relevant biogeochemical information by revealing sources, pathways, and sinks of this highly toxic metal. In this work, the capabilities and limitations of two different sample introduction systems, based on pneumatic nebulization (PN) and cold vapor generation (CVG), respectively, were evaluated in the context of Hg isotopic analysis via MC-ICP-MS. The effects of (i) instrument settings and acquisition parameters, (ii) the concentrations of the analyte element (Hg) and of the internal standard (Tl), used for mass discrimination correction purposes, and (iii) different mass bias correction approaches on the accuracy and precision of Hg isotope ratio results were evaluated. The extent and stability of the mass bias were assessed in a long-term study (18 months, n = 250), demonstrating a precision ≤0.006% relative standard deviation (RSD). CVG-MC-ICP-MS showed an approximately 20-fold enhancement in Hg signal intensity compared with PN-MC-ICP-MS. For CVG-MC-ICP-MS, the mass bias induced by instrumental mass discrimination was accurately corrected for by using either external correction in a sample-standard bracketing approach (SSB) or double correction, consisting of the use of Tl as internal standard in a revised version of the Russell law (Baxter approach), followed by SSB. Concomitant matrix elements did not affect the CVG-ICP-MS results. Neither with PN nor with CVG was any evidence for mass-independent discrimination effects in the instrument observed within the experimental precision obtained. CVG-MC-ICP-MS was finally used for Hg isotopic analysis of reference materials (RMs) of relevant environmental origin. The isotopic composition of Hg in RMs of marine biological origin testified to mass-independent fractionation affecting the odd-numbered Hg isotopes. While older RMs were used for validation purposes, novel Hg isotopic data are provided for the
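
    The internal-standard correction referred to here (the Russell, or exponential, law with Tl) can be sketched briefly: derive the mass-bias factor from the measured versus reference 205Tl/203Tl ratio and apply it, scaled by the relevant isotope masses, to the Hg ratio, with sample-standard bracketing on top. In the Python sketch below the isotope masses and the Tl reference ratio are nominal values, every measured ratio is invented, and the Baxter refinement used in the record is not reproduced.

      import numpy as np

      M_205TL, M_203TL = 204.9744, 202.9723     # nominal isotope masses (u)
      M_202HG, M_198HG = 201.9706, 197.9668
      R_TL_TRUE = 2.3871                        # nominal 205Tl/203Tl of the Tl spike

      def correct_hg_ratio(r_hg_measured, r_tl_measured):
          # Exponential (Russell) law: beta from Tl, then applied to the Hg isotope pair.
          beta = np.log(R_TL_TRUE / r_tl_measured) / np.log(M_205TL / M_203TL)
          return r_hg_measured * (M_202HG / M_198HG) ** beta

      # Sample bracketed by two standard runs; all measured ratios are illustrative.
      r_std1 = correct_hg_ratio(3.0545, 2.4105)
      r_std2 = correct_hg_ratio(3.0547, 2.4110)
      r_sample = correct_hg_ratio(3.0530, 2.4108)
      delta202 = (r_sample / (0.5 * (r_std1 + r_std2)) - 1.0) * 1000.0
      print(f"delta202Hg = {delta202:.2f} per mil")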

  12. The effect of increased consumer demand on fees for aesthetic surgery: an economic analysis.

    PubMed

    Krieger, L M; Shaw, W W

    1999-12-01

    Economic theory dictates that changes in consumer demand have predictable effects on prices. Demographics represents an important component of demand for aesthetic surgery. Between the years of 1997 and 2010, the U.S. population is projected to increase by 12 percent. The population increase will be skewed such that those groups undergoing the most aesthetic surgery will see the largest increase. Accounting for the age-specific frequencies of aesthetic surgery and the population increase yields an estimate that the overall market for aesthetic surgery will increase by 19 percent. Barring unforeseen changes in general economic conditions or consumer tastes, demand should increase by an analogous amount. An economic demonstration shows the effects of increasing demand for aesthetic surgery on its fees. Between the years of 1992 and 1997, there was an increase in demand for breast augmentation as fears of associated autoimmune disorders subsided. Similarly, there was increased male acceptance of aesthetic surgery. The number of breast augmentations and procedures to treat male pattern baldness, plastic surgeons, and fees for the procedures were tracked. During the study period, the supply of surgeons and consumer demand increased for both of these procedures. Volume of breast augmentation increased by 275 percent, whereas real fees remained stable. Volume of treatment for male pattern baldness increased by 107 percent, and the fees increased by 29 percent. Ordinarily, an increase in supply leads to a decrease in prices. This did not occur during the study period. Economic analysis demonstrates that the increased supply of surgeons performing breast augmentation was offset by increased consumer demand for the procedure. For this reason, fees were not lowered. Similarly, increased demand for treatment of male pattern baldness more than offset the increased supply of surgeons performing it. The result was higher fees. Emphasis should be placed on using these economic

  13. Increasing habilitative services for persons with profound handicaps: an application of structural analysis to staff management.

    PubMed

    Green, C W; Reid, D H; Perkins, L I; Gardner, S M

    1991-01-01

    We evaluated a structural analysis methodology for enhancing the utility of a staff management program. In Experiment 1, a structural analysis of direct-care staff behavior in a mental retardation facility revealed differences in work patterns over time. Specific times were identified when few basic care duties were necessary and staff engaged in nonwork activity. In Experiment 2, a management program was implemented to increase staff members' training activities during periods identified through the structural analysis. The program was accompanied by increases in training activities and decreases in nonwork behavior. The improvements were maintained during a 43-week period while the most labor-intensive component of the program was withdrawn. Staff acceptability measures indicated a positive response to the management intervention, although responses varied across components within the multifaceted program. The increased training was accompanied by beneficial changes among clients with profound handicaps. Results are discussed regarding practical considerations for improving staff performance and for adopting innovations resulting from applied research.

  14. Evaluating the accuracy of selenodesic reference grids

    NASA Technical Reports Server (NTRS)

    Koptev, A. A.

    1974-01-01

    Estimates were made of the accuracy of reference point grids using the technique of calculating the errors from theoretical analysis. Factors taken into consideration were: telescope accuracy, number of photographs, and libration amplitude. To solve the problem, formulas were used for the relationship between the coordinates of lunar surface points and their images on the photograph.

  15. Characterizing accuracy of total hemoglobin recovery using contrast-detail analysis in 3D image-guided near infrared spectroscopy with the boundary element method

    PubMed Central

    Ghadyani, Hamid R.; Srinivasan, Subhadra; Pogue, Brian W.; Paulsen, Keith D.

    2010-01-01

    The quantification of total hemoglobin concentration (HbT) obtained from multi-modality image-guided near infrared spectroscopy (IG-NIRS) was characterized using the boundary element method (BEM) for 3D image reconstruction. Multi-modality IG-NIRS systems use a priori information to guide the reconstruction process. While this has been shown to improve resolution, the effect on quantitative accuracy is unclear. Here, through systematic contrast-detail analysis, the fidelity of IG-NIRS in quantifying HbT was examined using 3D simulations. These simulations show that HbT could be recovered for medium-sized (20 mm in 100 mm total diameter) spherical inclusions with an average error of 15%, for the physiologically relevant situation of 2:1 or higher contrast between background and inclusion. Using partial 3D volume meshes to reduce the ill-posed nature of the image reconstruction, inclusions as small as 14 mm could be accurately quantified with less than 15% error, for contrasts of 1.5 or higher. This suggests that 3D IG-NIRS provides quantitatively accurate results for the sizes seen early in the treatment cycle of patients undergoing neoadjuvant chemotherapy, when the tumors are larger than 30 mm. PMID:20720975

  16. Thermodynamics of protein-ligand interactions as a reference for computational analysis: how to assess accuracy, reliability and relevance of experimental data

    NASA Astrophysics Data System (ADS)

    Krimmer, Stefan G.; Klebe, Gerhard

    2015-09-01

    For a conscientious interpretation of thermodynamic parameters (Gibbs free energy, enthalpy and entropy) obtained by isothermal titration calorimetry (ITC), it is necessary to first evaluate the experimental setup and conditions at which the data were measured. The data quality must be assessed and the precision and accuracy of the measured parameters must be estimated. This information provides the basis for deciding at which level discussion of the data is appropriate, and allows insight into the significance of comparisons with other data. The aim of this article is to provide the reader with a basic understanding of the ITC technique and the experimental practices commonly applied, in order to foster an appreciation for how much measured thermodynamic parameters can deviate from ideal, error-free values. Particular attention is paid to the shape of the recorded isotherm (c-value), the influence of the applied buffer used for the reaction (protonation reactions, pH), the chosen experimental settings (temperature), impurities of protein and ligand, sources of systematic errors (solution concentration, solution activity, and device calibration) and to the applied analysis software. Furthermore, we comment on enthalpy-entropy compensation, heat capacities and van't Hoff enthalpies.
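
    The parameters in question are linked by two short relations, ΔG = -RT ln Ka and ΔG = ΔH - TΔS, while the shape of the isotherm is governed by the Wiseman c-value, c = n·Ka·[cell concentration]. A minimal Python sketch with an invented Ka, ΔH and cell concentration (stoichiometry n assumed to be 1):

      import numpy as np

      R = 8.314  # J mol^-1 K^-1

      def itc_thermodynamics(Ka, dH_kJ, T=298.15):
          # dG = -RT ln Ka and the entropic term from dG = dH - T dS (all in kJ/mol).
          dG = -R * T * np.log(Ka) / 1000.0
          minus_TdS = dG - dH_kJ
          return dG, minus_TdS

      dG, minus_TdS = itc_thermodynamics(Ka=1e6, dH_kJ=-40.0)   # hypothetical ligand
      print(f"dG = {dG:.1f} kJ/mol, -T*dS = {minus_TdS:.1f} kJ/mol")

      c_value = 1.0 * 1e6 * 20e-6   # n * Ka * 20 uM protein in the cell (illustrative)
      print(f"Wiseman c-value = {c_value:.0f}")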

  17. Investigation of the Matrix Effect on the Accuracy of Quantitative Analysis of Trace Metals in Liquids Using Laser-Induced Breakdown Spectroscopy with Solid Substrates.

    PubMed

    Xiu, Junshan; Dong, Lili; Qin, Hua; Liu, Yunyan; Yu, Jin

    2016-12-01

    The detection limit of trace metals in liquids has been improved greatly by laser-induced breakdown spectroscopy (LIBS) using solid substrates. A paper substrate and a metallic substrate were used as solid substrates for the detection of trace metals in aqueous solutions and viscous liquids (lubricating oils), respectively. The matrix effect on the quantitative analysis of trace metals in the two types of liquids was investigated. For trace metals in aqueous solutions using the paper substrate, the calibration curves established for pure-solution and mixed-solution samples presented large variations in both slope and intercept for Cu, Cd, and Cr. Matrix effects among the different elements in mixed solutions were observed. However, good agreement was obtained between the measured and known values in real wastewater. For trace metals in lubricating oils, the matrix effect between the different oils is relatively small and reasonably negligible under the conditions of our experiment. A universal calibration curve can be established for trace metals in different types of oils. The two approaches show that it is possible to develop a feasible and sensitive method with accurate results for the rapid detection of trace metals in industrial wastewater and viscous liquids by laser-induced breakdown spectroscopy.

  18. Association between increase in fixed penalties and road safety outcomes: A meta-analysis.

    PubMed

    Elvik, Rune

    2016-07-01

    Studies that have evaluated the association between increases in traffic fine amounts (fixed penalties) and changes in compliance with road traffic law or the number of accidents are synthesised by means of meta-analysis. The studies were few and differed in many respects. Nine studies were included in the meta-analysis of changes in compliance. Four studies were included in the meta-analysis of changes in accidents. Increasing traffic fines was found to be associated with small changes in the rate of violations. The changes were non-linear. For increases up to about 100%, violations were reduced. For larger increases, no reduction in violations was found. A small reduction in fatal accidents was associated with increased fixed penalties, varying between studies from less than 1% to 12%. The main pattern of changes in violations was similar in the fixed-effects and random-effects models of meta-analysis, in meta-regression, and when simple (non-weighted) mean values were computed. The main findings are thus robust, although most of the primary studies did not control very well for potentially confounding factors. Summary estimates of changes in violations or accidents should be treated as provisional and do not necessarily reflect causal relationships.

  19. Integrated High Accuracy Portable Metrology for Large Scale Structural Testing

    NASA Astrophysics Data System (ADS)

    Klaas, Andrej; Richardson, Paul; Burguete, Richard; Harris, Linden

    2014-06-01

    As the performance and accuracy of analysis tools increase, bespoke solutions are more regularly being requested to perform high-accuracy measurements on structural tests to validate these methods. These can include optical methods and full-field techniques in place of the more traditional point measurements. As each test is unique, it presents its own individual challenges. In this paper two recent, large scale tests performed by Airbus will be presented and the metrology solutions that were identified for them will be discussed.

  20. Using judgement to improve accuracy in decision-making.

    PubMed

    Dowding, Dawn; Thompson, Carl

    Nursing judgements are complex, often involving the need to process a large number of information cues. Key issues include how accurate they are and how we can improve levels of accuracy. Traditional approaches to the study of nursing judgement, characterised by qualitative and descriptive research, have provided valuable insights into the nature of expert nursing practice and the complexity of practice. However, they have largely failed to provide the data needed to address judgement accuracy. Social judgement analysis approaches are one way of overcoming these limitations. This paper argues that as nurses take on more roles requiring accurate judgement, it is time to increase our knowledge of judgement and ways to improve it.

  1. Inventory accuracy in 60 days!

    PubMed

    Miller, G J

    1997-08-01

    Despite great advances in manufacturing technology and management science, thousands of organizations still don't have a handle on basic inventory accuracy. Many companies don't even measure it properly, or at all, and lack corrective action programs to improve it. This article offers an approach that has proven successful a number of times, when companies were quite serious about making improvements. Not only can it be implemented, but also it can likely be implemented within 60 days per area, if properly managed. The hardest part is selling people on the need to improve and then keeping them motivated. The net cost of such a program? Probably less than nothing, since the benefits gained usually far exceed the costs. Improved inventory accuracy can aid in enhancing customer service, determining purchasing and manufacturing priorities, reducing operating costs, and increasing the accuracy of financial records. This article also addresses the gap in contemporary literature regarding accuracy program features for repetitive, JIT, cellular, and process- and project-oriented environments.

  2. Analysis of Orbital Prediction Accuracy Improvements Using High Fidelity Physical Solar Radiation Pressure Models for Tracking High Area-to-Mass Ratio Objects

    NASA Astrophysics Data System (ADS)

    Kelecy, Tom; Jah, Moriba

    2009-03-01

    Inactive high area-to-mass ratio (A/m) resident space objects (RSOs) in the geosynchronous orbit (GEO) regime pose a hazard to active GEO RSOs. Their high A/m results in increased sensitivity to non-conservative force effects, manifested as perturbations of mean motion, inclination and eccentricity. This work examines the sensitivity of the trajectory prediction accuracies to various fidelities of complexity in the modeling of the solar radiation pressure (SRP) acceleration contributions to the overall dynamics. A physics-based solar radiation pressure model which includes the effects of refraction and absorption from the Earth's atmosphere during penumbral transitions is implemented. Additionally, variations in the area with respect to the sun are examined using representative orbits with associated eclipsing cycles. The trajectory prediction errors from the combined modeling errors show significant growth, consistent with loss of tracking. The errors are, in general, not normally distributed, as the null hypothesis of normality is rejected in various normality tests. This contributes to the prediction errors through errors in the orbit determination assumptions.
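
    For orientation, the lowest-fidelity "cannonball" SRP acceleration against which the higher-fidelity models are compared is simply Cr·(A/m)·P·(AU/r)^2. The Python sketch below evaluates it for an assumed high area-to-mass GEO object and for a typical satellite; the reflectivity coefficient and A/m values are illustrative only.

      P_SUN = 4.56e-6       # solar radiation pressure at 1 AU, N/m^2
      AU = 1.495978707e11   # m

      def srp_acceleration(cr, area_to_mass, r_sun=AU):
          # |a_srp| = Cr * (A/m) * P * (AU/r)^2 for a fully illuminated cannonball model.
          return cr * area_to_mass * P_SUN * (AU / r_sun) ** 2

      for am in (1.0, 0.02):   # high A/m debris vs a typical satellite, m^2/kg
          print(f"A/m = {am:>5} m^2/kg -> a_srp ~ {srp_acceleration(1.3, am):.2e} m/s^2")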

  3. Analysis and design of numerical schemes for gas dynamics 1: Artificial diffusion, upwind biasing, limiters and their effect on accuracy and multigrid convergence

    NASA Technical Reports Server (NTRS)

    Jameson, Antony

    1994-01-01

    The theory of non-oscillatory scalar schemes is developed in this paper in terms of the local extremum diminishing (LED) principle that maxima should not increase and minima should not decrease. This principle can be used for multi-dimensional problems on both structured and unstructured meshes, while it is equivalent to the total variation diminishing (TVD) principle for one-dimensional problems. A new formulation of symmetric limited positive (SLIP) schemes is presented, which can be generalized to produce schemes with arbitrary high order of accuracy in regions where the solution contains no extrema, and which can also be implemented on multi-dimensional unstructured meshes. Systems of equations lead to waves traveling with distinct speeds and possibly in opposite directions. Alternative treatments using characteristic splitting and scalar diffusive fluxes are examined, together with modification of the scalar diffusion through the addition of pressure differences to the momentum equations to produce full upwinding in supersonic flow. This convective upwind and split pressure (CUSP) scheme exhibits very rapid convergence in multigrid calculations of transonic flow, and provides excellent shock resolution at very high Mach numbers.
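
    A minimal concrete instance of the LED/TVD idea described here is a flux-limited upwind scheme for scalar advection: the minmod limiter switches the anti-diffusive correction off near extrema, so no new maxima or minima are created. The Python sketch below is a textbook-style illustration for u_t + a u_x = 0 with a > 0 and CFL number nu <= 1; it is not the SLIP or CUSP scheme of the paper.

      import numpy as np

      def minmod_phi(r):
          # Minmod flux limiter, phi(r) = max(0, min(1, r)), which lies in the TVD region.
          return np.maximum(0.0, np.minimum(1.0, r))

      def advect_tvd(u, nu, nsteps):
          # Flux-limited upwind scheme for u_t + a u_x = 0 (a > 0), periodic boundaries.
          for _ in range(nsteps):
              d = np.roll(u, -1) - u                              # u_{i+1} - u_i
              d_up = u - np.roll(u, 1)                            # u_i - u_{i-1}
              r = d_up / np.where(np.abs(d) > 1e-14, d, 1e-14)    # smoothness ratio
              corr = 0.5 * nu * (1.0 - nu) * minmod_phi(r) * d    # limited anti-diffusion
              flux = nu * u + corr                                # flux at i+1/2 (times dt/dx)
              u = u - (flux - np.roll(flux, 1))
          return u

      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)              # square pulse
      u1 = advect_tvd(u0.copy(), nu=0.5, nsteps=200)
      print("no new maxima:", u1.max() <= 1.0 + 1e-12, "no new minima:", u1.min() >= -1e-12)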

  4. Predictive accuracy of amyloid imaging for progression from mild cognitive impairment to Alzheimer disease with different lengths of follow-up: a meta-analysis. [Corrected].

    PubMed

    Ma, Yan; Zhang, Shuo; Li, Jing; Zheng, Dong-Ming; Guo, Yang; Feng, Juan; Ren, Wei-Dong

    2014-12-01

    In the past decade, amyloid deposition has been shown to begin many years before the clinical symptoms of dementia in mild cognitive impairment (MCI) due to Alzheimer disease (AD). Longitudinal studies with different follow-up durations have suggested that (11)C-Pittsburgh compound B positron emission tomography ((11)C-PIB-PET) may play a role in stratifying patients with MCI into risk levels for developing AD. However, the predictive accuracy of amyloid imaging for the progression from MCI to AD with different follow-up durations has not yet been systematically evaluated. A formal systematic evaluation of the sensitivity, specificity, and other properties of (11)C-PIB-PET was performed. This study aimed to systematically review and meta-analyze published data on the diagnostic performance of (11)C-PIB-PET for predicting conversion to AD in patients with MCI and to determine whether long-term follow-up has a positive effect on predictive accuracy. Relevant studies were systematically identified through electronic searches, which were performed in MEDLINE (OvidSP), EMBASE (OvidSP), BIOSIS Previews (ISI Web of Knowledge), Science Citation Index (ISI Web of Knowledge), PsycINFO (Ovid SP), and LILACS (Bireme). The methodological quality of each study was assessed by QUADAS-2. Sensitivities and specificities of (11)C-PIB-PET in individual studies were calculated, and the studies underwent meta-analysis with a random-effects model. A summary receiver-operating characteristic curve (SROC) was constructed with the Moses-Shapiro-Littenberg method. Pooled estimates of sensitivity, specificity, positive likelihood ratio (LR+), negative likelihood ratio (LR-), diagnostic odds ratio (DOR), and the SROC curve of each subgroup were determined. Heterogeneity was tested, and potential sources of heterogeneity were explored by assessing whether certain covariates significantly influenced the relative DOR. Eleven eligible studies consisting of a total of 352 patients with MCI at baseline were included

  5. The diagnostic accuracy of the natriuretic peptides in heart failure: systematic review and diagnostic meta-analysis in the acute care setting

    PubMed Central

    Roberts, Emmert; Dworzynski, Katharina; Al-Mohammad, Abdallah; Cowie, Martin R; McMurray, John J V; Mant, Jonathan

    2015-01-01

    Objectives To determine and compare the diagnostic accuracy of serum natriuretic peptide levels (B type natriuretic peptide, N terminal probrain natriuretic peptide (NTproBNP), and mid-regional proatrial natriuretic peptide (MRproANP)) in people presenting with acute heart failure to acute care settings using thresholds recommended in the 2012 European Society of Cardiology guidelines for heart failure. Design Systematic review and diagnostic meta-analysis. Data sources Medline, Embase, Cochrane central register of controlled trials, Cochrane database of systematic reviews, database of abstracts of reviews of effects, NHS economic evaluation database, and Health Technology Assessment up to 28 January 2014, using combinations of subject headings and terms relating to heart failure and natriuretic peptides. Eligibility criteria for selecting studies Eligible studies evaluated one or more natriuretic peptides (B type natriuretic peptide, NTproBNP, or MRproANP) in the diagnosis of acute heart failure against an acceptable reference standard in consecutive or randomly selected adults in an acute care setting. Studies were excluded if they did not present sufficient data to extract or calculate true positives, false positives, false negatives, and true negatives, or report age independent natriuretic peptide thresholds. Studies not available in English were also excluded. Results 37 unique study cohorts described in 42 study reports were included, with a total of 48 test evaluations reporting 15 263 test results. At the lower recommended thresholds of 100 ng/L for B type natriuretic peptide and 300 ng/L for NTproBNP, the natriuretic peptides have sensitivities of 0.95 (95% confidence interval 0.93 to 0.96) and 0.99 (0.97 to 1.00) and negative predictive values of 0.94 (0.90 to 0.96) and 0.98 (0.89 to 1.0), respectively, for a diagnosis of acute heart failure. At the lower recommended threshold of 120 pmol/L, MRproANP has a sensitivity ranging from 0.95 (range 0

  6. Numerical accuracy assessment

    NASA Astrophysics Data System (ADS)

    Boerstoel, J. W.

    1988-12-01

    A framework is provided for numerical accuracy assessment. The purpose of numerical flow simulations is formulated. This formulation concerns the classes of aeronautical configurations (boundaries), the desired flow physics (flow equations and their properties), the classes of flow conditions on flow boundaries (boundary conditions), and the initial flow conditions. Next, accuracy and economical performance requirements are defined; the final numerical flow simulation results of interest should have a guaranteed accuracy, and be produced for an acceptable FLOP-price. Within this context, the validation of numerical processes with respect to the well known topics of consistency, stability, and convergence when the mesh is refined must be done by numerical experimentation, because theory gives only partial answers. This requires careful design of test cases for numerical experimentation. Finally, the results of a few recent evaluation exercises of numerical experiments with a large number of codes on a few test cases are summarized.

  7. SACRIFICING THE ECOLOGICAL RESOLUTION OF VEGETATION MAPS AT THE ALTAR OF THEMATIC ACCURACY: ASSESSED MAP ACCURACIES FOR HIERARCHICAL VEGETATION CLASSIFICATIONS IN THE EASTERN GREAT BASIN OF THE SOUTHWEST REGIONAL GAP ANALYSIS PROJECT (SW REGAP)

    EPA Science Inventory

    The Southwest Regional Gap Analysis Project (SW ReGAP) improves upon previous GAP projects conducted in Arizona, Colorado, Nevada, New Mexico, and Utah to provide a consistent, seamless vegetation map for this large and ecologically diverse geographic region. Nevada's compone...

  8. Increasing Pizza Box Assembly Using Task Analysis and a Least-to-Most Prompting Hierarchy

    ERIC Educational Resources Information Center

    Stabnow, Erin F.

    2015-01-01

    This study was designed to use a task analysis and a least-to-most prompting hierarchy to teach students with cognitive disabilities pizza box assembly skills. The purpose of this study was to determine whether a least-to-most prompting hierarchy was effective in teaching students with cognitive disabilities to increase the number of task-analyzed…

  9. Newspaper Content Analysis in Evaluation of a Community-Based Participatory Project to Increase Physical Activity

    ERIC Educational Resources Information Center

    Granner, Michelle L.; Sharpe, Patricia A.; Burroughs, Ericka L.; Fields, Regina; Hallenbeck, Joyce

    2010-01-01

    This study conducted a newspaper content analysis as part of an evaluation of a community-based participatory research project focused on increasing physical activity through policy and environmental changes, which included activities related to media advocacy and media-based community education. Daily papers (May 2003 to December 2005) from both…

  10. Interventions to Increase Attendance at Psychotherapy: A Meta-Analysis of Randomized Controlled Trials

    ERIC Educational Resources Information Center

    Oldham, Mary; Kellett, Stephen; Miles, Eleanor; Sheeran, Paschal

    2012-01-01

    Objective: Rates of nonattendance for psychotherapy hinder the effective delivery of evidence-based treatments. Although many strategies have been developed to increase attendance, the effectiveness of these strategies has not been quantified. Our aim in the present study was to undertake a meta-analysis of rigorously controlled studies to…

  11. The Accuracy of Diagnostic Tests for Lyme Disease in Humans, A Systematic Review and Meta-Analysis of North American Research

    PubMed Central

    Lindsay, Robbin; Ogden, Nicholas

    2016-01-01

    There has been an increasing incidence of Lyme disease (LD) in Canada and the United States corresponding to the expanding range of the Ixodes tick vector and Lyme disease agent (Borrelia burgdorferi sensu stricto). There are many diagnostic tests for LD available in North America, all of which have some performance issues, and physicians are concerned about the appropriate use and interpretation of these tests. The objective of this systematic review is to summarize the North American evidence on the accuracy of diagnostic tests and test regimes at various stages of LD. Included in the review are 48 studies on diagnostic tests used in North America published since 1995. Thirteen studies examined a two-tier serological test protocol vs. clinical diagnosis, 24 studies examined single assays vs. clinical diagnosis, 9 studies examined single immunoblot vs. clinical diagnosis, 7 studies compared culture or PCR direct detection methods vs. clinical diagnosis, 22 studies compared two or more tests with each other and 8 studies compared a two-tiered serological test protocol to another test. Recent studies examining the sensitivity and specificity of various test protocols noted that the Immunetics® C6 B. burgdorferi ELISA™ and the two tier approach have superior specificity compared to proposed replacements, and the CDC recommended western blot algorithm has equivalent or superior specificity over other proposed test algorithms. There is a dramatic increase in test sensitivity with progression of B. burgdorferi infection from early to late LD. Direct detection methods, culture and PCR of tissue or blood samples were not as sensitive or timely compared to serological testing. It was also noted that there are a large number of both commercial (n = 42) and in-house developed tests used by private laboratories which have not been evaluated in the primary literature. PMID:28002488

  12. Accuracy of Free Hand Pedicle Screw Installation in the Thoracic and Lumbar Spine by a Young Surgeon: An Analysis of the First Consecutive 306 Screws Using Computed Tomography

    PubMed Central

    Lee, Chang-Hyun; Kim, Yongjung J; Kim, Ki-Jeong; Jahng, Tae-Ahn; Kim, Hyun-Jib

    2014-01-01

    Study Design: A retrospective cross-sectional study. Purpose: The purpose of this study is to evaluate the accuracy and safety of free-hand pedicle screw insertion performed by a young surgeon. Overview of Literature: Few articles exist regarding the safety of the free-hand technique without inspection by an experienced spine surgeon. Methods: The index surgeon had performed spinal surgery independently for 2 years. He performed fluoroscopy-assisted pedicle screw installation during his first year and has used the free-hand technique since then. We retrospectively reviewed the records of all consecutive patients undergoing pedicle screw installation using the free-hand technique without fluoroscopy in the thoracic or lumbar spine by the index surgeon. The incidence and extent of cortical breach by misplaced pedicle screws were determined by a review of postoperative computed tomography (CT) images. Results: A total of 36 patients received 306 free-hand placed pedicle screws in the thoracic or lumbar spine. A total of 12 screws (3.9%) were identified as breaching the pedicle in 9 patients. The upper thoracic spine was the most frequent location of screw breach (10.8%). Lateral breach (2.3%) was more frequent than any other direction. Screw breach on the right side (9 patients) was more common than on the left side (3 patients) (p<0.01). Conclusions: An analysis by CT scan shows that young spine surgeons who have trained under the supervision of an experienced surgeon can safely place free-hand pedicle screws with an acceptable breach rate through repetitive confirmatory steps. PMID:24967036
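    The side-to-side comparison reported above (9 vs. 3 patients with breaches, p<0.01) is the kind of result that can be checked with a simple test on a 2x2 table. The sketch below uses Fisher's exact test as one such option; the per-side screw totals are assumed for illustration, and the abstract does not state which test the authors used.

    ```python
    # Minimal sketch: compare breach rates between sides with Fisher's exact test.
    # Per-side screw totals are assumed; only the 9 vs. 3 breach counts come from the record.
    from scipy.stats import fisher_exact

    #               breach, no breach
    right_side = [9, 144]   # assumed ~153 screws instrumented on the right
    left_side = [3, 150]    # assumed ~153 screws instrumented on the left

    odds_ratio, p_value = fisher_exact([right_side, left_side])
    print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
    ```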

  13. Maternal smoking and increased risk of sudden infant death syndrome: a meta-analysis.

    PubMed

    Zhang, Kui; Wang, Xianmin

    2013-05-01

    Maternal smoking is detrimental to the development of fetuses and neonates. This meta-analysis was performed to measure the pooled association of sudden infant death syndrome (SIDS) risk with both prenatal and postnatal maternal smoking. The odds ratio (OR) with its corresponding 95% confidence interval (CI) was used to assess the associations between maternal smoking and SIDS risk. Statistical heterogeneity among studies was assessed with the Q-test and the I² statistic. The data for this meta-analysis were available from 35 case-control studies. Prenatal and postnatal maternal smoking were each associated with a significantly increased risk of SIDS (OR=2.25, 95% CI=2.03-2.50 for the prenatal maternal smoking analysis, and OR=1.97, 95% CI=1.77-2.19 for the postnatal maternal smoking analysis) under a random-effects model. In stratified analyses, regardless of prenatal or postnatal smoking, heavy cigarette consumption increased the risk of SIDS, and significantly elevated SIDS risk was found to be associated with co-sleeping with mothers who smoked postnatally. Our results suggested that maternal smoking was associated with elevated SIDS risk and that the effects were dose-dependent. In addition, SIDS risk was significantly increased in infants co-sleeping with mothers who smoked postnatally.
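    The pooling described here (random-effects odds ratios with Q-test and I² heterogeneity) can be illustrated with a short DerSimonian-Laird computation. The study-level ORs and confidence intervals below are hypothetical placeholders, not the 35 included case-control studies.

    ```python
    # Minimal sketch of random-effects pooling of odds ratios (DerSimonian-Laird),
    # with Q-test and I^2 heterogeneity statistics. Study ORs/CIs are hypothetical.
    import numpy as np

    or_ci = [(2.1, 1.6, 2.8), (1.8, 1.2, 2.7), (2.6, 1.9, 3.6), (2.0, 1.4, 2.9)]

    log_or = np.array([np.log(o) for o, _, _ in or_ci])
    se = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96) for _, lo, hi in or_ci])
    w = 1.0 / se**2                                        # fixed-effect (inverse-variance) weights

    q = np.sum(w * (log_or - np.sum(w * log_or) / np.sum(w))**2)
    df = len(or_ci) - 1
    i2 = max(0.0, (q - df) / q) * 100                      # % of variance due to heterogeneity

    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-study variance
    w_re = 1.0 / (se**2 + tau2)                            # random-effects weights
    pooled = np.sum(w_re * log_or) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    print(f"pooled OR = {np.exp(pooled):.2f} "
          f"(95% CI {np.exp(pooled - 1.96*se_pooled):.2f}-{np.exp(pooled + 1.96*se_pooled):.2f}), "
          f"Q = {q:.2f}, I^2 = {i2:.0f}%")
    ```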

  14. The Effect of Molecular Conformation on the Accuracy of Theoretical (1)H and (13)C Chemical Shifts Calculated by Ab Initio Methods for Metabolic Mixture Analysis.

    PubMed

    Chikayama, Eisuke; Shimbo, Yudai; Komatsu, Keiko; Kikuchi, Jun

    2016-04-14

    NMR spectroscopy is a powerful method for analyzing metabolic mixtures. The information obtained from an NMR spectrum is in the form of physical parameters, such as chemical shifts, and construction of databases for many metabolites will be useful for data interpretation. To increase the accuracy of theoretical chemical shifts for development of a database for a variety of metabolites, the effects of sets of conformations (structural ensembles) and of the levels of theory on computed theoretical chemical shifts were systematically investigated for a set of 29 small molecules in the present study. For each of the 29 compounds, 101 structures were generated by classical molecular dynamics at 298.15 K, and then theoretical chemical shifts for 164 (1)H and 123 (13)C atoms were calculated by ab initio quantum chemical methods. Six levels of theory were used by pairing Hartree-Fock, B3LYP (density functional theory), or second-order Møller-Plesset perturbation theory with the 6-31G or aug-cc-pVDZ basis set. The six average fluctuations in the (1)H chemical shift were ±0.63, ±0.59, ±0.70, ±0.62, ±0.75, and ±0.66 ppm for the structural ensembles, and the six average errors were ±0.34, ±0.27, ±0.32, ±0.25, ±0.32, and ±0.25 ppm. The results showed that chemical shift fluctuations arising from conformational changes due to molecular motion were larger than the differences between computed and experimental chemical shifts for all six levels of theory. In conclusion, selection of an appropriate structural ensemble should be performed before theoretical chemical shift calculations for development of an accurate database for a variety of metabolites.
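    A rough sketch of the comparison made in this study, averaging over atoms both the ensemble fluctuation of computed shifts and their deviation from experiment, might look as follows; the arrays are random placeholders rather than the actual ab initio results.

    ```python
    # Minimal sketch: ensemble fluctuation of computed chemical shifts vs. deviation
    # from experiment, averaged over atoms. Values are random placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_conformers, n_atoms = 101, 164
    computed = rng.normal(loc=3.5, scale=0.6, size=(n_conformers, n_atoms))   # ppm, per conformer/atom
    experimental = rng.normal(loc=3.5, scale=0.3, size=n_atoms)               # ppm, per atom

    fluctuation = computed.std(axis=0).mean()                    # mean ensemble spread per atom
    error = np.abs(computed.mean(axis=0) - experimental).mean()  # mean |ensemble mean - experiment|
    print(f"mean fluctuation = ±{fluctuation:.2f} ppm, mean error = ±{error:.2f} ppm")
    ```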

  15. Which one is a valuable surrogate for predicting survival between Tomita and Tokuhashi scores in patients with spinal metastases? A meta-analysis for diagnostic test accuracy and individual participant data analysis.

    PubMed

    Lee, Chang-Hyun; Chung, Chun Kee; Jahng, Tae-Ahn; Kim, Ki-jeong; Kim, Chi Heon; Hyun, Seung-Jae; Kim, Hyun-Jib; Jeon, Sang Ryong; Chang, Ung-Kyu; Lee, Sun-Ho; Moon, Seong-Hwan; Majeed, Haroon; Zhang, Dan; Gravis, Gwenaelle; Wibmer, Christine; Kumar, Naresh; Moon, Kyung Yun; Park, Jin Hoon; Tabouret, Emeline; Fuentes, Stephane

    2015-06-01

    This study aims to estimate the diagnostic accuracy of the Tokuhashi and Tomita scores for predicting 6-month survival, which is regarded as a standard threshold for surgical treatment. We searched PubMed, EMBASE, Europe PubMed Central, and the Cochrane Library for papers on the sensitivity and specificity of the Tokuhashi and/or Tomita scores in estimating predicted survival. Studies with cut-off values of ≥9 for the Tokuhashi score and ≤7 for the Tomita score, based on prior studies, were enrolled. Sensitivity, specificity, diagnostic odds ratio (DOR), area under the curve (AUC), and the best cut-off value were calculated via meta-analysis and individual participant data analysis. Finally, 22 studies were enrolled in the meta-analysis, and 1095 patients from 8 studies were included in the individual data analysis. In the meta-analysis, the pooled sensitivity/specificity/DOR for 6-month survival were 57.7%/76.6%/4.70 for the Tokuhashi score and 81.8%/47.8%/4.93 for the Tomita score. The AUC of the summary receiver operating characteristic plots was 0.748 for the Tokuhashi score and 0.714 for the Tomita score. Although the Tokuhashi score was slightly more accurate than the Tomita score, both showed low accuracy in predicting 6-month residual survival. Moreover, the best cut-off values of the Tokuhashi and Tomita scores for predicting 6-month survival were 8 and 6, respectively, not 9 and 7. Estimation of 6-month survival to decide on surgery in patients with spinal metastasis is quite limited when using the Tokuhashi and Tomita scores alone. The Tokuhashi and Tomita scores could be incorporated into, or interpreted in the context of, a multidisciplinary approach.
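    The pooled sensitivity/specificity pairs reported above imply diagnostic odds ratios that can be back-calculated with the standard formula DOR = [sens/(1-sens)] × [spec/(1-spec)]. The sketch below does exactly that; because the meta-analytic DOR is pooled jointly across studies, the back-calculated values only approximate the reported 4.70 and 4.93.

    ```python
    # Minimal sketch: diagnostic odds ratio implied by a pooled sensitivity/specificity pair.
    # The meta-analytic DOR is estimated jointly, so these back-calculations are approximate.

    def dor(sensitivity: float, specificity: float) -> float:
        return (sensitivity / (1 - sensitivity)) * (specificity / (1 - specificity))

    print(f"Tokuhashi: {dor(0.577, 0.766):.2f}")  # reported pooled DOR 4.70
    print(f"Tomita:    {dor(0.818, 0.478):.2f}")  # reported pooled DOR 4.93
    ```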

  16. Evaluation of the contribution of LiDAR data and postclassification procedures to object-based classification accuracy

    NASA Astrophysics Data System (ADS)

    Styers, Diane M.; Moskal, L. Monika; Richardson, Jeffrey J.; Halabisky, Meghan A.

    2014-01-01

    Object-based image analysis (OBIA) is becoming an increasingly common method for producing land use/land cover (LULC) classifications in urban areas. In order to produce the most accurate LULC map, LiDAR data and postclassification procedures are often employed, but their relative contributions to accuracy are unclear. We examined the contributions of LiDAR data and postclassification procedures to increasing classification accuracy over imagery alone and assessed sources of error along an ecologically complex urban-to-rural gradient in Olympia, Washington. Overall classification accuracy and user's and producer's accuracies for individual classes were evaluated. The addition of LiDAR data to the OBIA classification resulted in an 8.34% increase in overall accuracy, while manual postclassification of the imagery+LiDAR classification improved accuracy by only an additional 1%. Sources of error in this classification were largely due to edge effects, from which multiple different types of errors result.
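    Overall, user's, and producer's accuracies such as those evaluated here are all derived from a confusion matrix. A minimal sketch with a hypothetical 3-class matrix, not the Olympia classification results:

    ```python
    # Minimal sketch: overall, user's, and producer's accuracy from a confusion matrix.
    # The matrix is hypothetical; rows = mapped (classified) class, columns = reference class.
    import numpy as np

    cm = np.array([[50,  3,  2],
                   [ 4, 45,  6],
                   [ 1,  5, 40]])

    overall = np.trace(cm) / cm.sum()
    users = np.diag(cm) / cm.sum(axis=1)      # per mapped class (complement of commission error)
    producers = np.diag(cm) / cm.sum(axis=0)  # per reference class (complement of omission error)
    print(f"overall = {overall:.2%}")
    print(f"user's accuracies     = {users.round(2)}")
    print(f"producer's accuracies = {producers.round(2)}")
    ```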

  17. Finite Element Analysis Generates an Increasing Interest in Dental Research: A Bibliometric Study

    PubMed Central

    Diarra, Abdoulaziz; Mushegyan, Vagan; Naveau, Adrien

    2016-01-01

    Purpose: The purpose was to provide a longitudinal overview of published studies that use finite element analysis in dental research, using the SCI-expanded database of Web of Science® (Thomson Reuters). Material and Methods: Eighty publications from 1999-2000 and 473 from 2009-2010 were retrieved. This literature grew faster than the overall dental literature. The number of publishing countries doubled. The main journals were American or English and dealt with implantology. For the top 10 journals publishing dental finite element papers, the mean impact factor increased by 75% during the decade. Results: Finite element analysis generates increasing interest among dental authors and publishers worldwide. PMID:27006722

  18. Diagnostic accuracy of the Berlin questionnaire, STOP-BANG, STOP, and Epworth sleepiness scale in detecting obstructive sleep apnea: A bivariate meta-analysis.

    PubMed

    Chiu, Hsiao-Yean; Chen, Pin-Yuan; Chuang, Li-Pang; Chen, Ning-Hung; Tu, Yu-Kang; Hsieh, Yu-Jung; Wang, Yu-Chi; Guilleminault, Christian

    2016-11-05

    Obstructive sleep apnea (OSA) is a highly prevalent sleep disorder; however, it remains underdiagnosed and undertreated. Although screening tools such as the Berlin questionnaire (BQ), STOP-BANG questionnaire (SBQ), STOP questionnaire (STOP), and Epworth sleepiness scale (ESS) are widely used for OSA, the findings regarding their diagnostic accuracy are controversial. Therefore, this meta-analysis investigated and compared the summary sensitivity, specificity, and diagnostic odds ratio (DOR) among the BQ, SBQ, STOP, and ESS according to the severity of OSA. Electronic databases, namely Embase, PubMed, PsycINFO, the ProQuest Dissertations and Theses A&I database, and the China Knowledge Resource Integrated Database, were searched from their inception to July 15, 2016. We included studies examining the sensitivity and specificity of the BQ, SBQ, STOP, and ESS against the apnea-hypopnea index (AHI) or respiratory disturbance index (RDI). The revised Quality Assessment of Diagnostic Accuracy Studies tool was used to evaluate the methodological quality of studies. A random-effects bivariate model was used to estimate the summary sensitivity, specificity, and DOR of the tools. We identified 108 studies including a total of 47 989 participants. The summary estimates were calculated for the BQ, SBQ, STOP, and ESS in detecting mild (AHI/RDI ≥ 5 events/h), moderate (AHI/RDI ≥ 15 events/h), and severe OSA (AHI/RDI ≥ 30 events/h). The performance levels of the BQ, SBQ, STOP, and ESS in detecting OSA of various severity levels are outlined as follows: for mild OSA, the pooled sensitivity levels were 76%, 88%, 87%, and 54%; pooled specificity levels were 59%, 42%, 42%, and 65%; and pooled DORs were 4.30, 5.13, 4.85, and 2.18, respectively. For moderate OSA, the pooled sensitivity levels were 77%, 90%, 89%, and 47%; pooled specificity levels were 44%, 36%, 32%, and 62%; and pooled DORs were 2.68, 5.05, 3.71, and 1.45, respectively. For severe OSA, the pooled sensitivity

  19. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background: Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results: Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (median (Me) = 0.76) and area under the ROC curve (Me = 0.90). However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with an acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed
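    The study design described above (several classifiers compared by 5-fold cross-validation) is straightforward to reproduce in outline. The sketch below uses scikit-learn with a synthetic dataset standing in for the 10 neuropsychological test scores, and only a subset of the ten classifiers.

    ```python
    # Minimal sketch: compare classifiers with 5-fold cross-validation.
    # A synthetic dataset stands in for the neuropsychological test scores.
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_features=10, random_state=0)

    models = {
        "LDA": LinearDiscriminantAnalysis(),
        "Logistic regression": LogisticRegression(max_iter=1000),
        "SVM": SVC(),
        "Random forest": RandomForestClassifier(random_state=0),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
        print(f"{name:20s} accuracy = {scores.mean():.2f} ± {scores.std():.2f}")
    ```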

  20. Aggravated phosphorus limitation on biomass production under increasing nitrogen loading: a meta-analysis.

    PubMed

    Li, Yong; Niu, Shuli; Yu, Guirui

    2016-02-01

    Nitrogen (N) and phosphorus (P), either individually or in combination, have been demonstrated to limit biomass production in terrestrial ecosystems. Field studies have been extensively synthesized to assess global patterns of N impacts on terrestrial ecosystem processes. However, to our knowledge, no synthesis has been done so far to reveal global patterns of P impacts on terrestrial ecosystems, especially under different N levels. Here, we conducted a meta-analysis of impacts of P addition, either alone or with N addition, on aboveground (AGB) and belowground biomass production (BGB), plant and soil P concentrations, and N : P ratio in terrestrial ecosystems. Overall, our meta-analysis quantitatively confirmed existing notions: (i) colimitation of N and P on biomass production and (ii) more P limitation in tropical forest than in other ecosystems. More importantly, our analysis revealed new findings: (i) P limitation on biomass production was aggravated by N enrichment and (ii) plant P concentration was a better indicator of P limitation than soil P availability. Specifically, P addition increased AGB and BGB by 34% and 13%, respectively. The effect size of P addition on biomass production was larger in tropical forest than in grassland, wetland, and tundra, and varied with P fertilizer form, P addition rate, and experimental duration. The P-induced increase in biomass production and plant P concentration was larger under elevated than under ambient N. Our findings suggest that the global limitation of P on biomass production will become more severe under increasing N fertilizer and deposition in the future.
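    Meta-analyses of fertilization experiments like this one typically express effects as log response ratios converted to percentage changes (e.g., the +34% AGB effect reported above). A minimal sketch with hypothetical plot means; the study's effect-size weighting across experiments is omitted.

    ```python
    # Minimal sketch: log response ratio (lnRR) effect size expressed as % change.
    # Plot means are hypothetical placeholders.
    import numpy as np

    agb_control = np.array([420.0, 515.0, 380.0])   # g m^-2, hypothetical control plots
    agb_p_added = np.array([560.0, 660.0, 540.0])   # g m^-2, hypothetical P-addition plots

    ln_rr = np.log(agb_p_added / agb_control)
    pct_change = (np.exp(ln_rr.mean()) - 1) * 100
    print(f"mean lnRR = {ln_rr.mean():.3f}  ->  P-addition effect ≈ +{pct_change:.0f}%")
    ```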

  1. Sound source localization identification accuracy: bandwidth dependencies.

    PubMed

    Yost, William A; Zhong, Xuan

    2014-11-01

    Sound source localization accuracy using a sound source identification task was measured in the front-right quarter of the azimuth plane as rms (root-mean-square) error (degrees) for stimulus conditions in which the bandwidth (1/20 to 2 octaves wide) and center frequency (250, 2000, 4000 Hz) of 200-ms noise bursts were varied. Tones of different frequencies (250, 2000, 4000 Hz) were also used. As stimulus bandwidth increases, sound source localization identification accuracy increases (i.e., rms error decreases). Wideband stimuli (>1 octave wide) produce the best sound source localization accuracy (~6°-7° rms error), and localization accuracy for these wideband noise stimuli does not depend on center frequency. For narrow bandwidths (<1 octave) and tonal stimuli, accuracy does depend on center frequency, such that the highest accuracy is obtained for low-frequency stimuli (centered on 250 Hz), the worst accuracy for mid-frequency stimuli (centered on 2000 Hz), and intermediate accuracy for high-frequency stimuli (centered on 4000 Hz).
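    The accuracy metric used throughout this record is rms error in degrees between response and source azimuths, which is a one-line computation. The angles below are hypothetical responses, not the study's data.

    ```python
    # Minimal sketch: rms localization error (degrees) between perceived and actual azimuths.
    # Angles are hypothetical placeholders.
    import numpy as np

    actual = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)       # source azimuths (deg)
    perceived = np.array([2, 12, 36, 41, 66, 70, 97], dtype=float)    # listener responses (deg)

    rms_error = np.sqrt(np.mean((perceived - actual) ** 2))
    print(f"rms error = {rms_error:.1f} deg")
    ```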

  2. Development of C-reactive protein certified reference material NMIJ CRM 6201-b: optimization of a hydrolysis process to improve the accuracy of amino acid analysis.

    PubMed

    Kato, Megumi; Kinumi, Tomoya; Yoshioka, Mariko; Goto, Mari; Fujii, Shin-Ichiro; Takatsu, Akiko

    2015-04-01

    To standardize C-reactive protein (CRP) assays, the National Metrology Institute of Japan (NMIJ) has developed a C-reactive protein solution certified reference material, CRM 6201-b, which is intended for use as a primary reference material to enable the SI-traceable measurement of CRP. This study describes the development process of CRM 6201-b. As a candidate material of the CRM, recombinant human CRP solution was selected because of its higher purity and homogeneity than the purified material from human serum. Gel filtration chromatography was used to examine the homogeneity and stability of the present CRM. The total protein concentration of CRP in the present CRM was determined by amino acid analysis coupled to isotope-dilution mass spectrometry (IDMS-AAA). To improve the accuracy of IDMS-AAA, we optimized the hydrolysis process by examining the effect of parameters such as the volume of protein samples taken for hydrolysis, the procedure of sample preparation prior to the hydrolysis, hydrolysis temperature, and hydrolysis time. Under optimized conditions, we used two independent approaches, each combining a hydrolysis method with liquid chromatography-isotope dilution mass spectrometry (LC-IDMS): one combined vapor-phase acid hydrolysis (130 °C, 24 h) with a hydrophilic interaction liquid chromatography-mass spectrometry (HILIC-MS) method, and the other combined microwave-assisted liquid-phase acid hydrolysis (150 °C, 3 h) with a pre-column derivatization liquid chromatography-tandem mass spectrometry (LC-MS/MS) method. The quantitative values of the two different amino acid analyses were in agreement within their uncertainties. The certified value was the weighted mean of the results of the two methods. Uncertainties from the value-assignment method, between-method variance, homogeneity, long-term stability, and short-term stability were taken into account in evaluating the uncertainty for a certified value. The certified value and the
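    The certified value here is described as a weighted mean of two independent methods. Below is a minimal sketch of an inverse-variance weighted mean with its standard uncertainty (before adding the homogeneity and stability terms mentioned above); the numbers are placeholders, not the certified value.

    ```python
    # Minimal sketch: inverse-variance weighted mean of two independent method results.
    # Values and uncertainties are hypothetical placeholders.
    import numpy as np

    values = np.array([41.2, 40.6])   # mg/L from the two analytical routes (hypothetical)
    uncerts = np.array([0.8, 0.6])    # standard uncertainties (hypothetical)

    w = 1.0 / uncerts**2
    weighted_mean = np.sum(w * values) / np.sum(w)
    u_mean = np.sqrt(1.0 / np.sum(w))  # uncertainty of the weighted mean (before extra terms)
    print(f"weighted mean = {weighted_mean:.2f} ± {u_mean:.2f} mg/L")
    ```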

  3. VLT/SPHERE robust astrometry of the HR8799 planets at milliarcsecond-level accuracy. Orbital architecture analysis with PyAstrOFit

    NASA Astrophysics Data System (ADS)

    Wertz, O.; Absil, O.; Gómez González, C. A.; Milli, J.; Girard, J. H.; Mawet, D.; Pueyo, L.

    2017-02-01

    Context. HR8799 is orbited by at least four giant planets, making it a prime target for the recently commissioned Spectro-Polarimetric High-contrast Exoplanet REsearch (VLT/SPHERE). As such, it was observed on five consecutive nights during the SPHERE science verification in December 2014. Aims: We aim to take full advantage of the SPHERE capabilities to derive accurate astrometric measurements based on H-band images acquired with the Infra-Red Dual-band Imaging and Spectroscopy (IRDIS) subsystem, and to explore the ultimate astrometric performance of SPHERE in this observing mode. We also aim to present a detailed analysis of the orbital parameters for the four planets. Methods: We performed thorough post-processing of the IRDIS images with the Vortex Imaging Processing (VIP) package to derive a robust astrometric measurement for the four planets. This includes the identification and careful evaluation of the different contributions to the error budget, including systematic errors. Combining our astrometric measurements with the ones previously published in the literature, we constrain the orbital parameters of the four planets using PyAstrOFit, our new open-source python package dedicated to orbital fitting using Bayesian inference with Monte-Carlo Markov Chain sampling. Results: We report the astrometric positions for epoch 2014.93 with an accuracy down to 2.0 mas, mainly limited by the astrometric calibration of IRDIS. For each planet, we derive the posterior probability density functions for the six Keplerian elements and identify sets of highly probable orbits. For planet d, there is clear evidence for nonzero eccentricity (e ≃ 0.35), without completely excluding solutions with smaller eccentricities. The three other planets are consistent with circular orbits, although their probability distributions spread beyond e = 0.2, and show a peak at e ≃ 0.1 for planet e. The four planets have consistent inclinations of approximately 30° with respect to the sky
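    The astrometric accuracy quoted above comes from combining several error contributions, with the IRDIS astrometric calibration described as the dominant one. A minimal sketch of a quadrature error budget; the individual terms are illustrative assumptions, not the values from the paper.

    ```python
    # Minimal sketch: combining independent astrometric error terms in quadrature
    # to get a total positional uncertainty in milliarcseconds. Terms are assumed.
    import math

    error_terms_mas = {
        "star centering": 0.6,
        "planet position fitting": 1.0,
        "plate scale / true north calibration": 1.5,  # calibration term, assumed dominant
        "distortion residuals": 0.5,
    }
    total = math.sqrt(sum(v**2 for v in error_terms_mas.values()))
    print(f"total astrometric uncertainty ≈ {total:.1f} mas")
    ```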

  4. Genetic variants and increased risk of meningioma: an updated meta-analysis

    PubMed Central

    Han, Xiao-Yong; Wang, Wei; Wang, Lei-Lei; Wang, Xi-Rui; Li, Gang

    2017-01-01

    Purpose: Various genetic variants have been reported to be linked to an increased risk of meningioma. However, no firm conclusion has been reached. The purpose of the study was to investigate potential meningioma-associated gene polymorphisms, based on published evidence. Materials and methods: An updated meta-analysis was performed in September 2016. After electronic database searching and study screening, we selected eligible case-control studies and extracted data for meta-analysis, using Mantel–Haenszel statistics. P-values, pooled odds ratios (ORs), and 95% confidence intervals were calculated. Results: We finally selected eight genes with ten polymorphisms: MLLT10 rs12770228, CASP8 rs1045485, XRCC1 rs1799782, rs25487, MTHFR rs1801133, rs1801131, MTRR rs1801394, MTR rs1805087, GSTM1 null/present, and GSTT1 null/present. Results of meta-analyses showed that there was increased meningioma risk in case groups under all models of MLLT10 rs12770228 (all OR >1, P<0.001), compared with control groups. Similar results were observed under the allele, homozygote, dominant, and recessive models of MTRR rs1801394 (all OR >1, P<0.05), and the heterozygote and dominant models of MTHFR rs1801131 in the Caucasian population (all OR >1, P<0.05). However, no significantly increased meningioma risks were observed for CASP8 rs1045485, XRCC1 rs25487, rs1799782, MTHFR rs1801133, MTR rs1805087, or GSTM1/GSTT1 null mutations. Conclusion: Our updated meta-analysis provided statistical evidence for the role of MLLT10 rs12770228, MTRR rs1801394, and MTHFR rs1801131 in increased susceptibility to meningioma.
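    The meta-analysis above pools case-control studies with Mantel–Haenszel statistics. A minimal sketch of the Mantel–Haenszel pooled odds ratio; the 2x2 tables are hypothetical, not the genotype counts from the included studies.

    ```python
    # Minimal sketch of a Mantel-Haenszel pooled odds ratio across case-control studies.
    # Each row gives hypothetical counts: (cases exposed, cases unexposed,
    # controls exposed, controls unexposed).
    import numpy as np

    studies = np.array([[60, 140, 45, 155],
                        [35,  90, 25, 100],
                        [80, 160, 60, 180]], dtype=float)

    a, b, c, d = studies.T
    n = studies.sum(axis=1)
    or_mh = np.sum(a * d / n) / np.sum(b * c / n)   # Mantel-Haenszel estimator
    print(f"pooled OR (Mantel-Haenszel) = {or_mh:.2f}")
    ```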

  5. High Birth Weight Increases the Risk for Bone Tumor: A Systematic Review and Meta-Analysis.

    PubMed

    Chen, Songfeng; Yang, Lin; Pu, Feifei; Lin, Hui; Wang, Baichuan; Liu, Jianxiang; Shao, Zengwu

    2015-09-09

    There have been several epidemiologic studies on the relationship between high birth weight and the risk for bone tumor in the past decades. However, due to the rarity of bone tumors, the sample size of individual studies was generally too small for reliable conclusions. Therefore, we performed a meta-analysis pooling all published data from electronic databases to clarify the potential relationship. According to the inclusion and exclusion criteria, 18 independent studies with more than 2796 cases were included. As a result, high birth weight was found to increase the risk for bone tumor, with an odds ratio (OR) of 1.13 and a 95% confidence interval (95% CI) ranging from 1.01 to 1.27. The OR of bone tumor for each 500-gram increase in birth weight was 1.01 (95% CI 1.00-1.02; p = 0.048 for linear trend). Interestingly, individuals with high birth weight had a greater risk for osteosarcoma (OR = 1.22, 95% CI 1.06-1.40, p = 0.006) than those with normal birth weight. In addition, in the subgroup analysis by geographical region, elevated risk was detected among Europeans (OR = 1.14, 95% CI 1.00-1.29, p = 0.049). The present meta-analysis supported a positive association between high birth weight and bone tumor risk.

  6. Cooperative Genome-Wide Analysis Shows Increased Homozygosity in Early Onset Parkinson's Disease

    PubMed Central

    Nalls, Michael A.; Martinez, Maria; Schulte, Claudia; Holmans, Peter; Gasser, Thomas; Hardy, John; Singleton, Andrew B.; Wood, Nicholas W.; Brice, Alexis; Heutink, Peter; Williams, Nigel; Morris, Huw R.

    2012-01-01

    Parkinson's disease (PD) occurs in both familial and sporadic forms, and both monogenic and complex genetic factors have been identified. Early onset PD (EOPD) is particularly associated with autosomal recessive (AR) mutations, and three genes, PARK2, PARK7 and PINK1, have been found to carry mutations leading to AR disease. Since mutations in these genes account for less than 10% of EOPD patients, we hypothesized that further recessive genetic factors are involved in this disorder, which may appear in extended runs of homozygosity. We carried out genome-wide SNP genotyping to look for extended runs of homozygosity (ROHs) in 1,445 EOPD cases and 6,987 controls. Logistic regression analyses showed an increased level of genomic homozygosity in EOPD cases compared to controls. These differences are larger for ROHs of 9 Mb and above, where there is a more than three-fold increase in the proportion of cases carrying a ROH. These differences are not explained by occult recessive mutations at existing loci. Controlling for genome-wide homozygosity in logistic regression analyses increased the differences between cases and controls, indicating that in EOPD cases ROHs do not simply relate to genome-wide measures of inbreeding. Homozygosity at a locus on chromosome 19p13.3 was identified as being more common in EOPD cases than in controls. Sequencing analysis of genes and predicted transcripts within this locus failed to identify a novel mutation causing EOPD in our cohort. There is an increased rate of genome-wide homozygosity in EOPD, as measured by an increase in ROHs. These ROHs are a signature of inbreeding and do not necessarily harbour disease-causing genetic variants. Although there might be other regions of interest apart from chromosome 19p13.3, we lack the power to detect them with this analysis. PMID:22427796
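    The core measurement in this study is the detection of runs of homozygosity (ROHs) above a length threshold, with 9 Mb and above being the most informative. The sketch below scans synthetic per-SNP genotype calls for such runs; real ROH calling (e.g., in PLINK) additionally tolerates occasional heterozygous or missing calls within a run.

    ```python
    # Minimal sketch: detecting runs of homozygosity (ROHs) above a length threshold
    # from per-SNP genotype calls. Positions and genotypes are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    positions_mb = np.sort(rng.uniform(0, 60, size=2000))   # SNP positions in Mb
    is_homozygous = rng.random(2000) < 0.7                   # True where genotype is homozygous
    is_homozygous[800:1300] = True                           # plant one long homozygous stretch

    rohs, start = [], None
    for pos, hom in zip(positions_mb, is_homozygous):
        if hom and start is None:
            start = pos
        elif not hom and start is not None:
            if pos - start >= 9.0:                           # 9 Mb threshold, as in the study
                rohs.append((start, pos))
            start = None
    if start is not None and positions_mb[-1] - start >= 9.0:
        rohs.append((start, positions_mb[-1]))
    print(f"ROHs >= 9 Mb detected: {len(rohs)}")
    ```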

  7. Inbreeding depression increases with environmental stress: an experimental study and meta-analysis.

    PubMed

    Fox, Charles W; Reed, David H

    2011-01-01

    Inbreeding-environment interactions occur when inbreeding leads to differential fitness loss in different environments. Inbred individuals are often more sensitive to environmental stress than are outbred individuals, presumably because stress increases the expression of deleterious recessive alleles or because cellular safeguards against stress are pushed beyond the organism's physiological limits. We examined inbreeding-environment interactions along two environmental axes (temperature and rearing host) that differ in the amount of developmental stress they impose, in the seed-feeding beetle Callosobruchus maculatus. We found that inbreeding depression (inbreeding load, L) increased with the stressfulness of the environment, with the magnitude of stress explaining as much as 66% of the variation in inbreed