Science.gov

Sample records for analysis increases accuracy

  1. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.

  2. Reporting Data with "Over-the-Counter" Data Analysis Supports Increases Educators' Analysis Accuracy

    ERIC Educational Resources Information Center

    Rankin, Jenny Grant

    2013-01-01

    There is extensive research on the benefits of making data-informed decisions to improve learning, but these benefits rely on the data being effectively interpreted. Despite educators' above-average intellect and education levels, there is evidence many educators routinely misinterpret student data. Data analysis problems persist even at…

  3. Joint Analysis of Psychiatric Disorders Increases Accuracy of Risk Prediction for Schizophrenia, Bipolar Disorder, and Major Depressive Disorder

    PubMed Central

    Maier, Robert; Moser, Gerhard; Chen, Guo-Bo; Ripke, Stephan; Absher, Devin; Agartz, Ingrid; Akil, Huda; Amin, Farooq; Andreassen, Ole A.; Anjorin, Adebayo; Anney, Richard; Arking, Dan E.; Asherson, Philip; Azevedo, Maria H.; Backlund, Lena; Badner, Judith A.; Bailey, Anthony J.; Banaschewski, Tobias; Barchas, Jack D.; Barnes, Michael R.; Barrett, Thomas B.; Bass, Nicholas; Battaglia, Agatino; Bauer, Michael; Bayés, Mònica; Bellivier, Frank; Bergen, Sarah E.; Berrettini, Wade; Betancur, Catalina; Bettecken, Thomas; Biederman, Joseph; Binder, Elisabeth B.; Black, Donald W.; Blackwood, Douglas H.R.; Bloss, Cinnamon S.; Boehnke, Michael; Boomsma, Dorret I.; Breen, Gerome; Breuer, René; Bruggeman, Richard; Buccola, Nancy G.; Buitelaar, Jan K.; Bunney, William E.; Buxbaum, Joseph D.; Byerley, William F.; Caesar, Sian; Cahn, Wiepke; Cantor, Rita M.; Casas, Miguel; Chakravarti, Aravinda; Chambert, Kimberly; Choudhury, Khalid; Cichon, Sven; Cloninger, C. Robert; Collier, David A.; Cook, Edwin H.; Coon, Hilary; Cormand, Bru; Cormican, Paul; Corvin, Aiden; Coryell, William H.; Craddock, Nicholas; Craig, David W.; Craig, Ian W.; Crosbie, Jennifer; Cuccaro, Michael L.; Curtis, David; Czamara, Darina; Daly, Mark J.; Datta, Susmita; Dawson, Geraldine; Day, Richard; De Geus, Eco J.; Degenhardt, Franziska; Devlin, Bernie; Djurovic, Srdjan; Donohoe, Gary J.; Doyle, Alysa E.; Duan, Jubao; Dudbridge, Frank; Duketis, Eftichia; Ebstein, Richard P.; Edenberg, Howard J.; Elia, Josephine; Ennis, Sean; Etain, Bruno; Fanous, Ayman; Faraone, Stephen V.; Farmer, Anne E.; Ferrier, I. 
Nicol; Flickinger, Matthew; Fombonne, Eric; Foroud, Tatiana; Frank, Josef; Franke, Barbara; Fraser, Christine; Freedman, Robert; Freimer, Nelson B.; Freitag, Christine M.; Friedl, Marion; Frisén, Louise; Gallagher, Louise; Gejman, Pablo V.; Georgieva, Lyudmila; Gershon, Elliot S.; Geschwind, Daniel H.; Giegling, Ina; Gill, Michael; Gordon, Scott D.; Gordon-Smith, Katherine; Green, Elaine K.; Greenwood, Tiffany A.; Grice, Dorothy E.; Gross, Magdalena; Grozeva, Detelina; Guan, Weihua; Gurling, Hugh; De Haan, Lieuwe; Haines, Jonathan L.; Hakonarson, Hakon; Hallmayer, Joachim; Hamilton, Steven P.; Hamshere, Marian L.; Hansen, Thomas F.; Hartmann, Annette M.; Hautzinger, Martin; Heath, Andrew C.; Henders, Anjali K.; Herms, Stefan; Hickie, Ian B.; Hipolito, Maria; Hoefels, Susanne; Holmans, Peter A.; Holsboer, Florian; Hoogendijk, Witte J.; Hottenga, Jouke-Jan; Hultman, Christina M.; Hus, Vanessa; Ingason, Andrés; Ising, Marcus; Jamain, Stéphane; Jones, Ian; Jones, Lisa; Kähler, Anna K.; Kahn, René S.; Kandaswamy, Radhika; Keller, Matthew C.; Kelsoe, John R.; Kendler, Kenneth S.; Kennedy, James L.; Kenny, Elaine; Kent, Lindsey; Kim, Yunjung; Kirov, George K.; Klauck, Sabine M.; Klei, Lambertus; Knowles, James A.; Kohli, Martin A.; Koller, Daniel L.; Konte, Bettina; Korszun, Ania; Krabbendam, Lydia; Krasucki, Robert; Kuntsi, Jonna; Kwan, Phoenix; Landén, Mikael; Långström, Niklas; Lathrop, Mark; Lawrence, Jacob; Lawson, William B.; Leboyer, Marion; Ledbetter, David H.; Lee, Phil H.; Lencz, Todd; Lesch, Klaus-Peter; Levinson, Douglas F.; Lewis, Cathryn M.; Li, Jun; Lichtenstein, Paul; Lieberman, Jeffrey A.; Lin, Dan-Yu; Linszen, Don H.; Liu, Chunyu; Lohoff, Falk W.; Loo, Sandra K.; Lord, Catherine; Lowe, Jennifer K.; Lucae, Susanne; MacIntyre, Donald J.; Madden, Pamela A.F.; Maestrini, Elena; Magnusson, Patrik K.E.; Mahon, Pamela B.; Maier, Wolfgang; Malhotra, Anil K.; Mane, Shrikant M.; Martin, Christa L.; Martin, Nicholas G.; Mattheisen, Manuel; Matthews, Keith; 
Mattingsdal, Morten; McCarroll, Steven A.; McGhee, Kevin A.; McGough, James J.; McGrath, Patrick J.; McGuffin, Peter; McInnis, Melvin G.; McIntosh, Andrew; McKinney, Rebecca; McLean, Alan W.; McMahon, Francis J.; McMahon, William M.; McQuillin, Andrew; Medeiros, Helena; Medland, Sarah E.; Meier, Sandra; Melle, Ingrid; Meng, Fan; Meyer, Jobst; Middeldorp, Christel M.; Middleton, Lefkos; Milanova, Vihra; Miranda, Ana; Monaco, Anthony P.; Montgomery, Grant W.; Moran, Jennifer L.; Moreno-De-Luca, Daniel; Morken, Gunnar; Morris, Derek W.; Morrow, Eric M.; Moskvina, Valentina; Mowry, Bryan J.; Muglia, Pierandrea; Mühleisen, Thomas W.; Müller-Myhsok, Bertram; Murtha, Michael; Myers, Richard M.; Myin-Germeys, Inez; Neale, Benjamin M.; Nelson, Stan F.; Nievergelt, Caroline M.; Nikolov, Ivan; Nimgaonkar, Vishwajit; Nolen, Willem A.; Nöthen, Markus M.; Nurnberger, John I.; Nwulia, Evaristus A.; Nyholt, Dale R.; O’Donovan, Michael C.; O’Dushlaine, Colm; Oades, Robert D.; Olincy, Ann; Oliveira, Guiomar; Olsen, Line; Ophoff, Roel A.; Osby, Urban; Owen, Michael J.; Palotie, Aarno; Parr, Jeremy R.; Paterson, Andrew D.; Pato, Carlos N.; Pato, Michele T.; Penninx, Brenda W.; Pergadia, Michele L.; Pericak-Vance, Margaret A.; Perlis, Roy H.; Pickard, Benjamin S.; Pimm, Jonathan; Piven, Joseph; Posthuma, Danielle; Potash, James B.; Poustka, Fritz; Propping, Peter; Purcell, Shaun M.; Puri, Vinay; Quested, Digby J.; Quinn, Emma M.; Ramos-Quiroga, Josep Antoni; Rasmussen, Henrik B.; Raychaudhuri, Soumya; Rehnström, Karola; Reif, Andreas; Ribasés, Marta; Rice, John P.; Rietschel, Marcella; Ripke, Stephan; Roeder, Kathryn; Roeyers, Herbert; Rossin, Lizzy; Rothenberger, Aribert; Rouleau, Guy; Ruderfer, Douglas; Rujescu, Dan; Sanders, Alan R.; Sanders, Stephan J.; Santangelo, Susan L.; Schachar, Russell; Schalling, Martin; Schatzberg, Alan F.; Scheftner, William A.; Schellenberg, Gerard D.; Scherer, Stephen W.; Schork, Nicholas J.; Schulze, Thomas G.; Schumacher, Johannes; Schwarz, Markus; 
Scolnick, Edward; Scott, Laura J.; Sergeant, Joseph A.; Shi, Jianxin; Shilling, Paul D.; Shyn, Stanley I.; Silverman, Jeremy M.; Sklar, Pamela; Slager, Susan L.; Smalley, Susan L.; Smit, Johannes H.; Smith, Erin N.; Smoller, Jordan W.; Sonuga-Barke, Edmund J.S.; St Clair, David; State, Matthew; Steffens, Michael; Steinhausen, Hans-Christoph; Strauss, John S.; Strohmaier, Jana; Stroup, T. Scott; Sullivan, Patrick F.; Sutcliffe, James; Szatmari, Peter; Szelinger, Szabocls; Thapar, Anita; Thirumalai, Srinivasa; Thompson, Robert C.; Todorov, Alexandre A.; Tozzi, Federica; Treutlein, Jens; Tzeng, Jung-Ying; Uhr, Manfred; van den Oord, Edwin J.C.G.; Van Grootheest, Gerard; Van Os, Jim; Vicente, Astrid M.; Vieland, Veronica J.; Vincent, John B.; Visscher, Peter M.; Walsh, Christopher A.; Wassink, Thomas H.; Watson, Stanley J.; Weiss, Lauren A.; Weissman, Myrna M.; Werge, Thomas; Wienker, Thomas F.; Wiersma, Durk; Wijsman, Ellen M.; Willemsen, Gonneke; Williams, Nigel; Willsey, A. Jeremy; Witt, Stephanie H.; Wray, Naomi R.; Xu, Wei; Young, Allan H.; Yu, Timothy W.; Zammit, Stanley; Zandi, Peter P.; Zhang, Peng; Zitman, Frans G.; Zöllner, Sebastian; Coryell, William; Potash, James B.; Scheftner, William A.; Shi, Jianxin; Weissman, Myrna M.; Hultman, Christina M.; Landén, Mikael; Levinson, Douglas F.; Kendler, Kenneth S.; Smoller, Jordan W.; Wray, Naomi R.; Lee, S. Hong

    2015-01-01

    Genetic risk prediction has several potential applications in medical research and clinical practice and could be used, for example, to stratify a heterogeneous population of patients by their predicted genetic risk. However, for polygenic traits, such as psychiatric disorders, the accuracy of risk prediction is low. Here we use a multivariate linear mixed model and apply multi-trait genomic best linear unbiased prediction for genetic risk prediction. This method exploits correlations between disorders and simultaneously evaluates individual risk for each disorder. We show that the multivariate approach significantly increases the prediction accuracy for schizophrenia, bipolar disorder, and major depressive disorder in the discovery as well as in independent validation datasets. By grouping SNPs based on genome annotation and fitting multiple random effects, we show that the prediction accuracy could be further improved. The gain in prediction accuracy of the multivariate approach is equivalent to an increase in sample size of 34% for schizophrenia, 68% for bipolar disorder, and 76% for major depressive disorders using single trait models. Because our approach can be readily applied to any number of GWAS datasets of correlated traits, it is a flexible and powerful tool to maximize prediction accuracy. With current sample size, risk predictors are not useful in a clinical setting but already are a valuable research tool, for example in experimental designs comparing cases with high and low polygenic risk. PMID:25640677
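    The reported gains can be read through the standard expected-accuracy approximation for genomic predictors, R² ≈ h²/(1 + Me/(N·h²)) (a Daetwyler-style formula; h² = SNP heritability, Me = effective number of independent segments). The sketch below inverts that formula to express an accuracy gain as an "equivalent sample-size increase"; the formula choice and all numbers are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: convert a prediction-accuracy gain into an equivalent
# sample-size increase under the approximation R^2 ~ h2 / (1 + Me/(N*h2)).
# h2, Me, N, and the 10% accuracy lift are invented for illustration.

def expected_r2(n, h2, me):
    """Expected prediction R^2 of a single-trait predictor."""
    return h2 / (1.0 + me / (n * h2))

def equivalent_n(r2, h2, me):
    """Sample size a single-trait model would need to reach a given R^2."""
    return me / (h2 * (h2 / r2 - 1.0))

n, h2, me = 20_000, 0.23, 50_000          # assumed discovery-sample settings
r2_single = expected_r2(n, h2, me)

r2_multi = 1.10 * r2_single               # suppose the joint model lifts R^2 by 10%
gain = equivalent_n(r2_multi, h2, me) / n - 1.0
print(f"equivalent sample-size increase: {gain:.0%}")
```

The same inversion, applied to the observed multivariate accuracies, is what statements like "equivalent to a 34% larger sample" summarize.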

  4. Joint analysis of psychiatric disorders increases accuracy of risk prediction for schizophrenia, bipolar disorder, and major depressive disorder.

    PubMed

    Maier, Robert; Moser, Gerhard; Chen, Guo-Bo; Ripke, Stephan; Coryell, William; Potash, James B; Scheftner, William A; Shi, Jianxin; Weissman, Myrna M; Hultman, Christina M; Landén, Mikael; Levinson, Douglas F; Kendler, Kenneth S; Smoller, Jordan W; Wray, Naomi R; Lee, S Hong

    2015-02-01

    Genetic risk prediction has several potential applications in medical research and clinical practice and could be used, for example, to stratify a heterogeneous population of patients by their predicted genetic risk. However, for polygenic traits, such as psychiatric disorders, the accuracy of risk prediction is low. Here we use a multivariate linear mixed model and apply multi-trait genomic best linear unbiased prediction for genetic risk prediction. This method exploits correlations between disorders and simultaneously evaluates individual risk for each disorder. We show that the multivariate approach significantly increases the prediction accuracy for schizophrenia, bipolar disorder, and major depressive disorder in the discovery as well as in independent validation datasets. By grouping SNPs based on genome annotation and fitting multiple random effects, we show that the prediction accuracy could be further improved. The gain in prediction accuracy of the multivariate approach is equivalent to an increase in sample size of 34% for schizophrenia, 68% for bipolar disorder, and 76% for major depressive disorders using single trait models. Because our approach can be readily applied to any number of GWAS datasets of correlated traits, it is a flexible and powerful tool to maximize prediction accuracy. With current sample size, risk predictors are not useful in a clinical setting but already are a valuable research tool, for example in experimental designs comparing cases with high and low polygenic risk. PMID:25640677

  5. Analysis of spatial variability of near-surface soil moisture to increase rainfall-runoff modelling accuracy in SW Hungary

    NASA Astrophysics Data System (ADS)

    Hegedüs, P.; Czigány, S.; Pirkhoffer, E.; Balatonyi, L.; Hickey, R.

    2015-04-01

    Between September 5, 2008 and September 5, 2009, near-surface soil moisture time series were collected in the northern part of a 1.7 km2 watershed in SW Hungary at 14 monitoring locations using a portable TDR-300 soil moisture sensor. The objectives of this study are to increase the accuracy of soil moisture measurement at the watershed scale, to improve flood forecasting accuracy, and to optimize soil moisture sensor density. According to our results, in 10 of 13 cases a strong correlation exists between the measured soil moisture data of Station 5 and all other monitoring stations; Station 5 is therefore considered representative for the entire watershed. Logically, the selection of the location of the representative measurement point(s) is essential for obtaining representative and accurate soil moisture values for a given watershed. This could be done by (i) employing a higher number of monitoring stations during the exploratory phase of the monitoring, (ii) mapping soil physical properties at the watershed scale, and (iii) running cross-relational statistical analyses on the obtained data. Our findings indicate that increasing the number of soil moisture data points available for interpolation increases the accuracy of watershed-scale soil moisture estimation. The data set used for interpolation (and for estimating mean antecedent soil moisture values) could be improved (i.e., given a higher number of data points) by selecting points with properties similar to the measurement points from the DEM and soil databases. By using a higher number of data points for interpolation, both the interpolation accuracy and the spatial resolution of the measured soil moisture values increased for the Pósa Valley.
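    The cross-relational step above amounts to cross-correlating station time series and picking the station that best tracks all the others. A minimal sketch with synthetic data (station names, noise levels, and series length are invented):

```python
# Pick a "representative" monitoring station as the one with the highest
# mean Pearson correlation against all other stations. Data are synthetic.
import random
random.seed(0)

n_obs = 100
base = [random.gauss(30, 5) for _ in range(n_obs)]   # shared watershed signal

# 5 hypothetical stations = shared signal + station-specific noise (sd = 1..5).
stations = {f"S{i}": [b + random.gauss(0, i) for b in base] for i in range(1, 6)}

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def mean_corr(name):
    others = [pearson(stations[name], v) for k, v in stations.items() if k != name]
    return sum(others) / len(others)

best = max(stations, key=mean_corr)
print(best, round(mean_corr(best), 3))
```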

  6. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme, because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: it involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions, thereby achieving arbitrarily high orders of time accuracy in special cases. 
The boundary conditions can now include a potentially infinite number

  7. Final Technical Report: Increasing Prediction Accuracy.

    SciTech Connect

    King, Bruce Hardison; Hansen, Clifford; Stein, Joshua

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  8. Do saccharide doped PAGAT dosimeters increase accuracy?

    NASA Astrophysics Data System (ADS)

    Berndt, B.; Skyt, P. S.; Holloway, L.; Hill, R.; Sankar, A.; De Deene, Y.

    2015-01-01

    To improve the dosimetric accuracy of normoxic polyacrylamide gelatin (PAGAT) gel dosimeters, the addition of saccharides (glucose and sucrose) has been suggested. An increase in R2-response sensitivity upon irradiation will result in smaller uncertainties in the derived dose if all other uncertainties are conserved. However, temperature variations during the magnetic resonance scanning of polymer gels are among the largest contributors to dosimetric uncertainty. The purpose of this project was to weigh the dose sensitivity against the temperature sensitivity. The overall dose uncertainty of PAGAT gel dosimeters with different concentrations of saccharides (0, 10 and 20%) was investigated. For high concentrations of glucose or sucrose, a clear improvement of the dose sensitivity was observed. For doses up to 6 Gy, the overall dose uncertainty was reduced by up to 0.3 Gy for all saccharide-loaded gels compared to the PAGAT gel. Higher concentrations of glucose and sucrose deteriorate the accuracy of PAGAT dosimeters for doses above 9 Gy.
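    The trade-off described above can be sketched as a simple error propagation: R2 readout noise and temperature-induced R2 drift add in quadrature, and the result is divided by the dose sensitivity dR2/dD. All numbers below are illustrative assumptions, not the paper's measured values.

```python
# Back-of-envelope dose-uncertainty propagation for a gel dosimeter:
# sigma_D = sqrt(sigma_R2^2 + (dR2/dT * sigma_T)^2) / (dR2/dD).
# Sensitivities and noise levels are invented for illustration.

def dose_uncertainty(sigma_r2, dr2_dt, sigma_t, sensitivity):
    """Overall dose uncertainty (Gy) from R2 noise and temperature drift."""
    sigma_r2_total = (sigma_r2 ** 2 + (dr2_dt * sigma_t) ** 2) ** 0.5
    return sigma_r2_total / sensitivity

# Plain PAGAT: lower dose sensitivity (R2 per Gy, assumed).
plain = dose_uncertainty(sigma_r2=0.05, dr2_dt=0.1, sigma_t=0.5, sensitivity=0.25)
# Saccharide-doped gel: higher sensitivity, same noise sources (assumed).
doped = dose_uncertainty(sigma_r2=0.05, dr2_dt=0.1, sigma_t=0.5, sensitivity=0.40)

print(f"plain: {plain:.2f} Gy, doped: {doped:.2f} Gy")
```

With the noise terms held fixed, the higher sensitivity of the doped gel directly shrinks the derived dose uncertainty, which is the effect the study quantifies.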

  9. Classification Accuracy Increase Using Multisensor Data Fusion

    NASA Astrophysics Data System (ADS)

    Makarau, A.; Palubinskas, G.; Reinartz, P.

    2011-09-01

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion of materials such as different roofs, pavements, roads, etc., and therefore to misinterpretation and misuse of classification products. Employing hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their usage for many applications. A further improvement can be achieved by fusion of multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for fusing very high resolution SAR and multispectral data for automatic classification in urban areas. Single-polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant combination of multisource data following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the dimensionality-reduction step. Fusion of single-polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. 
The comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided and the numerical evaluation of the method in comparison to
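    The abstract describes the aggregation step only at a high level ("Bayesian or neural network ... following consensus theory"). One common consensus-theory combiner is a logarithmic opinion pool, i.e. a weighted geometric mean of per-sensor class posteriors; the toy sketch below uses that, with all class names and probabilities invented:

```python
# Toy consensus aggregation of per-sensor classifier outputs via a
# logarithmic opinion pool (weighted geometric mean, renormalized).
# Classes, posteriors, and weights are illustrative assumptions.
import math

classes = ["roof", "road", "vegetation"]

# Per-pixel class posteriors from two hypothetical sensors.
p_optical = {"roof": 0.5, "road": 0.3, "vegetation": 0.2}
p_sar     = {"roof": 0.6, "road": 0.35, "vegetation": 0.05}

def log_opinion_pool(dists, weights):
    """Combine probability distributions as a weighted geometric mean."""
    scores = {c: math.exp(sum(w * math.log(d[c]) for d, w in zip(dists, weights)))
              for c in classes}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

fused = log_opinion_pool([p_optical, p_sar], weights=[0.5, 0.5])
label = max(fused, key=fused.get)
print(label, {c: round(p, 3) for c, p in fused.items()})
```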

  10. Combining Multiple Gyroscope Outputs for Increased Accuracy

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    2003-01-01

    A proposed method of processing the outputs of multiple gyroscopes to increase the accuracy of rate (that is, angular-velocity) readings has been developed theoretically and demonstrated by computer simulation. Although the method is applicable, in principle, to any gyroscopes, it is intended especially for application to gyroscopes that are parts of microelectromechanical systems (MEMS). The method is based on the concept that the collective performance of multiple, relatively inexpensive, nominally identical devices can be better than that of one of the devices considered by itself. The method would make it possible to synthesize the readings of a single, more accurate gyroscope (a virtual gyroscope) from the outputs of a large number of microscopic gyroscopes fabricated together on a single MEMS chip. The big advantage would be that the combination of the MEMS gyroscope array and the processing circuitry needed to implement the method would be smaller, lighter in weight, and less power-hungry, relative to a conventional gyroscope of equal accuracy. The method is one of combining and filtering the digitized outputs of multiple gyroscopes to obtain minimum-variance estimates of rate. In the combining-and-filtering operations, measurement data from the gyroscopes would be weighted and smoothed with respect to each other according to the gain matrix of a minimum-variance filter. According to Kalman-filter theory, the gain matrix of the minimum-variance filter is uniquely specified by the filter covariance, which propagates according to a matrix Riccati equation. The present method incorporates an exact analytical solution of this equation.
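    In the static special case (independent white noise, no dynamics), the minimum-variance combination reduces to an inverse-variance weighted average whose variance is 1/Σ(1/σᵢ²). The sketch below shows only that variance-reduction effect; the full method would propagate the Kalman gain via the Riccati equation, and the noise levels here are invented:

```python
# Static sketch of the "virtual gyroscope" idea: combine N noisy rate
# readings with inverse-variance weights. Noise variances are illustrative.
import random
random.seed(1)

true_rate = 2.0                        # deg/s (assumed)
variances = [0.4, 0.4, 0.9, 0.9]       # per-gyro noise variances (assumed)
readings = [true_rate + random.gauss(0, v ** 0.5) for v in variances]

weights = [1.0 / v for v in variances]
estimate = sum(w * r for w, r in zip(weights, readings)) / sum(weights)
combined_var = 1.0 / sum(weights)      # variance of the combined estimate

print(round(estimate, 3), round(combined_var, 3))
```

The combined variance is smaller than that of even the best individual gyro, which is why an array of cheap devices can emulate one expensive one.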

  11. Diagnostic accuracy of deep vein thrombosis is increased by analysis using combined optimal cut-off values of postoperative plasma D-dimer levels

    PubMed Central

    JIANG, YONG; LI, JIE; LIU, YANG; ZHANG, WEIGUO

    2016-01-01

    The present study aimed to evaluate the accuracy of analysis using optimal cut-off values of plasma D-dimer levels in the diagnosis of deep vein thrombosis (DVT). A total of 175 orthopedic patients with DVT and 162 patients without DVT were included in the study. Ultrasonic color Doppler imaging was performed on lower limb veins prior to and following orthopedic surgery in order to determine the types of orthopedic conditions that were present. An enzyme-linked fluorescent assay was performed to detect the expression levels of D-dimer in plasma, and receiver operating characteristic analysis was performed to predict the occurrence of DVT on the basis of the expression levels of D-dimer. After surgery, the expression levels of D-dimer in the plasma of DVT patients were significantly higher in comparison with those in orthopedic patients without DVT (P<0.05). When the patients were divided into subgroups according to the underlying orthopedic condition, the expression levels of D-dimer in the plasma of each subgroup were higher 1 day after orthopedic surgery in comparison to those prior to surgery (P<0.05). The diagnostic accuracy achieved using combined optimal cut-off values at 1 and 3 days post-surgery was significantly higher than the accuracy when using a single optimal cut-off value (P<0.05). In conclusion, detection of D-dimer expression levels at 1 day post-orthopedic surgery may be important in predicting DVT. In addition, the diagnostic accuracy of DVT is significantly increased by analysis using combined optimal cut-off values of D-dimer plasma expression levels. PMID:27168793
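    The combined cut-off idea can be sketched with a Youden-index threshold per time point and an OR rule across the two days ("positive on day 1 or day 3"). Patient values below are synthetic; the study's actual cut-offs are not reproduced.

```python
# Choose optimal D-dimer cut-offs by Youden's index (sens + spec - 1) and
# combine two post-operative days with an OR rule. Data are invented.

# (d_dimer_day1, d_dimer_day3, has_dvt) for hypothetical patients
patients = [
    (2.1, 1.8, True), (3.0, 2.6, True), (1.2, 2.4, True), (2.6, 1.1, True),
    (0.8, 0.7, False), (1.1, 0.9, False), (0.9, 1.3, False), (1.4, 0.8, False),
]

def youden_cutoff(values_labels):
    """Cut-off (among observed values) maximizing sensitivity + specificity - 1."""
    best_c, best_j = None, -1.0
    for c, _ in values_labels:
        tp = sum(1 for v, y in values_labels if y and v >= c)
        fn = sum(1 for v, y in values_labels if y and v < c)
        tn = sum(1 for v, y in values_labels if not y and v < c)
        fp = sum(1 for v, y in values_labels if not y and v >= c)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c

c1 = youden_cutoff([(d1, y) for d1, _, y in patients])
c3 = youden_cutoff([(d3, y) for _, d3, y in patients])

# Combined rule: predict DVT if either day exceeds its own cut-off.
correct = sum(1 for d1, d3, y in patients if ((d1 >= c1) or (d3 >= c3)) == y)
print(f"cut-offs: day1={c1}, day3={c3}, combined accuracy={correct}/{len(patients)}")
```

In this toy data set the OR rule catches a case that the day-1 cut-off alone misses, mirroring the reported gain from combining time points.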

  12. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  13. Hydraulic servo system increases accuracy in fatigue testing

    NASA Technical Reports Server (NTRS)

    Dixon, G. V.; Kibler, K. S.

    1967-01-01

    Hydraulic servo system increases accuracy in applying fatigue loading to a specimen under test. An error sensing electronic control loop, coupled to the hydraulic proportional closed loop cyclic force generator, provides an accurately controlled peak force to the specimen.

  14. Portable, high intensity isotopic neutron source provides increased experimental accuracy

    NASA Technical Reports Server (NTRS)

    Mohr, W. C.; Stewart, D. C.; Wahlgren, M. A.

    1968-01-01

    A small, portable, high-intensity isotopic neutron source combines twelve curium-americium-beryllium sources. The resulting high neutron flux, which decreases slowly at a known rate, provides increased experimental accuracy.
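    Because the flux decreases at a known rate, measurements taken at different times can be decay-corrected. A sketch assuming simple exponential decay; the initial flux is invented, and the 18.1-year half-life of Cm-244 is used only for illustration (the combined source's effective decay curve would differ).

```python
# Decay-correct an isotopic neutron source's flux, assuming exponential
# decay. flux0 is an assumed calibration value.
import math

def flux_at(t_years, flux0, half_life_years):
    """Neutron flux after t years under exponential decay."""
    return flux0 * math.exp(-math.log(2) * t_years / half_life_years)

f0 = 1.0e7     # neutrons/s at calibration (assumed)
hl = 18.1      # years; Cm-244 half-life, for illustration
print(f"flux after 5 years: {flux_at(5, f0, hl):.3e}")
```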

  15. Accuracy analysis of distributed simulation systems

    NASA Astrophysics Data System (ADS)

    Lin, Qi; Guo, Jing

    2010-08-01

    Existing simulation studies tend to emphasize procedural verification, focusing on the simulation models rather than on the simulation itself. As a result, research on improving simulation accuracy has been limited to individual aspects. Because accuracy is key to simulation credibility assessment and fidelity studies, it is important to give an all-round discussion of the accuracy of distributed simulation systems themselves. First, the major elements of distributed simulation systems are summarized; these serve as the basis for defining, classifying, and describing the accuracy of such systems. In Part 2, a comprehensive framework for the accuracy of distributed simulation systems is presented, which makes it easier to analyze and assess their uncertainty. In Part 3, the concept of accuracy of distributed simulation systems is decomposed into four further factors, each analyzed in turn. In Part 4, based on the formalized description of the accuracy-analysis framework, a practical approach is put forward that can be applied to study unexpected or inaccurate simulation results. Following this, a real HLA-based distributed simulation system is taken as an example to verify the usefulness of the proposed approach. The results show that the method works well and is applicable to accuracy analysis of distributed simulation systems.

  16. Simultaneous analysis of multiple enzymes increases accuracy of pulsed-field gel electrophoresis in assigning genetic relationships among homogeneous Salmonella strains.

    PubMed

    Zheng, Jie; Keys, Christine E; Zhao, Shaohua; Ahmed, Rafiq; Meng, Jianghong; Brown, Eric W

    2011-01-01

    Due to a highly homogeneous genetic composition, the subtyping of Salmonella enterica serovar Enteritidis strains to an epidemiologically relevant level remains intangible for pulsed-field gel electrophoresis (PFGE). We reported previously on a highly discriminatory PFGE-based subtyping scheme for S. enterica serovar Enteritidis that relies on a single combined cluster analysis of multiple restriction enzymes. However, the ability of a subtyping method to correctly infer genetic relatedness among outbreak strains is also essential for effective molecular epidemiological traceback. In this study, genetic and phylogenetic analyses were performed to assess whether concatenated enzyme methods can cluster closely related salmonellae into epidemiologically relevant hierarchies. PFGE profiles were generated by use of six restriction enzymes (XbaI, BlnI, SpeI, SfiI, PacI, and NotI) for 74 strains each of S. enterica serovar Enteritidis and S. enterica serovar Typhimurium. Correlation analysis of Dice similarity coefficients for all pairwise strain comparisons underscored the importance of combining multiple enzymes for the accurate assignment of genetic relatedness among Salmonella strains. The mean correlation increased from 81% and 41% for single-enzyme PFGE up to 99% and 96% for five-enzyme combined PFGE for S. enterica serovar Enteritidis and S. enterica serovar Typhimurium strains, respectively. Data regressions approached 100% correlation among Dice similarities for S. enterica serovar Enteritidis and S. enterica serovar Typhimurium strains when a minimum of six enzymes were concatenated. Phylogenetic congruence measures singled out XbaI, BlnI, SfiI, and PacI as most concordant for S. enterica serovar Enteritidis, while XbaI, BlnI, and SpeI were most concordant among S. enterica serovar Typhimurium strains. Together, these data indicate that PFGE coupled with sufficient enzyme numbers and combinations is capable of discerning accurate genetic relationships among
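    The combined-enzyme idea can be illustrated with Dice similarity computed on band patterns pooled across restriction enzymes (bands tagged by enzyme so equal fragment sizes from different digests are not conflated). Band sizes below are invented, not real PFGE profiles.

```python
# Dice similarity of PFGE band patterns, single-enzyme vs. a profile
# concatenated across enzymes. Fragment sizes (kb) are illustrative.

def dice(a, b):
    """Dice coefficient between two sets of bands."""
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical band patterns per enzyme per strain.
strain1 = {"XbaI": {680, 450, 310, 90}, "BlnI": {720, 500, 150}}
strain2 = {"XbaI": {680, 450, 310, 60}, "BlnI": {720, 480, 150}}

single = dice(strain1["XbaI"], strain2["XbaI"])

# Tag each band with its enzyme before pooling the profiles.
pool = lambda s: {(enz, band) for enz, bands in s.items() for band in bands}
combined = dice(pool(strain1), pool(strain2))

print(round(single, 3), round(combined, 3))
```

Pooling more enzymes gives the similarity coefficient more independent characters to work with, which is why the correlation between Dice values and true relatedness rises as enzymes are concatenated.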

  17. Accuracy analysis of automatic distortion correction

    NASA Astrophysics Data System (ADS)

    Kolecki, Jakub; Rzonca, Antoni

    2015-06-01

    The paper addresses the problem of automatic distortion removal from images acquired with a non-metric SLR camera equipped with prime lenses. From the photogrammetric point of view, the following question arises: is the accuracy of the distortion control data provided by the manufacturer for a certain lens model (not an individual item) sufficient to achieve the demanded accuracy? To obtain a reliable answer, two kinds of tests were carried out for three lens models. First, multi-variant camera calibration was conducted using software providing a full accuracy analysis. Second, an accuracy analysis using check points took place. The check points were measured in images resampled based on the estimated distortion model, or in distortion-free images acquired directly in the automatic distortion-removal mode. Extensive conclusions regarding the practical application of each calibration approach are given. Finally, rules for applying automatic distortion removal in photogrammetric measurements are suggested.
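    The kind of correction being evaluated is typically a Brown-model radial distortion removal. The sketch below keeps only the k1, k2 radial terms and uses a one-step approximate inverse (dividing by the distortion factor; the exact inverse is iterative). The coefficients are illustrative, not from any real lens profile.

```python
# Approximate removal of Brown-model radial distortion (k1, k2 only).
# Coefficients and the point location are illustrative assumptions.

def undistort(x, y, k1, k2, cx=0.0, cy=0.0):
    """One-step approximate inverse of the radial distortion model."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return cx + dx / factor, cy + dy / factor

# Point near the image corner (normalized coordinates) under mild
# barrel distortion (k1 < 0): undistortion pushes it outward.
xu, yu = undistort(0.8, 0.6, k1=-0.05, k2=0.002)
print(round(xu, 4), round(yu, 4))
```

Comparing such model-corrected coordinates against measured check points is exactly the second test the paper describes.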

  18. Modeling Linkage Disequilibrium Increases Accuracy of Polygenic Risk Scores

    PubMed Central

    Vilhjálmsson, Bjarni J.; Yang, Jian; Finucane, Hilary K.; Gusev, Alexander; Lindström, Sara; Ripke, Stephan; Genovese, Giulio; Loh, Po-Ru; Bhatia, Gaurav; Do, Ron; Hayeck, Tristan; Won, Hong-Hee; Ripke, Stephan; Neale, Benjamin M.; Corvin, Aiden; Walters, James T.R.; Farh, Kai-How; Holmans, Peter A.; Lee, Phil; Bulik-Sullivan, Brendan; Collier, David A.; Huang, Hailiang; Pers, Tune H.; Agartz, Ingrid; Agerbo, Esben; Albus, Margot; Alexander, Madeline; Amin, Farooq; Bacanu, Silviu A.; Begemann, Martin; Belliveau, Richard A.; Bene, Judit; Bergen, Sarah E.; Bevilacqua, Elizabeth; Bigdeli, Tim B.; Black, Donald W.; Bruggeman, Richard; Buccola, Nancy G.; Buckner, Randy L.; Byerley, William; Cahn, Wiepke; Cai, Guiqing; Campion, Dominique; Cantor, Rita M.; Carr, Vaughan J.; Carrera, Noa; Catts, Stanley V.; Chambert, Kimberly D.; Chan, Raymond C.K.; Chen, Ronald Y.L.; Chen, Eric Y.H.; Cheng, Wei; Cheung, Eric F.C.; Chong, Siow Ann; Cloninger, C. Robert; Cohen, David; Cohen, Nadine; Cormican, Paul; Craddock, Nick; Crowley, James J.; Curtis, David; Davidson, Michael; Davis, Kenneth L.; Degenhardt, Franziska; Del Favero, Jurgen; DeLisi, Lynn E.; Demontis, Ditte; Dikeos, Dimitris; Dinan, Timothy; Djurovic, Srdjan; Donohoe, Gary; Drapeau, Elodie; Duan, Jubao; Dudbridge, Frank; Durmishi, Naser; Eichhammer, Peter; Eriksson, Johan; Escott-Price, Valentina; Essioux, Laurent; Fanous, Ayman H.; Farrell, Martilias S.; Frank, Josef; Franke, Lude; Freedman, Robert; Freimer, Nelson B.; Friedl, Marion; Friedman, Joseph I.; Fromer, Menachem; Genovese, Giulio; Georgieva, Lyudmila; Gershon, Elliot S.; Giegling, Ina; Giusti-Rodríguez, Paola; Godard, Stephanie; Goldstein, Jacqueline I.; Golimbet, Vera; Gopal, Srihari; Gratten, Jacob; Grove, Jakob; de Haan, Lieuwe; Hammer, Christian; Hamshere, Marian L.; Hansen, Mark; Hansen, Thomas; Haroutunian, Vahram; Hartmann, Annette M.; Henskens, Frans A.; Herms, Stefan; Hirschhorn, Joel N.; Hoffmann, Per; Hofman, Andrea; Hollegaard, Mads V.; Hougaard, 
    David M.; Ikeda, Masashi; Joa, Inge; Julia, Antonio; Kahn, Rene S.; Kalaydjieva, Luba; Karachanak-Yankova, Sena; Karjalainen, Juha; Kavanagh, David; Keller, Matthew C.; Kelly, Brian J.; Kennedy, James L.; Khrunin, Andrey; Kim, Yunjung; Klovins, Janis; Knowles, James A.; Konte, Bettina; Kucinskas, Vaidutis; Kucinskiene, Zita Ausrele; Kuzelova-Ptackova, Hana; Kahler, Anna K.; Laurent, Claudine; Keong, Jimmy Lee Chee; Lee, S. Hong; Legge, Sophie E.; Lerer, Bernard; Li, Miaoxin; Li, Tao; Liang, Kung-Yee; Lieberman, Jeffrey; Limborska, Svetlana; Loughland, Carmel M.; Lubinski, Jan; Lönnqvist, Jouko; Macek, Milan; Magnusson, Patrik K.E.; Maher, Brion S.; Maier, Wolfgang; Mallet, Jacques; Marsal, Sara; Mattheisen, Manuel; Mattingsdal, Morten; McCarley, Robert W.; McDonald, Colm; McIntosh, Andrew M.; Meier, Sandra; Meijer, Carin J.; Melegh, Bela; Melle, Ingrid; Mesholam-Gately, Raquelle I.; Metspalu, Andres; Michie, Patricia T.; Milani, Lili; Milanova, Vihra; Mokrab, Younes; Morris, Derek W.; Mors, Ole; Mortensen, Preben B.; Murphy, Kieran C.; Murray, Robin M.; Myin-Germeys, Inez; Müller-Myhsok, Bertram; Nelis, Mari; Nenadic, Igor; Nertney, Deborah A.; Nestadt, Gerald; Nicodemus, Kristin K.; Nikitina-Zake, Liene; Nisenbaum, Laura; Nordin, Annelie; O’Callaghan, Eadbhard; O’Dushlaine, Colm; O’Neill, F. 
    Anthony; Oh, Sang-Yun; Olincy, Ann; Olsen, Line; Van Os, Jim; Pantelis, Christos; Papadimitriou, George N.; Papiol, Sergi; Parkhomenko, Elena; Pato, Michele T.; Paunio, Tiina; Pejovic-Milovancevic, Milica; Perkins, Diana O.; Pietiläinen, Olli; Pimm, Jonathan; Pocklington, Andrew J.; Powell, John; Price, Alkes; Pulver, Ann E.; Purcell, Shaun M.; Quested, Digby; Rasmussen, Henrik B.; Reichenberg, Abraham; Reimers, Mark A.; Richards, Alexander L.; Roffman, Joshua L.; Roussos, Panos; Ruderfer, Douglas M.; Salomaa, Veikko; Sanders, Alan R.; Schall, Ulrich; Schubert, Christian R.; Schulze, Thomas G.; Schwab, Sibylle G.; Scolnick, Edward M.; Scott, Rodney J.; Seidman, Larry J.; Shi, Jianxin; Sigurdsson, Engilbert; Silagadze, Teimuraz; Silverman, Jeremy M.; Sim, Kang; Slominsky, Petr; Smoller, Jordan W.; So, Hon-Cheong; Spencer, Chris C.A.; Stahl, Eli A.; Stefansson, Hreinn; Steinberg, Stacy; Stogmann, Elisabeth; Straub, Richard E.; Strengman, Eric; Strohmaier, Jana; Stroup, T. Scott; Subramaniam, Mythily; Suvisaari, Jaana; Svrakic, Dragan M.; Szatkiewicz, Jin P.; Söderman, Erik; Thirumalai, Srinivas; Toncheva, Draga; Tooney, Paul A.; Tosato, Sarah; Veijola, Juha; Waddington, John; Walsh, Dermot; Wang, Dai; Wang, Qiang; Webb, Bradley T.; Weiser, Mark; Wildenauer, Dieter B.; Williams, Nigel M.; Williams, Stephanie; Witt, Stephanie H.; Wolen, Aaron R.; Wong, Emily H.M.; Wormley, Brandon K.; Wu, Jing Qin; Xi, Hualin Simon; Zai, Clement C.; Zheng, Xuebin; Zimprich, Fritz; Wray, Naomi R.; Stefansson, Kari; Visscher, Peter M.; Adolfsson, Rolf; Andreassen, Ole A.; Blackwood, Douglas H.R.; Bramon, Elvira; Buxbaum, Joseph D.; Børglum, Anders D.; Cichon, Sven; Darvasi, Ariel; Domenici, Enrico; Ehrenreich, Hannelore; Esko, Tonu; Gejman, Pablo V.; Gill, Michael; Gurling, Hugh; Hultman, Christina M.; Iwata, Nakao; Jablensky, Assen V.; Jonsson, Erik G.; Kendler, Kenneth S.; Kirov, George; Knight, Jo; Lencz, Todd; Levinson, Douglas F.; Li, Qingqin S.; Liu, Jianjun; Malhotra, Anil K.; 
    McCarroll, Steven A.; McQuillin, Andrew; Moran, Jennifer L.; Mortensen, Preben B.; Mowry, Bryan J.; Nöthen, Markus M.; Ophoff, Roel A.; Owen, Michael J.; Palotie, Aarno; Pato, Carlos N.; Petryshen, Tracey L.; Posthuma, Danielle; Rietschel, Marcella; Riley, Brien P.; Rujescu, Dan; Sham, Pak C.; Sklar, Pamela; St. Clair, David; Weinberger, Daniel R.; Wendland, Jens R.; Werge, Thomas; Daly, Mark J.; Sullivan, Patrick F.; O’Donovan, Michael C.; Kraft, Peter; Hunter, David J.; Adank, Muriel; Ahsan, Habibul; Aittomäki, Kristiina; Baglietto, Laura; Berndt, Sonja; Blomquist, Carl; Canzian, Federico; Chang-Claude, Jenny; Chanock, Stephen J.; Crisponi, Laura; Czene, Kamila; Dahmen, Norbert; Silva, Isabel dos Santos; Easton, Douglas; Eliassen, A. Heather; Figueroa, Jonine; Fletcher, Olivia; Garcia-Closas, Montserrat; Gaudet, Mia M.; Gibson, Lorna; Haiman, Christopher A.; Hall, Per; Hazra, Aditi; Hein, Rebecca; Henderson, Brian E.; Hofman, Albert; Hopper, John L.; Irwanto, Astrid; Johansson, Mattias; Kaaks, Rudolf; Kibriya, Muhammad G.; Lichtner, Peter; Lindström, Sara; Liu, Jianjun; Lund, Eiliv; Makalic, Enes; Meindl, Alfons; Meijers-Heijboer, Hanne; Müller-Myhsok, Bertram; Muranen, Taru A.; Nevanlinna, Heli; Peeters, Petra H.; Peto, Julian; Prentice, Ross L.; Rahman, Nazneen; Sánchez, María José; Schmidt, Daniel F.; Schmutzler, Rita K.; Southey, Melissa C.; Tamimi, Rulla; Travis, Ruth; Turnbull, Clare; Uitterlinden, Andre G.; van der Luijt, Rob B.; Waisfisz, Quinten; Wang, Zhaoming; Whittemore, Alice S.; Yang, Rose; Zheng, Wei; Kathiresan, Sekar; Pato, Michele; Pato, Carlos; Tamimi, Rulla; Stahl, Eli; Zaitlen, Noah; Pasaniuc, Bogdan; Belbin, Gillian; Kenny, Eimear E.; Schierup, Mikkel H.; De Jager, Philip; Patsopoulos, Nikolaos A.; McCarroll, Steve; Daly, Mark; Purcell, Shaun; Chasman, Daniel; Neale, Benjamin; Goddard, Michael; Visscher, Peter M.; Kraft, Peter; Patterson, Nick; Price, Alkes L.

    2015-01-01

    Polygenic risk scores have shown great promise in predicting complex disease risk and will become more accurate as training sample sizes increase. The standard approach for calculating risk scores involves linkage disequilibrium (LD)-based marker pruning and applying a p value threshold to association statistics, but this discards information and can reduce predictive accuracy. We introduce LDpred, a method that infers the posterior mean effect size of each marker by using a prior on effect sizes and LD information from an external reference panel. Theory and simulations show that LDpred outperforms the approach of pruning followed by thresholding, particularly at large sample sizes. Accordingly, predicted R² increased from 20.1% to 25.3% in a large schizophrenia dataset and from 9.8% to 12.0% in a large multiple sclerosis dataset. A similar relative improvement in accuracy was observed for three additional large disease datasets and for non-European schizophrenia samples. The advantage of LDpred over existing methods will grow as sample sizes increase. PMID:26430803

  19. Modeling Linkage Disequilibrium Increases Accuracy of Polygenic Risk Scores.

    PubMed

    Vilhjálmsson, Bjarni J; Yang, Jian; Finucane, Hilary K; Gusev, Alexander; Lindström, Sara; Ripke, Stephan; Genovese, Giulio; Loh, Po-Ru; Bhatia, Gaurav; Do, Ron; Hayeck, Tristan; Won, Hong-Hee; Kathiresan, Sekar; Pato, Michele; Pato, Carlos; Tamimi, Rulla; Stahl, Eli; Zaitlen, Noah; Pasaniuc, Bogdan; Belbin, Gillian; Kenny, Eimear E; Schierup, Mikkel H; De Jager, Philip; Patsopoulos, Nikolaos A; McCarroll, Steve; Daly, Mark; Purcell, Shaun; Chasman, Daniel; Neale, Benjamin; Goddard, Michael; Visscher, Peter M; Kraft, Peter; Patterson, Nick; Price, Alkes L

    2015-10-01

    Polygenic risk scores have shown great promise in predicting complex disease risk and will become more accurate as training sample sizes increase. The standard approach for calculating risk scores involves linkage disequilibrium (LD)-based marker pruning and applying a p value threshold to association statistics, but this discards information and can reduce predictive accuracy. We introduce LDpred, a method that infers the posterior mean effect size of each marker by using a prior on effect sizes and LD information from an external reference panel. Theory and simulations show that LDpred outperforms the approach of pruning followed by thresholding, particularly at large sample sizes. Accordingly, predicted R² increased from 20.1% to 25.3% in a large schizophrenia dataset and from 9.8% to 12.0% in a large multiple sclerosis dataset. A similar relative improvement in accuracy was observed for three additional large disease datasets and for non-European schizophrenia samples. The advantage of LDpred over existing methods will grow as sample sizes increase. PMID:26430803
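
    The "pruning plus thresholding" baseline that the abstract contrasts with LDpred reduces, at scoring time, to a weighted sum of allele counts over the markers that survive the p value cutoff. A minimal toy sketch of that scoring step (the function name and all data below are invented for illustration; this is not the LDpred method itself):

```python
import numpy as np

def prs_threshold(genotypes, betas, pvals, p_cut=5e-2):
    """Toy polygenic risk score via p value thresholding: keep only markers
    whose association p value passes the cutoff, then sum allele counts
    weighted by the estimated effect sizes."""
    keep = pvals < p_cut
    return genotypes[:, keep] @ betas[keep]

# Tiny illustrative data: 3 individuals x 4 markers (allele counts 0/1/2).
G = np.array([[0, 1, 2, 0],
              [1, 0, 1, 2],
              [2, 2, 0, 1]], dtype=float)
beta = np.array([0.2, -0.1, 0.05, 0.3])
p = np.array([1e-6, 0.2, 1e-3, 0.5])   # only markers 0 and 2 pass 0.05

scores = prs_threshold(G, beta, p)
print(scores)   # per-individual scores over the two retained markers: ~[0.1, 0.25, 0.4]
```

    The information loss the abstract describes is visible here: markers 1 and 3 contribute nothing to the score regardless of their (possibly real) effects, which is what a posterior-mean approach avoids.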

  20. Using Transponders on the Moon to Increase Accuracy of GPS

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin; Chui, Talso

    2008-01-01

    It has been proposed to place laser or radio transponders at suitably chosen locations on the Moon to increase the accuracy achievable using the Global Positioning System (GPS) or other satellite-based positioning system. The accuracy of GPS position measurements depends on the accuracy of determination of the ephemerides of the GPS satellites. These ephemerides are determined by means of ranging to and from Earth-based stations and consistency checks among the satellites. Unfortunately, ranging to and from Earth is subject to errors caused by atmospheric effects, notably including unpredictable variations in refraction. The proposal is based on exploitation of the fact that ranging between a GPS satellite and another object outside the atmosphere is not subject to error-inducing atmospheric effects. The Moon is such an object and is a convenient place for a ranging station. The ephemeris of the Moon is well known and, unlike a GPS satellite, the Moon is massive enough that its orbit is not measurably affected by the solar wind and solar radiation. According to the proposal, each GPS satellite would repeatedly send a short laser or radio pulse toward the Moon and the transponder(s) would respond by sending back a pulse and delay information. The GPS satellite could then compute its distance from the known position(s) of the transponder(s) on the Moon. Because the same hemisphere of the Moon faces the Earth continuously, any transponders placed there would remain continuously or nearly continuously accessible to GPS satellites, and so only a relatively small number of transponders would be needed to provide continuous coverage. Assuming that the transponders would depend on solar power, it would be desirable to use at least two transponders, placed at diametrically opposite points on the edges of the Moon disk as seen from Earth, so that all or most of the time, at least one of them would be in sunlight.
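
    The core ranging arithmetic in the proposal is simple: the satellite times a pulse's round trip, subtracts the transponder's reported turnaround delay, and halves the remainder. A minimal sketch under those assumptions (function name and numbers are illustrative, not from the proposal):

```python
# Toy sketch of satellite-to-transponder ranging: time a pulse's round trip,
# subtract the transponder's own turnaround delay, halve, and multiply by c.
C = 299_792_458.0  # speed of light, m/s

def one_way_range_m(t_roundtrip_s, transponder_delay_s):
    """Range from satellite to lunar transponder (names are illustrative)."""
    return C * (t_roundtrip_s - transponder_delay_s) / 2.0

# Earth-Moon scale example: ~385,000 km one way -> ~2.57 s round trip.
d = 385_000e3
t_rt = 2 * d / C + 1e-6            # plus an assumed 1 microsecond transponder delay
print(one_way_range_m(t_rt, 1e-6))  # recovers ~3.85e8 m
```

    Because both endpoints are outside the atmosphere, no refraction correction term appears in this expression, which is exactly the advantage the entry describes.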

  1. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  2. Using satellite data to increase accuracy of PMF calculations

    SciTech Connect

    Mettel, M.C.

    1992-03-01

    The accuracy of a flood severity estimate depends on the data used. The more detailed and precise the data, the more accurate the estimate. Earth observation satellites gather detailed data for determining the probable maximum flood at hydropower projects.

  3. Accuracy considerations in the computational analysis of jet noise

    NASA Technical Reports Server (NTRS)

    Scott, James N.

    1993-01-01

    The application of computational fluid dynamics methods to the analysis of problems in aerodynamic noise has resulted in the extension and adaptation of conventional CFD to the discipline now referred to as computational aeroacoustics (CAA). In the analysis of jet noise accurate resolution of a wide range of spatial and temporal scales in the flow field is essential if the acoustic far field is to be predicted. The numerical simulation of unsteady jet flow has been successfully demonstrated and many flow features have been computed with reasonable accuracy. Grid refinement and increased solution time are discussed as means of improving accuracy of Navier-Stokes solutions of unsteady jet flow. In addition various properties of different numerical procedures which influence accuracy are examined with particular emphasis on dispersion and dissipation characteristics. These properties are investigated by using selected schemes to solve model problems for the propagation of a shock wave and a sinusoidal disturbance. The results are compared for the different schemes.
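
    The dissipation property discussed above can be demonstrated with the standard model problem of linear advection of a sinusoid: the exact solution preserves amplitude, but a first-order upwind scheme damps it. A minimal sketch (grid size, CFL number, and step count are arbitrary choices for illustration):

```python
import numpy as np

# Advect a sinusoid with the (very dissipative) first-order upwind scheme
# on a periodic grid and observe the amplitude decay that the exact
# solution of u_t + a u_x = 0 does not have.
N, a, cfl, steps = 64, 1.0, 0.5, 200
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = 1.0 / N
dt = cfl * dx / a
u = np.sin(2 * np.pi * x)

for _ in range(steps):
    u = u - a * dt / dx * (u - np.roll(u, 1))   # upwind difference

amp = u.max()
print(amp)   # noticeably below the initial amplitude of 1.0
```

    Swapping in a less dissipative (e.g. higher-order) difference in the update line is the kind of comparison the abstract describes for assessing scheme accuracy.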

  4. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit, using the LP150Q lunar gravity model. Accuracy of the LP150Q model is poorer in the extended mission elliptical orbit. Later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission. Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy. Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling; implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating the use of analytic and predicted attitude modeling to improve full-Sun prediction accuracy. Comparison of the FDF ephemeris file to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  5. Holter triage ambulatory ECG analysis. Accuracy and time efficiency.

    PubMed

    Cooper, D H; Kennedy, H L; Lyyski, D S; Sprague, M K

    1996-01-01

    Triage ambulatory electrocardiographic (ECG) analysis permits relatively unskilled office workers to submit 24-hour ambulatory ECG Holter tapes to an automatic instrument (model 563, Del Mar Avionics, Irvine, CA) for interpretation. The instrument system "triages" what it is capable of automatically interpreting and rejects those tapes (with high ventricular arrhythmia density) requiring thorough analysis. Nevertheless, a trained cardiovascular technician ultimately edits what is accepted for analysis. This study examined the clinical validity of one manufacturer's triage instrumentation with regard to accuracy and time efficiency for interpreting ventricular arrhythmia. A database of 50 Holter tapes stratified for frequency of ventricular ectopic beats (VEBs) was examined by triage, conventional, and full-disclosure hand-count Holter analysis. Half of the tapes were found to be automatically analyzable by the triage method. Comparison of the VEB accuracy of triage versus conventional analysis using the full-disclosure hand count as the standard showed that triage analysis overall appeared as accurate as conventional Holter analysis but had limitations in detecting ventricular tachycardia (VT) runs. Overall sensitivity, positive predictive accuracy, and false positive rate for the triage ambulatory ECG analysis were 96, 99, and 0.9%, respectively, for isolated VEBs, 92, 93, and 7%, respectively, for ventricular couplets, and 48, 93, and 7%, respectively, for VT. Error in VT detection by triage analysis occurred on a single tape; when that tape was excluded, accuracy for the remaining 11 tapes containing VT runs was significantly higher, with a sensitivity of 86%, positive predictive accuracy of 90%, and false positive rate of 10%. Stopwatch-recorded time efficiency was carefully logged during both triage and conventional ambulatory ECG analysis and divided into five time phases: secretarial, machine, analysis, editing, and total time. Triage analysis was significantly (P < .05) more time efficient than conventional analysis.
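
    The three figures the abstract reports follow standard definitions, with "false positive rate" used as the complement of positive predictive accuracy (e.g. 93% and 7%). A minimal sketch of those definitions (the counts below are invented for illustration, not taken from the study):

```python
# Sensitivity, positive predictive accuracy (PPV), and the abstract's
# "false positive rate" (1 - PPV), from true/false positive and false
# negative counts.
def holter_metrics(tp, fp, fn):
    sensitivity = tp / (tp + fn)   # fraction of real events detected
    ppv = tp / (tp + fp)           # fraction of detections that are real
    fp_rate = 1.0 - ppv            # the paper's usage of "false positive rate"
    return sensitivity, ppv, fp_rate

sens, ppv, fpr = holter_metrics(tp=92, fp=7, fn=8)
print(sens, ppv, fpr)   # 0.92, ~0.93, ~0.07
```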

  6. Increasing the range accuracy of three-dimensional ghost imaging ladar using optimum slicing number method

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Xu, Lu; Yang, Cheng-Hua; Wang, Qiang; Liu, Yue-Hao; Zhao, Yuan

    2015-12-01

    The range accuracy of three-dimensional (3D) ghost imaging is derived. Based on the derived range accuracy equation, the relationship between the slicing number and the range accuracy is analyzed and an optimum slicing number (OSN) is determined. According to the OSN, an improved 3D ghost imaging algorithm is proposed to increase the range accuracy. Experimental results indicate that the slicing number can affect the range accuracy significantly and the highest range accuracy can be achieved if the 3D ghost imaging system works with OSN. Project supported by the Young Scientist Fund of the National Natural Science Foundation of China (Grant No. 61108072).

  7. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limitation of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios for these factors are evaluated according to normal operating conditions of power grid measurement. Based on the evaluation and simulation, the errors in phase angle and frequency caused by each factor are calculated and discussed.

  8. Dust trajectory sensor: accuracy and data analysis.

    PubMed

    Xie, J; Sternovsky, Z; Grün, E; Auer, S; Duncan, N; Drake, K; Le, H; Horanyi, M; Srama, R

    2011-10-01

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Grün, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V. Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008)] and a method of triggering was developed [S. Auer, G. Lawrence, E. Grün, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010)]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1° in direction. PMID:22047326
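
    The trajectory reconstruction the abstract describes reduces, in its simplest form, to timing the induced-charge peaks on successive electrode planes: the plane separation over the time difference gives an axial speed, and the lateral shift of the signal between planes gives an approach angle. A toy sketch of that geometry (the numbers and the two-plane simplification are illustrative assumptions, not the instrument's actual layout):

```python
import math

# Toy two-plane velocity estimate: axial speed from transit time between
# planes, transverse speed from the lateral shift of the signal position.
def dts_velocity(plane_sep_m, t1_s, t2_s, lateral_shift_m):
    vz = plane_sep_m / (t2_s - t1_s)       # axial speed component
    vx = lateral_shift_m / (t2_s - t1_s)   # transverse speed component
    speed = math.hypot(vx, vz)
    angle_deg = math.degrees(math.atan2(vx, vz))
    return speed, angle_deg

speed, angle = dts_velocity(0.1, 0.0, 2.5e-5, 0.001)
print(speed, angle)   # ~4 km/s, ~0.57 degrees off axis
```

    The quoted 1%-in-velocity and 1°-in-direction accuracy then corresponds to how precisely the peak times and wire positions can be extracted from the induced-charge waveforms.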

  9. Dust trajectory sensor: Accuracy and data analysis

    NASA Astrophysics Data System (ADS)

    Xie, J.; Sternovsky, Z.; Grün, E.; Auer, S.; Duncan, N.; Drake, K.; Le, H.; Horanyi, M.; Srama, R.

    2011-10-01

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Grün, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V. Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008), 10.1063/1.2960566] and a method of triggering was developed [S. Auer, G. Lawrence, E. Grün, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010), 10.1016/j.nima.2010.06.091]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1° in direction.

  10. Increasing Assignment Completion and Accuracy Using a Daily Report Card Procedure.

    ERIC Educational Resources Information Center

    Drew, Barry M.; And Others

    1982-01-01

    Examined the effects of daily report cards designed to increase the completion and accuracy of in-class assignments in two youngsters described as having a behavioral history of difficulty in completing seat work. Use of the procedure produced immediate significant changes in rates of both completion and accuracy. (Author)

  11. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis

    NASA Astrophysics Data System (ADS)

    Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen

    2016-05-01

    Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging.

  12. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis.

    PubMed

    Litjens, Geert; Sánchez, Clara I; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen-van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen

    2016-01-01

    Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce 'deep learning' as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30-40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that 'deep learning' holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging. PMID:27212078

  13. Meta-analysis of diagnostic accuracy studies in mental health

    PubMed Central

    Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J

    2015-01-01

    Objectives To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between-study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042
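
    The threshold trade-off described in the abstract can be shown in miniature: sweeping the positivity cutoff on a screening score lowers sensitivity while raising specificity. The scores below are synthetic (two normal distributions, purely to illustrate the effect, not data from the review):

```python
import numpy as np

# Synthetic screening scores: cases score higher on average than non-cases.
rng = np.random.default_rng(0)
diseased = rng.normal(2.0, 1.0, 500)
healthy = rng.normal(0.0, 1.0, 500)

cutoffs = (0.5, 1.0, 1.5)
sens = [np.mean(diseased >= c) for c in cutoffs]   # cases testing positive
spec = [np.mean(healthy < c) for c in cutoffs]     # non-cases testing negative

for c, sn, sp in zip(cutoffs, sens, spec):
    print(f"cutoff={c}: sensitivity={sn:.2f} specificity={sp:.2f}")
```

    Plotting sensitivity against 1 - specificity over many cutoffs traces the ROC curve that the HSROC model summarises across studies.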

  14. Accuracy analysis of optical ranging in atmosphere

    NASA Astrophysics Data System (ADS)

    Yuan, Hong-wu; Huang, Yin-bo; Mei, Hai-ping; Rao, Rui-zhong

    2009-07-01

    Optical ranging is one of the most precise techniques for distance measurement. The effects of the density variation of the atmosphere, aerosols, and clouds on optical ranging precision are generally considered, and a new method is proposed for calculating the ranging precision in the presence of aerosol particles and clouds. The size distribution spectrum models for aerosols and clouds in the Optical Properties of Aerosols and Clouds Package (OPAC) are adopted. Results show that aerosols and clouds can introduce errors of several centimeters to several tens of meters into the ranging. The relationship between the ranging precision and the relative humidity, the zenith angle of the ranging direction, and the optical wavelength is also analyzed. The ranging error does not show an obvious dependence on the wavelength, but it depends on the zenith angle, especially for angles larger than 70 degrees. The ranging error depends on the relative humidity as well: the error induced by aerosols increases gradually with relative humidity below 80%, but increases rapidly when the relative humidity exceeds 80%. Our results could provide a theoretical basis and reference for the application of optical ranging.

  15. Accuracy Analysis of the PIC Method

    NASA Astrophysics Data System (ADS)

    Verboncoeur, J. P.; Cartwright, K. L.

    2000-10-01

    The discretization errors for many steps of the classical Particle-in-Cell (PIC) model have been well studied (C. K. Birdsall and A. B. Langdon, Plasma Physics via Computer Simulation, McGraw-Hill, New York, NY, 1985; R. W. Hockney and J. W. Eastwood, Computer Simulation Using Particles, McGraw-Hill, New York, NY, 1981). In this work, the errors in the interpolation algorithms, which provide the connection between continuum particles and discrete fields, are described in greater detail. In addition, the coupling of errors between steps in the method is derived. The analysis is carried out for both electrostatic and electromagnetic PIC models, and the results are demonstrated using a bounded one-dimensional electrostatic PIC code (J. P. Verboncoeur et al., J. Comput. Phys. 104, 321-328, 1993), as well as a bounded two-dimensional electromagnetic PIC code (J. P. Verboncoeur et al., Comp. Phys. Comm. 87, 199-211, 1995).
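
    The particle-to-grid interpolation analyzed above is, at first order, the standard cloud-in-cell (CIC) weighting: a particle's charge is split between the two grid nodes bracketing it, with weights proportional to proximity. A minimal 1D sketch of that deposition step (grid size and particle position are arbitrary illustrative choices):

```python
import numpy as np

# First-order (cloud-in-cell) charge deposition on a periodic 1D grid.
# The linear interpolation here is one source of the discretization error
# discussed in the abstract.
def deposit_cic(positions, q, dx, ngrid):
    rho = np.zeros(ngrid)
    for xp in positions:
        j = int(xp // dx)                  # index of the left grid node
        w = xp / dx - j                    # fractional distance past it
        rho[j % ngrid] += q * (1.0 - w) / dx
        rho[(j + 1) % ngrid] += q * w / dx
    return rho

rho = deposit_cic([0.25], q=1.0, dx=0.1, ngrid=10)
print(rho[2], rho[3])   # particle midway between nodes 2 and 3: ~5.0 each
```

    The same weights, applied in reverse to gather the field at the particle position, keep the force interpolation consistent with the charge deposition, which matters for the error coupling the abstract derives.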

  16. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given, along with a listing of the computer program written to implement these techniques. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is given. Results of error matrices from the mapping effort of the San Juan National Forest are given. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is given. A proposed method for determining the reliability of change detection between two maps of the same area produced at different times is given.
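
    The discrete multivariate techniques mentioned above center on the error (confusion) matrix. A minimal sketch of two standard summary statistics computed from it, overall accuracy and Cohen's kappa (the 2x2 counts below are invented for illustration; rows are map classes, columns are reference classes):

```python
import numpy as np

def accuracy_and_kappa(m):
    """Overall accuracy and Cohen's kappa from an error (confusion) matrix."""
    m = np.asarray(m, dtype=float)
    n = m.sum()
    po = np.trace(m) / n                   # observed agreement
    pe = (m.sum(0) @ m.sum(1)) / n**2      # agreement expected by chance
    return po, (po - pe) / (1.0 - pe)

acc, kappa = accuracy_and_kappa([[45, 5], [10, 40]])
print(acc, kappa)   # ~0.85 overall accuracy; kappa ~0.70
```

    Kappa discounts the agreement a random labeling would achieve, which is why it is the usual companion to overall accuracy in map accuracy assessment.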

  17. Increasing the accuracy of measurements based on the solution of Pauli's quantum equation

    NASA Astrophysics Data System (ADS)

    Ermishin, Sergey; Korol, Alexandra

    2013-05-01

    A measurement principle is described that increases the accuracy of measurements through the use of redundant measurements. Its main properties are a discrete method exhibiting a surge of probability within the parent entity, and comparison of the graph of the probability distribution for the diffraction grids with the graph of the probability density function. The method is based on an analog of the solution of the Pauli equation. Electronic reference measurements with quantum computing applied to the mathematical data processing can greatly increase the credibility and accuracy of measurements at low cost, which is confirmed by simulation.

  18. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis

    PubMed Central

    Litjens, Geert; Sánchez, Clara I.; Timofeeva, Nadya; Hermsen, Meyke; Nagtegaal, Iris; Kovacs, Iringo; Hulsbergen - van de Kaa, Christina; Bult, Peter; van Ginneken, Bram; van der Laak, Jeroen

    2016-01-01

    Pathologists face a substantial increase in workload and complexity of histopathologic cancer diagnosis due to the advent of personalized medicine. Therefore, diagnostic protocols have to focus equally on efficiency and accuracy. In this paper we introduce ‘deep learning’ as a technique to improve the objectivity and efficiency of histopathologic slide analysis. Through two examples, prostate cancer identification in biopsy specimens and breast cancer metastasis detection in sentinel lymph nodes, we show the potential of this new methodology to reduce the workload for pathologists, while at the same time increasing objectivity of diagnoses. We found that all slides containing prostate cancer and micro- and macro-metastases of breast cancer could be identified automatically while 30–40% of the slides containing benign and normal tissue could be excluded without the use of any additional immunohistochemical markers or human intervention. We conclude that ‘deep learning’ holds great promise to improve the efficacy of prostate cancer diagnosis and breast cancer staging. PMID:27212078

  19. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy in the design and manufacturing stages, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a bigger effect on the accuracy of the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. From the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.

  20. Using Self-Monitoring to Increase Attending to Task and Academic Accuracy in Children with Autism

    ERIC Educational Resources Information Center

    Holifield, Cassandra; Goodman, Janet; Hazelkorn, Michael; Heflin, L. Juane

    2010-01-01

    This study was conducted to investigate the effectiveness of a self-monitoring procedure on increasing attending to task and academic accuracy in two elementary students with autism in their self-contained classroom. A multiple baseline across participants in two academic subject areas was used to assess the effectiveness of the intervention. Both…

  1. Bilingual Language Assessment: A Meta-Analysis of Diagnostic Accuracy

    ERIC Educational Resources Information Center

    Dollaghan, Christine A.; Horner, Elizabeth A.

    2011-01-01

    Purpose: To describe quality indicators for appraising studies of diagnostic accuracy and to report a meta-analysis of measures for diagnosing language impairment (LI) in bilingual Spanish-English U.S. children. Method: The authors searched electronically and by hand to locate peer-reviewed English-language publications meeting inclusion criteria;…

  2. Range accuracy analysis of streak tube imaging lidar systems

    NASA Astrophysics Data System (ADS)

    Ye, Guangchao; Fan, Rongwei; Chen, Zhaodong; Yuan, Wei; Chen, Deying; He, Ping

    2016-02-01

    Streak tube imaging lidar (STIL) is an active imaging system that achieves high range accuracy over a wide range gate by using a pulsed laser transmitter and a streak tube receiver to produce 3D range images. This work investigates the range accuracy performance of STIL systems based on a peak detection algorithm, taking into account the effects of blurring of the image. A theoretical model of the time-resolved signal distribution, including the static blurring width in addition to the laser pulse width, is presented, resulting in a modified range accuracy analysis. The model indicates that the static blurring width has a significant effect on the range accuracy, which is validated by both simulation and experimental results. By using the optimal static blurring width, the range accuracies are enhanced in both indoor and outdoor experiments, at stand-off distances of 10 m and 1700 m, respectively; corresponding best range errors of 0.06 m and 0.25 m were achieved in a daylight environment.
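    The peak-detection ranging described in this abstract can be illustrated with a short sketch. This is not the authors' implementation: the quadrature combination of laser pulse width and static blurring width, and all numeric values, are assumptions for illustration only.

```python
import numpy as np

C = 3.0e8  # speed of light (m/s)

def range_from_peak(t, signal):
    """Peak detection: range = c * t_peak / 2 for the round-trip time of flight."""
    return C * t[np.argmax(signal)] / 2.0

# Simulate a Gaussian return pulse whose effective width combines the laser
# pulse width and the static blurring width (assumed here to add in quadrature).
tau_laser, tau_blur = 1.0e-9, 0.5e-9           # illustrative widths (s)
tau_eff = np.hypot(tau_laser, tau_blur)
true_range = 10.0                              # stand-off distance (m)
t0 = 2.0 * true_range / C                      # round-trip delay
t = np.linspace(0.0, 200e-9, 20001)
signal = np.exp(-((t - t0) ** 2) / (2.0 * tau_eff ** 2))
estimate = range_from_peak(t, signal)
```

    The wider the effective pulse (laser plus blurring), the flatter the peak and the more the peak location wanders under noise, which is why the abstract's static blurring width matters for range accuracy.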

  3. Neutron electric dipole moment and possibilities of increasing accuracy of experiments

    NASA Astrophysics Data System (ADS)

    Serebrov, A. P.; Kolomenskiy, E. A.; Pirozhkov, A. N.; Krasnoshchekova, I. A.; Vasiliev, A. V.; Polyushkin, A. O.; Lasakov, M. S.; Murashkin, A. N.; Solovey, V. A.; Fomin, A. K.; Shoka, I. V.; Zherebtsov, O. M.; Aleksandrov, E. B.; Dmitriev, S. P.; Dovator, N. A.; Geltenbort, P.; Ivanov, S. N.; Zimmer, O.

    2016-01-01

    The paper reports the results of an experiment searching for the neutron electric dipole moment (EDM), performed at the ILL reactor (Grenoble, France). The double-chamber magnetic resonance spectrometer of the Petersburg Nuclear Physics Institute (PNPI), with prolonged holding of ultracold neutrons, was used. Sources of possible systematic errors are analyzed, and their influence on the measurement results is estimated. The ways and prospects of increasing the accuracy of the experiment are discussed.

  4. Development of nonlinear weighted compact schemes with increasingly higher order accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Shuhai; Jiang, Shufen; Shu, Chi-Wang

    2008-07-01

    In this paper, we design a class of high order accurate nonlinear weighted compact schemes that are higher order extensions of the nonlinear weighted compact schemes proposed by Deng and Zhang [X. Deng, H. Zhang, Developing high-order weighted compact nonlinear schemes, J. Comput. Phys. 165 (2000) 22-44] and the weighted essentially non-oscillatory schemes of Jiang and Shu [G.-S. Jiang, C.-W. Shu, Efficient implementation of weighted ENO schemes, J. Comput. Phys. 126 (1996) 202-228] and Balsara and Shu [D.S. Balsara, C.-W. Shu, Monotonicity preserving weighted essentially non-oscillatory schemes with increasingly high order of accuracy, J. Comput. Phys. 160 (2000) 405-452]. These nonlinear weighted compact schemes are proposed based on the cell-centered compact scheme of Lele [S.K. Lele, Compact finite difference schemes with spectral-like resolution, J. Comput. Phys. 103 (1992) 16-42]. Instead of performing the nonlinear interpolation on the conservative variables as in Deng and Zhang (2000), we propose to directly interpolate the flux on its stencil. Using the Lax-Friedrichs flux splitting and characteristic-wise projection, the resulting interpolation formulae are similar to those of the regular WENO schemes. Hence, the detailed analysis and even many pieces of the code can be directly copied from those of the regular WENO schemes. Through systematic tests and comparisons with the regular WENO schemes, we observe that the nonlinear weighted compact schemes have the same ability to capture strong discontinuities, while the resolution of short waves is improved and numerical dissipation is reduced.
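    The nonlinear weighting the abstract refers to can be seen in the standard fifth-order WENO reconstruction of Jiang and Shu, which the paper's compact schemes reuse. The sketch below is the regular WENO5 interface reconstruction, not the compact scheme itself:

```python
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """Classic fifth-order WENO (Jiang-Shu) reconstruction of the value at
    the interface i+1/2 from the five-point stencil v[i-2..i+2]."""
    v0, v1, v2, v3, v4 = v
    # Smoothness indicators of the three candidate substencils.
    b0 = 13/12*(v0 - 2*v1 + v2)**2 + 1/4*(v0 - 4*v1 + 3*v2)**2
    b1 = 13/12*(v1 - 2*v2 + v3)**2 + 1/4*(v1 - v3)**2
    b2 = 13/12*(v2 - 2*v3 + v4)**2 + 1/4*(3*v2 - 4*v3 + v4)**2
    # Nonlinear weights built from the linear (optimal) weights 1/10, 6/10, 3/10.
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    # Third-order candidate reconstructions on each substencil.
    q = np.array([(2*v0 - 7*v1 + 11*v2) / 6,
                  ( -v1 + 5*v2 +  2*v3) / 6,
                  (2*v2 + 5*v3 -   v4) / 6])
    return float(w @ q)

smooth = weno5_reconstruct([1.0, 2.0, 3.0, 4.0, 5.0])    # linear data
shock  = weno5_reconstruct([1.0, 1.0, 1.0, 10.0, 10.0])  # jump at i+1/2
```

    On smooth data the weights revert to the linear weights and the scheme reaches its design order; near the jump the weights collapse onto the leftmost smooth substencil, avoiding the discontinuity.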

  5. Mesoscale modelling methodology based on nudging to increase accuracy in WRA

    NASA Astrophysics Data System (ADS)

    Mylonas Dirdiris, Markos; Barbouchi, Sami; Hermmann, Hugo

    2016-04-01

    Offshore wind energy has recently become a rapidly growing renewable energy resource worldwide, with several offshore wind projects in different planning stages. Despite this, a better understanding of the atmospheric interaction within the marine atmospheric boundary layer (MABL) is needed in order to contribute to better energy capture and cost-effectiveness. Observational nudging has recently attracted attention as an innovative method to increase the accuracy of wind flow modelling. This particular study focuses on the observational nudging capability of the Weather Research and Forecasting (WRF) model and ways in which the uncertainty of wind flow modelling in wind resource assessment (WRA) can be reduced. Finally, an alternative way to calculate the model uncertainty is pinpointed. Approach: the WRF mesoscale model will be nudged with observations from FINO3 at three different heights. The model simulations with and without observational nudging will be verified against FINO1 measurement data at 100 m. In order to evaluate the observational nudging capability of WRF, two ways to derive the model uncertainty will be described: one global uncertainty, and an uncertainty per wind speed bin derived using the recommended practice of the IEA in order to link the model uncertainty to a wind energy production uncertainty. This study assesses the observational data assimilation capability of the WRF model within the same vertical gridded atmospheric column. The principal aim is to investigate whether having observations up to one height could improve the simulation at a higher vertical level. The study will use objective analysis implementing a Cressman interpolation scheme to interpolate the observations in time and in space (keeping the horizontal component constant) to the gridded analysis. The WRF model core will then incorporate the interpolated variables into the "first guess" to develop a nudged simulation.
Consequently, WRF with and without
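    The Cressman scheme named in the abstract weights each observation by its distance from a grid point and corrects the first-guess field by the weighted innovations. A minimal 1-D sketch (illustrative only; the study applies this inside WRF's objective analysis, not as standalone code):

```python
import numpy as np

def cressman_weights(r, R):
    """Cressman distance weighting: w = (R^2 - r^2)/(R^2 + r^2) inside the
    influence radius R, zero outside."""
    r = np.asarray(r, dtype=float)
    w = (R**2 - r**2) / (R**2 + r**2)
    return np.where(r <= R, w, 0.0)

def cressman_analysis(grid_x, background, obs_x, obs_val, R):
    """One successive-correction step: nudge a 1-D background ("first guess")
    toward the observations using Cressman weights."""
    analysis = np.asarray(background, dtype=float).copy()
    bg_at_obs = np.interp(obs_x, grid_x, analysis)
    innovation = obs_val - bg_at_obs          # observation minus first guess
    for i, x in enumerate(grid_x):
        w = cressman_weights(np.abs(obs_x - x), R)
        if w.sum() > 0:
            analysis[i] = analysis[i] + np.sum(w * innovation) / w.sum()
    return analysis

grid_x = np.arange(11.0)
background = np.zeros(11)
analysis = cressman_analysis(grid_x, background, np.array([5.0]),
                             np.array([2.0]), R=1.0)
```

    Grid points outside the influence radius keep the first-guess value; points near the observation are pulled toward it.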

  6. Accuracy Enhancement of Inertial Sensors Utilizing High Resolution Spectral Analysis

    PubMed Central

    Noureldin, Aboelmagd; Armstrong, Justin; El-Shafie, Ahmed; Karamat, Tashfeen; McGaughey, Don; Korenberg, Michael; Hussain, Aini

    2012-01-01

    In both military and civilian applications, the inertial navigation system (INS) and the global positioning system (GPS) are two complementary technologies that can be integrated to provide reliable positioning and navigation information for land vehicles. The accuracy enhancement of INS sensors and the integration of INS with GPS are the subjects of widespread research. Wavelet de-noising of INS sensors has had limited success in removing the long-term (low-frequency) inertial sensor errors. The primary objective of this research is to develop a novel inertial sensor accuracy enhancement technique that can remove both short-term and long-term error components from inertial sensor measurements prior to INS mechanization and INS/GPS integration. A high resolution spectral analysis technique called the fast orthogonal search (FOS) algorithm is used to accurately model the low frequency range of the spectrum, which includes the vehicle motion dynamics and inertial sensor errors. FOS models the spectral components with the most energy first and uses an adaptive threshold to stop adding frequency terms when fitting a term does not reduce the mean squared error more than fitting white noise. The proposed method was developed, tested and validated through road test experiments involving both low-end tactical grade and low cost MEMS-based inertial systems. The results demonstrate that in most cases the position accuracy during GPS outages using FOS de-noised data is superior to the position accuracy using wavelet de-noising.
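    The stopping rule described in the abstract, keep adding the highest-energy spectral terms until a new term fits little more than noise, can be sketched with a simplified greedy fit. This is not the FOS implementation (real FOS orthogonalises candidate terms incrementally instead of re-solving a least-squares problem at each step); frequencies, tolerance and data are illustrative assumptions:

```python
import numpy as np

def greedy_spectral_fit(t, y, candidate_freqs, tol=0.1, max_terms=5):
    """Greedy frequency selection in the spirit of fast orthogonal search:
    repeatedly add the cosine/sine pair that most reduces the residual MSE,
    stopping when the best relative reduction drops below `tol` (i.e. the
    term fits little more than white noise would)."""
    cols = [np.ones_like(t)]                  # start with a DC term
    chosen = []
    resid_mse = float(np.mean(y**2))
    for _ in range(max_terms):
        best_f, best_mse = None, None
        for f in candidate_freqs:
            if f in chosen:
                continue
            A = np.column_stack(cols + [np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            mse = float(np.mean((y - A @ coef)**2))
            if best_mse is None or mse < best_mse:
                best_f, best_mse = f, mse
        if (resid_mse - best_mse) / resid_mse < tol:
            break
        cols += [np.cos(2*np.pi*best_f*t), np.sin(2*np.pi*best_f*t)]
        chosen.append(best_f)
        resid_mse = best_mse
    return chosen

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200, endpoint=False)
y = np.sin(2*np.pi*3*t) + 0.5*np.cos(2*np.pi*7*t) + 0.1*rng.standard_normal(t.size)
picked = greedy_spectral_fit(t, y, candidate_freqs=range(1, 11))
```

    The highest-energy component (3 Hz) is modelled first, then the weaker 7 Hz term; further candidates fail the threshold and are treated as noise, which mirrors how FOS separates low-frequency vehicle dynamics and sensor errors from wideband noise.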

  7. Increased Genomic Prediction Accuracy in Wheat Breeding Through Spatial Adjustment of Field Trial Data

    PubMed Central

    Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav

    2013-01-01

    In crop breeding, interest in predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct the trends observed in the field. A mixed model using moving means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are efficient for predicting traits, and that correction of spatial variation is a crucial ingredient to increase prediction accuracy in genomic selection models. PMID:24082033
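    The moving-means covariate mentioned in this abstract averages the phenotypes of the plots surrounding each field plot and regresses on that average to strip local spatial trends. A toy sketch (a plain regression stand-in for the mixed model actually used in the study; grid, radius and data are assumptions):

```python
import numpy as np

def moving_mean_covariate(y_grid, radius=1):
    """For each field plot, the mean phenotype of surrounding plots (the plot
    itself excluded): a covariate capturing the local spatial trend."""
    nr, nc = y_grid.shape
    cov = np.empty((nr, nc), dtype=float)
    for i in range(nr):
        for j in range(nc):
            r0, r1 = max(0, i - radius), min(nr, i + radius + 1)
            c0, c1 = max(0, j - radius), min(nc, j + radius + 1)
            block = y_grid[r0:r1, c0:c1]
            cov[i, j] = (block.sum() - y_grid[i, j]) / (block.size - 1)
    return cov

def spatially_adjust(y_grid, radius=1):
    """Regress phenotypes on the moving-mean covariate; return residuals
    re-centred on the grand mean as spatially adjusted phenotypes."""
    y = y_grid.ravel()
    x = moving_mean_covariate(y_grid, radius).ravel()
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (y - X @ beta + y.mean()).reshape(y_grid.shape)

field = np.add.outer(2.0 * np.arange(5), np.zeros(4))  # pure row-wise trend
adjusted = spatially_adjust(field)
```

    A pure field gradient is almost entirely absorbed by the covariate, so what remains is closer to the genetic signal the genomic selection model should be trained on.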

  8. Increased Throwing Accuracy Improves Children's Catching Performance in a Ball-Catching Task from the Movement Assessment Battery (MABC-2)

    PubMed Central

    Dirksen, Tim; De Lussanet, Marc H. E.; Zentgraf, Karen; Slupinski, Lena; Wagner, Heiko

    2016-01-01

    The Movement Assessment Battery for Children (MABC-2) is a functional test for identifying deficits in the motor performance of children. The test contains a ball-catching task that requires the children to catch a self-thrown ball with one hand. As the task can be executed with a variety of different catching strategies, it is assumed that task success can also vary considerably. Even though it is not clear whether the performance merely depends on the catching skills or also to some extent on the throwing skills, the MABC-2 takes into account only the movement outcome. Therefore, the purpose of the current study was to examine (1) to what extent throwing accuracy has an effect on the children's catching performance and (2) to what extent throwing accuracy influences their choice of catching strategy. In line with the test manual, the children's catching performance was quantified on the basis of the number of correctly caught balls. The throwing accuracy and the catching strategy were quantified by applying a kinematic analysis to the ball's trajectory and the hand movements. Based on linear regression analyses, we then investigated the relation between throwing accuracy, catching performance and catching strategy. The results show that increased throwing accuracy is significantly correlated with increased catching performance. Moreover, a higher throwing accuracy is significantly correlated with a longer duration of the hand on the ball's parabola, which indicates that throwing the ball more accurately could enable the children to effectively reduce the requirements on temporal precision. As the children's catching performance and their choice of catching strategy in the ball-catching task of the MABC-2 are substantially determined by their throwing accuracy, the test evaluation should not be based on the movement outcome alone, but should also take into account the children's throwing performance.
Our findings could be of particular value for the

  9. Analysis of the Ionospheric Corrections Accuracy of EGNOS System

    NASA Astrophysics Data System (ADS)

    Prats, X.; Orus, R.; Hernandez-Pajares, M.; Juan, M.; Sanz, J.

    2002-01-01

    Satellite Based Augmentation Systems (SBAS) provide Global Navigation Satellite System (GNSS) users with an extra set of information in order to enhance the accuracy and integrity levels of stand-alone GNSS positioning. The ionosphere is one of the main error components in SBAS. Therefore, the analysis of system performance requires a calibration of the broadcast corrections. In this context, different test methods to analyze the performance of these corrections are presented. The first set of tests involves two of the ionospheric calculations that are applied to the Global Ionospheric Maps (GIM) computed by the IGS Associate Analysis Centers: a TEC TOPEX comparison test and the STEC variations test. The second family of tests provides two very accurate analyses based on large-baseline ambiguity resolution techniques, giving accuracies of about 16 cm of L1 and a few millimeters of L1 in the STEC and double-differenced STEC determinations, respectively. These four analyses have been applied to the EGNOS System Test Bed (ESTB) signal, which is the European SBAS provider.

  10. Accuracy analysis of pointing control system of solar power station

    NASA Technical Reports Server (NTRS)

    Hung, J. C.; Peebles, P. Z., Jr.

    1978-01-01

    The first-phase effort concentrated on defining the minimum basic functions that the retrodirective array must perform, identifying circuits that are capable of satisfying the basic functions, and looking at some of the error sources in the system and how they affect accuracy. The initial effort also examined three methods for generating torques for mechanical antenna control, performed a rough analysis of the flexible body characteristics of the solar collector, and defined a control system configuration for mechanical pointing control of the array.

  11. The effectiveness of FE model for increasing accuracy in stretch forming simulation of aircraft skin panels

    NASA Astrophysics Data System (ADS)

    Kono, A.; Yamada, T.; Takahashi, S.

    2013-12-01

    In the aerospace industry, stretch forming has been used to form the outer surface parts of aircraft, which are called skin panels. Empirical methods have been used to correct the springback by measuring the formed panels. However, such methods are impractical and cost prohibitive. Therefore, there is a need to develop simulation technologies to predict the springback caused by stretch forming [1]. This paper reports the results of a study on the influences of the modeling conditions and parameters on the accuracy of an FE analysis simulating the stretch forming of aircraft skin panels. The effects of the mesh aspect ratio, convergence criteria, and integration points are investigated, and better simulation conditions and parameters are proposed.

  12. The Meta-Analysis of Clinical Judgment Project: Effects of Experience on Judgment Accuracy

    ERIC Educational Resources Information Center

    Spengler, Paul M.; White, Michael J.; Aegisdottir, Stefania; Maugherman, Alan S.; Anderson, Linda A.; Cook, Robert S.; Nichols, Cassandra N.; Lampropoulos, Georgios K.; Walker, Blain S.; Cohen, Genna R.; Rush, Jeffrey D.

    2009-01-01

    Clinical and educational experience is one of the most commonly studied variables in clinical judgment research. Contrary to clinicians' perceptions, clinical judgment researchers have generally concluded that accuracy does not improve with increased education, training, or clinical experience. In this meta-analysis, the authors synthesized…

  13. Design and analysis of a high-accuracy flexure hinge.

    PubMed

    Liu, Min; Zhang, Xianmin; Fatikow, Sergej

    2016-05-01

    This paper designs and analyzes a new kind of flexure hinge obtained by using a topology optimization approach, namely, a quasi-V-shaped flexure hinge (QVFH). Flexure hinges are formed by three segments: the left and right segments with convex shapes and the middle segment with straight line. According to the results of topology optimization, the curve equations of profiles of the flexure hinges are developed by numerical fitting. The in-plane dimensionless compliance equations of the flexure hinges are derived based on Castigliano's second theorem. The accuracy of rotation, which is denoted by the compliance of the center of rotation that deviates from the midpoint, is derived. The equations for evaluating the maximum stresses are also provided. These dimensionless equations are verified by finite element analysis and experimentation. The analytical results are within 8% uncertainty compared to the finite element analysis results and within 9% uncertainty compared to the experimental measurement data. Compared with the filleted V-shaped flexure hinge, the QVFH has a higher accuracy of rotation and better ability of preserving the center of rotation position but smaller compliance. PMID:27250469

  14. Analysis of deformable image registration accuracy using computational modeling.

    PubMed

    Zhong, Hualiang; Kim, Jinkoo; Chetty, Indrin J

    2010-03-01

    Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: one, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, the best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations has been quantitatively evaluated using numerical phantoms. The results show that parameter

  15. Analysis of deformable image registration accuracy using computational modeling

    SciTech Connect

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: one, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, the best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations has been quantitatively evaluated using numerical phantoms. The results show that parameter
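    The iteration-count dependence reported for the Demons algorithm can be made concrete with a minimal 1-D Thirion-style demons sketch (an illustrative toy, not the registration software evaluated in the study; the smoothing kernel and iteration budget are assumptions):

```python
import numpy as np

def demons_1d(fixed, moving, iterations=200, step=1.0):
    """Minimal 1-D demons registration: iteratively push a displacement
    field u so that moving(x + u) matches fixed(x)."""
    idx = np.arange(fixed.size, dtype=float)
    u = np.zeros_like(fixed)
    for _ in range(iterations):
        warped = np.interp(idx + u, idx, moving)
        diff = fixed - warped
        grad = np.gradient(warped)
        denom = grad**2 + diff**2             # Thirion's stabilised demons force
        force = np.where(denom > 1e-12, diff * grad / denom, 0.0)
        u = u + step * force
        # Light smoothing keeps the field regular (stand-in for Gaussian filtering).
        u = np.convolve(u, [0.25, 0.5, 0.25], mode="same")
    return u

xs = np.linspace(-5.0, 5.0, 201)
fixed = np.exp(-xs**2)                        # target image
moving = np.exp(-(xs - 0.5)**2)               # same feature, shifted
u = demons_1d(fixed, moving)
idx = np.arange(fixed.size, dtype=float)
warped = np.interp(idx + u, idx, moving)
err_before = np.abs(fixed - moving).mean()
err_after = np.abs(fixed - warped).mean()
```

    Because the force is proportional to the image gradient, convergence is slow wherever gradients are weak, which is consistent with the low gradient prostate phantom needing far more iterations than the lung CT in the study.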

  16. Nonparametric meta-analysis for diagnostic accuracy studies.

    PubMed

    Zapf, Antonia; Hoyer, Annika; Kramer, Katharina; Kuss, Oliver

    2015-12-20

    Summarizing the information from many studies using a meta-analysis is becoming increasingly important, also in the field of diagnostic studies. The special challenge in meta-analysis of diagnostic accuracy studies is that, in general, sensitivity and specificity are co-primary endpoints. Across the studies both endpoints are correlated, and this correlation has to be considered in the analysis. The standard approach for such a meta-analysis is the bivariate logistic random effects model. An alternative approach is to use marginal beta-binomial distributions for the true positives and the true negatives, linked by copula distributions. In this article, we propose a new, nonparametric approach of analysis, which has greater flexibility with respect to the correlation structure, and always converges. In a simulation study, it becomes apparent that the empirical coverage of all three approaches is in general below the nominal level. Regarding bias, empirical coverage, and mean squared error the nonparametric model is often superior to the standard model, and comparable with the copula model. The three approaches are also applied to two example meta-analyses. PMID:26174020
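    The correlated co-primary endpoints the abstract refers to are the per-study logit-sensitivity and logit-specificity pairs. A small sketch of the pre-processing common to all three modelling approaches (the toy counts and the 0.5 continuity correction are illustrative assumptions, not data from the article):

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def study_endpoints(tp, fn, tn, fp, correction=0.5):
    """Per-study sensitivity and specificity on the logit scale, with a
    continuity correction for zero cells (a common pre-processing step)."""
    tp, fn, tn, fp = (np.asarray(a, dtype=float) + correction
                      for a in (tp, fn, tn, fp))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return logit(sens), logit(spec)

# Toy 2x2 counts for four studies: (TP, FN, TN, FP)
tp = [45, 30, 60, 20]; fn = [5, 10, 15, 5]
tn = [80, 70, 50, 90]; fp = [20, 10, 25, 10]
lsens, lspec = study_endpoints(tp, fn, tn, fp)
# The across-study association of these two endpoints is exactly what the
# bivariate random-effects model (and its copula and nonparametric rivals)
# must account for.
rho = np.corrcoef(lsens, lspec)[0, 1]
```

    Pooling sensitivity and specificity separately would ignore this correlation; the bivariate, copula and nonparametric approaches differ mainly in how flexibly they model it.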

  17. Increased prediction accuracy in wheat breeding trials using a marker × environment interaction genomic selection model.

    PubMed

    Lopez-Cruz, Marco; Crossa, Jose; Bonnett, David; Dreisigacker, Susanne; Poland, Jesse; Jannink, Jean-Luc; Singh, Ravi P; Autrique, Enrique; de los Campos, Gustavo

    2015-04-01

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates of selection. Originally, these models were developed without considering genotype × environment interaction (G×E). Several authors have proposed extensions of the single-environment GS model that accommodate G×E using either covariance functions or environmental covariates. In this study, we model G×E using a marker × environment interaction (M×E) GS model; the approach is conceptually simple and can be implemented with existing GS software. We discuss how the model can be implemented by using an explicit regression of phenotypes on markers or using covariance structures (a genomic best linear unbiased prediction-type model). We used the M×E model to analyze three CIMMYT wheat data sets (W1, W2, and W3), where more than 1000 lines were genotyped using genotyping-by-sequencing and evaluated at CIMMYT's research station in Ciudad Obregon, Mexico, under simulated environmental conditions that covered different irrigation levels, sowing dates and planting systems. We compared the M×E model with a stratified (i.e., within-environment) analysis and with a standard (across-environment) GS model that assumes that effects are constant across environments (i.e., ignoring G×E). The prediction accuracy of the M×E model was substantially greater than that of an across-environment analysis that ignores G×E. Depending on the prediction problem, the M×E model had either similar or greater levels of prediction accuracy than the stratified analyses. The M×E model decomposes marker effects and genomic values into components that are stable across environments (main effects) and others that are environment-specific (interactions). Therefore, in principle, the interaction model could shed light on which variants have effects that are stable across environments and which ones are responsible for G×E. The data set and the scripts required to reproduce the analysis are

  18. Increased Prediction Accuracy in Wheat Breeding Trials Using a Marker × Environment Interaction Genomic Selection Model

    PubMed Central

    Lopez-Cruz, Marco; Crossa, Jose; Bonnett, David; Dreisigacker, Susanne; Poland, Jesse; Jannink, Jean-Luc; Singh, Ravi P.; Autrique, Enrique; de los Campos, Gustavo

    2015-01-01

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates of selection. Originally, these models were developed without considering genotype × environment interaction (G×E). Several authors have proposed extensions of the single-environment GS model that accommodate G×E using either covariance functions or environmental covariates. In this study, we model G×E using a marker × environment interaction (M×E) GS model; the approach is conceptually simple and can be implemented with existing GS software. We discuss how the model can be implemented by using an explicit regression of phenotypes on markers or using covariance structures (a genomic best linear unbiased prediction-type model). We used the M×E model to analyze three CIMMYT wheat data sets (W1, W2, and W3), where more than 1000 lines were genotyped using genotyping-by-sequencing and evaluated at CIMMYT’s research station in Ciudad Obregon, Mexico, under simulated environmental conditions that covered different irrigation levels, sowing dates and planting systems. We compared the M×E model with a stratified (i.e., within-environment) analysis and with a standard (across-environment) GS model that assumes that effects are constant across environments (i.e., ignoring G×E). The prediction accuracy of the M×E model was substantially greater than that of an across-environment analysis that ignores G×E. Depending on the prediction problem, the M×E model had either similar or greater levels of prediction accuracy than the stratified analyses. The M×E model decomposes marker effects and genomic values into components that are stable across environments (main effects) and others that are environment-specific (interactions). Therefore, in principle, the interaction model could shed light on which variants have effects that are stable across environments and which ones are responsible for G×E. The data set and the scripts required to reproduce the analysis
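    The explicit-regression formulation of the M×E decomposition, a shared main-effect block of marker columns plus one interaction block per environment, can be sketched directly. This toy uses plain ridge regression as a crude stand-in for the GBLUP-type mixed model; the marker matrix, environments and effect sizes are invented for illustration:

```python
import numpy as np

def mxe_design(X, env):
    """Marker x environment design matrix: a main-effect block shared across
    environments, plus one interaction block per environment that is non-zero
    only on that environment's rows."""
    n, p = X.shape
    envs = sorted(set(env))
    inter = np.zeros((n, p * len(envs)))
    for k, e in enumerate(envs):
        rows = [i for i, ei in enumerate(env) if ei == e]
        inter[np.ix_(rows, list(range(k * p, (k + 1) * p)))] = X[rows]
    return np.hstack([X, inter])

def ridge_fit(Z, y, lam=0.01):
    """Ridge estimate (a simple stand-in for the mixed model in the paper)."""
    return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)

# Toy data: 6 lines x 4 markers observed in 2 environments; marker 2's effect
# exists only in environment 1 (a G x E signal).
X = np.array([[1, 0, 1, 0], [0, 1, 1, 1], [1, 1, 0, 0],
              [0, 0, 1, 1], [1, 0, 0, 1], [0, 1, 1, 0]], dtype=float)
env = [0, 0, 0, 1, 1, 1]
b_main = np.array([1.0, -0.5, 0.0, 0.3])
b_env1 = np.array([0.0, 0.0, 0.8, 0.0])
y = X @ b_main + np.where(np.array(env) == 1, X @ b_env1, 0.0)
Z = mxe_design(X, env)
beta = ridge_fit(Z, y)
```

    The fitted coefficient vector splits into the stable main effects (first block) and environment-specific deviations (remaining blocks), which is how the model can, in principle, indicate which variants drive G×E.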

  19. Increasing cutaneous afferent feedback improves proprioceptive accuracy at the knee in patients with sensory ataxia.

    PubMed

    Macefield, Vaughan G; Norcliffe-Kaufmann, Lucy; Goulding, Niamh; Palma, Jose-Alberto; Fuente Mora, Cristina; Kaufmann, Horacio

    2016-02-01

    Hereditary sensory and autonomic neuropathy type III (HSAN III) features disturbed proprioception and a marked ataxic gait. We recently showed that joint angle matching error at the knee is positively correlated with the degree of ataxia. Using intraneural microelectrodes, we also documented that these patients lack functional muscle spindle afferents but have preserved large-diameter cutaneous afferents, suggesting that patients with better proprioception may be relying more on proprioceptive cues provided by tactile afferents. We tested the hypothesis that enhancing cutaneous sensory feedback by stretching the skin at the knee joint using unidirectional elasticity tape could improve proprioceptive accuracy in patients with a congenital absence of functional muscle spindles. Passive joint angle matching at the knee was used to assess proprioceptive accuracy in 25 patients with HSAN III and 9 age-matched control subjects, with and without taping. Angles of the reference and indicator knees were recorded with digital inclinometers and the absolute error, gradient, and correlation coefficient between the two sides calculated. Patients with HSAN III performed poorly on the joint angle matching test [mean matching error 8.0 ± 0.8° (±SE); controls 3.0 ± 0.3°]. Following application of tape bilaterally to the knee in an X-shaped pattern, proprioceptive performance improved significantly in the patients (mean error 5.4 ± 0.7°) but not in the controls (3.0 ± 0.2°). Across patients, but not controls, significant increases in gradient and correlation coefficient were also apparent following taping. We conclude that taping improves proprioception at the knee in HSAN III, presumably via enhanced sensory feedback from the skin. PMID:26655817
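    The three outcome measures used in this study, absolute matching error, gradient, and correlation coefficient between the reference and indicator knee angles, are straightforward to compute. A sketch with invented angle data (the numbers are illustrative, not the patients' measurements):

```python
import numpy as np

def matching_metrics(reference, indicator):
    """Passive joint-angle matching summary: mean absolute error, the slope
    (gradient) of indicator on reference, and their correlation coefficient."""
    reference = np.asarray(reference, dtype=float)
    indicator = np.asarray(indicator, dtype=float)
    abs_err = np.mean(np.abs(indicator - reference))
    slope = np.polyfit(reference, indicator, 1)[0]
    r = np.corrcoef(reference, indicator)[0, 1]
    return abs_err, slope, r

ref = [10, 20, 30, 40, 50, 60]   # reference knee angles (degrees)
ind = [12, 19, 33, 38, 54, 61]   # indicator-leg matches (degrees)
abs_err, slope, r = matching_metrics(ref, ind)
```

    A gradient near 1 and a correlation near 1 indicate that the indicator leg tracks the reference across the range of motion; the study reports that taping raised both in the patients, alongside the drop in absolute error.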

  20. Molecular Isotopic Distribution Analysis (MIDAs) with adjustable mass accuracy.

    PubMed

    Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo

    2014-01-01

    In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html. PMID:24254576
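    The polynomial approach named in the abstract amounts to repeated polynomial multiplication (convolution) of per-atom isotope abundance vectors, indexed by nominal-mass offset. A minimal illustrative sketch follows; the abundance constants and the naive one-convolution-per-atom loop are for illustration only and are not MIDAs's actual implementation:

```python
import numpy as np

# Isotope abundances by nominal-mass offset (standard IUPAC values,
# used here purely as illustrative constants).
ISOTOPES = {
    "C": [0.9893, 0.0107],            # 12C, 13C
    "H": [0.999885, 0.000115],        # 1H, 2H
    "O": [0.99757, 0.00038, 0.00205], # 16O, 17O, 18O
}

def isotopic_distribution(formula):
    """Coarse-grained isotopic distribution: convolve one abundance
    polynomial per atom. Index i holds the relative abundance of the
    monoisotopic-mass-plus-i-neutrons peak."""
    dist = np.array([1.0])
    for element, count in formula.items():
        for _ in range(count):
            dist = np.convolve(dist, ISOTOPES[element])
    return dist

# Glucose, C6H12O6: monoisotopic, M+1, M+2 relative abundances.
d = isotopic_distribution({"C": 6, "H": 12, "O": 6})
print(np.round(d[:3], 4))
```

    A production tool avoids the per-atom loop (e.g. by exponentiation-by-squaring of the element polynomials or an FFT), which is where the efficiency claims in the abstract come from.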

  2. Convective Weather Forecast Accuracy Analysis at Center and Sector Levels

    NASA Technical Reports Server (NTRS)

    Wang, Yao; Sridhar, Banavar

    2010-01-01

    This paper presents a detailed convective forecast accuracy analysis at the center and sector levels. The study aims to provide more meaningful forecast verification measures to the aviation community, as well as to obtain useful information leading to improvements in weather translation capacity models. In general, the vast majority of forecast verification efforts over the past decades have focused on calculating traditional standard verification measure scores from forecast and observation data analyzed onto grids. These verification measures, based on binary classification, have been applied in quality assurance of weather forecast products at the national level for many years. Our research focuses on forecasts at the center and sector levels. We first calculate the standard forecast verification measure scores for en-route air traffic centers and sectors, then conduct the forecast validation analysis and related verification measures for weather intensities and locations at the center and sector levels. An approach to improve the prediction of sector weather coverage by multiple sector forecasts is then developed. The severe weather intensity assessment was carried out using the correlations between forecast and actual weather observation airspace coverage. The weather forecast accuracy on horizontal location was assessed by examining the forecast errors. The improvement in prediction of weather coverage was determined by the correlation between actual sector weather coverage and prediction. The study used observed and forecasted Convective Weather Avoidance Model (CWAM) data collected from June to September 2007. CWAM zero-minute forecast data with aircraft avoidance probabilities of 60% and 80% are used as the actual weather observation. All forecast measurements are based on 30-minute, 60-minute, 90-minute, and 120-minute forecasts with the same avoidance probabilities. The forecast accuracy analysis for times under one hour showed that the errors in

  3. A method of increasing test range and accuracy of bioindicators: Geobacillus stearothermophilus spores.

    PubMed

    Lundahl, Gunnel

    2003-01-01

    Spores of Geobacillus stearothermophilus are very sensitive to changes in temperature. When validating sterilizing processes, the most common bioindicator (BI) is spores of Geobacillus stearothermophilus ATCC12980 and ATCC7953, with about 10⁶ spores/BI and a D121-value of about 2 minutes in water. Because these spores of Geobacillus stearothermophilus do not survive at an F0-value above 12 minutes, it has not been possible to evaluate the agreement between the biological F-value (F(BIO)) and physical measurements (time and temperature) when the physical F0-value exceeds that limit. However, it has been proven that glycerin substantially increases the heat resistance of the spores, and it is possible to utilize that property when manufacturing BIs suitable for use in processes with longer sterilization times or higher temperatures (above 121 degrees C). By the method described, it is possible to make use of the sensitivity and durability of Geobacillus stearothermophilus spores, since glycerin increases both test range and accuracy. Experience from years of development and validation work with the use of the highly sensitive glycerin-water-spore-suspension sensor (GWS-sensor) is reported. Validation of the steam sterilization process at high temperature has been possible with the use of GWS-sensors. It has also been shown that the spores in suspension keep their characteristics for a period of 19 months when stored cold (8 degrees C). PMID:14558699
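    The F0-values discussed above follow the standard equivalent-lethality definition: equivalent minutes of exposure at 121.1 °C, with z = 10 °C. A small illustrative calculator (not taken from the paper):

```python
def f0_value(temps_c, dt_min=1.0, z=10.0, t_ref=121.1):
    """Physical F0: sum of lethal-rate contributions, one temperature
    sample per dt_min minutes, referenced to 121.1 degC with z = 10 degC."""
    return sum(10 ** ((t - t_ref) / z) * dt_min for t in temps_c)

# Twelve minutes at exactly 121.1 degC gives F0 = 12.0 -- the survival
# limit for ordinary G. stearothermophilus BIs cited in the abstract.
print(f0_value([121.1] * 12))  # 12.0
```

    The exponential dependence on temperature is why a sterilization run above 121 °C quickly exceeds F0 = 12 minutes, motivating the glycerin-hardened spores described here.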

  4. Analysis of instrumentation error effects on the identification accuracy of aircraft parameters

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.

    1972-01-01

    An analytical investigation is presented of the effect of unmodeled measurement system errors on the accuracy of aircraft stability and control derivatives identified from flight test data. Such error sources include biases, scale factor errors, instrument position errors, misalignments, and instrument dynamics. Two techniques (ensemble analysis and simulated data analysis) are formulated to determine the quantitative variations to the identified parameters resulting from the unmodeled instrumentation errors. The parameter accuracy that would result from flight tests of the F-4C aircraft with typical quality instrumentation is determined using these techniques. It is shown that unmodeled instrument errors can greatly increase the uncertainty in the value of the identified parameters. General recommendations are made of procedures to be followed to ensure that the measurement system associated with identifying stability and control derivatives from flight test provides sufficient accuracy.

  5. Accuracy Analysis on Large Blocks of High Resolution Images

    NASA Technical Reports Server (NTRS)

    Passini, Richardo M.

    2007-01-01

    Although high-frequency attitude effects are removed at the time of basic image generation, low-frequency attitude (yaw) effects are still present in the form of affinity/angular affinity. They are effectively removed by additional parameters. Bundle block adjustment based on properly weighted ephemeris/attitude quaternions (BBABEQ) is not enough to remove the systematic effects. Moreover, due to the narrow FOV of the HRSI, position and attitude are highly correlated, making it almost impossible to separate and remove their systematic effects without extending the geometric model (self-calibration). The systematic effects become evident in the increase of accuracy (in terms of RMSE at GCPs) for looser and more relaxed ground control, at the expense of large and strong block deformation with large residuals at check points. Systematic errors are mostly freely distributed and their effects propagated all over the block.

  6. Oxytocin increases bias, but not accuracy, in face recognition line-ups.

    PubMed

    Bate, Sarah; Bennetts, Rachel; Parris, Benjamin A; Bindemann, Markus; Udale, Robert; Bussunt, Amanda

    2015-07-01

    Previous work indicates that intranasal inhalation of oxytocin improves face recognition skills, raising the possibility that it may be used in security settings. However, it is unclear whether oxytocin directly acts upon the core face-processing system itself or indirectly improves face recognition via affective or social salience mechanisms. In a double-blind procedure, 60 participants received either an oxytocin or placebo nasal spray before completing the One-in-Ten task, a standardized test of unfamiliar face recognition containing target-present and target-absent line-ups. Participants in the oxytocin condition outperformed those in the placebo condition on target-present trials, yet were more likely to make false-positive errors on target-absent trials. Signal detection analyses indicated that oxytocin induced a more liberal response bias, rather than increasing accuracy per se. These findings support a social salience account of the effects of oxytocin on face recognition and indicate that oxytocin may impede face recognition in certain scenarios. PMID:25433464
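    The distinction the signal detection analysis draws here, sensitivity (d') versus response bias (criterion c), can be sketched as follows. The hit/false-alarm rates below are made-up numbers chosen to show equal sensitivity with opposite bias, not the study's data:

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Return (d', c): d' is discrimination sensitivity, c is response
    bias; c < 0 indicates a liberal bias (more 'present' responses)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

# More hits AND more false alarms: identical d', opposite criterion.
d1, c1 = sdt_indices(0.8, 0.3)  # liberal responder
d2, c2 = sdt_indices(0.7, 0.2)  # conservative responder
print(round(d1, 2), round(d2, 2))  # the two d' values are equal
print(round(c1, 2), round(c2, 2))  # c1 < 0 < c2
```

    This is exactly the pattern reported above: oxytocin shifted c (more target-present responses, hence more hits and more false alarms) without raising d'.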

  7. Nationwide forestry applications program. Analysis of forest classification accuracy

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Mead, R. A.; Oderwald, R. G.; Heinen, J. (Principal Investigator)

    1981-01-01

    The development of LANDSAT classification accuracy assessment techniques, and of a computerized system for assessing wildlife habitat from land cover maps are considered. A literature review on accuracy assessment techniques and an explanation for the techniques development under both projects are included along with listings of the computer programs. The presentations and discussions at the National Working Conference on LANDSAT Classification Accuracy are summarized. Two symposium papers which were published on the results of this project are appended.

  8. Geographic stacking: Decision fusion to increase global land cover map accuracy

    NASA Astrophysics Data System (ADS)

    Clinton, Nicholas; Yu, Le; Gong, Peng

    2015-05-01

    Combining multiple classifier outputs is an established sub-discipline in data mining, referred to as "stacking," "ensemble classification," or "meta-learning." Here we describe how stacking of geographically allocated classifications can create a map composite of higher accuracy than any of the individual classifiers. We used both voting algorithms and trainable classifiers with a set of validation data to combine individual land cover maps. We describe the generality of this setup in terms of existing algorithms and accuracy assessment procedures. This method has the advantage of not requiring posterior probabilities or levels of support for predicted class labels. We demonstrate the technique using Landsat-based 30-meter land cover maps, the highest-resolution globally available product of this kind. We used globally distributed validation samples to composite the maps and compute accuracy. We show that geographic stacking can improve individual map accuracy by up to 6.6%. The voting methods can also achieve higher accuracy than the best of the input classifications. Accuracies from different classifiers, input data, and output types are compared. The results are illustrated on a Landsat scene in California, USA. The compositing technique described here has broad applicability in remote sensing based map production and geographic classification.
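    The simplest of the voting schemes mentioned above is a per-pixel plurality vote over co-registered label maps. A minimal sketch, with toy 2×2 arrays standing in for real land cover maps (the paper's trainable-classifier variants are not shown):

```python
import numpy as np

def majority_vote_composite(maps):
    """Combine co-registered integer class-label maps by per-pixel
    plurality vote. Ties resolve to the lowest label (argmax behavior)."""
    stack = np.stack(maps)                       # (n_maps, rows, cols)
    n_classes = int(stack.max()) + 1
    # Count votes per class at every pixel, then take the plurality class.
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

m1 = np.array([[0, 1], [2, 2]])
m2 = np.array([[0, 1], [1, 2]])
m3 = np.array([[0, 2], [1, 2]])
composite = majority_vote_composite([m1, m2, m3])
print(composite)  # composite is [[0, 1], [1, 2]]
```

    Note the pixel where m1 says class 2 but m2 and m3 say class 1: the composite follows the majority, which is how stacking can beat every individual input map.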

  9. DESIGN AND ANALYSIS FOR THEMATIC MAP ACCURACY ASSESSMENT: FUNDAMENTAL PRINCIPLES

    EPA Science Inventory

    Before being used in scientific investigations and policy decisions, thematic maps constructed from remotely sensed data should be subjected to a statistically rigorous accuracy assessment. The three basic components of an accuracy assessment are: 1) the sampling design used to s...

  10. Accuracy of 3D scanners in tooth mark analysis.

    PubMed

    Molina, Ana; Martin-de-las-Heras, Stella

    2015-01-01

    The objective of this study was to compare the accuracy of contact and laser 3D scanners in tooth mark analysis. Ten dental casts were scanned with both 3D scanners. Seven linear measurements were made from the 3D images of dental casts and biting edges generated with DentalPrint© software (University of Granada, Granada, Spain). The uncertainty value for contact 3D scanning was 0.833 mm for the upper dental cast and 0.660 mm for the lower cast; similar uncertainty values were found for 3D laser scanning. Slightly higher uncertainty values were obtained for the 3D biting edges generated. The uncertainty values for single measurements ranged from 0.1 to 0.3 mm, with the exception of the intercanine distance, for which higher values were obtained. Knowledge of the error rate in the 3D scanning of dental casts and biting edges is especially relevant for application in practical forensic cases. PMID:25388960

  11. Predictive accuracy of population viability analysis in conservation biology.

    PubMed

    Brook, B W; O'Grady, J J; Chapman, A P; Burgman, M A; Akçakaya, H R; Frankham, R

    2000-03-23

    Population viability analysis (PVA) is widely applied in conservation biology to predict extinction risks for threatened species and to compare alternative options for their management. It can also be used as a basis for listing species as endangered under World Conservation Union criteria. However, there is considerable scepticism regarding the predictive accuracy of PVA, mainly because of a lack of validation in real systems. Here we conducted a retrospective test of PVA based on 21 long-term ecological studies--the first comprehensive and replicated evaluation of the predictive powers of PVA. Parameters were estimated from the first half of each data set and the second half was used to evaluate the performance of the model. Contrary to recent criticisms, we found that PVA predictions were surprisingly accurate. The risk of population decline closely matched observed outcomes, there was no significant bias, and population size projections did not differ significantly from reality. Furthermore, the predictions of the five PVA software packages were highly concordant. We conclude that PVA is a valid and sufficiently accurate tool for categorizing and managing endangered species. PMID:10746724

  12. Analysis and improvement of accuracy, sensitivity, and resolution of the coherent gradient sensing method.

    PubMed

    Dong, Xuelin; Zhang, Changxing; Feng, Xue; Duan, Zhiyin

    2016-06-10

    The coherent gradient sensing (CGS) method, one kind of shear interferometry sensitive to surface slope, has been applied to full-field curvature measuring for decades. However, its accuracy, sensitivity, and resolution have not been studied clearly. In this paper, we analyze the accuracy, sensitivity, and resolution for the CGS method based on the derivation of its working principle. The results show that the sensitivity is related to the grating pitch and distance, and the accuracy and resolution are determined by the wavelength of the laser beam and the diameter of the reflected beam. The sensitivity is proportional to the ratio of grating distance to its pitch, while the accuracy will decline as this ratio increases. In addition, we demonstrate that using phase gratings as the shearing element can improve the interferogram and enhance accuracy, sensitivity, and resolution. The curvature of a spherical reflector is measured by CGS with Ronchi gratings and phase gratings under different experimental parameters to illustrate this analysis. All of the results are quite helpful for CGS applications. PMID:27409035

  13. Unconscious Reward Cues Increase Invested Effort, but Do Not Change Speed-Accuracy Tradeoffs

    ERIC Educational Resources Information Center

    Bijleveld, Erik; Custers, Ruud; Aarts, Henk

    2010-01-01

    While both conscious and unconscious reward cues enhance effort to work on a task, previous research also suggests that conscious rewards may additionally affect speed-accuracy tradeoffs. Based on this idea, two experiments explored whether reward cues that are presented above (supraliminal) or below (subliminal) the threshold of conscious…

  14. Systematic Review and Meta-Analysis of Studies Evaluating Diagnostic Test Accuracy: A Practical Review for Clinical Researchers-Part II. Statistical Methods of Meta-Analysis.

    PubMed

    Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi; Park, Seong Ho

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it is required to simultaneously analyze a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107

  15. Using student-managed interventions to increase homework completion and accuracy

    PubMed Central

    Olympia, Daniel E.; Sheridan, Susan M.; Jenson, William R.; Andrews, Debra

    1994-01-01

    We examined the effectiveness of self-managed individual and group contingency procedures in improving the completion and accuracy rates of daily mathematics homework assignments. A group of sixth-grade students having homework difficulties in mathematics were selected for the study. There was substantial improvement in the amount of homework completed over baseline for a majority of the students, whereas the results for accuracy were mixed. Students who participated in the self-management training made significant gains on standardized measures of academic achievement and curriculum-based measures of classroom performance. Parents also reported significantly fewer problems associated with homework completion following the intervention. Students who were allowed to select their own performance goals made superior improvements in the number of homework assignments returned compared to students who were given a specified goal by the classroom teacher. Parents, subjects, and the classroom teacher responded positively on consumer satisfaction measures following termination of the study. PMID:16795827

  16. Accuracy Analysis of a Box-wing Theoretical SRP Model

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui

    2016-07-01

    For the Beidou satellite navigation system (BDS), a high-accuracy SRP model is necessary for high-precision applications, especially with the establishment of the global BDS in the future; the BDS accuracy for broadcast ephemeris needs to be improved. Therefore, a box-wing theoretical SRP model with fine structure, including a conical shadow factor for the earth and moon, was established. We verified this SRP model with the GPS Block IIF satellites, using data from the PRN 1, 24, 25, and 27 satellites. The results show that the physical SRP model for POD and forecasting for GPS IIF satellites has higher accuracy with respect to the Bern empirical model. The 3D RMS of orbit is about 20 centimeters. The POD accuracy for both models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day, and 7-day orbit predictions; the longer the prediction arc length, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day, and 7-day arc lengths are 0.4 m, 2.0 m, and 10.0 m respectively, versus 0.9 m, 5.5 m, and 30 m with the Bern empirical model. We apply this approach to the BDS and derive an SRP model for the Beidou satellites, which we then test and verify with one month of Beidou data. Initial results show the model is good but needs more data for verification and improvement. The orbit residual RMS is similar to that with our empirical force model, which only estimates the forces in the along-track and across-track directions and a y-bias. But the orbit overlap and SLR observation evaluations show some improvement: the remaining empirical force is reduced significantly for the present Beidou constellation.

  17. Increasing accuracy in the assessment of motion sickness: A construct methodology

    NASA Technical Reports Server (NTRS)

    Stout, Cynthia S.; Cowings, Patricia S.

    1993-01-01

    The purpose is to introduce a new methodology that should improve the accuracy of the assessment of motion sickness. This construct methodology utilizes both subjective reports of motion sickness and objective measures of physiological correlates to assess motion sickness. Current techniques and methods used in the framework of a construct methodology are inadequate. Current assessment techniques for diagnosing motion sickness and space motion sickness are reviewed, and attention is called to the problems with the current methods. Further, principles of psychophysiology that, when applied, will probably resolve some of these problems are described in detail.

  18. A Model Based Approach to Increase the Part Accuracy in Robot Based Incremental Sheet Metal Forming

    SciTech Connect

    Meier, Horst; Laurischkat, Roman; Zhu Junhong

    2011-01-17

    One main influence on the dimensional accuracy in robot-based incremental sheet metal forming results from the compliance of the involved robot structures. Compared to conventional machine tools, the low stiffness of the robot's kinematics results in a significant deviation from the planned tool path and therefore in a shape of insufficient quality. To predict and compensate for these deviations offline, a model-based approach has been developed, consisting of a finite element model to simulate the sheet forming and a multi-body system to model the compliant robot structure. This paper describes the implementation and experimental verification of the multi-body system model and its included compensation method.

  19. Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)

    SciTech Connect

    Polly, B.

    2011-09-01

    This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.

  20. Increasing the precision and accuracy of top-loading balances:  application of experimental design.

    PubMed

    Bzik, T J; Henderson, P B; Hobbs, J P

    1998-01-01

    The traditional method of estimating the weight of multiple objects is to obtain the weight of each object individually. We demonstrate that the precision and accuracy of these estimates can be improved by using a weighing scheme in which multiple objects are simultaneously on the balance. The resulting system of linear equations is solved to yield the weight estimates for the objects. Precision and accuracy improvements can be made by using a weighing scheme without requiring any more weighings than the number of objects when a total of at least six objects are to be weighed. It is also necessary that multiple objects can be weighed with about the same precision as that obtained with a single object, and that the scale bias remains relatively constant over the set of weighings. Simulated and empirical examples are given for a system of eight objects in which up to five objects can be weighed simultaneously. A modified Plackett-Burman weighing scheme yields a 25% improvement in precision over the traditional method and implicitly removes the scale bias from seven of the eight objects. Applications of this novel use of experimental design techniques are shown to have potential commercial importance for quality control methods that rely on the mass change rate of an object. PMID:21644600
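    The idea of weighing several objects at once and solving the resulting linear system can be sketched as follows. The 4-object, 3-at-a-time design below is illustrative only, not the paper's modified Plackett-Burman scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([5.0, 3.2, 7.1, 4.4])  # unknown object weights (grams)

# Hypothetical design matrix: each row is one weighing; a 1 means that
# object is on the pan. Four weighings for four objects, three per pan.
X = np.array([[1, 1, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]], dtype=float)

sigma = 0.01                                    # per-weighing noise
y = X @ true_w + rng.normal(0, sigma, size=4)   # observed pan readings

# Solve the linear system for the individual weight estimates.
# Cov(est) = sigma^2 * (X^T X)^-1, whose diagonal is 7/9 here -- each
# estimate is more precise than one direct weighing at equal noise.
est = np.linalg.solve(X, y)
print(np.round(est, 2))
```

    The precision gain comes purely from the design: each object contributes to three readings instead of one, so the per-weighing noise partially averages out, with no extra weighings required.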

  1. Vestibular and Oculomotor Assessments May Increase Accuracy of Subacute Concussion Assessment.

    PubMed

    McDevitt, J; Appiah-Kubi, K O; Tierney, R; Wright, W G

    2016-08-01

    In this study, we collected and analyzed preliminary data for the internal consistency of a new condensed model to assess vestibular and oculomotor impairments following a concussion. We also examined this model's ability to discriminate concussed athletes from healthy controls. Each participant was tested in a concussion assessment protocol that consisted of the Neurocom Sensory Organization Test (SOT), the Balance Error Scoring System exam, and a series of 8 vestibular and oculomotor assessments. Of these 10 assessments, only the SOT, near point convergence, and the signs and symptoms (S/S) scores collected following optokinetic stimulation, the horizontal eye saccades test, and the gaze stabilization test were significantly correlated with health status, and were used in further analyses. Multivariate logistic regression for binary outcomes was employed, and the beta weights were used to calculate the area under the receiver operating characteristic curve (AUC). The best model supported by our findings suggests that an exam consisting of the 4 SOT sensory ratios, near point convergence, and the optokinetic stimulation S/S score is sensitive in discriminating concussed athletes from healthy controls (accuracy=98.6%, AUC=0.983). However, an even more parsimonious model consisting of only the optokinetic stimulation and gaze stabilization test S/S scores and near point convergence was found to be a sensitive model for discriminating concussed athletes from healthy controls (accuracy=94.4%, AUC=0.951) without the need for expensive equipment. Although more investigation is needed, these findings will be helpful to health professionals, potentially providing them with a sensitive and specific battery of simple vestibular and oculomotor assessments for concussion management. PMID:27176886

  2. Radiometric and Geometric Accuracy Analysis of Rasat Pan Imagery

    NASA Astrophysics Data System (ADS)

    Kocaman, S.; Yalcin, I.; Guler, M.

    2016-06-01

    RASAT is the second Turkish Earth observation satellite, launched in 2011. It operates on the pushbroom principle and acquires panchromatic and MS images with 7.5 m and 15 m resolutions, respectively. The swath width of the sensor is 30 km. The main aim of this study is to analyse the radiometric and geometric quality of RASAT images. A systematic validation approach for the RASAT imagery and its products is being applied. A RASAT image pair acquired over Kesan city in the Edirne province of Turkey is used for the investigations. The raw RASAT data (L0) are processed by the Turkish Space Agency (TUBITAK-UZAY) to produce higher-level image products. The image products include radiometrically processed (L1), georeferenced (L2), and orthorectified (L3) data, as well as pansharpened images. The image quality assessments include visual inspections, noise, MTF, and histogram analyses. The geometric accuracy assessment results are only preliminary, and the assessment is performed using the raw images. The geometric accuracy potential is investigated using 3D ground control points extracted from road intersections, which were measured manually in stereo from aerial images with 20 cm resolution and accuracy. The initial results of the study, which were obtained using one RASAT panchromatic image pair, are presented in this paper.

  3. Utility of an Algorithm to Increase the Accuracy of Medication History in an Obstetrical Setting

    PubMed Central

    Corbel, Aline; Baud, David; Chaouch, Aziz; Beney, Johnny; Csajka, Chantal; Panchaud, Alice

    2016-01-01

    Background: In an obstetrical setting, inaccurate medication histories at hospital admission may result in failure to identify potentially harmful treatments for patients and/or their fetus(es). Methods: This prospective study was conducted to assess average concordance rates between (1) a medication list obtained with a one-page structured medication history algorithm developed for the obstetrical setting and (2) the medication list reported in medical records and obtained by open-ended questions based on standard procedures. Both lists were converted into concordance rates using a best possible medication history approach as the reference (information obtained by patients, prescribers and community pharmacists' interviews). Results: The algorithm-based method obtained a higher average concordance rate than the standard method, with respectively 90.2% [CI95% 85.8–94.3] versus 24.6% [CI95% 15.3–34.4] concordance rates (p<0.01). Conclusion: Our algorithm-based method strongly enhanced the accuracy of the medication history in our obstetric population, without using substantial resources. Its implementation is an effective first step to the medication reconciliation process, which has been recognized as a very important component of patients' drug safety. PMID:26999743

  4. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different than theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
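    The cell-center versus cell-integration distinction above can be sketched in one dimension; this is an illustrative Gaussian-only sketch with the grid resolution fixed at one cell width, not the paper's full 2-D circular kernels:

```python
import numpy as np
from math import erf, sqrt

def gaussian_kernel_1d(sigma, radius, method="integrate"):
    """Discretize a zero-mean Gaussian onto unit-width grid cells.

    'center' samples the density at each cell midpoint; 'integrate'
    integrates the density over each cell via the error function. For
    kernels narrow relative to the cell size, midpoint sampling badly
    misallocates mass -- the error source quantified in the abstract."""
    cells = np.arange(-radius, radius + 1)
    if method == "center":
        k = np.exp(-cells.astype(float) ** 2 / (2 * sigma ** 2))
    else:
        cdf = lambda x: 0.5 * (1.0 + erf(x / (sigma * sqrt(2.0))))
        k = np.array([cdf(c + 0.5) - cdf(c - 0.5) for c in cells])
    return k / k.sum()

# sigma well below the grid size: the two discretizations diverge sharply.
kc = gaussian_kernel_1d(0.2, 2, "center")
ki = gaussian_kernel_1d(0.2, 2, "integrate")
print(kc.round(4))
print(ki.round(4))
```

    At σ = 0.2 the cell-center kernel puts essentially all mass in the central cell, while the integrated kernel retains the small but real tail mass; repeated convolution compounds that difference, which is why the abstract reports order-of-magnitude invasion-time errors for small σ.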

  5. Increasing accuracy and throughput in large-scale microsatellite fingerprinting of cacao field germplasm collections

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Microsatellite-based DNA fingerprinting has been increasingly applied in crop genebank management. However, efficiency and cost-saving remain a major challenge for large scale genotyping, even when middle or high throughput genotyping facility is available. In this study we report on increasing the...

  6. Increased accuracy of batch fecundity estimates using oocyte stage ratios in Plectropomus leopardus.

    PubMed

    Carter, A B; Williams, A J; Russ, G R

    2009-08-01

    Using the ratio of the number of migratory nuclei to hydrated oocytes to estimate batch fecundity of common coral trout Plectropomus leopardus increases the time over which samples can be collected and, therefore, increases the sample size available and reduces biases in batch fecundity estimates. PMID:20738569

  7. Increasing accuracy of daily evapotranspiration through synergistic use of MSG and MERIS/AATSR

    NASA Astrophysics Data System (ADS)

    Timmermans, Joris; van der Tol, Christiaan; Su, Zhongbo

    2010-05-01

    Daily evapotranspiration estimates are important in many applications. Evapotranspiration plays a significant role in the water, energy and carbon cycles. Through these cycles evapotranspiration is important for monitoring droughts, managing agricultural irrigation, and weather forecast modeling. Drought levels and irrigation needs can be calculated from evapotranspiration because evapotranspiration estimates give a direct indication of the health and growth rate of crops. The evaporation of soil and open water bodies and the transpiration from plants combine as a lower boundary forcing on the atmosphere, affecting local and regional weather patterns. Evapotranspiration can be estimated using different techniques: ground measurements, hydrological modeling, and remote sensing algorithms. The first two techniques are not suitable for large-scale estimation of evapotranspiration: ground measurements are only valid within a small footprint area, and hydrological modelling requires detailed knowledge of too many processes. The advantage of remote sensing algorithms is that they are capable of estimating evapotranspiration over large scales with a limited number of parameters. In remote sensing a trade-off exists between temporal and spatial resolution: geostationary satellites have high temporal resolution but low spatial resolution, whereas near-polar orbiting satellites have high spatial resolution but low temporal resolution. For example, the SEVIRI sensor on the Meteosat Second Generation (MSG) satellite acquires images every 15 minutes at a resolution of 3 km, whereas the AATSR/MERIS combination on the ENVISAT satellite has a revisit time of several days at a 1 km resolution. Combining the advantages of geostationary and polar-orbiting satellites will greatly improve the accuracy of daily evapotranspiration estimates. Estimating daily evapotranspiration from near-polar orbiting satellites requires a method to…

  8. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating-point numbers - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - representing large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low-resolution (single-tier) double-precision models and similar-cost high-resolution (two-tier) mixed-precision models to produce accurate forecasts of this 'truth' are compared. The high-resolution models outperform the low-resolution ones even when small-scale variables are resolved in half precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new…
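    The effect of storing model variables in reduced precision can be illustrated with a toy single-tier Lorenz '96 sketch (illustrative Python with NumPy; the study's three-tier model and inexact-hardware emulation are not reproduced here): the state is rounded to half precision after every step and compared with a double-precision run.

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """Lorenz '96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def step(x, dt=0.005, precision=np.float64):
    """One Euler step, with the state rounded to the chosen precision to
    mimic storing variables in reduced-precision hardware."""
    x = x + dt * lorenz96_tendency(x)
    return x.astype(precision).astype(np.float64)

rng = np.random.default_rng(0)
x0 = 8.0 + rng.standard_normal(40)
x64, x16 = x0.copy(), x0.copy()
for _ in range(100):
    x64 = step(x64, precision=np.float64)
    x16 = step(x16, precision=np.float16)
diff = float(np.max(np.abs(x64 - x16)))
print(diff)  # deviation between the double- and half-precision runs
```

    Over such a short forecast the half-precision run stays close to the double-precision one, though in a chaotic system the gap grows with lead time.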

  9. Analysis of cost and accuracy of alternative strategies for Enterobacteriaceae identification.

    PubMed

    Robertson, E A; Macks, G C; MacLowry, J D

    1976-04-01

    Analysis of the cost of time and materials required for the identification of Enterobacteriaceae isolates indicated that a conventional 17-tube (20-test) setup costs $7.98 per isolate identified. Using the API 20E, a similar identification cost $3.02. A conventional 7-tube (10-test) setup cost $3.60, whereas the comparable API 20E strategy reduced cost by 30% while increasing the number of isolates identified correctly by 3%. Other strategies using the API 20E or a deoxyribonuclease test were also evaluated for cost and accuracy. PMID:770498

  10. Increasing the accuracy and automation of fractional vegetation cover estimation from digital photographs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The use of automated methods to estimate canopy cover (CC) from digital photographs has increased in recent years given its potential to produce accurate, fast and inexpensive CC measurements. Wide acceptance has been delayed because of the limitations of these methods. This work introduces a novel ...

  11. Repeating a Monologue under Increasing Time Pressure: Effects on Fluency, Complexity, and Accuracy

    ERIC Educational Resources Information Center

    Thai, Chau; Boers, Frank

    2016-01-01

    Studies have shown that learners' task performance improves when they have the opportunity to repeat the task. Conditions for task repetition vary, however. In the 4/3/2 activity, learners repeat a monologue under increasing time pressure. The purpose is to foster fluency, but it has been suggested in the literature that it also benefits other…

  12. High Frequency rTMS over the Left Parietal Lobule Increases Non-Word Reading Accuracy

    ERIC Educational Resources Information Center

    Costanzo, Floriana; Menghini, Deny; Caltagirone, Carlo; Oliveri, Massimiliano; Vicari, Stefano

    2012-01-01

    Increasing evidence in the literature supports the usefulness of Transcranial Magnetic Stimulation (TMS) in studying reading processes. Two brain regions are primarily involved in phonological decoding: the left superior temporal gyrus (STG), which is associated with the auditory representation of spoken words, and the left inferior parietal lobe…

  13. The increase of ultrasound measurements accuracy with the use of two-frequency sounding

    NASA Astrophysics Data System (ADS)

    Shulgina, Yu V.; Soldatov, A. I.; Rozanova, Ya V.; Soldatov, A. A.; Shulgin, E. M.

    2015-04-01

    The article considers a new method for determining the temporal position of a received echo signal. The method consists of successively emitting sounding pulses at two frequencies and analysing the propagation time of the ultrasound to and from the reflector at each frequency. A detailed description of the mathematical tool is presented in the article. This tool allows the authors to decrease the measurement error through the required calculations.

  14. Accuracy of the Parallel Analysis Procedure with Polychoric Correlations

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Li, Feiming; Bandalos, Deborah

    2009-01-01

    The purpose of this study was to investigate the application of the parallel analysis (PA) method for choosing the number of factors in component analysis for situations in which data are dichotomous or ordinal. Although polychoric correlations are sometimes used as input for component analyses, the random data matrices generated for use in PA…

  15. Evaluation of precision and accuracy of selenium measurements in biological materials using neutron activation analysis

    SciTech Connect

    Greenberg, R.R.

    1988-01-01

    In recent years, the accurate determination of selenium in biological materials has become increasingly important in view of the essential nature of this element for human nutrition and its possible role as a protective agent against cancer. Unfortunately, the accurate determination of selenium in biological materials is often difficult for most analytical techniques for a variety of reasons, including interferences, complicated selenium chemistry due to the presence of this element in multiple oxidation states and in a variety of different organic species, stability and resistance to destruction of some of these organo-selenium species during acid dissolution, volatility of some selenium compounds, and potential for contamination. Neutron activation analysis (NAA) can be one of the best analytical techniques for selenium determinations in biological materials for a number of reasons. Currently, precision at the 1% level (1s) and overall accuracy at the 1 to 2% level (95% confidence interval) can be attained at the U.S. National Bureau of Standards (NBS) for selenium determinations in biological materials when counting statistics are not limiting (using the ⁷⁵Se isotope). An example of this level of precision and accuracy is summarized. Achieving this level of accuracy, however, requires strict attention to all sources of systematic error. Precise and accurate results can also be obtained after radiochemical separations.

  16. Bayesian approach increases accuracy when selecting cowpea genotypes with high adaptability and phenotypic stability.

    PubMed

    Barroso, L M A; Teodoro, P E; Nascimento, M; Torres, F E; Dos Santos, A; Corrêa, A M; Sagrilo, E; Corrêa, C C G; Silva, F A; Ceccon, G

    2016-01-01

    This study aimed to verify that a Bayesian approach could be used for the selection of upright cowpea genotypes with high adaptability and phenotypic stability, and the study also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 17 upright cowpea genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian approach was effective for selection of upright cowpea genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions. PMID:26985961

  17. Increased ephemeris accuracy using attitude-dependent aerodynamic force coefficients for inertially stabilized spacecraft

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Baker, David F.

    1991-01-01

    The FREEMAC program used to generate the aerodynamic coefficients, as well as associated routines that allow the results to be used in other software is described. These capabilities are applied in two numerical examples to the short-term orbit prediction of the Gamma Ray Observatory (GRO) and Hubble Space Telescope (HST) spacecraft. Predictions using attitude-dependent aerodynamic coefficients were made on a modified version of the PC-based Ephemeris Generation Program (EPHGEN) and were compared to definitive orbit solutions obtained from actual tracking data. The numerical results show improvement in the predicted semi-major axis and along-track positions that would seem to be worth the added computational effort. Finally, other orbit and attitude analysis applications are noted that could profit from using FREEMAC-calculated aerodynamic coefficients, including orbital lifetime studies, orbit determination methods, attitude dynamics simulators, and spacecraft control system component sizing.

  18. Coupled Loads Analysis Accuracy from the Space Vehicle Perspective

    NASA Astrophysics Data System (ADS)

    Dickens, J. M.; Wittbrodt, M. J.; Gate, M. M.; Li, L. H.; Stroeve, A.

    2001-01-01

    Coupled loads analysis (CLA) consists of performing a structural response analysis, usually a time-history response analysis, with reduced dynamic models typically provided by two different companies to obtain the coupled response of a launch vehicle and space vehicle to the launching and staging events required to place the space vehicle into orbit. The CLA is performed by the launch vehicle contractor with a reduced dynamics mathematical model that is coupled to the launch vehicle, or booster, model to determine the coupled loads for each substructure. Recently, the booster and space vehicle contractors have been from different countries. Due to the language differences and governmental restrictions, the verification of the CLA is much more difficult than when working with launch vehicle and space vehicle contractors of the same country. This becomes exceedingly clear when the CLA analysis results do not seem to pass an intuitive judgement. Presented in the sequel are three checks that a space vehicle contractor can perform on the results of a coupled loads analysis to partially verify the analysis.

  19. Increasing the accuracy in the application of global ionospheric maps computed from GNSS data

    NASA Astrophysics Data System (ADS)

    Hernandez-Pajares, Manuel; Juan, Miguel; Sanz, Jaume; Garcia-Rigo, Alberto

    2013-04-01

    Since June 1998 the Technical University of Catalonia (UPC) has been contributing to the International GNSS Service (IGS) by providing global maps of Vertical Total Electron Content (VTEC) of the ionosphere, computed with global tomographic modelling from dual-frequency GNSS measurements of the global IGS network. Due to IGS requirements, and in order to facilitate the combination of different global VTEC products from different analysis centers (computed with different techniques and software) into a common product, these global ionospheric maps have been provided in a two-dimensional (2D) description (VTEC), even though they have been computed from the very beginning with a tomographic model that estimates the topside and bottomside electron content separately (see the above-mentioned references). In this work we study the impact of incorporating the raw vertical distribution of electron content (preserved from the original UPC tomographic runs) into the algorithm that retrieves the slant TEC (STEC) for a given receiver-transmitter line of sight and time, as a "companion map" to the original UPC global VTEC map distributed through IGS servers in IONEX format. The performance is evaluated taking as ground truth the very accurate STEC difference values provided by direct GNSS observation in a continuous arc of dual-frequency data (for a given GNSS satellite-receiver pair) for several receivers distributed worldwide that were not involved in the computation of the global VTEC maps.

  20. Cytopathological Analysis of Cyst Fluid Enhances Diagnostic Accuracy of Mucinous Pancreatic Cystic Neoplasms

    PubMed Central

    Utomo, Wesley K.; Braat, Henri; Bruno, Marco J.; van Eijck, Casper H.J.; Koerkamp, Bas Groot; Krak, Nanda C.; van de Vreede, Adriaan; Fuhler, Gwenny M.; Peppelenbosch, Maikel P.; Biermann, Katharina

    2015-01-01

    Abstract Widespread use of cross-sectional imaging and increasing age of the general population has increased the number of detected pancreatic cystic lesions. However, several pathological entities with a variety in malignant potential have to be discriminated to allow clinical decision making. Discrimination between mucinous pancreatic cystic neoplasms (PCNs) and nonmucinous pancreatic lesions is the primary step in the clinical work-up, as malignant transformation is mostly associated with mucinous PCN. We performed a retrospective analysis of all resected PCN in our tertiary center from 2000 to 2014, to evaluate preoperative diagnostic performance and the results of implementation of the consensus guidelines over time. This was followed by a prospective cohort study of patients with an undefined pancreatic cyst, where the added value of cytopathological mucin evaluation to carcinoembryonic antigen (CEA) in cyst fluid for the discrimination of mucinous PCN and nonmucinous cysts was investigated. Retrospective analysis showed 115 patients operated for a PCN, with a correct preoperative classification in 96.2% of the patients. High-grade dysplasia or invasive carcinoma was observed in only 32.3% of mucinous PCN. In our prospective cohort (n = 71), 57.7% of patients were classified as having a mucinous PCN. CEA ≥192 ng/mL had an accuracy of 63.4%, and cytopathological mucin evaluation an accuracy of 73.0%. Combining these 2 tests further improved diagnostic accuracy of a mucinous PCN to 76.8%. CEA level and mucin evaluation were not predictive of the degree of dysplasia. These findings show that adding cytopathology to cyst fluid biochemistry improves discrimination between mucinous PCN and nonmucinous cysts.

  1. ssDNA Pairing Accuracy Increases When Abasic Sites Divide Nucleotides into Small Groups

    PubMed Central

    Peacock-Villada, Alexandra; Coljee, Vincent; Danilowicz, Claudia; Prentiss, Mara

    2015-01-01

    Accurate sequence-dependent pairing of single-stranded DNA (ssDNA) molecules plays an important role in gene chips, DNA origami, and polymerase chain reactions. In many assays accurate pairing depends on mismatched sequences melting at lower temperatures than matched sequences; however, for sequences longer than ~10 nucleotides, single mismatches and correct matches have melting temperature differences of less than 3°C. We demonstrate that appropriately grouping the 35 bases in ssDNA using abasic sites increases the difference between the melting temperature of correct base pairings and that of mismatched pairings. Importantly, in the presence of appropriately spaced abasic sites, mismatches near one end of a long dsDNA destabilize the annealing at the other end much more effectively than in systems without the abasic sites, suggesting that the dsDNA melts more uniformly in the presence of appropriately spaced abasic sites. In sum, the presence of appropriately spaced abasic sites allows temperature to more accurately discriminate correct base pairings from incorrect ones. PMID:26115175

  2. Increase of Readability and Accuracy of 3d Models Using Fusion of Close Range Photogrammetry and Laser Scanning

    NASA Astrophysics Data System (ADS)

    Gašparović, M.; Malarić, I.

    2012-07-01

    The development of laser scanning technology has opened a new page in geodesy and enabled an entirely new way of presenting data. Products obtained by laser scanning are used in many sciences, as well as in archaeology. It should be noted that 3D models of archaeological artefacts obtained by laser scanning are fully measurable, recorded at 1:1 scale and highly accurate. On the other hand, the texture and RGB values of the object surface obtained by a laser scanner have lower resolution and poorer radiometric characteristics than textures captured with a digital camera. The goal of this research is to increase the accuracy and readability of the 3D model using textures obtained with a digital camera. Laser scanning was performed with a high-accuracy triangulation scanner, a Vivid 9i (Konica Minolta), while a Nikon D90 digital camera with a 20 mm fixed-focal-length lens was used for photogrammetric recording. It is important to stress that the a posteriori accuracy of the global registration of the point clouds, expressed as a standard deviation, was ±0.136 mm, while the average distance was only ±0.080 mm. The research also showed that projecting a high-quality texture onto the model increases its readability. Recording archaeological artefacts and producing photorealistic 3D models of them greatly contributes to archaeology as a science, accelerating the processing and reconstruction of findings. It also allows the presentation of findings to the general public, not just to experts.

  3. Tourniquet Test for Dengue Diagnosis: Systematic Review and Meta-analysis of Diagnostic Test Accuracy

    PubMed Central

    Reid, Hamish; Thomas, Emma; Foster, Charlie; Darton, Thomas C.

    2016-01-01

    Background Dengue fever is a ubiquitous arboviral infection in tropical and sub-tropical regions, whose incidence has increased over recent decades. In the absence of a rapid point of care test, the clinical diagnosis of dengue is complex. The World Health Organisation has outlined diagnostic criteria for making the diagnosis of dengue infection, which includes the use of the tourniquet test (TT). Purpose To assess the quality of the evidence supporting the use of the TT and perform a diagnostic accuracy meta-analysis comparing the TT to antibody response measured by ELISA. Data Sources A comprehensive literature search was conducted in the following databases to April, 2016: MEDLINE (PubMed), EMBASE, Cochrane Central Register of Controlled Trials, BIOSIS, Web of Science, SCOPUS. Study Selection Studies comparing the diagnostic accuracy of the tourniquet test with ELISA for the diagnosis of dengue were included. Data Extraction Two independent authors extracted data using a standardized form. Data Synthesis A total of 16 studies with 28,739 participants were included in the meta-analysis. Pooled sensitivity for dengue diagnosis by TT was 58% (95% Confidence Interval (CI), 43%-71%) and the specificity was 71% (95% CI, 60%-80%). In the subgroup analysis sensitivity for non-severe dengue diagnosis was 55% (95% CI, 52%-59%) and the specificity was 63% (95% CI, 60%-66%), whilst sensitivity for dengue hemorrhagic fever diagnosis was 62% (95% CI, 53%-71%) and the specificity was 60% (95% CI, 48%-70%). Receiver-operator characteristics demonstrated a test accuracy (AUC) of 0.70 (95% CI, 0.66–0.74). Conclusion The tourniquet test is widely used in resource poor settings despite currently available evidence demonstrating only a marginal benefit in making a diagnosis of dengue infection alone. Registration The protocol for this systematic review was registered at PROSPERO: CRD42015020323. PMID:27486661
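    For intuition about how per-study results are combined into a pooled estimate, here is a simplified fixed-effect inverse-variance pooling of sensitivities on the logit scale (illustrative Python with hypothetical study values; the review itself uses bivariate random-effects methods, which this sketch does not reproduce):

```python
import math

def pool_logit(proportions, sizes):
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale: each study is weighted by the inverse variance of its log-odds."""
    logits, weights = [], []
    for p, n in zip(proportions, sizes):
        events = p * n
        var = 1.0 / events + 1.0 / (n - events)  # variance of the log-odds
        logits.append(math.log(p / (1.0 - p)))
        weights.append(1.0 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))  # back-transform to a proportion

# Hypothetical per-study tourniquet-test sensitivities and sample sizes
sens = pool_logit([0.52, 0.60, 0.58], [500, 800, 1200])
print(round(sens, 3))
```

    Larger studies get proportionally more weight, so the pooled sensitivity sits closest to the estimates from the biggest samples.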

  4. In pursuit of virtual lead optimization: Pruning ensembles of receptor structures for increased efficiency and accuracy during docking

    PubMed Central

    Bolstad, Erin S. D.; Anderson, Amy C.

    2008-01-01

    Representing receptors as ensembles of protein conformations during docking is a powerful method to approximate protein flexibility and increase the accuracy of the resulting ranked list of compounds. Unfortunately, docking compounds against a large number of ensemble members can increase computational cost and time investment. In this manuscript, we present an efficient method to evaluate and select the most contributive ensemble members prior to docking for targets with a conserved core of residues that bind a ligand moiety. We observed that ensemble members that preserve the geometry of the active site core are most likely to place ligands in the active site with a conserved orientation, generally rank ligands correctly and increase interactions with the receptor. A relative distance approach is used to quantify the preservation of the three-dimensional interatomic distances of the conserved ligand-binding atoms and prune large ensembles quickly. In this study, we investigate dihydrofolate reductase as an example of a protein with a conserved core; however, this method for accurately selecting relevant ensemble members a priori can be applied to any system with a conserved ligand-binding core, including HIV-1 protease, kinases and acetylcholinesterase. Representing a drug target as a pruned ensemble during in silico screening should increase the accuracy and efficiency of high throughput analyses of lead analogs. PMID:18781587
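    The relative-distance scoring described above might be sketched as follows (illustrative Python with NumPy; the coordinates and the exact scoring formula are hypothetical, not the authors' implementation): each ensemble member is scored by how much the pairwise distances among conserved core atoms deviate from a reference geometry, and low-scoring members are retained.

```python
import numpy as np

def core_distance_score(ref_coords, member_coords):
    """Mean relative deviation of all pairwise distances among conserved
    core atoms from the reference geometry (lower = better preserved)."""
    def pairwise(c):
        c = np.asarray(c, dtype=float)
        d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
        return d[np.triu_indices(len(c), k=1)]  # upper-triangle distances
    ref, mem = pairwise(ref_coords), pairwise(member_coords)
    return float(np.mean(np.abs(mem - ref) / ref))

# Hypothetical core-atom coordinates (angstroms): member A stays close to
# the reference geometry, member B is distorted.
ref = [[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 4.0, 0.0]]
mem_a = [[0.0, 0.0, 0.1], [3.0, 0.1, 0.0], [0.0, 4.0, 0.0]]
mem_b = [[0.0, 0.0, 0.0], [4.5, 0.0, 0.0], [0.0, 2.0, 0.0]]
print(core_distance_score(ref, mem_a) < core_distance_score(ref, mem_b))  # True
```

    Because the score uses only internal distances, it is invariant to rigid rotation and translation of the member, which suits a quick a priori pruning pass.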

  5. Spatial and temporal analysis on the distribution of active radio-frequency identification (RFID) tracking accuracy with the Kriging method.

    PubMed

    Liu, Xin; Shannon, Jeremy; Voun, Howard; Truijens, Martijn; Chi, Hung-Lin; Wang, Xiangyu

    2014-01-01

    Radio frequency identification (RFID) technology has already been applied in a number of areas to facilitate the tracking process. However, the insufficient tracking accuracy of RFID is one of the problems that impedes its wider application. Previous studies focus on examining the accuracy of discrete points RFID, thereby leaving the tracking accuracy of the areas between the observed points unpredictable. In this study, spatial and temporal analysis is applied to interpolate the continuous distribution of RFID tracking accuracy based on the Kriging method. An implementation trial has been conducted in the loading and docking area in front of a warehouse to validate this approach. The results show that the weak signal area can be easily identified by the approach developed in the study. The optimum distance between two RFID readers and the effect of the sudden removal of readers are also presented by analysing the spatial and temporal variation of RFID tracking accuracy. This study reveals the correlation between the testing time and the stability of RFID tracking accuracy. Experimental results show that the proposed approach can be used to assist the RFID system setup process to increase tracking accuracy. PMID:25356648
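    A minimal sketch of Kriging interpolation of tracking accuracy (illustrative Python with NumPy; the reader positions, accuracy values, and the spherical variogram model are assumptions, since the abstract does not specify them):

```python
import numpy as np

def spherical_variogram(h, sill=1.0, rng=5.0):
    """Spherical variogram model (an assumed choice for this sketch)."""
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, sill, g)

def ordinary_kriging(xy, z, target):
    """Ordinary Kriging estimate at one target point from scattered data."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical_variogram(d)
    A[n, n] = 0.0                      # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = spherical_variogram(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(A, b)[:n]      # weights sum to 1
    return float(w @ z)

# Hypothetical RFID read accuracies (%) observed at four reader positions (m)
pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
acc = np.array([92.0, 88.0, 90.0, 70.0])
print(ordinary_kriging(pts, acc, np.array([2.0, 2.0])))  # 85.0 at the centre
```

    By symmetry the centre point receives equal weights of 0.25, so the estimate is the mean of the four observations; with irregular layouts the weights adapt to the spatial configuration.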

  7. Line-shapes analysis with ultra-high accuracy

    NASA Astrophysics Data System (ADS)

    Wcisło, Piotr; Cygan, Agata; Lisak, Daniel; Ciuryło, Roman

    2014-11-01

    We present an analysis of the R7 Q8 O2 B-band rovibronic transition measured with ultra-high signal-to-noise ratio by Pound-Drever-Hall-locked frequency-stabilized cavity ring-down spectroscopy. For the line-shape calculations an ab initio-in-spirit approach was used, based on numerical solution of the appropriate transport/relaxation equation. Consequences for the spectroscopic determination of the Boltzmann constant, as well as for the precise determination of line positions in Doppler-limited spectroscopy, are indicated.

  8. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    PubMed Central

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  9. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    PubMed Central

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-01-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505

  10. Surface Accuracy Analysis of Single Panels for the Shanghai 65-M Radio Telescope

    NASA Astrophysics Data System (ADS)

    Fu, Li; Liu, Guoxi; Jin, Chao; Yan, Feng; An, Tao; Shen, Zhiqiang

    We present surface accuracy measurements of five individual panels of the Shanghai 65-m radio telescope, obtained with a coordinate measuring machine and a laser tracker. The measurement data from the two instruments were analyzed with common-point transformation and CAD surface-fitting techniques, respectively. The rms panel accuracies derived from the two methods are consistent with each other, and both meet the design specification. Finite element simulations of the effects of manufacturing error, gravity, temperature, and wind on panel surface accuracy suggest that the first two factors are the primary sources of accuracy uncertainty. Panel deformation under a concentrated load was analyzed by finite element analysis and experiment, with a discrepancy of 5.6% between the two. No plastic deformation occurs when a person weighing up to 70 kg installs or adjusts the panel.

  11. Analysis of visual plotting accuracy and sporadic pollution and consequences for shower association.

    NASA Astrophysics Data System (ADS)

    Koschack, R.

    1991-12-01

    An analysis of the plotting accuracy and of the sporadic pollution for visual meteor observations is given. It is found that both factors limit the observability of minor showers to ZHR ≥ 3. Based on the results of the analysis, rules are developed for minor shower observations.

  12. Accuracy Analysis of a Low-Cost Platform for Positioning and Navigation

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Kuntzsch, C.; Schulze, M. J.; Eggert, D.; Sester, M.

    2012-07-01

    This paper presents an accuracy analysis of a platform based on low-cost components for landmark-based navigation intended for research and teaching purposes. The proposed platform includes a LEGO MINDSTORMS NXT 2.0 kit, an Android-based Smartphone as well as a compact laser scanner Hokuyo URG-04LX. The robot is used in a small indoor environment, where GNSS is not available. Therefore, a landmark map was produced in advance, with the landmark positions provided to the robot. All steps of procedure to set up the platform are shown. The main focus of this paper is the reachable positioning accuracy, which was analyzed in this type of scenario depending on the accuracy of the reference landmarks and the directional and distance measuring accuracy of the laser scanner. Several experiments were carried out, demonstrating the practically achievable positioning accuracy. To evaluate the accuracy, ground truth was acquired using a total station. These results are compared to the theoretically achievable accuracies and the laser scanner's characteristics.
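The link between the scanner's distance and directional measuring accuracy and the achievable positioning accuracy can be illustrated with first-order error propagation. A minimal sketch, assuming hypothetical range and bearing uncertainties for a single landmark fix (not the Hokuyo URG-04LX data sheet values):

```python
import numpy as np

# robot measures range d and bearing theta to a landmark at the origin
d, theta = 2.0, np.deg2rad(30.0)
sigma_d = 0.01                   # hypothetical range std dev [m]
sigma_theta = np.deg2rad(0.5)    # hypothetical bearing std dev [rad]

# Jacobian of (x, y) = (d cos theta, d sin theta) w.r.t. (d, theta)
J = np.array([[np.cos(theta), -d * np.sin(theta)],
              [np.sin(theta),  d * np.cos(theta)]])
R = np.diag([sigma_d**2, sigma_theta**2])  # measurement covariance

P = J @ R @ J.T                  # first-order propagated position covariance
pos_std = np.sqrt(np.diag(P))    # per-axis position uncertainty [m]
```

The bearing term grows with range (the `d * sigma_theta` lever arm), which is why positioning accuracy degrades for distant landmarks even with a fixed angular accuracy.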

  13. Long-term deflections of reinforced concrete elements: accuracy analysis of predictions by different methods

    NASA Astrophysics Data System (ADS)

    Gribniak, Viktor; Bacinskas, Darius; Kacianauskas, Rimantas; Kaklauskas, Gintaris; Torres, Lluis

    2013-08-01

    Long-term deflection response of reinforced concrete flexural members is influenced by the interaction of complex physical phenomena, such as concrete creep, shrinkage and cracking, which makes their prediction difficult. A number of approaches are proposed by design codes with different degrees of simplification and accuracy. This paper statistically investigates accuracy of long-term deflection predictions made by some of the most widely used design codes ( Eurocode 2, ACI 318, ACI 435, and the new Russian code SP 52-101) and a numerical technique proposed by the authors. The accuracy is analyzed using test data of 322 reinforced concrete members from 27 test programs reported in the literature. The predictions of each technique are discussed, and a comparative analysis is made showing the influence of different parameters, such as sustained loading duration, compressive strength of concrete, loading intensity and reinforcement ratio, on the prediction accuracy.

  14. Increased accuracy of species lists developed for alpine lakes using morphology and cytochrome oxidase I for identification of specimens.

    PubMed

    Deiner, Kristy; Knapp, Roland A; Boiano, Daniel M; May, Bernie

    2013-09-01

    The first step in many community ecology studies is to produce a species list from a sample of individuals. Community ecologists now have two viable ways of producing a species list: morphological and barcode identification. In this study, we compared the taxonomic resolution gained by a combined use of both methods and tested whether a change in taxonomic resolution significantly impacted richness estimates for benthic macroinvertebrates sampled from ten lakes in Sequoia National Park, USA. Across all lakes, 77 unique taxa were identified and 42% (32) were reliably identified to species using both barcode and morphological identification. Of the 32 identified to species, 63% (20) were identified solely by comparing the barcode sequence from cytochrome oxidase I to the Barcode of Life reference library. The increased resolution using a combined identification approach compared to identifications based solely on morphology resulted in a significant increase in estimated richness within a lake at the order, family, genus and species levels of taxonomy (P < 0.05). Additionally, young or damaged individuals that could not be identified using morphology were identified using their COI sequences to the genus or species level on average 75% of the time. Our results demonstrate that a combined identification approach improves accuracy of benthic macroinvertebrate species lists in alpine lakes and subsequent estimates of richness. We encourage the use of barcodes for identification purposes and specifically when morphology is insufficient, as in the case of damaged and early life stage specimens of benthic macroinvertebrates. PMID:23773698

  15. Geolocation and Pointing Accuracy Analysis for the WindSat Sensor

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.

    2006-01-01

    Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time-tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets, resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.

  16. There's a Bug in Your Ear!: Using Technology to Increase the Accuracy of DTT Implementation

    ERIC Educational Resources Information Center

    McKinney, Tracy; Vasquez, Eleazar, III.

    2014-01-01

    Many professionals have successfully implemented discrete trial teaching in the past. However, there have not been extensive studies examining the accuracy of discrete trial teaching implementation. This study investigated the use of Bug in Ear feedback on the accuracy of discrete trial teaching implementation among two pre-service teachers…

  17. Accuracy of mucocutaneous leishmaniasis diagnosis using polymerase chain reaction: systematic literature review and meta-analysis

    PubMed Central

    Gomes, Ciro Martins; Mazin, Suleimy Cristina; dos Santos, Elisa Raphael; Cesetti, Mariana Vicente; Bächtold, Guilherme Albergaria Brízida; Cordeiro, João Henrique de Freitas; Theodoro, Fabrício Claudino Estrela Terra; Damasco, Fabiana dos Santos; Carranza, Sebastián Andrés Vernal; Santos, Adriana de Oliveira; Roselino, Ana Maria; Sampaio, Raimunda Nonata Ribeiro

    2015-01-01

    The diagnosis of mucocutaneous leishmaniasis (MCL) is hampered by the absence of a gold standard. An accurate diagnosis is essential because of the high toxicity of the medications for the disease. This study aimed to assess the ability of polymerase chain reaction (PCR) to identify MCL and to compare these results with clinical research recently published by the authors. A systematic literature review based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA Statement was performed using comprehensive search criteria and communication with the authors. A meta-analysis considering the estimates of the univariate and bivariate models was performed. Specificity near 100% was common among the papers. The primary reason for accuracy differences was sensitivity. The meta-analysis, which was only possible for PCR samples of lesion fragments, revealed a sensitivity of 71% [95% confidence interval (CI) = 0.59; 0.81] and a specificity of 93% (95% CI = 0.83; 0.98) in the bivariate model. The search for measures that could increase the sensitivity of PCR should be encouraged. The quality of the collected material and the optimisation of the amplification of genetic material should be prioritised. PMID:25946238

  18. Inclusion of quality controls on leishmaniases molecular tests to increase diagnostic accuracy in research and reference laboratories.

    PubMed

    da C Gonçalves-de-Albuquerque, Suênia; Pessoa-e-Silva, Rômulo; Trajano-Silva, Lays A M; de Morais, Rayana C S; Brandão-Filho, Sinval P; de Paiva-Cavalcanti, Milena

    2015-04-01

    Early detection of leishmaniases and prompt institution of treatment are paramount for individuals and communities affected by these diseases. To overcome the remaining limitations inherent to molecular methods currently used and to ensure the accuracy of results in leishmaniases diagnosis, two triplex polymerase chain reaction (PCR) assays with quality controls for the reactions were developed. Validity indicators were assessed in 186 dog blood samples from endemic areas in Brazil. The level of agreement between the new tools and their singleplex protocols was assessed by kappa analysis. The triplex PCR for visceral leishmaniasis showed sensitivity (S) = 78.68 %, specificity (E) = 85.29 %, and efficiency (e) = 81.05 %. The cutaneous leishmaniasis protocol showed S = 97.29 %, E = 79.16 %, and e = 90.16 %. Both protocols showed good agreement with gold standards. These new tools enable, in a single reaction, the diagnosis of the diseases and the evaluation of the sample quality and DNA extraction process, thus reducing the cost of reagents and avoiding the eventual need for collecting a second sample. PMID:25428552

  19. Diagnostic accuracy of refractometry for assessing bovine colostrum quality: A systematic review and meta-analysis.

    PubMed

    Buczinski, S; Vandeweerd, J M

    2016-09-01

    Provision of good quality colostrum [i.e., immunoglobulin G (IgG) concentration ≥50g/L] is the first step toward ensuring proper passive transfer of immunity for young calves. Precise quantification of colostrum IgG levels cannot be easily performed on the farm. Assessment of the refractive index using a Brix scale with a refractometer has been described as being highly correlated with IgG concentration in colostrum. The aim of this study was to perform a systematic review of the diagnostic accuracy of Brix refractometry to diagnose good quality colostrum. From 101 references initially obtained, 11 were included in the systematic review and meta-analysis, representing 4,251 colostrum samples. The prevalence of good colostrum samples with IgG ≥50g/L varied from 67.3 to 92.3% (median 77.9%). Specific estimates of accuracy [sensitivity (Se) and specificity (Sp)] were obtained for different reported cut-points using a hierarchical summary receiver operating characteristic curve model. For the cut-point of 22% (n=8 studies), Se=80.2% (95% CI: 71.1-87.0%) and Sp=82.6% (71.4-90.0%). Decreasing the cut-point to 18% increased Se [96.1% (91.8-98.2%)] and decreased Sp [54.5% (26.9-79.6%)]. Modeling the effect of these Brix accuracy estimates using a stochastic simulation and Bayes theorem showed that a positive result with the 22% Brix cut-point can be used to diagnose good quality colostrum [posttest probability of good colostrum: 94.3% (90.7-96.9%)]. The posttest probability of good colostrum with a Brix value <18% was only 22.7% (12.3-39.2%). Based on this study, the 2 cut-points could be alternatively used to select good quality colostrum (sample with Brix ≥22%) or to discard poor quality colostrum (sample with Brix <18%). When sample results are between these 2 values, colostrum supplementation should be considered. PMID:27423958
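The post-test probabilities quoted above follow from Bayes' theorem applied to the pooled accuracy estimates. A minimal sketch using the point estimates from the abstract (the paper itself propagated uncertainty with a stochastic simulation, so its intervals differ slightly):

```python
def posttest_probability(se, sp, prevalence, positive):
    """Bayes' theorem for a binary test: P(condition | test result)."""
    if positive:  # P(good colostrum | Brix >= cut-point)
        return se * prevalence / (se * prevalence + (1 - sp) * (1 - prevalence))
    # P(good colostrum | Brix < cut-point)
    return (1 - se) * prevalence / ((1 - se) * prevalence + sp * (1 - prevalence))

prev = 0.779  # median prevalence of good colostrum across studies

# 22% Brix cut-point: Se = 80.2%, Sp = 82.6%
p_pos = posttest_probability(0.802, 0.826, prev, positive=True)

# 18% Brix cut-point: Se = 96.1%, Sp = 54.5%
p_neg = posttest_probability(0.961, 0.545, prev, positive=False)
```

The point-estimate calculation reproduces the reported 94.3% post-test probability for a positive result at the 22% cut-point; the negative-result value comes out near 20% here versus the reported 22.7%, since the paper sampled Se and Sp from distributions rather than using point estimates.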

  20. ‘Lorentzian’ analysis of the accuracy of modern catalogues of stellar positions

    NASA Astrophysics Data System (ADS)

    Varaksina, N. Y.; Nefedyev, Y. A.; Churkin, K. O.; Zabbarova, R. R.; Demin, S. A.

    2015-12-01

    A new approach is presented for estimating the positional accuracy and proper motions of stars in astrometric catalogues by comparing star positions in the catalogue under study with those in the Hipparcos catalogue at different epochs, reduced to a standard equinox. To verify this method, an analysis of the star positions and proper motions of the UCAC2, PPM, ACRS, Tycho-2, ACT, TRC, FON and Tycho catalogues was carried out. This study shows that the accuracies of the positions and proper motions of stars in the Tycho-2 and UCAC2 catalogues are approximately equal. The results of the comparison are presented graphically.

  1. Accuracy analysis of the space shuttle solid rocket motor profile measuring device

    NASA Technical Reports Server (NTRS)

    Estler, W. Tyler

    1989-01-01

    The Profile Measuring Device (PMD) was developed at the George C. Marshall Space Flight Center following the loss of the Space Shuttle Challenger. It is a rotating gauge used to measure the absolute diameters of mating features of redesigned Solid Rocket Motor field joints. Diameter tolerances of these features are typically + or - 0.005 inches, and it is required that the PMD absolute measurement uncertainty be within this tolerance. In this analysis, the absolute accuracy of these measurements was found to be + or - 0.00375 inches, worst case, with a potential accuracy of + or - 0.0021 inches achievable through improved temperature control.

  2. Orbit Determination Accuracy Analysis of the Magnetospheric Multiscale Mission During Perigee Raise

    NASA Technical Reports Server (NTRS)

    Pachura, Daniel A.; Vavrina, Matthew A.; Carpenter, J. Russell; Wright, Cinnamon A.

    2014-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) will provide orbit determination and prediction support for the Magnetospheric Multiscale (MMS) mission during the mission's commissioning period. The spacecraft will launch into a highly elliptical Earth orbit in 2015. Starting approximately four days after launch, a series of five large perigee-raising maneuvers will be executed near apogee on a nearly every-other-orbit cadence. This perigee-raise operations concept requires a high-accuracy estimate of the orbital state within one orbit following the maneuver for performance evaluation and a high-accuracy orbit prediction to correctly plan and execute the next maneuver in the sequence. During early mission design, a linear covariance analysis method was used to study orbit determination and prediction accuracy for this perigee-raising campaign. This paper provides a higher fidelity Monte Carlo analysis using the operational COTS extended Kalman filter implementation that was performed to validate the linear covariance analysis estimates and to better characterize orbit determination performance for actively maneuvering spacecraft in a highly elliptical orbit. The study finds that the COTS extended Kalman filter tool converges on accurate definitive orbit solutions quickly, but prediction accuracy through orbits with very low altitude perigees is degraded by the unpredictability of atmospheric density variation.

  3. Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    A new computational analysis tool, the downscaling test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured-grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using downscaling tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to any order scheme.
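The central finding, design-order convergence of discretization errors despite lower-order truncation errors on irregular grids, can be reproduced in one dimension. A minimal sketch (not the paper's downscaling test) solving u'' = f on a randomly perturbed grid: the three-point stencil's local truncation error is only first order when spacings are unequal, yet the solution error converges at roughly second order (supraconvergence):

```python
import numpy as np

def max_solution_error(n, rng):
    # randomly perturbed (irregular) grid on [0, 1]
    x = np.linspace(0.0, 1.0, n + 1)
    x[1:-1] += rng.uniform(-0.25, 0.25, n - 1) / n
    u_exact = np.sin(np.pi * x)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0  # Dirichlet boundaries: u(0) = u(1) = 0
    for i in range(1, n):
        hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
        # three-point second-derivative stencil on unequal spacing;
        # its truncation error is only O(h) when hl != hr
        A[i, i - 1] = 2.0 / (hl * (hl + hr))
        A[i, i] = -2.0 / (hl * hr)
        A[i, i + 1] = 2.0 / (hr * (hl + hr))
        b[i] = -np.pi**2 * np.sin(np.pi * x[i])  # f = u''
    u = np.linalg.solve(A, b)
    return np.abs(u - u_exact).max()

rng = np.random.default_rng(2)
e_coarse = max_solution_error(40, rng)
e_fine = max_solution_error(80, rng)
order = np.log2(e_coarse / e_fine)  # observed discretization-error order
```

Despite the O(h) truncation error introduced by the random spacing, the observed solution-error order stays well above first order, mirroring the abstract's point that discretization errors can converge at design order even when truncation errors do not.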

  4. Orbit Determination Accuracy Analysis of the Magnetospheric Multiscale Mission During Perigee Raise

    NASA Technical Reports Server (NTRS)

    Pachura, Daniel A.; Vavrina, Matthew A.; Carpenter, J. R.; Wright, Cinnamon A.

    2014-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) will provide orbit determination and prediction support for the Magnetospheric Multiscale (MMS) mission during the mission's commissioning period. The spacecraft will launch into a highly elliptical Earth orbit in 2015. Starting approximately four days after launch, a series of five large perigee-raising maneuvers will be executed near apogee on a nearly every-other-orbit cadence. This perigee-raise operations concept requires a high-accuracy estimate of the orbital state within one orbit following the maneuver for performance evaluation and a high-accuracy orbit prediction to correctly plan and execute the next maneuver in the sequence. During early mission design, a linear covariance analysis method was used to study orbit determination and prediction accuracy for this perigee-raising campaign. This paper provides a higher fidelity Monte Carlo analysis using the operational COTS extended Kalman filter implementation that was performed to validate the linear covariance analysis estimates and to better characterize orbit determination performance for actively maneuvering spacecraft in a highly elliptical orbit. The study finds that the COTS extended Kalman filter tool converges on accurate definitive orbit solutions quickly, but prediction accuracy through orbits with very low altitude perigees is degraded by the unpredictability of atmospheric density variation.

  5. [Effect of different distribution of components concentration on the accuracy of quantitative spectral analysis].

    PubMed

    Li, Gang; Zhao, Zhe; Wang, Hui-Quan; Lin, Ling; Zhang, Bao-Ju; Wu, Xiao-Rong

    2012-07-01

    In order to discuss the effect of different distributions of component concentrations on the accuracy of quantitative spectral analysis, ideal absorption spectra of samples with three components were established according to the Lambert-Beer law. Gaussian noise was added to the spectra. Correction and prediction models were built by partial least squares regression to reflect the unequal modeling and prediction results between different distributions of components. Results show that, in the case of pure linear absorption, the accuracy of the model is related to the distribution of component concentrations. For both the component of interest and the non-tested components, a wider and more uniform concentration distribution in the calibration set is essential to establish a universal model and provide satisfactory accuracy. This research provides theoretical guidance for the reasonable choice of samples with a suitable concentration distribution, which enhances the quality of the model and reduces the prediction error on the prediction set. PMID:23016350
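The setup described can be sketched numerically: build pure-linear-absorption spectra of three components by Lambert-Beer superposition, add Gaussian noise, and recover concentrations from a calibration set whose concentrations cover a wide, uniform range. The component band shapes and noise level below are hypothetical, and classical least squares with known component spectra stands in for the paper's partial least squares regression to keep the sketch dependency-free:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)  # wavelength axis (arbitrary units)

def band(center, width):
    # hypothetical Gaussian absorption band for one pure component
    return np.exp(-((x - center) / width) ** 2)

E = np.stack([band(0.3, 0.05), band(0.5, 0.07), band(0.7, 0.06)])  # (3, n_wl)

# calibration set: concentrations uniformly covering a broad range
C = rng.uniform(0.1, 1.0, size=(50, 3))
A = C @ E + rng.normal(0.0, 1e-3, size=(50, 200))  # Lambert-Beer + noise

# recover concentrations by least squares (stand-in for PLS regression)
C_hat = np.linalg.lstsq(E.T, A.T, rcond=None)[0].T
max_err = np.abs(C_hat - C).max()
```

With well-separated bands and low noise the recovery error stays near the noise floor; the abstract's point is that a calibration set with a broad, uniform concentration distribution, including the non-tested components, is what keeps a data-driven model this well behaved.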

  6. Analysis of the Accuracy and Robustness of the Leap Motion Controller

    PubMed Central

    Weichert, Frank; Bachmann, Daniel; Rudak, Bartholomäus; Fisseler, Denis

    2013-01-01

    The Leap Motion Controller is a new device for hand gesture controlled user interfaces with declared sub-millimeter accuracy. However, up to this point its capabilities in real environments have not been analyzed. Therefore, this paper presents a first study of a Leap Motion Controller. The main focus of attention is on the evaluation of the accuracy and repeatability. For an appropriate evaluation, a novel experimental setup was developed making use of an industrial robot with a reference pen allowing a position accuracy of 0.2 mm. Thereby, a deviation between a desired 3D position and the average measured positions below 0.2 mm has been obtained for static setups and of 1.2 mm for dynamic setups. Using the conclusion of this analysis can improve the development of applications for the Leap Motion controller in the field of Human-Computer Interaction. PMID:23673678

  7. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Accuracy Analysis

    NASA Astrophysics Data System (ADS)

    Sarrazin, F.; Pianosi, F.; Hartmann, A. J.; Wagener, T.

    2014-12-01

    Sensitivity analysis aims to characterize the impact that changes in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). It is a valuable diagnostic tool for model understanding and for model improvement, it enhances calibration efficiency, and it supports uncertainty and scenario analysis. It is of particular interest for environmental models because they are often complex, non-linear, non-monotonic and exhibit strong interactions between their parameters. However, sensitivity analysis has to be carefully implemented to produce reliable results at moderate computational cost. For example, sample size can have a strong impact on the results and has to be carefully chosen. Yet, there is little guidance available for this step in environmental modelling. The objective of the present study is to provide guidelines for a robust sensitivity analysis, in order to support modellers in making appropriate choices for its implementation and in interpreting its outcome. We considered hydrological models with increasing level of complexity. We tested four sensitivity analysis methods, Regional Sensitivity Analysis, Method of Morris, a density-based (PAWN) and a variance-based (Sobol) method. The convergence and variability of sensitivity indices were investigated. We used bootstrapping to assess and improve the robustness of sensitivity indices even for limited sample sizes. Finally, we propose a quantitative validation approach for sensitivity analysis based on the Kolmogorov-Smirnov statistics.
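The density-based (PAWN) method mentioned above scores each input factor by the Kolmogorov-Smirnov distance between the unconditional output distribution and the distributions obtained with that factor fixed. A toy sketch, assuming a hypothetical two-parameter model in which the first factor dominates (not one of the study's hydrological models):

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x1, x2):
    # toy model: output depends strongly on x1, weakly on x2
    return np.sin(2.0 * np.pi * x1) + 0.1 * x2

def ks_distance(a, b):
    # two-sample Kolmogorov-Smirnov statistic via empirical CDFs
    grid = np.sort(np.concatenate([a, b]))
    cdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
    return np.max(np.abs(cdf(a) - cdf(b)))

n = 2000
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
y_all = model(x1, x2)  # unconditional output sample

def pawn_index(fix_which, n_cond=10):
    # median KS distance between the unconditional output distribution
    # and outputs with one factor fixed at several conditioning values
    ds = []
    for v in np.linspace(0.05, 0.95, n_cond):
        if fix_which == 1:
            y_c = model(np.full(n, v), x2)
        else:
            y_c = model(x1, np.full(n, v))
        ds.append(ks_distance(y_all, y_c))
    return np.median(ds)

s1, s2 = pawn_index(1), pawn_index(2)  # larger index = more influential
```

Bootstrapping these indices over resampled outputs, as the study proposes, would attach confidence bounds that reveal whether the sample size is adequate.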

  8. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper presents in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment was designed and conducted. Excellent calibration results were achieved based on the calibration model. In summary, the error analysis approach and the calibration method are shown to be adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  9. Optical system error analysis and calibration method of high-accuracy star trackers.

    PubMed

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper presents in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment was designed and conducted. Excellent calibration results were achieved based on the calibration model. In summary, the error analysis approach and the calibration method are shown to be adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  10. Preliminary navigation accuracy analysis for the TDRSS Onboard Navigation System (TONS) experiment on EP/EUVE

    NASA Technical Reports Server (NTRS)

    Gramling, C. J.; Long, A. C.; Lee, T.; Ottenstein, N. A.; Samii, M. V.

    1991-01-01

    A Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) is currently being developed by NASA to provide a high accuracy autonomous navigation capability for users of TDRSS and its successor, the Advanced TDRSS (ATDRSS). The fully autonomous user onboard navigation system will support orbit determination, time determination, and frequency determination, based on observation of a continuously available, unscheduled navigation beacon signal. A TONS experiment will be performed in conjunction with the Explorer Platform (EP)/Extreme Ultraviolet Explorer (EUVE) mission to flight qualify TONS Block 1. An overview is presented of TONS and a preliminary analysis of the navigation accuracy anticipated for the TONS experiment. Descriptions of the TONS experiment and the associated navigation objectives, as well as a description of the onboard navigation algorithms, are provided. The accuracy of the selected algorithms is evaluated based on the processing of realistic simulated TDRSS one-way forward-link Doppler measurements. The analysis process is discussed and the associated navigation accuracy results are presented.

  11. Whole-body predictors of wrist shot accuracy in ice hockey: a kinematic analysis.

    PubMed

    Michaud-Paquette, Yannick; Magee, Patrick; Pearsall, David; Turcotte, René

    2011-03-01

    The purpose of this study was to identify joint angular kinematics that corresponds to shooting accuracy in the stationary ice hockey wrist shot. Twenty-four subjects participated in this study, each performing 10 successful shots on four shooting targets. An eight-camera infra-red motion capture system (240 Hz), along with passive reflective markers, was used to record motion of the joints, hockey stick, and puck throughout the performance of the wrist shot. A multiple regression analysis was carried out to examine whole-body kinematic variables with accuracy scores as the dependent variable. Significant accuracy predictors were identified in the lower limbs, torso and upper limbs. Interpretation of the kinematics suggests that characteristics such as a better stability of the base of support, momentum cancellation, proper trunk orientation and a more dynamic control of the lead arm throughout the wrist shot movement are presented as predictors for the accuracy outcome. These findings are substantial as they not only provide a framework for further analysis of motor control strategies using tools for accurate projection of objects, but more tangibly they may provide a comprehensive evidence-based guide to coaches and athletes for planned training to improve performance. PMID:21560748

  12. Accuracy of bite mark analysis from food substances: A comparative study

    PubMed Central

    Daniel, M. Jonathan; Pazhani, Ambiga

    2015-01-01

    Aims and Objectives: The aims and objectives of the study were to compare the accuracy of bite mark analysis from three different food substances-apple, cheese and chocolate using two techniques-the manual docking procedure and computer assisted overlay generation technique and to compare the accuracy of the two techniques for bite mark analysis on food substances. Materials and Methods: The individuals who participated in the study were made to bite on three food substances-apple, cheese, and chocolate. Dentate individuals were included in the study. Edentulous individuals and individuals having a missing anterior tooth were excluded from the study. The dental casts of the individual were applied to the positive cast of the bitten food substance to determine docking or matching. Then, computer generated overlays were compared with bite mark pattern on the foodstuff. Results: The results were tabulated and the comparison of bite mark analysis on the three different food substances was analyzed by Kruskall-Wallis ANOVA test and the comparison of the two techniques was analyzed by Spearman's Rho correlation coefficient. Conclusion: On comparing the bite marks analysis from the three food substances-apple, cheese and chocolate, the accuracy was found to be greater for chocolate and cheese than apple. PMID:26816463

  13. Accuracy of urea breath test in Helicobacter pylori infection: Meta-analysis

    PubMed Central

    Ferwana, Mazen; Abdulmajeed, Imad; Alhajiahmed, Ali; Madani, Wedad; Firwana, Belal; Hasan, Rim; Altayar, Osama; Limburg, Paul J; Murad, Mohammad Hassan; Knawy, Bandar

    2015-01-01

    AIM: To quantitatively summarize and appraise the available evidence of urea breath test (UBT) use to diagnose Helicobacter pylori (H. pylori) infection in patients with dyspepsia and provide pooled diagnostic accuracy measures. METHODS: We searched MEDLINE, EMBASE, Cochrane library and other databases for studies addressing the value of UBT in the diagnosis of H. pylori infection. We included cross-sectional studies that evaluated the diagnostic accuracy of UBT in adult patients with dyspeptic symptoms. Risk of bias was assessed using QUADAS (Quality Assessment of Diagnostic Accuracy Studies)-2 tool. Diagnostic accuracy measures were pooled using the random-effects model. Subgroup analysis was conducted by UBT type (13C vs 14C) and by measurement technique (Infrared spectrometry vs Isotope Ratio Mass Spectrometry). RESULTS: Out of 1380 studies identified, only 23 met the eligibility criteria. Fourteen studies (61%) evaluated 13C UBT and 9 studies (39%) evaluated 14C UBT. There was significant variation in the type of reference standard tests used across studies. Pooled sensitivity was 0.96 (95%CI: 0.95-0.97) and pooled specificity was 0.93 (95%CI: 0.91-0.94). Likelihood ratio for a positive test was 12 and for a negative test was 0.05 with an area under the curve of 0.985. Meta-analyses were associated with a significant statistical heterogeneity that remained unexplained after subgroup analysis. The included studies had a moderate risk of bias. CONCLUSION: UBT has high diagnostic accuracy for detecting H. pylori infection in patients with dyspepsia. The reliability of diagnostic meta-analytic estimates however is limited by significant heterogeneity. PMID:25632206
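The reported likelihood ratios follow approximately from the pooled sensitivity and specificity. A quick check from the point estimates (the paper's LRs come from the bivariate pooling model, so the naive ratios computed here differ slightly), plus an illustrative conversion to a post-test probability with an assumed pretest probability:

```python
se, sp = 0.96, 0.93  # pooled sensitivity and specificity from the meta-analysis

lr_pos = se / (1 - sp)   # naive LR+ ~13.7; bivariate pooling reports 12
lr_neg = (1 - se) / sp   # naive LR- ~0.043, consistent with the reported 0.05

# post-test probability after a positive UBT, assuming an illustrative
# 40% pretest probability of H. pylori infection (not from the paper)
pre_odds = 0.4 / 0.6
post_prob_pos = lr_pos * pre_odds / (1 + lr_pos * pre_odds)
```

An LR+ above 10 and an LR- near 0.05 place the UBT firmly in the range usually considered decisive for ruling disease in or out.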

  14. Increased Proportion of Variance Explained and Prediction Accuracy of Survival of Breast Cancer Patients with Use of Whole-Genome Multiomic Profiles.

    PubMed

    Vazquez, Ana I; Veturi, Yogasudha; Behring, Michael; Shrestha, Sadeep; Kirst, Matias; Resende, Marcio F R; de Los Campos, Gustavo

    2016-07-01

    Whole-genome multiomic profiles hold valuable information for the analysis and prediction of disease risk and progression. However, integrating high-dimensional multilayer omic data into risk-assessment models is statistically and computationally challenging. We describe a statistical framework, the Bayesian generalized additive model (BGAM), and present software for integrating multilayer high-dimensional inputs into risk-assessment models. We used BGAM and data from The Cancer Genome Atlas for the analysis and prediction of survival after diagnosis of breast cancer. We developed a sequence of studies to (1) compare predictions based on single omics with those based on clinical covariates commonly used for the assessment of breast cancer patients (COV), (2) evaluate the benefits of combining COV and omics, (3) compare models based on (a) COV and gene expression profiles from oncogenes with (b) COV and whole-genome gene expression (WGGE) profiles, and (4) evaluate the impacts of combining multiple omics and their interactions. We report that (1) WGGE profiles and whole-genome methylation (METH) profiles offer more predictive power than any of the COV commonly used in clinical practice (e.g., subtype and stage), (2) adding WGGE or METH profiles to COV increases prediction accuracy, (3) the predictive power of WGGE profiles is considerably higher than that based on expression from large-effect oncogenes, and (4) the gain in prediction accuracy when combining multiple omics is consistent. Our results show the feasibility of omic integration and highlight the importance of WGGE and METH profiles in breast cancer, achieving gains of up to 7 points area under the curve (AUC) over the COV in some cases. PMID:27129736

  15. Increased Proportion of Variance Explained and Prediction Accuracy of Survival of Breast Cancer Patients with Use of Whole-Genome Multiomic Profiles

    PubMed Central

    Vazquez, Ana I.; Veturi, Yogasudha; Behring, Michael; Shrestha, Sadeep; Kirst, Matias; Resende, Marcio F. R.; de los Campos, Gustavo

    2016-01-01

    Whole-genome multiomic profiles hold valuable information for the analysis and prediction of disease risk and progression. However, integrating high-dimensional multilayer omic data into risk-assessment models is statistically and computationally challenging. We describe a statistical framework, the Bayesian generalized additive model (BGAM), and present software for integrating multilayer high-dimensional inputs into risk-assessment models. We used BGAM and data from The Cancer Genome Atlas for the analysis and prediction of survival after diagnosis of breast cancer. We developed a sequence of studies to (1) compare predictions based on single omics with those based on clinical covariates commonly used for the assessment of breast cancer patients (COV), (2) evaluate the benefits of combining COV and omics, (3) compare models based on (a) COV and gene expression profiles from oncogenes with (b) COV and whole-genome gene expression (WGGE) profiles, and (4) evaluate the impacts of combining multiple omics and their interactions. We report that (1) WGGE profiles and whole-genome methylation (METH) profiles offer more predictive power than any of the COV commonly used in clinical practice (e.g., subtype and stage), (2) adding WGGE or METH profiles to COV increases prediction accuracy, (3) the predictive power of WGGE profiles is considerably higher than that based on expression from large-effect oncogenes, and (4) the gain in prediction accuracy when combining multiple omics is consistent. Our results show the feasibility of omic integration and highlight the importance of WGGE and METH profiles in breast cancer, achieving gains of up to 7 points area under the curve (AUC) over the COV in some cases. PMID:27129736

  16. Accuracy and repeatability of Roentgen stereophotogrammetric analysis (RSA) for measuring knee laxity in longitudinal studies.

    PubMed

    Fleming, B C; Peura, G D; Abate, J A; Beynnon, B D

    2001-10-01

    Roentgen stereophotogrammetric analysis (RSA) can be used to assess temporal changes in anterior-posterior (A-P) knee laxity. However, the accuracy and precision of RSA is dependent on many factors and should be independently evaluated for a particular application. The objective of this study was to evaluate the use of RSA for measuring A-P knee laxity. The specific aims were to assess the variation or "noise" inherent to RSA, to determine the reproducibility of RSA for repeated A-P laxity testing, and to assess the accuracy of these measurements. Two experiments were performed. The first experiment utilized three rigid models of the tibiofemoral joint to assess the noise and to compare digitization errors of two independent examiners. No differences were found in the kinematic outputs of the RSA due to examiner, repeated trials, or the model used. In a second experiment, A-P laxity values between the A-P shear load limits of +/-60 N of five cadaver goat knees were measured to assess the error associated with repeated testing. The RSA laxity values were also compared to those obtained from a custom designed linkage system. The mean A-P laxity values with the knee at 30, 60, and 90 degrees of flexion for the ACL-intact goat knee (+/-95% confidence interval) were 0.8 (+/-0.25), 0.9 (+/-0.29), and 0.4 (+/-0.22) mm, respectively. In the ACL-deficient knee, the A-P laxity values increased by an order of magnitude to 8.8 (+/-1.39), 7.6 (+/-1.32), and 3.1 (+/-1.20) mm, respectively. No significant differences were found between the A-P laxity values measured by RSA and the independent measurement technique. A highly significant linear relationship (r² = 0.83) was also found between these techniques. This study suggests that the RSA method is an accurate and precise means to measure A-P knee laxity for repeated testing over time. PMID:11522316

  17. Assembly accuracy analysis for small components with a planar surface in large-scale metrology

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Huang, Peng; Li, Jiangxiong; Ke, Yinglin; Yang, Bingru; Maropoulos, Paul G.

    2016-04-01

    Large-scale mechanical products, such as aircraft and rockets, consist of large numbers of small components, which introduce additional difficulty for assembly accuracy and error estimation. Planar surfaces as key product characteristics are usually utilised for positioning small components in the assembly process. This paper focuses on assembly accuracy analysis of small components with planar surfaces in large-volume products. To evaluate the accuracy of the assembly system, an error propagation model for measurement error and fixture error is proposed, based on the assumption that all errors are normally distributed. In this model, the general coordinate vector is adopted to represent the position of the components. The error transmission functions are simplified into a linear model, and the coordinates of the reference points are composed of a theoretical value and a random error. The installation of a Head-Up Display is taken as an example to analyse the assembly error of small components based on the propagation model. The result shows that the final coordination accuracy is mainly determined by measurement error of the planar surface in small components. To reduce the uncertainty of the plane measurement, an evaluation index of measurement strategy is presented. This index reflects the distribution of the sampling point set and can be calculated by an inertia moment matrix. Finally, a practical application is introduced for validating the evaluation index.
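The propagation model described above (errors assumed normally distributed, transmission functions linearized) corresponds to the standard covariance propagation Cov_out = J · Σ · Jᵀ. The sketch below uses illustrative matrices, not the paper's values:

```python
def mat_mul(a, b):
    # Naive matrix product for small dense matrices.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

# J: linearized error-transmission (Jacobian) matrix mapping measurement
# and fixture errors to the assembled component's position; illustrative only.
J = [[1.0, 0.5],
     [0.0, 1.2]]
# Sigma: covariance of the (assumed independent, zero-mean, normally
# distributed) measurement and fixture errors, in mm^2.
Sigma = [[0.04, 0.00],
         [0.00, 0.01]]
# Covariance of the assembled position error: J * Sigma * J^T
Cov_out = mat_mul(mat_mul(J, Sigma), transpose(J))
```

The diagonal of `Cov_out` gives the variances of the final position error, from which per-axis standard deviations (and hence normal-theory tolerance intervals) follow directly.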

  18. Analysis of machining accuracy during free form surface milling simulation for different milling strategies

    NASA Astrophysics Data System (ADS)

    Matras, A.; Kowalczyk, R.

    2014-11-01

    This work presents the analysis of machining accuracy after free-form surface milling simulations (based on machining EN AW-7075 alloys) for different machining strategies (Level Z, Radial, Square, Circular). The milling simulations were performed using Esprit CAD/CAM software. The accuracy of the obtained allowance is defined as the difference between the theoretical surface of the workpiece (the surface designed in CAD software) and the machined surface after a milling simulation. The difference between the two surfaces describes the roughness that results from the tool shape being mapped onto the machined surface. The accuracy of the remaining allowance directly indicates the surface quality after finish machining. The described CAD/CAM methodology can shorten the design time of the machining process for free-form surface milling on a 5-axis CNC milling machine, since the part does not have to be machined in order to measure the machining accuracy for the selected strategies and cutting data.

  19. Accuracy and repeatability of two methods of gait analysis - GaitRite™ and Mobility Lab™ - in subjects with cerebellar ataxia.

    PubMed

    Schmitz-Hübsch, Tanja; Brandt, Alexander U; Pfueller, Caspar; Zange, Leonora; Seidel, Adrian; Kühn, Andrea A; Paul, Friedemann; Minnerop, Martina; Doss, Sarah

    2016-07-01

    Instrumental gait analysis is increasingly recognized as a useful tool for the evaluation of movement disorders. The various assessment devices available to date have mostly been evaluated in healthy populations only. We aimed to explore whether reliability and validity seen in healthy subjects can also be assumed in subjects with cerebellar ataxic gait. Gait was recorded simultaneously with two devices - a sensor-embedded walkway and an inertial sensor based system - to explore test accuracy in two groups of subjects: one with mild to moderate cerebellar ataxia due to a subtype of autosomal-dominantly inherited neurodegenerative disorder (SCA14), the other were healthy subjects matched for age and height (CTR). Test precision was assessed by retest within session for each device. In conclusion, accuracy and repeatability of gait measurements were not compromised by ataxic gait disorder. The accuracy of spatial measures was speed-dependent and a direct comparison of stride length from both devices will be most reliably made at comfortable speed. Measures of stride variability had low agreement between methods in CTR and at retest in both groups. However, the marked increase of stride variability in ataxia outweighs the observed amount of imprecision. PMID:27289221

  20. Accuracy analysis of CryoSat-2 SARIn mode data over Antarctica

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Bamber, Jonathan; Cheng, Xiao

    2015-04-01

    In 2010, CryoSat-2 was launched, carrying a unique satellite radar altimetry (SRA) instrument called SAR/Interferometric Radar Altimeter (SIRAL), with the aim of measuring and monitoring sea ice, ice sheets and mountain glaciers. The novel SAR Interferometric mode (SARInM) of CryoSat-2 is designed to improve the accuracy, resolution and geolocation of height measurements over the steeper margins of ice sheets and ice caps. Over these areas, it employs the synthetic aperture radar (SAR) capability to reduce the size of the footprint to effectively 450 m along track and ~1 km across track, implemented from an airborne prototype originally termed a delay-Doppler altimeter. Additionally, CryoSat-2 uses the phase difference between its two antennas to estimate surface slope in the across-track direction and identify the point of closest approach directly. The phase difference is 2π for a surface slope of approximately 1 degree. If the slope is above this threshold, the tracked surface in the returned waveform may not be the point of closest approach, causing an error in slope correction. For this reason, the analysis was limited to slopes of 1 degree or less in this study. We used extensive coverage of Antarctica provided by the ICESat laser altimeter mission between 2003 and 2009 to assess the accuracy of SARInM data. We corrected for changes in elevations due to the interval between the acquisition of the ICESat and CryoSat-2 data (July 2010 to December 2013). Two methods were used: (1) the ICESat point was compared with a DEM derived from CryoSat-2 data (Point-to-DEM; PtoDEM), and (2) the ICESat point was compared with a CryoSat-2 point directly (Point-to-Point; PtoP). For PtoDEM, CryoSat-2 elevations were interpolated onto a regular 1 km polar stereographic grid with a standard parallel of 71°S, using ordinary kriging. For PtoP, the maximum distance between a CryoSat-2 point location and ICESat point location was set to 35 m. 
For the areas with slopes less than 0.2deg, the

  1. Analysis of accuracy in optical motion capture - A protocol for laboratory setup evaluation.

    PubMed

    Eichelberger, Patric; Ferraro, Matteo; Minder, Ursina; Denton, Trevor; Blasimann, Angela; Krause, Fabian; Baur, Heiner

    2016-07-01

    Validity and reliability as scientific quality criteria have to be considered when using optical motion capture (OMC) for research purposes. Literature and standards recommend individual laboratory setup evaluation. However, system characteristics such as trueness, precision and uncertainty are often not addressed in scientific reports on 3D human movement analysis. One reason may be the lack of simple and practical methods for evaluating accuracy parameters of OMC. A protocol was developed for investigating the accuracy of an OMC system (Vicon, volume 5.5×1.2×2.0 m³) with standard laboratory equipment and by means of trueness and uncertainty of marker distances. The study investigated the effects of number of cameras (6, 8 and 10), measurement height (foot, knee and hip) and movement condition (static and dynamic) on accuracy. Number of cameras, height and movement condition affected system accuracy significantly. For lower body assessment during level walking, the most favorable setting (10 cameras, foot region) revealed mean trueness and uncertainty to be -0.08 and 0.33 mm, respectively. Dynamic accuracy cannot be predicted based on static error assessments. Dynamic procedures have to be used instead. The significant influence of the number of cameras and the measurement location suggests that instrumental errors should be evaluated in a laboratory- and task-specific manner. The use of standard laboratory equipment makes the proposed procedure widely applicable and it supports the setup process of OMC by simple functional error assessment. Careful system configuration and thorough measurement process control are needed to produce high-quality data. PMID:27230474
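Trueness and uncertainty of marker distances, as used in the protocol above, can be computed from repeated measurements of a known reference length; the measurements below are hypothetical:

```python
import math

def trueness_and_uncertainty(measured, reference):
    # Trueness: mean signed deviation from the known reference distance.
    # Uncertainty: sample standard deviation of the deviations.
    errs = [m - reference for m in measured]
    mean_err = sum(errs) / len(errs)
    var = sum((e - mean_err) ** 2 for e in errs) / (len(errs) - 1)
    return mean_err, math.sqrt(var)

# Hypothetical repeated inter-marker distance measurements (mm) of a
# 500.00 mm reference bar moved through the capture volume.
measured = [499.93, 499.88, 500.05, 499.90, 499.96, 499.85]
trueness, uncertainty = trueness_and_uncertainty(measured, 500.00)
```

A negative trueness indicates the system systematically underestimates the distance; the uncertainty quantifies the scatter around that bias.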

  2. The analysis accuracy assessment of CORINE land cover in the Iberian coast

    NASA Astrophysics Data System (ADS)

    Grullón, Yraida R.; Alhaddad, Bahaaeddin; Cladera, Josep R.

    2009-09-01

    Corine land cover 2000 (CLC2000) is a project jointly managed by the Joint Research Centre (JRC) and the European Environment Agency (EEA). Its aim is to update the Corine land cover database in Europe for the year 2000. Landsat-7 Enhanced Thematic Mapper (ETM) satellite images were used for the update and were acquired within the framework of the Image2000 project. Knowledge of land status through CORINE Land Cover mapping is of great importance for studying the interaction of land cover and land use categories at the European scale. This paper presents the accuracy assessment methodology designed and implemented to validate the Iberian Coast CORINE Land Cover 2000 cartography. It presents an implementation of a new methodological concept for land cover data production, Object-Based classification, and automatic generalization to assess the thematic accuracy of CLC2000 by means of an independent data source, based on the comparison of the land cover database with reference data derived from visual interpretation of high resolution satellite imageries for sample areas. In our case study, the existing Object-Based classifications are supported with digital maps and attribute databases. According to the quality tests performed, we computed the overall accuracy and Kappa Coefficient. We will focus on the development of a methodology based on classification and generalization analysis for built-up areas that may improve the investigation. This study can be divided into these fundamental steps: -Extract artificial areas from land use classifications based on Landsat and Spot images. -Manual interpretation of high resolution multispectral images. -Determine the homogeneity of artificial areas by a generalization process. -Overall accuracy, Kappa Coefficient and special grid (fishnet) tests for quality assessment. Finally, this paper illustrates the accuracy of the CORINE dataset based on the above steps.
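The overall accuracy and Kappa Coefficient quality tests mentioned above are computed from an error (confusion) matrix; the counts below are hypothetical, not the CLC2000 validation data:

```python
def overall_accuracy_and_kappa(cm):
    # cm: square confusion matrix, rows = reference classes, cols = mapped classes.
    n = sum(sum(row) for row in cm)
    # Observed agreement: fraction of samples on the diagonal.
    po = sum(cm[i][i] for i in range(len(cm))) / n
    # Chance agreement: product of marginal row/column proportions.
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm)
             for i in range(len(cm))) / n ** 2
    return po, (po - pe) / (1 - pe)

# Hypothetical 3-class error matrix (e.g. built-up, agricultural, forest).
cm = [[50, 3, 2],
      [4, 40, 6],
      [1, 5, 39]]
acc, kappa = overall_accuracy_and_kappa(cm)
```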

  3. Diagnostic Accuracy of Xpert Test in Tuberculosis Detection: A Systematic Review and Meta-analysis

    PubMed Central

    Kaur, Ravdeep; Kachroo, Kavita; Sharma, Jitendar Kumar; Vatturi, Satyanarayana Murthy; Dang, Amit

    2016-01-01

    Background: World Health Organization (WHO) recommends the use of Xpert MTB/RIF assay for rapid diagnosis of tuberculosis (TB) and detection of rifampicin resistance. This systematic review was done to know about the diagnostic accuracy and cost-effectiveness of the Xpert MTB/RIF assay. Methods: A systematic literature search was conducted in following databases: Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews, MEDLINE, PUBMED, Scopus, Science Direct and Google Scholar for relevant studies published between 2010 and December 2014. Studies given in the systematic reviews were accessed separately and used for analysis. Selection of studies, data extraction and assessment of quality of included studies was performed independently by two reviewers. Studies evaluating the diagnostic accuracy of Xpert MTB/RIF assay among adult or predominantly adult patients (≥14 years), presumed to have pulmonary TB with or without HIV infection were included in the review. Also, studies that had assessed the diagnostic accuracy of Xpert MTB/RIF assay using sputum and other respiratory specimens were included. Results: The included studies had a low risk of any form of bias, showing that findings are of high scientific validity and credibility. Quantitative analysis of 37 included studies shows that Xpert MTB/RIF is an accurate diagnostic test for TB and detection of rifampicin resistance. Conclusion: Xpert MTB/RIF assay is a robust, sensitive and specific test for accurate diagnosis of tuberculosis as compared to conventional tests like culture and microscopic examination. PMID:27013842

  4. Accuracy and reproducibility of bending stiffness measurements by mechanical response tissue analysis in artificial human ulnas.

    PubMed

    Arnold, Patricia A; Ellerbrock, Emily R; Bowman, Lyn; Loucks, Anne B

    2014-11-01

    Osteoporosis is characterized by reduced bone strength, but no FDA-approved medical device measures bone strength. Bone strength is strongly associated with bone stiffness, but no FDA-approved medical device measures bone stiffness either. Mechanical Response Tissue Analysis (MRTA) is a non-significant risk, non-invasive, radiation-free, vibration analysis technique for making immediate, direct functional measurements of the bending stiffness of long bones in humans in vivo. MRTA has been used for research purposes for more than 20 years, but little has been published about its accuracy. To begin to investigate its accuracy, we compared MRTA measurements of bending stiffness in 39 artificial human ulna bones to measurements made by Quasistatic Mechanical Testing (QMT). In the process, we also quantified the reproducibility (i.e., precision and repeatability) of both methods. MRTA precision (1.0 ± 1.0%) and repeatability (3.1 ± 3.1%) were not as high as those of QMT (0.2 ± 0.2% and 1.3 ± 1.7%, respectively; both p < 10⁻⁴). The relationship between MRTA and QMT measurements of ulna bending stiffness was indistinguishable from the identity line (p=0.44) and paired measurements by the two methods agreed within a 95% confidence interval of ± 5%. If such accuracy can be achieved on real human ulnas in situ, and if the ulna is representative of the appendicular skeleton, MRTA may prove clinically useful. PMID:25261885

  5. Menu label accuracy at a university's foodservices. An exploratory recipe nutrition analysis.

    PubMed

    Feldman, Charles; Murray, Douglas; Chavarria, Stephanie; Zhao, Hang

    2015-09-01

    The increase in the weight of American adults and children has been positively associated with the prevalence of the consumption of food-away-from-home. The objective was to assess the accuracy of claimed nutritional information of foods purchased in contracted foodservices located on the campus of an institution of higher education. Fifty popular food items were randomly collected from five main dining outlets located on a selected campus in the northeastern United States. The sampling was repeated three times on separate occasions for an aggregate total of 150 food samples. The samples were then weighed and assessed for nutrient composition (protein, cholesterol, fiber, carbohydrates, total fat, calories, sugar, and sodium) using nutrient analysis software. Results were compared with foodservices' published nutrition information. Two group comparisons, claimed and measured, were performed using the paired-sample t-test. Descriptive statistics were used as well. Among the nine nutritional values, six nutrients (total fat, sodium, protein, fiber, cholesterol, and weight) had more than 10% positive average discrepancies between measured and claimed values. Statistical significance of the variance was obtained in four of the eight categories of nutrient content: total fat, sodium, protein, and cholesterol (P < .05). Significance was also reached in the variance of actual portion weight compared to the published claims (P < .001). Significant differences of portion size (weight), total fat, sodium, protein, and cholesterol were found among the sampled values and the foodservices' published claims. The findings from this study raise the concern that if the actual nutritional information does not accurately reflect the declared values on menus, conclusions, decisions and actions based on posted information may not be valid. PMID:25958116
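The paired-sample t-test used above to compare claimed and measured values can be sketched as follows; the sodium values are hypothetical, not the study's samples:

```python
import math

def paired_t(claimed, measured):
    # Paired-sample t statistic for the measured - claimed differences.
    d = [m - c for c, m in zip(claimed, measured)]
    n = len(d)
    mean_d = sum(d) / n
    sd = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
    return mean_d / (sd / math.sqrt(n)), n - 1  # (t statistic, degrees of freedom)

# Hypothetical sodium content (mg) for five menu items: claimed vs lab-measured.
claimed = [480, 650, 720, 530, 810]
measured = [560, 700, 790, 610, 880]
t_stat, df = paired_t(claimed, measured)
```

With scipy available, `scipy.stats.ttest_rel` returns the same statistic together with the two-sided p-value.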

  6. Increasing the Accuracy in the Measurement of the Minor Isotopes of Uranium: Care in Selection of Reference Materials, Baselines and Detector Calibration

    NASA Astrophysics Data System (ADS)

    Poths, J.; Koepf, A.; Boulyga, S. F.

    2008-12-01

    The minor isotopes of uranium (U-233, U-234, U-236) are increasingly useful for tracing a variety of processes: movement of anthropogenic nuclides in the environment (ref 1), sources of uranium ores (ref 2), and nuclear material attribution (ref 3). We report on improved accuracy for U-234/238 and U-236/238 by supplementing total evaporation protocol TIMS measurement on Faraday detectors (ref 4) with multiplier measurement for the minor isotopes. Measurement of small signals on Faraday detectors alone is limited by noise floors of the amplifiers and accurate measurement of the baseline offsets. The combined detector approach improves the reproducibility to better than ±1% (relative) for the U-234/238 at natural abundance, and yields a detection limit for U-236/U-238 of <0.2 ppm. We have quantified the contribution of different factors to the uncertainties associated with these peak-jumping measurements on a single detector, with an aim of further improvement. The uncertainties in the certified values for U-234 and U-236 in the uranium standard NBS U005, if used for mass bias correction, dominate the uncertainty in their isotopic ratio measurements. Software limitations in baseline measurement drive the detection limit for the U-236/U-238 ratio. This is a topic for discussion with the instrument manufacturers. Finally, deviation from linearity of the response of the electron multiplier with count rate limits the accuracy and reproducibility of these minor isotope measurements. References: (1) P. Steier et al (2008) Nuc Inst Meth (B), 266, 2246-2250. (2) E. Keegan et al (2008) Appl Geochem 23, 765-777. (3) K. Mayer et al (1998) IAEA-CN-98/11, in Advances in Destructive and Non-destructive Analysis for Environmental Monitoring and Nuclear Forensics. (4) S. Richter and S. Goldberg (2003) Int J Mass Spectrom, 229, 181-197.

  7. Shortening the retention interval of 24-hour dietary recalls increases fourth-grade children’s accuracy for reporting energy and macronutrient intake at school meals

    PubMed Central

    Guinn, Caroline H.; Royer, Julie A.; Hardin, James W.; Mackelprang, Alyssa J.; Smith, Albert F.

    2010-01-01

    Background Accurate information about children’s intake is crucial for national nutrition policy and for research and clinical activities. To analyze accuracy for reporting energy and nutrients, most validation studies utilize the conventional approach which was not designed to capture errors of reported foods and amounts. The reporting-error-sensitive approach captures errors of reported foods and amounts. Objective To extend results to energy and macronutrients for a validation study concerning retention interval (elapsed time between to-be-reported meals and the interview) and accuracy for reporting school-meal intake, the conventional and reporting-error-sensitive approaches were compared. Design and participants/setting Fourth-grade children (n=374) were observed eating two school meals, and interviewed to obtain a 24-hour recall using one of six interview conditions from crossing two target periods (prior-24-hours; previous-day) with three interview times (morning; afternoon; evening). Data were collected in one district during three school years (2004–2005; 2005–2006; 2006–2007). Main outcome measures Report rates (reported/observed), correspondence rates (correctly reported/observed), and inflation ratios (intruded/observed) were calculated for energy and macronutrients. Statistical analyses performed For each outcome measure, mixed-model analysis of variance was conducted with target period, interview time, their interaction, and sex in the model; results were adjusted for school year and interviewer. Results Conventional approach — Report rates for energy and macronutrients did not differ by target period, interview time, their interaction, or sex. Reporting-error-sensitive approach — Correspondence rates for energy and macronutrients differed by target period (four P-values<0.0001) and the target-period by interview-time interaction (four P-values<0.0001); inflation ratios for energy and macronutrients differed by target period (four P

  8. The Accuracy of Diagnostic Methods for Diabetic Retinopathy: A Systematic Review and Meta-Analysis

    PubMed Central

    Martínez-Vizcaíno, Vicente; Cavero-Redondo, Iván; Álvarez-Bueno, Celia; Rodríguez-Artalejo, Fernando

    2016-01-01

    Objective The objective of this study was to evaluate the accuracy of the recommended glycemic measures for diagnosing diabetic retinopathy. Methods We systematically searched MEDLINE, EMBASE, the Cochrane Library, and the Web of Science databases from inception to July 2015 for observational studies comparing the diagnostic accuracy of glycated hemoglobin (HbA1c), fasting plasma glucose (FPG), and 2-hour plasma glucose (2h-PG). Random effects models for the diagnostic odds ratio (dOR) value computed by Moses’ constant for a linear model and 95% CIs were used to calculate the accuracy of the test. Hierarchical summary receiver operating characteristic curves (HSROC) were used to summarize the overall test performance. Results Eleven published studies were included in the meta-analysis. The pooled dOR values for the diagnosis of retinopathy were 16.32 (95% CI 13.86–19.22) for HbA1c and 4.87 (95% CI 4.39–5.40) for FPG. The area under the HSROC was 0.837 (95% CI 0.781–0.892) for HbA1c and 0.735 (95% CI 0.657–0.813) for FPG. The 95% confidence region for the point that summarizes the overall test performance of the included studies occurs where the cut-offs ranged from 6.1% (43.2 mmol/mol) to 7.8% (61.7 mmol/mol) for HbA1c and from 7.8 to 9.3 mmol/L for FPG. In the four studies that provided information regarding 2h-PG, the pooled accuracy estimates for HbA1c were similar to those of 2h-PG; the overall performance for HbA1c was superior to that for FPG. Conclusions The three recommended tests for the diagnosis of type 2 diabetes in nonpregnant adults showed sufficient accuracy for their use in clinical settings, although the overall accuracy for the diagnosis of retinopathy was similar for HbA1c and 2h-PG, which were both more accurate than FPG. Due to the variability and inconveniences of the glucose level-based methods, HbA1c appears to be the most appropriate method for the diagnosis of diabetic retinopathy. PMID:27123641
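The diagnostic odds ratio underlying the pooled estimates above is computed per study from a 2×2 table; below is a minimal sketch with Woolf's log-scale confidence interval and hypothetical counts:

```python
import math

def diagnostic_odds_ratio(tp, fp, fn, tn):
    # dOR = (TP * TN) / (FP * FN); 95% CI via Woolf's log-scale standard error.
    dor = (tp * tn) / (fp * fn)
    se_log = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    lo = math.exp(math.log(dor) - 1.96 * se_log)
    hi = math.exp(math.log(dor) + 1.96 * se_log)
    return dor, (lo, hi)

# Hypothetical single-study 2x2 counts for HbA1c vs retinopathy status.
dor, ci = diagnostic_odds_ratio(tp=90, fp=20, fn=10, tn=80)
```

Study-level log(dOR) values, weighted by the inverse of their variances, are then combined in the random-effects meta-analysis.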

  9. Superior accuracy of model-based radiostereometric analysis for measurement of polyethylene wear

    PubMed Central

    Stilling, M.; Kold, S.; de Raedt, S.; Andersen, N. T.; Rahbek, O.; Søballe, K.

    2012-01-01

    Objectives The accuracy and precision of two new methods of model-based radiostereometric analysis (RSA) were hypothesised to be superior to a plain radiograph method in the assessment of polyethylene (PE) wear. Methods A phantom device was constructed to simulate three-dimensional (3D) PE wear. Images were obtained consecutively for each simulated wear position for each modality. Three commercially available packages were evaluated: model-based RSA using laser-scanned cup models (MB-RSA), model-based RSA using computer-generated elementary geometrical shape models (EGS-RSA), and PolyWare. Precision (95% repeatability limits) and accuracy (Root Mean Square Errors) for two-dimensional (2D) and 3D wear measurements were assessed. Results The precision for 2D wear measures was 0.078 mm, 0.102 mm, and 0.076 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. For the 3D wear measures the precision was 0.185 mm, 0.189 mm, and 0.244 mm for EGS-RSA, MB-RSA, and PolyWare respectively. Repeatability was similar for all methods within the same dimension, when compared between 2D and 3D (all p > 0.28). For 2D measurements, accuracy was below 0.055 mm for the RSA methods and at least 0.335 mm for PolyWare. For 3D measurements, accuracy was 0.1 mm, 0.2 mm, and 0.3 mm for EGS-RSA, MB-RSA and PolyWare respectively. PolyWare was less accurate compared with RSA methods (p = 0.036). No difference was observed between the RSA methods (p = 0.10). Conclusions For all methods, precision and accuracy were better in 2D, with RSA methods being superior in accuracy. Although less accurate and precise, 3D RSA defines the clinically relevant wear pattern (multidirectional). PolyWare is a good and low-cost alternative to RSA, despite being less accurate and requiring a larger sample size. PMID:23610688
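The two outcome measures used above, Root Mean Square Error against the phantom's known wear (accuracy) and 95% repeatability limits from repeated measurements (precision), can be sketched as follows; the wear values are hypothetical:

```python
import math

def rmse(measured, truth):
    # Root Mean Square Error against the phantom-imposed true wear.
    return math.sqrt(sum((m - t) ** 2 for m, t in zip(measured, truth))
                     / len(measured))

def repeatability_limits(first, second):
    # 95% repeatability limits: 1.96 x SD of the differences between
    # paired repeated measurements of the same wear positions.
    d = [a - b for a, b in zip(first, second)]
    mean_d = sum(d) / len(d)
    sd = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (len(d) - 1))
    return 1.96 * sd

# Hypothetical 2D wear measurements (mm) on a phantom with known displacements.
truth = [0.10, 0.20, 0.30, 0.40, 0.50]
run1 = [0.12, 0.18, 0.33, 0.41, 0.47]
run2 = [0.09, 0.21, 0.31, 0.38, 0.52]
acc = rmse(run1, truth)
rep = repeatability_limits(run1, run2)
```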

  10. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  11. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  12. Future dedicated Venus-SGG flight mission: Accuracy assessment and performance analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Houtse; Zhong, Min; Yun, Meijuan

    2016-01-01

    This study concentrates principally on the systematic requirements analysis for the future dedicated Venus-SGG (spacecraft gravity gradiometry) flight mission in China, with respect to the matching measurement accuracies of the spacecraft-based scientific instruments and the orbital parameters of the spacecraft. Firstly, we created and verified single and combined analytical error models of the cumulative Venusian geoid height as influenced by the gravity gradient error of the spacecraft-borne atom-interferometer gravity gradiometer (AIGG) and by the orbital position and velocity errors tracked by the deep space network (DSN) Earth stations. Secondly, weighing the advantages and disadvantages of the electrostatically suspended gravity gradiometer, the superconducting gravity gradiometer and the AIGG, the ultra-high-precision spacecraft-borne AIGG is well suited to making a significant contribution to globally mapping the Venusian gravitational field and modeling the geoid with unprecedented accuracy and spatial resolution. Finally, the future dedicated Venus-SGG spacecraft should adopt optimal matching accuracy indices of 3 × 10⁻¹³/s² in gravity gradient, 10 m in orbital position and 8 × 10⁻⁴ m/s in orbital velocity, and preferred orbital parameters comprising an orbital altitude of 300 ± 50 km, an observation time of 60 months and a sampling interval of 1 s.

  13. Accuracy Analysis and Validation of the Mars Science Laboratory (MSL) Robotic Arm

    NASA Technical Reports Server (NTRS)

    Collins, Curtis L.; Robinson, Matthew L.

    2013-01-01

    The Mars Science Laboratory (MSL) Curiosity Rover is currently exploring the surface of Mars with a suite of tools and instruments mounted to the end of a five degree-of-freedom robotic arm. To verify and meet a set of end-to-end system level accuracy requirements, a detailed positioning uncertainty model of the arm was developed and exercised over the arm operational workspace. Error sources at each link in the arm kinematic chain were estimated and their effects propagated to the tool frames. A rigorous test and measurement program was developed and implemented to collect data to characterize and calibrate the kinematic and stiffness parameters of the arm. Numerous absolute and relative accuracy and repeatability requirements were validated with a combination of analysis and test data extrapolated to the Mars gravity and thermal environment. Initial results of arm accuracy and repeatability on Mars demonstrate the effectiveness of the modeling and test program as the rover continues to explore the foothills of Mount Sharp.

  14. Accuracy of surface tension measurement from drop shapes: the role of image analysis.

    PubMed

    Kalantarian, Ali; Saad, Sameh M I; Neumann, A Wilhelm

    2013-11-01

    Axisymmetric Drop Shape Analysis (ADSA) has been extensively used for surface tension measurement. In essence, ADSA works by matching a theoretical profile of the drop to the extracted experimental profile, taking surface tension as an adjustable parameter. Of the three main building blocks of ADSA, i.e. edge detection, the numerical integration of the Laplace equation for generating theoretical curves and the optimization procedure, only edge detection (that extracts the drop profile line from the drop image) needs extensive study. For the purpose of this article, the numerical integration of the Laplace equation for generating theoretical curves and the optimization procedure will only require a minor effort. It is the aim of this paper to investigate how far the surface tension accuracy of drop shape techniques can be pushed by fine tuning and optimizing edge detection strategies for a given drop image. Two different aspects of edge detection are pursued here: sub-pixel resolution and pixel resolution. The effect of two sub-pixel resolution strategies, i.e. spline and sigmoid, on the accuracy of surface tension measurement is investigated. It is found that the number of pixel points in the fitting procedure of the sub-pixel resolution techniques is crucial, and its value should be determined based on the contrast of the image, i.e. the gray level difference between the drop and the background. On the pixel resolution side, two suitable and reliable edge detectors, i.e. Canny and SUSAN, are explored, and the effect of user-specified parameters of the edge detector on the accuracy of surface tension measurement is scrutinized. Based on the contrast of the image, an optimum value of the user-specified parameter of the edge detector, SUSAN, is suggested. Overall, an accuracy of 0.01 mJ/m² is achievable for the surface tension determination by careful fine tuning of edge detection algorithms. PMID:24018120
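The sub-pixel edge-location idea can be illustrated with a much simpler stand-in for the spline and sigmoid fits discussed in the abstract: linear interpolation of the gray-level threshold crossing along a one-dimensional profile. This is a hypothetical simplification for illustration, not the ADSA implementation:

```python
def subpixel_edge(profile, threshold):
    """Locate an edge along a 1-D gray-level profile with sub-pixel
    precision by linearly interpolating the first threshold crossing.
    Returns a fractional pixel index, or None if no crossing exists."""
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if (a - threshold) * (b - threshold) <= 0 and a != b:
            return i + (threshold - a) / (b - a)
    return None

# hypothetical profile: dark drop interior rising to a bright background
profile = [10, 12, 15, 80, 200, 230, 235]
print(subpixel_edge(profile, 128))  # crossing lies between pixels 3 and 4
```

As the abstract notes, the reliability of any such fit depends on the image contrast: the smaller the gray-level difference between drop and background, the more pixels the fit needs.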

  15. Design and accuracy analysis of a metamorphic CNC flame cutting machine for ship manufacturing

    NASA Astrophysics Data System (ADS)

    Hu, Shenghai; Zhang, Manhui; Zhang, Baoping; Chen, Xi; Yu, Wei

    2016-05-01

    Current research on processing large fabrication holes on complex spatial curved surfaces mainly focuses on the design of CNC flame cutting machines for ship hulls in ship manufacturing. However, the existing machines cannot meet continuous cutting requirements under variable pass conditions because of their fixed configuration, and cannot realize high-precision processing because the accuracy theory has not been studied adequately. This paper deals with the structural design and accuracy prediction technology of novel machine tools to solve the problem of continuous and high-precision cutting. The required variable-trajectory and variable-pose kinematic characteristics of the non-contact cutting tool are determined, and a metamorphic CNC flame cutting machine designed according to the metamorphic principle is presented. To analyze the kinematic accuracy of the machine, models of joint clearances, manufacturing tolerances and errors in the input variables, as well as error models considering their combined effects, are derived based on screw theory after establishing ideal kinematic models. Numerical simulations, a processing experiment and a trajectory-tracking experiment are conducted on an eccentric hole with bevels on a cylindrical surface. The results for the cutting pass contour and the kinematic error interval, in which the position error ranges from -0.975 mm to +0.628 mm and the orientation error from -0.01 rad to +0.01 rad, indicate that the developed machine can complete the cutting process continuously and effectively, and that the established kinematic error models are effective even though the interval lies within a `large' range. The results also show the match between the metamorphic principle and variable working tasks, and the mapping between the original design parameters and the kinematic errors of the machine. This research develops a metamorphic CNC flame cutting machine and establishes kinematic error models for the accuracy analysis of machine tools.

  16. A critical analysis of the accuracy of several numerical techniques for combustion kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1993-01-01

    A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
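For readers unfamiliar with stiff integrators such as LSODE and EPISODE, the key ingredient is an implicit time step. A minimal sketch of the simplest implicit scheme (backward Euler with Newton iteration) applied to a hypothetical stiff test equation, not one of the paper's combustion mechanisms:

```python
import math

def backward_euler(f, dfdy, y0, t0, t1, n):
    """Implicit (backward) Euler for a scalar ODE y' = f(t, y).
    Each step solves y_next = y + h*f(t_next, y_next) by Newton iteration;
    the implicitness is what keeps the scheme stable on stiff problems."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        t_next = t + h
        y_next = y  # initial Newton guess
        for _ in range(50):
            g = y_next - y - h * f(t_next, y_next)
            dg = 1.0 - h * dfdy(t_next, y_next)
            step = g / dg
            y_next -= step
            if abs(step) < 1e-12:
                break
        t, y = t_next, y_next
    return y

# hypothetical stiff test problem y' = -50*(y - cos(t)), y(0) = 0:
# the solution relaxes rapidly onto a slow manifold near cos(t)
y_end = backward_euler(lambda t, y: -50.0 * (y - math.cos(t)),
                       lambda t, y: -50.0, 0.0, 0.0, 1.0, 200)
print(round(y_end, 3))
```

Production codes such as LSODE use higher-order backward differentiation formulas with adaptive step-size and error control, but the implicit step-and-solve structure is the same.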

  17. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to sample the lunar surface and return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a camera rotating platform. Optical images of the sampling area can be obtained by the PCAM in the form of two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images, from which the lunar terrain can be reconstructed by photogrammetry. The installation parameters of the PCAM with respect to the CE-5 lander are critical for calculating the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied. The observation program and the specific solution methods for the installation parameters are then introduced. The accuracy of the parametric solution is analyzed using observations obtained from the PCAM scientific validation experiment, which is used to test the authenticity of the PCAM detection process, ground data processing methods, product quality and so on. The analysis shows that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images within 1 pixel. The measurement methods and parameter accuracy studied in this paper therefore meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  18. Spot detection accuracy analysis in turbulent channel for free space optical communication

    NASA Astrophysics Data System (ADS)

    Liu, Yan-Fei; Dai, Yong-Hong; Yu, Sheng-Lin; Xin, Shan; Chen, Jing; Ai, Yong

    2015-10-01

    High-frame-rate CMOS cameras have become increasingly important in acquisition, pointing and tracking (APT) systems for optical communication, owing to their compact structure, ease of development and suitability for beacon-spot detection in the atmospheric channel. Because spot position accuracy directly determines the performance of space optical communication, it is very important to design a high-precision spot-center algorithm. Spot location algorithms typically use the gravity-center algorithm, the shape-center algorithm or a self-adaptive threshold algorithm. In experiments we analyzed the characteristics of spots transmitted through atmospheric turbulence and studied light transmission characteristics in the turbulent channel. We carried out beacon-light detection experiments over a distance of 3.4 km, collecting the beacon spots on a CMOS camera together with the signal light power. We calculated spot positions with two different algorithms and compared the calculation accuracy between field dispersive spots and an ideal Gaussian laser spot. The experiments show that the gravity-center algorithm is more suitable for beacon beam spots, improving accuracy by about 1.3 pixels for a Gaussian spot, while the shape-center algorithm has higher precision. The reasons were analyzed, providing important preparation for subsequent testing.
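The gravity-center (centroid) algorithm compared in the abstract amounts to an intensity-weighted average of pixel coordinates. A minimal sketch on a hypothetical toy image (list of rows):

```python
def spot_centroid(image, threshold=0):
    """Intensity-weighted center of gravity of a spot image.
    Pixels at or below the threshold are ignored; returns (x, y)."""
    m = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            w = v - threshold
            if w > 0:
                m += w
                sx += w * x
                sy += w * y
    return (sx / m, sy / m)

# symmetric 3x3 Gaussian-like spot centered at pixel (1, 1)
img = [[0, 1, 0],
       [1, 4, 1],
       [0, 1, 0]]
print(spot_centroid(img))  # → (1.0, 1.0)
```

For a turbulence-distorted (dispersive) spot the weighting makes the estimate sensitive to background noise, which is why a threshold, or the shape-center alternative mentioned in the abstract, can improve precision.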

  19. Accuracy of finite-element models for the stress analysis of multiple-holed moderator blocks

    SciTech Connect

    Smith, P.D.; Sullivan, R.M.; Lewis, A.C.; Yu, H.J.

    1981-01-01

    Two steps have been taken to quantify and improve the accuracy in the analysis. First, the limitations of various approximation techniques have been studied with the aid of smaller benchmark problems containing fewer holes. Second, a new family of computer programs has been developed for handling such large problems. This paper describes the accuracy studies and the benchmark problems. A review is given of some proposed modeling techniques including local mesh refinement, homogenization, a special-purpose finite element, and substructuring. Some limitations of these approaches are discussed. The new finite element programs and the features that contribute to their efficiency are discussed. These include a standard architecture for out-of-core data processing and an equation solver that operates on a peripheral array processor. The central conclusions of the paper are: (1) modeling approximation methods such as local mesh refinement and homogenization tend to be unreliable, and they should be justified by a fine mesh benchmark analysis; and (2) finite element codes are now available that can achieve accurate solutions at a reasonable cost, and there is no longer a need to employ modeling approximations in the two-dimensional analysis of HTGR fuel elements. 10 figures.

  20. Methodology issues concerning the accuracy of kinematic data collection and analysis using the ariel performance analysis system

    NASA Technical Reports Server (NTRS)

    Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)

    1992-01-01

    Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics as well as mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g. force plate data collection) as well as digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to evaluate the accuracy impact due to a single axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.

  1. The Accuracy of the Swallowing Kinematic Analysis at Various Movement Velocities of the Hyoid and Epiglottis

    PubMed Central

    Lee, Seung Hak; Chun, Seong Min; Lee, Jung Chan; Min, Yusun; Bang, Sang-Heum; Kim, Hee Chan; Han, Tai Ryoon

    2013-01-01

    Objective To evaluate the accuracy of the swallowing kinematic analysis. Methods To evaluate the accuracy at various velocities of movement, we developed an instrumental model of linear and rotational movement, representing the physiologic movement of the hyoid and epiglottis, respectively. A still image of 8 objects was also used for measuring the length of the objects as a basic screening, and 18 movie files of the instrumental model, taken from videofluoroscopy with different velocities. The images and movie files were digitized and analyzed by an experienced examiner, who was blinded to the study. Results The Pearson correlation coefficients between the measured and instrumental reference values were over 0.99 (p<0.001) for all of the analyses. Bland-Altman plots showed narrow ranges of the 95% confidence interval of agreement between the measured and reference values as follows: 0.14 to 0.94 mm for distances in a still image, -0.14 to 1.09 mm/s for linear velocities, and -1.02 to 3.81 degree/s for angular velocities. Conclusion Our findings demonstrate that the distance and velocity measurements obtained by swallowing kinematic analysis are highly valid in a wide range of movement velocity. PMID:23869329
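The Bland-Altman limits of agreement reported above are simple to compute: the mean of the measurement differences plus or minus 1.96 standard deviations. A sketch with hypothetical displacement values, not data from the study:

```python
import math

def bland_altman_limits(measured, reference):
    """Bland-Altman 95% limits of agreement between two measurement
    methods: mean difference +/- 1.96 standard deviations."""
    diffs = [m - r for m, r in zip(measured, reference)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean - 1.96 * sd, mean + 1.96 * sd

# hypothetical hyoid-displacement measurements vs. reference values (mm)
measured  = [10.2, 12.1, 9.8, 11.5, 10.9]
reference = [10.0, 12.0, 10.0, 11.3, 10.8]
lo, hi = bland_altman_limits(measured, reference)
print(round(lo, 3), round(hi, 3))
```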

  2. Combined Scintigraphy and Tumor Marker Analysis Predicts Unfavorable Histopathology of Neuroblastic Tumors with High Accuracy

    PubMed Central

    Fendler, Wolfgang Peter; Wenter, Vera; Thornton, Henriette Ingrid; Ilhan, Harun; von Schweinitz, Dietrich; Coppenrath, Eva; Schmid, Irene; Bartenstein, Peter; Pfluger, Thomas

    2015-01-01

    Objectives Our aim was to improve the prediction of unfavorable histopathology (UH) in neuroblastic tumors through combined imaging and biochemical parameters. Methods 123I-MIBG SPECT and MRI were performed before surgical resection or biopsy in 47 consecutive pediatric patients with neuroblastic tumor. The semi-quantitative tumor-to-liver count-rate ratio (TLCRR), MRI tumor size and margins, urine catecholamine levels and blood levels of neuron-specific enolase (NSE) were recorded. The accuracy of single and combined variables for prediction of UH was tested by ROC analysis with Bonferroni correction. Results 34 of 47 patients had UH based on the International Neuroblastoma Pathology Classification (INPC). TLCRR and serum NSE both predicted UH with moderate accuracy. The optimal cut-off for TLCRR was 2.0, resulting in 68% sensitivity and 100% specificity (AUC-ROC 0.86, p < 0.001). The optimal cut-off for NSE was 25.8 ng/ml, resulting in 74% sensitivity and 85% specificity (AUC-ROC 0.81, p = 0.001). Combining the TLCRR and NSE criteria reduced false-negative findings from 11 (TLCRR) and nine (NSE) to only five, with improved sensitivity and specificity of 85% (AUC-ROC 0.85, p < 0.001). Conclusion Strong 123I-MIBG uptake and a high serum level of NSE were each predictive of UH. Combined analysis of both parameters improved the prediction of UH in patients with neuroblastic tumor. MRI parameters and urine catecholamine levels did not predict UH. PMID:26177109
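Sensitivity and specificity at a fixed cut-off, as reported above, reduce to simple counting over the two classes. A sketch with hypothetical toy values, not the study's patient data:

```python
def sens_spec(values, labels, cutoff):
    """Sensitivity and specificity of a 'value >= cutoff means positive'
    test; labels are True for disease-positive cases."""
    tp = sum(1 for v, l in zip(values, labels) if l and v >= cutoff)
    fn = sum(1 for v, l in zip(values, labels) if l and v < cutoff)
    tn = sum(1 for v, l in zip(values, labels) if not l and v < cutoff)
    fp = sum(1 for v, l in zip(values, labels) if not l and v >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical TLCRR values and unfavorable-histopathology labels
tlcrr = [2.5, 1.1, 3.0, 1.8, 2.2, 0.9]
uh    = [True, False, True, True, True, False]
print(sens_spec(tlcrr, uh, 2.0))  # → (0.75, 1.0)
```

Sweeping the cutoff over all observed values and plotting sensitivity against (1 - specificity) yields the ROC curve used in the study.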

  3. Accuracy evaluation of a new stereophotogrammetry-based functional method for joint kinematic analysis in biomechanics.

    PubMed

    Galetto, Maurizio; Gastaldi, Laura; Lisco, Giulia; Mastrogiacomo, Luca; Pastorelli, Stefano

    2014-11-01

    The human joint kinematics is an interesting topic in biomechanics and turns to be useful for the analysis of human movement in several fields. A crucial issue regards the assessment of joint parameters, like axes and centers of rotation, due to the direct influence on human motion patterns. A proper accuracy in the estimation of these parameters is hence required. On the whole, stereophotogrammetry-based predictive methods and, as an alternative, functional ones can be used to this end. This article presents a new functional algorithm for the assessment of knee joint parameters, based on a polycentric hinge model for the knee flexion-extension. The proposed algorithm is discussed, identifying its fields of application and its limits. The techniques for estimating the joint parameters from the metrological point of view are analyzed, so as to lay the groundwork for enhancing and eventually replacing predictive methods, currently used in the laboratories of human movement analysis. This article also presents an assessment of the accuracy associated with the whole process of measurement and joint parameters estimation. To this end, the presented functional method is tested through both computer simulations and a series of experimental laboratory tests in which swing motions were imposed to a polycentric mechanical analogue and a stereophotogrammetric system was used to record them. PMID:25500863

  4. An analysis of the accuracy of magnetic resonance flip angle measurement methods

    NASA Astrophysics Data System (ADS)

    Morrell, Glen R.; Schabel, Matthias C.

    2010-10-01

    Several methods of flip angle mapping for magnetic resonance imaging have been proposed. We evaluated the accuracy of five methods of flip angle measurement in the presence of measurement noise. Our analysis was performed in a closed form by propagation of probability density functions (PDFs). The flip angle mapping methods compared were (1) the phase-sensitive method, (2) the dual-angle method using gradient recalled echoes (GRE), (3) an extended version of the GRE dual-angle method incorporating phase information, (4) the AFI method and (5) an extended version of the AFI method incorporating phase information. Our analysis took into account differences in required imaging time for these methods in the comparison of noise efficiency. PDFs of the flip angle estimate for each method for each value of true flip angle were calculated. These PDFs completely characterize the performance of each method. Mean bias and standard deviation were computed from these PDFs to more simply quantify the relative accuracy of each method over its range of measurable flip angles. We demonstrate that the phase-sensitive method provides the lowest mean bias and standard deviation of flip angle estimate of the five methods evaluated over a wide range of flip angles.
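As background to the dual-angle method referenced above: it estimates the flip angle from two acquisitions with nominal angles α and 2α, using the identity sin(2α)/(2 sin α) = cos α. A minimal sketch under the usual long-TR assumption, with hypothetical noiseless signal values:

```python
import math

def dual_angle_flip(s1, s2):
    """Dual-angle GRE flip-angle estimate: for signals s1 ∝ sin(α) and
    s2 ∝ sin(2α), the flip angle is α = arccos(s2 / (2*s1)).
    Assumes full longitudinal recovery between excitations (long TR)."""
    return math.degrees(math.acos(s2 / (2.0 * s1)))

true_alpha = 60.0  # degrees, hypothetical nominal flip angle
s1 = math.sin(math.radians(true_alpha))
s2 = math.sin(math.radians(2 * true_alpha))
print(round(dual_angle_flip(s1, s2), 1))  # → 60.0
```

With measurement noise, s2/(2*s1) becomes a random variable, which is exactly the situation the paper analyzes by propagating probability density functions through this mapping.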

  5. A Comparative Accuracy Analysis of Classification Methods in Determination of Cultivated Lands with Spot 5 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    kaya, S.; Alganci, U.; Sertel, E.; Ustundag, B.

    2013-12-01

    Cultivated land determination and area estimation are important tasks for agricultural management. The derived information is mostly used in agricultural policy and precision agriculture, specifically in yield estimation, irrigation and fertilization management, and verification of farmers' declarations. The use of satellite imagery in crop type identification and area estimation has been common for two decades owing to its capability of monitoring large areas, rapid data acquisition and spectral response to crop properties. With the launch of high and very high spatial resolution optical satellites in the last decade, such analyses have gained importance as they provide information at large scale. With the increasing spatial resolution of satellite images, the classification methods used to derive information from them have become important as the spectral heterogeneity within land objects increases. In this research, pixel-based classification with the maximum likelihood algorithm and object-based classification with the nearest neighbor algorithm were applied to 2.5 m resolution SPOT 5 satellite images acquired in 2012, in order to investigate the accuracy of these methods in determining cotton- and corn-planted lands and estimating their areas. The study area was selected in Sanliurfa Province in Southeastern Turkey, which contributes to Turkey's agricultural production in a major way. Classification results were compared in terms of crop type identification using

  6. High Accuracy Liquid Propellant Slosh Predictions Using an Integrated CFD and Controls Analysis Interface

    NASA Technical Reports Server (NTRS)

    Marsell, Brandon; Griffin, David; Schallhorn, Dr. Paul; Roth, Jacob

    2012-01-01

    Coupling computational fluid dynamics (CFD) with a controls analysis tool elegantly allows for high accuracy predictions of the interaction between sloshing liquid propellants and the control system of a launch vehicle. Instead of relying on mechanical analogs, which are not valid during all stages of flight, this method allows for a direct link between the vehicle dynamic environments calculated by the solver in the controls analysis tool and the fluid flow equations solved by the CFD code. This paper describes such a coupling methodology, presents the results of a series of test cases, and compares said results against equivalent results from extensively validated tools. The coupling methodology, described herein, has proven to be highly accurate in a variety of different cases.

  7. Integrated CFD and Controls Analysis Interface for High Accuracy Liquid Propellant Slosh Predictions

    NASA Technical Reports Server (NTRS)

    Marsell, Brandon; Griffin, David; Schallhorn, Paul; Roth, Jacob

    2012-01-01

    Coupling computational fluid dynamics (CFD) with a controls analysis tool elegantly allows for high accuracy predictions of the interaction between sloshing liquid propellants and the control system of a launch vehicle. Instead of relying on mechanical analogs, which are not valid during all stages of flight, this method allows for a direct link between the vehicle dynamic environments calculated by the solver in the controls analysis tool and the fluid flow equations solved by the CFD code. This paper describes such a coupling methodology, presents the results of a series of test cases, and compares said results against equivalent results from extensively validated tools. The coupling methodology, described herein, has proven to be highly accurate in a variety of different cases.

  8. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins

    PubMed Central

    Afanasyev, Vsevolod; Buldyrev, Sergey V.; Dunn, Michael J.; Robst, Jeremy; Preston, Mark; Bremner, Steve F.; Briggs, Dirk R.; Brown, Ruth; Adlard, Stacey; Peat, Helen J.

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge’s accurate performance and demonstrates how its design is a significant improvement on existing systems. PMID:25894763

  9. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins.

    PubMed

    Afanasyev, Vsevolod; Buldyrev, Sergey V; Dunn, Michael J; Robst, Jeremy; Preston, Mark; Bremner, Steve F; Briggs, Dirk R; Brown, Ruth; Adlard, Stacey; Peat, Helen J

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems. PMID:25894763

  10. Repetition, not number of sources, increases both susceptibility to misinformation and confidence in the accuracy of eyewitnesses.

    PubMed

    Foster, Jeffrey L; Huthwaite, Thomas; Yesberg, Julia A; Garry, Maryanne; Loftus, Elizabeth F

    2012-02-01

    Are claims more credible when made by multiple sources, or is it the repetition of claims that matters? Some research suggests that claims have more credibility when independent sources make them. Yet, other research suggests that simply repeating information makes it more accessible and encourages reliance on automatic processes-factors known to change people's judgments. In Experiment 1, people took part in a "misinformation" study: people first watched a video of a crime and later read eyewitness reports attributed to one or three different eyewitnesses who made misleading claims in either one report or repeated the same misleading claims across all three reports. In Experiment 2, people who had not seen any videos read those same reports and indicated how confident they were that each claim happened in the original event. People were more misled by-and more confident about-claims that were repeated, regardless of how many eyewitnesses made them. We hypothesize that people interpreted the familiarity of repeated claims as markers of accuracy. These findings fit with research showing that repeating information makes it seem more true, and highlight the power of a single repeated voice. PMID:22257711

  11. Zagreb Amblyopia Preschool Screening Study: near and distance visual acuity testing increase the diagnostic accuracy of screening for amblyopia

    PubMed Central

    Bušić, Mladen; Bjeloš, Mirjana; Petrovečki, Mladen; Kuzmanović Elabjer, Biljana; Bosnar, Damir; Ramić, Senad; Miletić, Daliborka; Andrijašević, Lidija; Kondža Krstonijević, Edita; Jakovljević, Vid; Bišćan Tvrdi, Ana; Predović, Jurica; Kokot, Antonio; Bišćan, Filip; Kovačević Ljubić, Mirna; Motušić Aras, Ranka

    2016-01-01

    Aim To present and evaluate a new screening protocol for amblyopia in preschool children. Methods The Zagreb Amblyopia Preschool Screening (ZAPS) study protocol screened for amblyopia by near and distance visual acuity (VA) testing of 15 648 children aged 48-54 months attending kindergartens in the City of Zagreb County between September 2011 and June 2014, using the Lea Symbols in lines test. If VA in either eye was >0.1 logMAR, the child was re-tested; if the child failed the re-test, he or she was referred for a comprehensive eye examination at the Eye Clinic. Results 78.04% of children passed the screening test. The estimated prevalence of amblyopia was 8.08%. Testability, sensitivity, and specificity of the ZAPS study protocol were 99.19%, 100.00%, and 96.68%, respectively. Conclusion The ZAPS study used the most discriminative VA test with optotypes in lines, as such tests do not underestimate amblyopia. The estimated prevalence of amblyopia was considerably higher than reported elsewhere. To the best of our knowledge, the ZAPS study protocol reached the highest sensitivity and specificity yet reported when evaluating the diagnostic accuracy of VA tests for screening. The pass level, defined at ≤0.1 logMAR for 4-year-old children using Lea Symbols in lines, missed no amblyopia cases, advocating that both near and distance VA testing should be performed when screening for amblyopia. PMID:26935612

  12. Tissue Probability Map Constrained 4-D Clustering Algorithm for Increased Accuracy and Robustness in Serial MR Brain Image Segmentation

    PubMed Central

    Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen

    2010-01-01

    The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment the white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both accuracy and robustness of serial image computing and, at the same time, produced longitudinally consistent segmentations and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. An experimental study using both simulated longitudinal MR brain data and Alzheimer's Disease Neuroimaging Initiative (ADNI) data confirmed that more accurate and robust segmentation results can be obtained using both priors. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes in neurological disorders. PMID:26566399

  13. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

    Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Because of the low peak intensity, the accuracy and precision of such analyses rely critically on background measurements and on the accuracy of any pertinent peak-interference corrections. A linear regression between two points selected at appropriate off-peak positions is the classical approach for background characterization in microprobe analysis. However, this approach disallows an accurate assessment of background curvature (usually exponential). Moreover, background interferences, if present, can dramatically affect the results if underestimated or ignored. The acquisition of a quantitative WDS scan over the spectral region of interest is still a valuable option for determining the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity, as the analyst has to select the areas of the scan that appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that allows acquiring up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. The overall efficiency is improved because no systematic WDS scan needs to be acquired to check for the presence of possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite

  14. Sensitivity Analysis for Characterizing the Accuracy and Precision of JEM/SMILES Mesospheric O3

    NASA Astrophysics Data System (ADS)

    Esmaeili Mahani, M.; Baron, P.; Kasai, Y.; Murata, I.; Kasaba, Y.

    2011-12-01

    The main purpose of this study is to evaluate the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES) measurements of mesospheric ozone, O3. As the first step, the error due to the impact of Mesospheric Temperature Inversions (MTIs) on ozone retrieval has been determined. The impacts of other parameters, such as pressure variability and solar events, on mesospheric O3 will also be investigated. Ozone is known to be important because the stratospheric O3 layer protects life on Earth by absorbing harmful UV radiation. In the mesosphere, however, O3 chemistry can be studied in relative isolation, without the complications of heterogeneous chemistry and dynamical variations, owing to the short lifetime of O3 in this region. Mesospheric ozone is produced by the photo-dissociation of O2 and the subsequent reaction of O with O2. Diurnal and semi-diurnal variations of mesospheric ozone are associated with variations in solar activity. The amplitude of the diurnal variation increases from a few percent at an altitude of 50 km to about 80 percent at 70 km. Despite the apparent simplicity of this situation, significant disagreements exist between predictions from existing models and observations, which need to be resolved. SMILES is a highly sensitive radiometer with a precision of a few to several tens of percent from the upper troposphere to the mesosphere. SMILES was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT) and is located at the Japanese Experiment Module (JEM) on the International Space Station (ISS). SMILES successfully measured the vertical distributions and diurnal variations of various atmospheric species in the latitude range of 38S to 65N from October 2009 to April 2010. 
A sensitivity analysis is being conducted to investigate the expected precision and accuracy of the mesospheric O3 profiles (from 50 to 90 km height) due to the impact of Mesospheric Temperature

  15. A unification of models for meta-analysis of diagnostic accuracy studies without a gold standard.

    PubMed

    Liu, Yulun; Chen, Yong; Chu, Haitao

    2015-06-01

    Several statistical methods for meta-analysis of diagnostic accuracy studies have been discussed in the presence of a gold standard. However, in practice, the selected reference test may be imperfect due to measurement error, non-existence, invasive nature, or the expensive cost of a gold standard. It has been suggested that treating an imperfect reference test as a gold standard can lead to substantial bias in the estimation of diagnostic test accuracy. Recently, two models have been proposed to account for an imperfect reference test, namely, a multivariate generalized linear mixed model (MGLMM) and a hierarchical summary receiver operating characteristic (HSROC) model. Both models are very flexible in accounting for heterogeneity in the accuracies of tests across studies as well as the dependence between tests. In this article, we show that these two models, although with different formulations, are closely related and are equivalent in the absence of study-level covariates. Furthermore, we provide the exact relations between the parameters of these two models and assumptions under which the two models can be reduced to equivalent submodels. On the other hand, we show that some submodels of the MGLMM do not have corresponding equivalent submodels of the HSROC model, and vice versa. With three real examples, we illustrate the cases when fitting the MGLMM and HSROC models leads to equivalent submodels and hence identical inference, and the cases when the inferences from the two models are slightly different. Our results generalize the important relations between the bivariate generalized linear mixed model and the HSROC model when the reference test is a gold standard. PMID:25358907

  16. TP53 Mutational Analysis Enhances the Prognostic Accuracy of IHC4 and PAM50 Assays

    PubMed Central

    Lin, Ching-Hung; Chen, I-Chiun; Huang, Chiun-Sheng; Hu, Fu-Chang; Kuo, Wen-Hung; Kuo, Kuan-Ting; Wang, Chung-Chieh; Wu, Pei-Fang; Chang, Dwan-Ying; Wang, Ming-Yang; Chang, Chin-Hao; Chen, Wei-Wu; Lu, Yen-Shen; Cheng, Ann-Lii

    2015-01-01

    IHC4 and PAM50 assays have been shown to provide additional prognostic information for patients with early breast cancer. We evaluated whether incorporating TP53 mutation analysis can further enhance their prognostic accuracy. We examined TP53 mutation and the IHC4 score in tumors of 605 patients diagnosed with stage I–III breast cancer at National Taiwan University Hospital (the NTUH cohort). We obtained information regarding TP53 mutation and PAM50 subtypes in 699 tumors from the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) cohort. We found that TP53 mutation was significantly associated with the high-risk IHC4 group and with the luminal B, HER2-enriched, and basal-like subtypes. Despite the strong associations, TP53 mutation independently predicted shorter relapse-free survival (hazard ratio [HR] = 1.63, P = 0.007) in the NTUH cohort and shorter breast cancer-specific survival (HR = 2.35, P < 0.001) in the METABRIC cohort. TP53 mutational analysis added significant prognostic information to the IHC4 score (∆ LR-χ2 = 8.61, P = 0.002) in the NTUH cohort and to the PAM50 subtypes (∆ LR-χ2 = 18.9, P < 0.001) in the METABRIC cohort. We conclude that incorporating TP53 mutation analysis can enhance the prognostic accuracy of the IHC4 and PAM50 assays. PMID:26671300

  17. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-01-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction. PMID:16521625
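The refraction-induced deformation described above follows Snell's law at a flat air-water interface. A minimal sketch in plain Python (illustrative only, not the authors' correction algorithm) shows how the in-water ray angle varies nonlinearly with the in-air angle, which is why a linear DLT-style calibration cannot absorb the deformation:

```python
import math

def refracted_angle(theta_air_deg, n_water=1.33):
    """Snell's law at a flat air-water interface: sin(a) = n * sin(w).
    Returns the in-water ray angle (degrees from the normal) for a
    given in-air angle; n_water ~ 1.33 for visible light."""
    theta_air = math.radians(theta_air_deg)
    return math.degrees(math.asin(math.sin(theta_air) / n_water))

# Nonlinearity: doubling the in-air angle does not double the
# in-water angle (about 14.9 deg at 20 deg vs 28.9 deg at 40 deg).
print(refracted_angle(20.0), refracted_angle(40.0))
```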

  18. Accuracy and Repeatability of the Gait Analysis by the WalkinSense System

    PubMed Central

    de Castro, Marcelo P.; Soares, Denise P.; Borgonovo-Santos, Márcio; Sousa, Filipa; Vilas-Boas, João Paulo

    2014-01-01

    WalkinSense is a new device designed to monitor walking. The aim of this study was to measure the accuracy and repeatability of the gait analysis performed by the WalkinSense system. Descriptions of values recorded by WalkinSense depicting typical gait in adults are also presented. A bench experiment using the Trublu calibration device was conducted to statically test the WalkinSense. Following this, a dynamic test was carried out overlapping the WalkinSense and the Pedar insoles in 40 healthy participants during walking. Pressure peak, pressure peak time, pressure-time integral, and mean pressure at eight-foot regions were calculated. In the bench experiments, the repeatability (i) among the WalkinSense sensors (within), (ii) between two WalkinSense devices, and (iii) between the WalkinSense and the Trublu devices was excellent. In the dynamic tests, the repeatability of the WalkinSense (i) between stances in the same trial (within-trial) and (ii) between trials was also excellent (ICC > 0.90). When the eight-foot regions were analyzed separately, the within-trial and between-trials repeatability was good-to-excellent in 88% (ICC > 0.80) of the data and fair in 11%. In short, the data suggest that the WalkinSense has good-to-excellent levels of accuracy and repeatability for plantar pressure variables. PMID:24701570

  19. Diagnostic test accuracy of glutamate dehydrogenase for Clostridium difficile: Systematic review and meta-analysis

    PubMed Central

    Arimoto, Jun; Horita, Nobuyuki; Kato, Shingo; Fuyuki, Akiko; Higurashi, Takuma; Ohkubo, Hidenori; Endo, Hiroki; Takashi, Nonaka; Kaneko, Takeshi; Nakajima, Atsushi

    2016-01-01

    We performed this systematic review and meta-analysis to assess the diagnostic accuracy of detecting glutamate dehydrogenase (GDH) for Clostridium difficile infection (CDI) based on the hierarchical model. Two investigators electronically searched four databases. Reference tests were the stool cell cytotoxicity neutralization assay (CCNA) and stool toxigenic culture (TC). To assess the overall accuracy, we calculated the diagnostic odds ratio (DOR) using a DerSimonian-Laird random-effects model and the area under the hierarchical summary receiver operating characteristic curve (AUC) using Holling's proportional hazard models. The summary estimates of the sensitivity and the specificity were obtained using the bivariate model. According to 42 reports consisting of 3055 reference-positive comparisons and 26188 reference-negative comparisons, the DOR was 115 (95%CI: 77–172, I2 = 12.0%) and the AUC was 0.970 (95%CI: 0.958–0.982). The summary estimates of sensitivity and specificity were 0.911 (95%CI: 0.871–0.940) and 0.912 (95%CI: 0.892–0.928). The positive and negative likelihood ratios were 10.4 (95%CI 8.4–12.7) and 0.098 (95%CI 0.066–0.142), respectively. Detecting GDH for the diagnosis of CDI had both high sensitivity and specificity. Considering its low cost and prevalence, it is appropriate as a screening test for CDI. PMID:27418431
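The likelihood ratios reported in this abstract follow arithmetically from the summary sensitivity and specificity. A quick check in plain Python (the pooled DOR of 115 comes from a separate random-effects pooling, so the naive ratio of the two likelihood ratios need not match it exactly):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive/negative likelihood ratios and the diagnostic odds
    ratio implied by a (sensitivity, specificity) pair."""
    lr_pos = sensitivity / (1.0 - specificity)      # LR+ = sens / (1 - spec)
    lr_neg = (1.0 - sensitivity) / specificity      # LR- = (1 - sens) / spec
    dor = lr_pos / lr_neg                           # diagnostic odds ratio
    return lr_pos, lr_neg, dor

# Summary estimates reported for GDH (bivariate model):
lr_pos, lr_neg, dor = likelihood_ratios(0.911, 0.912)
print(round(lr_pos, 1), round(lr_neg, 3))  # matches the reported 10.4 and 0.098
```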

  20. Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach

    PubMed Central

    de Jesus, Kelly; de Jesus, Karla; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo Jorge; Machado, Leandro José

    2015-01-01

    This study assessed accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above and 88 underwater control points—with 8 common points at water surface—and 92 validation points) was positioned on a 25 m swimming pool and recorded with two surface and four underwater cameras. Planar homography estimation for each calibration plane was computed to perform image rectification. Direct linear transformation algorithm for 3D reconstruction was applied, using 1600000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers (resp.). Root Mean Square (RMS) error with homography of control and validations points was lower than without it for surface and underwater cameras (P ≤ 0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P ≥ 0.47). Without homography, RMS error of control points was greater for underwater than surface cameras (P ≤ 0.04) and the opposite was observed for validation points (P ≤ 0.04). It is recommended that future studies using 3D reconstruction should include homography to improve swimming movement analysis accuracy. PMID:26175796
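The homography-based rectification step used above can be sketched with NumPy: mapping 2-D image points through a 3x3 planar homography in homogeneous coordinates. This is an illustrative fragment, not the authors' full calibration pipeline, and the matrix `H` below is a made-up example:

```python
import numpy as np

def apply_homography(H, pts):
    """Map N 2-D points through a 3x3 planar homography using
    homogeneous coordinates, then de-homogenize."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # (N, 3)
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# A hypothetical homography combining scale, shear and a mild
# perspective term:
H = np.array([[1.2, 0.1, 5.0],
              [0.0, 1.1, -3.0],
              [1e-4, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 50.0], [0.0, 50.0]])
print(apply_homography(H, corners))
```

With the identity matrix the mapping leaves points unchanged, which is a convenient sanity check before plugging in an estimated homography.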

  1. Diagnostic test accuracy of glutamate dehydrogenase for Clostridium difficile: Systematic review and meta-analysis.

    PubMed

    Arimoto, Jun; Horita, Nobuyuki; Kato, Shingo; Fuyuki, Akiko; Higurashi, Takuma; Ohkubo, Hidenori; Endo, Hiroki; Takashi, Nonaka; Kaneko, Takeshi; Nakajima, Atsushi

    2016-01-01

    We performed this systematic review and meta-analysis to assess the diagnostic accuracy of detecting glutamate dehydrogenase (GDH) for Clostridium difficile infection (CDI) based on the hierarchical model. Two investigators electronically searched four databases. Reference tests were the stool cell cytotoxicity neutralization assay (CCNA) and stool toxigenic culture (TC). To assess the overall accuracy, we calculated the diagnostic odds ratio (DOR) using a DerSimonian-Laird random-effects model and the area under the hierarchical summary receiver operating characteristic curve (AUC) using Holling's proportional hazard models. The summary estimates of the sensitivity and the specificity were obtained using the bivariate model. According to 42 reports consisting of 3055 reference-positive comparisons and 26188 reference-negative comparisons, the DOR was 115 (95%CI: 77-172, I(2) = 12.0%) and the AUC was 0.970 (95%CI: 0.958-0.982). The summary estimates of sensitivity and specificity were 0.911 (95%CI: 0.871-0.940) and 0.912 (95%CI: 0.892-0.928). The positive and negative likelihood ratios were 10.4 (95%CI 8.4-12.7) and 0.098 (95%CI 0.066-0.142), respectively. Detecting GDH for the diagnosis of CDI had both high sensitivity and specificity. Considering its low cost and prevalence, it is appropriate as a screening test for CDI. PMID:27418431

  2. Diagnostic Accuracy of Noncontrast CT in Detecting Acute Appendicitis: A Meta-analysis of Prospective Studies.

    PubMed

    Xiong, Bing; Zhong, Baishu; Li, Zhenwei; Zhou, Feng; Hu, Ruying; Feng, Zhan; Xu, Shunliang; Chen, Feng

    2015-06-01

    The aim of the study is to evaluate the diagnostic accuracy of noncontrast CT in detecting acute appendicitis. Prospective studies in which noncontrast CT was performed to evaluate acute appendicitis were found on PubMed, EMBASE, and Cochrane Library. Pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were assessed. The summary receiver-operating characteristic curve was conducted and the area under the curve was calculated. Seven original studies investigating a total of 845 patients were included in this meta-analysis. The pooled sensitivity and specificity were 0.90 (95% CI: 0.86-0.92) and 0.94 (95% CI: 0.92-0.97), respectively. The pooled positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio was 12.90 (95% CI: 4.80-34.67), 0.09 (95% CI: 0.04-0.20), and 162.76 (95% CI: 31.05-853.26), respectively. The summary receiver-operating characteristic curve was symmetrical and the area under the curve was 0.97 (95% CI: 0.95-0.99). In conclusion, noncontrast CT has high diagnostic accuracy in detecting acute appendicitis, which is adequate for clinical decision making. PMID:26031278

  3. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-07-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction. PMID:16939159

  4. Accuracy enhancement of GPS time series using principal component analysis and block spatial filtering

    NASA Astrophysics Data System (ADS)

    He, Xiaoxing; Hua, Xianghong; Yu, Kegen; Xuan, Wei; Lu, Tieding; Zhang, W.; Chen, X.

    2015-03-01

    This paper focuses on performance analysis and accuracy enhancement of long-term position time series of a regional network of GPS stations with two nearby sub-blocks, one block of 8 stations in the Cascadia region and another block of 14 stations in Southern California. We have analyzed the seasonal variations of the 22 IGS site positions between 2004 and 2011. The Green's function is used to calculate the station-site displacements induced by the environmental loading due to atmospheric pressure, soil moisture, snow depth and nontidal ocean. The analysis has revealed that these loading factors can result in position shifts at the centimeter level; the displacement time series exhibit a periodic pattern that can explain about 12.70-21.78% of the seasonal amplitude of the vertical GPS time series; and the loading effect differs significantly between the two nearby geographical regions. After the loading effect is corrected, principal component analysis (PCA)-based block spatial filtering is proposed to filter out the remaining common mode error (CME) of the GPS time series. The results show that the PCA-based block spatial filtering can extract the CME more accurately and effectively than the conventional overall filtering method, reducing more of the uncertainty. With the loading correction and block spatial filtering, about 68.34-73.20% of the vertical GPS seasonal power can be separated and removed, improving the reliability of the GPS time series and hence enabling better deformation analysis and higher precision geodetic applications.
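The PCA-based spatial filtering idea, removing a common mode error (CME) shared by all stations in a block, can be sketched with NumPy by stripping the first principal component from a stations-by-epochs residual matrix. This is a minimal sketch of the general technique, not the authors' exact block-filtering implementation; the synthetic data below are invented for illustration:

```python
import numpy as np

def remove_common_mode(residuals):
    """PCA-based spatial filtering sketch: subtract the first principal
    component (the common mode error) from a stations-by-epochs array
    of detrended position residuals."""
    X = residuals - residuals.mean(axis=1, keepdims=True)  # demean per station
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    cme = s[0] * np.outer(U[:, 0], Vt[0])                  # rank-1 common-mode part
    return X - cme

rng = np.random.default_rng(0)
common = np.sin(np.linspace(0, 8 * np.pi, 500))            # shared seasonal-like signal
stations = common + 0.1 * rng.standard_normal((6, 500))    # 6 stations see it plus noise
filtered = remove_common_mode(stations)
print(filtered.std() < stations.std())  # variance drops once the CME is removed
```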

  5. Modern Global Models of the Earth's Gravity Field: Analysis of Their Accuracy and Resolution

    NASA Astrophysics Data System (ADS)

    Ganagina, Irina; Karpik, Alexander; Kanushin, Vadim; Goldobin, Denis; Kosareva, Alexandra; Kosarev, Nikolay; Mazurova, Elena

    2015-04-01

    Introduction: Accurate knowledge of the fine structure of the Earth's gravity field extends opportunities in geodynamic problem-solving and high-precision navigation. In the course of our investigations, the resolution and accuracy of 33 modern global models of the Earth's gravity field were analyzed, among them 23 combined models and 10 satellite-only models obtained from the results of the GOCE, GRACE, and CHAMP satellite gravity missions. The Earth's geopotential model data, in terms of normalized spherical harmonic coefficients, were taken from the web-site of the International Centre for Global Earth Models (ICGEM) in Potsdam. Theory: Accuracy and resolution estimation of global Earth's gravity field models is based on the analysis of the degree variances of the geopotential coefficients and their errors. For the analyzed models we obtained the dependence of approximation errors for gravity anomalies on the degree of the spherical harmonic expansion of the geopotential, the relative errors of the geopotential's spherical harmonic coefficients, the degree variances of the geopotential coefficients, and the error variances of the potential coefficients obtained from gravity anomalies. Delphi 7-based software developed by the authors was used for the analysis of the global Earth's gravity field models. Experience: The results of the investigations show that the spherical harmonic coefficients of all the models were compared. Diagrams of degree variances for spherical harmonic coefficients and their errors bring us to the conclusion that the degree variances of most models equal their error variances at a degree lower than that declared by the developers. The accuracy of the normalized spherical harmonic coefficients of the geopotential models is estimated as 10^-9. This value characterizes the inherent errors of the models, the differences between coefficients of various models, the poorly predicted instability of the geopotential, and the resolution. Furthermore, we compared the gravity anomalies computed by models with those

  6. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    PubMed

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) given by doctors of all dynasties, and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is made definite, and it is the same as the traditional way. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. Also, it is proposed that Fengshi (GB 31) should be located through the integration of the simple method and body surface anatomical marks. PMID:26964185

  7. High-accuracy biodistribution analysis of adeno-associated virus variants by double barcode sequencing

    PubMed Central

    Marsic, Damien; Méndez-Gómez, Héctor R; Zolotukhin, Sergei

    2015-01-01

    Biodistribution analysis is a key step in the evaluation of adeno-associated virus (AAV) capsid variants, whether natural isolates or produced by rational design or directed evolution. Indeed, when screening candidate vectors, accurate knowledge about which tissues are infected and how efficiently is essential. We describe the design, validation, and application of a new vector, pTR-UF50-BC, encoding a bioluminescent protein, a fluorescent protein and a DNA barcode, which can be used to visualize localization of transduction at the organism, organ, tissue, or cellular levels. In addition, by linking capsid variants to different barcoded versions of the vector and amplifying the barcode region from various tissue samples using barcoded primers, biodistribution of viral genomes can be analyzed with high accuracy and efficiency. PMID:26793739

  8. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which are calculated as in the previous method. Generally, a small number of arithmetic operations, resulting in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was achieved by both methods.

  9. Accuracy analysis by using WARIMA model to forecast TEC in China

    NASA Astrophysics Data System (ADS)

    Liu, Lilong; Chen, Jun; Wu, Pituan; Cai, Chenghui; Huang, Liangke

    2015-12-01

    Aiming at the nonlinear and non-stationary characteristics of ionospheric total electron content (TEC), this article brings wavelet analysis into the autoregressive integrated moving average (ARIMA) model to forecast the next four days' TEC values, using six days of ionospheric grid observation data for the Chinese area in 2010 provided by IGS stations. Taking the IGS station observations as true values, the forecast values are compared against them and the forecast accuracies computed, showing that the WARIMA model forecasts the Chinese area's ionospheric grid data quite well. However, near grid points at geomagnetic latitudes of about +/-20°, the model's forecast results are somewhat worse than elsewhere, because irregular geomagnetic activity causes the TEC values there to change greatly.
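The wavelet half of the WARIMA approach, splitting a TEC series into smoother sub-series before fitting a forecasting model to each, can be illustrated with a single Haar wavelet level in plain Python. This is a generic sketch, not the authors' exact decomposition, and the ARIMA fitting step is omitted:

```python
def haar_step(x):
    """One level of the (unnormalized) Haar wavelet transform:
    pairwise averages give a smooth approximation, pairwise
    half-differences give the detail; x must have even length."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfectly reconstruct the original series from one Haar level."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

tec = [10.0, 12.0, 15.0, 11.0, 9.0, 14.0]  # invented TEC-like samples
a, d = haar_step(tec)
print(haar_inverse(a, d) == tec)  # True: the transform is lossless
```

In a WARIMA-style pipeline, each sub-series (the approximation and the details, over several levels) would be forecast separately and the forecasts recombined through the inverse transform.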

  10. Diagnostic Accuracy of Calretinin for Malignant Mesothelioma in Serous Effusions: a Meta-analysis

    PubMed Central

    Li, Diandian; Wang, Bo; Long, Hongyu; Wen, Fuqiang

    2015-01-01

    Numerous studies have investigated the utility of calretinin in differentiating malignant mesothelioma (MM) from metastatic carcinoma (MC) in serous effusions. However, the results remain controversial. The aim of this study is to determine the overall accuracy of calretinin in serous effusions for MM through a meta-analysis of published studies. Publications addressing the accuracy of calretinin in the diagnosis of MM were selected from the Medline (Ovid), PubMed, the Cochrane Library Database and the Web of Science. Data from selected studies were pooled to yield summary sensitivity, specificity, positive and negative likelihood ratio (LR), diagnostic odds ratio (DOR), and receiver operating characteristic (SROC) curve. Statistical analysis was performed by Meta-Disc 1.4 and STATA 12.0 softwares. 18 studies met the inclusion criteria and the summary estimating for calretinin in the diagnosis of MM were: sensitivity 0.91 (95%CI: 0.87–0.94), specificity 0.96 (95%CI: 0.95–0.96), positive likelihood ratio (PLR) 14.42 (95%CI: 7.92–26.26), negative likelihood ratio (NLR) 0.1 (95%CI: 0.05–0.2) and diagnostic odds ratio 163.03 (95%CI: 54.62–486.63). The SROC curve indicated that the maximum joint sensitivity and specificity (Q-value) was 0.92; the area under the curve was 0.97. Our findings suggest that calretinin may be a useful diagnostic tool for confirming MM in serous effusions. PMID:25821016
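
The summary indices pooled above, sensitivity, specificity, likelihood ratios, and the diagnostic odds ratio, are each derived from 2x2 counts. A minimal sketch for a single study's table follows; proper pooling across studies requires the random-effects models used by tools such as Meta-DiSc, which is not shown here.

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Standard accuracy indices from one 2x2 diagnostic table."""
    sens = tp / (tp + fn)        # true positive rate
    spec = tn / (tn + fp)        # true negative rate
    plr = sens / (1 - spec)      # positive likelihood ratio
    nlr = (1 - sens) / spec      # negative likelihood ratio
    dor = plr / nlr              # diagnostic odds ratio = (tp*tn)/(fp*fn)
    return {"sens": sens, "spec": spec, "plr": plr, "nlr": nlr, "dor": dor}
```

For example, a hypothetical study with 90 true positives, 10 false negatives, 4 false positives and 96 true negatives gives sensitivity 0.90, specificity 0.96 and a diagnostic odds ratio of 216.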

  11. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

    It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that, if left unaddressed, limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 ion Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to be able to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup exhibits a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects the precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes more significant as the relative contribution of secondary electrons to the m/z 47 signal increases and as the flux rate of CO2 into the ion source is raised. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards. The

  12. Computer-aided analysis of star shot films for high-accuracy radiation therapy treatment units

    NASA Astrophysics Data System (ADS)

    Depuydt, Tom; Penne, Rudi; Verellen, Dirk; Hrbacek, Jan; Lang, Stephanie; Leysen, Katrien; Vandevondel, Iwein; Poels, Kenneth; Reynders, Truus; Gevaert, Thierry; Duchateau, Michael; Tournel, Koen; Boussaer, Marlies; Cosentino, Dorian; Garibaldi, Cristina; Solberg, Timothy; De Ridder, Mark

    2012-05-01

    As mechanical stability of radiation therapy treatment devices has gone beyond sub-millimeter levels, there is a rising demand for simple yet highly accurate measurement techniques to support the routine quality control of these devices. A combination of using high-resolution radiosensitive film and computer-aided analysis could provide an answer. One generally known technique is the acquisition of star shot films to determine the mechanical stability of rotations of gantries and the therapeutic beam. With computer-aided analysis, mechanical performance can be quantified as a radiation isocenter radius size. In this work, computer-aided analysis of star shot film is further refined by applying an analytical solution for the smallest intersecting circle problem, in contrast to the gradient optimization approaches used until today. An algorithm is presented and subjected to a performance test using two different types of radiosensitive film, the Kodak EDR2 radiographic film and the ISP EBT2 radiochromic film. Artificial star shots with a priori known radiation isocenter size are used to determine the systematic errors introduced by the digitization of the film and the computer analysis. The estimated uncertainty on the isocenter size measurement with the presented technique was 0.04 mm (2σ) and 0.06 mm (2σ) for radiographic and radiochromic films, respectively. As an application of the technique, a study was conducted to compare the mechanical stability of O-ring gantry systems with C-arm-based gantries. In total ten systems of five different institutions were included in this study and star shots were acquired for gantry, collimator, ring, couch rotations and gantry wobble. It was not possible to draw general conclusions about differences in mechanical performance between O-ring and C-arm gantry systems, mainly due to differences in the beam-MLC alignment procedure accuracy. Nevertheless, the best performing O-ring system in this study, a BrainLab/MHI Vero system
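
The star-shot problem, finding the smallest circle that intersects every beam axis on the film, is a min-max problem: minimise, over candidate centres, the maximum point-to-line distance. The paper presents an analytical solution; the sketch below instead uses a simple shrinking-grid search, so the names and search scheme are illustrative and not the authors' algorithm. Because each point-to-line distance is convex in the centre coordinates, their maximum is convex too, and this pattern search converges to the radiation isocentre radius.

```python
import math

def point_line_dist(c, p, theta):
    # perpendicular distance from point c to the line through p at angle theta
    dx, dy = c[0] - p[0], c[1] - p[1]
    ux, uy = math.cos(theta), math.sin(theta)
    return abs(dx * uy - dy * ux)

def isocenter_radius(lines, start=(0.0, 0.0), step=1.0, tol=1e-6):
    # shrinking-grid (pattern) search for the centre minimising the maximum
    # distance to all beam axes; that minimised maximum is the isocentre radius
    cx, cy = start
    best = max(point_line_dist((cx, cy), p, t) for p, t in lines)
    while step > tol:
        improved = False
        for dx in (-step, 0.0, step):
            for dy in (-step, 0.0, step):
                cand = (cx + dx, cy + dy)
                r = max(point_line_dist(cand, p, t) for p, t in lines)
                if r < best - 1e-12:
                    best, (cx, cy), improved = r, cand, True
        if not improved:
            step /= 2.0
    return (cx, cy), best
```

Four axes forming a unit square around the origin, for instance, yield a centre near (0, 0) and a radius near 1.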

  13. Diagnostic Accuracy of PIK3CA Mutation Detection by Circulating Free DNA in Breast Cancer: A Meta-Analysis of Diagnostic Test Accuracy

    PubMed Central

    Zhu, Hanjiang; Lin, Yan; Pan, Bo; Zhang, Xiaohui; Huang, Xin; Xu, Qianqian; Xu, Yali; Sun, Qiang

    2016-01-01

    Mutation of p110 alpha-catalytic subunit of phosphatidylinositol 3-kinase (PIK3CA) has high predictive and prognostic values for breast cancer. Hence, there has been a marked interest in detecting and monitoring PIK3CA genotype with non-invasive technique, such as circulating free DNA (cfDNA). However, the diagnostic accuracy of PIK3CA genotyping by cfDNA is still a problem of controversy. Here, we conducted the first meta-analysis to evaluate overall diagnostic performance of cfDNA for PIK3CA mutation detection. Literature search was performed in Pubmed, Embase and Cochrane Central Register of Controlled Trials databases. Seven cohorts from five studies with 247 patients were included. The pooled sensitivity, specificity, positive and negative likelihood ratio, diagnostic odds ratio and area under summary receiver operating characteristic curve were calculated for accuracy evaluation. The pooled sensitivity and specificity were 0.86 (95% confidence interval [CI] 0.32–0.99) and 0.98 (95% CI 0.86–1.00), respectively; the pooled positive and negative likelihood ratio were 42.8 (95% CI 5.1–356.9) and 0.14 (95% CI 0.02–1.34), respectively; diagnostic odds ratio for evaluating the overall diagnostic performance was 300 (95% CI 8–11867); area under summary receiver operating characteristic curve reached 0.99 (95% CI 0.97–0.99). Subgroup analysis with metastatic breast cancer revealed remarkable improvement in diagnostic performance (sensitivity: 0.86–0.91; specificity: 0.98; diagnostic odds ratio: 300–428). This meta-analysis proved that detecting PIK3CA gene mutation by cfDNA has high diagnostic accuracy in breast cancer, especially for metastatic breast cancer. It may serve as a reliable non-invasive assay for detecting and monitoring PIK3CA mutation status in order to deliver personalized and precise treatment. PMID:27336598

  14. Evaluating the Accuracy of Molecular Diagnostic Testing for Canine Visceral Leishmaniasis Using Latent Class Analysis

    PubMed Central

    Solcà, Manuela da Silva; Bastos, Leila Andrade; Guedes, Carlos Eduardo Sampaio; Bordoni, Marcelo; Borja, Lairton Souza; Larangeira, Daniela Farias; da Silva Estrela Tuy, Pétala Gardênia; Amorim, Leila Denise Alves Ferreira; Nascimento, Eliane Gomes; de Sá Oliveira, Geraldo Gileno; dos-Santos, Washington Luis Conrado; Fraga, Deborah Bittencourt Mothé; Veras, Patrícia Sampaio Tavares

    2014-01-01

    Host tissues affected by Leishmania infantum have differing degrees of parasitism. Previously, the use of different biological tissues to detect L. infantum DNA in dogs has provided variable results. The present study was conducted to evaluate the accuracy of molecular diagnostic testing (qPCR) in dogs from an endemic area for canine visceral leishmaniasis (CVL) by determining which tissue type provided the highest rate of parasite DNA detection. Fifty-one symptomatic dogs were tested for CVL using serological, parasitological and molecular methods. Latent class analysis (LCA) was performed for accuracy evaluation of these methods. qPCR detected parasite DNA in 100% of these animals from at least one of the following tissues: splenic and bone marrow aspirates, lymph node and skin fragments, blood and conjunctival swabs. Using latent variable as gold standard, the qPCR achieved a sensitivity of 95.8% (CI 90.4–100) in splenic aspirate; 79.2% (CI 68–90.3) in lymph nodes; 77.3% (CI 64.5–90.1) in skin; 75% (CI 63.1–86.9) in blood; 50% (CI 30–70) in bone marrow; 37.5% (CI 24.2–50.8) in left-eye; and 29.2% (CI 16.7–41.6) in right-eye conjunctival swabs. The accuracy of qPCR using splenic aspirates was further evaluated in a random larger sample (n = 800), collected from dogs during a prevalence study. The specificity achieved by qPCR was 76.7% (CI 73.7–79.6) for splenic aspirates obtained from the greater sample. The sensitivity accomplished by this technique was 95% (CI 93.5–96.5) that was higher than those obtained for the other diagnostic tests and was similar to that observed in the smaller sampling study. This confirms that the splenic aspirate is the most effective type of tissue for detecting L. infantum infection. Additionally, we demonstrated that LCA could be used to generate a suitable gold standard for comparative CVL testing. PMID:25076494

  16. Issues of model accuracy and uncertainty evaluation in the context of multi-model analysis

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Foglia, L.; Mehl, S.; Burlando, P.

    2009-12-01

    Thorough consideration of alternative conceptual models is an important and often neglected step in the study of many natural systems, including groundwater systems. This means that many modelling efforts are less useful for system management than they could be because they exclude alternatives considered important by some stakeholders, which makes them more vulnerable to criticism. Important steps include identifying reasonable alternative models and possibly using model discrimination criteria and associated model averaging to improve predictions and measures of prediction uncertainty. Here we use the computer code MMA (Multi-Model Analysis) to: (1) manage the model discrimination statistics produced by many alternative models, (2) manage predictions, and (3) calculate measures of prediction uncertainty. (1) to (3) also assist in understanding the physical processes most important to model fit and predictions of interest. We focus on the ability of a groundwater model constructed using MODFLOW to predict heads and flows in the Maggia Valley, Southern Switzerland, where connections between groundwater, surface water and ecology are of interest. Sixty-four alternative models were designed deterministically and differ in how the river, recharge, bedrock topography, and hydraulic conductivity are characterized. None of the models correctly represent heads and flows in the Northern and Southern part of the valley simultaneously. A cross-validation experiment was conducted to compare model discrimination results with the ability of the models to predict eight heads and three flows to the stream along three reaches midway along the valley where ecological consequences and, therefore, model accuracy are of great concern. Results suggest: (1) Model averaging appears to have improved prediction accuracy in the problem considered. (2) The most significant model improvements occurred with introduction of spatially distributed recharge and improved bedrock topography. (3) The
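
Model averaging of the kind MMA supports typically weights each alternative model by an information criterion. A hedged sketch using Akaike weights follows; the specific discrimination criteria and weighting scheme implemented in MMA may differ, so treat this as the Burnham-Anderson-style idea rather than MMA's exact method.

```python
import math

def akaike_weights(aics):
    # delta_i = AIC_i - min(AIC); weight_i is proportional to exp(-delta_i / 2)
    amin = min(aics)
    raw = [math.exp(-(a - amin) / 2) for a in aics]
    s = sum(raw)
    return [r / s for r in raw]

def model_average(predictions, aics):
    # averaged prediction: weighted sum of each model's prediction
    w = akaike_weights(aics)
    return sum(wi * p for wi, p in zip(w, predictions))
```

Models with lower AIC dominate the average, while clearly inferior models receive near-zero weight.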

  17. Analysis of the Accuracy of Ballistic Descent from a Circular Circumterrestrial Orbit

    NASA Astrophysics Data System (ADS)

    Sikharulidze, Yu. G.; Korchagin, A. N.

    2002-01-01

    The problem of transporting the results of experiments and observations to Earth arises every so often in space research. Its simplest and lowest-cost solution is the employment of a small ballistic reentry spacecraft. Such a spacecraft has no system for controlling the descent trajectory in the atmosphere. This can result in a large spread of landing points, which makes it difficult to search for the spacecraft and very often precludes a safe landing. In this work, the choice of a compromise flight scheme is considered, which includes the optimum braking maneuver, adequate conditions of entry into the atmosphere with limited heating and overload, and also the possibility of landing within a circle with a radius of 12.5 km. The following disturbing factors were taken into account in the analysis of landing accuracy: the errors of braking impulse execution, variations of atmospheric density and wind, the error in the specified ballistic coefficient of the reentry spacecraft, and a displacement of its center of mass from the symmetry axis. It is demonstrated that the optimum maneuver assures the maximum absolute value of the reentry angle and the insensitivity of the descent trajectory to small errors in the orientation of the braking engine in the plane of the orbit. It is also demonstrated that the possible error of the landing point due to the error in the specified ballistic coefficient does not depend (in the linear approximation) on its value, but only on the reentry angle and the accuracy with which this coefficient is specified. A guided parachute with an aerodynamic efficiency of about two should be used on the last leg of the reentry trajectory. This will allow one to land within a prescribed range and to produce adequate conditions for the interception of the reentry spacecraft by a helicopter in order to prevent a rough landing.

  18. Summary of Glaucoma Diagnostic Testing Accuracy: An Evidence-Based Meta-Analysis

    PubMed Central

    Ahmed, Saad; Khan, Zainab; Si, Francie; Mao, Alex; Pan, Irene; Yazdi, Fatemeh; Tsertsvadze, Alexander; Hutnik, Cindy; Moher, David; Tingey, David; Trope, Graham E.; Damji, Karim F.; Tarride, Jean-Eric; Goeree, Ron; Hodge, William

    2016-01-01

    Background New glaucoma diagnostic technologies are penetrating clinical care and are changing rapidly. Having a systematic review of these technologies will help clinicians and decision makers and help identify gaps that need to be addressed. This systematic review studied five glaucoma technologies compared to the gold standard of white on white perimetry for glaucoma detection. Methods OVID® interface: MEDLINE® (In-Process & Other Non-Indexed Citations), EMBASE®, BIOSIS Previews®, CINAHL®, PubMed, and the Cochrane Library were searched. A gray literature search was also performed. A technical expert panel, information specialists, systematic review method experts and biostatisticians were used. A PRISMA flow diagram was created and a random effect meta-analysis was performed. Results A total of 2,474 articles were screened. The greatest accuracy was found with frequency doubling technology (FDT) (diagnostic odds ratio (DOR): 57.7) followed by blue on yellow perimetry (DOR: 46.7), optical coherence tomography (OCT) (DOR: 41.8), GDx (DOR: 32.4) and Heidelberg retina tomography (HRT) (DOR: 17.8). Of greatest concern is that tests for heterogeneity were all above 50%, indicating that cutoffs used in these newer technologies were all very varied and not uniform across studies. Conclusions Glaucoma content experts need to establish uniform cutoffs for these newer technologies, so that studies that compare these technologies can be interpreted more uniformly. Nevertheless, synthesized data at this time demonstrate that amongst the newest technologies, OCT has the highest glaucoma diagnostic accuracy followed by GDx and then HRT. PMID:27540437

  19. Accuracy of different oxygenation indices in estimating intrapulmonary shunting at increasing infusion rates of dobutamine in horses under general anaesthesia.

    PubMed

    Briganti, A; Portela, D A; Grasso, S; Sgorbini, M; Tayari, H; Bassini, J R Fusar; Vitale, V; Romano, M S; Crovace, A; Breghi, G; Staffieri, F

    2015-06-01

    The aim of this study was to evaluate the correlation of commonly used oxygenation indices with venous admixture (Qs/Qt) in anaesthetised horses under different infusion rates of dobutamine. Six female horses were anaesthetised with acepromazine, xylazine, diazepam, ketamine, and isoflurane, and then intubated and mechanically ventilated with 100% O2. A Swan-Ganz catheter was introduced into the left jugular vein and its tip advanced into the pulmonary artery. Horses received different standardised rates of dobutamine. For each horse, eight samples of arterial and mixed venous blood were simultaneously obtained at fixed times. Arterial and venous haemoglobin (Hb) concentration and O2 saturation, arterial oxygen partial pressure (PaO2), venous oxygen partial pressure (PvO2), and barometric pressure were measured. Arterial (CaO2), mixed venous (CvO2), and capillary (Cc'O2) oxygen contents were calculated using standard formulae. The correlations between F-shunt, arterial oxygen tension to fraction of inspired oxygen ratio (PaO2/FiO2), arterial to alveolar oxygen tension ratio (PaO2/PAO2), alveolar to arterial oxygen tension difference (P[A - a]O2), and respiratory index (P[A - a]O2/PaO2) were tested with linear regression analysis. The goodness-of-fit for each calculated formula was evaluated by means of the coefficient of determination (r(2)). The agreement between Qs/Qt and F-shunt was analysed with the Bland-Altman test. All tested oxygen tension-based indices were weakly correlated (r(2) < 0.2) with the Qs/Qt, whereas F-shunt showed a stronger correlation (r(2) = 0.73). F-shunt also showed substantial agreement with Qs/Qt independent of the dobutamine infusion rate. F-shunt better correlated with Qs/Qt than other oxygen indices in isoflurane-anaesthetised horses under different infusion rates of dobutamine. PMID:25920771
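
The oxygen contents and shunt estimates compared above follow standard formulae. The sketch below assumes the commonly used haemoglobin binding coefficient of 1.34 mL O2/g and plasma solubility of 0.0031 mL/dL/mmHg, plus the conventional fixed arteriovenous content difference of 3.5 mL/dL for F-shunt; the study's exact constants are not stated in the abstract.

```python
def o2_content(hb_g_dl, sat_frac, po2_mmhg):
    # blood O2 content in mL O2 per dL: haemoglobin-bound plus dissolved
    return 1.34 * hb_g_dl * sat_frac + 0.0031 * po2_mmhg

def shunt_fraction(cc, ca, cv):
    # classic shunt equation Qs/Qt from capillary, arterial and mixed venous contents
    return (cc - ca) / (cc - cv)

def f_shunt(cc, ca, avdiff=3.5):
    # F-shunt: replaces the measured arteriovenous difference with a fixed value
    return (cc - ca) / (cc - ca + avdiff)
```

With Cc'O2 = 21, CaO2 = 20 and CvO2 = 15 mL/dL, the measured shunt is 1/6 while F-shunt, using the fixed 3.5 mL/dL difference, gives 1/4.5.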

  20. Integrating Landsat and California pesticide exposure estimation at aggregated analysis scales: Accuracy assessment of rurality

    NASA Astrophysics Data System (ADS)

    Vopham, Trang Minh

    Pesticide exposure estimation in epidemiologic studies can be constrained to analysis scales commonly available for cancer data - census tracts and ZIP codes. Research goals included (1) demonstrating the feasibility of modifying an existing geographic information system (GIS) pesticide exposure method using California Pesticide Use Reports (PURs) and land use surveys to incorporate Landsat remote sensing and to accommodate aggregated analysis scales, and (2) assessing the accuracy of two rurality metrics (quality of geographic area being rural), Rural-Urban Commuting Area (RUCA) codes and the U.S. Census Bureau urban-rural system, as surrogates for pesticide exposure when compared to the GIS gold standard. Segments, derived from 1985 Landsat NDVI images, were classified using a crop signature library (CSL) created from 1990 Landsat NDVI images via a sum of squared differences (SSD) measure. Organochlorine, organophosphate, and carbamate Kern County PUR applications (1974-1990) were matched to crop fields using a modified three-tier approach. Annual pesticide application rates (lb/ac), and sensitivity and specificity of each rurality metric were calculated. The CSL (75 land use classes) classified 19,752 segments [median SSD 0.06 NDVI]. Of the 148,671 PUR records included in the analysis, Landsat contributed 3,750 (2.5%) additional tier matches. ZIP Code Tabulation Area (ZCTA) rates ranged between 0 and 1.36 lb/ac and census tract rates between 0 and 1.57 lb/ac. Rurality was a mediocre pesticide exposure surrogate; higher rates were observed among urban areal units. ZCTA-level RUCA codes offered greater specificity (39.1-60%) and sensitivity (25-42.9%). The U.S. Census Bureau metric offered greater specificity (92.9-97.5%) at the census tract level; sensitivity was low (≤6%). The feasibility of incorporating Landsat into a modified three-tier GIS approach was demonstrated. Rurality accuracy is affected by rurality metric, areal aggregation, pesticide chemical
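
The sum-of-squared-differences (SSD) matching used to classify segments against the crop signature library can be sketched in a few lines; the crop names and NDVI values below are invented for illustration only.

```python
def ssd(a, b):
    # sum of squared differences between two equal-length NDVI profiles
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify_segment(segment_ndvi, signature_library):
    # assign the crop class whose library signature minimises the SSD
    return min(signature_library,
               key=lambda crop: ssd(segment_ndvi, signature_library[crop]))
```

A segment profile is thus labelled with the nearest signature in the library, exactly the minimum-SSD rule described above.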

  1. Accuracy in Rietveld quantitative phase analysis: a comparative study of strictly monochromatic Mo and Cu radiations

    PubMed Central

    León-Reina, L.; García-Maté, M.; Álvarez-Pinazo, G.; Santacruz, I.; Vallcorba, O.; De la Torre, A. G.; Aranda, M. A. G.

    2016-01-01

    This study reports 78 Rietveld quantitative phase analyses using Cu Kα1, Mo Kα1 and synchrotron radiations. Synchrotron powder diffraction has been used to validate the most challenging analyses. From the results for three series with increasing contents of an analyte (an inorganic crystalline phase, an organic crystalline phase and a glass), it is inferred that Rietveld analyses from high-energy Mo Kα1 radiation have slightly better accuracies than those obtained from Cu Kα1 radiation. This behaviour has been established from the results of the calibration graphics obtained through the spiking method and also from Kullback–Leibler distance statistic studies. This outcome is explained, in spite of the lower diffraction power for Mo radiation when compared to Cu radiation, as arising because of the larger volume tested with Mo and also because higher energy allows one to record patterns with fewer systematic errors. The limit of detection (LoD) and limit of quantification (LoQ) have also been established for the studied series. For similar recording times, the LoDs in Cu patterns, ∼0.2 wt%, are slightly lower than those derived from Mo patterns, ∼0.3 wt%. The LoQ for a well crystallized inorganic phase using laboratory powder diffraction was established to be close to 0.10 wt% in stable fits with good precision. However, the accuracy of these analyses was poor with relative errors near to 100%. Only contents higher than 1.0 wt% yielded analyses with relative errors lower than 20%. PMID:27275132

  2. The Efficacy of Written Corrective Feedback in Improving L2 Written Accuracy: A Meta-Analysis

    ERIC Educational Resources Information Center

    Kang, EunYoung; Han, Zhaohong

    2015-01-01

    Written corrective feedback has been subject to increasing attention in recent years, in part because of the conceptual controversy surrounding it and in part because of its ubiquitous practice. This study takes a meta-analytic approach to synthesizing extant empirical research, including 21 primary studies. Guiding the analysis are two questions:…

  3. Measuring Speech Recognition Proficiency: A Psychometric Analysis of Speed and Accuracy

    ERIC Educational Resources Information Center

    Rader, Martha H.; Bailey, Glenn A.; Kurth, Linda A.

    2008-01-01

    This study examined the validity of various measures of speed and accuracy for assessing proficiency in speech recognition. The study specifically compared two different word-count indices for speed and accuracy (the 5-stroke word and the 1.4-syllable standard word) on a timing administered to 114 speech recognition students measured at 1-, 2-,…

  4. Accuracy analysis of direct georeferenced UAV images utilising low-cost navigation sensors

    NASA Astrophysics Data System (ADS)

    Briese, Christian; Wieser, Martin; Verhoeven, Geert; Glira, Philipp; Doneus, Michael; Pfeifer, Norbert

    2014-05-01

    control points should be used to improve the estimated values, especially to decrease the amount of systematic errors. For the bundle block adjustment the calibration of the camera and their temporal stability must be determined additionally. This contribution presents next to the theory a practical study on the accuracy analysis of direct georeferenced UAV imagery by low-cost navigation sensors. The analysis was carried out within the research project ARAP (automated (ortho)rectification of archaeological aerial photographs). The utilized UAS consists of the airplane "MAJA", manufactured by "Bormatec" (length: 1.2 m, wingspan: 2.2 m) equipped with the autopilot "ArduPilot Mega 2.5". For image acquisition the camera "Ricoh GR Digital IV" is utilised. The autopilot includes a GNSS receiver capable of DGPS (EGNOS), an inertial measurement system (INS), a barometer, and a magnetometer. In the study the achieved accuracies for the estimated position and orientation of the images are presented. The paper concludes with a summary of the remaining error sources and their possible corrections by applying further improvements on the utilised equipment and the direct georeferencing process.

  5. An Original Stepwise Multilevel Logistic Regression Analysis of Discriminatory Accuracy: The Case of Neighbourhoods and Health

    PubMed Central

    Wagner, Philippe; Ghith, Nermin; Leckie, George

    2016-01-01

    Background and Aim Many multilevel logistic regression analyses of “neighbourhood and health” focus on interpreting measures of associations (e.g., odds ratio, OR). In contrast, multilevel analysis of variance is rarely considered. We propose an original stepwise analytical approach that distinguishes between “specific” (measures of association) and “general” (measures of variance) contextual effects. Performing two empirical examples we illustrate the methodology, interpret the results and discuss the implications of this kind of analysis in public health. Methods We analyse 43,291 individuals residing in 218 neighbourhoods in the city of Malmö, Sweden in 2006. We study two individual outcomes (psychotropic drug use and choice of private vs. public general practitioner, GP) for which the relative importance of neighbourhood as a source of individual variation differs substantially. In Step 1 of the analysis, we evaluate the OR and the area under the receiver operating characteristic (AUC) curve for individual-level covariates (i.e., age, sex and individual low income). In Step 2, we assess general contextual effects using the AUC. Finally, in Step 3 the OR for a specific neighbourhood characteristic (i.e., neighbourhood income) is interpreted jointly with the proportional change in variance (i.e., PCV) and the proportion of ORs in the opposite direction (POOR) statistics. Results For both outcomes, information on individual characteristics (Step 1) provide a low discriminatory accuracy (AUC = 0.616 for psychotropic drugs; = 0.600 for choosing a private GP). Accounting for neighbourhood of residence (Step 2) only improved the AUC for choosing a private GP (+0.295 units). High neighbourhood income (Step 3) was strongly associated to choosing a private GP (OR = 3.50) but the PCV was only 11% and the POOR 33%. Conclusion Applying an innovative stepwise multilevel analysis, we observed that, in Malmö, the neighbourhood context per se had a negligible
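
The AUC statistic used in Steps 1 and 2 has a simple rank interpretation: the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A minimal O(n*m) sketch follows; real analyses use the equivalent Wilcoxon/Mann-Whitney form or library routines.

```python
def auc(scores_pos, scores_neg):
    # probability a random positive outranks a random negative; ties count half
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 means the model discriminates no better than chance, which is why adding neighbourhood information is judged by how far it lifts the AUC above the individual-covariate baseline.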

  6. The psychology of intelligence analysis: drivers of prediction accuracy in world politics.

    PubMed

    Mellers, Barbara; Stone, Eric; Atanasov, Pavel; Rohrbaugh, Nick; Metz, S Emlen; Ungar, Lyle; Bishop, Michael M; Horowitz, Michael; Merkle, Ed; Tetlock, Philip

    2015-03-01

    This article extends psychological methods and concepts into a domain that is as profoundly consequential as it is poorly understood: intelligence analysis. We report findings from a geopolitical forecasting tournament that assessed the accuracy of more than 150,000 forecasts of 743 participants on 199 events occurring over 2 years. Participants were above average in intelligence and political knowledge relative to the general population. Individual differences in performance emerged, and forecasting skills were surprisingly consistent over time. Key predictors were (a) dispositional variables of cognitive ability, political knowledge, and open-mindedness; (b) situational variables of training in probabilistic reasoning and participation in collaborative teams that shared information and discussed rationales (Mellers, Ungar, et al., 2014); and (c) behavioral variables of deliberation time and frequency of belief updating. We developed a profile of the best forecasters; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness. They had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments. Last but not least, they viewed forecasting as a skill that required deliberate practice, sustained effort, and constant monitoring of current affairs. PMID:25581088
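
Forecast accuracy in tournaments of this kind is typically scored with a proper scoring rule such as the Brier score, the mean squared difference between the forecast probability and the 0/1 outcome. A minimal sketch for binary events (the tournament's exact scoring variant may differ):

```python
def brier_score(forecasts, outcomes):
    # mean squared error between forecast probabilities and binary outcomes;
    # 0.0 is perfect, 0.25 matches an uninformative constant 0.5 forecast
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
```

Lower is better, and frequent belief updating helps precisely because it moves probabilities toward the eventual outcome before resolution.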

  7. Accuracy analysis of mimetic finite volume operators on geodesic grids and a consistent alternative

    NASA Astrophysics Data System (ADS)

    Peixoto, Pedro S.

    2016-04-01

    Many newly developed climate, weather and ocean global models are based on quasi-uniform spherical polygonal grids, aiming for high resolution and better scalability. Thuburn et al. (2009) and Ringler et al. (2010) developed a C staggered finite volume/difference method for arbitrary polygonal spherical grids suitable for these next generation dynamical cores. This method has many desirable mimetic properties and became popular, being adopted in some recent models, in spite of being known to possess low order of accuracy. In this work, we show that, for the nonlinear shallow water equations on non-uniform grids, the method has potentially 3 main sources of inconsistencies (local truncation errors not converging to zero as the grid is refined): (i) the divergence term of the continuity equation, (ii) the perpendicular velocity and (iii) the kinetic energy terms of the vector invariant form of the momentum equations. Although some of these inconsistencies have not impacted the convergence on some standard shallow water test cases up until now, they may constitute a potential problem for high resolution 3D models. Based on our analysis, we propose modifications for the method that will make it first order accurate in the maximum norm. It preserves many of the mimetic properties, albeit having non-steady geostrophic modes on the f-sphere. Experimental results show that the resulting model is a more accurate alternative to the existing formulations and should provide means of having a consistent, computationally cheap and scalable atmospheric or ocean model on C staggered Voronoi grids.

  8. Objective analysis of the Gulf Stream thermal front: methods and accuracy. Technical report

    SciTech Connect

    Tracey, K.L.; Friedlander, A.I.; Watts, R.

    1987-12-01

    The objective-analysis (OA) technique was adapted by Watts and Tracey in order to map the thermal frontal zone of the Gulf Stream. Here, the authors test the robustness of the adapted OA technique to the selection of four control parameters: mean field, standard deviation field, correlation function, and decimation time. Output OA maps of the thermocline depth are most affected by the choice of mean field, with the most-realistic results produced using a time-averaged mean. The choice of the space-time correlation function has a large influence on the size of the estimated error fields, which are associated with the OA maps. The smallest errors occur using the analytic function based on 4 years of inverted echo sounder data collected in the same region of the Gulf Stream. Variations in the selection of the standard deviation field and decimation time have little effect on the output OA maps. Accuracy of the output OA maps is determined by comparing them with independent measurements of the thermal field. Two cases are evaluated: standard maps and high-temporal-resolution maps, with decimation times of 2 days and 1 day, respectively. Standard deviations (STD) between the standard maps at the 15% estimated error level and the XBTs (AXBTs) are determined to be 47-53 m. Comparisons of the high-temporal-resolution maps at the 20% error level with the XBTs (AXBTs) give STD differences of 47 m.

  9. Accuracy of ionospheric models used in GNSS and SBAS: methodology and analysis

    NASA Astrophysics Data System (ADS)

    Rovira-Garcia, A.; Juan, J. M.; Sanz, J.; González-Casado, G.; Ibáñez, D.

    2016-03-01

    The characterization of the accuracy of ionospheric models currently used in global navigation satellite systems (GNSSs) is a long-standing issue. The characterization remains a challenging problem owing to the lack of sufficiently accurate slant ionospheric determinations to be used as a reference. The present study proposes a methodology based on the comparison of the predictions of any ionospheric model with actual unambiguous carrier-phase measurements from a global distribution of permanent receivers. The differences are separated as hardware delays (a receiver constant plus a satellite constant) per day. The present study was conducted for the entire year of 2014, i.e. during the last solar cycle maximum. The ionospheric models assessed are the operational models broadcast by the global positioning system (GPS) and Galileo constellations, the satellite-based augmentation system (SBAS) (i.e. the European Geostationary Navigation Overlay Service (EGNOS) and the Wide Area Augmentation System (WAAS)), a number of post-process global ionospheric maps (GIMs) from different International GNSS Service (IGS) analysis centres (ACs) and, finally, a more sophisticated GIM computed by the research group of Astronomy and GEomatics (gAGE). Ionospheric models based on GNSS data and represented on a grid (IGS GIMs or SBAS) correct about 85 % of the total slant ionospheric delay, whereas the models broadcast in the navigation messages of GPS and Galileo only account for about 70 %. Our gAGE GIM is shown to correct 95 % of the delay. The proposed methodology appears to be a useful tool to improve current ionospheric models.

  10. Analysis of Scattering Components from Fully Polarimetric SAR Images for Improving Accuracies of Urban Density Estimation

    NASA Astrophysics Data System (ADS)

    Susaki, J.

    2016-06-01

    In this paper, we analyze probability density functions (PDFs) of scatterings derived from fully polarimetric synthetic aperture radar (SAR) images for improving the accuracies of estimated urban density. We have reported a method for estimating urban density that uses an index Tv+c obtained by normalizing the sum of volume and helix scatterings Pv+c. Validation results showed that estimated urban densities have a high correlation with building-to-land ratios (Kajimoto and Susaki, 2013b; Susaki et al., 2014). While the method is found to be effective for estimating urban density, it is not clear why Tv+c is more effective than indices derived from other scatterings, such as surface or double-bounce scatterings, observed in urban areas. In this research, we focus on PDFs of scatterings derived from fully polarimetric SAR images in terms of scattering normalization. First, we introduce a theoretical PDF that assumes that image pixels have scatterers showing random backscattering. We then generate PDFs of scatterings derived from observations of concrete blocks with different orientation angles, and from a satellite-based fully polarimetric SAR image. The analysis of the PDFs and the derived statistics reveals that the curves of the PDFs of Pv+c are the most similar to the normal distribution among all the scatterings derived from fully polarimetric SAR images. It was found that Tv+c works most effectively because of its similarity to the normal distribution.

  11. Diagnostic accuracy of ascitic cholesterol concentration for malignant ascites: a meta-analysis

    PubMed Central

    Zhu, Hong; Shen, Yongchun; Deng, Kai; Liu, Xia; Zhao, Yaqin; Liu, Taiguo; Huang, Ying

    2015-01-01

    Many studies have investigated whether ascitic cholesterol can aid in the diagnosis of malignancy-related ascites (MRA), and the results have varied considerably. To obtain a more reliable answer to this question, we meta-analyzed the literature on using ascitic cholesterol as a diagnostic test to help identify MRA. Literature databases were systematically searched for studies examining the accuracy of ascitic cholesterol for diagnosing MRA. Data on sensitivity, specificity, positive/negative likelihood ratio (PLR/NLR), and diagnostic odds ratio (DOR) were pooled using random-effects models. Summary receiver operating characteristic (SROC) curves and the area under the curve (AUC) were used to summarize overall test performance. In total, our meta-analysis included 8 studies involving 743 subjects. Summary estimates for ascitic cholesterol in the diagnosis of MRA were as follows: sensitivity, 0.82 (95% CI 0.78 to 0.86); specificity, 0.90 (95% CI 0.87 to 0.93); PLR, 9.24 (95% CI 4.58 to 18.66); NLR, 0.16 (95% CI 0.08 to 0.32); and DOR, 66.96 (95% CI 18.83 to 238.11). The AUC was 0.96. The ascitic cholesterol level is helpful for the diagnosis of MRA. Nevertheless, the results of ascitic cholesterol assays should be interpreted in parallel with the results of traditional tests and clinical information. PMID:26770458
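    The pooled indices are linked by simple definitions: PLR = sens/(1 − spec), NLR = (1 − sens)/spec, DOR = PLR/NLR. Recomputing them from the pooled sensitivity and specificity will not exactly reproduce the reported PLR/NLR/DOR, because each index was pooled separately under a random-effects model; the sketch below only illustrates the definitions:

```python
# Diagnostic-accuracy index definitions, applied to the pooled point estimates.

def likelihood_ratios(sens, spec):
    plr = sens / (1 - spec)    # positive likelihood ratio
    nlr = (1 - sens) / spec    # negative likelihood ratio
    dor = plr / nlr            # diagnostic odds ratio
    return plr, nlr, dor

plr, nlr, dor = likelihood_ratios(0.82, 0.90)
```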

  12. The reliability, validity, and accuracy of self-reported absenteeism from work: a meta-analysis.

    PubMed

    Johns, Gary; Miraglia, Mariella

    2015-01-01

    Because of a variety of access limitations, self-reported absenteeism from work is often employed in research concerning health, organizational behavior, and economics, and it is ubiquitous in large scale population surveys in these domains. Several well established cognitive and social-motivational biases suggest that self-reports of absence will exhibit convergent validity with records-based measures but that people will tend to underreport the behavior. We used meta-analysis to summarize the reliability, validity, and accuracy of absence self-reports. The results suggested that self-reports of absenteeism offer adequate test-retest reliability and that they exhibit reasonably good rank order convergence with organizational records. However, people have a decided tendency to underreport their absenteeism, although such underreporting has decreased over time. Also, self-reports were more accurate when sickness absence rather than absence for any reason was probed. It is concluded that self-reported absenteeism might serve as a valid measure in some correlational research designs. However, when accurate knowledge of absolute absenteeism levels is essential, the tendency to underreport could result in flawed policy decisions. PMID:25181281

  13. Accuracy of non-rigid registration for local analysis of elasticity restrictions of the lungs

    NASA Astrophysics Data System (ADS)

    Stein, Daniel; Tetzlaff, Ralf; Wolf, Ivo; Meinzer, Hans-Peter

    2009-02-01

    Diseases of the lung often begin with regionally limited changes altering the tissue elasticity. Therefore, quantification of regional lung tissue motion would be desirable for early diagnosis, treatment monitoring, and follow-up. Dynamic MRI can capture such changes, but quantification requires non-rigid registration. However, analysis of dynamic MRI data of the lung is challenging due to inherently low image signal and contrast. Towards a computer-assisted quantification for regional lung diseases, we have evaluated two Demons-based registration methods for their accuracy in quantifying local lung motion on dynamic MRI data. The registration methods were applied to masked image data, which were pre-segmented with a graph-cut algorithm. Evaluation was performed on five datasets from healthy humans with nine time frames each. As the gold standard, manually defined points (between 8 and 24) on prominent landmarks (essentially vessel structures) were used. The distance between these points and the predicted landmark location as well as the overlap (Dice coefficient) of the segmentations transformed with the deformation field were calculated. We found that the Demons algorithm performed better than the Symmetric Forces Demons algorithm with respect to average landmark distance (6.5 mm ± 4.1 mm vs. 8.6 mm ± 6.1 mm), but was comparable regarding the Dice coefficient (0.946 ± 0.018 vs. 0.961 ± 0.018). Additionally, the Demons algorithm computes the deformation in only 10 seconds, whereas the Symmetric Forces Demons algorithm takes about 12 times longer.
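    The overlap metric reported above, the Dice coefficient, can be sketched on small binary masks (illustrative arrays, not lung segmentations):

```python
# Dice coefficient between two binary segmentation masks.
import numpy as np

def dice(mask_a, mask_b):
    """Dice = 2|A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.array([[1, 1, 0], [1, 0, 0]])
b = np.array([[1, 1, 0], [0, 0, 0]])
overlap = dice(a, b)   # 2*2 / (3+2) = 0.8
```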

  14. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-01-01

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison. PMID:27529253
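    The PCA stage described above can be sketched as follows; the sensor readings are synthetic stand-ins for the 9-sensor magnetic measurements, and the ANN mapping stage itself is omitted:

```python
# PCA as a dimension-reducing front end: project 9-sensor readings onto the
# leading principal components before the (omitted) ANN field-position mapping.
import numpy as np

rng = np.random.default_rng(0)
readings = rng.normal(size=(200, 9))      # 200 samples from a 9-sensor network
readings[:, 1] = 2.0 * readings[:, 0]     # redundant (perfectly correlated) channel

centered = readings - readings.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)  # principal axes in vt
k = 5
reduced = centered @ vt[:k].T             # (200, 5) features for the ANN stage
explained = (s[:k] ** 2).sum() / (s ** 2).sum()  # variance retained by k components
```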

  15. SU-E-J-37: Feasibility of Utilizing Carbon Fiducials to Increase Localization Accuracy of Lumpectomy Cavity for Partial Breast Irradiation

    SciTech Connect

    Zhang, Y; Hieken, T; Mutter, R; Park, S; Yan, E; Brinkmann, D; Pafundi, D

    2015-06-15

    Purpose: To investigate the feasibility of utilizing carbon fiducials to increase localization accuracy of the lumpectomy cavity for partial breast irradiation (PBI). Methods: Carbon fiducials were placed intraoperatively in the lumpectomy cavity following resection of breast cancer in 11 patients. The patients were scheduled to receive whole breast irradiation (WBI) with a boost or 3D-conformal PBI. WBI patients were initially set up to skin tattoos using lasers, followed by orthogonal kV on-board-imaging (OBI) matching to bone per clinical practice. Cone beam CT (CBCT) was acquired weekly for offline review. For the boost component of WBI and for PBI, patients were set up with lasers, followed by OBI matching to fiducials, with final alignment by CBCT matching to fiducials. Using carbon fiducials as a surrogate for the lumpectomy cavity and CBCT matching to fiducials as the gold standard, setup uncertainties relative to lasers, OBI bone, OBI fiducials, and CBCT breast were compared. Results: Minimal imaging artifacts were introduced by fiducials on the planning CT and CBCT. The fiducials were sufficiently visible on OBI for online localization. The mean magnitude and standard deviation of setup errors were 8.4 mm ± 5.3 mm (n=84), 7.3 mm ± 3.7 mm (n=87), 2.2 mm ± 1.6 mm (n=40) and 4.8 mm ± 2.6 mm (n=87) for lasers, OBI bone, OBI fiducials and CBCT breast tissue, respectively. Significant migration occurred in one of 39 implanted fiducials in a patient with a large postoperative seroma. Conclusion: OBI carbon fiducial-based setup can improve localization accuracy with minimal imaging artifacts. With increased localization accuracy, setup uncertainties can be reduced from 8 mm using OBI bone matching to 3 mm using OBI fiducial matching for PBI treatment. This work demonstrates the feasibility of utilizing carbon fiducials to increase localization accuracy to the lumpectomy cavity for PBI. This may be particularly attractive for localization in the setting of proton therapy and other scenarios

  16. The effect of spatial resolution on decoding accuracy in fMRI multivariate pattern analysis.

    PubMed

    Gardumi, Anna; Ivanov, Dimo; Hausfeld, Lars; Valente, Giancarlo; Formisano, Elia; Uludağ, Kâmil

    2016-05-15

    Multivariate pattern analysis (MVPA) in fMRI has been used to extract information from distributed cortical activation patterns, which may go undetected in conventional univariate analysis. However, little is known about the physical and physiological underpinnings of MVPA in fMRI as well as about the effect of spatial smoothing on its performance. Several studies have addressed these issues, but their investigation was limited to the visual cortex at 3T with conflicting results. Here, we used ultra-high field (7T) fMRI to investigate the effect of spatial resolution and smoothing on decoding of speech content (vowels) and speaker identity from auditory cortical responses. To that end, we acquired high-resolution (1.1mm isotropic) fMRI data and additionally reconstructed them at 2.2 and 3.3mm in-plane spatial resolutions from the original k-space data. Furthermore, the data at each resolution were spatially smoothed with different 3D Gaussian kernel sizes (i.e. no smoothing or 1.1, 2.2, 3.3, 4.4, or 8.8mm kernels). For all spatial resolutions and smoothing kernels, we demonstrate the feasibility of decoding speech content (vowel) and speaker identity at 7T using support vector machine (SVM) MVPA. In addition, we found that high spatial frequencies are informative for vowel decoding and that the relative contribution of high and low spatial frequencies is different across the two decoding tasks. Moderate smoothing (up to 2.2mm) improved the accuracies for both decoding of vowels and speakers, possibly due to reduction of noise (e.g. residual motion artifacts or instrument noise) while still preserving information at high spatial frequency. In summary, our results show that - even with the same stimuli and within the same brain areas - the optimal spatial resolution for MVPA in fMRI depends on the specific decoding task of interest. PMID:26899782

  17. Digital core based transmitted ultrasonic wave simulation and velocity accuracy analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Shan, Rui

    2016-06-01

    Transmitted ultrasonic wave simulation (TUWS) in a digital core is one of the important elements of digital rock physics and is used to study wave propagation in porous cores and calculate equivalent velocity. When simulating wave propagation in a 3D digital core, two additional layers are attached to its two surfaces perpendicular to the wave direction, and one planar wave source and two receiver arrays are properly installed. After source excitation, the two receivers record the incident and transmitted waves of the digital rock. The wave propagation velocity, which is taken as the velocity of the digital core, is computed from the picked peak-time difference between the two recorded waves. To evaluate the accuracy of TUWS, a digital core is fully saturated with gas, oil, and water to calculate the corresponding velocities. The velocities increase with decreasing wave frequency in the simulation frequency band, which is considered to be the result of scattering. When the pore fluid is varied from gas to oil and finally to water, the velocity-variation characteristics between the different frequencies are similar, approximately following the variation law of velocities obtained from linear elastic statics simulation (LESS), which has been widely used, although their absolute values differ. The results of this paper show that transmitted ultrasonic simulation has high relative precision.
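    The velocity computation described above reduces to path length divided by the picked peak-time difference; the values below are illustrative, not taken from the paper:

```python
# Equivalent velocity of the digital core from the two recorded waveforms.
# All numbers are hypothetical examples.

core_length = 0.003        # m, wave path length through the digital core
t_incident = 2.0e-6        # s, picked peak time of the incident wave
t_transmitted = 3.0e-6     # s, picked peak time of the transmitted wave

velocity = core_length / (t_transmitted - t_incident)   # m/s
```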

  18. Quantitative analysis of accuracy of seismic wave-propagation codes in 3D random scattering media

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Imperatori, Walter; Mai, P. Martin

    2013-04-01

    Several recent verification studies (e.g. Day et al., 2001; Bielak et al., 2010, Chaljub et al., 2010) have demonstrated the importance of assessing the accuracy of available numerical tools at low frequency in presence of large-scale features (basins, topography, etc.). The fast progress in high-performance computing, including efficient optimization of numerical codes on petascale supercomputers, has permitted the simulation of 3D seismic wave propagation at frequencies of engineering interest (up to 10Hz) in highly heterogeneous media (e.g. Hartzell et al, 2010; Imperatori and Mai, 2013). However, high frequency numerical simulations involving random scattering media, characterized by small-scale heterogeneities, are much more challenging for most numerical methods, and their verification may therefore be even more crucial than in the low-frequency case. Our goal is to quantitatively compare the accuracy and the behavior of three different numerical codes for seismic wave propagation in 3D random scattering media at high frequency. We deploy a point source with omega-squared spectrum, and focus on the near-source region, being of great interest in strong motion seismology. We use two codes based on the finite-difference method (FD1 and FD2) and one code based on the support-operator method (SO). Both FD1 and FD2 are fourth-order staggered-grid finite-difference codes (for FD1 see Olsen et al., 2009; for FD2 see Moczo et al., 2007). The FD1 and FD2 codes are characterized by slightly different medium representations, since FD1 uses point values of material parameters in each FD-cell, while FD2 uses the effective material parameters at each grid-point (Moczo et al., 2002). SO is a second-order support-operator method (Ely et al., 2008). We considered models with random velocity perturbations described by a von Kármán correlation function with different correlation lengths and different standard deviations. Our results show significant variability in both phase and amplitude as

  19. 3D combinational curves for accuracy and performance analysis of positive biometrics identification

    NASA Astrophysics Data System (ADS)

    Du, Yingzi; Chang, Chein-I.

    2008-06-01

    The receiver operating characteristic (ROC) curve has been widely used as an evaluation criterion to measure the accuracy of biometric systems. Unfortunately, such an ROC curve provides no indication of the optimum threshold and cost function. In this paper, two kinds of 3D combinational curves are proposed: the 3D combinational accuracy curve and the 3D combinational performance curve. The 3D combinational accuracy curve gives a balanced view of the relationships among FAR (false alarm rate), FRR (false rejection rate), threshold t, and Cost. Six 2D curves can be derived from the 3D combinational accuracy curve: the conventional 2D ROC curve, 2D curve of (FRR, t), 2D curve of (FAR, t), 2D curve of (FRR, Cost), 2D curve of (FAR, Cost), and 2D curve of (t, Cost). The 3D combinational performance curve can be derived from the 3D combinational accuracy curve, and gives a balanced view among Security, Convenience, threshold t, and Cost. The advantages of using the proposed 3D combinational curves are demonstrated on iris recognition systems, where the experimental results show that the proposed 3D combinational curves can provide more comprehensive information on system accuracy and performance.
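    The quantities the 3D combinational curves relate (FAR, FRR, threshold t, Cost) can all be computed from genuine and impostor score samples; the scores and cost weights below are hypothetical:

```python
# FAR, FRR and a weighted Cost as functions of the decision threshold t.
# Scores and cost weights are illustrative, not from any real biometric system.

genuine = [0.9, 0.8, 0.75, 0.6, 0.4]     # matching-user scores
impostor = [0.5, 0.3, 0.2, 0.1, 0.05]    # non-matching-user scores

def rates(t):
    far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
    frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
    return far, frr

def cost(t, w_far=0.7, w_frr=0.3):       # security-weighted cost (assumed weights)
    far, frr = rates(t)
    return w_far * far + w_frr * frr

# pick the candidate threshold with the lowest cost
best_t = min((cost(t), t) for t in [0.3, 0.45, 0.55, 0.7])[1]
```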

  20. Accuracy and Feasibility of Video Analysis for Assessing Hamstring Flexibility and Validity of the Sit-and-Reach Test

    ERIC Educational Resources Information Center

    Mier, Constance M.

    2011-01-01

    The accuracy of video analysis of the passive straight-leg raise test (PSLR) and the validity of the sit-and-reach test (SR) were tested in 60 men and women. Computer software measured static hip-joint flexion accurately. High within-session reliability of the PSLR was demonstrated (R greater than 0.97). Test-retest (separate days) reliability for…

  1. The Accuracy of Recidivism Risk Assessments for Sexual Offenders: A Meta-Analysis of 118 Prediction Studies

    ERIC Educational Resources Information Center

    Hanson, R. Karl; Morton-Bourgon, Kelly E.

    2009-01-01

    This review compared the accuracy of various approaches to the prediction of recidivism among sexual offenders. On the basis of a meta-analysis of 536 findings drawn from 118 distinct samples (45,398 sexual offenders, 16 countries), empirically derived actuarial measures were more accurate than unstructured professional judgment for all outcomes…

  2. Diagnostic test accuracy: methods for systematic review and meta-analysis.

    PubMed

    Campbell, Jared M; Klugar, Miloslav; Ding, Sandrine; Carmody, Dennis P; Hakonsen, Sasja J; Jadotte, Yuri T; White, Sarahlouise; Munn, Zachary

    2015-09-01

    Systematic reviews are carried out to provide an answer to a clinical question based on all available evidence (published and unpublished), to critically appraise the quality of studies, and account for and explain variations between the results of studies. The Joanna Briggs Institute specializes in providing methodological guidance for the conduct of systematic reviews and has developed methods and guidance for reviewers conducting systematic reviews of studies of diagnostic test accuracy. Diagnostic tests are used to identify the presence or absence of a condition for the purpose of developing an appropriate treatment plan. Owing to demands for improvements in speed, cost, ease of performance, patient safety, and accuracy, new diagnostic tests are continuously developed, and there are often several tests available for the diagnosis of a particular condition. In order to provide the evidence necessary for clinicians and other healthcare professionals to make informed decisions regarding the optimum test to use, primary studies need to be carried out on the accuracy of diagnostic tests and the results of these studies synthesized through systematic review. The Joanna Briggs Institute and its international collaboration have updated, revised, and developed new guidance for systematic reviews, including systematic reviews of diagnostic test accuracy. This methodological article summarizes that guidance and provides detailed advice on the effective conduct of systematic reviews of diagnostic test accuracy. PMID:26355602

  3. Analysis of accuracy of digital elevation models created from captured data by digital photogrammetry method

    NASA Astrophysics Data System (ADS)

    Hudec, P.

    2011-12-01

    A digital elevation model (DEM) is an important part of many geoinformatic applications. For the creation of DEM, spatial data collected by geodetic measurements in the field, photogrammetric processing of aerial survey photographs, laser scanning and secondary sources (analogue maps) are used. It is very important from a user's point of view to know the vertical accuracy of a DEM. The article describes the verification of the vertical accuracy of a DEM for the region of Medzibodrožie, which was created using digital photogrammetry for the purposes of water resources management and modeling and resolving flood cases based on geodetic measurements in the field.

  4. The Push for More Challenging Texts: An Analysis of Early Readers' Rate, Accuracy, and Comprehension

    ERIC Educational Resources Information Center

    Amendum, Steven J.; Conradi, Kristin; Liebfreund, Meghan D.

    2016-01-01

    The purpose of the study was to examine the relationship between the challenge level of text and early readers' reading comprehension. This relationship was also examined with consideration to students' word recognition accuracy and reading rate. Participants included 636 students, in Grades 1-3, in a southeastern state. Results suggest that…

  5. Comparative analysis of Worldview-2 and Landsat 8 for coastal saltmarsh mapping accuracy assessment

    NASA Astrophysics Data System (ADS)

    Rasel, Sikdar M. M.; Chang, Hsing-Chung; Diti, Israt Jahan; Ralph, Tim; Saintilan, Neil

    2016-05-01

    Coastal saltmarshes and their constituent components and processes are of scientific interest due to their ecological function and services. However, the heterogeneity and seasonal dynamics of the coastal wetland system make it challenging to map saltmarshes with remotely sensed data. This study selected four important saltmarsh species, Phragmites australis, Sporobolus virginicus, Ficinia nodosa and Schoenoplectus sp., as well as mangrove and pine tree species, Avicennia and Casuarina sp., respectively. High spatial resolution Worldview-2 data and coarse spatial resolution Landsat 8 imagery were selected for this study. Among the selected vegetation types, some patches were fragmented and close to the spatial resolution of the Worldview-2 data, while some patches were larger than the 30 m resolution of the Landsat 8 data. This study aims to test the effectiveness of different classifiers for imagery with various spatial and spectral resolutions. Three classification algorithms, Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM) and Artificial Neural Network (ANN), were tested and compared in terms of the mapping accuracy of the results derived from both satellite images. For the Worldview-2 data, SVM gave the highest overall accuracy (92.12%, kappa = 0.90), followed by ANN (90.82%, kappa = 0.89) and MLC (90.55%, kappa = 0.88). For the Landsat 8 data, MLC (82.04%) showed the highest classification accuracy compared to SVM (77.31%) and ANN (75.23%). The producer's accuracies of the classification results are also presented in the paper.
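    The accuracy statistics quoted above (overall accuracy, kappa, producer's accuracy) all derive from a confusion matrix; a sketch with an illustrative 3×3 matrix, not the study's data:

```python
# Classification accuracy measures from a confusion matrix
# (rows: reference class, columns: mapped class). Values are illustrative.
import numpy as np

cm = np.array([[50, 5, 5],
               [4, 60, 6],
               [6, 4, 60]])

total = cm.sum()
overall_accuracy = np.trace(cm) / total
# chance agreement expected from the row/column marginals (for Cohen's kappa)
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2
kappa = (overall_accuracy - pe) / (1 - pe)
producer_accuracy = np.diag(cm) / cm.sum(axis=1)  # per-class, reference totals
```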

  6. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…

  7. Multidimensional analysis of suction feeding performance in fishes: fluid speed, acceleration, strike accuracy and the ingested volume of water.

    PubMed

    Higham, Timothy E; Day, Steven W; Wainwright, Peter C

    2006-07-01

    Suction feeding fish draw prey into the mouth using a flow field that they generate external to the head. In this paper we present a multidimensional perspective on suction feeding performance that we illustrate in a comparative analysis of suction feeding ability in two members of Centrarchidae, the largemouth bass (Micropterus salmoides) and bluegill sunfish (Lepomis macrochirus). We present the first direct measurements of maximum fluid speed capacity, and we use this to calculate local fluid acceleration and volumetric flow rate. We also calculated the ingested volume and a novel metric of strike accuracy. In addition, we quantified for each species the effects of gape magnitude, time to peak gape, and swimming speed on features of the ingested volume of water. Digital particle image velocimetry (DPIV) and high-speed video were used to measure the flow in front of the mouths of three fish from each species in conjunction with a vertical laser sheet positioned on the mid-sagittal plane of the fish. From this we quantified the maximum fluid speed (in the earthbound and fish's frame of reference), acceleration and ingested volume. Our method for determining strike accuracy involved quantifying the location of the prey relative to the center of the parcel of ingested water. Bluegill sunfish generated higher fluid speeds in the earthbound frame of reference, accelerated the fluid faster, and were more accurate than largemouth bass. However, largemouth bass ingested a larger volume of water and generated a higher volumetric flow rate than bluegill sunfish. In addition, because largemouth bass swam faster during prey capture, they generated higher fluid speeds in the fish's frame of reference. Thus, while bluegill can exert higher drag forces on stationary prey items, largemouth bass more quickly close the distance between themselves and prey. The ingested volume and volumetric flow rate significantly increased as gape increased for both species, while time to peak

  8. Accuracy Analysis of a Robotic Radionuclide Inspection and Mapping System for Surface Contamination

    SciTech Connect

    Mauer, Georg F.; Kawa, Chris

    2008-01-15

    The mapping of localized regions of radionuclide contamination in a building can be a time consuming and costly task. Humans moving hand-held radiation detectors over the target areas are subject to fatigue. A contamination map based on manual surveys can contain significant operator-induced inaccuracies. A Fanuc M16i light industrial robot has been configured for installation on a mobile aerial work platform, such as a tall forklift. When positioned in front of a wall or floor surface, the robot can map the radiation levels over a surface area of up to 3 m by 3 m. The robot's end effector is a commercial alpha-beta radiation sensor, augmented with range and collision avoidance sensors to ensure operational safety as well as to maintain a constant gap between surface and radiation sensors. The accuracy and repeatability of the robotically conducted contamination surveys is directly influenced by the sensors and other hardware employed. This paper presents an in-depth analysis of various non-contact sensors for gap measurement, and the means to compensate for predicted systematic errors that arise during the area survey scans. The range sensor should maintain a constant gap between the radiation counter and the surface being inspected. The inspection robot scans the wall surface horizontally, moving down at predefined vertical intervals after each scan in a meandering pattern. A number of non-contact range sensors can be employed for the measurement of the gap between the robot end effector and the wall. The nominal gap width was specified as 10 mm, with variations during a single scan not to exceed {+-} 2 mm. Unfinished masonry or concrete walls typically exhibit irregularities, such as holes, gaps, or indentations in mortar joints. These irregularities can be sufficiently large to indicate a change of the wall contour. The responses of different sensor types to the wall irregularities vary, depending on their underlying principles of operation. We explored

  9. Spatio-Temporal Analysis of the Accuracy of Tropical Multisatellite Precipitation Analysis 3B42 Precipitation Data in Mid-High Latitudes of China

    PubMed Central

    Cai, Yancong; Jin, Changjie; Wang, Anzhi; Guan, Dexin; Wu, Jiabing; Yuan, Fenghui; Xu, Leilei

    2015-01-01

Satellite-based precipitation data have contributed greatly to quantitative precipitation forecasting and provide a potential alternative source of precipitation data, allowing researchers to better understand patterns of precipitation over ungauged basins. However, the absence of calibration satellite data creates considerable uncertainties for the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 product over high-latitude areas beyond the TRMM satellite's latitude band (38°NS). This study attempts to statistically assess TMPA V7 data over the region beyond 40°NS using data obtained from numerous weather stations in 1998–2012. Comparative analysis at three timescales (daily, monthly and annual) indicates that adoption of a monthly adjustment significantly improved the correlation at larger timescales, increasing it from 0.63 to 0.95; TMPA data always exhibit a slight overestimation that is most serious at the daily scale (the absolute bias is 103.54%). Moreover, the performance of TMPA data varies across the seasons. Generally, TMPA data perform best in summer but worst in winter, which is likely to be associated with the effects of snow/ice-covered surfaces and shortcomings of the precipitation retrieval algorithms. Temporal and spatial analysis of accuracy indices suggests that the performance of TMPA data has gradually improved and has benefited from upgrades; the data are more reliable in humid areas than in arid regions. Special attention should be paid to its application in arid areas and in winter, where the accuracy indices score poorly. It is also clear that the calibration can significantly improve precipitation estimates: the overestimation by TMPA in the TRMM-covered area is about a third of that in the no-TRMM area for monthly and annual precipitation. The systematic evaluation of TMPA over mid-high latitudes provides a broader understanding of satellite-based precipitation estimates, and these data are

  10. Spatio-temporal analysis of the accuracy of tropical multisatellite precipitation analysis 3B42 precipitation data in mid-high latitudes of China.

    PubMed

    Cai, Yancong; Jin, Changjie; Wang, Anzhi; Guan, Dexin; Wu, Jiabing; Yuan, Fenghui; Xu, Leilei

    2015-01-01

Satellite-based precipitation data have contributed greatly to quantitative precipitation forecasting and provide a potential alternative source of precipitation data, allowing researchers to better understand patterns of precipitation over ungauged basins. However, the absence of calibration satellite data creates considerable uncertainties for the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 product over high-latitude areas beyond the TRMM satellite's latitude band (38°NS). This study attempts to statistically assess TMPA V7 data over the region beyond 40°NS using data obtained from numerous weather stations in 1998-2012. Comparative analysis at three timescales (daily, monthly and annual) indicates that adoption of a monthly adjustment significantly improved the correlation at larger timescales, increasing it from 0.63 to 0.95; TMPA data always exhibit a slight overestimation that is most serious at the daily scale (the absolute bias is 103.54%). Moreover, the performance of TMPA data varies across the seasons. Generally, TMPA data perform best in summer but worst in winter, which is likely to be associated with the effects of snow/ice-covered surfaces and shortcomings of the precipitation retrieval algorithms. Temporal and spatial analysis of accuracy indices suggests that the performance of TMPA data has gradually improved and has benefited from upgrades; the data are more reliable in humid areas than in arid regions. Special attention should be paid to its application in arid areas and in winter, where the accuracy indices score poorly. It is also clear that the calibration can significantly improve precipitation estimates: the overestimation by TMPA in the TRMM-covered area is about a third of that in the no-TRMM area for monthly and annual precipitation. The systematic evaluation of TMPA over mid-high latitudes provides a broader understanding of satellite-based precipitation estimates, and these data are
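The accuracy indices used in evaluations like this one (linear correlation and relative bias between satellite estimates and gauge observations) can be sketched as follows; the station and TMPA values below are invented illustrative numbers, not data from the study:

```python
import numpy as np

def accuracy_indices(satellite, gauge):
    """Correlation coefficient and relative bias (%) of satellite
    precipitation estimates against rain-gauge observations."""
    s = np.asarray(satellite, dtype=float)
    g = np.asarray(gauge, dtype=float)
    cc = float(np.corrcoef(s, g)[0, 1])                      # linear correlation
    bias_pct = float(100.0 * (s.sum() - g.sum()) / g.sum())  # > 0 means overestimation
    return cc, bias_pct

# Hypothetical monthly precipitation totals (mm) at one station.
gauge = [12.0, 30.5, 80.2, 140.0, 95.3, 40.1]
tmpa  = [15.0, 33.0, 85.0, 150.0, 99.0, 45.0]
cc, bias = accuracy_indices(tmpa, gauge)  # mild overestimation, high correlation
```

Aggregating to longer timescales (as in the monthly adjustment above) averages out daily retrieval errors, which is why the correlation improves from 0.63 to 0.95.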

  11. Canonical analysis for increased classification speed and channel selection

    NASA Technical Reports Server (NTRS)

    Eppler, W.

    1976-01-01

    The quadratic form can be expressed as a monotonically increasing sum of squares when the inverse covariance matrix is represented in canonical form. This formulation has the advantage that, in testing a particular class hypothesis, computations can be discontinued when the partial sum exceeds the smallest value obtained for other classes already tested. A method for channel selection is presented which arranges the original input measurements in that order which minimizes the expected number of computations. The classification algorithm was tested on data from LARS Flight Line C1 and found to reduce the sum-of-products operations by a factor of 6.7 in comparison with the conventional approach. In effect, the accuracy of a twelve-channel classification was achieved using only that CPU time required for a conventional four-channel classification.
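The early-termination idea described above can be sketched as follows (a hypothetical minimum-distance classifier, assuming equal-covariance classes so the log-determinant term can be ignored; `inv_chols` holds inverse Cholesky factors that put each quadratic form into canonical sum-of-squares form):

```python
import numpy as np

def classify_early_exit(x, means, inv_chols):
    """Assign x to the class minimizing the quadratic form (x-mu)^T C^-1 (x-mu),
    expressed in canonical form as a monotonically increasing sum of squares.
    A class is abandoned as soon as its partial sum exceeds the smallest total
    already found, saving sum-of-products operations."""
    best_class, best_sum = -1, np.inf
    for c, (mu, l_inv) in enumerate(zip(means, inv_chols)):
        d = l_inv @ (np.asarray(x, dtype=float) - mu)  # canonical (whitened) coordinates
        partial = 0.0
        for term in d:
            partial += term * term
            if partial >= best_sum:  # monotone sum: safe to discontinue this class
                break
        else:
            best_class, best_sum = c, partial
    return best_class

# Two hypothetical classes with identity covariance (inverse Cholesky = identity).
means = [np.zeros(3), np.full(3, 4.0)]
inv_chols = [np.eye(3), np.eye(3)]
label = classify_early_exit([0.2, -0.1, 0.3], means, inv_chols)
```

Ordering the channels so that large terms tend to come first (as the channel-selection method above does) makes the early exit fire sooner on average.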

  12. An evaluation of the accuracy and speed of metagenome analysis tools.

    PubMed

    Lindgreen, Stinus; Adair, Karen L; Gardner, Paul P

    2016-01-01

    Metagenome studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments from terrestrial and aquatic ecosystems to human skin and gut. With the advent of high-throughput sequencing platforms, the use of large scale shotgun sequencing approaches is now commonplace. However, a thorough independent benchmark comparing state-of-the-art metagenome analysis tools is lacking. Here, we present a benchmark where the most widely used tools are tested on complex, realistic data sets. Our results clearly show that the most widely used tools are not necessarily the most accurate, that the most accurate tool is not necessarily the most time consuming, and that there is a high degree of variability between available tools. These findings are important as the conclusions of any metagenomics study are affected by errors in the predicted community composition and functional capacity. Data sets and results are freely available from http://www.ucbioinformatics.org/metabenchmark.html. PMID:26778510

  13. An evaluation of the accuracy and speed of metagenome analysis tools

    PubMed Central

    Lindgreen, Stinus; Adair, Karen L.; Gardner, Paul P.

    2016-01-01

    Metagenome studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments from terrestrial and aquatic ecosystems to human skin and gut. With the advent of high-throughput sequencing platforms, the use of large scale shotgun sequencing approaches is now commonplace. However, a thorough independent benchmark comparing state-of-the-art metagenome analysis tools is lacking. Here, we present a benchmark where the most widely used tools are tested on complex, realistic data sets. Our results clearly show that the most widely used tools are not necessarily the most accurate, that the most accurate tool is not necessarily the most time consuming, and that there is a high degree of variability between available tools. These findings are important as the conclusions of any metagenomics study are affected by errors in the predicted community composition and functional capacity. Data sets and results are freely available from http://www.ucbioinformatics.org/metabenchmark.html PMID:26778510

  14. Georeferencing Accuracy Analysis of a Single WORLDVIEW-3 Image Collected Over Milan

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Roncoroni, F.; Brumana, R.; Previtali, M.

    2016-06-01

The use of rational functions has become a standard for very high-resolution satellite imagery (VHRSI). On the other hand, the overall geolocalization accuracy via direct georeferencing from on-board navigation components is much worse than the image ground sampling distance (predicted < 3.5 m CE90 for WorldView-3, whereas GSD = 0.31 m for panchromatic images at nadir). This paper presents the georeferencing accuracy results obtained from a single WorldView-3 image processed with a bias-compensated RPC camera model. Orientation results for an image collected over Milan are illustrated and discussed for both direct and indirect georeferencing strategies as well as for different bias correction parameters estimated from a set of ground control points. Results highlight that the use of a correction based on two shift parameters is optimal for the considered dataset.

  15. Accuracy analysis for DSM and orthoimages derived from SPOT HRS stereo data using direct georeferencing

    NASA Astrophysics Data System (ADS)

    Reinartz, Peter; Müller, Rupert; Lehner, Manfred; Schroeder, Manfred

During the HRS (High Resolution Stereo) Scientific Assessment Program the French space agency CNES delivered data sets from the HRS camera system with high precision ancillary data. Two test data sets from this program were evaluated: one is located in Germany, the other in Spain. The first goal was to derive orthoimages and digital surface models (DSM) from the along track stereo data by applying the rigorous model with direct georeferencing and without ground control points (GCPs). For the derivation of DSM, the stereo processing software developed at DLR for the MOMS-2P three line stereo camera was used. As a first step, the interior and exterior orientation of the camera, delivered as ancillary data from positioning and attitude systems, were extracted. A dense image matching, using nearly all pixels as kernel centers, provided the parallaxes. The quality of the stereo tie points was controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection led to points in object space which are subsequently interpolated to a DSM in a regular grid. DEM filtering methods were also applied and evaluations carried out differentiating between accuracies in forest and other areas. Additionally, orthoimages were generated from the images of the two stereo looking directions. The orthoimage and DSM accuracy was determined by using GCPs and available reference DEMs of superior accuracy (DEM derived from laser data and/or classical airborne photogrammetry). As expected, the results obtained without using GCPs showed a bias in the order of 5-20 m to the reference data for all three coordinates. By image matching it could be shown that the two independently derived orthoimages exhibit a very constant shift behavior. In a second step few GCPs (3-4) were used to calculate boresight alignment angles, introduced into the direct georeferencing process of each image independently. This method improved the absolute

  16. Accuracy aspects of stereo side-looking radar. [analysis of its visual perception and binocular vision

    NASA Technical Reports Server (NTRS)

    Leberl, F. W.

    1979-01-01

    The geometry of the radar stereo model and factors affecting visual radar stereo perception are reviewed. Limits to the vertical exaggeration factor of stereo radar are defined. Radar stereo model accuracies are analyzed with respect to coordinate errors caused by errors of radar sensor position and of range, and with respect to errors of coordinate differences, i.e., cross-track distances and height differences.

  17. Analysis of high accuracy, quantitative proteomics data in the MaxQB database.

    PubMed

    Schaab, Christoph; Geiger, Tamar; Stoehr, Gabriele; Cox, Juergen; Mann, Matthias

    2012-03-01

    MS-based proteomics generates rapidly increasing amounts of precise and quantitative information. Analysis of individual proteomic experiments has made great strides, but the crucial ability to compare and store information across different proteome measurements still presents many challenges. For example, it has been difficult to avoid contamination of databases with low quality peptide identifications, to control for the inflation in false positive identifications when combining data sets, and to integrate quantitative data. Although, for example, the contamination with low quality identifications has been addressed by joint analysis of deposited raw data in some public repositories, we reasoned that there should be a role for a database specifically designed for high resolution and quantitative data. Here we describe a novel database termed MaxQB that stores and displays collections of large proteomics projects and allows joint analysis and comparison. We demonstrate the analysis tools of MaxQB using proteome data of 11 different human cell lines and 28 mouse tissues. The database-wide false discovery rate is controlled by adjusting the project specific cutoff scores for the combined data sets. The 11 cell line proteomes together identify proteins expressed from more than half of all human genes. For each protein of interest, expression levels estimated by label-free quantification can be visualized across the cell lines. Similarly, the expression rank order and estimated amount of each protein within each proteome are plotted. We used MaxQB to calculate the signal reproducibility of the detected peptides for the same proteins across different proteomes. Spearman rank correlation between peptide intensity and detection probability of identified proteins was greater than 0.8 for 64% of the proteome, whereas a minority of proteins have negative correlation. This information can be used to pinpoint false protein identifications, independently of peptide database

  18. Measurement and accuracy analysis of refractive index using a specular reflectivity close to the total internal reflection

    NASA Astrophysics Data System (ADS)

    Li, Hui; Lu, Zukang; Xie, Shusen; Lin, Lei

    1998-08-01

A new method for measuring the refractive index is presented, together with an analysis of its accuracy. Its distinctive feature is that the incident light is not perpendicular to the interface but close to the critical angle of total internal reflection. Because the specular reflectivity changes sharply near the critical angle, a high measuring sensitivity is easily reached. A narrow p-polarized laser beam and a prism or a quasi-semi-cylindrical lens in contact with a sample are used in the apparatus. To achieve high accuracy, a photoelectronic receiver with a dual-channel divider is designed to compensate for instability in the laser output. One of the advantages of the method is its high accuracy: the uncertainty in the refractive index measurement is in the fourth decimal place at least. The exact direction of the incident laser beam depends on the accuracy of the result expected. Another outstanding advantage is its particularly straightforward experimental technique. The method is a promising tool for studying the response of the refractive index to subtle changes in conditions.

  19. Accuracy of bleeding scores for patients presenting with myocardial infarction: a meta-analysis of 9 studies and 13 759 patients

    PubMed Central

    D'Ascenzo, Fabrizio; Moretti, Claudio; Omedè, Pierluigi; Montefusco, Antonio; Bach, Richard G.; Alexander, Karen P.; Mehran, Roxana; Ariza-Solé, Albert; Zoccai, Giuseppe Biondi; Gaita, Fiorenzo

    2015-01-01

    Introduction Due to its negative impact on prognosis, a clear assessment of bleeding risk for patients presenting with acute coronary syndrome (ACS) remains crucial. Different risk scores have been proposed and compared, although with inconsistent results. Aim We performed a meta-analysis to evaluate the accuracy of different bleeding risk scores for ACS patients. Material and methods All studies externally validating risk scores for bleeding for patients presenting with ACS were included in the present review. Accuracy of risk scores for external validation cohorts to predict major bleeding in patients with ACS was the primary end point. Sensitivity analysis was performed according to clinical presentation (ST segment elevation myocardial infarction (STEMI) and non-ST segment elevation myocardial infarction (NSTEMI)). Results Nine studies and 13 759 patients were included. CRUSADE, ACUITY, ACTION and GRACE were the scores externally validated. The rate of in-hospital major bleeding was 7.80% (5.5–9.2), 2.05% (1.5–3.0) being related to access and 2.70% (1.7–4.0) needing transfusions. When evaluating all ACS patients, ACTION, CRUSADE and ACUITY performed similarly (AUC 0.75: 0.72–0.79; 0.71: 0.64–0.80 and 0.71: 0.63–0.77 respectively) when compared to GRACE (0.66; 0.64–0.67, all confidence intervals 95%). When appraising only STEMI patients, all the scores performed similarly, while CRUSADE was the only one externally validated for NSTEMI. For ACTION and ACUITY, accuracy increased for radial access patients, while no differences were found for CRUSADE. Conclusions ACTION, CRUSADE and ACUITY perform similarly to predict risk of bleeding in ACS patients. The CRUSADE score is the only one externally validated for NSTEMI, while accuracy of the scores increased with radial access. PMID:26677357

  20. Accuracy Analysis of Anisotropic Yield Functions based on the Root-Mean Square Error

    NASA Astrophysics Data System (ADS)

    Huh, Hoon; Lou, Yanshan; Bae, Gihyun; Lee, Changsoo

    2010-06-01

This paper evaluates the accuracy of popular anisotropic yield functions based on the root-mean-square error (RMSE) of the yield stresses and the R-values. The yield functions include the Hill48, Yld89, Yld91, Yld96, Yld2000-2d, BBC2000 and Yld2000-18p yield criteria. Two steels and five aluminum alloys are selected for the accuracy evaluation. The anisotropic coefficients in the yield functions are computed from the experimental data. After the error functions are constructed, the downhill simplex method is utilized to evaluate the parameters of all yield functions except Hill48 and Yld89. The yield stresses and the R-values at every 15° from the rolling direction (RD), and the yield stress and R-value under equibiaxial tension, are predicted from each yield function. The predicted yield stresses and R-values are then compared with the experimental data. The root-mean-square errors (RMSE) are computed to quantitatively evaluate each yield function. The RMSEs are calculated for the yield stresses and the R-values separately because the differences in yield stress are much smaller than the differences in the R-values. The RMSEs of the different yield functions are compared for each material. The Hill48 and Yld89 yield functions are the worst choices for describing the yield stress anisotropy, while the Yld91 yield function is the worst choice for modeling the R-value directionality. The Yld2000-2d and BBC2000 yield functions have the same accuracy in modeling both the yield stress anisotropy and the R-value anisotropy. The Yld2000-18p yield function is the best choice to accurately describe the yield stress and R-value directionalities of sheet metals.
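The separate RMSE computation described above can be sketched as follows; the yield-stress ratios and R-values below are invented illustrative numbers, not data from the paper:

```python
import numpy as np

def rmse(predicted, measured):
    """Root-mean-square error between model predictions and experiment."""
    p, m = np.asarray(predicted, dtype=float), np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean((p - m) ** 2)))

# Normalized yield stresses and R-values at 0, 15, ..., 90 degrees from RD
# (hypothetical values for illustration only).
sigma_exp  = [1.00, 0.99, 0.98, 0.97, 0.98, 0.99, 1.01]
sigma_pred = [1.00, 1.00, 0.97, 0.96, 0.98, 1.00, 1.00]
r_exp  = [1.8, 1.6, 1.4, 1.2, 1.4, 1.7, 2.1]
r_pred = [1.7, 1.6, 1.5, 1.3, 1.4, 1.6, 2.0]

# RMSEs are reported separately: stress errors are much smaller in magnitude
# than R-value errors, so pooling them would let one component mask the other.
rmse_sigma = rmse(sigma_pred, sigma_exp)
rmse_r = rmse(r_pred, r_exp)
```

Keeping the two RMSEs separate is what allows a function to be ranked differently for stress anisotropy and for R-value directionality, as in the Yld91 case above.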

  1. The Accuracy of Thyroid Nodule Ultrasound to Predict Thyroid Cancer: Systematic Review and Meta-Analysis

    PubMed Central

    Gionfriddo, Michael R.; Al Nofal, Alaa; Boehmer, Kasey R.; Leppin, Aaron L.; Reading, Carl; Callstrom, Matthew; Elraiyah, Tarig A.; Prokop, Larry J.; Stan, Marius N.; Murad, M. Hassan; Morris, John C.; Montori, Victor M.

    2014-01-01

Context: Significant uncertainty remains surrounding the diagnostic accuracy of sonographic features used to predict the malignant potential of thyroid nodules. Objective: The objective of the study was to summarize the available literature related to the accuracy of thyroid nodule ultrasound (US) in the prediction of thyroid cancer. Methods: We searched multiple databases and reference lists for cohort studies that enrolled adults with thyroid nodules with reported diagnostic measures of sonography. A total of 14 relevant US features were analyzed. Results: We included 31 studies between 1985 and 2012 (number of nodules studied 18 288; average size 15 mm). The frequency of thyroid cancer was 20%. The most common type of cancer was papillary thyroid cancer (84%). The US nodule feature with the highest diagnostic odds ratio for malignancy was a taller-than-wide shape [11.14 (95% confidence interval 6.6–18.9)]. Conversely, the US nodule feature with the highest diagnostic odds ratio for benign nodules was a spongiform appearance [12 (95% confidence interval 0.61–234.3)]. Heterogeneity across studies was substantial. Estimates of accuracy depended on the experience of the physician interpreting the US, the type of cancer and nodule (indeterminate), and the type of reference standard. In a threshold model, spongiform appearance and cystic nodules were the only two features that, if present, could have avoided the use of fine-needle aspiration biopsy. Conclusions: Low- to moderate-quality evidence suggests that individual ultrasound features are not accurate predictors of thyroid cancer. Two features, cystic content and spongiform appearance, however, might predict benign nodules, but this has limited applicability to clinical practice due to their infrequent occurrence. PMID:24276450
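The diagnostic odds ratio used to rank features in studies like this is computed from a 2x2 table of test result versus disease status; a minimal sketch with invented counts (not the study's data):

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP/FN) / (FP/TN): the odds of a positive finding among
    diseased subjects divided by the odds among non-diseased subjects.
    Values well above 1 indicate a discriminating feature."""
    return (tp / fn) / (fp / tn)

# Hypothetical counts for one sonographic feature vs. final pathology.
dor = diagnostic_odds_ratio(tp=90, fp=10, fn=10, tn=90)  # strongly discriminating
```

A DOR of 1 means the feature carries no diagnostic information; the very wide confidence interval for spongiform appearance above (0.61 to 234.3) spans 1, which is why it cannot be relied on alone.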

  2. Analysis of the dose calculation accuracy for IMRT in lung: a 2D approach.

    PubMed

    Dvorak, Pavel; Stock, Markus; Kroupa, Bernhard; Bogner, Joachim; Georg, Dietmar

    2007-01-01

The purpose of this study was to compare the dosimetric accuracy of IMRT plans for targets in the lung with the accuracy of standard uniform-intensity conformal radiotherapy for different dose calculation algorithms. Tests were performed utilizing a special phantom manufactured from cork and polystyrene in order to quantify the uncertainty of two commercial TPS for IMRT in the lung. Ionization and film measurements were performed at various measuring points/planes. Additionally, single-beam and uniform-intensity multiple-beam tests were performed in order to investigate deviations due to other characteristics of IMRT. Helax-TMS V6.1(A) was tested for 6, 10 and 25 MV and BrainSCAN 5.2 for 6 MV photon beams, respectively. Pencil beam (PB) with simple inhomogeneity correction and 'collapsed cone' (CC) algorithms were applied for dose calculations. However, the latter was not incorporated during optimization; hence, only post-optimization recalculation was tested. Two-dimensional dose distributions were evaluated applying the gamma index concept. Conformal plans showed the same accuracy as IMRT plans. Ionization chamber measurements detected deviations of up to 5% when a PB algorithm was used for IMRT dose calculations. Significant improvement (deviations approximately 2%) was observed when IMRT plans were recalculated with the CC algorithm, especially for the highest nominal energy. All gamma evaluations confirmed substantial improvement with the CC algorithm in 2D. While PB dose distributions showed most discrepancies in lower (<50%) and high (>90%) dose regions, the CC dose distributions deviated mainly in the high dose gradient (20-80%) region. The advantages of IMRT (conformity, intra-target dose control) should be counterbalanced against possible calculation inaccuracies for targets in the lung. As long as no superior dose calculation algorithm is involved in the iterative optimization process, it should be used with great care. When only PB algorithm with simple

  3. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    NASA Astrophysics Data System (ADS)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and polar motion (PM). In this study, a detailed comparison was made of real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method with simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The results show that the proposed method provides better accuracy at different prediction lengths.

  4. Methods in Use for Sensitivity Analysis, Uncertainty Evaluation, and Target Accuracy Assessment

    SciTech Connect

    G. Palmiotti; M. Salvatores; G. Aliberti

    2007-10-01

    Sensitivity coefficients can be used for different objectives like uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluations of the representativity of an experiment with respect to a reference design configuration. In this paper the theory, based on the adjoint approach, that is implemented in the ERANOS fast reactor code system is presented along with some unique tools and features related to specific types of problems as is the case for nuclide transmutation, reactivity loss during the cycle, decay heat, neutron source associated to fuel fabrication, and experiment representativity.

  5. PIV measurements and data accuracy analysis of flow in complex terrain

    NASA Astrophysics Data System (ADS)

    Yao, Rentai; Hao, Hongwei; Qiao, Qingdang

    2000-10-01

    In this paper velocity fields and flow visualization in complex terrain in an environmental wind tunnel have been measured using PIV. In addition, it would be useful to appraise the PIV data by comparing the PIV results with those obtained from the well- established point measurement methods, such as constant temperature anemometry (CTA) and Dantec FlowMaster, in order to verify the accuracy of PIV measurements. The results indicate that PIV is a powerful tool for velocity measurements in the environmental wind tunnel.

  6. Automation, Operation, and Data Analysis in the Cryogenic, High Accuracy, Refraction Measuring System (CHARMS)

    NASA Technical Reports Server (NTRS)

    Frey, Bradley J.; Leviton, Douglas B.

    2005-01-01

    The Cryogenic High Accuracy Refraction Measuring System (CHARMS) at NASA's Goddard Space Flight Center has been enhanced in a number of ways in the last year to allow the system to accurately collect refracted beam deviation readings automatically over a range of temperatures from 15 K to well beyond room temperature with high sampling density in both wavelength and temperature. The engineering details which make this possible are presented. The methods by which the most accurate angular measurements are made and the corresponding data reduction methods used to reduce thousands of observed angles to a handful of refractive index values are also discussed.

  7. Automation, Operation, and Data Analysis in the Cryogenic, High Accuracy, Refraction Measuring System (CHARMS)

    NASA Technical Reports Server (NTRS)

Frey, Bradley; Leviton, Douglas

    2005-01-01

The Cryogenic High Accuracy Refraction Measuring System (CHARMS) at NASA's Goddard Space Flight Center has been enhanced in a number of ways in the last year to allow the system to accurately collect refracted beam deviation readings automatically over a range of temperatures from 15 K to well beyond room temperature with high sampling density in both wavelength and temperature. The engineering details which make this possible are presented. The methods by which the most accurate angular measurements are made and the corresponding data reduction methods used to reduce thousands of observed angles to a handful of refractive index values are also discussed.

  8. Analysis Article: Accuracy of the DIDGET Glucose Meter in Children and Young Adults with Diabetes

    PubMed Central

    Kim, Sarah

    2011-01-01

    Diabetes is one of the most common chronic diseases among American children. Although studies show that intensive management, including frequent glucose testing, improves diabetes control, this is difficult to accomplish. Bayer's DIDGET® glucose meter system pairs with a popular handheld video game system and couples good blood glucose testing habits with video-game-based rewards. In this issue, Deeb and colleagues performed a study demonstrating the accuracy of the DIDGET meter, a critical asset to this novel product designed to alleviate some of the challenges of managing pediatric diabetes. PMID:22027311

  9. Accuracy of matrix-assisted laser desorption ionization-time of flight mass spectrometry for identification of clinical pathogenic fungi: a meta-analysis.

    PubMed

    Ling, Huazhi; Yuan, Zhijie; Shen, Jilu; Wang, Zhongxin; Xu, Yuanhong

    2014-07-01

Fungal infections in the clinic have become increasingly serious. In many cases, the identification of clinically relevant fungi remains time-consuming and may also be unreliable. Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS) is a newly developed diagnostic tool that is increasingly being employed to rapidly and accurately identify clinical pathogenic microorganisms. The present meta-analysis aimed to systematically evaluate the accuracy of MALDI-TOF MS for the identification of clinical pathogenic fungi. After a rigorous selection process, 33 articles, involving 38 trials and a total of 9,977 fungal isolates, were included in the meta-analysis. The random-effects pooled identification accuracy of MALDI-TOF MS increased from 0.955 (95% confidence interval [CI], 0.939 to 0.969) at the species level to 0.977 (95% CI, 0.955 to 0.993) at the genus level (P < 0.001; χ2 = 15.452). Subgroup analyses were performed at the species level for several categories, including strain, source of strain, system, system database, and modified outcomes, to calculate the accuracy and to investigate heterogeneity. These analyses revealed significant differences between the overall meta-analysis and some of the subanalyses. In parallel, significant differences in heterogeneity among different systems and among different methods for calculating the identification ratios were found by multivariate metaregression, but none of the factors, except for the moderator of outcome, was significantly associated with heterogeneity by univariate metaregression. In summary, the MALDI-TOF MS method is highly accurate for the identification of clinically pathogenic fungi; future studies should analyze the comprehensive capability of this technology for clinical diagnostic microbiology. PMID:24829234

  10. Accuracy of Matrix-Assisted Laser Desorption Ionization–Time of Flight Mass Spectrometry for Identification of Clinical Pathogenic Fungi: a Meta-Analysis

    PubMed Central

    Ling, Huazhi; Yuan, Zhijie; Shen, Jilu; Wang, Zhongxin

    2014-01-01

Fungal infections in the clinic have become increasingly serious. In many cases, the identification of clinically relevant fungi remains time-consuming and may also be unreliable. Matrix-assisted laser desorption ionization–time of flight mass spectrometry (MALDI-TOF MS) is a newly developed diagnostic tool that is increasingly being employed to rapidly and accurately identify clinical pathogenic microorganisms. The present meta-analysis aimed to systematically evaluate the accuracy of MALDI-TOF MS for the identification of clinical pathogenic fungi. After a rigorous selection process, 33 articles, involving 38 trials and a total of 9,977 fungal isolates, were included in the meta-analysis. The random-effects pooled identification accuracy of MALDI-TOF MS increased from 0.955 (95% confidence interval [CI], 0.939 to 0.969) at the species level to 0.977 (95% CI, 0.955 to 0.993) at the genus level (P < 0.001; χ2 = 15.452). Subgroup analyses were performed at the species level for several categories, including strain, source of strain, system, system database, and modified outcomes, to calculate the accuracy and to investigate heterogeneity. These analyses revealed significant differences between the overall meta-analysis and some of the subanalyses. In parallel, significant differences in heterogeneity among different systems and among different methods for calculating the identification ratios were found by multivariate metaregression, but none of the factors, except for the moderator of outcome, was significantly associated with heterogeneity by univariate metaregression. In summary, the MALDI-TOF MS method is highly accurate for the identification of clinically pathogenic fungi; future studies should analyze the comprehensive capability of this technology for clinical diagnostic microbiology. PMID:24829234