Reference-based phasing using the Haplotype Reference Consortium panel.
Loh, Po-Ru; Danecek, Petr; Palamara, Pier Francesco; Fuchsberger, Christian; Reshef, Yakir A; Finucane, Hilary K; Schoenherr, Sebastian; Forer, Lukas; McCarthy, Shane; Abecasis, Goncalo R; Durbin, Richard; Price, Alkes L
2016-11-01
Haplotype phasing is a fundamental problem in medical and population genetics. Phasing is generally performed via statistical phasing in a genotyped cohort, an approach that can yield high accuracy in very large cohorts but attains lower accuracy in smaller cohorts. Here we instead explore the paradigm of reference-based phasing. We introduce a new phasing algorithm, Eagle2, that attains high accuracy across a broad range of cohort sizes by efficiently leveraging information from large external reference panels (such as the Haplotype Reference Consortium; HRC) using a new data structure based on the positional Burrows-Wheeler transform. We demonstrate that Eagle2 attains a ∼20× speedup and ∼10% increase in accuracy compared to reference-based phasing using SHAPEIT2. On European-ancestry samples, Eagle2 with the HRC panel achieves >2× the accuracy of 1000 Genomes-based phasing. Eagle2 is open source and freely available for HRC-based phasing via the Sanger Imputation Service and the Michigan Imputation Server.
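Eagle2's key data structure is the positional Burrows-Wheeler transform (PBWT), which at each site orders the reference haplotypes by their reversed prefixes so that haplotypes sharing long matches become adjacent. The following is a minimal Python sketch of a generic PBWT prefix-array update on a toy haplotype matrix; it illustrates the data structure only and is not Eagle2's actual implementation (all array names and the toy data are hypothetical).

```python
import numpy as np

def pbwt_prefix_arrays(haps):
    """Positional prefix arrays for a binary haplotype matrix.

    haps: (n_haplotypes, n_sites) array of 0/1 alleles.
    Returns a list of permutations; before site k, haplotypes are ordered by
    their reversed prefixes haps[:, :k], so haplotypes sharing long matches
    ending at k become adjacent (the property PBWT-based phasing exploits to
    find close reference haplotypes quickly).
    """
    n, m = haps.shape
    order = np.arange(n)               # permutation before the first site
    prefix_arrays = [order.copy()]
    for k in range(m):
        zeros = [h for h in order if haps[h, k] == 0]
        ones = [h for h in order if haps[h, k] == 1]
        order = np.array(zeros + ones)  # stable partition = PBWT update
        prefix_arrays.append(order.copy())
    return prefix_arrays

# toy example: 4 reference haplotypes over 5 sites
ref = np.array([[0, 1, 0, 1, 1],
                [0, 1, 0, 0, 1],
                [1, 0, 1, 0, 0],
                [0, 1, 1, 0, 1]])
for k, a in enumerate(pbwt_prefix_arrays(ref)):
    print(k, a)
```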
Badke, Yvonne M; Bates, Ronald O; Ernst, Catherine W; Fix, Justin; Steibel, Juan P
2014-04-16
Genomic selection has the potential to increase genetic progress. Genotype imputation of high-density single-nucleotide polymorphism (SNP) genotypes can improve the cost efficiency of genomic breeding value (GEBV) prediction for pig breeding. Consequently, the objectives of this work were to: (1) estimate accuracy of genomic evaluation and GEBV for three traits in a Yorkshire population and (2) quantify the loss of accuracy of genomic evaluation and GEBV when genotypes were imputed under two scenarios: a high-cost, high-accuracy scenario in which only selection candidates were imputed from a low-density platform and a low-cost, low-accuracy scenario in which all animals were imputed using a small reference panel of haplotypes. Phenotypes and genotypes obtained with the PorcineSNP60 BeadChip were available for 983 Yorkshire boars. Genotypes of selection candidates were masked and imputed using tagSNP in the GeneSeek Genomic Profiler (10K). Imputation was performed with BEAGLE using 128 or 1800 haplotypes as reference panels. GEBV were obtained through an animal-centric ridge regression model using de-regressed breeding values as response variables. Accuracy of genomic evaluation was estimated as the correlation between estimated breeding values and GEBV in a 10-fold cross validation design. Accuracy of genomic evaluation using observed genotypes was high for all traits (0.65-0.68). Using genotypes imputed from a large reference panel (accuracy: R(2) = 0.95) for genomic evaluation did not significantly decrease accuracy, whereas a scenario with genotypes imputed from a small reference panel (R(2) = 0.88) did show a significant decrease in accuracy. Genomic evaluation based on imputed genotypes in selection candidates can be implemented at a fraction of the cost of a genomic evaluation using observed genotypes and still yield virtually the same accuracy. On the other hand, using a very small reference panel of haplotypes to impute training animals and candidates for selection results in lower accuracy of genomic evaluation.
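The abstract above estimates accuracy of genomic evaluation as the correlation between estimated breeding values and GEBV in a 10-fold cross-validation. Below is a minimal sketch of that evaluation loop, using scikit-learn's Ridge as a stand-in for the animal-centric ridge regression model described in the paper; the toy genotype matrix and de-regressed EBVs are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def cv_accuracy(genotypes, dEBV, n_splits=10, alpha=1.0, seed=1):
    """Correlation between (de-regressed) EBVs and GEBVs in k-fold CV."""
    preds = np.zeros_like(dEBV, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(genotypes):
        model = Ridge(alpha=alpha).fit(genotypes[train], dEBV[train])
        preds[test] = model.predict(genotypes[test])
    return np.corrcoef(dEBV, preds)[0, 1]

# toy data standing in for 0/1/2 SNP genotypes and de-regressed EBVs
rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(200, 500)).astype(float)
y = G[:, :10].sum(axis=1) + rng.normal(scale=3.0, size=200)
print(round(cv_accuracy(G, y), 3))
```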
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
NASA Astrophysics Data System (ADS)
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy is mostly tested and evaluated using a set of testing points with the same accuracy and reliability. However, in areas where field measurement is difficult and high-accuracy reference data are scarce, it is difficult to obtain such a set of testing points, and therefore difficult to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and the expansion of its services. This paper therefore proposes a new method for testing the horizontal accuracy of orthophoto images that uses testing points of differing accuracy and reliability, sourced from both high-accuracy reference data and field measurement. The new method makes horizontal accuracy testing of orthophoto images feasible in difficult areas and provides a basis for delivering reliable orthophoto images to users.
Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision
Ender, Andreas; Mehl, Albert
2014-01-01
Reference scanners are used in dental medicine to verify many procedures. The main interest is in verifying impression methods, as they serve as the basis for dental restorations. The current limitation of many reference scanners is their lack of accuracy when scanning large objects such as full dental arches, or their limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus variation scanning technique, was evaluated with regard to highest local and general accuracy. A specific scanning protocol was tested to scan original tooth surfaces from dental impressions. Different model materials were also verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in the case of full arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects such as single tooth surfaces can be scanned with even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many fields of dental research. The different magnification levels, combined with high local and general accuracy, can be used to assess changes from single teeth or restorations up to the full arch. PMID:24836007
USDA-ARS's Scientific Manuscript database
A detailed sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1995 to 2008, fro...
Accuracy of Referring Provider and Endoscopist Impressions of Colonoscopy Indication.
Naveed, Mariam; Clary, Meredith; Ahn, Chul; Kubiliun, Nisa; Agrawal, Deepak; Cryer, Byron; Murphy, Caitlin; Singal, Amit G
2017-07-01
Background: Referring provider and endoscopist impressions of colonoscopy indication are used for clinical care, reimbursement, and quality reporting decisions; however, the accuracy of these impressions is unknown. This study assessed the sensitivity, specificity, positive and negative predictive value, and overall accuracy of methods to classify colonoscopy indication, including referring provider impression, endoscopist impression, and administrative algorithm compared with gold standard chart review. Methods: We randomly sampled 400 patients undergoing a colonoscopy at a Veterans Affairs health system between January 2010 and December 2010. Referring provider and endoscopist impressions of colonoscopy indication were compared with gold-standard chart review. Indications were classified into 4 mutually exclusive categories: diagnostic, surveillance, high-risk screening, or average-risk screening. Results: Of 400 colonoscopies, 26% were performed for average-risk screening, 7% for high-risk screening, 26% for surveillance, and 41% for diagnostic indications. Accuracy of referring provider and endoscopist impressions of colonoscopy indication were 87% and 84%, respectively, which were significantly higher than that of the administrative algorithm (45%; P <.001 for both). There was substantial agreement between endoscopist and referring provider impressions (κ=0.76). All 3 methods showed high sensitivity (>90%) for determining screening (vs nonscreening) indication, but specificity of the administrative algorithm was lower (40.3%) compared with referring provider (93.7%) and endoscopist (84.0%) impressions. Accuracy of endoscopist, but not referring provider, impression was lower in patients with a family history of colon cancer than in those without (65% vs 84%; P =.001). Conclusions: Referring provider and endoscopist impressions of colonoscopy indication are both accurate and may be useful data to incorporate into algorithms classifying colonoscopy indication. Copyright © 2017 by the National Comprehensive Cancer Network.
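The metrics reported above (sensitivity, specificity, positive and negative predictive value, overall accuracy, and the kappa statistic for agreement) all derive from a 2×2 table. A small self-contained sketch with hypothetical counts, not data from the study:

```python
def binary_test_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, accuracy and kappa from a 2x2 table.

    Rows = test/impression (positive, negative); columns = reference standard.
    """
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / n
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_obs = acc
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                accuracy=acc, kappa=kappa)

# hypothetical counts for a "screening vs non-screening" classification
print(binary_test_metrics(tp=120, fp=8, fn=10, tn=262))
```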
Mitt, Mario; Kals, Mart; Pärn, Kalle; Gabriel, Stacey B; Lander, Eric S; Palotie, Aarno; Ripatti, Samuli; Morris, Andrew P; Metspalu, Andres; Esko, Tõnu; Mägi, Reedik; Palta, Priit
2017-06-01
Genetic imputation is a cost-efficient way to improve the power and resolution of genome-wide association (GWA) studies. Current publicly accessible imputation reference panels accurately predict genotypes for common variants with minor allele frequency (MAF)≥5% and low-frequency variants (0.5≤MAF<5%) across diverse populations, but the imputation of rare variation (MAF<0.5%) is still rather limited. In the current study, we compare the imputation accuracy achieved with reference panels from diverse populations against that achieved with a population-specific, high-coverage (30×) whole-genome sequencing (WGS) based reference panel comprising 2244 Estonian individuals (0.25% of adult Estonians). Although the Estonian-specific panel contains fewer haplotypes and variants, the imputation confidence and accuracy of imputed low-frequency and rare variants were significantly higher. The results indicate the utility of population-specific reference panels for human genetic studies.
Bouwman, Aniek C; Veerkamp, Roel F
2014-10-03
The aim of this study was to determine the consequences of splitting sequencing effort over multiple breeds for imputation accuracy from a high-density SNP chip towards whole-genome sequence. Such information would assist, for instance, numerically smaller cattle breeds, but also pig and chicken breeders, who have to choose wisely how to spend their sequencing efforts over all the breeds or lines they evaluate. Sequence data from cattle breeds was used, because there are currently relatively many individuals from several breeds sequenced within the 1,000 Bull Genomes project. The advantage of whole-genome sequence data is that it carries the causal mutations, but the question is whether it is possible to impute the causal variants accurately. This study therefore focussed on imputation accuracy of variants with low minor allele frequency and breed specific variants. Imputation accuracy was assessed for chromosomes 1 and 29 as the correlation between observed and imputed genotypes. For chromosome 1, the average imputation accuracy was 0.70 with a reference population of 20 Holstein, and increased to 0.83 when the reference population was increased by including 3 other dairy breeds with 20 animals each. When the same number of animals from the Holstein breed was added, the accuracy improved to 0.88, while adding the 3 other breeds to the reference population of 80 Holstein improved the average imputation accuracy marginally to 0.89. For chromosome 29, the average imputation accuracy was lower. Some variants benefitted from the inclusion of other breeds in the reference population, initially determined by the MAF of the variant in each breed, but even Holstein specific variants did gain imputation accuracy from the multi-breed reference population. This study shows that splitting sequencing effort over multiple breeds and combining the reference populations is a good strategy for imputation from high-density SNP panels towards whole-genome sequence when reference populations are small and sequencing effort is limiting. When sequencing effort is limiting and interest lies in multiple breeds or lines, this approach provides imputation for each breed.
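In this study imputation accuracy is assessed as the correlation between observed and imputed genotypes. A minimal sketch of that per-variant correlation on hypothetical 0/1/2 genotype matrices (the imputation itself would already have been performed by external software):

```python
import numpy as np

def per_variant_imputation_accuracy(observed, imputed):
    """Pearson correlation between observed and imputed genotype dosages,
    computed per variant (column). Monomorphic variants return NaN."""
    obs = np.asarray(observed, dtype=float)
    imp = np.asarray(imputed, dtype=float)
    acc = np.full(obs.shape[1], np.nan)
    for j in range(obs.shape[1]):
        if np.std(obs[:, j]) > 0 and np.std(imp[:, j]) > 0:
            acc[j] = np.corrcoef(obs[:, j], imp[:, j])[0, 1]
    return acc

# toy example: 0/1/2 genotypes for 5 animals x 3 variants (hypothetical)
obs = np.array([[0, 1, 2], [1, 1, 0], [2, 0, 1], [0, 2, 2], [1, 0, 1]])
imp = np.array([[0, 1, 2], [1, 1, 1], [2, 0, 1], [0, 2, 2], [1, 1, 1]])
print(per_variant_imputation_accuracy(obs, imp).round(2))
```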
Shaw, Patricia; Zhang, Vivien; Metallinos-Katsaras, Elizabeth
2009-02-01
The objective of this study was to examine the quantity and accuracy of dietary supplement (DS) information through magazines with high adolescent readership. Eight (8) magazines (3 teen and 5 adult with high teen readership) were selected. A content analysis for DS was conducted on advertisements and editorials (i.e., articles, advice columns, and bulletins). Noted claims/cautions regarding DS were evaluated for accuracy using Medlineplus.gov and Naturaldatabase.com. Claims for dietary supplements with three or more types of ingredients and those in advertisements were not evaluated. Advertisements were evaluated with respect to size, referenced research, testimonials, and Dietary Supplement Health and Education Act of 1994 (DSHEA) warning visibility. Eighty-eight (88) issues from eight magazines yielded 238 DS references. Fifty (50) issues from five magazines contained no DS reference. Among teen magazines, seven DS references were found: five in the editorials and two in advertisements. In adult magazines, 231 DS references were found: 139 in editorials and 92 in advertisements. Of the 88 claims evaluated, 15% were accurate, 23% were inconclusive, 3% were inaccurate, 5% were partially accurate, and 55% were unsubstantiated (i.e., not listed in reference databases). Of the 94 DS evaluated in advertisements, 43% were full page or more, 79% did not have a DSHEA warning visible, 46% referred to research, and 32% used testimonials. Teen magazines contain few references to DS, none accurate. Adult magazines that have a high teen readership contain a substantial amount of DS information with questionable accuracy, raising concerns that this information may increase the chances of inappropriate DS use by adolescents, thereby increasing the potential for unexpected effects or possible harm.
Thematic accuracy assessment of the 2011 National Land Cover Database (NLCD)
Wickham, James; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Sorenson, Daniel G.; Granneman, Brian J.; Poss, Richard V.; Baer, Lori Anne
2017-01-01
Accuracy assessment is a standard protocol of National Land Cover Database (NLCD) mapping. Here we report agreement statistics between map and reference labels for NLCD 2011, which includes land cover for ca. 2001, ca. 2006, and ca. 2011. The two main objectives were assessment of agreement between map and reference labels for the three, single-date NLCD land cover products at Level II and Level I of the classification hierarchy, and agreement for 17 land cover change reporting themes based on Level I classes (e.g., forest loss; forest gain; forest, no change) for three change periods (2001–2006, 2006–2011, and 2001–2011). The single-date overall accuracies were 82%, 83%, and 83% at Level II and 88%, 89%, and 89% at Level I for 2011, 2006, and 2001, respectively. Many class-specific user's accuracies met or exceeded a previously established nominal accuracy benchmark of 85%. Overall accuracies for 2006 and 2001 land cover components of NLCD 2011 were approximately 4% higher (at Level II and Level I) than the overall accuracies for the same components of NLCD 2006. The high Level I overall, user's, and producer's accuracies for the single-date eras in NLCD 2011 did not translate into high class-specific user's and producer's accuracies for many of the 17 change reporting themes. User's accuracies were high for the no change reporting themes, commonly exceeding 85%, but were typically much lower for the reporting themes that represented change. Only forest loss, forest gain, and urban gain had user's accuracies that exceeded 70%. Lower user's accuracies for the other change reporting themes may be attributable to the difficulty in determining the context of grass (e.g., open urban, grassland, agriculture) and between the components of the forest-shrubland-grassland gradient at either the mapping phase, reference label assignment phase, or both. NLCD 2011 user's accuracies for forest loss, forest gain, and urban gain compare favorably with results from other land cover change accuracy assessments.
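The overall, user's, and producer's accuracies reported above are standard summaries of an error (confusion) matrix: overall accuracy is the trace divided by the total, user's accuracy is computed from map-class row totals, and producer's accuracy from reference-class column totals. A short sketch with a hypothetical three-class matrix; the row/column convention is an assumption stated in the comments:

```python
import numpy as np

def map_accuracy_summary(error_matrix):
    """Overall, user's and producer's accuracies from an error matrix.

    Convention assumed here: rows = map classes, columns = reference classes.
    """
    m = np.asarray(error_matrix, dtype=float)
    overall = np.trace(m) / m.sum()
    users = np.diag(m) / m.sum(axis=1)      # per map class (commission)
    producers = np.diag(m) / m.sum(axis=0)  # per reference class (omission)
    return overall, users, producers

# hypothetical 3-class example (e.g., forest, grass, urban)
em = [[80, 5, 3],
      [6, 70, 10],
      [2, 8, 66]]
overall, users, producers = map_accuracy_summary(em)
print(round(overall, 3), users.round(3), producers.round(3))
```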
Lin, You-Yu; Hsieh, Chia-Hung; Chen, Jiun-Hong; Lu, Xuemei; Kao, Jia-Horng; Chen, Pei-Jer; Chen, Ding-Shinn; Wang, Hurng-Yi
2017-04-26
The accuracy of metagenomic assembly is usually compromised by high levels of polymorphism due to divergent reads from the same genomic region recognized as different loci when sequenced and assembled together. A viral quasispecies is a group of abundant and diversified genetically related viruses found in a single carrier. Current mainstream assembly methods, such as Velvet and SOAPdenovo, were not originally intended for the assembly of such metagenomic data, and therefore new methods are needed to provide accurate and informative assembly results for metagenomic data. In this study, we present a hybrid method for assembling highly polymorphic data combining the partial de novo-reference assembly (PDR) strategy and the BLAST-based assembly pipeline (BBAP). The PDR strategy generates in situ reference sequences through de novo assembly of a randomly extracted partial data set which is subsequently used for the reference assembly for the full data set. BBAP employs a greedy algorithm to assemble polymorphic reads. We used 12 hepatitis B virus quasispecies NGS data sets from a previous study to assess and compare the performance of both PDR and BBAP. Analyses suggest the high polymorphism of a full metagenomic data set leads to fragmented de novo assembly results, whereas the biased or limited representation of external reference sequences included fewer reads into the assembly with lower assembly accuracy and variation sensitivity. In comparison, the PDR-generated in situ reference sequence incorporated more reads into the final PDR assembly of the full metagenomic data set along with greater accuracy and higher variation sensitivity. BBAP assembly results also suggest higher assembly efficiency and accuracy compared to other assembly methods. Additionally, BBAP assembly recovered HBV structural variants that were not observed amongst assembly results of other methods. Together, PDR/BBAP assembly results were significantly better than those of the other compared methods. Both PDR and BBAP independently increased the assembly efficiency and accuracy of highly polymorphic data, and assembly performances were further improved when used together. BBAP also provides nucleotide frequency information. Together, PDR and BBAP provide powerful tools for metagenomic data studies.
Iterative Correction of Reference Nucleotides (iCORN) using second generation sequencing technology.
Otto, Thomas D; Sanders, Mandy; Berriman, Matthew; Newbold, Chris
2010-07-15
The accuracy of reference genomes is important for downstream analysis but a low error rate requires expensive manual interrogation of the sequence. Here, we describe a novel algorithm (Iterative Correction of Reference Nucleotides) that iteratively aligns deep coverage of short sequencing reads to correct errors in reference genome sequences and evaluate their accuracy. Using Plasmodium falciparum (81% A + T content) as an extreme example, we show that the algorithm is highly accurate and corrects over 2000 errors in the reference sequence. We give examples of its application to numerous other eukaryotic and prokaryotic genomes and suggest additional applications. The software is available at http://icorn.sourceforge.net
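iCORN iteratively aligns reads to the reference and corrects discrepancies. As a rough illustration of the underlying idea only (not the iCORN algorithm, which also handles indels and re-evaluates corrections on each iteration), here is a naive single-pass, majority-vote correction over a precomputed pileup; all names and thresholds are hypothetical.

```python
from collections import Counter

def correct_reference(reference, pileups, min_depth=5, min_frac=0.8):
    """One pass of a naive pileup-based base correction: replace a reference
    base when a clear majority of aligned read bases disagree. Substitutions
    only; a sketch of the idea, not the iCORN implementation."""
    corrected = list(reference)
    for pos, bases in pileups.items():
        counts = Counter(bases)
        base, n = counts.most_common(1)[0]
        if len(bases) >= min_depth and n / len(bases) >= min_frac and base != reference[pos]:
            corrected[pos] = base
    return "".join(corrected)

# toy example: reads strongly disagree with the reference at position 3
ref = "ACGTACGT"
pile = {3: ["A", "A", "A", "A", "A", "T"], 5: ["C", "C", "G"]}
print(correct_reference(ref, pile))   # position 5 is left unchanged (low depth)
```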
Certified ion implantation fluence by high accuracy RBS.
Colaux, Julien L; Jeynes, Chris; Heasman, Keith C; Gwilliam, Russell M
2015-05-07
From measurements over the last two years we have demonstrated that the charge collection system based on Faraday cups can robustly give near-1% absolute implantation fluence accuracy for our electrostatically scanned 200 kV Danfysik ion implanter, using four-point-probe mapping with a demonstrated accuracy of 2%, and accurate Rutherford backscattering spectrometry (RBS) of test implants from our quality assurance programme. The RBS is traceable to the certified reference material IRMM-ERM-EG001/BAM-L001, and involves convenient calibrations both of the electronic gain of the spectrometry system (at about 0.1% accuracy) and of the RBS beam energy (at 0.06% accuracy). We demonstrate that accurate RBS is a definitive method to determine quantity of material. It is therefore useful for certifying high quality reference standards, and is also extensible to other kinds of samples such as thin self-supporting films of pure elements. The more powerful technique of Total-IBA may inherit the accuracy of RBS.
Jiang, Jie; Yu, Wenbo; Zhang, Guangjun
2017-01-01
Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). Requirements for an accuracy assessment of an INS in a real work environment are exceedingly urgent because of enormous differences between real work and laboratory test environments. An attitude accuracy assessment of an INS based on the intensified high dynamic star tracker (IHDST) is particularly suitable for a real complex dynamic environment. However, the coupled systematic coordinate errors of an INS and the IHDST severely decrease the attitude assessment accuracy of an INS. Given that, a high-accuracy decoupling estimation method of the above systematic coordinate errors based on the constrained least squares (CLS) method is proposed in this paper. The reference frame of the IHDST is firstly converted to be consistent with that of the INS because their reference frames are completely different. Thereafter, the decoupling estimation model of the systematic coordinate errors is established and the CLS-based optimization method is utilized to estimate errors accurately. After compensating for error, the attitude accuracy of an INS can be assessed based on IHDST accurately. Both simulated experiments and real flight experiments of aircraft are conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for the attitude accuracy assessment of an INS in a real work environment. PMID:28991179
NASA Astrophysics Data System (ADS)
Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.
2016-06-01
For almost two decades mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented which show that the post-processed Centerpoint RTX solution agrees with the DGNSS solution to better than 2.9 cm RMSE Horizontal and 5.5 cm RMSE Vertical. Such accuracies are sufficient to meet the requirements for a majority of airborne mapping applications.
Angheben, Andrea; Staffolani, Silvia; Anselmi, Mariella; Tais, Stefano; Degani, Monica; Gobbi, Federico; Buonfrate, Dora; Gobbo, Maria; Bisoffi, Zeno
2017-11-01
We analyzed the accuracy of Chagas Quick Test®, a rapid diagnostic test, for the diagnosis of chronic Chagas disease through a retrospective study on a cohort of 669 patients consecutively examined at a single reference center in Italy, during a 7-year period. We observed high concordance with the serological reference standard but low accuracy for screening purposes (sensitivity/specificity: 82.8%/98.7%) at least in our nonendemic context.
Ender, Andreas; Mehl, Albert
2015-01-01
To investigate the accuracy of conventional and digital impression methods used to obtain full-arch impressions by using an in-vitro reference model. Eight different conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; and irreversible hydrocolloid, ALG) and digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; and Lava COS, LAV) full-arch impressions were obtained from a reference model with a known morphology, using a highly accurate reference scanner. The impressions obtained were then compared with the original geometry of the reference model and within each test group. A point-to-point measurement of the surface of the model using the signed nearest neighbour method resulted in a mean (10%-90%)/2 percentile value for the difference between the impression and original model (trueness) as well as the difference between impressions within a test group (precision). Trueness values ranged from 11.5 μm (VSE) to 60.2 μm (POE), and precision ranged from 12.3 μm (VSE) to 66.7 μm (POE). Among the test groups, VSE, VSES, and CER showed the highest trueness and precision. The deviation pattern varied with the impression method. Conventional impressions showed high accuracy across the full dental arch in all groups, except POE and ALG. Conventional and digital impression methods show differences regarding full-arch accuracy. Digital impression systems reveal higher local deviations of the full-arch model. Digital intraoral impression systems do not show superior accuracy compared to highly accurate conventional impression techniques. However, they provide excellent clinical results within their indications applying the correct scanning technique.
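The accuracy figures above summarize signed nearest-neighbour point-to-point deviations with a (10%-90%)/2 percentile value, i.e., half the spread between the 10th and 90th percentiles of the deviations. A minimal sketch of that summary statistic, assuming the alignment and distance computation have already produced an array of signed deviations (the toy data are hypothetical):

```python
import numpy as np

def percentile_1090_half(signed_deviations_um):
    """(90th percentile - 10th percentile) / 2 of signed point-to-point
    deviations, the summary used for trueness/precision in the abstract."""
    d = np.asarray(signed_deviations_um, dtype=float)
    p10, p90 = np.percentile(d, [10, 90])
    return (p90 - p10) / 2

# hypothetical signed deviations (micrometres) between a scan and the reference
rng = np.random.default_rng(2)
dev = rng.normal(loc=0.0, scale=15.0, size=10_000)
print(round(percentile_1090_half(dev), 1))
```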
NASA Astrophysics Data System (ADS)
Cao, C.; Lee, X.; Xu, J.
2017-12-01
Unmanned Aerial Vehicles (UAVs) or drones have been widely used in environmental, ecological and engineering applications in recent years. These applications require assessment of positional and dimensional accuracy. In this study, positional accuracy refers to the accuracy of the latitudinal and longitudinal coordinates of locations on the mosaicked image in reference to the coordinates of the same locations measured by a Global Positioning System (GPS) in a ground survey, and dimensional accuracy refers to length and height of a ground target. Here, we investigate the effects of the number of Ground Control Points (GCPs) and the accuracy of the GPS used to measure the GCPs on positional and dimensional accuracy of a drone 3D model. Results show that using on-board GPS and a hand-held GPS produces a positional accuracy on the order of 2-9 meters. In comparison, using a differential GPS with high accuracy (30 cm) improves the positional accuracy of the drone model by about 40%. Increasing the number of GCPs can compensate for the uncertainty brought by the GPS equipment with low accuracy. In terms of the dimensional accuracy of the drone model, even with the use of a low resolution GPS onboard the vehicle, the mean absolute errors are only 0.04 m for height and 0.10 m for length, which are well suited for some applications in precision agriculture and in land survey studies.
Clinical accuracy of point-of-care urine culture in general practice.
Holm, Anne; Cordoba, Gloria; Sørensen, Tina Møller; Jessen, Lisbeth Rem; Frimodt-Møller, Niels; Siersma, Volkert; Bjerrum, Lars
2017-06-01
To assess the clinical accuracy (sensitivity (SEN), specificity (SPE), positive predictive value and negative predictive value) of two point-of-care (POC) urine culture tests for the identification of urinary tract infection (UTI) in general practice. Prospective diagnostic accuracy study comparing two index tests (Flexicult™ SSI-Urinary Kit or ID Flexicult™) with a reference standard (urine culture performed in the microbiological department). General practice in the Copenhagen area. Adult female patients consulting their general practitioner with suspected uncomplicated, symptomatic UTI. (1) Overall accuracy of POC urine culture in general practice. (2) Individual accuracy of each of the two POC tests in this study. (3) Accuracy of POC urine culture in general practice with enterococci excluded, since enterococci are known to multiply in boric acid used for transportation for the reference standard. (4) Accuracy based on expert reading of photographs of POC urine cultures performed in general practice. Standard culture performed in the microbiological department was used as reference standard for all four measures. Twenty general practices recruited 341 patients with suspected uncomplicated UTI. The overall agreement between index test and reference was 0.76 (CI: 0.71-0.80), SEN 0.88 (CI: 0.83-0.92) and SPE 0.55 (CI: 0.46-0.64). The two POC tests produced similar results individually. Overall agreement with enterococci excluded was 0.82 (0.77-0.86) and agreement between expert readings of photographs and reference results was 0.81 (CI: 0.76-0.85). POC culture used in general practice has high SEN but low SPE. Low SPE could be due to both misinterpretation in general practice and an imperfect reference standard. Registration number: ClinicalTrials.gov NCT02323087.
Nedelcu, R; Olsson, P; Nyström, I; Rydén, J; Thor, A
2018-02-01
To evaluate a novel methodology using industrial scanners as a reference, and assess in vivo accuracy of 3 intraoral scanners (IOS) and conventional impressions. Further, to evaluate IOS precision in vivo. Four reference-bodies were bonded to the buccal surfaces of upper premolars and incisors in five subjects. After three reference-scans, ATOS Core 80 (ATOS), subjects were scanned three times with three IOS systems: 3M True Definition (3M), CEREC Omnicam (OMNI) and Trios 3 (TRIOS). One conventional impression (IMPR) was taken, 3M Impregum Penta Soft, and poured models were digitized with laboratory scanner 3shape D1000 (D1000). Best-fit alignment of reference-bodies and 3D Compare Analysis was performed. Precision of ATOS and D1000 was assessed for quantitative evaluation and comparison. Accuracy of IOS and IMPR were analyzed using ATOS as reference. Precision of IOS was evaluated through intra-system comparison. Precision of ATOS reference scanner (mean 0.6 μm) and D1000 (mean 0.5 μm) was high. Pairwise multiple comparisons of reference-bodies located in different tooth positions displayed a statistically significant difference of accuracy between two scanner-groups: 3M and TRIOS, over OMNI (p value range 0.0001 to 0.0006). IMPR did not show any statistically significant difference to IOS. However, deviations of IOS and IMPR were within a similar magnitude. No statistical difference was found for IOS precision. The methodology can be used for assessing accuracy of IOS and IMPR in vivo in up to five units bilaterally from midline. 3M and TRIOS had a higher accuracy than OMNI. IMPR overlapped both groups. Intraoral scanners can be used as a replacement for conventional impressions when restoring up to ten units without extended edentulous spans. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Celestial Reference Frames at Multiple Radio Wavelengths
NASA Technical Reports Server (NTRS)
Jacobs, Christopher S.
2012-01-01
In 1997 the IAU adopted the International Celestial Reference Frame (ICRF) built from S/X VLBI data. In response to IAU resolutions encouraging the extension of the ICRF to additional frequency bands, VLBI frames have been made at 24, 32, and 43 gigahertz. Meanwhile, the 8.4 gigahertz work has been greatly improved with the 2009 release of the ICRF-2. This paper discusses the motivations for extending the ICRF to these higher radio bands. Results to date will be summarized including evidence that the high frequency frames are rapidly approaching the accuracy of the 8.4 gigahertz ICRF-2. We discuss current limiting errors and prospects for the future accuracy of radio reference frames. We note that comparison of multiple radio frames is characterizing the frequency dependent systematic noise floor from extended source morphology and core shift. Finally, given Gaia's potential for high accuracy optical astrometry, we have simulated the precision of a radio-optical frame tie to be approximately 10-15 microarcseconds (1-sigma, i.e., 1 standard deviation, per component).
Reference Accuracy among Research Articles Published in "Research on Social Work Practice"
ERIC Educational Resources Information Center
Wilks, Scott E.; Geiger, Jennifer R.; Bates, Samantha M.; Wright, Amy L.
2017-01-01
Objective: The objective was to examine reference errors in research articles published in Research on Social Work Practice. High rates of reference errors in other top social work journals have been noted in previous studies. Methods: Via a sampling frame of 22,177 total references among 464 research articles published in the previous decade, a…
Accuracy assessment of NOAA's daily reference evapotranspiration maps for the Texas High Plains
USDA-ARS's Scientific Manuscript database
The National Oceanic and Atmospheric Administration (NOAA) provides daily reference ET for the continental U.S. using climatic data from North American Land Data Assimilation System (NLDAS). This data provides large scale spatial representation for reference ET, which is essential for regional scal...
Thematic accuracy of the National Land Cover Database (NLCD) 2001 land cover for Alaska
Selkowitz, D.J.; Stehman, S.V.
2011-01-01
The National Land Cover Database (NLCD) 2001 Alaska land cover classification is the first 30-m resolution land cover product available covering the entire state of Alaska. The accuracy assessment of the NLCD 2001 Alaska land cover classification employed a geographically stratified three-stage sampling design to select the reference sample of pixels. Reference land cover class labels were determined via fixed wing aircraft, as the high resolution imagery used for determining the reference land cover classification in the conterminous U.S. was not available for most of Alaska. Overall thematic accuracy for the Alaska NLCD was 76.2% (s.e. 2.8%) at Level II (12 classes evaluated) and 83.9% (s.e. 2.1%) at Level I (6 classes evaluated) when agreement was defined as a match between the map class and either the primary or alternate reference class label. When agreement was defined as a match between the map class and primary reference label only, overall accuracy was 59.4% at Level II and 69.3% at Level I. The majority of classification errors occurred at Level I of the classification hierarchy (i.e., misclassifications were generally to a different Level I class, not to a Level II class within the same Level I class). Classification accuracy was higher for more abundant land cover classes and for pixels located in the interior of homogeneous land cover patches. © 2011.
Grossi, D A; Brito, L F; Jafarikia, M; Schenkel, F S; Feng, Z
2018-04-30
The uptake of genomic selection (GS) by the swine industry is still limited by the costs of genotyping. A feasible alternative to overcome this challenge is to genotype animals using an affordable low-density (LD) single nucleotide polymorphism (SNP) chip panel followed by accurate imputation to a high-density panel. Therefore, the main objective of this study was to screen incremental densities of LD panels in order to systematically identify one that balances the tradeoffs among imputation accuracy, prediction accuracy of genomic estimated breeding values (GEBVs), and genotype density (directly associated with genotyping costs). Genotypes using the Illumina Porcine60K BeadChip were available for 1378 Duroc (DU), 2361 Landrace (LA) and 3192 Yorkshire (YO) pigs. In addition, pseudo-phenotypes (de-regressed estimated breeding values) for five economically important traits were provided for the analysis. The reference population for genotyping imputation consisted of 931 DU, 1631 LA and 2103 YO animals and the remaining individuals were included in the validation population of each breed. A LD panel of 3000 evenly spaced SNPs (LD3K) yielded high imputation accuracy rates: 93.78% (DU), 97.07% (LA) and 97.00% (YO) and high correlations (>0.97) between the predicted GEBVs using the actual 60 K SNP genotypes and the imputed 60 K SNP genotypes for all traits and breeds. The imputation accuracy was influenced by the reference population size as well as the amount of parental genotype information available in the reference population. However, parental genotype information became less important when the LD panel had at least 3000 SNPs. The correlation of the GEBVs directly increased with an increase in imputation accuracy. When genotype information for both parents was available, a panel of 300 SNPs (imputed to 60 K) yielded GEBV predictions highly correlated (⩾0.90) with genomic predictions obtained based on the true 60 K panel, for all traits and breeds. For a small reference population with no parents in the reference population, the use of a panel at least as dense as the LD3K is recommended; when both parents are in the reference population, a panel as small as the LD300 might be a feasible option. These findings are of great importance for the development of LD panels for swine in order to reduce genotyping costs, increase the uptake of GS and, therefore, optimize the profitability of the swine industry.
NASA Astrophysics Data System (ADS)
Green, K. N.; van Alstine, R. L.
This paper presents the current performance levels of the SDG-5 gyro, a high performance two-axis dynamically tuned gyro, and the DRIRU II redundant inertial reference unit relating to stabilization and pointing applications. Also presented is a discussion of a product improvement program aimed at further noise reductions to meet the demanding requirements of future space defense applications.
Bailey, Timothy S; Klaff, Leslie J; Wallace, Jane F; Greene, Carmine; Pardo, Scott; Harrison, Bern; Simmons, David A
2016-07-01
As blood glucose monitoring system (BGMS) accuracy is based on comparison of BGMS and laboratory reference glucose analyzer results, reference instrument accuracy is important to discriminate small differences between BGMS and reference glucose analyzer results. Here, we demonstrate the important role of reference glucose analyzer accuracy in BGMS accuracy evaluations. Two clinical studies assessed the performance of a new BGMS, using different reference instrument procedures. BGMS and YSI analyzer results were compared for fingertip blood that was obtained by untrained subjects' self-testing and study staff testing, respectively. YSI analyzer accuracy was monitored using traceable serum controls. In study 1 (N = 136), 94.1% of BGMS results were within International Organization for Standardization (ISO) 15197:2013 accuracy criteria; YSI analyzer serum control results showed a negative bias (-0.64% to -2.48%) at the first site and a positive bias (3.36% to 6.91%) at the other site. In study 2 (N = 329), 97.8% of BGMS results were within accuracy criteria; serum controls showed minimal bias (<0.92%) at both sites. These findings suggest that the ability to demonstrate that a BGMS meets accuracy guidelines is influenced by reference instrument accuracy. © 2016 Diabetes Technology Society.
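ISO 15197:2013 system accuracy is commonly summarized as the percentage of meter results falling within ±15 mg/dL of the reference when the reference is below 100 mg/dL, and within ±15% at or above 100 mg/dL. A small sketch of that within-criteria percentage on hypothetical paired readings; the thresholds are the commonly quoted summary of the standard, not figures taken from this paper:

```python
import numpy as np

def within_iso15197_2013(bgms, reference):
    """Fraction of BGMS results meeting ISO 15197:2013 system accuracy limits
    (as summarized here: within +/-15 mg/dL of the reference when the
    reference is < 100 mg/dL, within +/-15% otherwise)."""
    bgms = np.asarray(bgms, dtype=float)
    ref = np.asarray(reference, dtype=float)
    low = ref < 100.0
    ok = np.where(low, np.abs(bgms - ref) <= 15.0,
                  np.abs(bgms - ref) <= 0.15 * ref)
    return ok.mean()

# hypothetical paired results in mg/dL (meter vs laboratory reference)
meter = [92, 110, 250, 65, 180]
ysi = [100, 104, 238, 70, 176]
print(f"{within_iso15197_2013(meter, ysi):.1%}")
```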
Designing image segmentation studies: Statistical power, sample size and reference standard quality.
Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C
2017-12-01
Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
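The paper derives its own power formula for voxel-level segmentation accuracy that accounts for reference-standard quality. As a simpler stand-in that only illustrates how effect size and variance drive sample size (it is not the paper's formula), here is a standard normal-approximation calculation for comparing two independent proportions; all numbers in the example are hypothetical.

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Normal-approximation sample size per group for a two-sided test of a
    difference between two independent proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# e.g., detect an improvement in accuracy from 0.90 to 0.93
print(n_per_group_two_proportions(0.90, 0.93))
```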
Geoscience laser altimeter system-stellar reference system
NASA Astrophysics Data System (ADS)
Millar, Pamela S.; Sirota, J. Marcos
1998-01-01
GLAS is an EOS space-based laser altimeter being developed to profile the height of the Earth's ice sheets with ~15 cm single shot accuracy from space under NASA's Mission to Planet Earth (MTPE). The primary science goal of GLAS is to determine if the ice sheets are increasing or diminishing for climate change modeling. This is achieved by measuring the ice sheet heights over Greenland and Antarctica to 1.5 cm/yr over 100 km×100 km areas by crossover analysis (Zwally 1994). This measurement performance requires the instrument to determine the pointing of the laser beam to ~5 µrad (1 arcsecond), 1-sigma, with respect to the inertial reference frame. The GLAS design incorporates a stellar reference system (SRS) to relate the laser beam pointing angle to the star field with this accuracy. This is the first time a spaceborne laser altimeter is measuring pointing to such high accuracy. The design for the stellar reference system combines an attitude determination system (ADS) with a laser reference system (LRS) to meet this requirement. The SRS approach and expected performance are described in this paper.
High-density marker imputation accuracy in sixteen French cattle breeds.
Hozé, Chris; Fouilloux, Marie-Noëlle; Venot, Eric; Guillaume, François; Dassonneville, Romain; Fritz, Sébastien; Ducrocq, Vincent; Phocas, Florence; Boichard, Didier; Croiseau, Pascal
2013-09-03
Genotyping with the medium-density Bovine SNP50 BeadChip® (50K) is now standard in cattle. The high-density BovineHD BeadChip®, which contains 777,609 single nucleotide polymorphisms (SNPs), was developed in 2010. Increasing marker density increases the level of linkage disequilibrium between quantitative trait loci (QTL) and SNPs and the accuracy of QTL localization and genomic selection. However, re-genotyping all animals with the high-density chip is not economically feasible. An alternative strategy is to genotype part of the animals with the high-density chip and to impute high-density genotypes for animals already genotyped with the 50K chip. Thus, it is necessary to investigate the error rate when imputing from the 50K to the high-density chip. Five thousand one hundred and fifty three animals from 16 breeds (89 to 788 per breed) were genotyped with the high-density chip. Imputation error rates from the 50K to the high-density chip were computed for each breed with a validation set that included the 20% youngest animals. Marker genotypes were masked for animals in the validation population in order to mimic 50K genotypes. Imputation was carried out using the Beagle 3.3.0 software. Mean allele imputation error rates ranged from 0.31% to 2.41% depending on the breed. In total, 1980 SNPs had high imputation error rates in several breeds, which is probably due to genome assembly errors, and we recommend to discard these in future studies. Differences in imputation accuracy between breeds were related to the high-density-genotyped sample size and to the genetic relationship between reference and validation populations, whereas differences in effective population size and level of linkage disequilibrium showed limited effects. Accordingly, imputation accuracy was higher in breeds with large populations and in dairy breeds than in beef breeds. More than 99% of the alleles were correctly imputed if more than 300 animals were genotyped at high-density. No improvement was observed when multi-breed imputation was performed. In all breeds, imputation accuracy was higher than 97%, which indicates that imputation to the high-density chip was accurate. Imputation accuracy depends mainly on the size of the reference population and the relationship between reference and target populations.
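Imputation error rate here is computed by masking high-density genotypes in the validation animals, imputing them from the 50K panel, and comparing imputed to true genotypes. A minimal sketch of the allele error rate on precomputed genotype arrays (Beagle performs the imputation itself; the data below are hypothetical):

```python
import numpy as np

def allele_error_rate(true_geno, imputed_geno):
    """Mean allele imputation error rate for 0/1/2 genotype dosages: each
    genotype carries two alleles, so a 0-vs-2 mismatch counts as two errors."""
    t = np.asarray(true_geno, dtype=float)
    i = np.asarray(imputed_geno, dtype=float)
    mask = ~np.isnan(t) & ~np.isnan(i)
    return np.abs(t[mask] - i[mask]).sum() / (2 * mask.sum())

# toy example: genotypes at masked high-density positions (hypothetical)
true_g = np.array([0, 1, 2, 2, 1, 0, 2])
imp_g = np.array([0, 1, 2, 1, 1, 0, 0])
print(round(allele_error_rate(true_g, imp_g), 3))   # (1 + 2) / 14 ≈ 0.214
```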
Browning, Brian L.; Browning, Sharon R.
2009-01-01
We present methods for imputing data for ungenotyped markers and for inferring haplotype phase in large data sets of unrelated individuals and parent-offspring trios. Our methods make use of known haplotype phase when it is available, and our methods are computationally efficient so that the full information in large reference panels with thousands of individuals is utilized. We demonstrate that substantial gains in imputation accuracy accrue with increasingly large reference panel sizes, particularly when imputing low-frequency variants, and that unphased reference panels can provide highly accurate genotype imputation. We place our methodology in a unified framework that enables the simultaneous use of unphased and phased data from trios and unrelated individuals in a single analysis. For unrelated individuals, our imputation methods produce well-calibrated posterior genotype probabilities and highly accurate allele-frequency estimates. For trios, our haplotype-inference method is four orders of magnitude faster than the gold-standard PHASE program and has excellent accuracy. Our methods enable genotype imputation to be performed with unphased trio or unrelated reference panels, thus accounting for haplotype-phase uncertainty in the reference panel. We present a useful measure of imputation accuracy, allelic R2, and show that this measure can be estimated accurately from posterior genotype probabilities. Our methods are implemented in version 3.0 of the BEAGLE software package. PMID:19200528
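Allelic R² measures how well imputed allele dosages track the truth, and the abstract notes that it can be estimated from posterior genotype probabilities alone. The sketch below estimates a dosage R² by replacing the unknown true-dosage moments with their posterior expectations; this is one plausible estimator of the general idea, not necessarily the exact formula implemented in BEAGLE, and the posterior values shown are hypothetical.

```python
import numpy as np

def estimated_allelic_r2(post_probs):
    """Estimate allelic/dosage R^2 at one marker from posterior genotype
    probabilities alone (no true genotypes needed).

    post_probs: (n_samples, 3) array of P(genotype = 0, 1, 2 allele copies).
    Unknown true-dosage moments are replaced by posterior expectations.
    """
    p = np.asarray(post_probs, dtype=float)
    e = p[:, 1] + 2 * p[:, 2]                # E[true dosage]
    e2 = p[:, 1] + 4 * p[:, 2]               # E[true dosage^2]
    b = np.argmax(p, axis=1).astype(float)   # best-guess dosage
    cov = (b * e).mean() - b.mean() * e.mean()
    var_b = (b ** 2).mean() - b.mean() ** 2
    var_t = e2.mean() - e.mean() ** 2
    if var_b <= 0 or var_t <= 0:
        return float("nan")
    return cov ** 2 / (var_b * var_t)

# hypothetical posteriors for 6 individuals at one imputed marker
probs = [[0.9, 0.1, 0.0], [0.2, 0.7, 0.1], [0.0, 0.2, 0.8],
         [0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.05, 0.15, 0.8]]
print(round(estimated_allelic_r2(probs), 3))
```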
Accuracy assessment of NOAA gridded daily reference evapotranspiration for the Texas High Plains
Moorhead, Jerry; Gowda, Prasanna H.; Hobbins, Michael; Senay, Gabriel; Paul, George; Marek, Thomas; Porter, Dana
2015-01-01
The National Oceanic and Atmospheric Administration (NOAA) provides daily reference evapotranspiration (ETref) maps for the contiguous United States using climatic data from North American Land Data Assimilation System (NLDAS). This data provides large-scale spatial representation of ETref, which is essential for regional scale water resources management. Data used in the development of NOAA daily ETref maps are derived from observations over surfaces that are different from short (grass — ETos) or tall (alfalfa — ETrs) reference crops, often in nonagricultural settings, which carries an unknown discrepancy between assumed and actual conditions. In this study, NOAA daily ETos and ETrs maps were evaluated for accuracy, using observed data from the Texas High Plains Evapotranspiration (TXHPET) network. Daily ETos, ETrs and the climatic data (air temperature, wind speed, and solar radiation) used for calculating ETref were extracted from the NOAA maps for TXHPET locations and compared against ground measurements on reference grass surfaces. NOAA ETref maps generally overestimated the TXHPET observations (by 1.4 and 2.2 mm/day for ETos and ETrs, respectively), which may be attributed to errors in the NLDAS modeled air temperature and wind speed, to which ETref is most sensitive. Therefore, a bias correction to NLDAS modeled air temperature and wind speed data, or adjustment to the resulting NOAA ETref, may be needed to improve the accuracy of NOAA ETref maps.
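The evaluation above compares gridded NOAA ETref against ground-based TXHPET observations and reports systematic overestimation. A minimal sketch of the bias and RMSE summaries on hypothetical daily values (the numbers below are illustrative, not data from the study):

```python
import numpy as np

def bias_and_rmse(gridded, observed):
    """Mean bias (gridded - observed) and RMSE, e.g. gridded vs station ETref."""
    g = np.asarray(gridded, dtype=float)
    o = np.asarray(observed, dtype=float)
    diff = g - o
    return diff.mean(), np.sqrt((diff ** 2).mean())

# hypothetical daily ETos values (mm/day): gridded product vs station
grid = [6.1, 7.4, 5.9, 8.2, 6.8]
station = [4.9, 6.0, 4.6, 6.5, 5.4]
bias, rmse = bias_and_rmse(grid, station)
print(f"bias = {bias:.2f} mm/day, RMSE = {rmse:.2f} mm/day")
```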
Stenz, Ulrich; Hartmann, Jens; Paffenholz, Jens-André; Neumann, Ingo
2017-08-16
Terrestrial laser scanning (TLS) is an efficient solution to collect large-scale data. The efficiency can be increased by combining TLS with additional sensors in a TLS-based multi-sensor-system (MSS). The uncertainty of scanned points is not homogenous and depends on many different influencing factors. These include the sensor properties, referencing, scan geometry (e.g., distance and angle of incidence), environmental conditions (e.g., atmospheric conditions) and the scanned object (e.g., material, color and reflectance, etc.). The paper presents methods, infrastructure and results for the validation of the suitability of TLS and TLS-based MSS. Main aspects are the backward modelling of the uncertainty on the basis of reference data (e.g., point clouds) with superordinate accuracy and the provision of a suitable environment/infrastructure (e.g., the calibration process of the targets for the registration of laser scanner and laser tracker data in a common coordinate system with high accuracy). In this context, superordinate accuracy means that the accuracy of the acquired reference data is better by a factor of 10 than the data of the validated TLS and TLS-based MSS. These aspects play an important role in engineering geodesy, where the target accuracy lies in the range of a few millimetres or less.
Definition and Proposed Realization of the International Height Reference System (IHRS)
NASA Astrophysics Data System (ADS)
Ihde, Johannes; Sánchez, Laura; Barzaghi, Riccardo; Drewes, Hermann; Foerste, Christoph; Gruber, Thomas; Liebsch, Gunter; Marti, Urs; Pail, Roland; Sideris, Michael
2017-05-01
Studying, understanding and modelling global change require geodetic reference frames with an order of accuracy higher than the magnitude of the effects to be actually studied and with high consistency and reliability worldwide. The International Association of Geodesy, taking care of providing a precise geodetic infrastructure for monitoring the Earth system, promotes the implementation of an integrated global geodetic reference frame that provides a reliable frame for consistent analysis and modelling of global phenomena and processes affecting the Earth's gravity field, the Earth's surface geometry and the Earth's rotation. The definition, realization, maintenance and wide utilization of the International Terrestrial Reference System guarantee a globally unified geometric reference frame with an accuracy at the millimetre level. An equivalent high-precision global physical reference frame that supports the reliable description of changes in the Earth's gravity field (such as sea level variations, mass displacements, processes associated with geophysical fluids) is missing. This paper addresses the theoretical foundations supporting the implementation of such a physical reference surface in terms of an International Height Reference System and provides guidance for the coming activities required for the practical and sustainable realization of this system. Based on conceptual approaches of physical geodesy, the requirements for a unified global height reference system are derived. In accordance with the practice, its realization as the International Height Reference Frame is designed. Further steps for the implementation are also proposed.
Larmer, S G; Sargolzaei, M; Schenkel, F S
2014-05-01
Genomic selection requires a large reference population to accurately estimate single nucleotide polymorphism (SNP) effects. In some Canadian dairy breeds, the available reference populations are not large enough for accurate estimation of SNP effects for traits of interest. If marker phase is highly consistent across multiple breeds, it is theoretically possible to increase the accuracy of genomic prediction for one or all breeds by pooling several breeds into a common reference population. This study investigated the extent of linkage disequilibrium (LD) in 5 major dairy breeds using a 50,000 (50K) SNP panel and 3 of the same breeds using the 777,000 (777K) SNP panel. Correlation of pair-wise SNP phase was also investigated on both panels. The level of LD was measured using the squared correlation of alleles at 2 loci (r(2)), and the consistency of SNP gametic phases was correlated using the signed square root of these values. Because of the high cost of the 777K panel, the accuracy of imputation from lower density marker panels [6,000 (6K) or 50K] was examined both within breed and using a multi-breed reference population in Holstein, Ayrshire, and Guernsey. Imputation was carried out using FImpute V2.2 and Beagle 3.3.2 software. Imputation accuracies were then calculated as both the proportion of correct SNP filled in (concordance rate) and allelic R(2). Computation time was also explored to determine the efficiency of the different algorithms for imputation. Analysis showed that LD values >0.2 were found in all breeds at distances at or shorter than the average adjacent pair-wise distance between SNP on the 50K panel. Correlations of r-values, however, did not reach high levels (<0.9) at these distances. High correlation values of SNP phase between breeds were observed (>0.94) when the average pair-wise distances using the 777K SNP panel were examined. High concordance rate (0.968-0.995) and allelic R(2) (0.946-0.991) were found for all breeds when imputation was carried out with FImpute from 50K to 777K. Imputation accuracy for Guernsey and Ayrshire was slightly lower when using the imputation method in Beagle. Computing time was significantly greater when using Beagle software, with all comparable procedures being 9 to 13 times less efficient, in terms of time, compared with FImpute. These findings suggest that use of a multi-breed reference population might increase prediction accuracy using the 777K SNP panel and that 777K genotypes can be efficiently and effectively imputed using the lower density 50K SNP panel. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
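A minimal sketch of the two headline metrics in this abstract, assuming phased haplotypes coded 0/1 for the LD calculation and masked genotypes for the imputation check; these are illustrative formulas, not the FImpute or Beagle code.

```python
import numpy as np

def ld_r2(hap_a, hap_b):
    """Squared correlation (r^2) between two biallelic loci from phased
    haplotypes coded 0/1; the signed square root of this value is what is
    correlated across breeds to measure phase consistency."""
    a = np.asarray(hap_a, dtype=float)
    b = np.asarray(hap_b, dtype=float)
    p_a, p_b = a.mean(), b.mean()
    d = np.mean(a * b) - p_a * p_b            # linkage disequilibrium D
    return d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

def concordance_rate(true_geno, imputed_geno):
    """Proportion of masked genotypes filled in correctly by imputation."""
    return float(np.mean(np.asarray(true_geno) == np.asarray(imputed_geno)))
```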
Ramu, Ashok T; Mages, Phillip; Zhang, Chong; Imamura, Jeffrey T; Bowers, John E
2012-09-01
The Seebeck coefficient of a typical thermoelectric material, silicon-doped InGaAs lattice-matched to InP, is measured over a temperature range from 300 K to 550 K. By depositing and patterning a thermometric reference bar of silicon-doped InP adjacent to a bar of the material under test, temperature differences are measured directly. This is in contrast to conventional two-thermocouple techniques that subtract two large temperatures to yield a small temperature difference, a procedure prone to errors. The proposed technique retains the simple instrumentation of two-thermocouple techniques while eliminating the critical dependence of the latter on good thermal contact. The repeatability of the proposed technique is demonstrated to be ±2.6% over three temperature sweeps, while the repeatability of two-thermocouple measurements is about ±5%. The improved repeatability is significant for reliable reporting of the ZT figure of merit, which is proportional to the square of the Seebeck coefficient. The accuracy of the proposed technique depends on the accuracy with which the high-temperature Seebeck coefficient of the reference material may be computed or measured. In this work, the Seebeck coefficient of the reference material, n+ InP, is computed by rigorous solution of the Boltzmann transport equation. The accuracy and repeatability of the proposed technique can be systematically improved by scaling, and the method is easily extensible to other material systems currently being investigated for high thermoelectric energy conversion efficiency.
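One plausible reading of the measurement principle is that the adjacent reference bar acts as a thermometer through its known Seebeck coefficient; the sketch below encodes that assumption and is not taken from the authors' published analysis.

```python
def seebeck_from_reference(v_test, v_ref, s_ref):
    """Seebeck coefficient of the material under test when an adjacent
    reference bar of known Seebeck coefficient s_ref senses the same
    temperature difference: dT = v_ref / s_ref, S_test = v_test / dT.
    Assumed reading of the principle, not the published analysis."""
    dT = v_ref / s_ref
    return v_test / dT

# hypothetical numbers: 1.2 mV across the test bar, 0.8 mV across a
# reference bar with S_ref = -200 uV/K  ->  dT = -4 K, S_test = -300 uV/K
print(seebeck_from_reference(1.2e-3, 0.8e-3, -200e-6))
```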
Qian, Shinan
2011-01-01
Nanoradian Surface Profilers (SPs) are required for state-of-the-art synchrotron radiation optics and high-precision optical measurements. Nano-radian accuracy must be maintained in the large-angle test range. However, the beams' notable lateral motions during tests of most operating profilers, combined with the insufficiencies of their optical components, generate significant errors of ∼1 μrad rms in the measurements. The solution to nano-radian accuracy for the new generation of surface profilers in this range is to apply a scanning optical head combined with a nontilted reference beam. I describe here my comparison of different scan modes and discuss some test results.
Effects of a rater training on rating accuracy in a physical examination skills assessment
Weitz, Gunther; Vinzentius, Christian; Twesten, Christoph; Lehnert, Hendrik; Bonnemeier, Hendrik; König, Inke R.
2014-01-01
Background: The accuracy and reproducibility of medical skills assessment is generally low. Rater training has little or no effect. Our knowledge in this field, however, relies on studies involving video ratings of overall clinical performances. We hypothesised that a rater training focussing on the frame of reference could improve accuracy in grading the curricular assessment of a highly standardised physical head-to-toe examination. Methods: Twenty-one raters assessed the performance of 242 third-year medical students. Eleven raters had been randomly assigned to undergo a brief frame-of-reference training a few days before the assessment. 218 encounters were successfully recorded on video and re-assessed independently by three additional observers. Accuracy was defined as the concordance between the raters' grade and the median of the observers' grade. After the assessment, both students and raters filled in a questionnaire about their views on the assessment. Results: Rater training did not have a measurable influence on accuracy. However, trained raters rated significantly more stringently than untrained raters, and their overall stringency was closer to the stringency of the observers. The questionnaire indicated a higher awareness of the halo effect in the trained raters group. Although the self-assessment of the students mirrored the assessment of the raters in both groups, the students assessed by trained raters felt more discontent with their grade. Conclusions: While training had some marginal effects, it failed to have an impact on the individual accuracy. These results in real-life encounters are consistent with previous studies on rater training using video assessments of clinical performances. The high degree of standardisation in this study was not suitable to harmonize the trained raters' grading. The data support the notion that the process of appraising medical performance is highly individual. A frame-of-reference training as applied does not effectively adjust the physicians' judgement on medical students in real-life assessments. PMID:25489341
Milton, Martin J T; Wang, Jian
2003-01-01
A new isotope dilution mass spectrometry (IDMS) method for high-accuracy quantitative analysis of gases has been developed and validated by the analysis of standard mixtures of carbon dioxide in nitrogen. The method does not require certified isotopic reference materials and does not require direct measurements of the highly enriched spike. The relative uncertainty of the method is shown to be 0.2%. Reproduced with the permission of Her Majesty's Stationery Office. Copyright Crown copyright 2003.
SINA: accurate high-throughput multiple sequence alignment of ribosomal RNA genes.
Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver
2012-07-15
In the analysis of homologous sequences, computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA) where already millions of sequences are publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. In this study, we present the SILVA Incremental Aligner (SINA) used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high throughput performance demands. SINA was evaluated in comparison with the commonly used high throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3%, 97.6% and 96.1% accuracy. A larger benchmark MSA comprising 38 772 sequences could be reproduced with 98.9% and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences. SINA was able to achieve higher accuracy than PyNAST and mothur in all performed benchmarks. Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, user manual and tutorial. SINA is made available under a personal use license.
High-accuracy reference standards for two-photon absorption in the 680–1050 nm wavelength range
de Reguardati, Sophie; Pahapill, Juri; Mikhailov, Alexander; Stepanenko, Yuriy; Rebane, Aleksander
2016-01-01
Degenerate two-photon absorption (2PA) of a series of organic fluorophores is measured using a femtosecond fluorescence excitation method in the wavelength range λ2PA = 680–1050 nm at a ~100 MHz pulse repetition rate. The relative 2PA spectral shape is obtained with an estimated accuracy of 5%, and the absolute 2PA cross section is measured at selected wavelengths with an accuracy of 8%. Significant improvement of the accuracy is achieved by means of rigorous evaluation of the quadratic dependence of the fluorescence signal on the incident photon flux in the whole wavelength range, by comparing results obtained from two independent experiments, as well as through meticulous evaluation of critical experimental parameters, including the excitation spatial and temporal pulse shape, laser power and sample geometry. Application of the reference standards in nonlinear transmittance measurements is discussed. PMID:27137334
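The quadratic-dependence check mentioned above can be expressed as a log-log slope; a small sketch with a hypothetical helper and toy data follows.

```python
import numpy as np

def excitation_order(power, fluorescence):
    """Slope of log(fluorescence) versus log(excitation power). A slope
    close to 2 confirms the quadratic (two-photon) dependence that is
    verified before extracting 2PA cross sections (illustrative sketch)."""
    slope, _intercept = np.polyfit(np.log(power), np.log(fluorescence), 1)
    return slope

# toy data: a purely quadratic signal yields a slope of 2
p = np.array([1.0, 2.0, 3.0, 4.0])
print(excitation_order(p, 0.5 * p ** 2))
```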
Accuracy of complete-arch dental impressions: a new method of measuring trueness and precision.
Ender, Andreas; Mehl, Albert
2013-02-01
A new approach to both 3-dimensional (3D) trueness and precision is necessary to assess the accuracy of intraoral digital impressions and compare them to conventionally acquired impressions. The purpose of this in vitro study was to evaluate whether a new reference scanner is capable of measuring conventional and digital intraoral complete-arch impressions for 3D accuracy. A steel reference dentate model was fabricated and measured with a reference scanner (digital reference model). Conventional impressions were made from the reference model, poured with Type IV dental stone, scanned with the reference scanner, and exported as digital models. Additionally, digital impressions of the reference model were made and the digital models were exported. Precision was measured by superimposing the digital models within each group. Superimposing the digital models on the digital reference model assessed the trueness of each impression method. Statistical significance was assessed with an independent sample t test (α=.05). The reference scanner delivered high accuracy over the entire dental arch with a precision of 1.6 ±0.6 µm and a trueness of 5.3 ±1.1 µm. Conventional impressions showed significantly higher precision (12.5 ±2.5 µm) and trueness values (20.4 ±2.2 µm) with small deviations in the second molar region (P<.001). Digital impressions were significantly less accurate with a precision of 32.4 ±9.6 µm and a trueness of 58.6 ±15.8µm (P<.001). More systematic deviations of the digital models were visible across the entire dental arch. The new reference scanner is capable of measuring the precision and trueness of both digital and conventional complete-arch impressions. The digital impression is less accurate and shows a different pattern of deviation than the conventional impression. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
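Assuming the scans are exported as already-superimposed point clouds, the trueness and precision definitions used above can be sketched as mean closest-point deviations; this is an illustrative computation, not the study's software pipeline.

```python
import numpy as np
from itertools import combinations
from scipy.spatial import cKDTree

def surface_deviation(test_pts, ref_pts):
    """Mean unsigned closest-point distance from one aligned point cloud
    to another (the clouds are assumed to be already superimposed)."""
    dist, _ = cKDTree(ref_pts).query(test_pts)
    return dist.mean()

def trueness(models, reference):
    """Average deviation of each digitised model from the reference scan."""
    return float(np.mean([surface_deviation(m, reference) for m in models]))

def precision(models):
    """Average pairwise deviation among repeated impressions of one group."""
    pairs = list(combinations(models, 2))
    return float(np.mean([surface_deviation(a, b) for a, b in pairs]))
```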
Accuracy of taxonomy prediction for 16S rRNA and fungal ITS sequences
2018-01-01
Prediction of taxonomy for marker gene sequences such as 16S ribosomal RNA (rRNA) is a fundamental task in microbiology. Most experimentally observed sequences are diverged from reference sequences of authoritatively named organisms, creating a challenge for prediction methods. I assessed the accuracy of several algorithms using cross-validation by identity, a new benchmark strategy which explicitly models the variation in distances between query sequences and the closest entry in a reference database. When the accuracy of genus predictions was averaged over a representative range of identities with the reference database (100%, 99%, 97%, 95% and 90%), all tested methods had ≤50% accuracy on the currently-popular V4 region of 16S rRNA. Accuracy was found to fall rapidly with identity; for example, better methods were found to have V4 genus prediction accuracy of ∼100% at 100% identity but ∼50% at 97% identity. The relationship between identity and taxonomy was quantified as the probability that a rank is the lowest shared by a pair of sequences with a given pair-wise identity. With the V4 region, 95% identity was found to be a twilight zone where taxonomy is highly ambiguous because the probabilities that the lowest shared rank between pairs of sequences is genus, family, order or class are approximately equal. PMID:29682424
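The rank-probability quantity described above can be sketched as follows, assuming lineages are given as ordered lists from domain down to genus; the helper names are hypothetical.

```python
from collections import Counter

RANKS = ["domain", "phylum", "class", "order", "family", "genus"]

def lowest_shared_rank(lineage_a, lineage_b):
    """Most specific rank at which two lineages (ordered domain->genus)
    still agree, or None if they already differ at domain level."""
    shared = None
    for rank, a, b in zip(RANKS, lineage_a, lineage_b):
        if a != b:
            break
        shared = rank
    return shared

def rank_probabilities(lineage_pairs):
    """Empirical P(lowest shared rank = r) over a set of sequence pairs,
    e.g. all pairs falling into one pairwise-identity bin -- a sketch of
    the quantity reported for the 95%-identity 'twilight zone'."""
    counts = Counter(lowest_shared_rank(a, b) for a, b in lineage_pairs)
    total = sum(counts.values())
    return {rank: n / total for rank, n in counts.items()}
```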
Accuracy assessment of NLCD 2006 land cover and impervious surface
Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.
2013-01-01
Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.
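For reference, the overall and user's accuracies quoted above follow from a confusion matrix in the usual way; a small sketch, assuming rows are map classes and columns are reference classes, is given below.

```python
import numpy as np

def accuracy_measures(confusion):
    """Overall, user's and producer's accuracies from a square confusion
    matrix of sample counts (rows = map classes, columns = reference
    classes; the orientation is an assumption of this sketch)."""
    cm = np.asarray(confusion, dtype=float)
    overall = np.trace(cm) / cm.sum()
    users = np.diag(cm) / cm.sum(axis=1)       # 1 - commission error
    producers = np.diag(cm) / cm.sum(axis=0)   # 1 - omission error
    return overall, users, producers

# toy 3-class example
cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 5, 40]]
print(accuracy_measures(cm))
```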
Two laboratory methods for the calibration of GPS speed meters
NASA Astrophysics Data System (ADS)
Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie
2015-01-01
The set-ups of two calibration systems are presented to investigate calibration methods for GPS speed meters. The GPS speed meter calibrated is a special type of high-accuracy speed meter for vehicles, which uses Doppler demodulation of GPS signals to calculate the speed of a moving target. Three experiments are performed: simulated calibration, field-test signal replay calibration, and an in-field comparison with an optical speed meter. The experiments are conducted at specific speeds in the range of 40-180 km h-1 with the same GPS speed meter as the device under calibration. The evaluation of measurement results validates both methods for calibrating GPS speed meters. The relative deviations between the measurement results of the GPS-based high accuracy speed meter and those of the optical speed meter are analyzed, and the equivalent uncertainty of the comparison is evaluated. The comparison results justify the utilization of GPS speed meters as reference equipment if no fewer than seven satellites are available. This study contributes to the widespread use of GPS-based high accuracy speed meters as legal reference equipment in traffic speed metrology.
NASA Astrophysics Data System (ADS)
Kamal, Muhammad; Johansen, Kasper
2017-10-01
Effective mangrove management requires spatially explicit information, such as a mangrove tree crown map, as a basis for ecosystem diversity studies and health assessment. Accuracy assessment is an integral part of any mapping activity to measure the effectiveness of the classification approach. In geographic object-based image analysis (GEOBIA), assessment of the geometric accuracy (shape, symmetry and location) of the image objects created by image segmentation is required. In this study we used an explicit area-based accuracy assessment to measure the degree of similarity between the classification results and reference data from different aspects, including overall quality (OQ), user's accuracy (UA), producer's accuracy (PA) and overall accuracy (OA). We developed a rule set to delineate mangrove tree crowns using a WorldView-2 pan-sharpened image. The reference map was obtained by visual delineation of the mangrove tree crown boundaries from a very high spatial resolution aerial photograph (7.5 cm pixel size). Ten random points, each with a 10 m radius circular buffer, were created to calculate the area-based accuracy assessment. The resulting circular polygons were used to clip both the classified image objects and the reference map for area comparisons. In this case, the area-based accuracy assessment resulted in 64% and 68% for OQ and OA, respectively. The overall quality reflects the class-related area accuracy: the area correctly classified as tree crowns was 64% of the total tree crown area. On the other hand, the overall accuracy of 68% was calculated as the percentage of all correctly classified classes (tree crowns and canopy gaps) relative to the total class area (the entire image). Overall, the area-based accuracy assessment was simple to implement and easy to interpret. It also shows explicitly the omission and commission error variations of object boundary delineation with colour-coded polygons.
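A sketch of one common formulation of the area-based measures named above; the paper's exact definitions may differ.

```python
def area_based_measures(mapped_area, reference_area, overlap_area):
    """Per-class area-based measures for object-based accuracy assessment:
    user's accuracy  = overlap / mapped area      (commission side)
    producer's acc.  = overlap / reference area   (omission side)
    overall quality  = overlap / (mapped + reference - overlap)
    One common GEOBIA formulation, given as a sketch only."""
    ua = overlap_area / mapped_area
    pa = overlap_area / reference_area
    oq = overlap_area / (mapped_area + reference_area - overlap_area)
    return ua, pa, oq

# toy example: 640 m^2 of correctly delineated crown out of 900 mapped
# and 1000 reference m^2
print(area_based_measures(900.0, 1000.0, 640.0))
```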
Accuracy of Handheld Blood Glucose Meters at High Altitude
de Vries, Suzanna T.; Fokkert, Marion J.; Dikkeschei, Bert D.; Rienks, Rienk; Bilo, Karin M.; Bilo, Henk J. G.
2010-01-01
Background Due to increasing numbers of people with diabetes taking part in extreme sports (e.g., high-altitude trekking), reliable handheld blood glucose meters (BGMs) are necessary. Accurate blood glucose measurement under extreme conditions is paramount for safe recreation at altitude. Prior studies reported bias in blood glucose measurements using different BGMs at high altitude. We hypothesized that glucose-oxidase based BGMs are more influenced by the lower atmospheric oxygen pressure at altitude than glucose dehydrogenase based BGMs. Methodology/Principal Findings Glucose measurements at simulated altitude of nine BGMs (six glucose dehydrogenase and three glucose oxidase BGMs) were compared to glucose measurement on a similar BGM at sea level and to a laboratory glucose reference method. Venous blood samples of four different glucose levels were used. Moreover, two glucose oxidase and two glucose dehydrogenase based BGMs were evaluated at different altitudes on Mount Kilimanjaro. Accuracy criteria were set at a bias <15% from reference glucose (when >6.5 mmol/L) and <1 mmol/L from reference glucose (when <6.5 mmol/L). No significant difference was observed between measurements at simulated altitude and sea level for either glucose oxidase based BGMs or glucose dehydrogenase based BGMs as a group phenomenon. Two GDH based BGMs did not meet set performance criteria. Most BGMs are generally overestimating true glucose concentration at high altitude. Conclusion At simulated high altitude all tested BGMs, including glucose oxidase based BGMs, did not show influence of low atmospheric oxygen pressure. All BGMs, except for two GDH based BGMs, performed within predefined criteria. At true high altitude one GDH based BGM had best precision and accuracy. PMID:21103399
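The accuracy criteria stated above translate directly into a simple check; a minimal sketch follows.

```python
def meets_accuracy_criterion(measured, reference):
    """Accuracy criterion used in the study: bias < 1 mmol/L when the
    reference glucose is below 6.5 mmol/L, otherwise bias < 15%."""
    if reference < 6.5:
        return abs(measured - reference) < 1.0
    return abs(measured - reference) / reference < 0.15

# toy example: a meter reading 8.0 mmol/L against a 7.2 mmol/L reference
print(meets_accuracy_criterion(8.0, 7.2))
```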
Bourier, Felix; Hessling, Gabriele; Ammar-Busch, Sonia; Kottmaier, Marc; Buiatti, Alessandra; Grebmer, Christian; Telishevska, Marta; Semmler, Verena; Lennerz, Carsten; Schneider, Christine; Kolb, Christof; Deisenhofer, Isabel; Reents, Tilko
2016-03-01
Contact-force (CF) sensing catheters are increasingly used in clinical electrophysiological practice due to their efficacy and safety profile. As data about the accuracy of this technology are scarce, we sought to quantify accuracy based on in vitro experiments. A custom-made force sensor was constructed that allowed exact force reference measurements registered via a flexible membrane. A Smarttouch Surround Flow (ST SF) ablation catheter (Biosense Webster, Diamond Bar, CA, USA) was brought in contact with the membrane of the force sensor in order to compare the ST SF force measurements to force sensor reference measurements. ST SF force sensing technology is based on deflection registration between the distal and proximal catheter tip. The experiment was repeated for n = 10 ST SF catheters, which showed no significant difference in accuracy levels. A series of measurements (n = 1200) was carried out for different angles of force acting to the catheter tip (0°/perpendicular contact, 30°, 60°, 90°/parallel contact). The mean absolute differences between reference and ST SF measurements were 1.7 ± 1.8 g (0°), 1.6 ± 1.2 g (30°), 1.4 ± 1.3 g (60°), and 6.6 ± 5.9 g (90°). Measurement accuracy was significantly higher in non-parallel contact when compared with parallel contact (P < 0.01). Catheter force measurements using the ST SF catheters show a high level of accuracy regarding differences to reference measurements and reproducibility. The reduced accuracy in measurements of 90° acting forces (parallel contact) might be clinically important when creating, for example, linear lesions. © 2015 Wiley Periodicals, Inc.
Miskowiak, K W; Larsen, J E; Harmer, C J; Siebner, H R; Kessing, L V; Macoveanu, J; Vinberg, M
2018-01-15
Negative cognitive bias and aberrant neural processing of self-referent emotional words seem to be trait-marks of depression. However, it is unclear whether these neurocognitive changes are present in unaffected first-degree relatives and constitute an illness endophenotype. Fifty-three healthy, never-depressed monozygotic or dizygotic twins with a co-twin history of depression (high-risk group: n = 26) or no first-degree family history of depression (low-risk group: n = 27) underwent neurocognitive testing and functional magnetic resonance imaging (fMRI) as part of a follow-up cohort study. Participants performed a self-referent emotional word categorisation task and a free word recall task followed by a recognition task during fMRI. Participants also completed questionnaires assessing mood, personality traits and coping strategies. High-risk and low-risk twins (age, mean ± SD: 40 ± 11) were well-balanced for demographic variables, mood, coping and neuroticism. High-risk twins showed lower accuracy during self-referent categorisation of emotional words independent of valence and more false recollections of negative words than low-risk twins during free recall. Functional MRI yielded no differences between high-risk and low-risk twins in retrieval-specific neural activity for positive or negative words or during the recognition of negative versus positive words within the hippocampus or prefrontal cortex. The subtle display of negative recall bias is consistent with the hypothesis that self-referent negative memory bias is an endophenotype for depression. High-risk twins' lower categorisation accuracy adds to the evidence for valence-independent cognitive deficits in individuals at familial risk for depression. Copyright © 2017 Elsevier B.V. All rights reserved.
Han, Bingqing; Ge, Menglei; Zhao, Haijian; Yan, Ying; Zeng, Jie; Zhang, Tianjiao; Zhou, Weiyan; Zhang, Jiangtao; Wang, Jing; Zhang, Chuanbao
2017-11-27
Serum calcium level is an important clinical index that reflects pathophysiological states. However, detection accuracy in laboratory tests is not ideal; as such, a high accuracy method is needed. We developed a reference method for measuring serum calcium levels by isotope dilution inductively coupled plasma mass spectrometry (ID ICP-MS), using 42Ca as the enriched isotope. Serum was digested with 69% ultrapure nitric acid and diluted to a suitable concentration. The 44Ca/42Ca ratio was detected in H2 mode; spike concentration was calibrated by reverse IDMS using standard reference material (SRM) 3109a, and sample concentration was measured by a bracketing procedure. We compared the performance of ID ICP-MS with those of three other reference methods in China using the same serum and aqueous samples. The relative expanded uncertainty of the sample concentration was 0.414% (k=2). The range of repeatability (within-run imprecision), intermediate imprecision (between-run imprecision), and intra-laboratory imprecision were 0.12%-0.19%, 0.07%-0.09%, and 0.16%-0.17%, respectively, for two of the serum samples. SRM909bI, SRM909bII, SRM909c, and GBW09152 were found to be within the certified value interval, with mean relative bias values of 0.29%, -0.02%, 0.10%, and -0.19%, respectively. The range of recovery was 99.87%-100.37%. Results obtained by ID ICP-MS showed a better accuracy than and were highly correlated with those of other reference methods. ID ICP-MS is a simple and accurate candidate reference method for serum calcium measurement and can be used to establish and improve serum calcium reference system in China.
NASA Astrophysics Data System (ADS)
Rabah, Mostafa; Elmewafey, Mahmoud; Farahan, Magda H.
2016-06-01
A geodetic control network is the wire-frame or skeleton on which continuous and consistent mapping, Geographic Information Systems (GIS), and surveys are based. Traditionally, geodetic control points are established as permanent physical monuments placed in the ground and precisely marked, located, and documented. With the development of satellite surveying methods and their availability and high degree of accuracy, a geodetic control network can be established using GNSS and referred to an international terrestrial reference frame used as a three-dimensional geocentric reference system for a country. Based on this concept, in 1992, the Egypt Survey Authority (ESA) established two networks, namely the High Accuracy Reference Network (HARN) and the National Agricultural Cadastral Network (NACN). To transfer the International Terrestrial Reference Frame to the HARN, the HARN was connected with four IGS stations. The processing results were 1:10,000,000 (Order A) for HARN and 1:1,000,000 (Order B) for NACN relative network accuracy standards between stations, defined in ITRF1994 Epoch 1996. Since 1996, ESA has not performed any updating or maintenance work on these networks. To assess how this lack of maintenance has degraded the HARN and NACN, the available HARN and NACN stations in the Nile Delta were observed. Processing of the tested part was done with the CSRS-PPP service, based on Precise Point Positioning (PPP), and with Trimble Business Center (TBC). The study shows the feasibility of Precise Point Positioning for updating the absolute positioning of the HARN network and its role in updating the reference frame (ITRF). The study also confirmed the necessity of datum maintenance for the Egyptian networks, a role that has so far been absent.
Perren, Andreas; Previsdomini, Marco; Perren, Ilaria; Merlani, Paolo
2012-04-05
The nine equivalents of nursing manpower use score (NEMS) is frequently used to quantify, evaluate and allocate nursing workload at intensive care unit level. In Switzerland it has also become a key component in defining the degree of ICU hospital reimbursement. The accuracy of nurse registered NEMS scores in real life was assessed and error-prone variables were identified. In this retrospective multicentre audit three reviewers (1 nurse, 2 intensivists) independently reassessed a total of 529 NEMS scores. Correlation and agreement of the sum-scores and of the different variables among reviewers, as well as between nurses and the reviewers' reference value, were assessed (ICC, % agreement and kappa). Bland & Altman (reference value - nurses) of sum-scores and regression of the difference were determined and a logistic regression model identifying risk factors for erroneous assessments was calculated. Agreement for sum-scores among reviewers was almost perfect (mean ICC = 0.99 / significant correlation p <0.0001). The nurse registered NEMS score (mean ± SD) was 24.8 ± 8.6 points versus 24.0 ± 8.6 points (p <0.13 for difference) of the reference value, with a slightly lower ICC (0.83). The lowest agreement was found in intravenous medication (0.85). Bland & Altman was 0.84 ± 10, with a significant regression between the difference and the reference value, indicating overall an overestimation of lower scores (≤29 points) and underestimation of higher scores. Accuracy of scores or variables was not associated with nurses' characteristics. In real life, nurse registered NEMS scores are highly accurate. Lower (≤29 points) NEMS sum-scores are overestimated and higher underestimated. Accuracy of scores or variables was not associated with nurses' characteristics.
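A minimal sketch of the Bland & Altman computation described above (reference minus nurse scores), with 95% limits of agreement added as a common convention and the regression slope capturing the over-/under-estimation pattern reported for low versus high scores.

```python
import numpy as np

def bland_altman(reference_scores, nurse_scores):
    """Bland-Altman bias (reference - nurses), 95% limits of agreement and
    the slope of the difference regressed on the reference score, which
    indicates proportional over-/under-estimation (illustrative sketch)."""
    ref = np.asarray(reference_scores, dtype=float)
    nurse = np.asarray(nurse_scores, dtype=float)
    diff = ref - nurse
    bias = diff.mean()
    sd = diff.std(ddof=1)
    slope, _intercept = np.polyfit(ref, diff, 1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd), slope
```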
Voskoboev, Nikolay V; Cambern, Sarah J; Hanley, Matthew M; Giesen, Callen D; Schilling, Jason J; Jannetto, Paul J; Lieske, John C; Block, Darci R
2015-11-01
Validation of tests performed on body fluids other than blood or urine can be challenging due to the lack of a reference method to confirm accuracy. The aim of this study was to evaluate alternate assessments of accuracy that laboratories can rely on to validate body fluid tests in the absence of a reference method using the example of sodium (Na(+)), potassium (K(+)), and magnesium (Mg(2+)) testing in stool fluid. Validations of fecal Na(+), K(+), and Mg(2+) were performed on the Roche cobas 6000 c501 (Roche Diagnostics) using residual stool specimens submitted for clinical testing. Spiked recovery, mixing studies, and serial dilutions were performed and % recovery of each analyte was calculated to assess accuracy. Results were confirmed by comparison to a reference method (ICP-OES, PerkinElmer). Mean recoveries for fecal electrolytes were Na(+) upon spiking=92%, mixing=104%, and dilution=105%; K(+) upon spiking=94%, mixing=96%, and dilution=100%; and Mg(2+) upon spiking=93%, mixing=98%, and dilution=100%. When autoanalyzer results were compared to reference ICP-OES results, Na(+) had a slope=0.94, intercept=4.1, and R(2)=0.99; K(+) had a slope=0.99, intercept=0.7, and R(2)=0.99; and Mg(2+) had a slope=0.91, intercept=-4.6, and R(2)=0.91. Calculated osmotic gap using both methods were highly correlated with slope=0.95, intercept=4.5, and R(2)=0.97. Acid pretreatment increased magnesium recovery from a subset of clinical specimens. A combination of mixing, spiking, and dilution recovery experiments are an acceptable surrogate for assessing accuracy in body fluid validations in the absence of a reference method. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
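A sketch of one common way to compute the spiking, mixing, and dilution recoveries named above; the exact formulas used in the study are not given in the abstract, so these are assumptions.

```python
def spike_recovery(measured_spiked, measured_neat, amount_added):
    """Percent recovery of a known amount of analyte added to a specimen."""
    return 100.0 * (measured_spiked - measured_neat) / amount_added

def mixing_recovery(measured_mix, measured_a, measured_b):
    """Percent recovery for an equal-volume mix of two specimens, relative
    to the average of their individual results."""
    return 100.0 * measured_mix / ((measured_a + measured_b) / 2.0)

def dilution_recovery(measured_diluted, measured_neat, dilution_factor):
    """Percent recovery after dilution, relative to the expected value of
    the neat result divided by the dilution factor."""
    return 100.0 * measured_diluted * dilution_factor / measured_neat
```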
Calus, M P L; de Haas, Y; Veerkamp, R F
2013-10-01
Genomic selection holds the promise to be particularly beneficial for traits that are difficult or expensive to measure, such that access to phenotypes on large daughter groups of bulls is limited. Instead, cow reference populations can be generated, potentially supplemented with existing information from the same or (highly) correlated traits available on bull reference populations. The objective of this study, therefore, was to develop a model to perform genomic predictions and genome-wide association studies based on a combined cow and bull reference data set, with the accuracy of the phenotypes differing between the cow and bull genomic selection reference populations. The developed bivariate Bayesian stochastic search variable selection model allowed for an unbalanced design by imputing residuals in the residual updating scheme for all missing records. The performance of this model is demonstrated on a real data example, where the analyzed trait, being milk fat or protein yield, was either measured only on a cow or a bull reference population, or recorded on both. Our results were that the developed bivariate Bayesian stochastic search variable selection model was able to analyze 2 traits, even though animals had measurements on only 1 of 2 traits. The Bayesian stochastic search variable selection model yielded consistently higher accuracy for fat yield compared with a model without variable selection, both for the univariate and bivariate analyses, whereas the accuracy of both models was very similar for protein yield. The bivariate model identified several additional quantitative trait loci peaks compared with the single-trait models on either trait. In addition, the bivariate models showed a marginal increase in accuracy of genomic predictions for the cow traits (0.01-0.05), although a greater increase in accuracy is expected as the size of the bull population increases. Our results emphasize that the chosen value of priors in Bayesian genomic prediction models are especially important in small data sets. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
ARCSTONE: Accurate Calibration of Lunar Spectral Reflectance from space
NASA Astrophysics Data System (ADS)
Young, C. L.; Lukashin, C.; Jackson, T.; Cooney, M.; Ryan, N.; Beverly, J.; Davis, W.; Nguyen, T.; Rutherford, G.; Swanson, R.; Kehoe, M.; Kopp, G.; Smith, P.; Woodward, J.; Carvo, J.; Stone, T.
2017-12-01
Calibration accuracy and consistency are key on-orbit performance metrics for Earth observing sensors. The accuracy and consistency of measurements across multiple instruments in low Earth and geostationary orbits are directly connected to the scientific understanding of complex systems, such as Earth's weather and climate. Recent studies have demonstrated the quantitative impacts of observational accuracy on the science data products [1] and the ability to detect climate change trends for essential climate variables (e.g., Earth's radiation budget, cloud feedback, and long-term trends in cloud parameters) [2, 3]. It is common for sensors to carry references for calibration at various wavelengths onboard, but these can be subject to degradation and increase mass and risk. The Moon can be considered a natural solar diffuser in space. Establishing the Moon as an on-orbit high-accuracy calibration reference enables broad intercalibration opportunities, as the lunar reflectance is time-invariant and can be directly measured by most Earth-observing instruments. Existing approaches to calibrate sensors against the Moon can achieve stabilities of a tenth of a percent over a decade, as demonstrated by the SeaWIFS. However, the current lunar calibration quality, with 5 - 10% bias, depends on the photometric model of the Moon [4]. Significant improvements in the lunar reference are possible and are necessary for climate-level absolute calibrations using the Moon. The ARCSTONE instrument will provide a reliable reference for high-accuracy on-orbit calibration for reflected solar instruments. An orbiting spectrometer flying on a CubeSat in low Earth orbit will provide lunar spectral reflectance with accuracy < 0.5% (k = 1), sufficient to establish an SI-traceable absolute lunar calibration standard for past, current, and future Earth weather and climate sensors. The ARCSTONE team will present the instrument design status and path forward for development, building, calibration and testing. [1] Lyapustin, A. Y. et al., 2014, Atmos. Meas. Tech., 7, pp. 4353 - 4365. [2] Wielicki, B. A., et al., 2013, Bull. Amer. Meteor. Soc., 94, pp. 1519 - 1539. [3] Shea, Y. L., et al., 2017 J. of Climate. [4] Kieffer, H. H., et al., 2005, The Astronomical J., v. 129, pp. 2887 - 2901.
Frampton, G K; Kalita, N; Payne, L; Colquitt, J L; Loveman, E; Downes, S M; Lotery, A J
2017-07-01
We conducted a systematic review of the accuracy of fundus autofluorescence (FAF) imaging for diagnosing and monitoring retinal conditions. Searches in November 2014 identified English language references. Sources included MEDLINE, EMBASE, the Cochrane Library, Web of Science, and MEDION databases; reference lists of retrieved studies; and internet pages of relevant organisations, meetings, and trial registries. For inclusion, studies had to report FAF imaging accuracy quantitatively. Studies were critically appraised using QUADAS risk of bias criteria. Two reviewers conducted all review steps. From 2240 unique references identified, eight primary research studies met the inclusion criteria. These investigated diagnostic accuracy of FAF imaging for choroidal neovascularisation (one study), reticular pseudodrusen (three studies), cystoid macular oedema (two studies), and diabetic macular oedema (two studies). Diagnostic sensitivity of FAF imaging ranged from 32 to 100% and specificity from 34 to 100%. However, owing to methodological limitations, including high and/or unclear risks of bias, none of these studies provides conclusive evidence of the diagnostic accuracy of FAF imaging. Study heterogeneity precluded meta-analysis. In most studies, the patient spectrum was not reflective of those who would present in clinical practice and no studies adequately reported whether FAF images were interpreted consistently. No studies of monitoring accuracy were identified. An update in October 2016, based on MEDLINE and internet searches, identified four new studies but did not alter our conclusions. Robust quantitative evidence on the accuracy of FAF imaging and how FAF images are interpreted is lacking. We provide recommendations to address this.
A laboratory assessment of the measurement accuracy of weighing type rainfall intensity gauges
NASA Astrophysics Data System (ADS)
Colli, M.; Chan, P. W.; Lanza, L. G.; La Barbera, P.
2012-04-01
In recent years the WMO Commission for Instruments and Methods of Observation (CIMO) has fostered noticeable advancements in the accuracy of precipitation measurement by providing recommendations on the standardization of equipment and exposure, instrument calibration and data correction, as a consequence of various comparative campaigns involving manufacturers and national meteorological services from the participating countries (Lanza et al., 2005; Vuerich et al., 2009). Extreme event analysis has been shown to be highly affected by the on-site RI measurement accuracy (see e.g. Molini et al., 2004), and the time resolution of the available RI series certainly constitutes another key factor in constructing hyetographs that are representative of real rain events. The OTT Pluvio2 weighing gauge (WG) and the GEONOR T-200 vibrating-wire precipitation gauge demonstrated very good performance under previous constant flow rate calibration efforts (Lanza et al., 2005). Although WGs do provide better performance than more traditional Tipping Bucket Rain gauges (TBR) under continuous and constant reference intensity, dynamic effects seem to affect the accuracy of WG measurements under real-world, time-varying rainfall conditions (Vuerich et al., 2009). The most relevant of these is the response time of the acquisition system and the resulting systematic delay of the instrument in assessing the exact weight of the bin containing cumulated precipitation. This delay assumes a relevant role when high-resolution rain intensity time series are sought from the instrument, as is the case in many hydrologic and meteo-climatic applications. This work reports the laboratory evaluation of Pluvio2 and T-200 rainfall intensity measurement accuracy. Tests are carried out by simulating different artificial precipitation events, namely non-stationary rainfall intensity, using a highly accurate dynamic rainfall generator. Time series measured by an Ogawa drop counter (DC) at a field test site located within the Hong Kong International Airport (HKIA) were aggregated at a 1-minute scale and used as reference for the artificial rain generation (Colli et al., 2012). The preliminary development and validation of the rainfall simulator for the generation of variable-time-step reference intensities is also shown. The generator is characterized by a sufficiently short time response with respect to the expected weighing gauge behavior in order to ensure effective comparison of the measured and reference intensity at very high resolution in time.
Remote sensing and the Mississippi high accuracy reference network
NASA Technical Reports Server (NTRS)
Mick, Mark; Alexander, Timothy M.; Woolley, Stan
1994-01-01
Since 1986, NASA's Commercial Remote Sensing Program (CRSP) at Stennis Space Center has supported commercial remote sensing partnerships with industry. CRSP's mission is to maximize U.S. market exploitation of remote sensing and related space-based technologies and to develop advanced technical solutions for spatial information requirements. Observation, geolocation, and communications technologies are converging and their integration is critical to realize the economic potential for spatial informational needs. Global positioning system (GPS) technology enables a virtual revolution in geopositionally accurate remote sensing of the earth. A majority of states are creating GPS-based reference networks, or high accuracy reference networks (HARN). A HARN can be defined for a variety of local applications and tied to aerial or satellite observations to provide an important contribution to geographic information systems (GIS). This paper details CRSP's experience in the design and implementation of a HARN in Mississippi and the design and support of future applications of integrated earth observations, geolocation, and communications technology.
Establishing a celestial VLBI reference frame. 1: Searching for VLBI sources
NASA Technical Reports Server (NTRS)
Preston, R. A.; Morabito, D. D.; Williams, J. G.; Slade, M. A.; Harris, A. W.; Finley, S. G.; Skjerve, L. J.; Tanida, L.; Spitzmesser, D. J.; Johnson, B.
1978-01-01
The Deep Space Network is currently engaged in establishing a new high-accuracy VLBI celestial reference frame. The present status of the task of finding suitable celestial radio sources for constructing this reference frame is discussed. To date, 564 VLBI sources were detected, with 166 of these lying within 10 deg of the ecliptic plane. The variation of the sky distribution of these sources with source strength is examined.
Diagnostic accuracy of high-definition CT coronary angiography in high-risk patients.
Iyengar, S S; Morgan-Hughes, G; Ukoumunne, O; Clayton, B; Davies, E J; Nikolaou, V; Hyde, C J; Shore, A C; Roobottom, C A
2016-02-01
To assess the diagnostic accuracy of computed tomography coronary angiography (CTCA) using a combination of high-definition CT (HD-CTCA) and high level of reader experience, with invasive coronary angiography (ICA) as the reference standard, in high-risk patients for the investigation of coronary artery disease (CAD). Three hundred high-risk patients underwent HD-CTCA and ICA. Independent experts evaluated the images for the presence of significant CAD, defined primarily as the presence of moderate (≥ 50%) stenosis and secondarily as the presence of severe (≥ 70%) stenosis in at least one coronary segment, in a blinded fashion. HD-CTCA was compared to ICA as the reference standard. No patients were excluded. Two hundred and six patients (69%) had moderate and 178 (59%) had severe stenosis in at least one vessel at ICA. The sensitivity, specificity, positive predictive value, and negative predictive value were 97.1%, 97.9%, 99% and 93.9% for moderate stenosis, and 98.9%, 93.4%, 95.7% and 98.3%, for severe stenosis, on a per-patient basis. The combination of HD-CTCA and experienced readers applied to a high-risk population, results in high diagnostic accuracy comparable to ICA. Modern generation CT systems in experienced hands might be considered for an expanded role. Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Nelson, Sarah C.; Stilp, Adrienne M.; Papanicolaou, George J.; Taylor, Kent D.; Rotter, Jerome I.; Thornton, Timothy A.; Laurie, Cathy C.
2016-01-01
Imputation is commonly used in genome-wide association studies to expand the set of genetic variants available for analysis. Larger and more diverse reference panels, such as the final Phase 3 of the 1000 Genomes Project, hold promise for improving imputation accuracy in genetically diverse populations such as Hispanics/Latinos in the USA. Here, we sought to empirically evaluate imputation accuracy when imputing to a 1000 Genomes Phase 3 versus a Phase 1 reference, using participants from the Hispanic Community Health Study/Study of Latinos. Our assessments included calculating the correlation between imputed and observed allelic dosage in a subset of samples genotyped on a supplemental array. We observed that the Phase 3 reference yielded higher accuracy at rare variants, but that the two reference panels were comparable at common variants. At a sample level, the Phase 3 reference improved imputation accuracy in Hispanic/Latino samples from the Caribbean more than for Mainland samples, which we attribute primarily to the additional reference panel samples available in Phase 3. We conclude that a 1000 Genomes Project Phase 3 reference panel can yield improved imputation accuracy compared with Phase 1, particularly for rare variants and for samples of certain genetic ancestry compositions. Our findings can inform imputation design for other genome-wide association studies of participants with diverse ancestries, especially as larger and more diverse reference panels continue to become available. PMID:27346520
Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong
2017-01-01
In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired with a high-accuracy industrial "line-laser" scanner (Faro) as the reference model, and two test models were obtained with a "stereophotography" system (3dMD) and a "structured light" facial scanner (FaceScan), respectively. Registration based on the iterative closest point (ICP) algorithm was executed to overlap the test models onto the reference models, and "3D error", a new measurement indicator calculated by reverse engineering software (Geomagic Studio), was used to evaluate the 3D global and partial (upper, middle, and lower parts of the face) PA of each facial scanner. The respective 3D accuracy of the stereophotography and structured light facial scanners obtained for facial deformities was 0.58±0.11 mm and 0.57±0.07 mm. The 3D accuracy of different facial partitions was inconsistent; the middle face had the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), they all met the requirement for oral clinic use.
Wang, Hubiao; Wu, Lin; Chai, Hua; Xiao, Yaofei; Hsu, Houtse; Wang, Yong
2017-08-10
The variation of a marine gravity anomaly reference map is one of the important factors that affect the location accuracy of INS/Gravity integrated navigation systems in underwater navigation. In this study, based on marine gravity anomaly reference maps, new characteristic parameters of the gravity anomaly were constructed. Those characteristic values were calculated for 13 zones (105°-145° E, 0°-40° N) in the Western Pacific area, and simulation experiments of gravity matching-aided navigation were run. The influence of gravity variations on the accuracy of gravity matching-aided navigation was analyzed, and location accuracy of gravity matching in different zones was determined. Studies indicate that the new parameters may better characterize the marine gravity anomaly. Given the precision of current gravimeters and the resolution and accuracy of reference maps, the location accuracy of gravity matching in China's Western Pacific area is ~1.0-4.0 nautical miles (n miles). In particular, accuracy in regions around the South China Sea and Sulu Sea was the highest, better than 1.5 n miles. The gravity characteristic parameters identified herein and characteristic values calculated in various zones provide a reference for the selection of navigation area and planning of sailing routes under conditions requiring certain navigational accuracy.
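For illustration only, two classic characteristic values of a gravity-anomaly map tile can be computed as below; the study constructs its own new characteristic parameters, which are not reproduced here.

```python
import numpy as np

def map_characteristics(anomaly_grid):
    """Two classic characteristic values of a gravity-anomaly map tile:
    the standard deviation and a simple roughness (mean absolute
    difference between neighbouring grid cells). Illustrative only."""
    g = np.asarray(anomaly_grid, dtype=float)
    std = g.std()
    roughness = 0.5 * (np.abs(np.diff(g, axis=0)).mean()
                       + np.abs(np.diff(g, axis=1)).mean())
    return std, roughness

# toy 3x3 tile of anomaly values (mGal)
print(map_characteristics([[1.0, 2.0, 4.0],
                           [1.5, 2.5, 4.5],
                           [2.0, 3.0, 5.0]]))
```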
Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar
2015-01-01
Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non-diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aimed to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. DKI was performed in patients with Parkinson's disease dementia (PDD) and healthy age-matched controls, using b-values of up to 2750 s/mm². The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. We recommend that conventional motion and eddy-current correction be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references.
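The extrapolation-based reference idea can be sketched with a per-voxel log-linear (monoexponential) fit to the low b-value volumes, from which a synthetic high b-value reference is predicted for registration. The published method may use a richer signal model, so this is only illustrative; the array names are assumptions.

```python
# Minimal sketch of extrapolation-based reference volumes: fit
# ln S = ln S0 - b*D per voxel from low b-value volumes and predict a
# synthetic reference volume at a high target b-value.
import numpy as np

def extrapolated_reference(low_b_vols, low_bvals, target_b):
    """low_b_vols: hypothetical 4-D array (x, y, z, n_lowb); low_bvals: list."""
    S = np.clip(low_b_vols, 1e-6, None)
    logS = np.log(S).reshape(-1, len(low_bvals))            # voxels x b-values
    A = np.column_stack([np.ones(len(low_bvals)),
                         -np.asarray(low_bvals, dtype=float)])
    coef, *_ = np.linalg.lstsq(A, logS.T, rcond=None)       # [ln S0, D] per voxel
    lnS0, D = coef
    ref = np.exp(lnS0 - target_b * D)
    return ref.reshape(low_b_vols.shape[:3])

# ref_b2750 = extrapolated_reference(low_b_vols, [0, 500, 1000], target_b=2750)
```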
Error and Uncertainty in the Accuracy Assessment of Land Cover Maps
NASA Astrophysics Data System (ADS)
Sarmento, Pedro Alexandre Reis
Traditionally, the accuracy assessment of land cover maps is performed by comparing these maps with a reference database intended to represent the "real" land cover, and this comparison is reported through thematic accuracy measures derived from confusion matrices. However, these reference databases are themselves a representation of reality and contain errors due to human uncertainty in assigning the land cover class that best characterizes a given area, which biases the thematic accuracy measures reported to the end users of these maps. The main goal of this dissertation is to develop a methodology that integrates the human uncertainty present in reference databases into the accuracy assessment of land cover maps, and to analyse the impact that this uncertainty may have on the thematic accuracy measures reported to the end users of land cover maps. The utility of including human uncertainty in the accuracy assessment of land cover maps is investigated. Specifically, we study the utility of fuzzy set theory, more precisely fuzzy arithmetic, for a better understanding of the human uncertainty associated with the elaboration of reference databases and of its impact on the thematic accuracy measures derived from confusion matrices. For this purpose, linguistic values transformed into fuzzy intervals that capture the uncertainty in the elaboration of reference databases were used to compute fuzzy confusion matrices. The proposed methodology is illustrated with a case study assessing the accuracy of a land cover map of Continental Portugal derived from Medium Resolution Imaging Spectrometer (MERIS) imagery. The results demonstrate that including human uncertainty in reference databases provides much more information about the quality of land cover maps than the traditional approach to accuracy assessment.
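A minimal way to see how reference-label uncertainty propagates into accuracy measures is interval arithmetic on the confusion-matrix cells; this is a simplified stand-in for the fuzzy arithmetic used in the dissertation, and the example matrices are hypothetical.

```python
# Minimal sketch of propagating reference-label uncertainty through a
# confusion matrix: each cell is bounded by a (low, high) interval and the
# overall accuracy becomes an interval rather than a single number.
import numpy as np

def interval_overall_accuracy(cm_low, cm_high):
    """cm_low/cm_high: k x k arrays bounding each confusion-matrix cell."""
    diag_low, diag_high = np.trace(cm_low), np.trace(cm_high)
    off_high = cm_high.sum() - diag_high     # most possible disagreements
    off_low = cm_low.sum() - diag_low        # fewest possible disagreements
    oa_low = diag_low / (diag_low + off_high)
    oa_high = diag_high / (diag_high + off_low)
    return oa_low, oa_high

# cm_low = np.array([[80, 5], [4, 60]]); cm_high = np.array([[95, 12], [9, 70]])
# print(interval_overall_accuracy(cm_low, cm_high))
```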
Wright, Alexis A; Wassinger, Craig A; Frank, Mason; Michener, Lori A; Hegedus, Eric J
2013-09-01
To systematically review and critique the evidence regarding the diagnostic accuracy of physical examination tests for the scapula in patients with shoulder disorders. A systematic, computerised literature search of PubMED, EMBASE, CINAHL and the Cochrane Library databases (from database inception through January 2012) using keywords related to diagnostic accuracy of physical examination tests of the scapula. The Quality Assessment of Diagnostic Accuracy Studies tool was used to critique the quality of each paper. Eight articles met the inclusion criteria; three were considered to be of high quality. Of the three high-quality studies, two were in reference to a 'diagnosis' of shoulder pain. Only one high-quality article referenced specific shoulder pathology of acromioclavicular dislocation with reported sensitivity of 71% and 41% for the scapular dyskinesis and SICK scapula test, respectively. Overall, no physical examination test of the scapula was found to be useful in differentially diagnosing pathologies of the shoulder.
Kawaguchi, Migaku; Takatsu, Akiko
2009-08-01
A candidate reference measurement procedure involving isotope dilution coupled with gas chromatography-mass spectrometry (GC-MS) has been developed and critically evaluated. An isotopically labeled internal standard, cortisol-d(2), was added to a serum sample. After equilibration, solid-phase extraction (SPE) for sample preparation and derivatization with heptafluorobutyric anhydride (HFBA) were performed for GC-MS analysis. The limit of detection (LOD) and the limit of quantification (LOQ) were 5 and 20 ng g(-1), respectively. The recovery of the added cortisol ranged from 99.8 to 101.0%. Excellent precision was obtained, with a within-day variation (RSD) of 0.7% for GC-MS analysis. The accuracy of the measurement was evaluated by comparing the results of this reference measurement procedure on lyophilized human serum reference materials for cortisol (European Reference Materials (ERM)-DA 192) as Certified Reference Materials (CRMs). The results of this method for total cortisol agreed with the certified values within the stated uncertainty. This method is simple and easy to perform, offers good accuracy and high precision, is free from interference from structural analogues, and therefore qualifies as a reference measurement procedure.
Pembleton, Luke W; Inch, Courtney; Baillie, Rebecca C; Drayton, Michelle C; Thakur, Preeti; Ogaji, Yvonne O; Spangenberg, German C; Forster, John W; Daetwyler, Hans D; Cogan, Noel O I
2018-06-02
Exploitation of data from a ryegrass breeding program has enabled rapid development and implementation of genomic selection for sward-based biomass yield with a twofold-to-threefold increase in genetic gain. Genomic selection, which uses genome-wide sequence polymorphism data and quantitative genetics techniques to predict plant performance, has large potential for the improvement of pasture plants. Major factors influencing the accuracy of genomic selection include the size of reference populations, trait heritability values and the genetic diversity of breeding populations. Global diversity of the important forage species perennial ryegrass is high and so would require a large reference population in order to achieve moderate accuracies of genomic selection. However, diversity of germplasm within a breeding program is likely to be lower. In addition, de novo construction and characterisation of reference populations is a logistically complex process. Consequently, historical phenotypic records for seasonal biomass yield and heading date over an 18-year period within a commercial perennial ryegrass breeding program have been accessed, and target populations have been characterised with a high-density transcriptome-based genotyping-by-sequencing assay. The ability to predict observed phenotypic performance in each successive year was assessed by using all synthetic populations from previous years as a reference population. Moderate and high accuracies were achieved for the two traits, respectively, consistent with broad-sense heritability values. The present study represents the first demonstration and validation of genomic selection for seasonal biomass yield within a diverse commercial breeding program across multiple years. These results, supported by previous simulation studies, demonstrate the ability to predict sward-based phenotypic performance early in the process of individual plant selection, so shortening the breeding cycle, increasing the rate of genetic gain and allowing rapid adoption in ryegrass improvement programs.
Validation assessment of shoreline extraction on medium resolution satellite image
NASA Astrophysics Data System (ADS)
Manaf, Syaifulnizam Abd; Mustapha, Norwati; Sulaiman, Md Nasir; Husin, Nor Azura; Shafri, Helmi Zulhaidi Mohd
2017-10-01
Monitoring coastal zones helps provide information about the conditions of the coastal zones, such as erosion or accretion. Moreover, monitoring the shorelines can help measure the severity of such conditions. Such measurement can be performed accurately by using Earth observation satellite images rather than traditional ground surveys. To date, shorelines can be extracted from satellite images with a high degree of accuracy by using satellite image classification techniques based on machine learning to identify the land and water classes of the shorelines. In this study, the researchers validated the shorelines extracted by 11 classifiers against a reference shoreline provided by the local authority. Specifically, the validation assessment was performed to examine the difference between the extracted shorelines and the reference shoreline. The research findings showed that the linear SVM was the most effective image classification technique, as evidenced by the lowest mean distance between the extracted shoreline and the reference shoreline. Furthermore, the findings showed that the accuracy of the extracted shoreline was not directly proportional to the accuracy of the image classification.
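The validation metric described above, the mean distance from the extracted shoreline to the reference shoreline, can be sketched as follows. The point arrays and the vertex-to-vertex distance are simplifying assumptions; a production check would measure distance to reference line segments.

```python
# Minimal sketch of the validation metric: mean distance from extracted
# shoreline points to the nearest reference shoreline point.
import numpy as np
from scipy.spatial import cKDTree

def mean_shoreline_distance(extracted_pts, reference_pts):
    """Both inputs: (N, 2) arrays of map coordinates in metres."""
    tree = cKDTree(reference_pts)
    dists, _ = tree.query(extracted_pts)
    return dists.mean()

# d = mean_shoreline_distance(svm_shoreline_xy, reference_shoreline_xy)
```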
Using Vision Metrology System for Quality Control in Automotive Industries
NASA Astrophysics Data System (ADS)
Mostofi, N.; Samadzadegan, F.; Roohy, Sh.; Nozari, M.
2012-07-01
The need for more accurate measurements at different stages of industrial applications, such as design, production and installation, is the main reason industry has adopted industrial photogrammetry (vision metrology systems). Given the main advantages of photogrammetric methods, such as greater economy, a high level of automation, non-contact measurement capability, flexibility and high accuracy, the approach competes well with traditional industrial measurement methods. For industries that manufacture objects from a main reference model without any mathematical model of it, the principal problem for producers is evaluating the production line. The problem is complicated when both the reference and the product exist only as physical objects, so that they can be compared only by direct measurement. In such cases, producers build fixtures that fit the reference with limited accuracy; in practice, the achievable precision is sometimes no better than millimetres. We used a non-metric, high-resolution digital camera for this investigation, and the case study presented in this paper is an automobile chassis. A stable photogrammetric network was designed for measuring the industrial object (both reference and product), and the differences between the reference and product objects were then obtained using bundle adjustment and self-calibration methods. These differences help the producer improve the production workflow and deliver more accurate products. The results demonstrate the high potential of the proposed method in industrial applications and prove its efficiency and reliability in terms of RMSE: the RMSE achieved for this case study is smaller than 200 microns, confirming the capability of the implemented approach.
SIRGAS: the core geodetic infrastructure in Latin America and the Caribbean
NASA Astrophysics Data System (ADS)
Sanchez, L.; Brunini, C.; Drewes, H.; Mackern, V.; da Silva, A.
2013-05-01
Studying, understanding, and modelling geophysical phenomena, such as global change and geodynamics, require geodetic reference frames with (1) an order of accuracy higher than the magnitude of the effects we want to study, (2) consistency and reliability worldwide (the same accuracy everywhere), and (3) long-term stability (the same order of accuracy at any time). The definition, realisation, maintenance, and wide utilisation of the International Terrestrial Reference System (ITRS) are oriented to guarantee a globally unified geometric reference frame with reliability at the mm level, i.e. the International Terrestrial Reference Frame (ITRF). The densification of the global ITRF in Latin America and the Caribbean is given by SIRGAS (Sistema de Referencia Geocéntrico para Las Américas), whose primary objective is to provide the most precise coordinates in the region. Therefore, SIRGAS is the backbone for all regional projects based on the generation, use, and analysis of geo-referenced data at national as well as international level. Besides providing the reference for a wide range of scientific applications such as the monitoring of Earth's crust deformations, vertical movements, sea level variations, atmospheric studies, etc., SIRGAS is also the platform for practical applications such as engineering projects, digital administration of geographical data, geospatial data infrastructures, etc. Accordingly, the present contribution describes the main features of SIRGAS, paying special attention to the challenges faced in continuing to provide the best possible, long-term stable and highly precise reference frame for Latin America and the Caribbean.
NASA Astrophysics Data System (ADS)
Nahmani, S.; Coulot, D.; Biancale, R.; Bizouard, C.; Bonnefond, P.; Bouquillon, S.; Collilieux, X.; Deleflie, F.; Garayt, B.; Lambert, S. B.; Laurent-Varin, S.; Marty, J. C.; Mercier, F.; Metivier, L.; Meyssignac, B.; Pollet, A.; Rebischung, P.; Reinquin, F.; Richard, J. Y.; Tertre, F.; Woppelmann, G.
2017-12-01
Many major indicators of climate change are monitored with space observations. This monitoring is highly dependent on references that only geodesy can provide. The current accuracy of these references is not sufficient to fully support the challenges raised by the constantly evolving Earth system, and can consequently limit the accuracy of these indicators. Thus, in the framework of the GGOS, stringent requirements are set for the International Terrestrial Reference Frame (ITRF) for the next decade: an accuracy at the level of 1 mm and a stability at the level of 0.1 mm/yr. This means an improvement of the current quality of the ITRF by a factor of 5-10. Improving the quality of the geodetic references is an issue which requires a thorough reassessment of the methodologies involved. The most relevant and promising method to improve this quality is the direct combination of the space-geodetic measurements used to compute the official references of the IERS. The GEODESIE project aims at (i) determining highly accurate global and consistent references and (ii) providing the geophysical and climate research communities with these references, for a better estimation of geocentric sea level rise, ice mass balance and on-going climate changes. Time series of sea levels computed from altimetric data and tide gauge records with these references will also be provided. The geodetic references will be essential bases for Earth observation and monitoring to support the challenges of the century. The geocentric time series of sea levels will make it possible to better understand (i) the drivers of the global mean sea level rise and of regional variations of sea level and (ii) the contribution of the global climate change induced by anthropogenic greenhouse gas emissions to these drivers. All the results and computation and quality assessment reports will be available at geodesie_anr.ign.fr. This project, supported by the French Agence Nationale de la Recherche (ANR) for the period 2017-2020, will be an unprecedented opportunity to provide the French Groupe de Recherche de Géodésie Spatiale (GRGS) with complete simulation and data processing capabilities to prepare for the future arrival of space missions such as the European Geodetic Reference Antenna in SPace (E-GRASP) and to significantly contribute to the GGOS with accurate references.
NASA Astrophysics Data System (ADS)
Krzan, Grzegorz; Stępniak, Katarzyna
2017-09-01
In high-accuracy positioning using GNSS, the most common solution is still relative positioning using double-difference observations of dual-frequency measurements. An increasingly popular alternative to relative positioning is the use of undifferenced approaches, which are designed to make full use of modern satellite systems and signals. Positions referenced to the global International Terrestrial Reference Frame (ITRF2008) obtained from Precise Point Positioning (PPP) or Undifferenced (UD) network solutions have to be transformed to a national (regional) reference frame, which introduces additional biases related to the transformation process. In this paper, satellite observations from two test networks using different observation time series were processed. The first test concerns the positioning accuracy from processing one year of dual-frequency GPS observations from 14 EUREF Permanent Network (EPN) stations using NAPEOS 3.3.1 software. The results were transformed into a national reference frame (PL-ETRF2000) and compared to positions from an EPN cumulative solution, which was adopted as the true coordinates. Daily observations were processed using PPP and UD multi-station solutions to determine the final accuracy resulting from satellite positioning, the transformation to national coordinate systems and Eurasian intraplate velocities. The second numerical test involved similar post-processing strategies carried out using different observation time series (30 min, 1 hour, 2 hours, daily) and different classes of GNSS receivers. The centimeter accuracy of the results presented in the national coordinate system satisfies the requirements of many surveying and engineering applications.
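The frame transformation step mentioned above is commonly carried out with a seven-parameter (Helmert) similarity transformation. The small-angle sketch below is generic: the parameter values and sign convention are placeholders, not the official ITRF2008 to PL-ETRF2000 set used in the study.

```python
# Generic small-angle seven-parameter (Helmert) similarity transformation
# applied to Cartesian coordinates. Parameter values in the example call
# are placeholders only.
import numpy as np

def helmert_transform(xyz, tx, ty, tz, scale_ppb, rx_mas, ry_mas, rz_mas):
    """xyz: (N, 3) metres; translations in metres; scale in ppb; rotations in mas."""
    mas = np.pi / 180.0 / 3600.0 / 1000.0          # milliarcseconds -> radians
    rx, ry, rz = rx_mas * mas, ry_mas * mas, rz_mas * mas
    d = scale_ppb * 1e-9
    M = np.array([[  d, -rz,  ry],                 # combined scale + rotation
                  [ rz,   d, -rx],
                  [-ry,  rx,   d]])
    t = np.array([tx, ty, tz])
    return xyz + xyz @ M.T + t

# xyz_national = helmert_transform(xyz_itrf, 0.05, 0.03, -0.04, 1.2, 0.5, -0.3, 0.8)
```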
Accuracy of five intraoral scanners compared to indirect digitalization.
Güth, Jan-Frederik; Runkel, Cornelius; Beuer, Florian; Stimmelmayr, Michael; Edelhoff, Daniel; Keul, Christine
2017-06-01
Direct and indirect digitalization offer two options for computer-aided design (CAD)/computer-aided manufacturing (CAM)-generated restorations. The aim of this study was to evaluate the accuracy of different intraoral scanners and compare them to the process of indirect digitalization. A titanium testing model was directly digitized 12 times with each intraoral scanner: (1) CS 3500 (CS), (2) Zfx Intrascan (ZFX), (3) CEREC AC Bluecam (BLU), (4) CEREC AC Omnicam (OC) and (5) True Definition (TD). As a control, 12 polyether impressions were taken and the corresponding plaster casts were digitized indirectly with the D-810 laboratory scanner (CON). The accuracy (trueness/precision) of the datasets was evaluated with analysis software (Geomagic Qualify 12.1) using a "best fit alignment" of the datasets with a highly accurate reference dataset of the testing model, obtained from industrial computed tomography. Direct digitalization using the TD showed the significantly highest overall "trueness", followed by CS. Both performed better than CON. BLU, ZFX and OC showed higher differences from the reference dataset than CON. Regarding the overall "precision", the CS 3500 intraoral scanner and the True Definition showed the best performance. CON, BLU and OC resulted in significantly higher precision than ZFX did. Within the limitations of this in vitro study, the accuracy of the ascertained datasets was dependent on the scanning system. Direct digitalization was not superior to indirect digitalization for all tested systems. Regarding accuracy, all tested intraoral scanning technologies seem able to reproduce a single quadrant within clinically acceptable accuracy. However, differences were detected between the tested systems.
2003-01-01
Data are not readily available on the accuracy of one of the most commonly used home blood glucose meters, the One Touch Ultra (LifeScan, Milpitas, California). The purpose of this report is to provide information on the accuracy of this home glucose meter in children with type 1 diabetes. During a 24-h clinical research center stay, the accuracy of the Ultra meter was assessed in 91 children, 3-17 years old, with type 1 diabetes by comparing the Ultra glucose values with concurrent reference serum glucose values measured in a central laboratory. The Pearson correlation between the 2,068 paired Ultra and reference values was 0.97, with the median relative absolute difference being 6%. Ninety-four percent of all Ultra values (96% of venous and 84% of capillary samples) met the proposed International Organisation for Standardisation (ISO) standard for instruments used for self-monitoring of glucose when compared with venous reference values. Ninety-nine percent of values were in zones A + B of the Modified Error Grid. A high degree of accuracy was seen across the full range of glucose values. For 353 data points during an insulin-induced hypoglycemia test, the Ultra meter was found to have accuracy that was comparable to concurrently used benchmark instruments (Beckman, YSI, or i-STAT); 95% and 96% of readings from the Ultra meter and the benchmark instruments met the proposed ISO criteria, respectively. These results confirm that the One Touch Ultra meter provides accurate glucose measurements for both hypoglycemia and hyperglycemia in children with type 1 diabetes.
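The two summary statistics quoted above, the median relative absolute difference and the percentage of readings meeting an ISO-style limit, can be computed as in the sketch below. The ±15 mg/dL (below 75 mg/dL) and ±20% thresholds are stated as an assumption about the proposed ISO criterion referenced in the abstract.

```python
# Minimal sketch of meter-accuracy summary statistics against paired
# reference values (mg/dL). Threshold values are assumptions.
import numpy as np

def median_rad(meter, reference):
    """Median relative absolute difference."""
    return np.median(np.abs(meter - reference) / reference)

def pct_within_iso(meter, reference, abs_tol=15.0, rel_tol=0.20, cutoff=75.0):
    ok_low = (reference < cutoff) & (np.abs(meter - reference) <= abs_tol)
    ok_high = (reference >= cutoff) & (np.abs(meter - reference) <= rel_tol * reference)
    return 100.0 * np.mean(ok_low | ok_high)

# mrad = median_rad(ultra_values, lab_values)
# pct = pct_within_iso(ultra_values, lab_values)
```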
Wang, Lin; Hui, Stanley Sai-chuen
2015-08-20
Various body weight and height-based references are used to define obese children and adolescents. However, no study investigating the diagnostic accuracies of the definitions of obesity and overweight in Hong Kong Chinese children and adolescents has been conducted. The current study aims to investigate the diagnostic accuracy of BMI-based definitions and the 1993 HK reference in screening excess body fat among Hong Kong Chinese children and adolescents. A total of 2,134 participants (1,135 boys and 999 girls) were recruited from local schools. A foot-to-foot bioelectrical impedance analysis (BIA) scale was applied to assess percentage body fat (%BF) using standard methods. The criterion of childhood obesity (i.e., overfat) was defined as over 25 %BF for boys and over 30 %BF for girls. Childhood obesity was also determined from four BMI-based references and the 1993 HK reference. The diagnostic accuracy of these existing definitions for childhood obesity in screening excess body fat was evaluated using diagnostic indices. Overall, %BF was significantly correlated with anthropometry measurements in both genders (in boys, r = 0.747 for BMI and 0.766 for PWH; in girls, r = 0.930 for BMI and 0.851 for PWH). The prevalence rates of overweight and obesity determined by BMI-based references were similar to the prevalence rates of obesity in the 1993 HK reference in both genders. All definitions for childhood obesity showed low sensitivity (in boys, 0.325-0.761; in girls, 0.128-0.588) in detecting overfat. Specificities were high for cut-offs among all definitions for childhood obesity (in boys, 0.862-0.980; in girls, 0.973-0.998). In conclusion, prevalence rates of childhood obesity or overweight varied widely according to the diagnostic references applied. The diagnostic performance of weight and height-based references for obesity is poorer than expected for both genders among Hong Kong Chinese children and adolescents. In order to improve the diagnostic accuracy of childhood obesity, either the cut-off values of body weight and height-based definitions of childhood obesity should be revised to increase sensitivity, or the possibility of using other indirect methods of estimating %BF should be explored.
McIntyre, Timothy J.
1994-01-01
A system and method for generating a desired displacement of an object, i.e., a target, from a reference position with ultra-high accuracy utilizes a Fabry-Perot etalon having an expandable tube cavity for resolving, with an Iodine stabilized laser, displacements with high accuracy and for effecting (as an actuator) displacements of the target. A mechanical amplifier in the form of a micropositioning stage has a platform and a frame which are movable relative to one another, and the tube cavity of the etalon is connected between the platform and frame so that an adjustment in length of the cavity effects a corresponding, amplified movement of the frame relative to the cavity. Therefore, in order to provide a preselected magnitude of displacement of the stage frame relative to the platform, the etalon tube cavity is adjusted in length by a corresponding amount. The system and method are particularly well-suited for use when calibrating a high accuracy measuring device.
Fusar-Poli, Paolo; Cappucciati, Marco; Rutigliano, Grazia; Schultze-Lutter, Frauke; Bonoldi, Ilaria; Borgwardt, Stefan; Riecher-Rössler, Anita; Addington, Jean; Perkins, Diana; Woods, Scott W; McGlashan, Thomas H; Lee, Jimmy; Klosterkötter, Joachim; Yung, Alison R; McGuire, Philip
2015-01-01
An accurate detection of individuals at clinical high risk (CHR) for psychosis is a prerequisite for effective preventive interventions. Several psychometric interviews are available, but their prognostic accuracy is unknown. We conducted a prognostic accuracy meta-analysis of psychometric interviews used to examine referrals to high risk services. The index test was an established CHR psychometric instrument used to identify subjects with and without CHR (CHR+ and CHR−). The reference index was psychosis onset over time in both CHR+ and CHR− subjects. Data were analyzed with MIDAS (STATA13). Area under the curve (AUC), summary receiver operating characteristic curves, quality assessment, likelihood ratios, Fagan's nomogram and probability modified plots were computed. Eleven independent studies were included, with a total of 2,519 help-seeking, predominately adult subjects (CHR+: N=1,359; CHR−: N=1,160) referred to high risk services. The mean follow-up duration was 38 months. The AUC was excellent (0.90; 95% CI: 0.87-0.93), and comparable to other tests in preventive medicine, suggesting clinical utility in subjects referred to high risk services. Meta-regression analyses revealed an effect for exposure to antipsychotics and no effects for type of instrument, age, gender, follow-up time, sample size, quality assessment, or proportion of CHR+ subjects in the total sample. Fagan's nomogram indicated a low positive predictive value (5.74%) in the general non-help-seeking population. Despite the clear need to further improve prediction of psychosis, these findings support the use of psychometric prognostic interviews for CHR as clinical tools for an indicated prevention in subjects seeking help at high risk services worldwide. PMID:26407788
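The low positive predictive value quoted for the general population follows from a Fagan's-nomogram style calculation: convert the pretest probability to odds, multiply by the likelihood ratio, and convert back. The numbers in the example call are placeholders, not the meta-analysis estimates.

```python
# Minimal sketch of the Fagan's-nomogram calculation: post-test
# probability from pretest probability and a likelihood ratio.
def post_test_probability(pretest_prob, likelihood_ratio):
    pre_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# e.g. post_test_probability(0.01, 5.0) -> ~0.048, i.e. a PPV of ~5% when the
# pretest (population) risk is 1% and the positive likelihood ratio is 5.
```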
A Dynamic Precision Evaluation Method for the Star Sensor in the Stellar-Inertial Navigation System.
Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang
2017-06-28
Integrating the advantages of INS (inertial navigation system) and the star sensor, the stellar-inertial navigation system has been used for a wide variety of applications. The star sensor is a high-precision attitude measurement instrument; therefore, determining how to validate its accuracy is critical in guaranteeing its practical precision. The dynamic precision evaluation of the star sensor is more difficult than a static precision evaluation because of dynamic reference values and other impacts. This paper proposes a dynamic precision verification method for the star sensor, with the aid of an inertial navigation device, to realize real-time attitude accuracy measurement. Based on the gold-standard reference generated by the star simulator, the altitude and azimuth angle errors of the star sensor are calculated as evaluation criteria. With the goal of diminishing the impacts of factors such as sensor drift and device effects, the innovative aspect of this method is to employ static accuracy for comparison. If the dynamic results are as good as the static results, which have accuracy comparable to the single star sensor's precision, the practical precision of the star sensor is sufficiently high to meet the requirements of the system specification. The experiments demonstrate the feasibility and effectiveness of the proposed method.
Matsunami, Risë K; Angelides, Kimon; Engler, David A
2015-05-18
There is currently considerable discussion about the accuracy of blood glucose concentrations determined by personal blood glucose monitoring systems (BGMS). To date, the FDA has allowed new BGMS to demonstrate accuracy in reference to other glucose measurement systems that use the same or similar enzymatic-based methods to determine glucose concentration. These types of reference measurement procedures are only comparative in nature and are subject to the same potential sources of error in measurement and system perturbations as the device under evaluation. It would be ideal to have a completely orthogonal primary method that could serve as a true standard reference measurement procedure for establishing the accuracy of new BGMS. An isotope-dilution liquid chromatography/mass spectrometry (ID-UPLC-MRM) assay was developed using (13)C6-glucose as a stable isotope analogue to specifically measure glucose concentration in human plasma, and validated for use against NIST standard reference materials, and against fresh isolates of whole blood and plasma into which exogenous glucose had been spiked. Assay performance was quantified to NIST-traceable dry weight measures for both glucose and (13)C6-glucose. The newly developed assay method was shown to be rapid, highly specific, sensitive, accurate, and precise for measuring plasma glucose levels. The assay displayed sufficient dynamic range and linearity to measure across the range of both normal and diabetic blood glucose levels. Assay performance was measured to within the same uncertainty levels (<1%) as the NIST definitive method for glucose measurement in human serum. The newly developed ID UPLC-MRM assay can serve as a validated reference measurement procedure to which new BGMS can be assessed for glucose measurement performance. © 2015 Diabetes Technology Society.
NASA Astrophysics Data System (ADS)
Howat, I.; Noh, M. J.; Porter, C. C.; Smith, B. E.; Morin, P. J.
2017-12-01
We are creating the Reference Elevation Model of Antarctica (REMA), a continuous, high resolution (2-8 m), high precision (accuracy better than 1 m) reference surface for a wide range of glaciological and geodetic applications. REMA will be constructed from stereo-photogrammetric Digital Surface Models (DSM) extracted from pairs of submeter resolution DigitalGlobe satellite imagery and vertically registered to precise elevations from near-coincident airborne LiDAR, ground-based GPS surveys and Cryosat-2 radar altimetry. Both a seamless mosaic and individual, time-stamped DSM strips, collected primarily between 2012 and 2016, will be distributed to enable change measurement. These data will be used for mapping bed topography from ice thickness, measuring ice thickness changes, constraining ice flow and geodynamic models, mapping glacial geomorphology, terrain corrections and filtering of remote sensing observations, and many other science tasks. It will also be critical for mapping ice traverse routes, landing sites and other field logistics planning. REMA will also provide a critical elevation benchmark for future satellite altimetry missions including ICESat-2. Here we report on REMA production progress, initial accuracy assessment and data availability.
NASA Technical Reports Server (NTRS)
Craft, D. William
1992-01-01
A facility for the precise calibration of mass fuel flowmeters and turbine flowmeters located at AMETEK Aerospace Products Inc., Wilmington, Massachusetts, is described. This facility is referred to as the Test and Calibration System (TACS). It is believed to be the most accurate test facility available for the calibration of jet engine fuel density measurement. The product of the volumetric flow rate measurement and the density measurement results in a true mass flow rate determination. A dual-turbine flowmeter was designed during this program. The dual-turbine flowmeter was calibrated on the TACS to show the characteristics of this type of flowmeter. An angular momentum flowmeter was also calibrated on the TACS to demonstrate the accuracy of a true mass flowmeter having a 'state-of-the-art' design accuracy.
Saini, V.; Riekerink, R. G. M. Olde; McClure, J. T.; Barkema, H. W.
2011-01-01
Determining the accuracy and precision of a measuring instrument is pertinent in antimicrobial susceptibility testing. This study was conducted to predict the diagnostic accuracy of the Sensititre MIC mastitis panel (Sensititre) and agar disk diffusion (ADD) method with reference to the manual broth microdilution test method for antimicrobial resistance profiling of Escherichia coli (n = 156), Staphylococcus aureus (n = 154), streptococcal (n = 116), and enterococcal (n = 31) bovine clinical mastitis isolates. The activities of ampicillin, ceftiofur, cephalothin, erythromycin, oxacillin, penicillin, the penicillin-novobiocin combination, pirlimycin, and tetracycline were tested against the isolates. Diagnostic accuracy was determined by estimating the area under the receiver operating characteristic curve; intertest essential and categorical agreements were determined as well. Sensititre and the ADD method demonstrated moderate to highly accurate (71 to 99%) and moderate to perfect (71 to 100%) predictive accuracies for 74 and 76% of the isolate-antimicrobial MIC combinations, respectively. However, the diagnostic accuracy was low for S. aureus-ceftiofur/oxacillin combinations and other streptococcus-ampicillin combinations by either testing method. Essential agreement between Sensititre automatic MIC readings and MIC readings obtained by the broth microdilution test method was 87%. Essential agreement between Sensititre automatic and manual MIC reading methods was 97%. Furthermore, the ADD test method and Sensititre MIC method exhibited 92 and 91% categorical agreement (sensitive, intermediate, resistant) of results, respectively, compared with the reference method. However, both methods demonstrated lower agreement for E. coli-ampicillin/cephalothin combinations than for Gram-positive isolates. In conclusion, the Sensititre and ADD methods had moderate to high diagnostic accuracy and very good essential and categorical agreement for most udder pathogen-antimicrobial combinations and can be readily employed in veterinary diagnostic laboratories. PMID:21270215
Aithal, Venkatesh; Kei, Joseph; Driscoll, Carlie; Murakoshi, Michio; Wada, Hiroshi
2018-02-01
Diagnosing conductive conditions in newborns is challenging for both audiologists and otolaryngologists. Although high-frequency tympanometry (HFT), acoustic stapedial reflex tests, and wideband absorbance measures are useful diagnostic tools, there is performance measure variability in their detection of middle ear conditions. Additional diagnostic sensitivity and specificity measures gained through new technology such as sweep frequency impedance (SFI) measures may assist in the diagnosis of middle ear dysfunction in newborns. The purpose of this study was to determine the test performance of SFI to predict the status of the outer and middle ear in newborns against commonly used reference standards. Automated auditory brainstem response (AABR), HFT (1000 Hz), transient evoked otoacoustic emission (TEOAE), distortion product otoacoustic emission (DPOAE), and SFI tests were administered to the study sample. A total of 188 neonates (98 males and 90 females) with a mean gestational age of 39.4 weeks were included in the sample. Mean age at the time of testing was 44.4 hr. Diagnostic accuracy of SFI was assessed in terms of its ability to identify conductive conditions in neonates when compared with nine different reference standards (including four single tests [AABR, HFT, TEOAE, and DPOAE] and five test batteries [HFT + DPOAE, HFT + TEOAE, DPOAE + TEOAE, DPOAE + AABR, and TEOAE + AABR]), using receiver operating characteristic (ROC) analysis and traditional test performance measures such as sensitivity and specificity. The test performance of SFI against the test battery reference standard of HFT + DPOAE and the single reference standard of HFT was high, with an area under the ROC curve (AROC) of 0.87 and 0.82, respectively. Although the HFT + DPOAE test battery reference standard performed better than the HFT reference standard in predicting middle ear conductive conditions in neonates, the difference in AROC was not significant. Further analysis revealed that the highest sensitivity and specificity for SFI (86% and 88%, respectively) were obtained when compared with the reference standard of HFT + DPOAE. Among the four single reference standards, SFI had the highest sensitivity and specificity (76% and 88%, respectively) when compared against the HFT reference standard. The high test performance of SFI against the HFT and HFT + DPOAE reference standards indicates that the SFI measure has appropriate diagnostic accuracy in detection of conductive conditions in newborns. Hence, the SFI test could be used as an adjunct tool to identify conductive conditions in universal newborn hearing screening programs, and can also be used in diagnostic follow-up assessments. American Academy of Audiology
Gurney, J C; Ansari, E; Harle, D; O'Kane, N; Sagar, R V; Dunne, M C M
2018-02-09
To determine the accuracy of a Bayesian learning scheme (Bayes') applied to the prediction of clinical decisions made by specialist optometrists in relation to the referral refinement of chronic open angle glaucoma. This cross-sectional observational study involved collection of data from the worst affected or right eyes of a consecutive sample of cases (n = 1,006) referred into the West Kent Clinical Commissioning Group Community Ophthalmology Team (COT) by high street optometrists. Multilevel classification of each case was based on race, sex, age, family history of chronic open angle glaucoma, reason for referral, Goldmann Applanation Tonometry (intraocular pressure and interocular asymmetry), optic nerve head assessment (vertical size, cup disc ratio and interocular asymmetry), central corneal thickness and visual field analysis (Hodapp-Parrish-Anderson classification). Randomised stratified tenfold cross-validation was applied to determine the accuracy of Bayes' by comparing its output to the clinical decisions of three COT specialist optometrists; namely, the decision to discharge, follow-up or refer each case. Outcomes of cross-validation, expressed as means and standard deviations, showed that the accuracy of Bayes' was high (95%, 2.0%) but that it falsely discharged (3.4%, 1.6%) or referred (3.1%, 1.5%) some cases. The results indicate that Bayes' has the potential to augment the decisions of specialist optometrists.
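A minimal sketch of the evaluation design, a naive Bayes classifier assessed with randomised stratified tenfold cross-validation, is given below using scikit-learn. The feature matrix, labels and the choice of GaussianNB are assumptions and may differ from the study's actual Bayesian scheme.

```python
# Minimal sketch: naive Bayes classification accuracy estimated with
# randomised stratified tenfold cross-validation.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import StratifiedKFold, cross_val_score

def cv_accuracy(X, y, folds=10, seed=1):
    """X: hypothetical case features; y: discharge / follow-up / refer labels."""
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed)
    scores = cross_val_score(GaussianNB(), X, y, cv=skf, scoring="accuracy")
    return scores.mean(), scores.std()

# mean_acc, sd_acc = cv_accuracy(X, y)
```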
Accuracy of references and quotations in veterinary journals.
Hinchcliff, K W; Bruce, N J; Powers, J D; Kipp, M L
1993-02-01
The accuracy of references and quotations used to substantiate statements of fact in articles published in 6 frequently cited veterinary journals was examined. Three hundred references were randomly selected, and the accuracy of each citation was examined. A subset of 100 references was examined for quotational accuracy; i.e., the accuracy with which authors represented the work or assertions of the author being cited. Of the 300 references selected, 295 were located, and 125 major errors were found in 88 (29.8%) of them. Sixty-seven (53.6%) major errors were found involving authors, 12 (9.6%) involved the article title, 14 (11.2%) involved the book or journal title, and 32 (25.6%) involved the volume number, date, or page numbers. Sixty-eight minor errors were detected. The accuracy of 111 quotations from 95 citations in 65 articles was examined. Nine quotations were technical and not classified, 86 (84.3%) were classified as correct, 2 (1.9%) contained minor misquotations, and 14 (13.7%) contained major misquotations. We concluded that misquotations and errors in citations occur frequently in veterinary journals, but at a rate similar to that reported for other biomedical journals.
NASA Astrophysics Data System (ADS)
Józwik, Michal; Mikuła, Marta; Kozacki, Tomasz; Kostencka, Julianna; Gorecki, Christophe
2017-06-01
In this contribution, we propose a method of digital holographic microscopy (DHM) that enables measurement of high numerical aperture spherical and aspherical microstructures of both concave and convex shapes. The proposed method utilizes reflection of the spherical illumination beam from the object surface and its interference with a spherical reference beam of similar curvature. In this case, the NA of the DHM is fully utilized for illumination and imaging of the reflected object beam. Thus, the system allows capturing the phase coming from larger areas of the quasi-spherical object and, therefore, offers the possibility of high-accuracy characterization of its surface even in areas of high inclination. The proposed measurement procedure allows determining all parameters required for accurate shape recovery: the location of the object focus point and the positions of the illumination and reference point sources. The utility of the method is demonstrated by characterizing the surfaces of high-NA focusing objects. The accuracy is first verified by characterizing a known reference sphere with a low sphericity error. Then, the method is applied to shape measurement of spherical and aspheric microlenses. The results provide a full-field reconstruction of high-NA topography with resolution in the nanometer range. The surface sphericity is evaluated by the deviation from the best-fitted sphere or asphere, and by the important parameters of the measured microlens, e.g. the radius of curvature and conic constant.
Schaefer, Oliver; Schmidt, Monika; Goebel, Roland; Kuepper, Harald
2012-09-01
The accuracy of impressions has been described in 1 or 2 dimensions, whereas it is most desirable to evaluate the accuracy of impressions spatially, in 3 dimensions. The purpose of this study was to demonstrate the accuracy and reproducibility of a 3-dimensional (3-D) approach to assessing impression preciseness and to quantitatively compare the occlusal correctness of gypsum dies made with different impression materials. By using an aluminum replica of a maxillary molar, single-step dual viscosity impressions were made with 1 polyether/vinyl polysiloxane hybrid material (Identium), 1 vinyl polysiloxane (Panasil), and 1 polyether (Impregum) (n=5). Corresponding dies were made of Type IV gypsum and were optically digitized and aligned to the virtual reference of the aluminum tooth. Accuracy was analyzed by computing mean quadratic deviations between the virtual reference and the gypsum dies, while deviations of the dies among one another determined the reproducibility of the method. The virtual reference was adapted to create 15 occlusal contact points. The percentage of contact points deviating within a ±10 µm tolerance limit (PDP(10) = Percentage of Deviating Points within ±10 µm Tolerance) was set as the index for assessing occlusal accuracy. Visual results for the difference from the reference tooth were displayed with colors, whereas mean deviation values as well as mean PDP(10) differences were analyzed with a 1-way ANOVA and Scheffé post hoc comparisons (α=.05). Objective characterization of accuracy showed smooth axial surfaces to be undersized, whereas occlusal surfaces were accurate or enlarged when compared to the original tooth. The accuracy of the gypsum replicas ranged between 3 and 6 µm, while reproducibility results varied from 2 to 4 µm. Mean (SD) PDP(10) values were: Panasil 91% (±11), Identium 77% (±4) and Impregum 29% (±3). One-way ANOVA detected significant differences among the tested impression materials (P<.001). The accuracy and reproducibility of impressions were determined by 3-D analysis. Results were presented as color images, and the newly developed PDP(10) index was successfully used to quantify spatial dimensions for complex occlusal anatomy. Impression materials with high PDP(10) values were shown to reproduce occlusal dimensions the most accurately. Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
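The PDP(10) index defined above reduces to the proportion of contact-point deviations within a ±10 µm tolerance, as in the sketch below; the example deviations are hypothetical.

```python
# Minimal sketch of the PDP(10) index: percentage of signed contact-point
# deviations (die minus reference, in micrometres) within +/-10 um.
import numpy as np

def pdp(deviations_um, tolerance_um=10.0):
    within = np.abs(np.asarray(deviations_um, dtype=float)) <= tolerance_um
    return 100.0 * within.mean()

# pdp([3, -8, 12, 5, -2, 9, -11, 4, 6, -7, 1, 0, 14, -3, 2])  # -> 80.0
```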
Common mode noise rejection properties of amplitude and phase noise in a heterodyne interferometer.
Hechenblaikner, Gerald
2013-05-01
High precision metrology systems based on heterodyne interferometry can measure the position and attitude of objects to accuracies of picometer and nanorad, respectively. A frequently found feature of the general system design is the subtraction of a reference phase from the phase of the position interferometer, which suppresses low frequency common mode amplitude and phase fluctuations occurring in volatile optical path sections shared by both the position and reference interferometer. Spectral components of the noise at frequencies around or higher than the heterodyne frequency, however, are generally transmitted into the measurement band and may limit the measurement accuracy. Detailed analytical calculations complemented with Monte Carlo simulations show that high frequency noise components may also be entirely suppressed, depending on the relative difference of measurement and reference phase, which may be exploited by corresponding design provisions. While these results are applicable to any heterodyne interferometer with certain design characteristics, specific calculations and related discussions are given for the example of the optical metrology system of the LISA Pathfinder mission to space.
Unification of height systems in the frame of GGOS
NASA Astrophysics Data System (ADS)
Sánchez, Laura
2015-04-01
Most of the existing vertical reference systems do not fulfil the accuracy requirements of modern Geodesy. They refer to local sea surface levels, are stationary (do not consider variations in time), realize different physical height types (orthometric, normal, normal-orthometric, etc.), and their combination in a global frame presents uncertainties at the metre level. To provide a precise geodetic infrastructure for monitoring the Earth system, the Global Geodetic Observing System (GGOS) of the International Association of Geodesy (IAG), promotes the standardization of the height systems worldwide. The main purpose is to establish a global gravity field-related vertical reference system that (1) supports a highly-precise (at cm-level) combination of physical and geometric heights worldwide, (2) allows the unification of all existing local height datums, and (3) guarantees vertical coordinates with global consistency (the same accuracy everywhere) and long-term stability (the same order of accuracy at any time). Under this umbrella, the present contribution concentrates on the definition and realization of a conventional global vertical reference system; the standardization of the geodetic data referring to the existing height systems; and the formulation of appropriate strategies for the precise transformation of the local height datums into the global vertical reference system. The proposed vertical reference system is based on two components: a geometric component consisting of ellipsoidal heights as coordinates and a level ellipsoid as the reference surface, and a physical component comprising geopotential numbers as coordinates and an equipotential surface defined by a conventional W0 value as the reference surface. The definition of the physical component is based on potential parameters in order to provide reference to any type of physical heights (normal, orthometric, etc.). The conversion of geopotential numbers into metric heights and the modelling of the reference surface (geoid or quasigeoid determination) are considered as steps of the realization. The vertical datum unification strategy is based on (1) the physical connection of height datums to determine their discrepancies, (2) joint analysis of satellite altimetry and tide gauge records to determine time variations of sea level at reference tide gauges, (3) combination of geometrical and physical heights in a well-distributed and high-precise reference frame to estimate the relationship between the individual vertical levels and the global one, and (4) analysis of GNSS time series at reference tide gauges to separate crustal movements from sea level changes. The final vertical transformation parameters are provided by the common adjustment of the observation equations derived from these methods.
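For reference, the conversion of geopotential numbers to metric heights mentioned above as a realization step follows the standard textbook relations below, with mean normal gravity along the normal plumb line for normal heights and mean actual gravity for orthometric heights; these are general formulas, not ones specific to this contribution.

```latex
% Geopotential number of point P relative to the conventional reference
% potential W_0, and its conversion to normal and orthometric heights:
C_P = W_0 - W_P = \int_{P_0}^{P} g \, \mathrm{d}n,
\qquad
H^{N}_{P} = \frac{C_P}{\bar{\gamma}},
\qquad
H^{\mathrm{orth}}_{P} = \frac{C_P}{\bar{g}} .
```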
Trace element analysis by EPMA in geosciences: detection limit, precision and accuracy
NASA Astrophysics Data System (ADS)
Batanova, V. G.; Sobolev, A. V.; Magnin, V.
2018-01-01
Use of the electron probe microanalyser (EPMA) for trace element analysis has increased over the last decade, mainly because of improved stability of spectrometers and the electron column when operated at high probe current; development of new large-area crystal monochromators and ultra-high count rate spectrometers; full integration of energy-dispersive / wavelength-dispersive X-ray spectrometry (EDS/WDS) signals; and the development of powerful software packages. For phases that are stable under a dense electron beam, the detection limit and precision can be decreased to the ppm level by using high acceleration voltage and beam current combined with long counting time. Data on 10 elements (Na, Al, P, Ca, Ti, Cr, Mn, Co, Ni, Zn) in olivine obtained on a JEOL JXA-8230 microprobe with tungsten filament show that the detection limit decreases proportionally to the square root of counting time and probe current. For all elements equal to or heavier than phosphorus (Z = 15), the detection limit decreases with increasing accelerating voltage. The analytical precision for minor and trace elements analysed in olivine at 25 kV accelerating voltage and 900 nA beam current is 4 - 18 ppm (2 standard deviations of repeated measurements of the olivine reference sample) and is similar to the detection limit of the corresponding elements. Accurate analysis of trace elements requires careful estimation of background, and consideration of sample damage under the beam and secondary fluorescence from phase boundaries. The development and use of matrix reference samples with well-characterised trace elements of interest is important for monitoring and improving the accuracy. An evaluation of the accuracy of trace element analyses in olivine has been made by comparing EPMA data for new reference samples with data obtained by different in-situ and bulk analytical methods in six different laboratories worldwide. For all elements, the measured concentrations in the olivine reference sample were found to be identical (within internal precision) to reference values, suggesting that the achieved precision and accuracy are similar. The spatial resolution of EPMA in a silicate matrix, even at very extreme conditions (accelerating voltage 25 kV), does not exceed 7 - 8 μm and thus is still better than that of laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) or secondary ion mass spectrometry (SIMS) of similar precision. This makes the electron microprobe an indispensable method with applications in experimental petrology, geochemistry and cosmochemistry.
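The stated scaling of the detection limit with counting time and probe current can be used to extrapolate from a measured value, as in the sketch below; the reference values in the example call are placeholders.

```python
# Minimal sketch of the stated scaling: detection limit improves with the
# square root of (counting time x probe current).
def scaled_detection_limit(dl_ref_ppm, t_ref_s, i_ref_nA, t_new_s, i_new_nA):
    return dl_ref_ppm * ((t_ref_s * i_ref_nA) / (t_new_s * i_new_nA)) ** 0.5

# e.g. scaled_detection_limit(20.0, 10, 100, 160, 900) -> ~1.7 ppm
```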
The accuracy of references in PhD theses: a case study.
Azadeh, Fereydoon; Vaez, Reyhaneh
2013-09-01
Inaccurate references and citations cause confusion, distrust in the accuracy of a report, waste of time and unnecessary financial charges for libraries, information centres and researchers. The aim of the study was to establish the accuracy of article references in PhD theses from the Tehran and Tabriz Universities of Medical Sciences and their compliance with the Vancouver style. We analysed 357 article references in the Tehran theses and 347 in the Tabriz theses. Six bibliographic elements were assessed: authors' names, article title, journal title, publication year, volume and page range. Referencing errors were divided into major and minor. Sixty-two percent of references in the Tehran theses and 53% of those in the Tabriz theses were erroneous. In total, 164 references in the Tehran theses and 136 in the Tabriz theses were complete without error. Of the 357 reference articles in the Tehran theses, 34 (9.8%) were in complete accordance with the Vancouver style, compared with none in the Tabriz theses. Accuracy of referencing did not differ significantly between the two groups, but compliance with the Vancouver style was significantly better in the Tehran theses. The accuracy of referencing was not satisfactory in both groups, and students need to gain adequate instruction in appropriate referencing methods. © 2013 The authors. Health Information and Libraries Journal © 2013 Health Libraries Group.
A PIXEL COMPOSITION-BASED REFERENCE DATA SET FOR THEMATIC ACCURACY ASSESSMENT
Developing reference data sets for accuracy assessment of land-cover classifications derived from coarse spatial resolution sensors such as MODIS can be difficult due to the large resolution differences between the image data and available reference data sources. Ideally, the spa...
Automatic force balance calibration system
NASA Technical Reports Server (NTRS)
Ferris, Alice T. (Inventor)
1995-01-01
A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system equally effect each balance, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance and current technology allows for reference balances to be calibrated to within +/-0.05% the entire system has an accuracy of +/-0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.
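A generic way to generate the calibration matrix from the paired readings described above is an ordinary least-squares fit that maps raw test-balance outputs to the loads reported by the reference balance. The sketch below is illustrative only, not the specific NASA procedure.

```python
# Minimal sketch of a linear calibration matrix estimated from paired
# loadings of a reference balance and the balance under test.
import numpy as np

def calibration_matrix(test_readings, reference_loads):
    """test_readings, reference_loads: (n_loadings, n_components) arrays."""
    C, residuals, rank, _ = np.linalg.lstsq(test_readings, reference_loads,
                                            rcond=None)
    return C                    # estimated loads = test_readings @ C

# C = calibration_matrix(raw_counts, ref_loads)
# predicted_loads = raw_counts @ C
```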
Automatic force balance calibration system
NASA Technical Reports Server (NTRS)
Ferris, Alice T. (Inventor)
1996-01-01
A system for automatically calibrating force balances is provided. The invention uses a reference balance aligned with the balance being calibrated to provide superior accuracy while minimizing the time required to complete the calibration. The reference balance and the test balance are rigidly attached together with closely aligned moment centers. Loads placed on the system equally affect each balance, and the differences in the readings of the two balances can be used to generate the calibration matrix for the test balance. Since the accuracy of the test calibration is determined by the accuracy of the reference balance, and current technology allows reference balances to be calibrated to within +/-0.05%, the entire system has an accuracy of +/-0.2%. The entire apparatus is relatively small and can be mounted on a movable base for easy transport between test locations. The system can also accept a wide variety of reference balances, thus allowing calibration under diverse load and size requirements.
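The principle described in these two patent records, deriving the test balance's calibration matrix from paired readings of the reference and test balances, can be sketched as an ordinary least-squares fit. The array shapes, the synthetic sensitivity matrix and the use of numpy.linalg.lstsq are assumptions for illustration, not the actual NASA data reduction.

    import numpy as np

    rng = np.random.default_rng(0)
    sensitivity = np.array([[0.98, 0.03], [-0.01, 1.02]])   # unknown test-balance response
    ref_loads = rng.uniform(-100.0, 100.0, size=(50, 2))    # loads reported by the reference balance
    test_raw = ref_loads @ sensitivity + rng.normal(0.0, 0.05, size=(50, 2))

    # Fit X so that ref_loads is approximately test_raw @ X; X maps raw readings to loads.
    X, *_ = np.linalg.lstsq(test_raw, ref_loads, rcond=None)
    print(X)                                 # close to the inverse of the sensitivity matrix
    print(test_raw[0] @ X, ref_loads[0])     # calibrated reading versus reference load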
Preliminary results from the portable standard satellite laser ranging intercomparison with MOBLAS-7
NASA Technical Reports Server (NTRS)
Selden, Michael; Varghese, Thomas K.; Heinick, Michael; Oldham, Thomas
1993-01-01
Conventional Satellite Laser Ranging (SLR) instrumentation has been configured and successfully used to provide high-accuracy laboratory measurements on the LAGEOS-2 and TOPEX cube-corner arrays. The instrumentation, referred to as the Portable Standard, has also been used for field measurements of satellite ranges in tandem with MOBLAS-7. Preliminary results of the SLR measurements suggest that improved range accuracy can be achieved using this system. Results are discussed.
Bias in estimating accuracy of a binary screening test with differential disease verification
Brinton, John T.; Ringham, Brandy M.; Glueck, Deborah H.
2011-01-01
Sensitivity, specificity, and positive and negative predictive value are typically used to quantify the accuracy of a binary screening test. In some studies it may not be ethical or feasible to obtain definitive disease ascertainment for all subjects using a gold standard test. When a gold standard test cannot be used, an imperfect reference test that is less than 100% sensitive and specific may be used instead. In breast cancer screening, for example, follow-up for cancer diagnosis is used as an imperfect reference test for women for whom it is not possible to obtain gold standard results. This incomplete ascertainment of true disease, or differential disease verification, can result in biased estimates of accuracy. In this paper, we derive the apparent accuracy values for studies subject to differential verification. We determine how the bias is affected by the accuracy of the imperfect reference test, the proportion of subjects who receive the imperfect reference test rather than the gold standard, the prevalence of the disease, and the correlation between the results of the screening test and the imperfect reference test. It is shown that designs with differential disease verification can yield biased estimates of accuracy. Estimates of sensitivity in cancer screening trials may be substantially biased. However, careful design decisions, including selection of the imperfect reference test, can help to minimize bias. A hypothetical breast cancer screening study is used to illustrate the problem. PMID:21495059
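The bias mechanism discussed in this abstract can be reproduced with a small Monte Carlo sketch rather than the paper's closed-form derivation. All numbers (prevalence, test characteristics, verification fraction) are illustrative assumptions, and the screening and reference tests are taken to be conditionally independent, which the paper does not assume.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    prevalence, screen_se, screen_sp = 0.01, 0.85, 0.95
    ref_se, ref_sp = 0.90, 0.99          # imperfect reference test
    gold_fraction = 0.30                 # share of subjects verified with the gold standard

    disease = rng.random(n) < prevalence
    screen_pos = np.where(disease, rng.random(n) < screen_se, rng.random(n) < 1 - screen_sp)
    gold = rng.random(n) < gold_fraction
    # Verified disease label: the gold standard is perfect, the reference test is not.
    label = np.where(gold, disease,
                     np.where(disease, rng.random(n) < ref_se, rng.random(n) < 1 - ref_sp))

    apparent_se = screen_pos[label].mean()
    apparent_sp = (~screen_pos[~label]).mean()
    print(f"apparent sensitivity {apparent_se:.3f} (true {screen_se})")
    print(f"apparent specificity {apparent_sp:.3f} (true {screen_sp})")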
Can multi-subpopulation reference sets improve the genomic predictive ability for pigs?
Fangmann, A; Bergfelder-Drüing, S; Tholen, E; Simianer, H; Erbe, M
2015-12-01
In most countries and for most livestock species, genomic evaluations are obtained from within-breed analyses. To achieve reliable breeding values, however, a sufficient reference sample size is essential. To increase this size, the use of multibreed reference populations for small populations is considered a suitable option in other species. Over decades, the separate breeding work of different pig breeding organizations in Germany has led to stratified subpopulations in the breed German Large White. Due to this fact and the limited number of Large White animals available in each organization, there was a pressing need for ascertaining if multi-subpopulation genomic prediction is superior compared with within-subpopulation prediction in pigs. Direct genomic breeding values were estimated with genomic BLUP for the trait "number of piglets born alive" using genotype data (Illumina Porcine 60K SNP BeadChip) from 2,053 German Large White animals from five different commercial pig breeding companies. To assess the prediction accuracy of within- and multi-subpopulation reference sets, a random 5-fold cross-validation with 20 replications was performed. The five subpopulations considered were only slightly differentiated from each other. However, the prediction accuracy of the multi-subpopulations approach was not better than that of the within-subpopulation evaluation, for which the predictive ability was already high. Reference sets composed of closely related multi-subpopulation sets performed better than sets of distantly related subpopulations but not better than the within-subpopulation approach. Despite the low differentiation of the five subpopulations, the genetic connectedness between these different subpopulations seems to be too small to improve the prediction accuracy by applying multi-subpopulation reference sets. Consequently, resources should be used for enlarging the reference population within subpopulation, for example, by adding genotyped females.
NASA Astrophysics Data System (ADS)
Chen, Ming; Guo, Jiming; Li, Zhicai; Zhang, Peng; Wu, Junli; Song, Weiwei
2017-04-01
BDS precise orbit determination is a key element of BDS applications, but the limited number of ground stations and the poor geographic distribution of the network are the main reasons for the low accuracy of BDS precise orbit determination. In this paper, BDS precise orbit determination results are obtained using the IGS MGEX stations and the Chinese national reference stations; the orbit determination accuracy for GEO, IGSO and MEO satellites is 10.3 cm, 2.8 cm and 3.2 cm, and the radial accuracy is 1.6 cm, 1.9 cm and 1.5 cm, respectively. The influence of ground reference station distribution on BDS precise orbit determination is studied. The results show that the Chinese national reference stations contribute significantly to BDS orbit determination: the overlap precision of the GEO, IGSO and MEO satellites improved by 15.5%, 57.5% and 5.3%, respectively, after adding the Chinese stations. Finally, the results are verified by ODOP (orbit distribution of precision) and SLR. Key words: BDS precise orbit determination; accuracy assessment; Chinese national reference stations; reference station distribution; orbit distribution of precision
Accuracy of ab initio electron correlation and electron densities in vanadium dioxide
NASA Astrophysics Data System (ADS)
Kylänpää, Ilkka; Balachandran, Janakiraman; Ganesh, Panchapakesan; Heinonen, Olle; Kent, Paul R. C.; Krogel, Jaron T.
2017-11-01
Diffusion quantum Monte Carlo results are used as a reference to analyze properties related to phase stability and magnetism in vanadium dioxide computed with various formulations of density functional theory. We introduce metrics related to energetics, electron densities and spin densities that give insight into both local and global variations in the antiferromagnetic M1 and R phases. Importantly, these metrics can address contributions arising from the challenging description of the 3d orbital physics in this material. We observe that the best description of energetics between the structural phases does not correspond to the best accuracy in the charge density, which is consistent with observations made recently by Medvedev et al. [Science 355, 371 (2017), 10.1126/science.aag0410] in the context of isolated atoms. However, we do find evidence that an accurate spin density connects to correct energetic ordering of different magnetic states in VO2, although local, semilocal, and meta-GGA functionals tend to erroneously favor demagnetization of the vanadium sites. The recently developed SCAN functional stands out as remaining nearly balanced in terms of magnetization across the M1-R transition and correctly predicting the ground-state crystal structure. In addition to ranking current density functionals, our reference energies and densities serve as important benchmarks for future functional development. With our reference data, the accuracy of both the energy and the electron density can be monitored simultaneously, which is useful for functional development. So far, this kind of detailed high-accuracy reference data for correlated materials has been absent from the literature.
Goh, Sherry Meow Peng; Swaminathan, Muthukaruppan; Lai, Julian U-Ming; Anwar, Azlinda; Chan, Soh Ha; Cheong, Ian
2017-01-01
High Epstein Barr Virus (EBV) titers detected by the indirect Immunofluorescence Assay (IFA) are a reliable predictor of Nasopharyngeal Carcinoma (NPC). Despite being the gold standard for serological detection of NPC, the IFA is limited by scaling bottlenecks. Specifically, 5 serial dilutions of each patient sample must be prepared and visually matched by an evaluator to one of 5 discrete titers. Here, we describe a simple method for inferring continuous EBV titers from IFA images acquired from NPC-positive patient sera using only a single sample dilution. In the first part of our study, 2 blinded evaluators used a set of reference titer standards to perform independent re-evaluations of historical samples with known titers. Besides exhibiting high inter-evaluator agreement, both evaluators were also in high concordance with historical titers, thus validating the accuracy of the reference titer standards. In the second part of the study, the reference titer standards were IFA-processed and assigned an 'EBV Score' using image analysis. A log-linear relationship between titers and EBV Score was observed. This relationship was preserved even when images were acquired and analyzed 3 days post-IFA. We conclude that image analysis of IFA-processed samples can be used to infer a continuous EBV titer with just a single dilution of NPC-positive patient sera. This work opens new possibilities for improving the accuracy and scalability of IFA in the context of clinical screening. Copyright © 2016. Published by Elsevier B.V.
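A minimal sketch of how a continuous titer could be inferred from the reported log-linear relationship, assuming hypothetical calibration pairs of reference titers and image-analysis scores; neither the numbers nor the function names come from the paper.

    import numpy as np

    # Hypothetical calibration data from reference titer standards.
    titers = np.array([40, 160, 640, 2560, 10240], dtype=float)   # serial 4-fold dilutions
    scores = np.array([0.11, 0.23, 0.35, 0.48, 0.61])             # illustrative EBV Scores

    # Fit log(titer) = a * score + b, i.e. a log-linear relationship.
    a, b = np.polyfit(scores, np.log(titers), deg=1)

    def titer_from_score(score):
        """Infer a continuous titer from a single-dilution EBV Score."""
        return float(np.exp(a * score + b))

    print(round(titer_from_score(0.40)))   # interpolated continuous titer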
A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem
Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.
2013-01-01
Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
Sensitivity of grass and alfalfa reference evapotranspiration to weather station sensor accuracy
USDA-ARS?s Scientific Manuscript database
A sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1991 to 2008 from an autom...
High precision UTDR measurements by sonic velocity compensation with reference transducer.
Stade, Sam; Kallioinen, Mari; Mänttäri, Mika; Tuuva, Tuure
2014-07-02
An ultrasonic sensor design with sonic velocity compensation is developed to improve the accuracy of distance measurement in membrane modules. High-accuracy real-time distance measurements are needed in membrane fouling and compaction studies. Sonic velocity compensation with a reference transducer is compared to the sonic velocity calculated from the measured temperature and pressure using the model by Belogol'skii, Sekoyan et al. In the experiments the temperature was changed from 25 to 60 °C at pressures of 0.1, 0.3 and 0.5 MPa. The set measurement distance was 17.8 mm. Distance measurements with sonic velocity compensation were over ten times more accurate than those calculated based on the model. Using the sonic velocity measured with the reference transducer, the standard deviations of the distance measurements varied from 0.6 to 2.0 µm, while using the calculated sonic velocity the standard deviations were 21-39 µm. In industrial liquors, not only the temperature and the pressure, which were studied in this paper, but also the properties of the filtered solution, such as solute concentration, density and viscosity, may vary greatly, leading to inaccuracy in the use of the Belogol'skii, Sekoyan et al. model. Therefore, calibration of the sonic velocity with reference transducers is needed for accurate distance measurements.
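The compensation idea, using a reference transducer with a fixed, known path length to measure the in-situ sonic velocity and then applying that velocity to the measurement transducer's time of flight, can be sketched as follows. The pulse-echo factor of two and the example numbers are assumptions for illustration.

    def compensated_distance(t_meas_s, t_ref_s, ref_path_m):
        """Distance from time of flight using a reference transducer for sonic velocity.

        Both transducers are assumed to operate in pulse-echo mode, so the sound
        travels each path twice.
        """
        velocity = 2.0 * ref_path_m / t_ref_s      # in-situ sonic velocity [m/s]
        return velocity * t_meas_s / 2.0

    # Example: reference path 20.0 mm, echo times in seconds (illustrative values).
    d = compensated_distance(t_meas_s=23.7e-6, t_ref_s=26.6e-6, ref_path_m=0.020)
    print(f"{d * 1000:.3f} mm")   # roughly 17.8 mm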
Interlaboratory comparison measurements of aspheres
NASA Astrophysics Data System (ADS)
Schachtschneider, R.; Fortmeier, I.; Stavridis, M.; Asfour, J.; Berger, G.; Bergmann, R. B.; Beutler, A.; Blümel, T.; Klawitter, H.; Kubo, K.; Liebl, J.; Löffler, F.; Meeß, R.; Pruss, C.; Ramm, D.; Sandner, M.; Schneider, G.; Wendel, M.; Widdershoven, I.; Schulz, M.; Elster, C.
2018-05-01
The need for high-quality aspheres is rapidly growing, necessitating increased accuracy in their measurement. A reliable uncertainty assessment of asphere form measurement techniques is difficult due to their complexity. In order to explore the accuracy of current asphere form measurement techniques, an interlaboratory comparison was carried out in which four aspheres were measured by eight laboratories using tactile measurements, optical point measurements, and optical areal measurements. Altogether, 12 different devices were employed. The measurement results were analysed after subtracting the design topography and subsequently a best-fit sphere from the measurements. The surface reduced in this way was compared to a reference topography that was obtained by taking the pointwise median across the ensemble of reduced topographies on a 1000 × 1000 Cartesian grid. The deviations of the reduced topographies from the reference topography were analysed in terms of several characteristics including peak-to-valley and root-mean-square deviations. Root-mean-square deviations of the reduced topographies from the reference topographies were found to be on the order of some tens of nanometres up to 89 nm, with most of the deviations being smaller than 20 nm. Our results give an indication of the accuracy that can currently be expected in form measurements of aspheres.
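The reference-topography construction described above (a pointwise median across the reduced topographies, followed by per-measurement deviation statistics) reduces to a few array operations. The grid size and synthetic data below are placeholders, not the comparison data.

    import numpy as np

    rng = np.random.default_rng(2)
    n_measurements, ny, nx = 12, 200, 200           # the study used a 1000 x 1000 grid
    # Stack of reduced topographies (design and best-fit sphere already subtracted), in nm.
    reduced = rng.normal(0.0, 15.0, size=(n_measurements, ny, nx))

    reference = np.median(reduced, axis=0)          # pointwise median topography
    deviations = reduced - reference                # deviation of each measurement

    rms = np.sqrt(np.mean(deviations**2, axis=(1, 2)))                # RMS deviation per measurement
    pv = deviations.max(axis=(1, 2)) - deviations.min(axis=(1, 2))    # peak-to-valley per measurement
    print(rms.round(1))
    print(pv.round(1))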
A metrological approach to improve accuracy and reliability of ammonia measurements in ambient air
NASA Astrophysics Data System (ADS)
Pogány, Andrea; Balslev-Harder, David; Braban, Christine F.; Cassidy, Nathan; Ebert, Volker; Ferracci, Valerio; Hieta, Tuomas; Leuenberger, Daiana; Martin, Nicholas A.; Pascale, Céline; Peltola, Jari; Persijn, Stefan; Tiebe, Carlo; Twigg, Marsailidh M.; Vaittinen, Olavi; van Wijk, Janneke; Wirtz, Klaus; Niederhauser, Bernhard
2016-11-01
The environmental impacts of ammonia (NH3) in ambient air have become more evident in the recent decades, leading to intensifying research in this field. A number of novel analytical techniques and monitoring instruments have been developed, and the quality and availability of reference gas mixtures used for the calibration of measuring instruments has also increased significantly. However, recent inter-comparison measurements show significant discrepancies, indicating that the majority of the newly developed devices and reference materials require further thorough validation. There is a clear need for more intensive metrological research focusing on quality assurance, intercomparability and validations. MetNH3 (Metrology for ammonia in ambient air) is a three-year project within the framework of the European Metrology Research Programme (EMRP), which aims to bring metrological traceability to ambient ammonia measurements in the 0.5-500 nmol mol-1 amount fraction range. This is addressed by working in three areas: (1) improving accuracy and stability of static and dynamic reference gas mixtures, (2) developing an optical transfer standard and (3) establishing the link between high-accuracy metrological standards and field measurements. In this article we describe the concept, aims and first results of the project.
MUSCLE: multiple sequence alignment with high accuracy and high throughput.
Edgar, Robert C
2004-01-01
We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
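A toy illustration of k-mer-based distance estimation between two protein sequences, in the spirit of MUSCLE's fast distance stage; the distance formula here (one minus the fraction of shared k-mers) is a simplification and not MUSCLE's exact measure.

    from collections import Counter

    def kmer_counts(seq, k=3):
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    def kmer_distance(seq_a, seq_b, k=3):
        """1 minus the fraction of shared k-mers: a crude but fast similarity proxy."""
        a, b = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
        shared = sum((a & b).values())                  # multiset intersection
        smaller = min(sum(a.values()), sum(b.values()))
        return 1.0 - shared / smaller if smaller else 1.0

    print(kmer_distance("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
                        "MKTAYIAKQRQISFVKSHFSRQIEERLGLIEVQ"))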
77 FR 47850 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-10
... function; (2) the accuracy of the estimated burden; (3) ways to enhance the quality, utility, and clarity... care provided by managed care organizations under contract to CMS is of high quality. One way of ensuring high quality care in Medicare Managed Care Organizations (MCOs), or more commonly referred to as...
Tracer Kinetic Analysis of (S)-¹⁸F-THK5117 as a PET Tracer for Assessing Tau Pathology.
Jonasson, My; Wall, Anders; Chiotis, Konstantinos; Saint-Aubert, Laure; Wilking, Helena; Sprycha, Margareta; Borg, Beatrice; Thibblin, Alf; Eriksson, Jonas; Sörensen, Jens; Antoni, Gunnar; Nordberg, Agneta; Lubberink, Mark
2016-04-01
Because a correlation between tau pathology and the clinical symptoms of Alzheimer disease (AD) has been hypothesized, there is increasing interest in developing PET tracers that bind specifically to tau protein. The aim of this study was to evaluate tracer kinetic models for quantitative analysis and generation of parametric images for the novel tau ligand (S)-(18)F-THK5117. Nine subjects (5 with AD, 4 with mild cognitive impairment) received a 90-min dynamic (S)-(18)F-THK5117 PET scan. Arterial blood was sampled for measurement of blood radioactivity and metabolite analysis. Volume-of-interest (VOI)-based analysis was performed using plasma-input models; single-tissue and 2-tissue (2TCM) compartment models and plasma-input Logan and reference tissue models; and simplified reference tissue model (SRTM), reference Logan, and SUV ratio (SUVr). Cerebellum gray matter was used as the reference region. Voxel-level analysis was performed using basis function implementations of SRTM, reference Logan, and SUVr. Regionally averaged voxel values were compared with VOI-based values from the optimal reference tissue model, and simulations were made to assess accuracy and precision. In addition to 90 min, initial 40- and 60-min data were analyzed. Plasma-input Logan distribution volume ratio (DVR)-1 values agreed well with 2TCM DVR-1 values (R(2)= 0.99, slope = 0.96). SRTM binding potential (BP(ND)) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 (R(2)= 1.00, slope ≈ 1.00) whereas SUVr(70-90)-1 values correlated less well and overestimated binding. Agreement between parametric methods and SRTM was best for reference Logan (R(2)= 0.99, slope = 1.03). SUVr(70-90)-1 values were almost 3 times higher than BP(ND) values in white matter and 1.5 times higher in gray matter. Simulations showed poorer accuracy and precision for SUVr(70-90)-1 values than for the other reference methods. SRTM BP(ND) and reference Logan DVR-1 values were not affected by a shorter scan duration of 60 min. SRTM BP(ND) and reference Logan DVR-1 values were highly correlated with plasma-input Logan DVR-1 values. VOI-based data analyses indicated robust results for scan durations of 60 min. Reference Logan generated quantitative (S)-(18)F-THK5117 DVR-1 parametric images with the greatest accuracy and precision and with a much lower white-matter signal than seen with SUVr(70-90)-1 images. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
The Extended HANDS Characterization and Analysis of Metric Biases
NASA Astrophysics Data System (ADS)
Kelecy, T.; Knox, R.; Cognion, R.
The Extended High Accuracy Network Determination System (Extended HANDS) consists of a network of low cost, high accuracy optical telescopes designed to support space surveillance and development of space object characterization technologies. Comprising off-the-shelf components, the telescopes are designed to provide sub arc-second astrometric accuracy. The design and analysis team are in the process of characterizing the system through development of an error allocation tree whose assessment is supported by simulation, data analysis, and calibration tests. The metric calibration process has revealed 1-2 arc-second biases in the right ascension and declination measurements of reference satellite position, and these have been observed to have fairly distinct characteristics that appear to have some dependence on orbit geometry and tracking rates. The work presented here outlines error models developed to aid in development of the system error budget, and examines characteristic errors (biases, time dependence, etc.) that might be present in each of the relevant system elements used in the data collection and processing, including the metric calibration processing. The relevant reference frames are identified, and include the sensor (CCD camera) reference frame, Earth-fixed topocentric frame, topocentric inertial reference frame, and the geocentric inertial reference frame. The errors modeled in each of these reference frames, when mapped into the topocentric inertial measurement frame, reveal how errors might manifest themselves through the calibration process. The error analysis results that are presented use satellite-sensor geometries taken from periods where actual measurements were collected, and reveal how modeled errors manifest themselves over those specific time periods. These results are compared to the real calibration metric data (right ascension and declination residuals), and sources of the bias are hypothesized. In turn, the actual right ascension and declination calibration residuals are also mapped to other relevant reference frames in an attempt to validate the source of the bias errors. These results will serve as the basis for more focused investigation into specific components embedded in the system and system processes that might contain the source of the observed biases.
NASA Astrophysics Data System (ADS)
Ding, Xiang; Li, Fei; Zhang, Jiyan; Liu, Wenli
2016-10-01
Raman spectrometers are usually calibrated periodically to ensure their measurement accuracy of Raman shift. A combination of a monocrystalline silicon chip and a low pressure discharge lamp is proposed as a candidate for the reference standard of Raman shift. A high precision calibration technique is developed to accurately determine the standard value of the silicon's Raman shift around 520 cm-1. The technique is described and illustrated by measuring a piece of silicon chip against three atomic spectral lines of a neon lamp. A commercial Raman spectrometer is employed and its error characteristics of Raman shift are investigated. Error sources are evaluated based on theoretical analysis and experiments, including the sample factor, the instrumental factor, the laser factor and random factors. Experimental results show that the expanded uncertainty of the silicon's Raman shift around 520 cm-1 can achieve 0.3 cm-1 (k=2), which is more accurate than most currently used reference materials. The results are validated by comparison measurements between three Raman spectrometers. It is shown that the technique can remarkably enhance the accuracy of Raman shift, making it possible to use the silicon and the lamp to calibrate Raman spectrometers.
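The measurement chain described here rests on converting absolute wavelengths (the excitation laser and the neon lamp's atomic lines) into Raman shift in wavenumbers. The conversion below is the standard relation; the example wavelengths are illustrative and are not the certified values from the paper.

    def raman_shift_cm1(lambda_excitation_nm, lambda_observed_nm):
        """Raman shift in cm^-1 from excitation and observed wavelengths in nm."""
        return 1.0e7 * (1.0 / lambda_excitation_nm - 1.0 / lambda_observed_nm)

    # Example: 532 nm excitation; a silicon Raman band near 520 cm^-1 then appears
    # at roughly 547 nm (illustrative numbers).
    print(round(raman_shift_cm1(532.0, 547.14), 1))   # about 520 cm^-1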
Strategies for implementing genomic selection for feed efficiency in dairy cattle breeding schemes.
Wallén, S E; Lillehammer, M; Meuwissen, T H E
2017-08-01
Alternative genomic selection and traditional BLUP breeding schemes were compared for the genetic improvement of feed efficiency in simulated Norwegian Red dairy cattle populations. The change in genetic gain over time and achievable selection accuracy were studied for milk yield and residual feed intake, as a measure of feed efficiency. When including feed efficiency in genomic BLUP schemes, it was possible to achieve high selection accuracies for genomic selection, and all genomic BLUP schemes gave better genetic gain for feed efficiency than BLUP using a pedigree relationship matrix. However, introducing a second trait in the breeding goal caused a reduction in the genetic gain for milk yield. When using contracted test herds with genotyped and feed efficiency recorded cows as a reference population, adding an additional 4,000 new heifers per year to the reference population gave accuracies that were comparable to a male reference population that used progeny testing with 250 daughters per sire. When the test herd consisted of 500 or 1,000 cows, lower genetic gain was found than using progeny test records to update the reference population. It was concluded that to improve difficult to record traits, the use of contracted test herds that had additional recording (e.g., measurements required to calculate feed efficiency) is a viable option, possibly through international collaborations. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Vertical Accuracy Assessment of ZY-3 Digital Surface Model Using Icesat/glas Laser Altimeter Data
NASA Astrophysics Data System (ADS)
Li, G.; Tang, X.; Yuan, X.; Zhou, P.; Hu, F.
2017-05-01
The Ziyuan-3 (ZY-3) satellite, the first civilian high resolution surveying and mapping satellite in China, has a very important role in the national 1:50,000 stereo mapping project. High-accuracy digital surface models (DSMs) can be generated from the three-line-array images of ZY-3, and ZY-3 DSMs of China can be produced without using any ground control points (GCPs) by selecting SRTM (Shuttle Radar Topography Mission) and ICESat/GLAS (Ice, Cloud, and land Elevation Satellite, Geoscience Laser Altimeter System) data as the datum reference in the Satellite Surveying and Mapping Application Center, which is the key institute that manages and distributes ZY-3 products. To conduct the vertical accuracy evaluation of the ZY-3 DSMs of China, three representative regions were chosen and the results were compared to ICESat/GLAS data. The experimental results demonstrate that the root mean square error (RMSE) elevation accuracy of the ZY-3 DSMs was better than 5.0 m, and even reached less than 2.5 m in the second region in eastern China. While this work presents preliminary results, it is an important reference for expanding the application of ZY-3 satellite imagery to widespread regions. The satellite laser altimetry data can also be used as reference data for wide-area DSM evaluation.
Ripamonti, Giancarlo; Abba, Andrea; Geraci, Angelo
2010-05-01
A method for measuring time intervals accurate to the picosecond range is based on phase measurements of oscillating waveforms synchronous with their beginning and/or end. The oscillation is generated by triggering an LC resonant circuit whose capacitance is precharged. By using high-Q resonators and a final active quenching of the oscillation, it is possible to combine high time resolution with a short measurement time, which allows a high measurement rate. Methods for fast analysis of the data are considered and discussed with reference to computing resource requirements, speed, and accuracy. Experimental tests show the feasibility of the method and a time accuracy better than 4 ps rms. Methods aimed at further reducing hardware resources are finally discussed.
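A minimal sketch of the underlying idea: sample a short stretch of the triggered oscillation, estimate its phase by least-squares fitting cosine and sine components at the known resonant frequency, and convert the phase to a time offset. The frequency, sample rate and noise level are illustrative assumptions, not the hardware described in the paper.

    import numpy as np

    def phase_to_time(samples, fs_hz, f0_hz):
        """Estimate the start-time offset of a sinusoid from its phase."""
        t = np.arange(len(samples)) / fs_hz
        # Least-squares fit of a*cos + b*sin at the known oscillation frequency.
        design = np.column_stack([np.cos(2 * np.pi * f0_hz * t),
                                  np.sin(2 * np.pi * f0_hz * t)])
        (a, b), *_ = np.linalg.lstsq(design, samples, rcond=None)
        phase = np.arctan2(-b, a)            # samples ~ cos(2*pi*f0*t + phase)
        return phase / (2 * np.pi * f0_hz)   # time offset in seconds

    rng = np.random.default_rng(3)
    fs, f0, true_offset = 1.0e9, 50.0e6, 37.0e-12        # 1 GS/s, 50 MHz, 37 ps
    t = np.arange(200) / fs
    signal = np.cos(2 * np.pi * f0 * (t + true_offset)) + rng.normal(0, 0.01, t.size)
    print(phase_to_time(signal, fs, f0) * 1e12, "ps")    # close to 37 ps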
Pairagon: a highly accurate, HMM-based cDNA-to-genome aligner.
Lu, David V; Brown, Randall H; Arumugam, Manimozhiyan; Brent, Michael R
2009-07-01
The most accurate way to determine the intron-exon structures in a genome is to align spliced cDNA sequences to the genome. Thus, cDNA-to-genome alignment programs are a key component of most annotation pipelines. The scoring system used to choose the best alignment is a primary determinant of alignment accuracy, while heuristics that prevent consideration of certain alignments are a primary determinant of runtime and memory usage. Both accuracy and speed are important considerations in choosing an alignment algorithm, but scoring systems have received much less attention than heuristics. We present Pairagon, a pair hidden Markov model based cDNA-to-genome alignment program, as the most accurate aligner for sequences with high- and low-identity levels. We conducted a series of experiments testing alignment accuracy with varying sequence identity. We first created 'perfect' simulated cDNA sequences by splicing the sequences of exons in the reference genome sequences of fly and human. The complete reference genome sequences were then mutated to various degrees using a realistic mutation simulator and the perfect cDNAs were aligned to them using Pairagon and 12 other aligners. To validate these results with natural sequences, we performed cross-species alignment using orthologous transcripts from human, mouse and rat. We found that aligner accuracy is heavily dependent on sequence identity. For sequences with 100% identity, Pairagon achieved accuracy levels of >99.6%, with one quarter of the errors of any other aligner. Furthermore, for human/mouse alignments, which are only 85% identical, Pairagon achieved 87% accuracy, higher than any other aligner. Pairagon source and executables are freely available at http://mblab.wustl.edu/software/pairagon/
Accuracy of References in Five Entomology Journals.
ERIC Educational Resources Information Center
Kristof, Cynthia
In this paper, the bibliographical references in five core entomology journals are examined for citation accuracy in order to determine if the error rates are similar. Every reference printed in each journal's first issue of 1992 was examined, and these were compared to the original (cited) publications, if possible, in order to determine the…
Accuracy of Gradient Reconstruction on Grids with High Aspect Ratio
NASA Technical Reports Server (NTRS)
Thomas, James
2008-01-01
Gradient approximation methods commonly used in unstructured-grid finite-volume schemes intended for solutions of high Reynolds number flow equations are studied comprehensively. The accuracy of gradients within cells and within faces is evaluated systematically for both node-centered and cell-centered formulations. Computational and analytical evaluations are made on a series of high-aspect-ratio grids with different primal elements, including quadrilateral, triangular, and mixed element grids, with and without random perturbations to the mesh. Both rectangular and cylindrical geometries are considered; the latter serves to study the effects of geometric curvature. The study shows that the accuracy of gradient reconstruction on high-aspect-ratio grids is determined by a combination of the grid and the solution. The contributors to the error are identified and approaches to reduce errors are given, including the addition of higher-order terms in the direction of larger mesh spacing. A parameter GAMMA characterizing accuracy on curved high-aspect-ratio grids is discussed and an approximate-mapped-least-square method using a commonly-available distance function is presented; the method provides accurate gradient reconstruction on general grids. The study is intended to be a reference guide accompanying the construction of accurate and efficient methods for high Reynolds number applications.
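A bare-bones sketch of unweighted least-squares gradient reconstruction at a cell from its neighbours, the kind of building block whose accuracy the study examines on high-aspect-ratio grids. This is the generic formulation, not the approximate-mapped-least-squares variant proposed in the report; the stencil and field are illustrative.

    import numpy as np

    def lsq_gradient(xc, uc, neighbor_x, neighbor_u):
        """Least-squares gradient of u at cell centre xc from neighbouring cell values."""
        dx = np.asarray(neighbor_x) - np.asarray(xc)     # (n_neighbors, 2) offsets
        du = np.asarray(neighbor_u) - uc                 # (n_neighbors,) value differences
        grad, *_ = np.linalg.lstsq(dx, du, rcond=None)
        return grad

    # High-aspect-ratio stencil: neighbours 1000x further apart in x than in y.
    xc, uc = (0.0, 0.0), 1.0
    nx = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0e-3), (0.0, -1.0e-3)]
    def u_exact(x, y):
        return 1.0 + 2.0 * x + 300.0 * y                 # linear field, gradient (2, 300)
    nu = [u_exact(x, y) for x, y in nx]
    print(lsq_gradient(xc, uc, nx, nu))                  # recovers (2, 300) for a linear field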
High order GPS base station support for Rhode Island
DOT National Transportation Integrated Search
2001-09-01
The University of Rhode Island (URI) upgraded its Global Positioning System (GPS) Base Station to provide round-the-clock Internet access to survey-grade (+/- 2 cm accuracy) reference files using a web-based data distribution system. In August 2000, ...
On the effectiveness of vocal imitations and verbal descriptions of sounds.
Lemaitre, Guillaume; Rocchesso, Davide
2014-02-01
Describing unidentified sounds with words is a frustrating task and vocally imitating them is often a convenient way to address the issue. This article reports on a study that compared the effectiveness of vocal imitations and verbalizations to communicate different referent sounds. The stimuli included mechanical and synthesized sounds and were selected on the basis of participants' confidence in identifying the cause of the sounds, ranging from easy-to-identify to unidentifiable sounds. The study used a selection of vocal imitations and verbalizations deemed adequate descriptions of the referent sounds. These descriptions were used in a nine-alternative forced-choice experiment: Participants listened to a description and picked one sound from a list of nine possible referent sounds. Results showed that recognition based on verbalizations was maximally effective when the referent sounds were identifiable. Recognition accuracy with verbalizations dropped when identifiability of the sounds decreased. Conversely, recognition accuracy with vocal imitations did not depend on the identifiability of the referent sounds and was as high as with the best verbalizations. This shows that vocal imitations are an effective means of representing and communicating sounds and suggests that they could be used in a number of applications.
Validation of geometric accuracy of Global Land Survey (GLS) 2000 data
Rengarajan, Rajagopalan; Sampath, Aparajithan; Storey, James C.; Choate, Michael J.
2015-01-01
The Global Land Survey (GLS) 2000 data were generated from Geocover™ 2000 data with the aim of producing a global data set with an accuracy better than 25 m Root Mean Square Error (RMSE). An assessment and validation of the accuracy of the GLS 2000 data set, and of its co-registration with the Geocover™ 2000 data set, is presented here. Since few global data sets with higher nominal accuracy than the GLS 2000 are available, the data were assessed in three tiers. In the first tier, the data were compared with the Geocover™ 2000 data. This comparison provided a means of localizing regions of higher differences. In the second tier, the GLS 2000 data were compared with systematically corrected Landsat-7 scenes that were obtained in a time period when the spacecraft pointing information was extremely accurate. These comparisons localize regions where the data are consistently off, which may indicate regions of higher errors. The third tier consisted of comparing the GLS 2000 data against higher accuracy reference data. The reference data were the Digital Ortho Quads over the United States, orthorectified SPOT data over Australia, and high accuracy check points obtained using triangulation bundle adjustment of Landsat-7 images over selected sites around the world. The study reveals that the geometric errors in Geocover™ 2000 data have been rectified in the GLS 2000 data, and that the accuracy of the GLS 2000 data can be expected to be better than 25 m RMSE for most of its constituent scenes.
Screening Chemicals for Estrogen Receptor Bioactivity Using a Computational Model.
Browne, Patience; Judson, Richard S; Casey, Warren M; Kleinstreuer, Nicole C; Thomas, Russell S
2015-07-21
The U.S. Environmental Protection Agency (EPA) is considering high-throughput and computational methods to evaluate the endocrine bioactivity of environmental chemicals. Here we describe a multistep, performance-based validation of new methods and demonstrate that these new tools are sufficiently robust to be used in the Endocrine Disruptor Screening Program (EDSP). Results from 18 estrogen receptor (ER) ToxCast high-throughput screening assays were integrated into a computational model that can discriminate bioactivity from assay-specific interference and cytotoxicity. Model scores range from 0 (no activity) to 1 (bioactivity of 17β-estradiol). ToxCast ER model performance was evaluated for reference chemicals, as well as results of EDSP Tier 1 screening assays in current practice. The ToxCast ER model accuracy was 86% to 93% when compared to reference chemicals and predicted results of EDSP Tier 1 guideline and other uterotrophic studies with 84% to 100% accuracy. The performance of high-throughput assays and ToxCast ER model predictions demonstrates that these methods correctly identify active and inactive reference chemicals, provide a measure of relative ER bioactivity, and rapidly identify chemicals with potential endocrine bioactivities for additional screening and testing. EPA is accepting ToxCast ER model data for 1812 chemicals as alternatives for EDSP Tier 1 ER binding, ER transactivation, and uterotrophic assays.
Rotating pressure measurement system using an on board calibration standard
NASA Technical Reports Server (NTRS)
Senyitko, Richard G.; Blumenthal, Philip Z.; Freedman, Robert J.
1991-01-01
A computer-controlled multichannel pressure measurement system was developed to acquire detailed flow field measurements on board the Large Low Speed Centrifugal Compressor Research Facility at the NASA Lewis Research Center. A pneumatic slip ring seal assembly is used to transfer calibration pressures to a reference standard transducer on board the compressor rotor in order to measure very low differential pressures with the high accuracy required. A unique data acquisition system was designed and built to convert the analog signal from the reference transducer to the variable frequency required by the multichannel pressure measurement system and also to provide an output for temperature control of the reference transducer. The system also monitors changes in test cell barometric pressure and rotating seal leakage and provides an on-screen warning to the operator if limits are exceeded. The methods used for the selection and testing of the reference transducer are discussed, and the data acquisition system hardware and software design are described. The calculated and experimental data for the system measurement accuracy are also presented.
Depth calibration of the Experimental Advanced Airborne Research Lidar, EAARL-B
Wright, C. Wayne; Kranenburg, Christine J.; Troche, Rodolfo J.; Mitchell, Richard W.; Nagle, David B.
2016-05-17
The resulting calibrated EAARL-B data were then analyzed and compared with the original reference dataset, the jet-ski-based dataset from the same Fort Lauderdale site, as well as the depth-accuracy requirements of the International Hydrographic Organization (IHO). We do not claim to meet all of the IHO requirements and standards. The IHO minimum depth-accuracy requirements were used as a reference only, and we do not address the other IHO requirements such as “Full Seafloor Search”. Our results show good agreement between the calibrated EAARL-B data and all reference datasets, with results that are within the 95 percent depth accuracy of the IHO Order 1 (a and b) depth-accuracy requirements.
Joint genomic evaluation of French dairy cattle breeds using multiple-trait models.
Karoui, Sofiene; Carabaño, María Jesús; Díaz, Clara; Legarra, Andrés
2012-12-07
Using a multi-breed reference population might be a way of increasing the accuracy of genomic breeding values in small breeds. Models involving mixed-breed data do not take into account the fact that marker effects may differ among breeds. This study was aimed at investigating the impact on accuracy of increasing the number of genotyped candidates in the training set by using a multi-breed reference population, in contrast to single-breed genomic evaluations. Three traits (milk production, fat content and female fertility) were analyzed by genomic mixed linear models and Bayesian methodology. Three breeds of French dairy cattle were used: Holstein, Montbéliarde and Normande with 2976, 950 and 970 bulls in the training population, respectively and 964, 222 and 248 bulls in the validation population, respectively. All animals were genotyped with the Illumina Bovine SNP50 array. Accuracy of genomic breeding values was evaluated under three scenarios for the correlation of genomic breeding values between breeds (r(g)): uncorrelated (1), r(g) = 0; estimated r(g) (2); high, r(g) = 0.95 (3). Accuracy and bias of predictions obtained in the validation population with the multi-breed training set were assessed by the coefficient of determination (R(2)) and by the regression coefficient of daughter yield deviations of validation bulls on their predicted genomic breeding values, respectively. The genetic variation captured by the markers for each trait was similar to that estimated for routine pedigree-based genetic evaluation. Posterior means for rg ranged from -0.01 for fertility between Montbéliarde and Normande to 0.79 for milk yield between Montbéliarde and Holstein. Differences in R(2) between the three scenarios were notable only for fat content in the Montbéliarde breed: from 0.27 in scenario (1) to 0.33 in scenarios (2) and (3). Accuracies for fertility were lower than for other traits. Using a multi-breed reference population resulted in small or no increases in accuracy. Only the breed with a small data set and large genetic correlation with the breed with a large data set showed increased accuracy for the traits with moderate (milk) to high (fat content) heritability. No benefit was observed for fertility, a lowly heritable trait.
Accuracy of body mass index for age to diagnose obesity in Mexican schoolchildren.
Mendoza Pablo, Pedro A; Valdés, Jesús; Ortiz-Hernández, Luis
2015-06-01
To compare the accuracy of three BMI-for-age references (the World Health Organization reference, WHO; the updated International Obesity Task Force reference, IOTF; and the Centers for Disease Control and Prevention (CDC) growth charts) to diagnose obesity in Mexican children. A convenience sample of Mexican schoolchildren (n = 218) was assessed. The gold standard was the percentage of body fat estimated by the deuterium dilution technique. Sensitivity and specificity of the classical cutoff point of BMI-for-age to identify obesity (i.e. > 2.00 standard deviations, SD) were estimated. The accuracy (i.e. area under the curve, AUC) of the three BMI-for-age references for the diagnosis of obesity was estimated with the receiver operating characteristic (ROC) curve method. The optimal cutoff point (OCP) was determined. The cutoff points to identify obesity had low (WHO reference: 57.6%, CDC: 53.5%) to very low (IOTF reference: 40.4%) sensitivities, but adequate specificities (91.6%, 95.0%, and 97.5%, respectively). The AUC of the three references was adequate (0.89). For the IOTF reference, the AUC was lower among the older children. The OCP for the CDC reference (1.24 SD) was lower than the OCP for the WHO (1.53 SD) and IOTF charts (1.47 SD). The classical cutoff point for obesity has low sensitivity, especially for the IOTF reference. The accuracy of the three references was similar. However, to obtain comparable diagnoses of obesity, different cutoff points should be used depending on the reference. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
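The ROC and optimal-cutoff machinery used in this abstract can be sketched with scikit-learn: compute the ROC curve of BMI-for-age z-scores against the body-fat-based obesity label and pick the cutoff maximising Youden's J. The synthetic data below are purely illustrative and do not reproduce the study's sample.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(4)
    obese = rng.random(218) < 0.25                         # "gold standard" label from body fat
    bmi_z = np.where(obese, rng.normal(2.0, 0.8, 218), rng.normal(0.3, 0.9, 218))

    fpr, tpr, thresholds = roc_curve(obese, bmi_z)
    auc = roc_auc_score(obese, bmi_z)
    youden = tpr - fpr
    optimal_cutoff = thresholds[np.argmax(youden)]
    print(f"AUC = {auc:.2f}, optimal BMI-for-age cutoff = {optimal_cutoff:.2f} SD")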
Experimental studies of high-accuracy RFID localization with channel impairments
NASA Astrophysics Data System (ADS)
Pauls, Eric; Zhang, Yimin D.
2015-05-01
Radio frequency identification (RFID) systems present an incredibly cost-effective and easy-to-implement solution to close-range localization. One of the important applications of a passive RFID system is to determine the reader position through multilateration based on the estimated distances between the reader and multiple distributed reference tags obtained from, e.g., the received signal strength indicator (RSSI) readings. In practice, the achievable accuracy of passive RFID reader localization suffers from many factors, such as the distorted RSSI reading due to channel impairments in terms of the susceptibility to reader antenna patterns and multipath propagation. Previous studies have shown that the accuracy of passive RFID localization can be significantly improved by properly modeling and compensating for such channel impairments. The objective of this paper is to report experimental study results that validate the effectiveness of such approaches for high-accuracy RFID localization. We also examine a number of practical issues arising in the underlying problem that limit the accuracy of reader-tag distance measurements and, therefore, the estimated reader localization. These issues include the variations in tag radiation characteristics for similar tags, effects of tag orientations, and reader RSS quantization and measurement errors. As such, this paper reveals valuable insights of the issues and solutions toward achieving high-accuracy passive RFID localization.
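A compact sketch of the two-step procedure this abstract assumes: convert RSSI readings to distances with a log-distance path-loss model, then solve for the reader position by non-linear least squares over the reference-tag positions. The path-loss constants, tag layout and use of scipy are illustrative assumptions, not the experimental setup of the paper.

    import numpy as np
    from scipy.optimize import least_squares

    def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exp=2.0):
        """Log-distance path-loss model inverted to give distance in metres."""
        return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

    tag_positions = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 4.0], [0.0, 4.0]])
    rssi = np.array([-52.1, -55.3, -58.9, -56.0])          # readings from each reference tag
    distances = rssi_to_distance(rssi)

    def residuals(p):
        # Difference between geometric distances to the tags and RSSI-derived distances.
        return np.linalg.norm(tag_positions - p, axis=1) - distances

    fit = least_squares(residuals, x0=np.array([2.0, 2.0]))
    print(fit.x)        # estimated reader position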
Widdifield, Jessica; Bombardier, Claire; Bernatsky, Sasha; Paterson, J Michael; Green, Diane; Young, Jacqueline; Ivers, Noah; Butt, Debra A; Jaakkimainen, R Liisa; Thorne, J Carter; Tu, Karen
2014-06-23
We have previously validated administrative data algorithms to identify patients with rheumatoid arthritis (RA) using rheumatology clinic records as the reference standard. Here we reassessed the accuracy of the algorithms using primary care records as the reference standard. We performed a retrospective chart abstraction study using a random sample of 7500 adult patients under the care of 83 family physicians contributing to the Electronic Medical Record Administrative data Linked Database (EMRALD) in Ontario, Canada. Using physician-reported diagnoses as the reference standard, we computed and compared the sensitivity, specificity, and predictive values for over 100 administrative data algorithms for RA case ascertainment. We identified 69 patients with RA for a lifetime RA prevalence of 0.9%. All algorithms had excellent specificity (>97%). However, sensitivity varied (75-90%) among physician billing algorithms. Despite the low prevalence of RA, most algorithms had adequate positive predictive value (PPV; 51-83%). The algorithm of "[1 hospitalization RA diagnosis code] or [3 physician RA diagnosis codes with ≥1 by a specialist over 2 years]" had a sensitivity of 78% (95% CI 69-88), specificity of 100% (95% CI 100-100), PPV of 78% (95% CI 69-88) and NPV of 100% (95% CI 100-100). Administrative data algorithms for detecting RA patients achieved a high degree of accuracy amongst the general population. However, results varied slightly from our previous report, which can be attributed to differences in the reference standards with respect to disease prevalence, spectrum of disease, and type of comparator group.
Assessment of Required Accuracy of Digital Elevation Data for Hydrologic Modeling
NASA Technical Reports Server (NTRS)
Kenward, T.; Lettenmaier, D. P.
1997-01-01
The effect of vertical accuracy of Digital Elevation Models (DEMs) on hydrologic models is evaluated by comparing three DEMs and the resulting hydrologic model predictions applied to a 7.2 sq km USDA-ARS watershed at Mahantango Creek, PA. The high resolution (5 m) DEM was resampled to a 30 m resolution using a method that constrained the spatial structure of the elevations to be comparable with the USGS and SIR-C DEMs. The resulting 30 m DEM was used as the reference product for subsequent comparisons. Spatial fields of directly derived quantities, such as elevation differences, slope, and contributing area, were compared to the reference product, as were hydrologic model output fields derived using each of the three DEMs at the common 30 m spatial resolution.
Erbe, M; Hayes, B J; Matukumalli, L K; Goswami, S; Bowman, P J; Reich, C M; Mason, B A; Goddard, M E
2012-07-01
Achieving accurate genomic estimated breeding values for dairy cattle requires a very large reference population of genotyped and phenotyped individuals. Assembling such reference populations has been achieved for breeds such as Holstein, but is challenging for breeds with fewer individuals. An alternative is to use a multi-breed reference population, such that smaller breeds gain some advantage in accuracy of genomic estimated breeding values (GEBV) from information from larger breeds. However, this requires that marker-quantitative trait loci associations persist across breeds. Here, we assessed the gain in accuracy of GEBV in Jersey cattle as a result of using a combined Holstein and Jersey reference population, with either 39,745 or 624,213 single nucleotide polymorphism (SNP) markers. The surrogate used for accuracy was the correlation of GEBV with daughter trait deviations in a validation population. Two methods were used to predict breeding values, either a genomic BLUP (GBLUP_mod), or a new method, BayesR, which used a mixture of normal distributions as the prior for SNP effects, including one distribution that set SNP effects to zero. The GBLUP_mod method scaled both the genomic relationship matrix and the additive relationship matrix to a base at the time the breeds diverged, and regressed the genomic relationship matrix to account for sampling errors in estimating relationship coefficients due to a finite number of markers, before combining the 2 matrices. Although these modifications did result in less biased breeding values for Jerseys compared with an unmodified genomic relationship matrix, BayesR gave the highest accuracies of GEBV for the 3 traits investigated (milk yield, fat yield, and protein yield), with an average increase in accuracy compared with GBLUP_mod across the 3 traits of 0.05 for both Jerseys and Holsteins. The advantage was limited for either Jerseys or Holsteins in using 624,213 SNP rather than 39,745 SNP (0.01 for Holsteins and 0.03 for Jerseys, averaged across traits). Even this limited and nonsignificant advantage was only observed when BayesR was used. An alternative panel, which extracted the SNP in the transcribed part of the bovine genome from the 624,213 SNP panel (to give 58,532 SNP), performed better, with an increase in accuracy of 0.03 for Jerseys across traits. This panel captures much of the increased genomic content of the 624,213 SNP panel, with the advantage of a greatly reduced number of SNP effects to estimate. Taken together, using this panel, a combined breed reference and using BayesR rather than GBLUP_mod increased the accuracy of GEBV in Jerseys from 0.43 to 0.52, averaged across the 3 traits. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
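For readers unfamiliar with the genomic relationship matrix underlying these GBLUP comparisons, a minimal sketch of the standard (VanRaden) construction is given below. It does not include the scaling-to-base or regression modifications of GBLUP_mod described in the abstract, and the toy genotype data are purely illustrative.

    import numpy as np

    def genomic_relationship_matrix(genotypes):
        """VanRaden genomic relationship matrix from 0/1/2 allele counts.

        genotypes: (n_animals, n_snps) array of allele counts.
        """
        p = genotypes.mean(axis=0) / 2.0              # allele frequencies
        Z = genotypes - 2.0 * p                       # centre each SNP by 2p
        return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

    rng = np.random.default_rng(5)
    M = rng.integers(0, 3, size=(10, 500)).astype(float)   # toy genotypes
    G = genomic_relationship_matrix(M)
    print(G.shape, round(float(np.mean(np.diag(G))), 2))   # diagonal close to 1 on average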
Development of one-shot aspheric measurement system with a Shack-Hartmann sensor.
Furukawa, Yasunori; Takaie, Yuichi; Maeda, Yoshiki; Ohsaki, Yumiko; Takeuchi, Seiji; Hasegawa, Masanobu
2016-10-10
We present a measurement system for a rotationally symmetric aspheric surface that is designed for accurate and high-volume measurements. The system uses the Shack-Hartmann sensor and is capable of measuring aspheres with a maximum diameter of 90 mm in one shot. In our system, a reference surface, made with the same aspheric parameter as the test surface, is prepared. The test surface is recovered as the deviation from the reference surface using a figure-error reconstruction algorithm with a ray coordinate and angle variant table. In addition, we developed a method to calibrate the rotationally symmetric system error. These techniques produce stable measurements and high accuracy. For high-throughput measurements, a single measurement scheme and auto alignment are implemented; they produce a 4.5 min measurement time, including calibration and alignment. In this paper, we introduce the principle and calibration method of our system. We also demonstrate that our system achieved an accuracy better than 5.8 nm RMS and a repeatability of 0.75 nm RMS by comparing our system's aspheric measurement results with those of a probe measurement machine.
Thermal error analysis and compensation for digital image/volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing
2018-02-01
Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effects or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermally induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high-accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers; thus, originality is not claimed. Instead, this paper aims to give a comprehensive overview of, and more insight into, our work on thermal error analysis and compensation for DIC/DVC measurements.
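The recommended reference sample compensation approach amounts to subtracting the apparent deformation measured on a load-free reference specimen, imaged alongside the test specimen, from the test measurement. A minimal sketch with synthetic strain fields (illustrative data and names only, not the authors' code):

```python
import numpy as np

def compensate_thermal_strain(eps_test, eps_ref):
    """Subtract the apparent strain seen on a load-free reference sample
    from the strain measured on the test sample (element-wise, same grid)."""
    return eps_test - eps_ref

# toy data: true mechanical strain plus a common thermally induced virtual strain
rng = np.random.default_rng(0)
thermal_drift = 2e-4 * np.ones((10, 10))            # apparent strain from image expansion
eps_true = rng.normal(1e-3, 1e-5, size=(10, 10))    # strain actually carried by the test sample
eps_test = eps_true + thermal_drift                 # what DIC reports on the test sample
eps_ref = thermal_drift + rng.normal(0, 1e-6, size=(10, 10))  # DIC on the reference sample

eps_corr = compensate_thermal_strain(eps_test, eps_ref)
print("mean error before:", np.abs(eps_test - eps_true).mean())
print("mean error after: ", np.abs(eps_corr - eps_true).mean())
```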
Electromagnetic Metrics of Mental Workload.
1987-09-01
anxiety). A decrease in performance accuracy has been used in the context of overload, however it has also been associated with a "high workload". 1.2 ... heart rate variability (HRV) to refer to any variation from a constant heart rate. The term HRV shall not refer to any specific method of numerically ... in using HRV as a measure of mental load arose after Kalsbeek & Ettema (1963) reported that HRV was "gradually suppressed when increasing the
Reference layer adaptive filtering (RLAF) for EEG artifact reduction in simultaneous EEG-fMRI.
Steyrl, David; Krausz, Gunther; Koschutnig, Karl; Edlinger, Günter; Müller-Putz, Gernot R
2017-04-01
Simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) combines advantages of both methods, namely high temporal resolution of EEG and high spatial resolution of fMRI. However, EEG quality is limited due to severe artifacts caused by fMRI scanners. To improve EEG data quality substantially, we introduce methods that use a reusable reference layer EEG cap prototype in combination with adaptive filtering. The first method, reference layer adaptive filtering (RLAF), uses adaptive filtering with reference layer artifact data to optimize artifact subtraction from EEG. In the second method, multi-band reference layer adaptive filtering (MBRLAF), adaptive filtering is performed on bandwidth-limited sub-bands of the EEG and the reference channels. The results suggest that RLAF outperforms the baseline method, average artifact subtraction, in all settings, and also its direct predecessor, reference layer artifact subtraction (RLAS), in lower (<35 Hz) frequency ranges. MBRLAF is computationally more demanding than RLAF, but highly effective in all EEG frequency ranges. Effectiveness is determined by visual inspection, as well as by root-mean-square voltage reduction and power reduction of the EEG, provided that physiological EEG components such as occipital EEG alpha power and visual evoked potentials (VEP) are preserved. We demonstrate that both RLAF and MBRLAF improve VEP quality. For that, we calculate the mean-squared distance of single-trial VEP to the mean VEP and estimate single-trial VEP classification accuracies. We found that the average mean-squared distance is lowest and the average classification accuracy is highest after MBRLAF. RLAF was second best. In conclusion, the results suggest that RLAF and MBRLAF are potentially very effective in improving EEG quality of simultaneous EEG-fMRI. Highlights: We present a new and reusable reference layer cap prototype for simultaneous EEG-fMRI. We introduce new algorithms for reducing EEG artifacts due to simultaneous fMRI. The algorithms combine a reference layer and adaptive filtering. Several evaluation criteria suggest superior effectiveness in terms of artifact reduction. We demonstrate that physiological EEG components are preserved.
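The core of RLAF can be illustrated with a plain normalized LMS adaptive filter that predicts the artifact in a scalp channel from the reference-layer channel and subtracts it; MBRLAF would apply the same step separately per frequency sub-band. A minimal sketch with synthetic signals (filter length, step size and signal scaling are illustrative, not the authors' implementation):

```python
import numpy as np

def nlms_artifact_subtraction(scalp, reference, n_taps=5, mu=0.5):
    """Normalized LMS filter: estimate the artifact in `scalp` from the
    reference-layer channel and subtract it, returning the cleaned signal."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(scalp)
    for n in range(n_taps - 1, len(scalp)):
        x = reference[n - n_taps + 1:n + 1][::-1]   # current + past reference samples
        e = scalp[n] - w @ x                        # error = cleaned EEG sample
        w += mu * e * x / (x @ x + 1e-20)           # normalized LMS weight update
        cleaned[n] = e
    return cleaned

# toy data: 10 Hz EEG plus a broadband artifact that also appears on the reference layer
rng = np.random.default_rng(2)
fs = 250.0
t = np.arange(20 * fs) / fs
eeg = 5e-6 * np.sin(2 * np.pi * 10 * t)                     # 5 uV alpha rhythm
artifact = rng.normal(0.0, 50e-6, t.size)                   # 50 uV RMS scanner artifact
scalp = eeg + artifact
reference = 0.8 * artifact + rng.normal(0.0, 1e-6, t.size)  # reference layer sees mostly artifact

cleaned = nlms_artifact_subtraction(scalp, reference)
print("RMS before: %.1f uV, after: %.1f uV" %
      (1e6 * np.sqrt(np.mean(scalp ** 2)), 1e6 * np.sqrt(np.mean(cleaned ** 2))))
```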
The performance of flash glucose monitoring in critically ill patients with diabetes.
Ancona, Paolo; Eastwood, Glenn M; Lucchetta, Luca; Ekinci, Elif I; Bellomo, Rinaldo; Mårtensson, Johan
2017-06-01
Frequent glucose monitoring may improve glycaemic control in critically ill patients with diabetes. We aimed to assess the accuracy of a novel subcutaneous flash glucose monitor (FreeStyle Libre [Abbott Diabetes Care]) in these patients. We applied the FreeStyle Libre sensor to the upper arm of eight patients with diabetes in the intensive care unit and obtained hourly flash glucose measurements. Duplicate recordings were obtained to assess test-retest reliability. The reference glucose level was measured in arterial or capillary blood. We determined numerical accuracy using Bland-Altman methods, the mean absolute relative difference (MARD) and whether the International Organization for Standardization (ISO) and Clinical and Laboratory Standards Institute Point of Care Testing (CLSI POCT) criteria were met. Clarke error grid (CEG) and surveillance error grid (SEG) analyses were used to determine clinical accuracy. We compared 484 duplicate flash glucose measurements and observed a Pearson correlation coefficient of 0.97 and a coefficient of repeatability of 1.6 mmol/L. We studied 185 flash readings paired with arterial glucose levels, and 89 paired with capillary glucose levels. Using the arterial glucose level as the reference, we found a mean bias of 1.4 mmol/L (limits of agreement, -1.7 to 4.5 mmol/L). The MARD was 14% (95% CI, 12%-16%) and the proportion of measurements meeting ISO and CLSI POCT criteria was 64.3% and 56.8%, respectively. The proportions of values within a low-risk zone on CEG and SEG analyses were 97.8% and 99.5%, respectively. Using capillary glucose levels as the reference, we found that numerical and clinical accuracy were lower. The subcutaneous FreeStyle Libre blood glucose measurement system showed high test-retest reliability and acceptable accuracy when compared with arterial blood glucose measurement in critically ill patients with diabetes.
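The numerical accuracy statistics reported here (mean bias, limits of agreement and MARD) are straightforward to compute from paired readings. A minimal sketch with synthetic data (the ISO and CLSI POCT acceptance criteria themselves are not reproduced):

```python
import numpy as np

def bland_altman_mard(flash, ref):
    """Mean bias, 95% limits of agreement (Bland-Altman) and MARD in percent."""
    diff = flash - ref
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    mard = np.mean(np.abs(diff) / ref) * 100.0
    return bias, loa, mard

# toy paired measurements in mmol/L: flash readings with a positive bias vs arterial reference
rng = np.random.default_rng(3)
ref = rng.uniform(4.0, 15.0, 185)
flash = ref + 1.4 + rng.normal(0.0, 1.5, ref.size)
bias, loa, mard = bland_altman_mard(flash, ref)
print(f"bias {bias:.2f} mmol/L, LoA {loa[0]:.2f} to {loa[1]:.2f} mmol/L, MARD {mard:.1f}%")
```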
Wu, Chunwei; Guan, Qingxiao; Wang, Shumei; Rong, Yueying
2017-01-01
Root of Panax ginseng C. A. Mey (Renseng in Chinese) is a famous Traditional Chinese Medicine. Ginsenosides are the major bioactive components. However, the shortage and high cost of some ginsenoside reference standards make quality control of P. ginseng difficult. A method, single standard for determination of multicomponents (SSDMC), was developed for the simultaneous determination of nine ginsenosides in P. ginseng (ginsenosides Rg1, Re, Rf, Rg2, Rb1, Rc, Rb2, Rb3, and Rd). The analytes were separated on an Inertsil ODS-3 C18 column (250 mm × 4.6 mm, 5 μm) with gradient elution of acetonitrile and water. The flow rate was 1 mL/min and the detection wavelength was set at 203 nm. The feasibility and accuracy of SSDMC were checked against the external standard method, and various high-performance liquid chromatography (HPLC) instruments and chromatographic conditions were investigated to verify its applicability. Using ginsenoside Rg1 as the internal reference substance, the contents of the other eight ginsenosides were calculated according to conversion factors (F) by HPLC. The method was validated with respect to linearity (r2 ≥ 0.9990), precision (relative standard deviation [RSD] ≤2.9%), accuracy (97.5%-100.8%, RSD ≤1.6%), repeatability, and stability. There was no significant difference between the SSDMC method and the external standard method. The new SSDMC method can be considered an ideal means of analyzing components for which reference standards are not readily available. In summary, a method, single standard for determination of multicomponents (SSDMC), was established by high-performance liquid chromatography for the simultaneous determination of nine ginsenosides in Panax ginseng (ginsenosides Rg1, Re, Rf, Rg2, Rb1, Rc, Rb2, Rb3, Rd); various chromatographic conditions were investigated to verify the applicability of the conversion factors; and the feasibility and accuracy of SSDMC were checked by the external standard method. Abbreviations used: DRT: Difference of retention time; F: Conversion factor; HPLC: High-performance liquid chromatography; LOD: Limit of detection; LOQ: Limit of quantitation; PD: Percent difference; PPD: 20(S)-protopanaxadiol; PPT: 20(S)-protopanaxatriol; RSD: Relative standard deviation; SSDMC: Single standard for determination of multicomponents; TCM: Traditional Chinese Medicine.
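The single-standard (SSDMC) quantification step reduces to applying pre-established conversion factors to the peak areas of components lacking reference standards. A minimal sketch with hypothetical peak areas and factors (one common formulation of the calculation, not the paper's values):

```python
# Single standard for determination of multicomponents (SSDMC), sketched:
# the content of analyte k is estimated from its peak area and a pre-established
# conversion factor F_k relating its detector response to that of the internal
# reference substance (here ginsenoside Rg1).  All numbers below are hypothetical.

def ssdmc_contents(area_ref, conc_ref, peak_areas, factors):
    """Concentration of each analyte from its peak area, the reference peak
    area/concentration, and the conversion factor F_k."""
    return {k: factors[k] * peak_areas[k] * conc_ref / area_ref for k in peak_areas}

area_rg1, conc_rg1 = 1250.0, 0.202            # peak area and concentration (mg/mL) of the Rg1 standard
areas = {"Re": 980.0, "Rb1": 1430.0, "Rd": 310.0}   # hypothetical sample peak areas
F = {"Re": 0.95, "Rb1": 1.10, "Rd": 1.05}           # hypothetical conversion factors
print(ssdmc_contents(area_rg1, conc_rg1, areas, F))
```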
How accurate are quotations and references in medical journals?
de Lacey, G; Record, C; Wade, J
1985-09-28
The accuracy of quotations and references in six medical journals published during January 1984 was assessed. The original author was misquoted in 15% of all references, and most of the errors would have misled readers. Errors in citation of references occurred in 24%, of which 8% were major errors--that is, they prevented immediate identification of the source of the reference. Inaccurate quotations and citations are displeasing for the original author, misleading for the reader, and mean that untruths become "accepted fact." Suggestions for reducing these high levels of inaccuracy include returning papers scheduled for publication that contain errors of citation to the author for complete checking, and inserting a permanent column specifically for misquotations into the journal.
Baxter, Suzanne Domel; Smith, Albert F; Hardin, James W; Nichols, Michele D
2007-04-01
Validation study data are used to illustrate that conclusions about children's reporting accuracy for energy and macronutrients over multiple interviews (ie, time) depend on the analytic approach for comparing reported and reference information: conventional, which disregards accuracy of reported items and amounts, or reporting-error-sensitive, which classifies reported items as matches (eaten) or intrusions (not eaten), and amounts as corresponding or overreported. Children were observed eating school meals on 1 day (n=12), or 2 (n=13) or 3 (n=79) nonconsecutive days separated by ≥25 days, and interviewed in the morning after each observation day about intake the previous day. Reference (observed) and reported information were transformed to energy and macronutrients (ie, protein, carbohydrate, and fat), and compared. Outcome measures for energy and each macronutrient were report rates (reported/reference), correspondence rates (genuine accuracy measures), and inflation ratios (error measures); mixed-model analyses were used. Using the conventional approach for analyzing energy and macronutrients, report rates did not vary systematically over interviews (all four P values >0.61). Using the reporting-error-sensitive approach for analyzing energy and macronutrients, correspondence rates increased over interviews (all four P values <0.04), indicating that reporting accuracy improved over time; inflation ratios decreased, although not significantly, over interviews, also suggesting that reporting accuracy improved over time. Correspondence rates were lower than report rates, indicating that reporting accuracy was worse than implied by conventional measures. When analyzed using the reporting-error-sensitive approach, children's dietary reporting accuracy for energy and macronutrients improved over time, but the conventional approach masked improvements and overestimated accuracy. The reporting-error-sensitive approach is recommended when analyzing data from validation studies of dietary reporting accuracy for energy and macronutrients.
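The contrast between the conventional and reporting-error-sensitive measures can be made concrete at the item level. A minimal sketch with illustrative items (the study's exact amount-matching rules and the mixed-model analysis are not reproduced):

```python
def conventional_and_error_sensitive(items):
    """Each item has reference (observed) and reported amounts of, say, energy:
    reference == 0 with reported > 0 is an intrusion; reported == 0 with
    reference > 0 is an omission; both > 0 is a match."""
    reference_total = sum(i["reference"] for i in items)
    reported_total = sum(i["reported"] for i in items)
    # conventional: total reported over total reference, item identity ignored
    report_rate = reported_total / reference_total
    # reporting-error-sensitive: only matched items contribute corresponding amounts
    corresponding = sum(min(i["reported"], i["reference"])
                        for i in items if i["reference"] > 0 and i["reported"] > 0)
    overreported = reported_total - corresponding
    correspondence_rate = corresponding / reference_total   # genuine accuracy
    inflation_ratio = overreported / reference_total        # error measure
    return report_rate, correspondence_rate, inflation_ratio

meal = [
    {"reference": 250, "reported": 240},   # match, amount slightly under-reported
    {"reference": 120, "reported": 0},     # omission
    {"reference": 0,   "reported": 150},   # intrusion (not eaten but reported)
]
print(conventional_and_error_sensitive(meal))  # report rate looks fine, correspondence rate does not
```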
Wu, Mixia; Zhang, Dianchen; Liu, Aiyi
2016-01-01
New biomarkers continue to be developed for the purpose of diagnosis, and their diagnostic performances are typically compared with an existing reference biomarker used for the same purpose. Considerable amounts of research have focused on receiver operating characteristic curves analysis when the reference biomarker is dichotomous. In the situation where the reference biomarker is measured on a continuous scale and dichotomization is not practically appealing, an index was proposed in the literature to measure the accuracy of a continuous biomarker, which is essentially a linear function of the popular Kendall's tau. We consider the issue of estimating such an accuracy index when the continuous reference biomarker is measured with errors. We first investigate the impact of measurement errors on the accuracy index, and then propose methods to correct for the bias due to measurement errors. Simulation results show the effectiveness of the proposed estimator in reducing biases. The methods are exemplified with hemoglobin A1c measurements obtained from both the central lab and a local lab to evaluate the accuracy of the mean data obtained from the metered blood glucose monitoring against the centrally measured hemoglobin A1c from a behavioral intervention study for families of youth with type 1 diabetes.
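A naive version of such a Kendall's-tau-based accuracy index is easy to compute, and a small simulation shows the attenuation caused by error in the continuous reference, which is the bias the paper corrects for. A minimal sketch (the mapping of tau onto the unit interval and all data are illustrative; the proposed correction itself is not reproduced):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(4)
true_ref = rng.normal(size=300)                          # underlying continuous reference biomarker
new_marker = 0.8 * true_ref + rng.normal(0, 0.6, 300)    # new biomarker to be evaluated
observed_ref = true_ref + rng.normal(0, 0.5, 300)        # reference observed with measurement error

tau_true, _ = kendalltau(new_marker, true_ref)
tau_obs, _ = kendalltau(new_marker, observed_ref)
index = lambda tau: (tau + 1.0) / 2.0                    # illustrative linear map of tau to [0, 1]
print("index vs error-free reference:", round(index(tau_true), 3))
print("index vs error-prone reference:", round(index(tau_obs), 3), "(attenuated)")
```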
NASA Astrophysics Data System (ADS)
Coulot, David; Richard, Jean-Yves
2017-04-01
Many major indicators of climate change are monitored with space observations (sea level rise from satellite altimetry, ice melting from dedicated satellites, etc.). This monitoring is highly dependent on references (positions and velocities of ground observing instruments, orbits of satellites, etc.) that only geodesy can provide. The current accuracy of these references is not sufficient to fully support the challenges that the constantly evolving Earth system gives rise to, and can consequently limit the accuracy of these indicators. For this reason, in the framework of the Global Geodetic Observing System (GGOS), stringent requirements have been set for the International Terrestrial Reference Frame (ITRF) for the next decade: an accuracy at the level of 1 mm and a stability at the level of 0.1 mm/yr. This means an improvement of the current quality of the ITRF by a factor of 5-10. Improving the quality of the geodetic references is an issue which requires a thorough reassessment of the methodologies involved. The most relevant and promising method to improve this quality is the direct combination (Combination at Observation Level, COL) of the space-geodetic measurements used to compute the official references of the International Earth Rotation and Reference Systems Service (IERS). The GEODESIE project aims at (i) determining highly accurate, global and consistent references (time series of Terrestrial Reference Frames and Celestial Reference Frames, of Earth's Orientation Parameters, and orbits of Earth observation satellites) and (ii) providing the geophysical and climate research communities with these references, for a better estimation of geocentric sea level rise, ice mass balance and ongoing climate changes. Time series of sea levels computed from altimetric data and tide gauge records with these references (orbits of satellite altimeters, Terrestrial Reference Frames and related vertical velocities of stations) will also be provided. The geodetic references will be essential bases for Earth observation and monitoring to support the challenges of the century. The geocentric time series of sea levels will make it possible to better understand (i) the drivers of the global mean sea level rise and of regional variations of sea level and (ii) the contribution of the global climate change induced by anthropogenic greenhouse gas emissions to these drivers. All the results and the computation and quality assessment reports will be available on a website designed and opened in the summer of 2017. This project, supported by the French Agence Nationale de la Recherche (ANR) for the period 2017-2020, will be an unprecedented opportunity to provide the French Groupe de Recherche de Géodésie Spatiale (GRGS) with complete simulation and data processing capabilities to prepare for the future arrival of space missions such as the European Geodetic Reference Antenna in SPace (E-GRASP) and to contribute significantly to GGOS with accurate references.
NASA Astrophysics Data System (ADS)
Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.
2016-06-01
Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high-density data of the whole object in a short time, but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is the low density of data, while for non-contact methods it is low accuracy. In this paper, a method is presented for fusing data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), to overcome the limitations of both measuring methods. In the proposed method, the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair, the coordinates of the point from the contact measurement are treated as a reference for the corresponding point from the non-contact measurement. A transformation that maps the characteristic points from the optical measurement onto their counterparts from the contact measurement is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: a plane, a turbine blade and an engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade, but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
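The fusion step rests on a rigid-body transformation estimated from the pairs of corresponding characteristic points and then applied to the whole optical point cloud. A minimal sketch using the standard SVD (Kabsch) solution on synthetic points (the virtual-marker construction itself is not reproduced):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst
    (both n x 3), via the SVD (Kabsch) method."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# characteristic points seen by both systems: optical (low accuracy) vs contact/CMM (reference)
rng = np.random.default_rng(5)
cmm_pts = rng.uniform(0.0, 100.0, size=(6, 3))
ang = 0.02
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
optical_pts = cmm_pts @ R_true.T + np.array([0.3, -0.2, 0.1]) + rng.normal(0, 0.05, cmm_pts.shape)

R, t = rigid_transform(optical_pts, cmm_pts)        # map the optical frame onto the contact frame
cloud = rng.uniform(0.0, 100.0, size=(1000, 3))     # whole optical point cloud
cloud_corrected = cloud @ R.T + t                   # apply the same transform to all points
resid = np.linalg.norm(optical_pts @ R.T + t - cmm_pts, axis=1)
print("marker residual RMS:", np.sqrt((resid ** 2).mean()))
```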
Buczinski, S; Fecteau, G; Chigerwe, M; Vandeweerd, J M
2016-06-01
Calves are highly dependent on colostrum (and antibody) intake because they are born agammaglobulinemic. The transfer of passive immunity in calves can be assessed directly by measuring immunoglobulin G (IgG) or indirectly by refractometry or Brix refractometry. The latter are easier to perform routinely in the field. This paper presents a protocol for a systematic review and meta-analysis to assess the diagnostic accuracy of refractometry or Brix refractometry against IgG measurement as the reference standard test. With this review protocol we aim to report refractometer and Brix refractometer accuracy in terms of sensitivity and specificity, as well as to quantify the impact of any study characteristic on test accuracy.
Development and calibration of an accurate 6-degree-of-freedom measurement system with total station
NASA Astrophysics Data System (ADS)
Gao, Yang; Lin, Jiarui; Yang, Linghui; Zhu, Jigui
2016-12-01
To meet the demand of high-accuracy, long-range and portable use in large-scale metrology for pose measurement, this paper develops a 6-degree-of-freedom (6-DOF) measurement system based on total station by utilizing its advantages of long range and relative high accuracy. The cooperative target sensor, which is mainly composed of a pinhole prism, an industrial lens, a camera and a biaxial inclinometer, is designed to be portable in use. Subsequently, a precise mathematical model is proposed from the input variables observed by total station, imaging system and inclinometer to the output six pose variables. The model must be calibrated in two levels: the intrinsic parameters of imaging system, and the rotation matrix between coordinate systems of the camera and the inclinometer. Then corresponding approaches are presented. For the first level, we introduce a precise two-axis rotary table as a calibration reference. And for the second level, we propose a calibration method by varying the pose of a rigid body with the target sensor and a reference prism on it. Finally, through simulations and various experiments, the feasibilities of the measurement model and calibration methods are validated, and the measurement accuracy of the system is evaluated.
Thorne, John C; Coggins, Truman E; Carmichael Olson, Heather; Astley, Susan J
2007-04-01
To evaluate classification accuracy and clinical feasibility of a narrative analysis tool for identifying children with a fetal alcohol spectrum disorder (FASD). Picture-elicited narratives generated by 16 age-matched pairs of school-aged children (FASD vs. typical development [TD]) were coded for semantic elaboration and reference strategy by judges who were unaware of age, gender, and group membership of the participants. Receiver operating characteristic (ROC) curves were used to examine the classification accuracy of the resulting set of narrative measures for making 2 classifications: (a) for the 16 children diagnosed with FASD, low performance (n = 7) versus average performance (n = 9) on a standardized expressive language task and (b) FASD (n = 16) versus TD (n = 16). Combining the rates of semantic elaboration and pragmatically inappropriate reference perfectly matched a classification based on performance on the standardized language task. More importantly, the rate of ambiguous nominal reference was highly accurate in classifying children with an FASD regardless of their performance on the standardized language task (area under the ROC curve = .863, confidence interval = .736-.991). Results support further study of the diagnostic utility of narrative analysis using discourse level measures of elaboration and children's strategic use of reference.
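The reported classification accuracy is the area under the ROC curve for a single continuous narrative measure against group membership. A minimal sketch of that computation on synthetic rates (not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(6)
# rate of ambiguous nominal reference per child, assumed higher in the FASD group (illustrative)
rate_td = rng.beta(2, 20, 16)      # typically developing children
rate_fasd = rng.beta(5, 15, 16)    # children with an FASD
y = np.r_[np.zeros(16), np.ones(16)]
score = np.r_[rate_td, rate_fasd]

auc = roc_auc_score(y, score)
fpr, tpr, thr = roc_curve(y, score)
j = np.argmax(tpr - fpr)           # one operating point: maximize Youden's J
print(f"AUC = {auc:.3f}; threshold {thr[j]:.3f} gives sens {tpr[j]:.2f}, spec {1 - fpr[j]:.2f}")
```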
Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output
NASA Astrophysics Data System (ADS)
Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu
2015-07-01
Low-latency high-rate (1 Hz) precise real-time kinematic (RTK) can be applied in high-speed scenarios such as aircraft automatic landing, precise agriculture and intelligent vehicle. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, is not able to obtain a low-latency high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from two receivers. The asynchronous observation model (AOM) is developed based on undifferenced carrier phase observation equations of the two receivers at different epochs with short baseline. The ephemeris error and atmosphere delay are the possible main error sources on positioning accuracy in this model, and they are analyzed theoretically. In a short DLTTD and during a period of quiet ionosphere activity, the main error sources decreasing positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integration of satellite velocity error which increase linearly along with DLTTD. The cycle slip of asynchronous double-differencing carrier phase is detected by TurboEdit method and repaired by the additional ambiguity parameter method. The AOM can deal with synchronous observation model (SOM) and achieve precise positioning solution with synchronous observations as well, since the SOM is only a specific case of AOM. The proposed method not only can reduce the cost of data collection and transmission, but can also support the mobile phone network data link transfer mode for the data of the reference receiver. This method can avoid data synchronizing process besides ambiguity initialization step, which is very convenient for real-time navigation of vehicles. The static and kinematic experiment results show that this method achieves 20 Hz or even higher rate output in real time. The ARTK positioning accuracy is better and more robust than the combination of phase difference over time (PDOT) and SRTK method at a high rate. The ARTK positioning accuracy is equivalent to SRTK solution when the DLTTD is 0.5 s, and centimeter level accuracy can be achieved even when DLTTD is 15 s.
Point-of-care wound visioning technology: Reproducibility and accuracy of a wound measurement app
Anderson, John A. E.; Evans, Robyn; Woo, Kevin; Beland, Benjamin; Sasseville, Denis; Moreau, Linda
2017-01-01
Background Current wound assessment practices are lacking on several measures. For example, the most common method for measuring wound size is using a ruler, which has been demonstrated to be crude and inaccurate. An increase in periwound temperature is a classic sign of infection but skin temperature is not always measured during wound assessments. To address this, we have developed a smartphone application that enables non-contact wound surface area and temperature measurements. Here we evaluate the inter-rater reliability and accuracy of this novel point-of-care wound assessment tool. Methods and findings The wounds of 87 patients were measured using the Swift Wound app and a ruler. The skin surface temperature of 37 patients was also measured using an infrared FLIR™ camera integrated with the Swift Wound app and using the clinically accepted reference thermometer Exergen DermaTemp 1001. Accuracy measurements were determined by assessing differences in surface area measurements of 15 plastic wounds between a digital planimeter of known accuracy and the Swift Wound app. To evaluate the impact of training on the reproducibility of the Swift Wound app measurements, three novice raters with no wound care training, measured the length, width and area of 12 plastic model wounds using the app. High inter-rater reliabilities (ICC = 0.97–1.00) and high accuracies were obtained using the Swift Wound app across raters of different levels of training in wound care. The ruler method also yielded reliable wound measurements (ICC = 0.92–0.97), albeit lower than that of the Swift Wound app. Furthermore, there was no statistical difference between the temperature differences measured using the infrared camera and the clinically tested reference thermometer. Conclusions The Swift Wound app provides highly reliable and accurate wound measurements. The FLIR™ infrared camera integrated into the Swift Wound app provides skin temperature readings equivalent to the clinically tested reference thermometer. Thus, the Swift Wound app has the advantage of being a non-contact, easy-to-use wound measurement tool that allows clinicians to image, measure, and track wound size and temperature from one visit to the next. In addition, this tool may also be used by patients and their caregivers for home monitoring. PMID:28817649
Clark, Samuel A; Hickey, John M; Daetwyler, Hans D; van der Werf, Julius H J
2012-02-09
The theory of genomic selection is based on the prediction of the effects of genetic markers in linkage disequilibrium with quantitative trait loci. However, genomic selection also relies on relationships between individuals to accurately predict genetic value. This study aimed to examine the importance of information on relatives versus that of unrelated or more distantly related individuals on the estimation of genomic breeding values. Simulated and real data were used to examine the effects of various degrees of relationship on the accuracy of genomic selection. Genomic Best Linear Unbiased Prediction (gBLUP) was compared to two pedigree-based BLUP methods, one with a shallow one-generation pedigree and the other with a deep ten-generation pedigree. The accuracy of estimated breeding values for different groups of selection candidates that had varying degrees of relationship to a reference data set of 1750 animals was investigated. The gBLUP method predicted breeding values more accurately than BLUP. The most accurate breeding values were estimated using gBLUP for closely related animals. Similarly, the pedigree-based BLUP methods were also accurate for closely related animals; however, when the pedigree-based BLUP methods were used to predict unrelated animals, the accuracy was close to zero. In contrast, gBLUP breeding values for animals that had no pedigree relationship with animals in the reference data set still achieved substantial accuracy. An animal's relationship to the reference data set is an important factor for the accuracy of genomic predictions. Animals that share a close relationship to the reference data set had the highest accuracy from genomic predictions. However, a baseline accuracy, driven by the size of the reference data set and the effective population size, enables gBLUP to estimate a breeding value for unrelated animals within a population (breed), using information previously ignored by pedigree-based BLUP methods.
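The pedigree-based baseline rests on the additive (numerator) relationship matrix built recursively from the pedigree; its construction makes clear why animals with no recorded pedigree link to the reference data receive essentially no information from pedigree BLUP, whereas a genomic relationship matrix still captures realized relationships. A minimal sketch of the tabular method for A with an illustrative pedigree:

```python
import numpy as np

def a_matrix(sires, dams):
    """Numerator relationship matrix by the tabular method.
    sires[i]/dams[i] are parent indices of animal i (-1 if unknown);
    animals must be ordered so that parents precede their offspring."""
    n = len(sires)
    A = np.zeros((n, n))
    for i in range(n):
        s, d = sires[i], dams[i]
        A[i, i] = 1.0 + (0.5 * A[s, d] if s >= 0 and d >= 0 else 0.0)  # 1 + inbreeding
        for j in range(i):
            aij = 0.0
            if s >= 0:
                aij += 0.5 * A[j, s]
            if d >= 0:
                aij += 0.5 * A[j, d]
            A[i, j] = A[j, i] = aij
    return A

# animals 0,1 are founders; 2 = 0 x 1; 3 = 0 x 2; 4 has unknown parents (pedigree-unrelated)
sires = [-1, -1, 0, 0, -1]
dams = [-1, -1, 1, 2, -1]
print(np.round(a_matrix(sires, dams), 3))   # row/column for animal 4 is zero off-diagonal
```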
A Demonstration of GPS Landslide Monitoring Using Online Positioning User Service (OPUS)
NASA Astrophysics Data System (ADS)
Wang, G.
2011-12-01
Global Positioning System (GPS) technologies have been frequently applied to landslide study, both as a complement to and as an alternative to conventional surveying methods. However, most applications of GPS for landslide monitoring have been limited to the academic community for research purposes. High-accuracy GPS has not been widely adopted by geotechnical companies and used by technicians. The main issue that limits the application of GPS in the practice of high-accuracy landslide monitoring is the complexity of GPS data processing. This study demonstrated an approach using the Online Positioning User Service (OPUS) (http://www.ngs.noaa.gov/OPUS) provided by the National Geodetic Survey (NGS) of the National Oceanic and Atmospheric Administration (NOAA) to process GPS data and conduct long-term landslide monitoring in the Puerto Rico and Virgin Islands Region. Continuous GPS data collected at a creeping landslide site during two years were used to evaluate different scenarios for landslide surveying: continuous or campaign, long duration or short duration, morning or afternoon (different weather conditions). OPUS uses Continuously Operating Reference Stations (CORS) managed by NGS (http://www.ngs.noaa.gov/CORS/) as references and the user data as a rover to solve a position. There are 19 CORS permanent GPS stations in the Puerto Rico and Virgin Islands region. The dense GPS network provides a precise and reliable reference frame for subcentimeter-accuracy landslide monitoring in this region. Our criterion for accuracy was the root-mean-square (RMS) of OPUS solutions over a 2-year period with respect to the true landslide displacement time series over the same period. The true landslide displacements were derived from a single-baseline (130 m) GPS processing using 24-hour continuous data. If continuous GPS surveying is performed in the field, then OPUS static processing can provide 0.6 cm horizontal and 1.1 cm vertical precision with few outliers. If repeated campaign-style surveying is performed in the field, then the choice of observation time window and duration are very important. In order to detect a suspected sliding mass and track the kinematics of a creeping landslide, sub-centimeter horizontal accuracy is often required. OPUS static solutions for sessions of 4 hours or longer and OPUS rapid-static solutions for sessions as short as 15 minutes can achieve accuracy at this level if data collection during extreme weather conditions, such as rainfall and storms, is avoided. This study also indicated that rainfall events can seriously degrade the performance of high-accuracy GPS. Field GPS landslide surveying should avoid rainfall periods, which are usually accompanied by thunderstorms and the passage of weather fronts.
Andrews, Kimberly R; Adams, Jennifer R; Cassirer, E Frances; Plowright, Raina K; Gardner, Colby; Dwire, Maggie; Hohenlohe, Paul A; Waits, Lisette P
2018-06-05
The development of high-throughput sequencing technologies is dramatically increasing the use of single nucleotide polymorphisms (SNPs) across the field of genetics, but most parentage studies of wild populations still rely on microsatellites. We developed a bioinformatic pipeline for identifying SNP panels that are informative for parentage analysis from restriction site-associated DNA sequencing (RADseq) data. This pipeline includes options for analysis with or without a reference genome, and provides methods to maximize genotyping accuracy and select sets of unlinked loci that have high statistical power. We test this pipeline on small populations of Mexican gray wolf and bighorn sheep, for which parentage analyses are expected to be challenging due to low genetic diversity and the presence of many closely related individuals. We compare the results of parentage analysis across SNP panels generated with or without the use of a reference genome, and between SNPs and microsatellites. For Mexican gray wolf, we conducted parentage analyses for 30 pups from a single cohort where samples were available from 64% of possible mothers and 53% of possible fathers, and the accuracy of parentage assignments could be estimated because true identities of parents were known a priori based on field data. For bighorn sheep, we conducted maternity analyses for 39 lambs from five cohorts where 77% of possible mothers were sampled, but true identities of parents were unknown. Analyses with and without a reference genome produced SNP panels with >95% parentage assignment accuracy for Mexican gray wolf, outperforming microsatellites at 78% accuracy. Maternity assignments were completely consistent across all SNP panels for the bighorn sheep, and were 74.4% consistent with assignments from microsatellites. Accuracy and consistency of parentage analysis were not reduced when using as few as 284 SNPs for Mexican gray wolf and 142 SNPs for bighorn sheep, indicating our pipeline can be used to develop SNP genotyping assays for parentage analysis with relatively small numbers of loci. This article is protected by copyright. All rights reserved.
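Once a panel of unlinked, reliably genotyped SNPs is available, a basic exclusion-style parentage check is a count of Mendelian incompatibilities between offspring and candidate parent. A minimal sketch with synthetic genotypes coded 0/1/2 (the pipeline's actual likelihood-based assignment step is not reproduced):

```python
import numpy as np

def mendelian_mismatches(offspring, parent, missing=-1):
    """Count loci where offspring/parent genotypes (0,1,2 copies of the alt allele)
    are incompatible under Mendelian inheritance, i.e. opposite homozygotes."""
    ok = (offspring != missing) & (parent != missing)
    opposite_hom = ((offspring == 0) & (parent == 2)) | ((offspring == 2) & (parent == 0))
    return int((opposite_hom & ok).sum()), int(ok.sum())

rng = np.random.default_rng(7)
n_snp = 284
p = rng.uniform(0.1, 0.5, n_snp)             # per-locus alternate allele frequencies
true_dam = rng.binomial(2, p)
unrelated = rng.binomial(2, p)
from_dam = rng.binomial(1, true_dam / 2)      # allele transmitted by the true dam
offspring = from_dam + rng.binomial(1, p)     # other allele drawn from the population

for label, cand in [("true dam", true_dam), ("unrelated female", unrelated)]:
    mm, n = mendelian_mismatches(offspring, cand)
    print(f"{label}: {mm} incompatibilities over {n} SNPs")
```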
Cameron, M; Perry, J; Middleton, J R; Chaffer, M; Lewis, J; Keefe, G P
2018-01-01
This study evaluated MALDI-TOF mass spectrometry and a custom reference spectra expanded database for the identification of bovine-associated coagulase-negative staphylococci (CNS). A total of 861 CNS isolates were used in the study, covering 21 different CNS species. The majority of the isolates were previously identified by rpoB gene sequencing (n = 804) and the remainder were identified by sequencing of hsp60 (n = 56) and tuf (n = 1). The genotypic identification was considered the gold standard identification. Using a direct transfer protocol and the existing commercial database, MALDI-TOF mass spectrometry showed a typeability of 96.5% (831/861) and an accuracy of 99.2% (824/831). Using a custom reference spectra expanded database, which included an additional 13 in-house created reference spectra, isolates were identified by MALDI-TOF mass spectrometry with 99.2% (854/861) typeability and 99.4% (849/854) accuracy. Overall, MALDI-TOF mass spectrometry using the direct transfer method was shown to be a highly reliable tool for the identification of bovine-associated CNS. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
The stars: an absolute radiometric reference for the on-orbit calibration of PLEIADES-HR satellites
NASA Astrophysics Data System (ADS)
Meygret, Aimé; Blanchet, Gwendoline; Mounier, Flore; Buil, Christian
2017-09-01
The accurate on-orbit radiometric calibration of optical sensors has become a challenge for space agencies, which pool their efforts through international working groups such as CEOS/WGCV or GSICS with the objective of ensuring the consistency of space measurements and reaching an absolute accuracy compatible with increasingly demanding scientific needs. Different targets are traditionally used for calibration depending on the sensor or spacecraft specificities: from on-board calibration systems to ground targets, they all take advantage of our capacity to characterize and model them. But achieving the in-flight stability of a diffuser panel is always a challenge, while calibration over ground targets is often limited by their BRDF characterization and atmospheric variability. Thanks to their agility, some satellites have the capability to view extra-terrestrial targets such as the Moon or stars. The Moon is widely used for calibration and its albedo is known through the ROLO (RObotic Lunar Observatory) USGS model, but with a poor absolute accuracy limiting its use to sensor drift monitoring or cross-calibration. Although the spectral irradiance of some stars is known with very high accuracy, it had not really been shown that they could provide an absolute reference for remote sensor calibration. This paper shows that high resolution optical sensors can be calibrated with high absolute accuracy using stars. The agile-body PLEIADES 1A satellite is used for this demonstration. The star-based calibration principle is described and the results are provided for different stars, each one being acquired several times. These results are compared to the official calibration provided by ground targets, and the main error contributors are discussed.
Genotype imputation in a tropical crossbred dairy cattle population.
Oliveira Júnior, Gerson A; Chud, Tatiane C S; Ventura, Ricardo V; Garrick, Dorian J; Cole, John B; Munari, Danísio P; Ferraz, José B S; Mullart, Erik; DeNise, Sue; Smith, Shannon; da Silva, Marcos Vinícius G B
2017-12-01
The objective of this study was to investigate different strategies for genotype imputation in a population of crossbred Girolando (Gyr × Holstein) dairy cattle. The data set consisted of 478 Girolando, 583 Gyr, and 1,198 Holstein sires genotyped at high density with the Illumina BovineHD (Illumina, San Diego, CA) panel, which includes ∼777K markers. The accuracy of imputation from low (20K) and medium densities (50K and 70K) to the HD panel density and from low to 50K density were investigated. Seven scenarios using different reference populations (RPop) considering Girolando, Gyr, and Holstein breeds separately or combinations of animals of these breeds were tested for imputing genotypes of 166 randomly chosen Girolando animals. Population genotype imputation was performed using FImpute. Imputation accuracy was measured as the correlation between observed and imputed genotypes (CORR) and also as the proportion of genotypes that were imputed correctly (CR). This is the first paper on imputation accuracy in a Girolando population. The sample-specific imputation accuracies ranged from 0.38 to 0.97 (CORR) and from 0.49 to 0.96 (CR) imputing from low and medium densities to HD, and from 0.41 to 0.95 (CORR) and from 0.50 to 0.94 (CR) for imputation from 20K to 50K. The CORRanim exceeded 0.96 (for 50K and 70K panels) when only Girolando animals were included in RPop (S1). We found smaller CORRanim when Gyr (S2) was used instead of Holstein (S3) as RPop. The same behavior was observed between S4 (Gyr + Girolando) and S5 (Holstein + Girolando) because the target animals were more related to the Holstein population than to the Gyr population. The highest imputation accuracies were observed for scenarios including Girolando animals in the reference population, whereas using only Gyr animals resulted in low imputation accuracies, suggesting that the haplotypes segregating in the Girolando population had a greater effect on accuracy than the purebred haplotypes. All chromosomes had similar imputation accuracies (CORRsnp) within each scenario. Crossbred animals (Girolando) must be included in the reference population to provide the best imputation accuracies. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
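The two imputation accuracy metrics used here are simple to restate: the correlation between observed and imputed genotypes (CORR) and the proportion of genotypes imputed correctly (CR). A minimal per-sample sketch with synthetic genotypes:

```python
import numpy as np

def imputation_accuracy(observed, imputed):
    """Per-sample metrics for genotypes coded 0/1/2:
    CORR = correlation(observed, imputed), CR = fraction of genotypes identical."""
    corr = np.corrcoef(observed, imputed)[0, 1]
    cr = np.mean(observed == imputed)
    return corr, cr

rng = np.random.default_rng(8)
p = rng.uniform(0.05, 0.5, 5000)                  # per-locus allele frequencies
observed = rng.binomial(2, p)                     # genotypes actually assayed on the HD panel
imputed = observed.copy()
wrong = rng.random(observed.size) < 0.05          # suppose 5% of genotypes are imputed incorrectly
imputed[wrong] = rng.integers(0, 3, wrong.sum())
print("CORR = %.3f, CR = %.3f" % imputation_accuracy(observed, imputed))
```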
Certified reference materials (GBW09170 and 09171) of creatinine in human serum.
Dai, Xinhua; Fang, Xiang; Shao, Mingwu; Li, Ming; Huang, Zejian; Li, Hongmei; Jiang, You; Song, Dewei; He, Yajuan
2011-02-15
Creatinine is the most widely used clinical marker for assessing renal function. Concentrations of creatinine in human serum need to be carefully checked in order to ensure accurate diagnosis of renal function. Therefore, development of certified reference materials (CRMs) of creatinine in serum is of increasing importance. In this study, two new CRMs (Nos. GBW09170 and 09171) for creatinine in human serum have been developed. They were prepared from mixtures of several dozen healthy people's and kidney disease patients' serum, respectively. The certified values of 8.10 and 34.1 mg/kg for these two CRMs were assigned by a liquid chromatography-isotope dilution mass spectrometry (LC-IDMS) method, which was validated using the standard reference material SRM909b (a reference material obtained from the National Institute of Standards and Technology, NIST). The expanded uncertainties of the certified values for the low and high concentrations were estimated to be 1.2 and 1.1%, respectively. The certified values were further confirmed by an international intercomparison for the determination of creatinine in human serum (Consultative Committee for Amount of Substance, CCQM), CCQM-K80. These new CRMs of creatinine in human serum are entirely native serum pools, with no additional creatinine spiked in for enrichment. They are capable of validating routine clinical methods to ensure the accuracy, reliability and comparability of analytical results from different clinical laboratories. They can also be used for instrument validation, development of secondary reference materials, and evaluating the accuracy of higher-order clinical methods for the determination of creatinine in human serum. Copyright © 2011 Elsevier B.V. All rights reserved.
Ejlersen, June A; May, Ole; Mortensen, Jesper; Nielsen, Gitte L; Lauridsen, Jeppe F; Johansen, Allan
2017-11-01
Patients with normal stress perfusion have an excellent prognosis. Prospective studies on the diagnostic accuracy of stress-only scans with contemporary, independent examinations as gold standards are lacking. A total of 109 patients with typical angina and no previous coronary artery disease underwent a 2-day stress (exercise)/rest, gated, and attenuation-corrected (AC), 99m-technetium-sestamibi perfusion study, followed by invasive coronary angiography. The stress datasets were evaluated twice by four physicians with two different training levels (expert and novice): familiar and unfamiliar with AC. The two experts also made a consensus reading of the integrated stress-rest datasets. The consensus reading and quantitative data from the invasive coronary angiography were applied as reference methods. The sensitivity/specificity were 0.92-1.00/0.73-0.90 (reference: expert consensus reading), 0.93-0.96/0.63-0.82 (reference: ≥1 stenosis>70%), and 0.75-0.88/0.70-0.88 (reference: ≥1 stenosis>50%). The four readers showed a high and fairly equal sensitivity independent of their familiarity with AC. The expert familiar with AC had the highest specificity independent of the reference method. The intraobserver and interobserver agreements on the stress-only readings were good (readers without AC experience) to excellent (readers with AC experience). AC stress-only images yielded a high sensitivity independent of the training level and experience with AC of the nuclear physician, whereas the specificity correlated positively with both. Interobserver and intraobserver agreements tended to be the best for physicians with AC experience.
Accuracy of Digital vs. Conventional Implant Impressions
Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.
2015-01-01
The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model, exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423
Study on the position accuracy of a mechanical alignment system
NASA Astrophysics Data System (ADS)
Cai, Yimin
In this thesis, we investigated the precision level achieved by a mechanical alignment system using datums and reference surfaces and established its baseline. The factors which affect the accuracy of the mechanical alignment system were studied, and methodology was developed to suppress these factors so as to reach the system's full potential precision. In order to characterize the mechanical alignment system quantitatively, a new optical position monitoring system using quadrant detectors has been developed in this thesis; it can monitor multiple degrees of freedom of mechanical workpieces in real time with high precision. We studied the noise factors inside the system and optimized the optical system. Based on the fact that one of the major limiting noise factors is the shifting of the laser beam, a noise cancellation technique has been developed successfully to suppress this noise, and the feasibility of ultra-high resolution (<20 Å) displacement monitoring has been demonstrated. Using the optical position monitoring system, repeatability experiments on the mechanical alignment system have been conducted on different kinds of samples, including steel, aluminum, glass and plastics, all of the same size (100 mm × 130 mm). The alignment accuracy was studied quantitatively, rather than only qualitatively as before. In a controlled environment, the alignment precision can be improved fivefold by securing the datum without other means of help. The alignment accuracy of an aluminum workpiece with a reference surface produced by milling is about 3 times better than one produced by shearing. We also found that the sample material can have a fairly significant effect on the alignment precision of the system. Contamination trapped between the datum and reference surfaces in a mechanical alignment system can cause registration errors or reduce the level of manufacturing precision. In the thesis, artificial and natural dust particles were used to simulate real situations, and their effects on system precision have been investigated. In this experiment, we discovered two effective cleaning processes.
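The position read-out of a quadrant detector, the sensor at the heart of the optical monitoring system described here, follows from differencing the four segment signals. A minimal sketch (the scale factor k is an illustrative calibration constant, not the thesis' value):

```python
def quadrant_position(a, b, c, d, k=1.0):
    """Beam displacement from the four quadrant photocurrents.
    Quadrants are assumed ordered A (top-right), B (top-left), C (bottom-left),
    D (bottom-right); k converts the normalized signal to a length (calibration)."""
    s = a + b + c + d
    x = k * ((a + d) - (b + c)) / s
    y = k * ((a + b) - (c + d)) / s
    return x, y

# toy read-out: beam slightly up and to the right; k given in micrometres per unit signal
print(quadrant_position(a=1.10, b=0.95, c=0.90, d=1.05, k=50.0))
```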
Comparison between multi-constellation ambiguity-fixed PPP and RTK for maritime precise navigation
NASA Astrophysics Data System (ADS)
Tegedor, Javier; Liu, Xianglin; Ørpen, Ole; Treffers, Niels; Goode, Matthew; Øvstedal, Ola
2015-06-01
In order to achieve high-accuracy positioning, either Real-Time Kinematic (RTK) or Precise Point Positioning (PPP) techniques can be used. While RTK normally delivers higher accuracy with shorter convergence times, PPP has been an attractive technology for maritime applications, as it delivers uniform positioning performance without the direct need of a nearby reference station. Traditional PPP has been based on ambiguity-float solutions using the GPS and Glonass constellations. However, the addition of new satellite systems, such as Galileo and BeiDou, and the ability to fix integer carrier-phase ambiguities (PPP-AR) allow PPP accuracy to be increased. In this article, a performance assessment of RTK, PPP and PPP-AR has been carried out using GNSS data collected from two antennas installed on a ferry navigating in Oslo (Norway). RTK solutions have been generated using short, medium and long baselines (up to 290 km). For the generation of PPP-AR solutions, Uncalibrated Hardware Delays (UHDs) for GPS, Galileo and BeiDou have been estimated using reference stations in Oslo and Onsala. The performance of RTK and multi-constellation PPP and PPP-AR is presented.
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
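Given effective Born radii, the generalized Born polar solvation energy that underlies the OBC model is compact enough to state explicitly via Still's interpolation formula. A minimal sketch (the OBC computation of the effective radii, where the tuned parameters enter, is not reproduced; the radii are taken as inputs, and units are left symbolic):

```python
import numpy as np

def gb_energy(q, coords, born_radii, eps_in=1.0, eps_out=78.5):
    """Generalized Born polar solvation energy (Still's formula):
    E = -0.5 * (1/eps_in - 1/eps_out) * sum_ij q_i q_j / f_GB(r_ij, R_i, R_j),
    with f_GB = sqrt(r^2 + R_i R_j exp(-r^2 / (4 R_i R_j))).
    The diagonal terms reduce to the Born self-energies -0.5 * tau * q_i^2 / R_i."""
    tau = 1.0 / eps_in - 1.0 / eps_out
    r2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    RR = born_radii[:, None] * born_radii[None, :]
    f_gb = np.sqrt(r2 + RR * np.exp(-r2 / (4.0 * RR)))
    return -0.5 * tau * (np.outer(q, q) / f_gb).sum()

# three-site toy solute with hypothetical partial charges, coordinates and Born radii
q = np.array([-0.8, 0.4, 0.4])
coords = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [-0.3, 0.9, 0.0]])
born_radii = np.array([1.5, 1.2, 1.2])
print("GB polar solvation energy (arbitrary units):", gb_energy(q, coords, born_radii))
```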
Quadratic canonical transformation theory and higher order density matrices.
Neuscamman, Eric; Yanai, Takeshi; Chan, Garnet Kin-Lic
2009-03-28
Canonical transformation (CT) theory provides a rigorously size-extensive description of dynamic correlation in multireference systems, with an accuracy superior to and cost scaling lower than complete active space second order perturbation theory. Here we expand our previous theory by investigating (i) a commutator approximation that is applied at quadratic, as opposed to linear, order in the effective Hamiltonian, and (ii) incorporation of the three-body reduced density matrix in the operator and density matrix decompositions. The quadratic commutator approximation improves CT's accuracy when used with a single-determinant reference, repairing the previous formal disadvantage of the single-reference linear CT theory relative to singles and doubles coupled cluster theory. Calculations on the BH and HF binding curves confirm this improvement. In multireference systems, the three-body reduced density matrix increases the overall accuracy of the CT theory. Tests on the H(2)O and N(2) binding curves yield results highly competitive with expensive state-of-the-art multireference methods, such as the multireference Davidson-corrected configuration interaction (MRCI+Q), averaged coupled pair functional, and averaged quadratic coupled cluster theories.
Accuracy evaluation of intraoral optical impressions: A clinical study using a reference appliance.
Atieh, Mohammad A; Ritter, André V; Ko, Ching-Chang; Duqum, Ibrahim
2017-09-01
Trueness and precision are used to evaluate the accuracy of intraoral optical impressions. Although the in vivo precision of intraoral optical impressions has been reported, in vivo trueness has not been evaluated because of limitations in the available protocols. The purpose of this clinical study was to compare the accuracy (trueness and precision) of optical and conventional impressions by using a novel study design. Five study participants consented and were enrolled. For each participant, optical and conventional (vinylsiloxanether) impressions of a custom-made intraoral Co-Cr alloy reference appliance fitted to the mandibular arch were obtained by 1 operator. Three-dimensional (3D) digital models were created for stone casts obtained from the conventional impression group and for the reference appliances by using a validated high-accuracy reference scanner. For the optical impression group, 3D digital models were obtained directly from the intraoral scans. The total mean trueness of each impression system was calculated by averaging the mean absolute deviations of the impression replicates from their 3D reference model for each participant, followed by averaging the obtained values across all participants. The total mean precision for each impression system was calculated by averaging the mean absolute deviations between all the impression replicates for each participant (10 pairs), followed by averaging the obtained values across all participants. Data were analyzed using repeated measures ANOVA (α=.05), first to assess whether a systematic difference in trueness or precision of replicate impressions could be found among participants and second to assess whether the mean trueness and precision values differed between the 2 impression systems. Statistically significant differences were found between the 2 impression systems for both mean trueness (P=.010) and mean precision (P=.007). Conventional impressions were more accurate, with a mean trueness of 17.0 ±6.6 μm and a mean precision of 16.9 ±5.8 μm, than optical impressions, with a mean trueness of 46.2 ±11.4 μm and a mean precision of 61.1 ±4.9 μm. Complete arch (first molar-to-first molar) optical impressions were less accurate than conventional impressions but may be adequate for quadrant impressions. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
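The two aggregation steps described above can be written compactly; the sketch below assumes the per-replicate and per-pair mean absolute deviations (in micrometres) have already been extracted from the 3D surface comparisons, and the numbers are invented for illustration.

import numpy as np

def mean_trueness(per_participant_deviations):
    """Each inner list holds the mean absolute deviation of every impression
    replicate from the participant's 3D reference model; values are averaged
    within and then across participants."""
    return float(np.mean([np.mean(devs) for devs in per_participant_deviations]))

def mean_precision(per_participant_pairwise_deviations):
    """Each inner list holds the mean absolute deviation for every pair of
    replicates of one participant (10 pairs from 5 replicates)."""
    return float(np.mean([np.mean(pairs) for pairs in per_participant_pairwise_deviations]))

# Hypothetical numbers in micrometres for two participants.
trueness_devs = [[15.1, 18.2, 16.7, 17.9, 16.0], [18.4, 17.2, 19.1, 16.5, 18.0]]
precision_devs = [[16.2, 17.8, 15.9, 18.1, 16.5, 17.0, 16.8, 17.3, 15.7, 16.9],
                  [17.5, 16.1, 18.2, 17.0, 16.4, 17.9, 16.7, 17.2, 16.0, 17.6]]
print(mean_trueness(trueness_devs), mean_precision(precision_devs))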
NASA Astrophysics Data System (ADS)
Lösler, Michael; Haas, Rüdiger; Eschelbach, Cornelia
2013-08-01
The Global Geodetic Observing System (GGOS) requires sub-mm accuracy, automated and continual determinations of the so-called local tie vectors at co-location stations. Co-location stations host instrumentation for several space geodetic techniques and the local tie surveys involve the relative geometry of the reference points of these instruments. Thus, these reference points need to be determined in a common coordinate system, which is a particular challenge for rotating equipment like radio telescopes for geodetic Very Long Baseline Interferometry. In this work we describe a concept to achieve automated and continual determinations of radio telescope reference points with sub-mm accuracy. We developed a monitoring system, including Java-based sensor communication for automated surveys, network adjustment and further data analysis. This monitoring system was tested during a monitoring campaign performed at the Onsala Space Observatory in the summer of 2012. The results obtained in this campaign show that it is possible to perform automated determination of a radio telescope reference point during normal operations of the telescope. Accuracies on the sub-mm level can be achieved, and continual determinations can be realized by repeated determinations and recursive estimation methods.
High-order cyclo-difference techniques: An alternative to finite differences
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Otto, John C.
1993-01-01
The summation-by-parts energy norm is used to establish a new class of high-order finite-difference techniques referred to here as 'cyclo-difference' techniques. These techniques are constructed cyclically from stable subelements, and require no numerical boundary conditions; when coupled with the simultaneous approximation term (SAT) boundary treatment, they are time asymptotically stable for an arbitrary hyperbolic system. These techniques are similar to spectral element techniques and are ideally suited for parallel implementation, but do not require special collocation points or orthogonal basis functions. The principal focus is on methods of sixth-order formal accuracy or less; however, these methods could be extended in principle to any arbitrary order of accuracy.
Small arms mini-fire control system: fiber-optic barrel deflection sensor
NASA Astrophysics Data System (ADS)
Rajic, S.; Datskos, P.; Lawrence, W.; Marlar, T.; Quinton, B.
2012-06-01
Traditionally the methods to increase firearms accuracy, particularly at distance, have concentrated on barrel isolation (free floating) and substantial barrel wall thickening to gain rigidity. This barrel stiffening technique did not completely eliminate barrel movement but the problem was significantly reduced to allow a noticeable accuracy enhancement. This process, although highly successful, came at a very high weight penalty. Obviously the goal would be to lighten the barrel (firearm), yet achieve even greater accuracy. Thus, if lightweight barrels could ultimately be compensated for both their static and dynamic mechanical perturbations, the result would be very accurate, yet significantly lighter weight, weapons. We discuss our development of a barrel reference sensor system that is designed to accomplish this ambitious goal. Our optical fiber-based sensor monitors the barrel muzzle position and autonomously compensates for any induced perturbations. The reticle is electronically adjusted in position to compensate for the induced barrel deviation in real time.
Plotnikov, Nikolay V
2014-08-12
Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
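A minimal sketch of the single-step linear response approximation (LRA) average underlying the multistep positioning scheme described above; the energy-gap samples below are invented, and in the multistep variant this estimator would be applied to each intermediate perturbation step and the contributions summed.

import numpy as np

def lra_free_energy(dU_sampled_on_A, dU_sampled_on_B):
    """Linear response approximation for the free energy of moving from
    potential A to potential B:

        dG ~ 0.5 * ( <U_B - U_A>_A + <U_B - U_A>_B )

    Both arguments are arrays of the energy gap U_B - U_A, the first evaluated
    on configurations sampled with A, the second on configurations sampled with B.
    """
    return 0.5 * (np.mean(dU_sampled_on_A) + np.mean(dU_sampled_on_B))

# Hypothetical energy gaps (kcal/mol) between a semiempirical/MM-like reference
# potential (A) and a higher-level target potential (B).
gap_on_A = np.array([4.1, 3.8, 4.5, 4.0, 4.2])
gap_on_B = np.array([3.2, 3.5, 3.0, 3.4, 3.1])
print(f"LRA estimate: {lra_free_energy(gap_on_A, gap_on_B):.2f} kcal/mol")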
APPLICATION OF A "VITURAL FIELD REFERENCE DATABASE" TO ASSESS LAND-COVER MAP ACCURACIES
An accuracy assessment was performed for the Neuse River Basin, NC land-cover/use
(LCLU) mapping results using a "Virtual Field Reference Database (VFRDB)". The VFRDB was developed using field measurement and digital imagery (camera) data collected at 1,409 sites over a perio...
Wollaston prism phase-stepping point diffraction interferometer and method
Rushford, Michael C.
2004-10-12
A Wollaston prism phase-stepping point diffraction interferometer for testing a test optic. The Wollaston prism shears light into reference and signal beams, and provides phase stepping at increased accuracy by translating the Wollaston prism in a lateral direction with respect to the optical path. The reference beam produced by the Wollaston prism is directed through a pinhole of a diaphragm to produce a perfect spherical reference wave. The spherical reference wave is recombined with the signal beam to produce an interference fringe pattern of greater accuracy.
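The patent describes phase stepping by lateral translation of the Wollaston prism; one common way to turn a set of phase-stepped interferograms into a phase map is the four-step algorithm sketched below. This is a generic illustration under the assumption of 90° steps, not necessarily the exact reduction used in the patent.

import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped phase from four interferograms with nominal
    phase shifts of 0, 90, 180 and 270 degrees."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic test: a known phase map is recovered from four shifted frames.
phi_true = np.linspace(-np.pi, np.pi, 5)
frames = [1.0 + 0.8 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_est = four_step_phase(*frames)
print(np.allclose(np.angle(np.exp(1j * (phi_est - phi_true))), 0.0, atol=1e-12))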
Performance of genomic prediction within and across generations in maritime pine.
Bartholomé, Jérôme; Van Heerwaarden, Joost; Isik, Fikret; Boury, Christophe; Vidal, Marjorie; Plomion, Christophe; Bouffier, Laurent
2016-08-11
Genomic selection (GS) is a promising approach for decreasing breeding cycle length in forest trees. Assessment of progeny performance and of the prediction accuracy of GS models over generations is therefore a key issue. A reference population of maritime pine (Pinus pinaster) with an estimated effective inbreeding population size (status number) of 25 was first selected with simulated data. This reference population (n = 818) covered three generations (G0, G1 and G2) and was genotyped with 4436 single-nucleotide polymorphism (SNP) markers. We evaluated the effects on prediction accuracy of both the relatedness between the calibration and validation sets and validation on the basis of progeny performance. Pedigree-based (best linear unbiased prediction, ABLUP) and marker-based (genomic BLUP and Bayesian LASSO) models were used to predict breeding values for three different traits: circumference, height and stem straightness. On average, the ABLUP model outperformed genomic prediction models, with a maximum difference in prediction accuracies of 0.12, depending on the trait and the validation method. A mean difference in prediction accuracy of 0.17 was found between validation methods differing in terms of relatedness. Including the progenitors in the calibration set reduced this difference in prediction accuracy to 0.03. When only genotypes from the G0 and G1 generations were used in the calibration set and genotypes from G2 were used in the validation set (progeny validation), prediction accuracies ranged from 0.70 to 0.85. This study suggests that the training of prediction models on parental populations can predict the genetic merit of the progeny with high accuracy: an encouraging result for the implementation of GS in the maritime pine breeding program.
NASA Astrophysics Data System (ADS)
Wang, Qianxin; Hu, Chao; Xu, Tianhe; Chang, Guobin; Hernández Moraleda, Alberto
2017-12-01
Analysis centers (ACs) for global navigation satellite systems (GNSSs) cannot accurately obtain real-time Earth rotation parameters (ERPs). Thus, the prediction of ultra-rapid orbits in the international terrestrial reference system (ITRS) has to utilize the predicted ERPs issued by the International Earth Rotation and Reference Systems Service (IERS) or the International GNSS Service (IGS). In this study, the accuracy of ERPs predicted by IERS and IGS is analyzed. The error of the ERPs predicted for one day can reach 0.15 mas and 0.053 ms in the polar motion and UT1-UTC components, respectively. Then, the impact of ERP errors on ultra-rapid orbit prediction by GNSS is studied. The methods for orbit integration and frame transformation in orbit prediction with introduced ERP errors dominate the accuracy of the predicted orbit. Experimental results show that the transformation from the geocentric celestial reference system (GCRS) to ITRS exerts the strongest effect on the accuracy of the predicted ultra-rapid orbit. To obtain the most accurate predicted ultra-rapid orbit, a corresponding real-time orbit correction method is developed. First, orbits without ERP-related errors are predicted, on the basis of the observed part of the ultra-rapid orbit in the ITRS, for use as a reference. Then, the corresponding predicted orbit is transformed from GCRS to ITRS to adjust for the predicted ERPs. Finally, the corrected ERPs with error slopes are re-introduced to correct the predicted orbit in ITRS. To validate the proposed method, three experimental schemes are designed: function extrapolation, simulation experiments, and experiments with predicted ultra-rapid orbits and international GNSS Monitoring and Assessment System (iGMAS) products. Experimental results show that using the proposed correction method with IERS products considerably improved the accuracy of ultra-rapid orbit prediction (except the geosynchronous BeiDou orbits). The accuracy of orbit prediction is enhanced by at least 50% (error related to ERP) when a highly accurate observed orbit is used with the correction method. For iGMAS-predicted orbits, the accuracy improvement ranges from 8.5% for the inclined BeiDou orbits to 17.99% for the GPS orbits. This demonstrates that the correction method proposed by this study can optimize the ultra-rapid orbit prediction.
Uncertainty of OpenStreetMap data for the road network in Cyprus
NASA Astrophysics Data System (ADS)
Demetriou, Demetris
2016-08-01
Volunteered geographic information (VGI) refers to the geographic data compiled and created by individuals which are rendered on the Internet through specific web-based tools for diverse areas of interest. One of the most well-known VGI projects is OpenStreetMap (OSM), which provides worldwide free geospatial data representing a variety of features. A critical issue for all VGI initiatives is the quality of the information offered. Thus, this report looks into the uncertainty of the OSM dataset for the main road network in Cyprus. The evaluation is based on three basic quality standards, namely positional accuracy, completeness and attribute accuracy. The work has been carried out by employing the Model Builder of ArcGIS, which facilitated the comparison between the OSM data and the authoritative data provided by the Public Works Department (PWD). Findings showed that the positional accuracy increases with the hierarchical level of a road, varies per administrative District, and that around 70% of the roads have a positional accuracy within 6 m of the reference dataset. Completeness in terms of road length difference is around 25% for three out of four road categories examined, and road name completeness is 100% and around 40% for higher and lower level roads, respectively. Attribute accuracy focusing on road name is very high for all levels of roads. These outputs indicate that OSM data are good enough provided they fit the intended purpose of use. Furthermore, the study revealed some weaknesses of the methods used for calculating the positional accuracy, suggesting the need for methodological improvements.
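A minimal sketch of the kind of summary statistics reported above, assuming the per-segment positional offsets from the authoritative PWD dataset and the total road lengths have already been measured in a GIS; all numbers below are illustrative.

import numpy as np

def share_within_tolerance(offsets_m, tolerance_m=6.0):
    """Fraction of OSM road segments whose positional offset from the
    reference (authoritative) dataset is within the given tolerance."""
    offsets = np.asarray(offsets_m, dtype=float)
    return float(np.mean(offsets <= tolerance_m))

def length_completeness(osm_length_km, reference_length_km):
    """Relative road-length difference between OSM and the reference dataset."""
    return abs(osm_length_km - reference_length_km) / reference_length_km

# Hypothetical per-segment offsets (metres) and total lengths (km).
offsets = [1.2, 3.4, 5.9, 7.1, 2.2, 8.4, 4.0, 5.5, 3.1, 6.3]
print(f"within 6 m: {share_within_tolerance(offsets):.0%}")
print(f"length difference: {length_completeness(412.0, 545.0):.0%}")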
Awais, Muhammad; Khan, Dawar Burhan; Barakzai, Muhammad Danish; Rehman, Abdul; Baloch, Noor Ul-Ain; Nadeem, Naila
2018-05-01
To ascertain the accuracy and reliability of tablet as an imaging console for detection of radiological signs of acute appendicitis [on focused appendiceal computed tomography (FACT)] using Picture Archiving and Communication System (PACS) workstation as reference standard. From January, 2014 to June, 2015, 225 patients underwent FACT at our institution. These scans were blindly re-interpreted by an independent consultant radiologist, first on PACS workstation and, two weeks later, on tablet. Scans were interpreted for the presence of radiological signs of acute appendicitis. Accuracy of tablet was calculated using PACS as reference standard. Kappa (κ) statistics were calculated as a measure of reliability. Of 225 patients, 99 had radiological evidence of acute appendicitis on PACS workstation. Tablet was 100% accurate in detecting radiological signs of acute appendicitis. Appendicoliths, free fluid, lymphadenopathy, phlegmon/abscess, and perforation were identified on PACS in 90, 43, 39, 10, and 12 scans, respectively. There was excellent agreement between tablet and PACS for detection of appendicolith (к = 0.924), phlegmon/abscess (к = 0.904), free fluid (к = 0.863), lymphadenopathy (к = 0.879), and perforation (к = 0.904). Tablet computer, as an imaging console, was highly reliable and was as accurate as PACS workstation for the radiological diagnosis of acute appendicitis.
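Agreement between the tablet and PACS readings of a binary radiological sign is typically quantified with Cohen's kappa; the sketch below shows that computation for an assumed 2x2 table of counts (the counts are invented, not those of the study).

def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
       a = both readers positive, b = reader 1 positive / reader 2 negative,
       c = reader 1 negative / reader 2 positive, d = both negative."""
    n = a + b + c + d
    p_observed = (a + d) / n
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical counts for one sign read on tablet vs PACS workstation.
print(round(cohens_kappa(a=88, b=2, c=3, d=132), 3))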
2010-01-01
Background Acute urinary tract infections (UTI) are one of the most common bacterial infections among women presenting to primary care. However, there is a lack of consensus regarding the optimal reference standard threshold for diagnosing UTI. The objective of this systematic review is to determine the diagnostic accuracy of symptoms and signs in women presenting with suspected UTI, across three different reference standards (10² or 10³ or 10⁵ CFU/ml). We also examine the diagnostic value of individual symptoms and signs combined with dipstick test results in terms of clinical decision making. Methods Searches were performed through PubMed (1966 to April 2010), EMBASE (1973 to April 2010), Cochrane library (1973 to April 2010), Google scholar and reference checking. Studies that assessed the diagnostic accuracy of symptoms and signs of an uncomplicated UTI using a urine culture from a clean-catch or catheterised urine specimen as the reference standard, with a reference standard of at least ≥ 10² CFU/ml, were included. Synthesised data from a high quality systematic review were used regarding dipstick results. Studies were combined using a bivariate random effects model. Results Sixteen studies incorporating 3,711 patients are included. The weighted prior probability of UTI varies across diagnostic threshold: 65.1% at ≥ 10² CFU/ml; 55.4% at ≥ 10³ CFU/ml and 44.8% at ≥ 10⁵ CFU/ml. Six symptoms are identified as useful diagnostic symptoms when a threshold of ≥ 10² CFU/ml is the reference standard. Presence of dysuria (+LR 1.30 95% CI 1.20-1.41), frequency (+LR 1.10 95% CI 1.04-1.16), hematuria (+LR 1.72 95% CI 1.30-2.27), nocturia (+LR 1.30 95% CI 1.08-1.56) and urgency (+LR 1.22 95% CI 1.11-1.34) all increase the probability of UTI. The presence of vaginal discharge (+LR 0.65 95% CI 0.51-0.83) decreases the probability of UTI. Presence of hematuria has the highest diagnostic utility, raising the post-test probability of UTI to 75.8% at ≥ 10² CFU/ml and 67.4% at ≥ 10³ CFU/ml. Probability of UTI increases to 93.3% and 90.1% at ≥ 10² CFU/ml and ≥ 10³ CFU/ml respectively when presence of hematuria is combined with a positive dipstick result for nitrites. Subgroup analysis shows improved diagnostic accuracy using lower reference standards ≥ 10² CFU/ml and ≥ 10³ CFU/ml. Conclusions Individual symptoms and signs have a modest ability to raise the pretest-risk of UTI. Diagnostic accuracy improves considerably when combined with dipstick tests, particularly tests for nitrites. PMID:20969801
Giesen, Leonie G M; Cousins, Gráinne; Dimitrov, Borislav D; van de Laar, Floris A; Fahey, Tom
2010-10-24
Acute urinary tract infections (UTI) are one of the most common bacterial infections among women presenting to primary care. However, there is a lack of consensus regarding the optimal reference standard threshold for diagnosing UTI. The objective of this systematic review is to determine the diagnostic accuracy of symptoms and signs in women presenting with suspected UTI, across three different reference standards (10(2) or 10(3) or 10(5) CFU/ml). We also examine the diagnostic value of individual symptoms and signs combined with dipstick test results in terms of clinical decision making. Searches were performed through PubMed (1966 to April 2010), EMBASE (1973 to April 2010), Cochrane library (1973 to April 2010), Google scholar and reference checking. Studies that assessed the diagnostic accuracy of symptoms and signs of an uncomplicated UTI using a urine culture from a clean-catch or catheterised urine specimen as the reference standard, with a reference standard of at least ≥ 10(2) CFU/ml, were included. Synthesised data from a high quality systematic review were used regarding dipstick results. Studies were combined using a bivariate random effects model. Sixteen studies incorporating 3,711 patients are included. The weighted prior probability of UTI varies across diagnostic threshold: 65.1% at ≥ 10(2) CFU/ml; 55.4% at ≥ 10(3) CFU/ml and 44.8% at ≥ 10(5) CFU/ml. Six symptoms are identified as useful diagnostic symptoms when a threshold of ≥ 10(2) CFU/ml is the reference standard. Presence of dysuria (+LR 1.30 95% CI 1.20-1.41), frequency (+LR 1.10 95% CI 1.04-1.16), hematuria (+LR 1.72 95% CI 1.30-2.27), nocturia (+LR 1.30 95% CI 1.08-1.56) and urgency (+LR 1.22 95% CI 1.11-1.34) all increase the probability of UTI. The presence of vaginal discharge (+LR 0.65 95% CI 0.51-0.83) decreases the probability of UTI. Presence of hematuria has the highest diagnostic utility, raising the post-test probability of UTI to 75.8% at ≥ 10(2) CFU/ml and 67.4% at ≥ 10(3) CFU/ml. Probability of UTI increases to 93.3% and 90.1% at ≥ 10(2) CFU/ml and ≥ 10(3) CFU/ml respectively when presence of hematuria is combined with a positive dipstick result for nitrites. Subgroup analysis shows improved diagnostic accuracy using lower reference standards ≥ 10(2) CFU/ml and ≥ 10(3) CFU/ml. Individual symptoms and signs have a modest ability to raise the pretest-risk of UTI. Diagnostic accuracy improves considerably when combined with dipstick tests, particularly tests for nitrites.
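The reported post-test probabilities follow from standard likelihood-ratio arithmetic (probability to odds, multiply by the LR, back to probability); the sketch below reproduces that calculation for the hematuria example, with small differences from the published figure expected because the pooled estimates are rounded.

def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert a pre-test probability to a post-test probability via odds."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Example: prior probability of UTI 65.1% at the lowest threshold,
# positive likelihood ratio 1.72 for hematuria.
print(f"{post_test_probability(0.651, 1.72):.1%}")  # close to the reported 75.8%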
Zhang, Shengwei; Arfanakis, Konstantinos
2012-01-01
Purpose To investigate the effect of standardized and study-specific human brain diffusion tensor templates on the accuracy of spatial normalization, without ignoring the important roles of data quality and registration algorithm effectiveness. Materials and Methods Two groups of diffusion tensor imaging (DTI) datasets, with and without visible artifacts, were normalized to two standardized diffusion tensor templates (IIT2, ICBM81) as well as study-specific templates, using three registration approaches. The accuracy of inter-subject spatial normalization was compared across templates, using the most effective registration technique for each template and group of data. Results It was demonstrated that, for DTI data with visible artifacts, the study-specific template resulted in significantly higher spatial normalization accuracy than standardized templates. However, for data without visible artifacts, the study-specific template and the standardized template of higher quality (IIT2) resulted in similar normalization accuracy. Conclusion For DTI data with visible artifacts, a carefully constructed study-specific template may achieve higher normalization accuracy than that of standardized templates. However, as DTI data quality improves, a high-quality standardized template may be more advantageous than a study-specific template, since in addition to high normalization accuracy, it provides a standard reference across studies, as well as automated localization/segmentation when accompanied by anatomical labels. PMID:23034880
Virdis, Salvatore Gonario Pasquale
2014-01-01
Monitoring and mapping shrimp farms, including their impact on land cover and land use, is critical to the sustainable management and planning of coastal zones. In this work, a methodology was proposed to set up a cost-effective and reproducible procedure that made use of satellite remote sensing, an object-based classification approach, and open-source software for mapping aquaculture areas with high planimetric and thematic accuracy between 2005 and 2008. The analysis focused on two characteristic areas of interest of the Tam Giang-Cau Hai Lagoon (in central Vietnam), which have similar farming systems to other coastal aquaculture worldwide: the first was primarily characterised by shrimp ponds locally referred to as "low tide" ponds, which are partially submerged areas; the second by earthed shrimp ponds, locally referred to as "high tide" ponds, which are non-submerged areas on the lagoon coast. The approach was based on the region-growing segmentation of high- and very high-resolution panchromatic images, SPOT5 and Worldview-1, and the unsupervised clustering classifier ISOSEG embedded in the SPRING non-commercial software. The results, the accuracy of which was tested with a field-based aquaculture inventory, showed that in favourable situations (high tide shrimp ponds), the classification results provided high rates of accuracy (>95 %) through a fully automatic object-based classification. In unfavourable situations (low tide shrimp ponds), the performance degraded due to the low contrast between the water and the pond embankments. In these situations, the automatic results were improved by manual delineation of the embankments. Worldview-1, as expected, showed better thematic accuracy, and precise maps have been realised at a scale of up to 1:2,000. However, SPOT5 provided comparable results in terms of the number of correctly classified ponds, but less accurate results in terms of the precision of mapped features. The procedure also demonstrated high degrees of reproducibility because it was applied to images with different spatial resolutions in an area that, during the investigated period, did not experience significant land cover changes.
Urban Modelling Performance of Next Generation SAR Missions
NASA Astrophysics Data System (ADS)
Sefercik, U. G.; Yastikli, N.; Atalay, C.
2017-09-01
In synthetic aperture radar (SAR) technology, urban mapping and modelling have become possible with the revolutionary missions TerraSAR-X (TSX) and Cosmo-SkyMed (CSK) since 2007. These satellites offer 1 m spatial resolution in high-resolution spotlight imaging mode and are capable of high-quality digital surface model (DSM) acquisition for urban areas utilizing interferometric SAR (InSAR) technology. With the advantage of generation independent of seasonal weather conditions, TSX and CSK DSMs are much in demand by scientific users. The performance of SAR DSMs is influenced by distortions such as layover, foreshortening, shadow and double-bounce, which depend on the imaging geometry. In this study, the potential of DSMs derived from convenient 1 m high-resolution spotlight (HS) InSAR pairs of CSK and TSX is validated by model-to-model absolute and relative accuracy estimations in an urban area. For the verification, an airborne laser scanning (ALS) DSM of the study area was used as the reference model. Results demonstrated that TSX and CSK urban DSMs are compatible in open, built-up and forest land forms with an absolute accuracy of 8-10 m. The relative accuracies based on the coherence of neighbouring pixels are superior to the absolute accuracies both for CSK and TSX.
NASA Astrophysics Data System (ADS)
Wielicki, B. A.
2016-12-01
The CLARREO (Climate Absolute Radiance and Refractivity Observatory) Pathfinder mission is a new mission started by NASA in 2016. CLARREO Pathfinder will fly a new generation of high accuracy reflected solar spectrometer in orbit on the International Space Station (ISS) to demonstrate the ability to increase the accuracy of reflected solar observations from space by a factor of 3 to 20. The spectrometer will use the sun and moon as calibration sources with a baseline objective of 0.3% (1 sigma) reflectance calibration uncertainty for the contiguous spectrum from 350 nm to 2300 nm, covering over 95% of the Earth's reflected solar spectrum. Spectral sampling is 3 nm with a resolution of 6 nm. The spectrometer is mounted on a 2-axis gimbal enabling a new ability to use the same optical path to view the sun, moon, and Earth. Planned launch is 2020 with at least 1 year on orbit to demonstrate the new capability. The mission will also demonstrate the ability to use the new spectrometer as a reference transfer spectrometer in orbit to achieve intercalibration of reflected solar instruments to within 0.3% (1 sigma) using space, time, spectral, and angle matched observations across the full scan width of remote sensing instruments. Intercalibration to 0.3% will be demonstrated across the full scan width of the NASA CERES broadband radiometer and the NOAA VIIRS imager reflected solar spectral channels. This mission will demonstrate reflected solar intercalibration across the full swath width as opposed to the current nadir-only intercalibration used by GSICS (Global Space Based InterCalibration System). Intercalibration will include a new capability to determine the scan angle dependence of polarization sensitivity of instruments like VIIRS. The high accuracy goals of this mission are driven primarily by the accuracy required to more rapidly and accurately observe climate change signals such as cloud feedback (see Wielicki et al. 2013, Bulletin of the American Meteorological Society). The new high accuracy and intercalibration capability will also be very useful for serving as a reference calibrator for constellations of operational instruments in Geostationary or Low Earth Orbit (e.g. land resource imagers, ocean color, cloud imagers). The higher accuracy will enable operational sensors to more effectively serve as climate change sensors.
Ippolito, Davide; Drago, Silvia Girolama; Franzesi, Cammillo Talei; Fior, Davide; Sironi, Sandro
2016-01-01
AIM: To assess the diagnostic accuracy of multidetector-row computed tomography (MDCT) as compared with conventional magnetic resonance imaging (MRI), in identifying mesorectal fascia (MRF) invasion in rectal cancer patients. METHODS: Ninety-one patients with biopsy proven rectal adenocarcinoma referred for thoracic and abdominal CT staging were enrolled in this study. The contrast-enhanced MDCT scans were performed on a 256 row scanner (ICT, Philips) with the following acquisition parameters: tube voltage 120 KV, tube current 150-300 mAs. Imaging data were reviewed as axial and as multiplanar reconstructions (MPRs) images along the rectal tumor axis. MRI study, performed on 1.5 T with dedicated phased array multicoil, included multiplanar T2 and axial T1 sequences and diffusion weighted images (DWI). Axial and MPR CT images independently were compared to MRI and MRF involvement was determined. Diagnostic accuracy of both modalities was compared and statistically analyzed. RESULTS: According to MRI, the MRF was involved in 51 patients and not involved in 40 patients. DWI allowed to recognize the tumor as a focal mass with high signal intensity on high b-value images, compared with the signal of the normal adjacent rectal wall or with the lower tissue signal intensity background. The number of patients correctly staged by the native axial CT images was 71 out of 91 (41 with involved MRF; 30 with not involved MRF), while by using the MPR 80 patients were correctly staged (45 with involved MRF; 35 with not involved MRF). Local tumor staging suggested by MDCT agreed with those of MRI, obtaining for CT axial images sensitivity and specificity of 80.4% and 75%, positive predictive value (PPV) 80.4%, negative predictive value (NPV) 75% and accuracy 78%; while performing MPR the sensitivity and specificity increased to 88% and 87.5%, PPV was 90%, NPV 85.36% and accuracy 88%. MPR images showed higher diagnostic accuracy, in terms of MRF involvement, than native axial images, as compared to the reference magnetic resonance images. The difference in accuracy was statistically significant (P = 0.02). CONCLUSION: New generation CT scanner, using high resolution MPR images, represents a reliable diagnostic tool in assessment of loco-regional and whole body staging of advanced rectal cancer, especially in patients with MRI contraindications. PMID:27239115
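The axial and MPR figures quoted above can be recovered from the stated counts (51 patients with MRF involvement and 40 without on the MRI reference; 41/51 and 30/40 correctly staged on axial images, 45/51 and 35/40 on MPR); the sketch below is simply that arithmetic, not the authors' analysis code.

def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }

# Axial CT images vs the MRI reference (fn = 10 missed involved, fp = 10 overcalled).
print(diagnostic_metrics(tp=41, fn=10, tn=30, fp=10))
# MPR images vs the MRI reference.
print(diagnostic_metrics(tp=45, fn=6, tn=35, fp=5))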
McGowan, Ian; Janocko, Laura; Burneisen, Shaun; Bhat, Anand; Richardson-Harman, Nicola
2015-01-01
To determine the intra- and inter-subject variability of mucosal cytokine gene expression in rectal biopsies from healthy volunteers and to screen cytokine and chemokine mRNA as potential biomarkers of mucosal inflammation. Rectal biopsies were collected from 8 participants (3 biopsies per participant) and 1 additional participant (10 biopsies). Quantitative reverse transcription polymerase chain reaction (RT-qPCR) was used to quantify IL-1β, IL-6, IL-12p40, IL-8, IFN-γ, MIP-1α, MIP-1β, RANTES, and TNF-α gene expression in the rectal tissue. The intra-assay, inter-biopsy and inter-subject variance was measured in the eight participants. Bootstrap re-sampling of the biopsy measurements was performed to determine the accuracy of gene expression data obtained for 10 biopsies obtained from one participant. Cytokines were both non-normalized and normalized using four reference genes (GAPDH, β-actin, β2 microglobulin, and CD45). Cytokine measurement accuracy was increased with the number of biopsy samples, per person; four biopsies were typically needed to produce a mean result within a 95% confidence interval of the subject's cytokine level approximately 80% of the time. Intra-assay precision (% geometric standard deviation) ranged between 8.2 and 96.9 with high variance between patients and even between different biopsies from the same patient. Variability was not greatly reduced with the use of reference genes to normalize data. The number of biopsy samples required to provide an accurate result varied by target although 4 biopsy samples per subject and timepoint, provided for >77% accuracy across all targets tested. Biopsies within the same subjects and between subjects had similar levels of variance while variance within a biopsy (intra-assay) was generally lower. Normalization of inflammatory cytokines against reference genes failed to consistently reduce variance. The accuracy and reliability of mRNA expression of inflammatory cytokines will set a ceiling on the ability of these measures to predict mucosal inflammation. Techniques to reduce variability should be developed within a larger cohort of individuals before normative reference values can be validated. Copyright © 2014 Elsevier Ltd. All rights reserved.
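A minimal sketch of the bootstrap re-sampling idea used to ask how many biopsies are needed per subject; the acceptance criterion here (sample mean within ±20% of the subject mean) and the expression values are assumptions made for illustration, whereas the study used a 95% confidence interval criterion.

import numpy as np

rng = np.random.default_rng(0)

def coverage_by_n_biopsies(measurements, n_biopsies, n_boot=10_000, rel_tol=0.20):
    """Fraction of bootstrap draws of `n_biopsies` whose mean falls within
    +/- rel_tol of the overall subject mean (a simple accuracy criterion)."""
    measurements = np.asarray(measurements, dtype=float)
    subject_mean = measurements.mean()
    draws = rng.choice(measurements, size=(n_boot, n_biopsies), replace=True)
    within = np.abs(draws.mean(axis=1) - subject_mean) <= rel_tol * abs(subject_mean)
    return float(within.mean())

# Hypothetical normalized IL-8 expression from 10 biopsies of one participant.
il8 = [2.1, 2.8, 1.9, 3.4, 2.5, 2.2, 3.0, 2.7, 1.8, 2.6]
for n in (1, 2, 4, 6):
    print(n, round(coverage_by_n_biopsies(il8, n), 2))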
USDA-ARS?s Scientific Manuscript database
Quantitative real-time polymerase chain reaction (qRT-PCR) is the most important tool in measuring levels of gene expression due to its accuracy, specificity, and sensitivity. However, the accuracy of qRT-PCR analysis strongly depends on transcript normalization using stably expressed reference gene...
NASA Astrophysics Data System (ADS)
Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul
2018-07-01
Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map compares to reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office interpreted remotely sensed data was the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase of OBIA articles in using per-polygon approaches compared to per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches respectively on sampling, response design and accuracy analysis. Our review defines the technical and methodological needs in the current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
SU-E-J-89: Deformable Registration Method Using B-TPS in Radiotherapy.
Xie, Y
2012-06-01
A novel deformable registration method for four-dimensional computed tomography (4DCT) images is developed in radiation therapy. The proposed method combines the thin plate spline (TPS) and B-spline together to achieve high accuracy and high efficiency. The method consists of two steps. First, TPS is used as a global registration method to deform large unfit regions in the moving image to match their counterparts in the reference image. Then B-spline is used for local registration, and the previously deformed moving image is further deformed to match the reference image more accurately. Two clinical CT image sets, including one pair of lung and one pair of liver, are registered using the proposed algorithm, which results in a tremendous improvement in both run-time and registration quality, compared with the conventional methods solely using either TPS or B-spline. The proposed method combines the efficiency of TPS and the accuracy of B-spline, performing with good adaptivity and robustness in the registration of clinical 4DCT images. © 2012 American Association of Physicists in Medicine.
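For the global TPS step, a displacement field can be interpolated from corresponding landmarks with a thin plate spline; the sketch below uses scipy's RBFInterpolator with a thin_plate_spline kernel on invented 3D landmarks and is only a simplified stand-in for the combined TPS/B-spline scheme described above (the local B-spline refinement is not shown).

import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding landmarks in the reference and moving images (hypothetical, in mm).
ref_pts = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0], [0.0, 50.0, 0.0],
                    [0.0, 0.0, 50.0], [50.0, 50.0, 50.0]])
mov_pts = ref_pts + np.array([[1.5, -0.5, 0.0], [2.0, 0.5, -1.0], [0.5, 1.0, 0.5],
                              [-0.5, 0.0, 1.5], [1.0, 1.0, 1.0]])

# Thin plate spline interpolation of the displacement field from the landmarks.
tps = RBFInterpolator(ref_pts, mov_pts - ref_pts, kernel="thin_plate_spline")

# Map arbitrary reference-image points into the moving image (global TPS step;
# a local B-spline stage would then refine the residual mismatch).
query = np.array([[25.0, 25.0, 25.0], [10.0, 40.0, 5.0]])
print(query + tps(query))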
A reference standard-based quality assurance program for radiology.
Liu, Patrick T; Johnson, C Daniel; Miranda, Rafael; Patel, Maitray D; Phillips, Carrie J
2010-01-01
The authors have developed a comprehensive radiology quality assurance (QA) program that evaluates radiology interpretations and procedures by comparing them with reference standards. Performance metrics are calculated and then compared with benchmarks or goals on the basis of published multicenter data and meta-analyses. Additional workload for physicians is kept to a minimum by having trained allied health staff members perform the comparisons of radiology reports with the reference standards. The performance metrics tracked by the QA program include the accuracy of CT colonography for detecting polyps, the false-negative rate for mammographic detection of breast cancer, the accuracy of CT angiography detection of coronary artery stenosis, the accuracy of meniscal tear detection on MRI, the accuracy of carotid artery stenosis detection on MR angiography, the accuracy of parathyroid adenoma detection by parathyroid scintigraphy, the success rate for obtaining cortical tissue on ultrasound-guided core biopsies of pelvic renal transplants, and the technical success rate for peripheral arterial angioplasty procedures. In contrast with peer-review programs, this reference standard-based QA program minimizes the possibilities of reviewer bias and erroneous second reviewer interpretations. The more objective assessment of performance afforded by the QA program will provide data that can easily be used for education and management conferences, research projects, and multicenter evaluations. Additionally, such performance data could be used by radiology departments to demonstrate their value over nonradiology competitors to referring clinicians, hospitals, patients, and third-party payers. Copyright 2010 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Lee, S Hong; Clark, Sam; van der Werf, Julius H J
2017-01-01
Genomic prediction is emerging in a wide range of fields including animal and plant breeding, risk prediction in human precision medicine and forensics. It is desirable to establish a theoretical framework for genomic prediction accuracy when the reference data consist of information sources with varying degrees of relationship to the target individuals. A reference set can contain both close and distant relatives as well as 'unrelated' individuals from the wider population used in the genomic prediction. The various sources of information were modeled as different populations with different effective population sizes (Ne). Both the effective number of chromosome segments (Me) and Ne are considered to be a function of the data used for prediction. We validate our theory with analyses of simulated as well as real data, and illustrate that the variation in genomic relationships with the target is a predictor of the information content of the reference set. With a similar amount of data available for each source, we show that close relatives can have a substantially larger effect on genomic prediction accuracy than lesser related individuals. We also illustrate that when prediction relies on closer relatives, there is less improvement in prediction accuracy with an increase in training data or marker panel density. We release software that can estimate the expected prediction accuracy and power when combining different reference sources with various degrees of relationship to the target, which is useful when planning genomic prediction (before or after collecting data) in animal, plant and human genetics.
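One widely used single-population approximation relates expected prediction accuracy to the reference size N, the heritability h², and the effective number of chromosome segments Me; the framework described above generalizes this to mixtures of sources with different Ne and Me, so the sketch below (with invented numbers) only illustrates why a close-relative reference, which behaves like a small Me, yields higher accuracy for the same amount of data.

import math

def expected_accuracy(n_reference, h2, me):
    """Deterministic approximation of genomic prediction accuracy for a single
    homogeneous reference population (Daetwyler/Goddard-type formula):
    r = sqrt(N*h2 / (N*h2 + Me))."""
    return math.sqrt(n_reference * h2 / (n_reference * h2 + me))

# Same reference size and heritability, different effective numbers of segments.
print(expected_accuracy(n_reference=2_000, h2=0.4, me=500))     # close relatives (small Me)
print(expected_accuracy(n_reference=2_000, h2=0.4, me=50_000))  # distant / 'unrelated' reference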
Falszewska, Anna; Dziechciarz, Piotr; Szajewska, Hania
2014-10-01
To systematically update diagnostic accuracy of the Clinical Dehydration Scale (CDS) in clinical recognition of dehydration in children with acute gastroenteritis. Six databases were searched for diagnostic accuracy studies in which population were children aged 1 to 36 months with acute gastroenteritis; index test was the CDS; and reference test was post-illness weight gain. Three studies involving 360 children were included. Limited evidence showed that in high-income countries the CDS provides strong diagnostic accuracy for ruling in moderate and severe (>6%) dehydration (positive likelihood ratio 5.2-6.6), but has limited value for ruling it out (negative likelihood ratio 0.4-0.55). In low-income countries, the CDS has limited value either for ruling moderate or severe dehydration in or out. In both settings, the CDS had limited value for ruling in or out dehydration <3% or dehydration 3% to 6%. The CDS can help assess moderate to severe dehydration in high-income settings. Given the limited data, the evidence should be viewed with caution. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Wilde, C.; Langehanenberg, P.; Schenk, T.
2017-10-01
For modern production of micro lens systems, such as cementing of doublets or more lenses, precise centering of the lens edge is crucial. Blocking the lens temporarily on a centering arbor ensures that the centers of all optical lens surfaces coincide with the lens edge, while the arbor's axis serves as reference for both alignment and edging process. This theoretical assumption of the traditional cementing technology is not applicable for high-end production. In reality cement wedges between the bottom lens surface and the arbor's ring knife edge may occur and even expensive arbors with single-micron precision suffer from reduced quality of the ring knife edge after multiple usages and cleaning cycles. Consequently, at least the position of the bottom lens surface is undefined and the optical axis does not coincide with the arbor's reference axis! In order to overcome this basic problem in using centering arbors, we present a novel and efficient technique which can measure and align both surfaces of a lens with respect to the arbor axis with high accuracy and furthermore align additional lenses to the optical axis of the bottom lens. This is accomplished by aligning the lens without mechanical contact to the arbor. Thus the lens can be positioned in four degrees of freedom, while the centration errors of all lens surfaces are measured and considered. Additionally the arbor's reference axis is not assumed to be aligned to the rotation axis, but simultaneously measured with high precision.
Performance of the Micropower Voltage Reference ADR3430 Under Extreme Temperatures
NASA Technical Reports Server (NTRS)
Patterson, Richard L.; Hammoud, Ahmad
2011-01-01
Electronic systems designed for use in space exploration systems are expected to be exposed to harsh temperatures. For example, operation at cryogenic temperatures is anticipated in space missions such as polar craters of the moon (-223 C), James Webb Space Telescope (-236 C), Mars (-140 C), Europa (-223 C), Titan (-178 C), and other deep space probes away from the sun. Similarly, rovers and landers on the lunar surface, and deep space probes intended for the exploration of Venus, are expected to encounter high temperature extremes. Electronics capable of operation under extreme temperatures would not only meet the requirements of future space-based systems, but would also contribute to enhancing efficiency and improving reliability of these systems through the elimination of the thermal control elements that present electronics need for proper operation under the harsh environment of space. In this work, the performance of a micropower, high accuracy voltage reference was evaluated over a wide temperature range. The Analog Devices ADR3430 chip uses a patented voltage reference architecture to achieve high accuracy, low temperature coefficient, and low noise in a CMOS process [1]. The device combines two voltages of opposite temperature coefficients to create an output voltage that is almost independent of ambient temperature. It is rated for the industrial temperature range of -40 C to +125 C, and is ideal for use in low power precision data acquisition systems and in battery-powered devices. Table 1 shows some of the manufacturer's device specifications.
Jones, J.W.; Jarnagin, T.
2009-01-01
Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products, high quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite predicted area - "reference area") and relative error [(satellite predicted area - "reference area") / "reference area"] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment provided for evaluation of both validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.
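A minimal restatement of the two error definitions quoted above, with invented areas for one hypothetical sample region.

def isa_errors(predicted_area_km2, reference_area_km2):
    """Absolute and relative impervious surface area error for one sample region."""
    absolute = predicted_area_km2 - reference_area_km2
    relative = absolute / reference_area_km2
    return absolute, relative

# Hypothetical sample region: satellite-estimated 3.8 km2 vs 4.0 km2 of
# high-resolution reference ISA (an underestimate of about 5%).
print(isa_errors(3.8, 4.0))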
NIST High Accuracy Reference Reflectometer-Spectrophotometer
Proctor, James E.; Yvonne Barnes, P.
1996-01-01
A new reflectometer-spectrophotometer has been designed and constructed using state-of-the-art technology to enhance optical properties of materials measurements over the ultraviolet, visible, and near-infrared (UV-Vis-NIR) wavelength range (200 nm to 2500 nm). The instrument, Spectral Tri-function Automated Reference Reflectometer (STARR), is capable of measuring specular and diffuse reflectance, bidirectional reflectance distribution function (BRDF) of diffuse samples, and both diffuse and non-diffuse transmittance. Samples up to 30 cm by 30 cm can be measured. The instrument and its characterization are described. PMID:27805081
Empathic Embarrassment Accuracy in Autism Spectrum Disorder.
Adler, Noga; Dvash, Jonathan; Shamay-Tsoory, Simone G
2015-06-01
Empathic accuracy refers to the ability of perceivers to accurately share the emotions of protagonists. Using a novel task assessing embarrassment, the current study sought to compare levels of empathic embarrassment accuracy among individuals with autism spectrum disorders (ASD) with those of matched controls. To assess empathic embarrassment accuracy, we compared the level of embarrassment experienced by protagonists to the embarrassment felt by participants while watching the protagonists. The results show that while the embarrassment ratings of participants and protagonists were highly matched among controls, individuals with ASD failed to exhibit this matching effect. Furthermore, individuals with ASD rated their embarrassment higher than controls when viewing themselves and protagonists on film, but not while performing the task itself. These findings suggest that individuals with ASD tend to have higher ratings of empathic embarrassment, perhaps due to difficulties in emotion regulation that may account for their impaired empathic accuracy and aberrant social behavior. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
Thematic accuracy of the NLCD 2001 land cover for the conterminous United States
Wickham, J.D.; Stehman, S.V.; Fry, J.A.; Smith, J.H.; Homer, Collin G.
2010-01-01
The land-cover thematic accuracy of NLCD 2001 was assessed from a probability-sample of 15,000 pixels. Nationwide, NLCD 2001 overall Anderson Level II and Level I accuracies were 78.7% and 85.3%, respectively. By comparison, overall accuracies at Level II and Level I for the NLCD 1992 were 58% and 80%. Forest and cropland were two classes showing substantial improvements in accuracy in NLCD 2001 relative to NLCD 1992. NLCD 2001 forest and cropland user's accuracies were 87% and 82%, respectively, compared to 80% and 43% for NLCD 1992. Accuracy results are reported for 10 geographic regions of the United States, with regional overall accuracies ranging from 68% to 86% for Level II and from 79% to 91% at Level I. Geographic variation in class-specific accuracy was strongly associated with the phenomenon that regionally more abundant land-cover classes had higher accuracy. Accuracy estimates based on several definitions of agreement are reported to provide an indication of the potential impact of reference data error on accuracy. Drawing on our experience from two NLCD national accuracy assessments, we discuss the use of designs incorporating auxiliary data to more seamlessly quantify reference data quality as a means to further advance thematic map accuracy assessment.
High accuracy OMEGA timekeeping
NASA Technical Reports Server (NTRS)
Imbier, E. A.
1982-01-01
The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.
A novel ultra-wideband 80 GHz FMCW radar system for contactless monitoring of vital signs.
Wang, Siying; Pohl, Antje; Jaeschke, Timo; Czaplik, Michael; Köny, Marcus; Leonhardt, Steffen; Pohl, Nils
2015-01-01
In this paper an ultra-wideband 80 GHz FMCW-radar system for contactless monitoring of respiration and heart rate is investigated and compared to a standard monitoring system with ECG and CO(2) measurements as reference. The novel FMCW-radar enables the detection of the physiological displacement of the skin surface with submillimeter accuracy. This high accuracy is achieved with a large bandwidth of 10 GHz and the combination of intermediate frequency and phase evaluation. This concept is validated with a radar system simulation and experimental measurements are performed with different radar sensor positions and orientations.
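The submillimeter sensitivity comes from evaluating the phase of the intermediate-frequency signal on top of the coarse FMCW range estimate; the sketch below uses the textbook relations (range resolution c/2B, two-way displacement λΔφ/4π) with the stated 10 GHz bandwidth and 80 GHz carrier, and is an illustration rather than the authors' signal processing chain.

import math

C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Coarse FMCW range resolution from the sweep bandwidth."""
    return C / (2.0 * bandwidth_hz)

def displacement_from_phase(delta_phi_rad, carrier_hz):
    """Fine displacement from the phase change of the IF signal (two-way path)."""
    wavelength = C / carrier_hz
    return wavelength * delta_phi_rad / (4.0 * math.pi)

print(f"range resolution @ 10 GHz bandwidth: {range_resolution(10e9) * 100:.1f} cm")
print(f"1 deg of phase @ 80 GHz: {displacement_from_phase(math.radians(1.0), 80e9) * 1e6:.1f} um")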
Code of Federal Regulations, 2011-2014 CFR
2011-07-01 through 2014-07-01
Table E-1 to Subpart E of Part 53, Ambient Air Monitoring Reference and Equivalent Methods: Procedures for Testing Physical (Design) and Performance Characteristics of Reference and Class I Equivalent Methods for PM2.5 and PM10-2.5. The tabulated test specifications include temperature measurement accuracy requirements of 2 °C and a filter temperature control accuracy requirement, during sampling and non-sampling, of not more than 5 °C.
Engelberg, Jesse A; Retallack, Hanna; Balassanian, Ronald; Dowsett, Mitchell; Zabaglo, Lila; Ram, Arishneel A; Apple, Sophia K; Bishop, John W; Borowsky, Alexander D; Carpenter, Philip M; Chen, Yunn-Yi; Datnow, Brian; Elson, Sarah; Hasteh, Farnaz; Lin, Fritz; Moatamed, Neda A; Zhang, Yanhong; Cardiff, Robert D
2015-11-01
Hormone receptor status is an integral component of decision-making in breast cancer management. IHC4 score is an algorithm that combines hormone receptor, HER2, and Ki-67 status to provide a semiquantitative prognostic score for breast cancer. High accuracy and low interobserver variance are important to ensure the score is accurately calculated; however, few previous efforts have been made to measure or decrease interobserver variance. We developed a Web-based training tool, called "Score the Core" (STC) using tissue microarrays to train pathologists to visually score estrogen receptor (using the 300-point H score), progesterone receptor (percent positive), and Ki-67 (percent positive). STC used a reference score calculated from a reproducible manual counting method. Pathologists in the Athena Breast Health Network and pathology residents at associated institutions completed the exercise. By using STC, pathologists improved their estrogen receptor H score and progesterone receptor and Ki-67 proportion assessment and demonstrated a good correlation between pathologist and reference scores. In addition, we collected information about pathologist performance that allowed us to compare individual pathologists and measures of agreement. Pathologists' assessment of the proportion of positive cells was closer to the reference than their assessment of the relative intensity of positive cells. Careful training and assessment should be used to ensure the accuracy of breast biomarkers. This is particularly important as breast cancer diagnostics become increasingly quantitative and reproducible. Our training tool is a novel approach for pathologist training that can serve as an important component of ongoing quality assessment and can improve the accuracy of breast cancer prognostic biomarkers. Copyright © 2015 Elsevier Inc. All rights reserved.
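The 300-point H score referred to above weights the fraction of cells at each staining intensity (0-3) by that intensity. A minimal sketch of the calculation follows; the cell counts are invented, not STC data.

```python
def h_score(cell_counts):
    """cell_counts: mapping of staining intensity (0-3) to number of cells.
    Returns the H score (0-300) = sum over intensities of intensity * percent of cells."""
    total = sum(cell_counts.values())
    return sum(intensity * 100.0 * n / total
               for intensity, n in cell_counts.items())

# Illustrative counts for one core (not real Score the Core data).
counts = {0: 120, 1: 40, 2: 30, 3: 10}
print(f"H score = {h_score(counts):.0f}")   # 1*20% + 2*15% + 3*5% = 65
```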
NASA Astrophysics Data System (ADS)
Li, Dachao; Xu, Qingmei; Liu, Yu; Wang, Ridong; Xu, Kexin; Yu, Haixia
2017-11-01
A high-accuracy microdialysis method that can provide the reference values of glucose concentration in interstitial fluid for the accurate evaluation of non-invasive and minimally invasive continuous glucose monitoring is reported in this study. The parameters of the microdialysis process were firstly optimized by testing and analyzing three main factors that impact microdialysis recovery, including the perfusion rate, temperature, and glucose concentration in the area surrounding the microdialysis probe. The precision of the optimized microdialysis method was then determined in a simulation system that was designed and established in this study to simulate variations in continuous glucose concentration in the human body. Finally, the microdialysis method was tested for in vivo interstitial glucose concentration measurement.
NASA Astrophysics Data System (ADS)
Wayson, Michael B.; Bolch, Wesley E.
2018-04-01
Internal radiation dose estimates for diagnostic nuclear medicine procedures are typically calculated for a reference individual. As a result, there is uncertainty when determining organ doses for patients who are not at the 50th percentile for height or weight. This study aims to better personalize internal radiation dose estimates for individual patients by modifying the dose estimates calculated for reference individuals based on easily obtainable morphometric characteristics of the patient. Phantoms of different sitting heights and waist circumferences were constructed based on computational reference phantoms for the newborn, 10-year-old, and adult. Monoenergetic photons and electrons were then simulated separately at 15 energies. Photon and electron specific absorbed fractions (SAFs) were computed for the newly constructed non-reference phantoms and compared to SAFs previously generated for the age-matched reference phantoms. Differences in SAFs were correlated to changes in sitting height and waist circumference to develop scaling factors that could be applied to reference SAFs as morphometry corrections. A further set of arbitrary non-reference phantoms were then constructed and used in validation studies for the SAF scaling factors. Both photon and electron dose scaling methods were found to increase average accuracy when sitting height was used as the scaling parameter (~11%). Photon waist circumference-based scaling factors showed modest increases in average accuracy (~7%) for underweight individuals, but not for overweight individuals. Electron waist circumference-based scaling factors did not show increases in average accuracy. When sitting height and waist circumference scaling factors were combined, modest average gains in accuracy were observed for photons (~6%), but not for electrons. Both photon and electron absorbed doses are more reliably scaled using the scaling factors computed in this study. They can be effectively scaled using sitting height alone as the patient-specific morphometric parameter.
Conversion of ICRP male reference phantom to polygon-surface phantom
NASA Astrophysics Data System (ADS)
Yeom, Yeon Soo; Han, Min Cheol; Kim, Chan Hyeong; Jeong, Jong Hwi
2013-10-01
The International Commission on Radiological Protection (ICRP) reference phantoms, developed based on computed tomography images of human bodies, provide much more realistic human anatomy than the previously used MIRD5 (Medical Internal Radiation Dose) mathematical phantoms. It has been realized, however, that the ICRP reference phantoms have some critical limitations, showing a considerable number of holes in the skin and wall organs, mainly due to the nature of the voxels of which the phantoms are made and especially their low voxel resolutions. To address this problem, we are planning to develop a polygon-surface version of the ICRP reference phantoms by directly converting the ICRP reference phantoms (voxel phantoms) to polygon-surface phantoms. The objective of this preliminary study is to see whether it is indeed possible to construct high-quality polygon-surface phantoms based on the ICRP reference phantoms while maintaining identical organ morphology, and also to identify any potential issues, and technologies to address these issues, in advance. For this purpose, in the present study, the ICRP reference male phantom was roughly converted to a polygon-surface phantom. The constructed phantom was then implemented in the Geant4 Monte Carlo particle transport code for dose calculations, and the calculated dose values were compared with those of the original ICRP reference phantom to see how sensitive the calculated dose values are to the accuracy of the conversion process. The results of the present study show that it is certainly possible to convert the ICRP reference phantoms to surface phantoms with sufficient accuracy. In spite of using relatively few resources (<2 man-months), we were able to construct a polygon-surface phantom with organ masses perfectly matching the ICRP reference values. The analysis of the calculated dose values also implies that the dose values are indeed not very sensitive to the detailed morphology of the organ models in the phantom for highly penetrating radiations such as photons and neutrons. The results for electron beams, on the other hand, show that the dose values of the polygon-surface phantom are higher by a factor of 2-5 than those of the ICRP reference phantom for the skin and wall organs, which have large holes due to low voxel resolution. The results demonstrate that the ICRP reference phantoms can provide significantly erroneous dose values for thin or wall organs, especially for weakly penetrating radiations. Therefore, compared to the original ICRP reference phantoms, it is believed that a properly developed polygon-surface version of the ICRP reference phantoms will not only provide the same or similar dose values (say, difference <5 or 10%) for highly penetrating radiations, but also provide correct dose values for weakly penetrating radiations such as electrons and other charged particles.
The value of cows in reference populations for genomic selection of new functional traits.
Buch, L H; Kargo, M; Berg, P; Lassen, J; Sørensen, A C
2012-06-01
Today, almost all reference populations consist of progeny-tested bulls. However, older progeny-tested bulls do not have reliable estimated breeding values (EBV) for new traits. Thus, to be able to select for these new traits, it is necessary to build a reference population. We used a deterministic prediction model to test the hypothesis that the value of cows in reference populations depends on the availability of phenotypic records. To test the hypothesis, we investigated different strategies of building a reference population for a new functional trait over a 10-year period. The trait was either recorded on a large scale (30 000 cows per year) or on a small scale (2000 cows per year). For large-scale recording, we compared four scenarios where the reference population consisted of 30 sires; 30 sires and 170 test bulls; 30 sires and 2000 cows; or 30 sires, 2000 cows and 170 test bulls in the first year with measurements of the new functional trait. In addition to varying the make-up of the reference population, we also varied the heritability of the trait (h2 = 0.05 vs. 0.15). The results showed that a reference population of test bulls, cows and sires results in the highest accuracy of the direct genomic values (DGV) for a new functional trait, regardless of its heritability. For small-scale recording, we compared two scenarios where the reference population consisted of the 2000 cows with phenotypic records or the 30 sires of these cows in the first year with measurements of the new functional trait. The results showed that a reference population of cows results in the highest accuracy of the DGV whether the heritability is 0.05 or 0.15, because variation is lost when phenotypic data on cows are summarized in the EBV of their sires. The main conclusions from this study are: (i) the fewer the phenotypic records, the larger the effect of including cows in the reference population; (ii) for small-scale recording, the accuracy of the DGV will continue to increase for several years, whereas the increases in the accuracy of the DGV quickly diminish with large-scale recording; (iii) it is possible to achieve accuracies of the DGV that enable selection for new functional traits recorded on a large scale within 3 years from commencement of recording; and (iv) a higher heritability benefits a reference population of cows more than a reference population of bulls.
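The abstract does not specify the deterministic model used, so as a hedged illustration the sketch below uses one widely cited deterministic approximation (the Daetwyler formula) to show why more phenotyped reference animals and higher heritability raise DGV accuracy. The effective number of chromosome segments is an assumed value, and this is not necessarily the model used in the study.

```python
import math

def dgv_accuracy(n_records, h2, m_e):
    """Deterministic approximation (Daetwyler-type formula) of the accuracy of
    direct genomic values from a reference population of phenotyped animals."""
    return math.sqrt(n_records * h2 / (n_records * h2 + m_e))

M_E = 1000  # effective number of independent chromosome segments; assumed, breed-dependent
for h2 in (0.05, 0.15):
    for n in (2000, 30000):
        print(f"h2={h2:.2f}  n={n:5d}  accuracy={dgv_accuracy(n, h2, M_E):.2f}")
```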
High resolution quantitative phase imaging of live cells with constrained optimization approach
NASA Astrophysics Data System (ADS)
Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu
2016-03-01
Quantitative phase imaging (QPI) aims at studying weakly scattering and absorbing biological specimens with subwavelength accuracy without any external staining mechanisms. The use of a reference beam at an angle is one of the necessary conditions for recording high-resolution holograms in most of the interferometric methods used for quantitative phase imaging. The spatial separation of the dc and twin images is decided by the reference beam angle, and the Fourier-filtered reconstructed image will have very poor resolution if the hologram is recorded below a minimum reference-angle condition. However, it is always inconvenient to use a large reference beam angle while performing high-resolution microscopy of live cells and biological specimens with nanometric features. In this paper, we treat the reconstruction of digital holographic microscopy images as a constrained optimization problem with a smoothness constraint in order to recover the complex object field in the hologram plane even with overlapping dc and twin-image terms. We solve this optimization problem iteratively by a gradient descent approach, and the smoothness constraint is implemented by spatial averaging with an appropriate window size. This approach gives excellent high-resolution image recovery compared to Fourier filtering while keeping a very small reference angle. We demonstrate this approach on digital holographic microscopy of live cells by recovering the quantitative phase of live cells from a hologram recorded with a nearly zero reference angle.
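A minimal sketch of the idea described above, under assumptions of my own (a synthetic smooth phase object, a slightly tilted reference, and particular step and window sizes): treat recovery of the complex object field O from the recorded intensity I = |R + O|^2 as a least-squares problem, and alternate gradient-descent updates with a spatial-averaging smoothness step. The authors' exact formulation may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

N = 128
y, x = np.mgrid[-1:1:N*1j, -1:1:N*1j]

# Synthetic smooth phase object with constant amplitude (illustrative only).
phase_true = 1.5 * np.exp(-(x**2 + y**2) / 0.2)
obj_true = 0.5 * np.exp(1j * phase_true)

ref = np.exp(1j * 2 * np.pi * 2 * x)          # nearly on-axis (small-tilt) reference beam
hologram = np.abs(ref + obj_true) ** 2        # recorded intensity; dc and twin terms overlap

# Minimize L(O) = sum (I - |R+O|^2)^2 by gradient descent; the smoothness constraint
# is applied as spatial averaging of the current estimate at every iteration.
obj = np.zeros((N, N), dtype=complex)
step, win, n_iter = 0.05, 3, 200
misfit0 = np.linalg.norm(hologram - np.abs(ref + obj) ** 2)
for _ in range(n_iter):
    field = ref + obj
    residual = hologram - np.abs(field) ** 2
    obj = obj + step * residual * field                     # gradient step w.r.t. conj(O)
    obj = (uniform_filter(obj.real, win)                    # smoothness constraint
           + 1j * uniform_filter(obj.imag, win))

misfit = np.linalg.norm(hologram - np.abs(ref + obj) ** 2)
print(f"data misfit: initial {misfit0:.2f} -> final {misfit:.2f}")
```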
Genotype Imputation for Latinos Using the HapMap and 1000 Genomes Project Reference Panels.
Gao, Xiaoyi; Haritunians, Talin; Marjoram, Paul; McKean-Cowdin, Roberta; Torres, Mina; Taylor, Kent D; Rotter, Jerome I; Gauderman, William J; Varma, Rohit
2012-01-01
Genotype imputation is a vital tool in genome-wide association studies (GWAS) and meta-analyses of multiple GWAS results. Imputation enables researchers to increase genomic coverage and to pool data generated using different genotyping platforms. HapMap samples are often employed as the reference panel. More recently, the 1000 Genomes Project resource is becoming the primary source for reference panels. Multiple GWAS and meta-analyses are targeting Latinos, the most populous and fastest-growing minority group in the US. However, genotype imputation resources for Latinos are rather limited compared to individuals of European ancestry at present, largely because of the lack of good reference data. One choice of reference panel for Latinos is one derived from the population of Mexican individuals in Los Angeles contained in the HapMap Phase 3 project and the 1000 Genomes Project. However, a detailed evaluation of the quality of the imputed genotypes derived from the public reference panels has not yet been reported. Using simulation studies, the Illumina OmniExpress GWAS data from the Los Angeles Latino Eye Study and the MACH software package, we evaluated the accuracy of genotype imputation in Latinos. Our results show that the 1000 Genomes Project AMR + CEU + YRI reference panel provides the highest imputation accuracy for Latinos, and that also including Asian samples in the panel can reduce imputation accuracy. We also provide the imputation accuracy for each autosomal chromosome using the 1000 Genomes Project panel for Latinos. Our results serve as a guide to future imputation-based analyses in Latinos.
PPP Sliding Window Algorithm and Its Application in Deformation Monitoring.
Song, Weiwei; Zhang, Rui; Yao, Yibin; Liu, Yanyan; Hu, Yuming
2016-05-31
Compared with the double-difference relative positioning method, the precise point positioning (PPP) algorithm avoids the selection of a static reference station, directly measures the three-dimensional position changes at the observation site, and offers advantages in a variety of deformation monitoring applications. However, because of the influence of various observation errors, the accuracy of PPP is generally at the cm-dm level, which cannot meet the requirements of high-precision deformation monitoring. In most monitoring applications the observation stations remain stationary, and this can be used as a priori constraint information. In this paper, a new PPP algorithm based on a sliding window is proposed to improve positioning accuracy. First, data from an IGS tracking station were processed using both the traditional and the new PPP algorithm; the results showed that the new algorithm can effectively improve positioning accuracy, especially in the elevation direction. Then, an earthquake simulation platform was used to simulate an earthquake event; the results illustrated that the new algorithm can effectively detect the vibration of a reference station during an earthquake. Finally, results from observed Wenchuan earthquake data showed that the new algorithm is feasible for monitoring real earthquakes and providing early-warning alerts.
Development of a machine learning potential for graphene
NASA Astrophysics Data System (ADS)
Rowe, Patrick; Csányi, Gábor; Alfè, Dario; Michaelides, Angelos
2018-02-01
We present an accurate interatomic potential for graphene, constructed using the Gaussian approximation potential (GAP) machine learning methodology. This GAP model obtains a faithful representation of a density functional theory (DFT) potential energy surface, facilitating highly accurate (approaching the accuracy of ab initio methods) molecular dynamics simulations. This is achieved at a computational cost which is orders of magnitude lower than that of comparable calculations which directly invoke electronic structure methods. We evaluate the accuracy of our machine learning model alongside that of a number of popular empirical and bond-order potentials, using both experimental and ab initio data as references. We find that whilst significant discrepancies exist between the empirical interatomic potentials and the reference data—and amongst the empirical potentials themselves—the machine learning model introduced here provides exemplary performance in all of the tested areas. The calculated properties include: graphene phonon dispersion curves at 0 K (which we predict with sub-meV accuracy), phonon spectra at finite temperature, in-plane thermal expansion up to 2500 K as compared to NPT ab initio molecular dynamics simulations and a comparison of the thermally induced dispersion of graphene Raman bands to experimental observations. We have made our potential freely available online at [http://www.libatoms.org].
Bedini, José Luis; Wallace, Jane F; Pardo, Scott; Petruschke, Thorsten
2015-10-07
Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients' health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs relative to the reference values than the other 2 BGMS. Insulin dosing errors were lower for the Contour Next USB than for the other systems. All BGMS fulfilled the ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses as well as calculation of the hypothetical bolus insulin dosing error suggest a high accuracy of the Contour Next USB. © 2015 Diabetes Technology Society.
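Both headline statistics in this comparison, the MARD and the ISO 15197:2013 accuracy limits, are straightforward to compute from paired meter and reference readings. The glucose pairs in the sketch below are invented, not study data.

```python
import numpy as np

# Paired readings (mg/dL): meter vs. laboratory reference. Invented values.
reference = np.array([ 65,  80,  95, 110, 140, 180, 240, 320], dtype=float)
meter     = np.array([ 70,  76, 101, 118, 133, 189, 251, 301], dtype=float)

# Mean absolute relative difference (MARD), usually quoted in percent.
mard = np.mean(np.abs(meter - reference) / reference) * 100

# ISO 15197:2013 system accuracy limits: within +/-15 mg/dL of the reference
# when the reference is < 100 mg/dL, within +/-15 % otherwise.
low = reference < 100
within = np.where(low,
                  np.abs(meter - reference) <= 15,
                  np.abs(meter - reference) <= 0.15 * reference)
print(f"MARD: {mard:.1f}%")
print(f"within ISO 15197:2013 limits: {within.mean():.0%} (criterion: >= 95%)")
```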
Spatial and thematic assessment of object-based forest stand delineation using an OFA-matrix
NASA Astrophysics Data System (ADS)
Hernando, A.; Tiede, D.; Albrecht, F.; Lang, S.
2012-10-01
The delineation and classification of forest stands is a crucial aspect of forest management. Object-based image analysis (OBIA) can be used to produce detailed maps of forest stands from either orthophotos or very high resolution satellite imagery. However, measures are then required for evaluating and quantifying both the spatial and thematic accuracy of the OBIA output. In this paper we present an approach for delineating forest stands and a new Object Fate Analysis (OFA) matrix for accuracy assessment. A two-level object-based orthophoto analysis was first carried out to delineate stands on the Dehesa Boyal public land in central Spain (Avila Province). Two structural features were created for use in class modelling, enabling good differentiation between stands: a relational tree cover cluster feature, and an arithmetic ratio shadow/tree feature. We then extended the OFA comparison approach with an OFA-matrix to enable concurrent validation of thematic and spatial accuracies. Its diagonal shows the proportion of spatial and thematic coincidence between the reference data and the corresponding classification. New parameters for Spatial Thematic Loyalty (STL), Spatial Thematic Loyalty Overall (STLOVERALL) and Maximal Interfering Object (MIO) are introduced to summarise the OFA-matrix accuracy assessment. A stands map generated by OBIA (classification data) was compared with a map of the same area produced from photo interpretation and field data (reference data). In our example the OFA-matrix results indicate good spatial and thematic accuracies (>65%) for all stand classes except the shrub stands (31.8%), and a good STLOVERALL (69.8%). The OFA-matrix has therefore been shown to be a valid tool for OBIA accuracy assessment.
Wang, Feng-Fei; Luo, A-Li; Zhao, Yong-Heng
2014-02-01
The radial velocity of a star is very important for studying the dynamical structure and chemical evolution of the Milky Way, and is also a useful tool for identifying variable or peculiar objects. In the present work, we focus on calculating the radial velocities of low-resolution stellar spectra of different spectral types by adopting a template matching method, so as to provide an effective and reliable reference for different aspects of scientific research. We choose high signal-to-noise ratio (SNR) spectra of stars of different spectral types from the Sloan Digital Sky Survey (SDSS), and add different levels of noise to simulate stellar spectra with different SNR. We then obtain the radial velocity measurement accuracy of different spectral types at different SNR by employing the template matching method. The radial velocity measurement accuracy of white dwarf stars is analyzed as well. We conclude that the accuracy of radial velocity measurements of early-type stars is much lower than that of late-type ones. For example, the 1-sigma standard error of radial velocity measurements of A-type stars is 5-8 times as large as that of K-type and M-type stars. We discuss the reason and suggest that the very narrow lines of late-type stars ensure the accuracy of their radial velocity measurements, while early-type stars with very wide Balmer lines, such as A-type stars, are sensitive to noise and yield low radial velocity accuracy. For the spectra of white dwarf stars, the standard error of the radial velocity measurement can exceed 50 km s(-1) because of their extremely wide Balmer lines. These conclusions provide a useful reference for stellar studies.
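A minimal sketch of the template-matching idea: generate templates over a grid of trial radial velocities and pick the velocity that best matches the observed spectrum in a least-squares sense. The spectrum here is a synthetic set of Gaussian absorption lines at Balmer-like wavelengths with invented noise, not SDSS data.

```python
import numpy as np

c = 299792.458  # km/s
lam = np.exp(np.linspace(np.log(4000.0), np.log(7000.0), 8000))  # wavelength grid, Angstrom

def template(lam, centers, depth, width):
    """Normalized spectrum with Gaussian absorption lines (illustrative only)."""
    flux = np.ones_like(lam)
    for c0 in centers:
        flux -= depth * np.exp(-0.5 * ((lam - c0) / width) ** 2)
    return flux

lines = [4340.5, 4861.3, 6562.8]                        # Balmer-like rest-frame centers
v_true = 63.0                                           # km/s, simulated radial velocity
observed = template(lam, [l0 * (1 + v_true / c) for l0 in lines], depth=0.4, width=2.0)
observed += np.random.default_rng(1).normal(0, 0.02, lam.size)   # noise, roughly SNR ~ 50

# Template matching: chi-square over a grid of trial velocities.
trial_v = np.arange(-300.0, 300.0, 1.0)
chi2 = [np.sum((observed - template(lam, [l0 * (1 + v / c) for l0 in lines], 0.4, 2.0)) ** 2)
        for v in trial_v]
print(f"true RV: {v_true} km/s, recovered RV: {trial_v[int(np.argmin(chi2))]} km/s")
```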
Howie, Bryan N.; Donnelly, Peter; Marchini, Jonathan
2009-01-01
Genotype imputation methods are now being widely used in the analysis of genome-wide association studies. Most imputation analyses to date have used the HapMap as a reference dataset, but new reference panels (such as controls genotyped on multiple SNP chips and densely typed samples from the 1,000 Genomes Project) will soon allow a broader range of SNPs to be imputed with higher accuracy, thereby increasing power. We describe a genotype imputation method (IMPUTE version 2) that is designed to address the challenges presented by these new datasets. The main innovation of our approach is a flexible modelling framework that increases accuracy and combines information across multiple reference panels while remaining computationally feasible. We find that IMPUTE v2 attains higher accuracy than other methods when the HapMap provides the sole reference panel, but that the size of the panel constrains the improvements that can be made. We also find that imputation accuracy can be greatly enhanced by expanding the reference panel to contain thousands of chromosomes and that IMPUTE v2 outperforms other methods in this setting at both rare and common SNPs, with overall error rates that are 15%–20% lower than those of the closest competing method. One particularly challenging aspect of next-generation association studies is to integrate information across multiple reference panels genotyped on different sets of SNPs; we show that our approach to this problem has practical advantages over other suggested solutions. PMID:19543373
New Astronomical Reduction of Old Observations (the NAROO project)
NASA Astrophysics Data System (ADS)
Arlot, Jean-Eudes; Robert, Vincent; Lainey, Valery; Neiner, Coralie; Thouvenin, Nicolas
2018-04-01
The Gaia astrometric reference catalogue will provide star proper motions accurate enough to recover positions one century ago to one mas for stars of magnitude 14 or brighter. Our project is to re-reduce the old observations with the new catalogue, yielding an astrometric accuracy limited only by the observational biases and not by the reference stars. We thus plan to reach an accuracy of 50 mas where the old reductions were not better than 500 mas. For this purpose, we will digitize old photographic plates with a sub-micrometric scanner. Tests made using the UCAC catalogue show that old photographic plates have an intrinsic accuracy of 30 to 60 mas.
Statistical Reference Datasets
National Institute of Standards and Technology Data Gateway
Statistical Reference Datasets (Web, free access) The Statistical Reference Datasets is also supported by the Standard Reference Data Program. The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software.
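Comparisons against the certified values are conventionally summarized with the log relative error (LRE), roughly the number of correct significant digits delivered by the software under test. The sketch below illustrates the metric with invented numbers rather than an actual StRD dataset.

```python
import math

def log_relative_error(computed, certified):
    """Approximate number of agreeing significant digits (capped at 15)."""
    if computed == certified:
        return 15.0
    if certified == 0.0:
        return -math.log10(abs(computed))
    return min(15.0, -math.log10(abs(computed - certified) / abs(certified)))

# Illustrative values only (not from an actual StRD certified dataset).
certified_slope = 1.00211681802045
computed_slope = 1.00211681802199
print(f"LRE = {log_relative_error(computed_slope, certified_slope):.1f} digits")
```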
Factors interfering with the accuracy of five blood glucose meters used in Chinese hospitals.
Lv, Hong; Zhang, Guo-jun; Kang, Xi-xiong; Yuan, Hui; Lv, Yan-wei; Wang, Wen-wen; Randall, Rollins
2013-09-01
The prevalence of diabetes is increasing in China. Glucose control is very important in diabetic patients. The aim of this study was to compare the accuracy of five glucose meters used in Chinese hospitals with a reference method, in the absence and presence of various factors that may interfere with the meters. Within-run precision was evaluated for the following meters: Roche Accu-Chek Inform®, Abbott Precision PCx FreeStyle®, Bayer Contour®, J&J LifeScan SureStep Flexx®, and Nova Biomedical StatStrip®. Interference from hematocrit level, maltose, ascorbic acid, acetaminophen, galactose, dopamine, and uric acid was tested at three blood glucose levels, namely low, medium, and high concentrations. Accuracy (bias) of the meters and analytical interference by various factors were evaluated by comparing results obtained in whole blood specimens with those in plasma samples of the whole blood specimens run on the reference method. The impact of oxygen tension on the five blood glucose meters was also assessed. Precision was acceptable and differed slightly between meters. There were no significant differences in the measurements between the meters and the reference method. The hematocrit level significantly interfered with all meters, except StatStrip. Measurements were affected to varying degrees by different substances at different glucose levels, e.g. acetaminophen and ascorbic acid (FreeStyle), maltose and galactose (FreeStyle, Accu-Chek), uric acid (FreeStyle, Bayer Contour), and dopamine (Bayer Contour). The measurements with the five meters showed a good correlation with the plasma hexokinase reference method, but most were affected by the hematocrit level. Some meters also showed marked interference by other substances. © 2013 Wiley Periodicals, Inc.
INFLUENCE OF THE GALACTIC GRAVITATIONAL FIELD ON THE POSITIONAL ACCURACY OF EXTRAGALACTIC SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larchenkova, Tatiana I.; Lutovinov, Alexander A.; Lyskova, Natalya S.
We investigate the influence of random variations of the Galactic gravitational field on the apparent celestial positions of extragalactic sources. The basic statistical characteristics of a stochastic process (first-order moments, an autocorrelation function and a power spectral density) are used to describe a light ray deflection in a gravitational field of randomly moving point masses as a function of the source coordinates. We map a 2D distribution of the standard deviation of the angular shifts in positions of distant sources (including reference sources of the International Celestial Reference Frame) with respect to their true positions. For different Galactic matter distributions the standard deviation of the offset angle can reach several tens of μas (microarcseconds) toward the Galactic center, decreasing down to 4–6 μas at high galactic latitudes. The conditional standard deviation (“jitter”) of 2.5 μas is reached within 10 years at high galactic latitudes and within a few months toward the inner part of the Galaxy. The photometric microlensing events are not expected to be disturbed by astrometric random variations anywhere except the inner part of the Galaxy, as the Einstein–Chvolson times are typically much shorter than the jittering timescale. While a jitter of a single reference source can be up to dozens of μas over some reasonable observational time, using a sample of reference sources would reduce the error in relative astrometry. The obtained results can be used for estimating the physical upper limits on the time-dependent accuracy of astrometric measurements.
NASA Astrophysics Data System (ADS)
Munoz, Joshua
The primary focus of this research is evaluation of the feasibility, applicability, and accuracy of Doppler Light Detection And Ranging (LIDAR) sensors as non-contact means for measuring track speed, distance traveled, and curvature. Speed histories, currently measured with a rotary, wheel-mounted encoder, serve a number of useful purposes, one significant use involving derailment investigations. Distance calculation provides a spatial reference system for operators to locate track sections of interest. Railroad curves, measured with an IMU, are monitored to maintain track infrastructure within regulations. Speed measured with high accuracy leads to high-fidelity distance and curvature data through utilization of the processor clock rate and left- and right-rail speed differentials during curve navigation, respectively. Wheel-mounted encoders, or tachometers, provide a relatively low-resolution speed profile, exhibit increased noise with increasing speed, and are subject to the inertial behavior of the rail car, which affects output data. The IMU used to measure curvature is dependent on acceleration and yaw rate sensitivity and experiences difficulty in low-speed conditions. Preliminary system tests onboard a "Hy-Rail" utility vehicle capable of traveling on rail show speed capture is possible using the rails as the reference moving target; furthermore, obtaining speed profiles from both rails allows for the calculation of speed differentials in curves to estimate degrees of curvature. Ground truth distance calibration and curve measurement were also carried out. Distance calibration involved placement of spatial landmarks detected by a sensor to synchronize distance measurements as a pre-processing procedure. Curvature ground truth measurements provided a reference system to confirm measurement results and observe alignment variation throughout a curve. Primary testing occurred onboard a track geometry rail car, measuring rail speed over substantial mileage in various weather conditions, providing high-accuracy data to further calculate distance and curvature along the test routes. Test results indicate the LIDAR system measures speed at higher accuracy than the encoder, without the noise caused by increasing speed. Distance calculation is also highly accurate, with results showing high correlation with encoder and ground truth data. Finally, curvature calculation using speed data is shown to have good correlation with IMU measurements and a resolution capable of revealing localized track alignments. Further investigations involve a curve measurement algorithm and a speed calibration method independent of external reference systems, namely encoder and ground truth data. The speed calibration results show a high correlation with speed data from the track geometry vehicle. It is recommended that the study be extended to assess the LIDAR's sensitivity to car body motion in order to better isolate the embedded behavior in the speed and curvature profiles. Furthermore, in the interest of progressing the system toward a commercially viable unit, methods for self-calibration and pre-processing that allow for fully independent operation are highly encouraged.
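The curve estimation described above follows from simple geometry: on a curve of radius R, the outer rail sweeps a longer arc than the inner rail, so the fractional speed difference is approximately gauge/R. The sketch below converts a left/right rail speed differential into an approximate degree of curvature (arc definition); the gauge constant and the speed samples are assumptions for illustration, not values from the study.

```python
GAUGE_FT = 4.7083          # standard track gauge (56.5 in) in feet; assumed here

def degree_of_curvature(v_left_mph, v_right_mph):
    """Estimate railroad degree of curvature (arc definition, per 100 ft) from
    left/right rail speeds measured while traversing a curve."""
    v_avg = 0.5 * (v_left_mph + v_right_mph)
    dv = abs(v_left_mph - v_right_mph)
    if dv == 0:
        return 0.0                       # tangent (straight) track
    radius_ft = GAUGE_FT * v_avg / dv    # dv / v ~= gauge / radius
    return 5729.58 / radius_ft           # D ~= 5729.58 / R for R in feet

# Illustrative speed samples (mph), not measured data.
print(f"{degree_of_curvature(40.20, 40.00):.1f} degrees of curvature")
```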
Raabe, E.A.; Stumpf, R.P.; Marth, N.J.; Shrestha, R.L.
1996-01-01
Elevation differences on the order of 10 cm within Florida's marsh system influence major variations in tidal flooding and in the associated plant communities. This low elevation gradient, combined with sea level fluctuations of 5 to 10 cm over decadal and longer periods, can generate significant alteration and erosion of marsh habitats along the Gulf Coast. Knowledge of precise and accurate elevations in the marsh is critical to the efficient monitoring and management of these habitats. Global positioning system (GPS) technology was employed to establish six new orthometric heights along the Gulf Coast from which kinematic surveys into the marsh interior are conducted. The vertical accuracy achieved using GPS technology was evaluated using two networks with 16 vertical and nine horizontal NGS-published high-accuracy positions. New positions were occupied near St. Marks National Wildlife Refuge and along the coastline of Levy County and Citrus County. Static surveys were conducted using four Ashtech dual-frequency P-code receivers for 45-minute sessions and a data logging rate of 10 seconds. Network vector lengths ranged from 4 to 64 km and, including redundant baselines, totaled over 100 vectors. Analysis includes use of the GEOID93 model with a least squares network adjustment and reference to the National Geodetic Reference System (NGRS). The static surveys show high internal consistency, and the desired centimeter-level accuracy is achieved for the local network. Uncertainties for the newly established vertical positions range from 0.8 cm to 1.8 cm at the 95% confidence level. These new positions provide sufficient vertical accuracy to achieve the project objectives of tying marsh surface elevations to long-term water level gauges recording sea level fluctuations along the coast.
Dutch population specific sex estimation formulae using the proximal femur.
Colman, K L; Janssen, M C L; Stull, K E; van Rijn, R R; Oostra, R J; de Boer, H H; van der Merwe, A E
2018-05-01
Sex estimation techniques are frequently applied in forensic anthropological analyses of unidentified human skeletal remains. While morphological sex estimation methods are relatively robust to population differences, the classification accuracy of metric sex estimation methods is population-specific. No metric sex estimation method currently exists for the Dutch population. The purpose of this study is to create Dutch population-specific sex estimation formulae by means of osteometric analyses of the proximal femur. Since the Netherlands lacks a representative contemporary skeletal reference population, 2D plane reconstructions, derived from clinical computed tomography (CT) data, were used as an alternative source for a representative reference sample. The first part of this study assesses the intra- and inter-observer error, or reliability, of twelve measurements of the proximal femur. The technical error of measurement (TEM) and relative TEM (%TEM) were calculated using 26 dry adult femora. In addition, the agreement, or accuracy, between the dry bone and CT-based measurements was determined by percent agreement. Only reliable and accurate measurements were retained for the logistic regression sex estimation formulae; a training set (n=86) was used to create the models while an independent testing set (n=28) was used to validate the models. Due to high levels of multicollinearity, only single-variable models were created. Cross-validated classification accuracies ranged from 86% to 92%. The high cross-validated classification accuracies indicate that the developed formulae can contribute to the biological profile, and specifically to sex estimation, of unidentified human skeletal remains in the Netherlands. Furthermore, the results indicate that clinical CT data can be a valuable alternative source of data when representative skeletal collections are unavailable. Copyright © 2017 Elsevier B.V. All rights reserved.
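A minimal sketch of the single-variable logistic regression approach described above, using scikit-learn; the femoral head diameters, group means, and sample sizes below are invented for illustration and are not the Dutch reference data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Invented femoral head diameters (mm): females ~42 +/- 2, males ~47 +/- 2.
female = rng.normal(42.0, 2.0, 60)
male = rng.normal(47.0, 2.0, 60)
X = np.concatenate([female, male]).reshape(-1, 1)
y = np.concatenate([np.zeros(60), np.ones(60)])   # 0 = female, 1 = male

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=10)      # 10-fold cross-validated accuracy
print(f"cross-validated classification accuracy: {scores.mean():.0%}")

model.fit(X, y)
print(f"P(male | 45 mm) = {model.predict_proba([[45.0]])[0, 1]:.2f}")
```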
Oliver, D; Kotlicka-Antczak, M; Minichino, A; Spada, G; McGuire, P; Fusar-Poli, P
2018-03-01
Primary indicated prevention is reliant on accurate tools to predict the onset of psychosis. The gold standard assessment for detecting individuals at clinical high risk (CHR-P) for psychosis in the UK and many other countries is the Comprehensive Assessment for At Risk Mental States (CAARMS). While the prognostic accuracy of CHR-P instruments has been assessed in general, this is the first study to specifically analyse that of the CAARMS. As such, the CAARMS was used as the index test, with the reference index being psychosis onset within 2 years. Six independent studies were analysed using MIDAS (STATA 14), with a total of 1876 help-seeking subjects referred to high risk services (CHR-P+: n=892; CHR-P-: n=984). Area under the curve (AUC), summary receiver operating characteristic curves (SROC), quality assessment, likelihood ratios, and probability modified plots were computed, along with sensitivity analyses and meta-regressions. The current meta-analysis confirmed that the 2-year prognostic accuracy of the CAARMS is only acceptable (AUC=0.79, 95% CI: 0.75-0.83) and not outstanding as previously reported. In particular, specificity was poor. Sensitivity of the CAARMS is inferior to that of the SIPS, while specificity is comparably low. However, due to the difficulties in performing these types of studies, power in this meta-analysis was low. These results indicate that refining and improving the prognostic accuracy of the CAARMS should be the mainstream area of research for the next era. Avenues of prediction improvement are critically discussed and presented to better benefit patients and improve outcomes of first episode psychosis. Copyright © 2017 The Authors. Published by Elsevier Masson SAS. All rights reserved.
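The accuracy statistics summarized above derive from 2x2 tables of CHR-P designation against psychosis onset within 2 years. The sketch below computes sensitivity, specificity, and likelihood ratios from an invented table, not from the pooled study data.

```python
# Invented 2x2 counts: rows = CAARMS CHR-P+/-, columns = psychosis onset yes/no.
tp, fp = 160, 620     # CHR-P positive
fn, tn = 40, 1056     # CHR-P negative

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lr_pos = sensitivity / (1 - specificity)      # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity      # negative likelihood ratio

print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
print(f"LR+ {lr_pos:.2f}, LR- {lr_neg:.2f}")
```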
Study on the calibration and optimization of double theodolites baseline
NASA Astrophysics Data System (ADS)
Ma, Jing-yi; Ni, Jin-ping; Wu, Zhi-chao
2018-01-01
Because the baseline of a double-theodolite measurement system serves as the scale benchmark of the system and affects its accuracy, this paper puts forward a method for calibrating and optimizing the double-theodolite baseline. The double theodolites measure a reference ruler of known length, and the baseline is then obtained by inverting the baseline formula. Based on the law of error propagation, the analyses show that the baseline error function is an important index of system accuracy, and that the position, posture, and other properties of the reference ruler influence the baseline error. An optimization model is established with the baseline error function as the objective function, and the position and posture of the reference ruler are optimized. The simulation results show that the height of the reference ruler has no effect on the baseline error, that the effect of posture is not uniform, and that the baseline error is smallest when the reference ruler is placed at x=500mm and y=1000mm in the measurement space. The experimental results are consistent with the theoretical analyses in the measurement space. This study of the placement of the reference ruler therefore provides a useful reference for improving the accuracy of double-theodolite measurement systems.
NASA Astrophysics Data System (ADS)
Zolot, A. M.; Giorgetta, F. R.; Baumann, E.; Swann, W. C.; Coddington, I.; Newbury, N. R.
2013-03-01
The Doppler-limited spectra of methane between 176 THz and 184 THz (5870-6130 cm-1) and acetylene between 193 THz and 199 THz (6430-6630 cm-1) are acquired via comb-tooth resolved dual comb spectroscopy with frequency accuracy traceable to atomic standards. A least squares analysis of the measured absorbance and phase line shapes provides line center frequencies with absolute accuracy of 0.2 MHz, or less than one thousandth of the room temperature Doppler width. This accuracy is verified through comparison with previous saturated absorption spectroscopy of 37 strong isolated lines of acetylene. For the methane spectrum, the center frequencies of 46 well-isolated strong lines are determined with similar high accuracy, along with the center frequencies for 1107 non-isolated lines at lower accuracy. The measured methane line-center frequencies have an uncertainty comparable to the few available laser heterodyne measurements in this region but span a much larger optical bandwidth, marking the first broad-band measurements of the methane 2ν3 region directly referenced to atomic frequency standards. This study demonstrates the promise of dual comb spectroscopy to obtain high resolution broadband spectra that are comparable to state-of-the-art Fourier-transform spectrometer measurements but with much improved frequency accuracy. Work of the US government, not subject to US copyright.
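The line-center determination described above amounts to a least-squares fit of a line-shape model to each measured feature. The sketch below fits a Gaussian (Doppler-limited) profile to a synthetic line; the frequency grid, line parameters, and noise level are invented and far coarser than the comb data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_line(f, depth, f0, sigma, baseline):
    """Gaussian (Doppler-limited) absorbance profile."""
    return baseline - depth * np.exp(-0.5 * ((f - f0) / sigma) ** 2)

rng = np.random.default_rng(2)
f_mhz = np.linspace(-2000.0, 2000.0, 400)        # offset from a nominal frequency, MHz
true_center = 137.0                               # MHz, invented
data = gaussian_line(f_mhz, 0.30, true_center, 280.0, 1.0)
data += rng.normal(0, 0.005, f_mhz.size)          # measurement noise

popt, pcov = curve_fit(gaussian_line, f_mhz, data, p0=[0.2, 0.0, 300.0, 1.0])
center, center_err = popt[1], np.sqrt(np.diag(pcov))[1]
print(f"fitted line center: {center:.2f} +/- {center_err:.2f} MHz (true 137.00 MHz)")
```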
NASA Astrophysics Data System (ADS)
Reinartz, Peter; Müller, Rupert; Lehner, Manfred; Schroeder, Manfred
During the HRS (High Resolution Stereo) Scientific Assessment Program, the French space agency CNES delivered data sets from the HRS camera system with high-precision ancillary data. Two test data sets from this program were evaluated: one located in Germany, the other in Spain. The first goal was to derive orthoimages and digital surface models (DSM) from the along-track stereo data by applying the rigorous model with direct georeferencing and without ground control points (GCPs). For the derivation of the DSM, the stereo processing software developed at DLR for the MOMS-2P three-line stereo camera was used. As a first step, the interior and exterior orientation of the camera, delivered as ancillary data from the positioning and attitude systems, were extracted. Dense image matching, using nearly all pixels as kernel centers, provided the parallaxes. The quality of the stereo tie points was controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection leads to points in object space which are subsequently interpolated to a DSM on a regular grid. DEM filtering methods were also applied, and evaluations were carried out differentiating between accuracies in forest and other areas. Additionally, orthoimages were generated from the images of the two stereo looking directions. The orthoimage and DSM accuracy was determined by using GCPs and available reference DEMs of superior accuracy (DEMs derived from laser data and/or classical airborne photogrammetry). As expected, the results obtained without using GCPs showed a bias on the order of 5-20 m to the reference data for all three coordinates. By image matching it could be shown that the two independently derived orthoimages exhibit a very constant shift behavior. In a second step, a few GCPs (3-4) were used to calculate boresight alignment angles, introduced into the direct georeferencing process of each image independently. This method significantly improved the absolute accuracy of the resulting orthoimages and DSM.
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos; Lukashin, Constantine; Speth, Paul W.; Kopp, Gregg; Thome, Kurt; Wielicki, Bruce A.; Young, David F.
2014-01-01
The implementation of the Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission was recommended by the National Research Council in 2007 to provide an on-orbit intercalibration standard with accuracy of 0.3% (k = 2) for relevant Earth observing sensors. The goal of reference intercalibration, as established in the Decadal Survey, is to enable rigorous high-accuracy observations of critical climate change parameters, including reflected broadband radiation [Clouds and Earth's Radiant Energy System (CERES)], cloud properties [Visible Infrared Imaging Radiometer Suite (VIIRS)], and changes in surface albedo, including snow and ice albedo feedback. In this paper, we describe the CLARREO approach for performing intercalibration on orbit in the reflected solar (RS) wavelength domain. It is based on providing highly accurate spectral reflectance and reflected radiance measurements from the CLARREO Reflected Solar Spectrometer (RSS) to establish an on-orbit reference for existing sensors, namely, CERES and VIIRS on Joint Polar Satellite System satellites, Advanced Very High Resolution Radiometer and follow-on imagers on MetOp, Landsat imagers, and imagers on geostationary platforms. One of two fundamental CLARREO mission goals is to provide sufficient sampling of high-accuracy observations that are matched in time, space, and viewing angles with measurements made by existing instruments, to a degree that overcomes the random error sources from imperfect data matching and instrument noise. The data matching is achieved through CLARREO RSS pointing operations on orbit that align its line of sight with the intercalibrated sensor. These operations must be planned in advance; therefore, intercalibration events must be predicted by orbital modeling. If two competing opportunities are identified, one target sensor must be given priority over the other. The intercalibration method is to monitor changes in targeted sensor response function parameters: effective offset, gain, nonlinearity, optics spectral response, and sensitivity to polarization. In this paper, we use existing satellite data and orbital simulation methods to determine mission requirements for CLARREO, its instrument pointing ability, methodology, and needed intercalibration sampling and data matching for accurate intercalibration of RS radiation sensors on orbit.
SCUD: fast structure clustering of decoys using reference state to remove overall rotation.
Li, Hongzhi; Zhou, Yaoqi
2005-08-01
We developed a method for fast decoy clustering by using reference root-mean-squared distance (rRMSD) rather than commonly used pairwise RMSD (pRMSD) values. For 41 proteins with 2000 decoys each, the computing efficiency increases nine times without a significant change in the accuracy of near-native selections. Tests on additional protein decoys based on different reference conformations confirmed this result. Further analysis indicates that the pRMSD and rRMSD values are highly correlated (with an average correlation coefficient of 0.82) and the clusters obtained from pRMSD and rRMSD values are highly similar (the representative structures of the top five largest clusters from the two methods are 74% identical). SCUD (Structure ClUstering of Decoys) with an automatic cutoff value is available at http://theory.med.buffalo.edu. (c) 2005 Wiley Periodicals, Inc.
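One plausible reading of the reference-state idea (the exact SCUD procedure may differ): superpose each decoy once onto a fixed reference conformation to remove overall rotation and translation, then take plain coordinate RMSDs between aligned decoys as the pairwise distances, avoiding O(N^2) superpositions. The toy structures, noise levels, and neighbor-count clustering rule below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def kabsch_align(P, Q):
    """Rotate/translate P onto Q (both n_atoms x 3) via the Kabsch algorithm."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return Pc @ R.T + Q.mean(0)

def rmsd(P, Q):
    return np.sqrt(((P - Q) ** 2).sum(axis=1).mean())

# Toy "decoys": random perturbations plus arbitrary overall rotations of a base structure.
n_atoms, n_decoys = 50, 200
base = rng.normal(size=(n_atoms, 3)) * 5.0
reference = base.copy()                          # reference state used for alignment
decoys = []
for _ in range(n_decoys):
    noisy = base + rng.normal(scale=rng.uniform(0.5, 3.0), size=base.shape)
    theta = rng.uniform(0, 2 * np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0, 0.0, 1.0]])
    decoys.append(noisy @ rot.T)

# One alignment per decoy removes overall rotation; pairwise distances are then
# plain coordinate RMSDs with no further superposition.
aligned = [kabsch_align(d, reference) for d in decoys]
cutoff = 2.0
neighbor_counts = [sum(rmsd(aligned[i], aligned[j]) < cutoff
                       for j in range(n_decoys) if j != i)
                   for i in range(n_decoys)]
center = int(np.argmax(neighbor_counts))
print(f"largest cluster center: decoy {center} with {max(neighbor_counts)} neighbors")
```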
Small arms mini-fire control system: fiber-optic barrel deflection sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajic, Slobodan; Datskos, Panos G
Traditionally the methods to increase firearms accuracy, particularly at distance, have concentrated on barrel isolation (free floating) and substantial barrel wall thickening to gain rigidity. This barrel stiffening technique did not completely eliminate barrel movement but the problem was significantly reduced to allow a noticeable accuracy enhancement. This process, although highly successful, came at a very high weight penalty. Obviously the goal would be to lighten the barrel (firearm), yet achieve even greater accuracy. Thus, if lightweight barrels could ultimately be compensated for both their static and dynamic mechanical perturbations, the result would be very accurate, yet significantly lighter-weight, weapons. We discuss our development of a barrel reference sensor system that is designed to accomplish this ambitious goal. Our optical fiber-based sensor monitors the barrel muzzle position and autonomously compensates for any induced perturbations. The reticle is electronically adjusted in position to compensate for the induced barrel deviation in real time.
Zhang, Xiaodong; Zeng, Zhen; Liu, Xianlei; Fang, Fengzhou
2015-09-21
Freeform surfaces are promising candidates for next-generation optics; however, they require high form accuracy for excellent performance. A closed loop of fabrication, measurement, and compensation is necessary to improve form accuracy. It is difficult to perform off-machine measurement during freeform machining because remounting inaccuracy can result in significant form deviations. On the other hand, on-machine measurement may hide the systematic errors of the machine because the measuring device is placed in situ on the machine. This study proposes a new compensation strategy based on the combination of on-machine and off-machine measurement. The freeform surface is measured off-machine with nanometric accuracy, and the on-machine probe establishes the accurate relative position between the workpiece and the machine after remounting. The compensation cutting path is generated according to the calculated relative position and shape errors, avoiding extra manual adjustment or a highly accurate reference-feature fixture. Experimental results verified the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Craymer, M. R.; Henton, J. A.; Piraszewski, M.
2008-12-01
Glacial isostatic adjustment following the last glacial period is the dominant source of crustal deformation in Canada east of the Rocky Mountains. The present-day vertical component of motion associated with this process may exceed 1 cm/y and is being directly measured with the Global Positioning System (GPS). A consequence of this steady deformation is that high accuracy coordinates at one epoch may not be compatible with those at another epoch. For example, modern precise point positioning (PPP) methods provide coordinates at the epoch of observation while NAD83, the officially adopted reference frame in Canada and the U.S., is expressed at some past reference epoch. The PPP positions are therefore incompatible with coordinates in such a realization of the reference frame and need to be propagated back to the frame's reference epoch. Moreover, the realizations of NAD83 adopted by the provincial geodetic agencies in Canada are referenced to different coordinate epochs; either 1997.0 or 2002.0. Proper comparison of coordinates between provinces therefore requires propagating them from one reference epoch to another. In an effort to reconcile PPP results and different realizations of NAD83, we empirically represent crustal deformation throughout Canada using a velocity field based solely on high accuracy continuous and episodic GPS observations. The continuous observations from 2001 to 2007 were obtained from nearly 100 permanent GPS stations, predominately operated by Natural Resources Canada (NRCan) and provincial geodetic agencies. Many of these sites are part of the International GNSS Service (IGS) global network. Episodic observations from 1994 to 2006 were obtained from repeated occupations of the Canadian Base Network (CBN), which consists of approximately 160 stable pillar-type monuments across the entire country. The CBN enables a much denser spatial sampling of crustal motions although coverage in the far north is still rather sparse. NRCan solutions of the continuous GPS data were combined with those from other agencies as part of the North American Reference Frame (NAREF) effort to improve the reliability of the results. This NAREF solution has then been combined with our CBN results to obtain a denser velocity sampling for fitting different types of surfaces in a first attempt to determine a continuous GPS velocity field for the entire country. Expressing this velocity field as a grid enables users to interpolate to any location in Canada, allowing for the propagation of coordinates to any desired reference epoch. We examine the accuracy and limitations of this GPS velocity field by comparing it to other published GPS velocity solutions (which are all based on less data) as well as to GIA models, including versions of ICE-3G, ICE-5G and the recent Stable North America Reference Frame (SNARF) model. Of course, the accuracy of the GPS velocity field depends directly on the density of the GPS coverage. Consequently, the GPS velocity field is unable to fully represent the actual GIA motion in the far north and tends to smooth out the signal due to the spatially sparse coverage. On the other hand, the model performs quite well in the southern parts of the country where there is a much greater spatial density of GPS measurements.
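One practical use of such a gridded velocity field is epoch propagation: interpolate the velocity at a station's location and move its coordinates from the observation epoch to the frame's reference epoch. The sketch below does this for the vertical component with an invented uplift grid; it is not the NAREF/CBN field, and the station coordinates and epochs are illustrative.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Invented vertical-velocity grid (mm/yr) over a lat/lon box; positive = uplift.
lats = np.linspace(45.0, 65.0, 21)
lons = np.linspace(-100.0, -70.0, 31)
lon_g, lat_g = np.meshgrid(lons, lats)
v_up = 10.0 * np.exp(-(((lat_g - 58.0) / 8.0) ** 2 + ((lon_g + 85.0) / 12.0) ** 2))

interp = RegularGridInterpolator((lats, lons), v_up)

def propagate_height(h_obs_m, lat, lon, epoch_obs, epoch_ref):
    """Move an observed height to the frame's reference epoch using the interpolated
    vertical velocity (horizontal components would be handled the same way)."""
    v_mm_per_yr = float(interp([[lat, lon]])[0])
    return h_obs_m + v_mm_per_yr * 1e-3 * (epoch_ref - epoch_obs)

# Example: height observed at epoch 2008.6, propagated back to a 2002.0 reference epoch.
print(f"{propagate_height(245.1230, 56.0, -88.0, 2008.6, 2002.0):.4f} m")
```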
Performance of a new test strip for freestyle blood glucose monitoring systems.
Lock, John Paul; Brazg, Ronald; Bernstein, Robert M; Taylor, Elizabeth; Patel, Mona; Ward, Jeanne; Alva, Shridhara; Chen, Ting; Welsh, Zoë; Amor, Walter; Bhogal, Claire; Ng, Ronald
2011-01-01
A new strip, designed to enhance the ease of use and minimize interference from non-glucose sugars, has been developed to replace the current FreeStyle (Abbott Diabetes Care, Alameda, CA) blood glucose test strip. We evaluated the performance of this new strip. Laboratory evaluation included precision, linearity, dynamic range, effects of operating temperature, humidity, altitude, hematocrit, interferents, and blood reapplication. System accuracy, lay user performance, and ease of use for finger capillary blood testing and accuracy for venous blood testing were evaluated at clinics. Lay users also compared the speed and ease of use between the new strip and the current FreeStyle strip. For glucose concentrations <75 mg/dL, 73%, 100%, and 100% of the individual capillary blood glucose results obtained by lay users fell within ± 5, 10, and 15 mg/dL, respectively, of the reference. For glucose concentrations ≥75 mg/dL, 68%, 95%, 99%, and 99% of the lay user results fell within ± 5%, 10%, 15%, and 20%, respectively, of the reference. Comparable accuracy was obtained in the venous blood study. Lay users found the new test strip easy to use and faster and easier to use than the current FreeStyle strip. The new strip maintained accuracy under various challenging conditions, including high concentrations of various interferents, sample reapplication up to 60 s, and extremes in hematocrit, altitude, and operating temperature and humidity. Our results demonstrated excellent accuracy of the new FreeStyle test strip and validated the improvements in minimizing interference and enhancing ease of use.
Performance Evaluation of Three Blood Glucose Monitoring Systems Using ISO 15197
Bedini, José Luis; Wallace, Jane F.; Pardo, Scott; Petruschke, Thorsten
2015-01-01
Background: Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients’ health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Methods: Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. Results: All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs relative to the reference values than the other 2 BGMS. Insulin dosing errors were lower for the Contour Next USB than for the other 2 systems. Conclusions: All BGMS fulfilled ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses as well as calculation of the hypothetical bolus insulin dosing error suggest a high accuracy of the Contour Next USB. PMID:26445813
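The MARD statistic used above is a simple summary of meter error relative to the laboratory reference. A minimal sketch of its computation is given below; the glucose values are invented, and the common practice of reporting absolute rather than relative differences at low glucose is ignored for brevity.

```python
# Minimal sketch; glucose values are invented.
import numpy as np

def mard(meter, reference):
    """Mean absolute relative difference (%) between meter and reference glucose values."""
    meter = np.asarray(meter, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(meter - reference) / reference)

print(f"MARD = {mard([102, 148, 250, 68], [100, 155, 240, 72]):.1f}%")
```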
Schreiner, Markus M; Platzgummer, Hannes; Unterhumer, Sylvia; Weber, Michael; Mistelbauer, Gabriel; Loewe, Christian; Schernthaner, Ruediger E
2017-08-01
To investigate radiation exposure, objective image quality, and the diagnostic accuracy of a BMI-adjusted ultra-low-dose CT angiography (CTA) protocol for the assessment of peripheral arterial disease (PAD), with digital subtraction angiography (DSA) as the standard of reference. In this prospective, IRB-approved study, 40 PAD patients (30 male, mean age 72 years) underwent CTA on a dual-source CT scanner at 80kV tube voltage. The reference amplitude for tube current modulation was personalized based on the body mass index (BMI) with 120 mAs for [BMI≤25] or 150 mAs for [25
Maas, E T; Juch, J N S; Ostelo, R W J G; Groeneweg, J G; Kallewaard, J W; Koes, B W; Verhagen, A P; Huygen, F J P M; van Tulder, M W
2017-03-01
Patient history and physical examination are frequently used procedures to diagnose chronic low back pain (CLBP) originating from the facet joints, although the diagnostic accuracy is controversial. The aim of this systematic review is to determine the diagnostic accuracy of patient history and/or physical examination to identify CLBP originating from the facet joints using diagnostic blocks as reference standard. We searched MEDLINE, EMBASE, CINAHL, Web of Science and the Cochrane Collaboration database from inception until June 2016. Two review authors independently selected studies for inclusion, extracted data and assessed the risk of bias. We calculated sensitivity and specificity values, with 95% confidence intervals (95% CI). Twelve studies were included, in which 129 combinations of index tests and reference standards were presented. Most of these index tests have only been evaluated in single studies with a high risk of bias. Four studies evaluated the diagnostic accuracy of the Revel's criteria combination. Because of the clinical heterogeneity, results were not pooled. The published sensitivities ranged from 0.11 (95% CI 0.02-0.29) to 1.00 (95% CI 0.75-1.00), and the specificities ranged from 0.66 (95% CI 0.46-0.82) to 0.91 (95% CI 0.83-0.96). Due to clinical heterogeneity, the evidence for the diagnostic accuracy of patient history and/or physical examination to identify facet joint pain is inconclusive. Patient history and physical examination cannot be used to limit the need of a diagnostic block. The validity of the diagnostic facet joint block should be studied, and high quality studies are required to confirm the results of single studies. © 2016 European Pain Federation - EFIC®.
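As a worked illustration of the sensitivity and specificity estimates with 95% confidence intervals reported above, the sketch below computes both from a 2x2 table of an index test against the diagnostic-block reference, using the Wilson score interval. The counts are invented, and the Wilson interval is only one common choice; the review does not state which interval method was used.

```python
# Sketch with invented counts; the Wilson score interval is one common 95% CI choice.
from math import sqrt

def wilson(k, n, z=1.96):
    """Proportion k/n with its Wilson score confidence interval."""
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return p, centre - half, centre + half

# 2x2 table of an index test (e.g. a Revel's-type criterion) vs. the diagnostic block:
tp, fn, tn, fp = 18, 7, 40, 15
print("sensitivity (est, lo, hi):", wilson(tp, tp + fn))
print("specificity (est, lo, hi):", wilson(tn, tn + fp))
```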
NASA Astrophysics Data System (ADS)
Al-Durgham, Kaleel; Lichti, Derek D.; Kuntze, Gregor; Ronsky, Janet
2017-06-01
High-speed biplanar videoradiography imaging systems, clinically referred to as dual fluoroscopy (DF), are being used increasingly for skeletal kinematics analysis. Typically, a DF system comprises two X-ray sources, two image intensifiers and two high-speed video cameras. The combination of these elements provides time-series image pairs of articulating bones of a joint, which permits the measurement of bony rotation and translation in 3D at high temporal resolution (e.g., 120-250 Hz). Assessment of the accuracy of 3D measurements derived from DF imaging has been the subject of recent research efforts by several groups, although with methodological limitations. This paper presents a novel and simple accuracy assessment procedure based on using precise photogrammetric tools. We address the fundamental photogrammetry principles for the accuracy evaluation of an imaging system. Bundle adjustment with self-calibration is used for the estimation of the system parameters. The bundle adjustment calibration uses an appropriate sensor model and applies free-network constraints and relative orientation stability constraints for a precise estimation of the system parameters. A photogrammetric intersection of time-series image pairs is used for the 3D reconstruction of a rotating planar object. A point-based registration method is used to combine the 3D coordinates from the intersection and independently surveyed coordinates. The final DF accuracy measure is reported as the distance between 3D coordinates from image intersection and the independently surveyed coordinates. The accuracy assessment procedure is designed to evaluate the accuracy over the full DF image format and a wide range of object rotation. The reconstruction experiment with the rotating planar object yielded an average positional error of 0.44 +/- 0.2 mm in the derived 3D coordinates (minimum 0.05 and maximum 1.2 mm).
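The point-based registration step described above can be illustrated with a standard rigid-body (Kabsch/Procrustes) fit followed by per-point distance errors. This is a generic sketch with simulated coordinates, not the authors' bundle-adjustment pipeline; the rotation, translation and noise level are assumptions.

```python
# Generic sketch with simulated coordinates; rotation, translation and noise are assumptions.
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid-body fit (rotation R, translation t) mapping points P onto Q."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

rng = np.random.default_rng(0)
surveyed = rng.uniform(0.0, 100.0, (20, 3))          # independently surveyed coordinates (mm)
ang = np.deg2rad(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
reconstructed = surveyed @ R_true.T + [5.0, -3.0, 2.0] + rng.normal(0.0, 0.4, (20, 3))

R, t = kabsch(reconstructed, surveyed)
aligned = reconstructed @ R.T + t
err = np.linalg.norm(aligned - surveyed, axis=1)      # per-point 3D distance error
print(f"mean {err.mean():.2f} mm, min {err.min():.2f} mm, max {err.max():.2f} mm")
```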
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gastegger, Michael; Kauffmann, Clemens; Marquetand, Philipp, E-mail: philipp.marquetand@univie.ac.at
Many approaches, which have been developed to express the potential energy of large systems, exploit the locality of the atomic interactions. A prominent example is fragmentation methods, in which the quantum chemical calculations are carried out for overlapping small fragments of a given molecule that are then combined in a second step to yield the system’s total energy. Here we compare the accuracy of the systematic molecular fragmentation approach with the performance of high-dimensional neural network (HDNN) potentials introduced by Behler and Parrinello. HDNN potentials are similar in spirit to the fragmentation approach in that the total energy is constructed as a sum of environment-dependent atomic energies, which are derived indirectly from electronic structure calculations. As a benchmark set, we use all-trans alkanes containing up to eleven carbon atoms at the coupled cluster level of theory. These molecules have been chosen because they allow reliable reference energies to be extrapolated for very long chains, enabling an assessment of the energies obtained by both methods for alkanes including up to 10 000 carbon atoms. We find that both methods predict high-quality energies, with the HDNN potentials yielding smaller errors with respect to the coupled cluster reference.
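A minimal toy sketch of the HDNN idea, total energy as a sum of per-element atomic-energy networks evaluated on environment descriptors, is given below. Weights, descriptors and the molecule are random placeholders; a real HDNN uses symmetry-function descriptors and weights fitted to electronic-structure reference energies.

```python
# Toy illustration; weights, descriptors and the molecule are random placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_desc, n_hidden = 8, 5

def atomic_energy(descriptor, W1, b1, w2, b2):
    """One-hidden-layer atomic network: local-environment descriptor -> atomic energy."""
    hidden = np.tanh(W1 @ descriptor + b1)
    return float(w2 @ hidden + b2)

# One small network per element (here C and H), as in the Behler-Parrinello scheme.
params = {el: (rng.normal(size=(n_hidden, n_desc)), rng.normal(size=n_hidden),
               rng.normal(size=n_hidden), rng.normal())
          for el in ("C", "H")}

# A fictitious molecule: (element, environment-descriptor) pairs.
atoms = [("C", rng.normal(size=n_desc)) for _ in range(3)] + \
        [("H", rng.normal(size=n_desc)) for _ in range(8)]

E_total = sum(atomic_energy(d, *params[el]) for el, d in atoms)   # sum of atomic energies
print(f"total energy (arbitrary units): {E_total:.3f}")
```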
Spacecraft attitude determination accuracy from mission experience
NASA Technical Reports Server (NTRS)
Brasoveanu, D.; Hashmall, J.
1994-01-01
This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.
Standard Reference Specimens in Quality Control of Engineering Surfaces
Song, J. F.; Vorburger, T. V.
1991-01-01
In the quality control of engineering surfaces, we aim to understand and maintain a good relationship between the manufacturing process and surface function. This is achieved by controlling the surface texture. The control process involves: 1) learning the functional parameters and their control values through controlled experiments or through a long history of production and use; 2) maintaining high accuracy and reproducibility with measurements not only of roughness calibration specimens but also of real engineering parts. In this paper, the characteristics, utilizations, and limitations of different classes of precision roughness calibration specimens are described. A measuring procedure of engineering surfaces, based on the calibration procedure of roughness specimens at NIST, is proposed. This procedure involves utilization of check specimens with waveform, wavelength, and other roughness parameters similar to functioning engineering surfaces. These check specimens would be certified under standardized reference measuring conditions, or by a reference instrument, and could be used for overall checking of the measuring procedure and for maintaining accuracy and agreement in engineering surface measurement. The concept of “surface texture design” is also suggested, which involves designing the engineering surface texture, the manufacturing process, and the quality control procedure to meet the optimal functional needs. PMID:28184115
Dustfall Effect on Hyperspectral Inversion of Chlorophyll Content - a Laboratory Experiment
NASA Astrophysics Data System (ADS)
Chen, Yuteng; Ma, Baodong; Li, Xuexin; Zhang, Song; Wu, Lixin
2018-04-01
Dust pollution is serious in many areas of China. It is of great significance to estimate chlorophyll content of vegetation accurately by hyperspectral remote sensing for assessing the vegetation growth status and monitoring the ecological environment in dusty areas. By using selected vegetation indices, including the Medium Resolution Imaging Spectrometer Terrestrial Chlorophyll Index (MTCI), the Double Difference Index (DD) and the Red Edge Position Index (REP), chlorophyll inversion models were built to study the accuracy of hyperspectral inversion of chlorophyll content based on a laboratory experiment. The results show that: (1) The REP exponential model has the most stable accuracy for inversion of chlorophyll content in a dusty environment. When the dustfall amount is less than 80 g/m2, the inversion accuracy based on REP is stable with the variation of dustfall amount. When the dustfall amount is greater than 80 g/m2, the inversion accuracy fluctuates slightly. (2) The inversion accuracy of DD is the worst among the three models. (3) The MTCI logarithm model has high inversion accuracy when the dustfall amount is less than 80 g/m2; when the dustfall amount is greater than 80 g/m2, the inversion accuracy decreases regularly and the inversion accuracy of the modified MTCI (mMTCI) increases significantly. The results provide an experimental basis and theoretical reference for hyperspectral remote sensing inversion of chlorophyll content.
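The two best-performing indices above can be written down directly from red-edge reflectances. The sketch below assumes the standard MERIS-band form of MTCI and the common four-point linear-interpolation form of REP; the paper may extract REP differently, and the reflectance values are invented.

```python
# Assumed index definitions; reflectance values are invented.
def mtci(r754, r709, r681):
    """MERIS Terrestrial Chlorophyll Index from band reflectances."""
    return (r754 - r709) / (r709 - r681)

def rep_linear(r670, r700, r740, r780):
    """Red Edge Position (nm) by the four-point linear-interpolation method."""
    r_edge = (r670 + r780) / 2.0
    return 700.0 + 40.0 * (r_edge - r700) / (r740 - r700)

print(f"MTCI = {mtci(0.45, 0.18, 0.05):.2f}")
print(f"REP  = {rep_linear(0.05, 0.12, 0.40, 0.47):.1f} nm")
```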
van Dijk, R; van Assen, M; Vliegenthart, R; de Bock, G H; van der Harst, P; Oudkerk, M
2017-11-27
Stress cardiovascular magnetic resonance (CMR) perfusion imaging is a promising modality for the evaluation of coronary artery disease (CAD) due to high spatial resolution and absence of radiation. Semi-quantitative and quantitative analyses of CMR perfusion are based on signal-intensity curves produced during the first-pass of gadolinium contrast. Multiple semi-quantitative and quantitative parameters have been introduced. Diagnostic performance of these parameters varies extensively among studies and standardized protocols are lacking. This study aims to determine the diagnostic accuracy of semi-quantitative and quantitative CMR perfusion parameters, compared to multiple reference standards. Pubmed, WebOfScience, and Embase were systematically searched using predefined criteria (3272 articles). A check for duplicates was performed (1967 articles). Eligibility and relevance of the articles were determined by two reviewers using pre-defined criteria. The primary data extraction was performed independently by two researchers with the use of a predefined template. Differences in extracted data were resolved by discussion between the two researchers. The quality of the included studies was assessed using the 'Quality Assessment of Diagnostic Accuracy Studies Tool' (QUADAS-2). True positives, false positives, true negatives, and false negatives were extracted or calculated from the articles. The principal summary measures used to assess diagnostic accuracy were sensitivity, specificity, and area under the receiver operating curve (AUC). Data were pooled according to analysis territory, reference standard and perfusion parameter. Twenty-two articles were eligible based on the predefined study eligibility criteria. The pooled diagnostic accuracy for segment-, territory- and patient-based analyses showed good diagnostic performance with sensitivity of 0.88, 0.82, and 0.83, specificity of 0.72, 0.83, and 0.76 and AUC of 0.90, 0.84, and 0.87, respectively. In the per-territory analysis, our results show similar diagnostic accuracy comparing anatomical (AUC 0.86 (0.83-0.89)) and functional reference standards (AUC 0.88 (0.84-0.90)). Only the per-territory analysis sensitivity did not show significant heterogeneity. None of the groups showed signs of publication bias. The clinical value of semi-quantitative and quantitative CMR perfusion analysis remains uncertain due to extensive inter-study heterogeneity and large differences in CMR perfusion acquisition protocols, reference standards, and methods of assessment of myocardial perfusion parameters. For widespread implementation, standardization of CMR perfusion techniques is essential. CRD42016040176.
Remote Determination of the in situ Sensitivity of a Streckeisen STS-2 Broadband Seismometer
NASA Astrophysics Data System (ADS)
Uhrhammer, R. A.; Taira, T.; Hellweg, M.
2015-12-01
The sensitivity of a STS-2 broadband seismometer can be determined remotely by two basic methods: 1) via comparison of the inferred ground motions with those of a reference seismometer, and 2) via excitation of the calibration coil with a simultaneously recorded stimulus signal. The first method is limited by the accuracy of the reference seismometer and the second method is limited by the accuracy of the motor constant (Gc) of the calibration coil. The accuracy of both methods is also influenced by the signal-to-noise ratio (SNR) in the presence of background seismic noise and the degree of orthogonality of the tri-axial suspension in the STS-2 seismometer. The Streckeisen STS-2 manual states that the signal coil sensitivity (Gs) is 1500 V/(m/s) (+/-1.5%) and it gives Gc to only one decimal place (i.e., Gc = 2 g/A). Unfortunately, the factory Gc value is not given with sufficient accuracy to be useful for determining Gs to within 1.5%. Thus we need to determine Gc to enable accurate calibration of the STS-2 via remote excitation of the calibration coil with a known stimulus. The Berkeley Digital Seismic Network (BDSN) has 12 STS-2 seismometers with co-sited reference sensors (strong motion accelerometers) and they are all recorded by Q330HR data loggers with factory cabling. The procedure is to first verify the sensitivity of the STS-2 signal coils (Gs) via comparison of the ground motions recorded by the STS-2 with the ground motions recorded by the co-sited strong motion accelerometer for an earthquake that has sufficiently high SNR in a passband common to both sensors. The second step in the procedure is to remotely (from Berkeley) excite the calibration coil with a 1 Hz sinusoid which is simultaneously recorded and, using the above measured Gs values, solve for Gc of the calibration coils. The resulting Gc values are typically 2.20-2.50 g/A (accurate to 3+ decimal places) and once the Gc values are found, the STS-2 absolute sensitivity can be determined remotely to an accuracy of better than 1%. The primary advantage of using strong motion accelerometers as the reference instrument is that their absolute calibration can be checked via tilt tests if the need arises.
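Under a strongly simplified model (calibration current mapped to an equivalent acceleration, the 1 Hz tone well inside the flat velocity passband, digitizer gain ignored), the calibration-coil constant can be recovered from the recorded sinusoid amplitude as sketched below. The stimulus and output amplitudes are assumed numbers chosen only to land in the 2.2-2.5 g/A range quoted above; the actual BDSN procedure solves for Gc from the full recorded time series and instrument response.

```python
# Strongly simplified sketch; I0 and V0 are assumed amplitudes, not measured BDSN values.
import numpy as np

f = 1.0          # Hz, calibration frequency
Gs = 1500.0      # V/(m/s), signal-coil sensitivity verified against the co-sited accelerometer
I0 = 1.0e-4      # A, stimulus current amplitude on the calibration coil (assumed)
V0 = 0.538       # V, recorded output amplitude at 1 Hz (assumed)

# In the flat velocity passband: V0 ~ Gs * (Gc * I0) / (2*pi*f), with Gc*I0 the equivalent acceleration.
Gc = 2.0 * np.pi * f * V0 / (Gs * I0)          # (m/s^2)/A
print(f"Gc = {Gc:.2f} (m/s^2)/A = {Gc / 9.81:.3f} g/A")
```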
Achieving Climate Change Absolute Accuracy in Orbit
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.;
2013-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.
Mistry, Binoy; Stewart De Ramirez, Sarah; Kelen, Gabor; Schmitz, Paulo S K; Balhara, Kamna S; Levin, Scott; Martinez, Diego; Psoter, Kevin; Anton, Xavier; Hinson, Jeremiah S
2018-05-01
We assess accuracy and variability of triage score assignment by emergency department (ED) nurses using the Emergency Severity Index (ESI) in 3 countries. In accordance with previous reports and clinical observation, we hypothesize low accuracy and high variability across all sites. This cross-sectional multicenter study enrolled 87 ESI-trained nurses from EDs in Brazil, the United Arab Emirates, and the United States. Standardized triage scenarios published by the Agency for Healthcare Research and Quality (AHRQ) were used. Accuracy was defined by concordance with the AHRQ key and calculated as percentages. Accuracy comparisons were made with one-way ANOVA and paired t test. Interrater reliability was measured with Krippendorff's α. Subanalyses based on nursing experience and triage scenario type were also performed. Mean accuracy pooled across all sites and scenarios was 59.2% (95% confidence interval [CI] 56.4% to 62.0%) and interrater reliability was modest (α=.730; 95% CI .692 to .767). There was no difference in overall accuracy between sites or according to nurse experience. Medium-acuity scenarios were scored with greater accuracy (76.4%; 95% CI 72.6% to 80.3%) than high- or low-acuity cases (44.1%, 95% CI 39.3% to 49.0% and 54%, 95% CI 49.9% to 58.2%), and adult scenarios were scored with greater accuracy than pediatric ones (66.2%, 95% CI 62.9% to 69.7% versus 46.9%, 95% CI 43.4% to 50.3%). In this multinational study, concordance of nurse-assigned ESI score with reference standard was universally poor and variability was high. Although the ESI is the most popular ED triage tool in the United States and is increasingly used worldwide, our findings point to a need for more reliable ED triage tools. Copyright © 2017 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Evans, P; Fairman, B
2001-10-01
Reliable trace metal analysis of environmental samples is dependent upon the availability of high accuracy, matrix reference standards. Here, we present Cd, Cu, Ni, Pb and Zn isotope dilution determinations for an estuary water certified reference material (LGC 6016). This work highlights the need for high-accuracy techniques in the development of trace element CRMs rather than conventional inter-laboratory trials. Certification of the estuary water LGC6016 was initially determined from a consensus mean from 14 laboratories, but this was found to be unsatisfactory due to the large discrepancies in the reported concentrations. The material was re-analysed using isotope dilution ICP-MS techniques. Pb and Cd were determined using a conventional quadrupole ICP-MS (Elan 5000). Cu, Zn and Ni were determined using a magnetic sector ICP-MS (Finnigan Element), which allowed significant polyatomic interferences to be overcome. Using the magnetic sector instrument, precise mass calibration to within 0.02 amu permitted identification of the interferences. Most interferences derived from the sample matrix. For example, the high Na content causes interferences on 63Cu, due to the formation of 40Ar23Na and 23Na2 16O1H, which in a conventional quadrupole instrument would relate to an erroneous increase in signal intensity by up to 20%. For each analyte a combined uncertainty calculation was performed following the Eurachem/CITAC and ISO guidelines. For each element a combined uncertainty of 2-3% was found, which represents a 10-fold improvement compared to certification by inter-laboratory comparison. Analysis of the combined uncertainty budget indicates that the majority of systematic uncertainty derives from the instrumental isotope ratio measurements.
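The core isotope dilution relation behind these determinations follows from the mass balance of two isotopes between sample and spike. The sketch below implements that basic relation; the isotope ratios and spike amount are invented, and conversion to a mass concentration (via isotopic abundances, atomic weight and sample mass) is omitted.

```python
# Basic two-isotope mass-balance relation; ratios and spike amount are invented.
def sample_moles_of_spike_isotope(n_spike, r_spike, r_sample, r_mix):
    """
    For isotopes a (reference) and b (spike), with R = n_a/n_b in the spike, the sample
    and the blend, return the moles of isotope b native to the sample.
    """
    return n_spike * (r_spike - r_mix) / (r_mix - r_sample)

n_b = sample_moles_of_spike_isotope(n_spike=1.0e-9, r_spike=0.05, r_sample=2.17, r_mix=0.60)
n_a = 2.17 * n_b            # moles of the reference isotope in the sample (R_sample * n_b)
print(f"{n_b:.3e} mol (spike isotope) and {n_a:.3e} mol (reference isotope) in the sample")
```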
High symptom reporters are less interoceptively accurate in a symptom-related context.
Bogaerts, Katleen; Millen, An; Li, Wan; De Peuter, Steven; Van Diest, Ilse; Vlemincx, Elke; Fannes, Stien; Van den Bergh, Omer
2008-11-01
We investigated the role of a symptom interpretation frame on the accuracy of interoception and on retrospective symptom reporting in nonclinical high and low reporters of medically unexplained symptoms. All participants (N=74) went through two subsequent trials of the Rebreathing Test, inducing altered respiration and other physical sensations as a result of a gradually increasing pCO(2) level in the blood. Each trial consisted of a baseline (60 s), a rebreathing phase (150 s), and a recovery phase (150 s). In one trial, the sensations were framed in a neutral way ("the gas mixture might alter breathing behavior and induce respiratory sensations"). In the other trial, a symptom frame was induced ("the gas mixture might alter breathing behavior and induce respiratory symptoms"). Breathing behavior was continuously monitored, subjective sensations were rated every 10 s, and after each trial, participants filled out a symptom checklist. Within-subject correlations between the subjective rating and its physiological referent were calculated for the rebreathing phase and recovery phase of each trial separately. High symptom reporters had more (retrospective) complaints than low symptom reporters, especially in the symptom trial. Only in the symptom frame were high symptom reporters less accurate than low symptom reporters. The reduction in interoceptive accuracy (IA) in high symptom reporters was most striking in the recovery phase of the symptom frame trial. A contextual cue, such as a reference to symptoms, reduced IA in high symptom reporters and this was more so during recovery from the symptom induction.
Next-generation genotype imputation service and methods.
Das, Sayantan; Forer, Lukas; Schönherr, Sebastian; Sidore, Carlo; Locke, Adam E; Kwong, Alan; Vrieze, Scott I; Chew, Emily Y; Levy, Shawn; McGue, Matt; Schlessinger, David; Stambolian, Dwight; Loh, Po-Ru; Iacono, William G; Swaroop, Anand; Scott, Laura J; Cucca, Francesco; Kronenberg, Florian; Boehnke, Michael; Abecasis, Gonçalo R; Fuchsberger, Christian
2016-10-01
Genotype imputation is a key component of genetic association studies, where it increases power, facilitates meta-analysis, and aids interpretation of signals. Genotype imputation is computationally demanding and, with current tools, typically requires access to a high-performance computing cluster and to a reference panel of sequenced genomes. Here we describe improvements to imputation machinery that reduce computational requirements by more than an order of magnitude with no loss of accuracy in comparison to standard imputation tools. We also describe a new web-based service for imputation that facilitates access to new reference panels and greatly improves user experience and productivity.
Modernization of Koesters interferometer and high accuracy calibration gauge blocks
NASA Astrophysics Data System (ADS)
França, R. S.; Silva, I. L. M.; Couceiro, I. B.; Torres, M. A. C.; Bessa, M. S.; Costa, P. A.; Oliveira, W., Jr.; Grieneisen, H. P. H.
2016-07-01
The Optical Metrology Division (Diopt) of Inmetro is responsible for maintaining the national reference of the length unit according to International System of Units (SI) definitions. The length unit is realized by interferometric techniques and is disseminated to the dimensional community through calibrations of gauge blocks. Calibration of large gauge blocks from 100 mm to 1000 mm has been performed by Diopt with a Koesters interferometer with reference to spectral lines of a krypton discharge lamp. Replacement of this lamp by frequency stabilized lasers, traceable now to the time and frequency scale, is described and the first results are reported.
Reconstruction method for fringe projection profilometry based on light beams.
Li, Xuexing; Zhang, Zhijiang; Yang, Chen
2016-12-01
A novel reconstruction method for fringe projection profilometry, based on light beams, is proposed and verified by experiments. Commonly used calibration techniques require either projector calibration parameters or reference planes placed at many known positions. Introducing projector calibration can reduce the accuracy of the reconstruction result, and placing the reference planes at many known positions is time-consuming. Therefore, in this paper, a reconstruction method that does not require the projector's parameters is proposed, and only two reference planes are introduced. A series of light beams determined by the subpixel point-to-point map on the two reference planes, combined with their reflected light beams determined by the camera model, are used to calculate the 3D coordinates of reconstruction points. Furthermore, the bundle adjustment strategy and the complementary gray-code phase-shifting method are utilized to ensure accuracy and stability. Qualitative and quantitative comparisons as well as experimental tests demonstrate the performance of our proposed approach, and the measurement accuracy can reach about 0.0454 mm.
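The complementary gray-code plus phase-shifting step mentioned above relies on the standard N-step phase-shifting formula for the wrapped phase. A minimal sketch with synthetic fringe images is shown below; gray-code unwrapping and the light-beam triangulation itself are not included.

```python
# Standard N-step phase-shifting; synthetic noise-free fringes, no gray-code unwrapping.
import numpy as np

def phase_from_shifts(images):
    """Wrapped phase from N fringe images I_n = A + B*cos(phi + 2*pi*n/N)."""
    imgs = np.asarray(images, dtype=float)
    N = imgs.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(imgs * np.sin(2.0 * np.pi * n / N), axis=0)
    den = np.sum(imgs * np.cos(2.0 * np.pi * n / N), axis=0)
    return np.arctan2(-num, den)

phi = np.linspace(0.0, np.pi, 64).reshape(1, -1) * np.ones((64, 1))   # known test phase map
imgs = [100.0 + 50.0 * np.cos(phi + 2.0 * np.pi * k / 4) for k in range(4)]
residual = np.angle(np.exp(1j * (phase_from_shifts(imgs) - phi)))     # wrapped phase error
print(f"max |error| = {np.max(np.abs(residual)):.2e} rad")
```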
Field comparison of several commercially available radon detectors.
Field, R W; Kross, B C
1990-01-01
To determine the accuracy and precision of commercially available radon detectors in a field setting, 15 detectors from six companies were exposed to radon and compared to a reference radon level. The detectors from companies that had already passed National Radon Measurement Proficiency Program testing had better precision and accuracy than those detectors awaiting proficiency testing. Charcoal adsorption detectors and diffusion barrier charcoal adsorption detectors performed very well, and the latter detectors displayed excellent time averaging ability. Alternatively, charcoal liquid scintillation detectors exhibited acceptable accuracy but poor precision, and bare alpha registration detectors showed both poor accuracy and precision. The mean radon level reported by the bare alpha registration detectors was 68 percent lower than the radon reference level. PMID:2368851
Estimation of diagnostic test accuracy without full verification: a review of latent class methods
Collins, John; Huynh, Minh
2014-01-01
The performance of a diagnostic test is best evaluated against a reference test that is without error. For many diseases, this is not possible, and an imperfect reference test must be used. However, diagnostic accuracy estimates may be biased if inaccurately verified status is used as the truth. Statistical models have been developed to handle this situation by treating disease as a latent variable. In this paper, we conduct a systematized review of statistical methods using latent class models for estimating test accuracy and disease prevalence in the absence of complete verification. PMID:24910172
Effect of genotyped cows in the reference population on the genomic evaluation of Holstein cattle.
Uemoto, Y; Osawa, T; Saburi, J
2017-03-01
This study evaluated the dependence of reliability and prediction bias on the prediction method, the contribution of including animals (bulls or cows), and the genetic relatedness, when including genotyped cows in the progeny-tested bull reference population. We performed genomic evaluation using a Japanese Holstein population, and assessed the accuracy of genomic enhanced breeding value (GEBV) for three production traits and 13 linear conformation traits. A total of 4564 animals for production traits and 4172 animals for conformation traits were genotyped using Illumina BovineSNP50 array. Single- and multi-step methods were compared for predicting GEBV in genotyped bull-only and genotyped bull-cow reference populations. No large differences in realized reliability and regression coefficient were found between the two reference populations; however, a slight difference was found between the two methods for production traits. The accuracy of GEBV determined by single-step method increased slightly when genotyped cows were included in the bull reference population, but decreased slightly by multi-step method. A validation study was used to evaluate the accuracy of GEBV when 800 additional genotyped bulls (POPbull) or cows (POPcow) were included in the base reference population composed of 2000 genotyped bulls. The realized reliabilities of POPbull were higher than those of POPcow for all traits. For the gain of realized reliability over the base reference population, the average ratios of POPbull gain to POPcow gain for production traits and conformation traits were 2.6 and 7.2, respectively, and the ratios depended on heritabilities of the traits. For regression coefficient, no large differences were found between the results for POPbull and POPcow. Another validation study was performed to investigate the effect of genetic relatedness between cows and bulls in the reference and test populations. The effect of genetic relationship among bulls in the reference population was also assessed. The results showed that it is important to account for relatedness among bulls in the reference population. Our studies indicate that the prediction method, the contribution ratio of including animals, and genetic relatedness could affect the prediction accuracy in genomic evaluation of Holstein cattle, when including genotyped cows in the reference population.
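The SNP-level equivalent of the ridge regression used for GEBV prediction can be sketched in a few lines: shrink marker effects with a ridge penalty and score candidates with their observed or imputed genotypes. The genotypes, effects and shrinkage parameter below are simulated placeholders, and the paper's animal-centric parameterization (the equivalent GBLUP form) is not reproduced.

```python
# Simulated placeholders throughout; SNP-level ridge (equivalent in spirit to GBLUP).
import numpy as np

rng = np.random.default_rng(42)
n_animals, n_snps = 200, 1000

geno = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)   # 0/1/2 allele counts
means = geno.mean(axis=0)
X = geno - means                                                     # centred genotypes
y = X @ rng.normal(0.0, 0.05, n_snps) + rng.normal(0.0, 1.0, n_animals)  # de-regressed EBV stand-in

lam = 500.0                                                          # assumed shrinkage parameter
beta = np.linalg.solve(X.T @ X + lam * np.eye(n_snps), X.T @ y)      # ridge marker effects

cand = rng.integers(0, 3, size=(20, n_snps)).astype(float)           # candidates (observed or imputed)
gebv = (cand - means) @ beta                                         # genomic breeding values
print(gebv[:5])
```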
Localizing Ground Penetrating RADAR: A Step Towards Robust Autonomous Ground Vehicle Localization
2015-05-27
truth reference unit is coupled with a local base station that allows local 2 cm accuracy location measurements. The RT3003 uses a MEMS-based IMU and...of different electromagnetic properties; for example, the interface between soil and pipes, roots, or rocks. However, it is not these discrete...depth is determined by soil losses caused by Joule heating and dipole losses. High conductivity soils, such as those with high moisture and salinity
Absolute metrology for space interferometers
NASA Astrophysics Data System (ADS)
Salvadé, Yves; Courteville, Alain; Dändliker, René
2017-11-01
A crucial issue for space-based interferometers is the laser interferometric metrology system that monitors optical path differences with very high accuracy. Although classical high-resolution laser interferometers using a single wavelength are well developed, this type of incremental interferometer has a severe drawback: any interruption of the interferometer signal results in the loss of the zero reference, which requires a new calibration, starting at zero optical path difference. We propose in this paper an absolute metrology system based on multiple-wavelength interferometry.
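Multiple-wavelength interferometry resolves the ambiguity problem by working with the synthetic wavelength of a pair of single wavelengths, which sets the non-ambiguity range. The two wavelengths below are assumed values used only for illustration.

```python
# Assumed wavelengths for illustration.
lambda_1 = 1.540e-6                                         # m
lambda_2 = 1.541e-6                                         # m
synthetic = lambda_1 * lambda_2 / abs(lambda_1 - lambda_2)  # synthetic wavelength
print(f"synthetic wavelength = {synthetic * 1e3:.2f} mm")   # sets the non-ambiguity range
```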
Neural Network Compensation for Frequency Cross-Talk in Laser Interferometry
NASA Astrophysics Data System (ADS)
Lee, Wooram; Heo, Gunhaeng; You, Kwanho
The heterodyne laser interferometer acts as an ultra-precise measurement apparatus in semiconductor manufacture. However, the periodic nonlinearity caused by frequency cross-talk is an obstacle to achieving high measurement accuracy at the nanometer scale. In order to minimize the nonlinearity error of the heterodyne interferometer, we propose a frequency cross-talk compensation algorithm using an artificial intelligence method. A feedforward neural network trained by back-propagation compensates for the nonlinearity error and is regulated to minimize the difference from the reference signal. Experimental results demonstrate the improved accuracy through comparison with position values from a capacitive displacement sensor.
Innovative use of global navigation satellite systems for flight inspection
NASA Astrophysics Data System (ADS)
Kim, Eui-Ho
The International Civil Aviation Organization (ICAO) mandates flight inspection in every country to provide safety during flight operations. Among many criteria of flight inspection, airborne inspection of Instrument Landing Systems (ILS) is very important because the ILS is the primary landing guidance system worldwide. During flight inspection of the ILS, accuracy in ILS landing guidance is checked by using a Flight Inspection System (FIS). Therefore, a flight inspection system must have high accuracy in its positioning capability to detect any deviation so that accurate guidance of the ILS can be maintained. Currently, there are two Automated Flight Inspection Systems (AFIS). One is called Inertial-based AFIS, and the other one is called Differential GPS-based (DGPS-based) AFIS. The Inertial-based AFIS enables efficient flight inspection procedures, but its drawback is high cost because it requires a navigation-grade Inertial Navigation System (INS). On the other hand, the DGPS-based AFIS has relatively low cost, but flight inspection procedures require landing and setting up a reference receiver. Most countries use either one of the systems based on their own preferences. There are around 1200 ILS in the U.S., and each ILS must be inspected every 6 to 9 months. Therefore, it is important to manage the airborne inspection of the ILS in a very efficient manner. For this reason, the Federal Aviation Administration (FAA) mainly uses the Inertial-based AFIS, which has better efficiency than the DGPS-based AFIS in spite of its high cost. Obviously, the FAA spends tremendous resources on flight inspection. This thesis investigates the value of GPS and the FAA's augmentation to GPS for civil aviation called the Wide Area Augmentation System (or WAAS) for flight inspection. Because standard GPS or WAAS position outputs cannot meet the required accuracy for flight inspection, in this thesis, various algorithms are developed to improve the positioning ability of Flight Inspection Systems (FIS) by using GPS and WAAS in novel manners. The algorithms include Adaptive Carrier Smoothing (ACS), optimizing WAAS accuracy and stability, and reference point-based precise relative positioning for real-time and near-real-time applications. The developed systems are WAAS-aided FIS, WAAS-based FIS, and stand-alone GPS-based FIS. These systems offer both high efficiency and low cost, and they have different advantages over one another in terms of accuracy, integrity, and worldwide availability. The performance of each system is tested with experimental flight test data and shown to have accuracy that is sufficient for flight inspection and superior to the current Inertial-based AFIS.
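For background on carrier smoothing in general (not the thesis's Adaptive Carrier Smoothing algorithm), the sketch below implements the standard Hatch filter, which blends noisy code pseudoranges with precise time-differenced carrier phase. The ranges and noise levels are simulated.

```python
# Standard Hatch filter sketch (not the thesis's ACS); ranges and noise are simulated.
import numpy as np

def hatch_filter(code, carrier, window=100):
    """Smooth code pseudoranges (m) with time-differenced carrier phase (m)."""
    smoothed = [code[0]]
    for k in range(1, len(code)):
        m = min(k + 1, window)
        predicted = smoothed[-1] + (carrier[k] - carrier[k - 1])   # carry forward with carrier delta
        smoothed.append(code[k] / m + predicted * (m - 1) / m)
    return np.array(smoothed)

rng = np.random.default_rng(3)
truth = np.linspace(2.0e7, 2.0e7 + 500.0, 300)              # slowly changing true range (m)
code = truth + rng.normal(0.0, 1.5, truth.size)             # noisy code pseudorange
carrier = truth + 7.3 + rng.normal(0.0, 0.01, truth.size)   # precise but ambiguous carrier range
smoothed = hatch_filter(code, carrier)
print(f"code noise {np.std(code - truth):.2f} m -> smoothed {np.std(smoothed - truth):.2f} m")
```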
Validating long-term satellite-derived disturbance products: the case of burned areas
NASA Astrophysics Data System (ADS)
Boschetti, L.; Roy, D. P.
2015-12-01
The potential research, policy and management applications of satellite products place a high priority on providing statements about their accuracy. A number of NASA, ESA and EU funded global and continental burned area products have been developed using coarse spatial resolution satellite data, and have the potential to become part of a long-term fire Climate Data Record. These products have usually been validated by comparison with reference burned area maps derived by visual interpretation of Landsat or similar spatial resolution data selected on an ad hoc basis. More optimally, a design-based validation method should be adopted that is characterized by the selection of reference data via a probability sampling that can subsequently be used to compute accuracy metrics, taking into account the sampling probability. Design-based techniques have been used for annual land cover and land cover change product validation, but have not been widely used for burned area products, or for the validation of global products that are highly variable in time and space (e.g. snow, floods or other non-permanent phenomena). This has been due to the challenge of designing an appropriate sampling strategy, and to the cost of collecting independent reference data. We propose a tri-dimensional sampling grid that allows for probability sampling of Landsat data in time and in space. To sample the globe in the spatial domain with non-overlapping sampling units, the Thiessen Scene Area (TSA) tessellation of the Landsat WRS path/rows is used. The TSA grid is then combined with the 16-day Landsat acquisition calendar to provide tri-dimensional elements (voxels). This allows the implementation of a sampling design where not only the location but also the time interval of the reference data is explicitly drawn by probability sampling. The proposed sampling design is a stratified random sampling, with two-level stratification of the voxels based on biomes and fire activity (Figure 1). The novel validation approach, used for the validation of the MODIS and forthcoming VIIRS global burned area products, is a general one, and could be used for the validation of other global products that are highly variable in space and time and is required to assess the accuracy of climate records. The approach is demonstrated using a 1 year dataset of MODIS fire products.
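A toy version of the two-level stratified draw of space-time voxels described above is sketched below; the number of Thiessen Scene Areas, the strata labels and the per-stratum sample size are placeholders, not the actual design parameters.

```python
# Toy stratified draw; strata labels and sizes are placeholders.
import random
random.seed(7)

biomes = ["boreal", "temperate", "tropical", "arid"]
activity = ["low fire", "high fire"]

# Space-time voxels: Thiessen Scene Areas crossed with 16-day periods, each assigned to a stratum.
voxels = [(tsa, period, random.choice(biomes), random.choice(activity))
          for tsa in range(500) for period in range(23)]

n_per_stratum = 5
sample = []
for b in biomes:
    for a in activity:
        stratum = [v for v in voxels if v[2] == b and v[3] == a]
        sample.extend(random.sample(stratum, n_per_stratum))     # independent draw per stratum
print(f"{len(sample)} voxels selected for reference data interpretation")
```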
2013-01-01
Schroder et al., 2003). For example, Schroder et al. (2003) suggested that high-frequency events were less salient, which may cause people to forget...collection methods that may increase reporting accuracy. REFERENCES Allen, S., Meinzen-Derr, J., Kautzman, M., Zulu, I., Trask, S., Fideli, U., et al. (2003
NASA Astrophysics Data System (ADS)
Shay, T. M.; Benham, Vincent; Baker, J. T.; Ward, Benjamin; Sanchez, Anthony D.; Culpepper, Mark A.; Pilkington, D.; Spring, Justin; Nelson, Douglas J.; Lu, Chunte A.
2006-08-01
A novel high accuracy all electronic technique for phase locking arrays of optical fibers is demonstrated. We report the first demonstration of the only electronic phase locking technique that doesn't require a reference beam. The measured phase error is λ/20. Excellent phase locking has been demonstrated for fiber amplifier arrays.
2012-01-01
precision and accuracy. For instance, in international time metrology, two-way satellite time and frequency transfer (TWSTFT) (see e.g. [1] and...can act as a time transfer system that is complementary to other high quality systems such as TWSTFT and GPS. REFERENCES [1] J. Levine. "A
Technical editing of research reports in biomedical journals.
Wager, Elizabeth; Middleton, Philippa
2008-10-08
Most journals try to improve their articles by technical editing processes such as proof-reading, editing to conform to 'house styles', grammatical conventions and checking accuracy of cited references. Despite the considerable resources devoted to technical editing, we do not know whether it improves the accessibility of biomedical research findings or the utility of articles. This is an update of a Cochrane methodology review first published in 2003. To assess the effects of technical editing on research reports in peer-reviewed biomedical journals, and to assess the level of accuracy of references to these reports. We searched The Cochrane Library Issue 2, 2007; MEDLINE (last searched July 2006); EMBASE (last searched June 2007) and checked relevant articles for further references. We also searched the Internet and contacted researchers and experts in the field. Prospective or retrospective comparative studies of technical editing processes applied to original research articles in biomedical journals, as well as studies of reference accuracy. Two review authors independently assessed each study against the selection criteria and assessed the methodological quality of each study. One review author extracted the data, and the second review author repeated this. We located 32 studies addressing technical editing and 66 surveys of reference accuracy. Only three of the studies were randomised controlled trials. A 'package' of largely unspecified editorial processes applied between acceptance and publication was associated with improved readability in two studies and improved reporting quality in another two studies, while another study showed mixed results after stricter editorial policies were introduced. More intensive editorial processes were associated with fewer errors in abstracts and references. Providing instructions to authors was associated with improved reporting of ethics requirements in one study and fewer errors in references in two studies, but no difference was seen in the quality of abstracts in one randomised controlled trial. Structuring generally improved the quality of abstracts, but increased their length. The reference accuracy studies showed a median citation error rate of 38% and a median quotation error rate of 20%. Surprisingly few studies have evaluated the effects of technical editing rigorously. However there is some evidence that the 'package' of technical editing used by biomedical journals does improve papers. A substantial number of references in biomedical articles are cited or quoted inaccurately.
Manikandan, A.; Biplab, Sarkar; David, Perianayagam A.; Holla, R.; Vivek, T. R.; Sujatha, N.
2011-01-01
For high dose rate (HDR) brachytherapy, independent treatment verification is needed to ensure that the treatment is performed as per prescription. This study demonstrates dosimetric quality assurance of the HDR brachytherapy using a commercially available two-dimensional ion chamber array called IMatriXX, which has a detector separation of 0.7619 cm. The reference isodose length, step size, and source dwell positional accuracy were verified. A total of 24 dwell positions, which were verified for positional accuracy gave a total error (systematic and random) of –0.45 mm, with a standard deviation of 1.01 mm and maximum error of 1.8 mm. Using a step size of 5 mm, reference isodose length (the length of 100% isodose line) was verified for single and multiple catheters of same and different source loadings. An error ≤1 mm was measured in 57% of tests analyzed. Step size verification for 2, 3, 4, and 5 cm was performed and 70% of the step size errors were below 1 mm, with maximum of 1.2 mm. The step size ≤1 cm could not be verified by the IMatriXX as it could not resolve the peaks in dose profile. PMID:21897562
Mugasa, Claire M; Adams, Emily R; Boer, Kimberly R; Dyserinck, Heleen C; Büscher, Philippe; Schallig, Henk D H F; Leeflang, Mariska M G
2012-01-01
A range of molecular amplification techniques have been developed for the diagnosis of Human African Trypanosomiasis (HAT); however, careful evaluation of these tests must precede implementation to ensure their high clinical accuracy. Here, we investigated the diagnostic accuracy of molecular amplification tests for HAT, the quality of articles and reasons for variation in accuracy. Data from studies assessing diagnostic molecular amplification tests were extracted and pooled to calculate accuracy. Articles were included if they reported sensitivity and specificity or data whereby values could be calculated. Study quality was assessed using QUADAS and selected studies were analysed using the bivariate random effects model. 16 articles evaluating molecular amplification tests fulfilled the inclusion criteria: PCR (n = 12), NASBA (n = 2), LAMP (n = 1) and a study comparing PCR and NASBA (n = 1). Fourteen articles, including 19 different studies were included in the meta-analysis. Summary sensitivity for PCR on blood was 99.0% (95% CI 92.8 to 99.9) and the specificity was 97.7% (95% CI 93.0 to 99.3). Differences in study design and readout method did not significantly change estimates although use of satellite DNA as a target significantly lowers specificity. Sensitivity and specificity of PCR on CSF for staging varied from 87.6% to 100%, and 55.6% to 82.9% respectively. Here, PCR seems to have sufficient accuracy to replace microscopy where facilities allow, although this conclusion is based on multiple reference standards and a patient population that was not always representative. Future studies should, therefore, include patients for which PCR may become the test of choice and consider well designed diagnostic accuracy studies to provide extra evidence on the value of PCR in practice. Another use of PCR for control of disease could be to screen samples collected from rural areas and test in reference laboratories, to spot epidemics quickly and direct resources appropriately.
Developmental Changes in Cross-Situational Word Learning: The Inverse Effect of Initial Accuracy
ERIC Educational Resources Information Center
Fitneva, Stanka A.; Christiansen, Morten H.
2017-01-01
Intuitively, the accuracy of initial word-referent mappings should be positively correlated with the outcome of learning. Yet recent evidence suggests an inverse effect of initial accuracy in adults, whereby greater accuracy of initial mappings is associated with poorer outcomes in a cross-situational learning task. Here, we examine the impact of…
Validation of the MODIS Collection 6 MCD64 Global Burned Area Product
NASA Astrophysics Data System (ADS)
Boschetti, L.; Roy, D. P.; Giglio, L.; Stehman, S. V.; Humber, M. L.; Sathyachandran, S. K.; Zubkova, M.; Melchiorre, A.; Huang, H.; Huo, L. Z.
2017-12-01
The research, policy and management applications of satellite products place a high priority on rigorously assessing their accuracy. A number of NASA, ESA and EU funded global and continental burned area products have been developed using coarse spatial resolution satellite data, and have the potential to become part of a long-term fire Essential Climate Variable. These products have usually been validated by comparison with reference burned area maps derived by visual interpretation of Landsat or similar spatial resolution data selected on an ad hoc basis. More optimally, a design-based validation method should be adopted, characterized by the selection of reference data via probability sampling. Design-based techniques have been used for annual land cover and land cover change product validation, but have not been widely used for burned area products, or for other products that are highly variable in time and space (e.g. snow, floods, other non-permanent phenomena). This has been due to the challenge of designing an appropriate sampling strategy, and to the cost and limited availability of independent reference data. This paper describes the validation procedure adopted for the latest Collection 6 version of the MODIS Global Burned Area product (MCD64, Giglio et al, 2009). We used a tri-dimensional sampling grid that allows for probability sampling of Landsat data in time and in space (Boschetti et al, 2016). To sample the globe in the spatial domain with non-overlapping sampling units, the Thiessen Scene Area (TSA) tessellation of the Landsat WRS path/rows is used. The TSA grid is then combined with the 16-day Landsat acquisition calendar to provide tri-dimensional elements (voxels). This allows the implementation of a sampling design where not only the location but also the time interval of the reference data is explicitly drawn through stratified random sampling. The novel sampling approach was used for the selection of a reference dataset consisting of 700 Landsat 8 image pairs, interpreted according to the CEOS Burned Area Validation Protocol (Boschetti et al., 2009). Standard quantitative burned area product accuracy measures that are important for different types of fire users (Boschetti et al, 2016, Roy and Boschetti, 2009, Boschetti et al, 2004) are computed to characterize the accuracy of the MCD64 product.
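Typical burned-area agreement measures computed from the resulting 2x2 error matrix (areas of agreement and disagreement between product and reference maps) include commission and omission error, the Dice coefficient and relative bias. The sketch below uses invented area totals and ignores the stratified estimation weights that a full design-based analysis would apply.

```python
# Invented area totals; stratified estimation weights are ignored here.
def burned_area_accuracy(tp, fp, fn):
    """Agreement measures from a 2x2 error matrix expressed as areas (e.g. km^2)."""
    commission = fp / (tp + fp)          # mapped burned, unburned in the reference
    omission = fn / (tp + fn)            # burned in the reference, missed by the product
    dice = 2 * tp / (2 * tp + fp + fn)   # overlap of mapped and reference burned area
    rel_bias = (fp - fn) / (tp + fn)     # net over-/under-estimation of burned area
    return commission, omission, dice, rel_bias

ce, oe, dc, rb = burned_area_accuracy(tp=820.0, fp=210.0, fn=330.0)
print(f"commission {ce:.2f}, omission {oe:.2f}, Dice {dc:.2f}, relative bias {rb:+.2f}")
```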
Storey, Helen L.; Huang, Ying; Crudder, Chris; Golden, Allison; de los Santos, Tala; Hawkins, Kenneth
2015-01-01
Novel typhoid diagnostics currently under development have the potential to improve clinical care, surveillance, and the disease burden estimates that support vaccine introduction. Blood culture is most often used as the reference method to evaluate the accuracy of new typhoid tests; however, it is recognized to be an imperfect gold standard. If no single gold standard test exists, use of a composite reference standard (CRS) can improve estimation of diagnostic accuracy. Numerous studies have used a CRS to evaluate new typhoid diagnostics; however, there is no consensus on an appropriate CRS. In order to evaluate existing tests for use as a reference test or inclusion in a CRS, we performed a systematic review of the typhoid literature to include all index/reference test combinations observed. We described the landscape of comparisons performed, showed results of a meta-analysis on the accuracy of the more common combinations, and evaluated sources of variability based on study quality. This wide-ranging meta-analysis suggests that no single test has sufficiently good performance but some existing diagnostics may be useful as part of a CRS. Additionally, based on findings from the meta-analysis and a constructed numerical example demonstrating the use of CRS, we proposed necessary criteria and potential components of a typhoid CRS to guide future recommendations. Agreement and adoption by all investigators of a standardized CRS is requisite, and would improve comparison of new diagnostics across independent studies, leading to the identification of a better reference test and improved confidence in prevalence estimates. PMID:26566275
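The effect of an imperfect or composite reference can be illustrated with a small simulation: an "any positive" rule over two imperfect component tests serves as the CRS, and the apparent accuracy of an index test against the CRS is compared with its accuracy against the simulated true status. All prevalence, sensitivity and specificity values below are invented.

```python
# Small simulation; all prevalence, sensitivity and specificity values are invented.
import numpy as np

rng = np.random.default_rng(11)
disease = rng.random(500) < 0.20                    # simulated "true" typhoid status

def imperfect_test(status, sens, spec):
    """Simulate a binary test with the given sensitivity and specificity."""
    u = rng.random(status.size)
    return np.where(status, u < sens, u > spec)

blood_culture = imperfect_test(disease, sens=0.60, spec=0.99)
serology = imperfect_test(disease, sens=0.75, spec=0.90)
index_test = imperfect_test(disease, sens=0.85, spec=0.95)

crs = blood_culture | serology                      # simple "any positive" composite rule

sens_vs_crs = (index_test & crs).sum() / crs.sum()
sens_vs_truth = (index_test & disease).sum() / disease.sum()
print(f"index-test sensitivity vs CRS {sens_vs_crs:.2f}, vs simulated truth {sens_vs_truth:.2f}")
```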
Evaluation of a Home Biomonitoring Autonomous Mobile Robot.
Dorronzoro Zubiete, Enrique; Nakahata, Keigo; Imamoglu, Nevrez; Sekine, Masashi; Sun, Guanghao; Gomez, Isabel; Yu, Wenwei
2016-01-01
Increasing population age demands more services in the healthcare domain. It has been shown that mobile robots could be a potential solution to home biomonitoring for the elderly. Through our previous studies, a mobile robot system that is able to track a subject and identify his daily living activities has been developed. However, the system has not been tested in any home living scenarios. In this study, we conducted a series of experiments to investigate the accuracy of activity recognition of the mobile robot in a home living scenario. The daily activities tested in the evaluation experiment include watching TV and sleeping. A dataset recorded by a distributed distance-measuring sensor network was used as a reference to the activity recognition results. It was shown that the accuracy is not consistent for all the activities; that is, the mobile robot could achieve a high success rate in some activities but a poor success rate in others. It was found that the observation position of the mobile robot and the subject's surroundings have a high impact on the accuracy of the activity recognition, due to the variability of the home living daily activities and their transitional process. The possibility of improving recognition accuracy was also demonstrated.
NIST TXR Validation of S-HIS radiances and a UW-SSEC Blackbody
NASA Astrophysics Data System (ADS)
Taylor, J. K.; O'Connell, J.; Rice, J. P.; Revercomb, H. E.; Best, F. A.; Tobin, D. C.; Knuteson, R. O.; Adler, D. P.; Ciganovich, N. C.; Dutcher, S. T.; Laporte, D. D.; Ellington, S. D.; Werner, M. W.; Garcia, R. K.
2007-12-01
The ability to accurately validate infrared spectral radiances measured from space by direct comparison with airborne spectrometer radiances was first demonstrated using the Scanning High-resolution Interferometer Sounder (S-HIS) aircraft instrument flown under the AIRS instrument on the NASA Aqua spacecraft in 2002, with subsequent successful comparisons in 2004 and 2006. The comparisons span a range of conditions, including arctic and tropical atmospheres, daytime and nighttime, and ocean and land surfaces. Similar comprehensive and successful comparisons have also been conducted with S-HIS for the MODIS sensors, the Tropospheric Emission Spectrometer (TES), and most recently the MetOp Infrared Atmospheric Sounding Interferometer (IASI). These comparisons are part of a larger picture that already shows great progress toward transforming our ability to make, and verify, highly accurate spectral radiance observations from space. A key challenge, especially for climate, is to carefully define the absolute accuracy of satellite radiances. Our vision of the near-term future of spectrally resolved infrared radiance observation includes a new space-borne mission that provides benchmark observations of the emission spectrum for climate. This concept, referred to as the CLimate Absolute Radiance and REfractivity Observatory (CLARREO) in the recent NRC Decadal Survey, provides more complete spectral and time-of-day coverage and would fly basic physical standards to eliminate the need to assume on-board reference stability. Therefore, the spectral radiances from this mission will also serve as benchmarks to propagate a highly accurate calibration to other space-borne IR instruments. For the current approach of calibrating infrared flight sensors, in which thermal vacuum tests are conducted before launch and stability is assumed after launch, in-flight calibration validation is essential for highly accurate applications. At present, airborne observations provide the only source of direct radiance validation with resulting traceable uncertainties approaching the level required for remote sensing and climate applications (0.1 K, 3-sigma). For the calibration validation process to be accurate, repeatable, and meaningful, the reference instrument must be extremely well characterized and understood, carefully maintained, and accurately calibrated, with the calibration accuracy of the reference instrument tied to absolute standards. Tests of the S-HIS absolute calibration have been conducted using the NIST transfer radiometer (TXR). The TXR provides a more direct connection to the blackbody reference sources maintained by NIST than the normal traceability of blackbody temperature scales and paint emissivity measurements. Two basic tests were conducted: (1) comparison of radiances measured by the S-HIS to those from the TXR, and (2) measuring the reflectivity of a UW-SSEC blackbody by using the TXR as a stable detector. Preliminary results from both tests are very promising for confirming and refining the expected absolute accuracy of the S-HIS.
Development of Argon Isotope Reference Standards for the U.S. Geological Survey
Miiller, Archie P.
2006-01-01
The comparison of physical ages of geological materials measured by laboratories engaged in geochronological studies has been limited by the accuracy of mineral standards or monitors for which reported ages have differed by as much as 2 %. In order to address this problem, the U.S. Geological Survey is planning to calibrate the conventional 40Ar/40K age of a new preparation of an international hornblende standard labeled MMhb-2. The 40K concentration in MMhb-2 has already been determined by the Analytical Chemistry Division at NIST with an uncertainty of 0.2 %. The 40Ar concentration will be measured by the USGS using the argon isotope reference standards that were recently developed by NIST and are described in this paper. The isotope standards were constructed in the form of pipette/reservoir systems and calibrated by gas expansion techniques to deliver small high-precision aliquots of high-purity argon. Two of the pipette systems will deliver aliquots of 38Ar having initial molar quantities of 1.567 × 10−10 moles and 2.313 × 10−10 moles with expanded (k = 2) uncertainties of 0.058 % and 0.054 %, respectively. Three other pipette systems will deliver aliquots (nominally 4 × 10−10 moles) of 40Ar:36Ar artificial mixtures with similar accuracy and with molar ratios of 0.9974 ± 0.06 %, 29.69 ± 0.06 %, and 285.7 ± 0.08 % (k = 2). These isotope reference standards will enable the USGS to measure the 40Ar concentration in MMhb-2 with an expanded uncertainty of ≈ 0.1 %. In the process of these measurements, the USGS will re-determine the isotopic composition of atmospheric Ar and calculate a new value for its atomic weight. Upon completion of the USGS calibrations, the MMhb-2 mineral standard will be certified by NIST for its K and Ar concentrations and distributed as a Standard Reference Material (SRM). The new SRM and the NIST-calibrated transportable pipette systems have the potential for dramatically improving the accuracy of interlaboratory calibrations and thereby the measured ages of geological materials, by as much as a factor of ten. PMID:27274937
Spatial Patterns of NLCD Land Cover Change Thematic Accuracy (2001 - 2011)
Research on spatial non-stationarity of land cover classification accuracy has been ongoing for over two decades. We extend the understanding of thematic map accuracy spatial patterns by: 1) quantifying spatial patterns of map-reference agreement for class-specific land cover c...
Automatic Generation of High Quality DSM Based on IRS-P5 Cartosat-1 Stereo Data
NASA Astrophysics Data System (ADS)
d'Angelo, Pablo; Uttenthaler, Andreas; Carl, Sebastian; Barner, Frithjof; Reinartz, Peter
2010-12-01
IRS-P5 Cartosat-1 high resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on IRS-P5 Cartosat-1 imagery is presented, with an emphasis on automated processing and product quality. The proposed system processes IRS-P5 level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The described method uses an RPC correction based on DSM alignment instead of using reference images with a lower lateral accuracy, which results in improved geolocation of the DSMs and orthoimages. Following RPC correction, highly detailed DSMs with 5 m grid spacing are derived using Semiglobal Matching. The proposed method is part of an operational Cartosat-1 processor for the generation of a high resolution DSM. Evaluation of 18 scenes against independent ground truth measurements indicates a mean lateral error (CE90) of 6.7 meters and a mean vertical accuracy (LE90) of 5.1 meters.
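CE90 and LE90 are the 90th-percentile horizontal and vertical error magnitudes against ground truth. One common way to estimate them directly from checkpoint residuals is the following small sketch (the checkpoint differences are made up; percentile-based estimation is only one of several accepted conventions):

    import numpy as np

    def ce90(dx, dy):
        """Circular error at 90%: 90th percentile of horizontal error magnitudes."""
        radial = np.hypot(np.asarray(dx, float), np.asarray(dy, float))
        return np.percentile(radial, 90)

    def le90(dz):
        """Linear error at 90%: 90th percentile of absolute vertical errors."""
        return np.percentile(np.abs(np.asarray(dz, float)), 90)

    # Hypothetical DSM-minus-checkpoint differences (metres) at ground truth points.
    dx = [1.2, -3.5, 0.8, 2.1, -1.7]
    dy = [-2.0, 1.1, 4.2, -0.5, 3.3]
    dz = [0.9, -2.4, 1.6, -3.1, 0.4]

    print("CE90 = %.1f m, LE90 = %.1f m" % (ce90(dx, dy), le90(dz)))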
NASA Astrophysics Data System (ADS)
Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas
2016-09-01
A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.
Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.
Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael
2015-08-01
In this paper, we present and evaluate an automatic unsupervised segmentation method, hierarchical segmentation approach (HSA)-Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. It is based on a HSA and BAMS for segmenting the tissues from multi-modal magnetic resonance (MR) head images. The evaluation of the proposed method was done both directly in terms of segmentation accuracy and indirectly in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method, brain extraction tool (BET)-FMRIB's automated segmentation tool (FAST), and four variants of the HSA, using both synthetic data and real data from ten subjects. The synthetic data include multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and Hausdorff distance were used to measure the segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal MR data with 3% noise and synthetic EEG (generated for a prescribed source). The source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and variants of the HSA. They also show that it leads to higher source localization accuracy than the commonly used reference method and suggest that it has potential as a surrogate for expert manual segmentation for the EEG source localization problem.
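For reference, the two segmentation agreement measures named above can be computed on binary label masks as in this small sketch (toy arrays, not the study's data; the Hausdorff distance here is the symmetric form over foreground voxel coordinates):

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def dice_index(a, b):
        """Dice overlap between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    def hausdorff_distance(a, b):
        """Symmetric Hausdorff distance between foreground coordinates of two masks."""
        pa = np.argwhere(a).astype(float)
        pb = np.argwhere(b).astype(float)
        return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

    # Toy 2D "segmentations" of a single tissue class.
    seg = np.zeros((10, 10), dtype=bool); seg[2:7, 2:7] = True
    ref = np.zeros((10, 10), dtype=bool); ref[3:8, 3:8] = True
    print(dice_index(seg, ref), hausdorff_distance(seg, ref))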
Bompastore, Nicholas J; Cisu, Theodore; Holoch, Peter
2018-04-30
To characterize available information about Peyronie disease online and evaluate its readability, quality, accuracy, and respective associations with HONcode certification and website category. The search term "Peyronie disease" was queried on 3 major search engines (Google, Bing, and Yahoo) and the first 50 search results on each search engine were assessed. All websites were categorized as institutional or reference, commercial, charitable, personal or patient support, or alternative medicine, and cross-referenced with the Health on the Net (HON) Foundation. Websites that met the inclusion criteria were analyzed for readability using 3 validated algorithms, for quality using the DISCERN instrument, and for accuracy by a fellowship-trained urologist. On average, online health information about treatment of Peyronie disease is written at or above the 11th grade level, exceeding the current reading guidelines of 6th-8th grade. The mean total DISCERN score for all website categories was 50.44 (standard deviation [SD] 11.94), the upper range of "fair" quality. The mean accuracy score of all online Peyronie treatment information was 2.76 (SD 1.23), corresponding to only 25%-50% accurate information. Both institutional or reference and HONcode-certified websites were of "good" quality (53.44, SD 11.64 and 60.86, SD 8.74, respectively). Institutional or reference websites were 50%-75% accurate (3.13, SD 1.20). Most of the online Peyronie disease treatment information is of mediocre quality and accuracy. The information from institutional or reference websites is of better quality and accuracy, and the information from HONcode-certified websites is of better quality. The mean readability of all websites exceeds the reading ability of most US adults by several grade levels. Copyright © 2018 Elsevier Inc. All rights reserved.
Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation
NASA Astrophysics Data System (ADS)
Zuo, C.; Xiao, X.; Hou, Q.; Li, B.
2018-05-01
WorldView-3, a high-resolution commercial Earth observation satellite launched by DigitalGlobe, provides panchromatic imagery of 0.31 m resolution. Its positioning accuracy is better than 3.5 m CE90 without ground control, which makes it suitable for large-scale topographic mapping. This paper presents a block adjustment for WorldView-3 based on the RPC model that achieves the accuracy required for 1:2000 scale topographic mapping with few control points. On the basis of the stereo orientation result, two image matching algorithms were applied for DSM extraction: LQM and SGM. Finally, the accuracy of the point clouds generated by the two image matching methods was compared against reference data acquired by an airborne laser scanner. The results showed that the RPC adjustment model of WorldView-3 imagery with a small number of GCPs could satisfy the requirements of the Chinese surveying and mapping regulations for 1:2000 scale topographic maps. The point cloud obtained through WorldView-3 stereo image matching also had high elevation accuracy: the RMS elevation error for bare ground areas is 0.45 m, while for buildings the accuracy is close to 1 m.
Ahmad, Meraj; Sinha, Anubhav; Ghosh, Sreya; Kumar, Vikrant; Davila, Sonia; Yajnik, Chittaranjan S; Chandak, Giriraj R
2017-07-27
Imputation is a computational method based on the principle of haplotype sharing that allows enrichment of genome-wide association study datasets. It depends on the haplotype structure of the population and the density of the genotype data. The 1000 Genomes Project led to the generation of imputation reference panels which have been used globally. However, recent studies have shown that population-specific panels provide better enrichment of genome-wide variants. We compared the imputation accuracy using the 1000 Genomes phase 3 reference panel and a panel generated from genome-wide data on 407 individuals from Western India (WIP). The concordance of imputed variants was cross-checked with next-generation re-sequencing data on a subset of genomic regions. Further, using genome-wide data from 1880 individuals, we demonstrate that the WIP works better than the 1000 Genomes phase 3 panel and, when merged with it, significantly improves imputation accuracy throughout the minor allele frequency range. We also show that imputation using only the South Asian component of the 1000 Genomes phase 3 panel works as well as the merged panel, making it a computationally less intensive job. Thus, our study stresses that imputation accuracy using the 1000 Genomes phase 3 panel can be further improved by including population-specific reference panels from South Asia.
Kutz, Alexander; Hausfater, Pierre; Oppert, Michael; Alan, Murat; Grolimund, Eva; Gast, Claire; Alonso, Christine; Wissmann, Christoph; Kuehn, Christian; Bernard, Maguy; Huber, Andreas; Mueller, Beat; Schuetz, Philipp
2016-04-01
Procalcitonin (PCT) is increasingly being used for the diagnostic and prognostic work-up of patients with suspected infections in the emergency department (ED). Recently, B·R·A·H·M·S PCT direct, the first highly sensitive point-of-care test (POCT), has been developed for fast PCT measurement on capillary or venous blood samples. This is a prospective, international comparison study conducted in three European EDs. Consecutive patients with suspicion of bacterial infection were included. Duplicate determination of PCT was performed in capillary (fingertip) and venous whole blood (EDTA), and compared to the reference method. The diagnostic accuracy was evaluated by correlation and concordance analyses. Three hundred and three patients were included over a 6-month period (60.4% male, median age 65.2 years). The correlation between capillary or venous whole blood and the reference method was excellent: r2=0.96 and 0.97, sensitivity 88.1% and 93.0%, specificity 96.5% and 96.8%, concordance 93% and 95%, respectively, at a 0.25 μg/L threshold. No significant bias was observed (-0.04 and -0.02 for capillary and venous whole blood), although there were 6.8% and 5.1% outliers, respectively. B·R·A·H·M·S PCT direct had a shorter time to result as compared to the reference method (25 vs. 144 min, difference 119 min, 95% CI 110-134 min, p<0.0001). This study found a high diagnostic accuracy and a faster time to result of B·R·A·H·M·S PCT direct in the ED setting, allowing a shorter time to therapy and more widespread use of PCT.
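The threshold-based agreement statistics quoted above (sensitivity, specificity, concordance at a 0.25 μg/L cut-off) are straightforward to compute from paired POCT and reference values; a minimal sketch with made-up measurements follows.

    import numpy as np

    def threshold_agreement(poct, reference, cutoff=0.25):
        """Sensitivity, specificity, and overall concordance of a point-of-care
        test against a reference method, both dichotomized at the same cut-off."""
        poct = np.asarray(poct, float) >= cutoff
        ref = np.asarray(reference, float) >= cutoff
        tp = np.sum(poct & ref)
        tn = np.sum(~poct & ~ref)
        fp = np.sum(poct & ~ref)
        fn = np.sum(~poct & ref)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        concordance = (tp + tn) / len(ref)
        return sensitivity, specificity, concordance

    # Hypothetical paired PCT measurements (ug/L), POCT vs. reference method.
    capillary = [0.05, 0.31, 0.10, 2.40, 0.22, 0.60]
    reference = [0.06, 0.28, 0.12, 2.10, 0.30, 0.55]
    print(threshold_agreement(capillary, reference))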
Calibration of GPS based high accuracy speed meter for vehicles
NASA Astrophysics Data System (ADS)
Bai, Yin; Sun, Qiao; Du, Lei; Yu, Mei; Bai, Jie
2015-02-01
The GPS-based high-accuracy speed meter for vehicles is a special type of GPS speed meter which uses Doppler demodulation of GPS signals to calculate the speed of a moving target. It is increasingly used as reference equipment in the field of traffic speed measurement, but acknowledged standard calibration methods are still lacking. To address this problem, this paper presents the set-ups of simulated calibration, field test signal replay calibration, and an in-field comparison with an optical-sensor-based non-contact speed meter. All the experiments were carried out at particular speed values in the range of 40-180 km/h with the same GPS speed meter. The speed measurement errors of simulated calibration fall within ±0.1 km/h or ±0.1%, with uncertainties smaller than 0.02% (k=2). The errors of replay calibration fall within ±0.1%, with uncertainties smaller than 0.10% (k=2). The calibration results confirm the effectiveness of the two methods. The relative deviations of the GPS speed meter from the optical-sensor-based non-contact speed meter fall within ±0.3%, which validates the use of GPS speed meters as reference instruments. The results of this research can provide a technical basis for the establishment of internationally standard calibration methods for GPS speed meters, and thus ensure their legal status as reference equipment in the field of traffic speed metrology.
Cantiello, Francesco; Gangemi, Vincenzo; Cascini, Giuseppe Lucio; Calabria, Ferdinando; Moschini, Marco; Ferro, Matteo; Musi, Gennaro; Butticè, Salvatore; Salonia, Andrea; Briganti, Alberto; Damiano, Rocco
2017-08-01
To assess the diagnostic accuracy of 64Cu prostate-specific membrane antigen (64Cu-PSMA) positron emission tomography/computed tomography (PET/CT) in the primary lymph node (LN) staging of a selected cohort of intermediate- to high-risk prostate cancer (PCa) patients. An observational prospective study was performed in 23 patients with intermediate- to high-risk PCa, who underwent 64Cu-PSMA PET/CT for local and lymph nodal staging before laparoscopic radical prostatectomy with an extended pelvic LN dissection. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for LN status of 64Cu-PSMA PET/CT were calculated using the final pathological findings as reference. Furthermore, we evaluated the correlation of intraprostatic tumor extent and grading with 64Cu-PSMA intraprostatic distribution. Pathological analysis of LN involvement in 413 LNs harvested from our study cohort identified a total of 22 (5%) LN metastases in 8 (35%) of the 23 PCa patients. Imaging-based LN staging in a per-patient analysis showed that 64Cu-PSMA PET/CT was positive in 7 of 8 LN-positive patients, with a sensitivity of 87.5%, specificity of 100%, PPV of 100%, and NPV of 93.7%, considering the maximum standardized uptake value (SUVmax) at 4 hours as our reference. The receiver operating characteristic curve was characterized by an area under the curve of 0.938. A significant positive association was observed between SUVmax at 4 hours and Gleason score, index, and cumulative tumor volume. In our intermediate- to high-risk PCa study cohort, we showed the high diagnostic accuracy of 64Cu-PSMA PET/CT for primary LN staging before radical prostatectomy. Copyright © 2017 Elsevier Inc. All rights reserved.
Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki
2013-06-17
We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as "our previous method") using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions, including poor visibility conditions in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth referred to as "our new method"). Our new method detects vehicles based on the thermal energy reflection of tires. We have carried out experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, and the number of false detections is 52 in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to automatic traffic flow monitoring, and show the effectiveness of our proposal.
Laser-based water equilibration method for δ18O determination of water samples
NASA Astrophysics Data System (ADS)
Mandic, Magda; Smajgl, Danijela; Stoebener, Nils
2017-04-01
Determination of δ18O by the water equilibration method using mass spectrometers equipped with an equilibration unit or a Gas Bench has been established for many years. Now, the development of laser spectrometers extends the methods and possibilities for applying different technologies in the laboratory and also in the field. The Thermo Scientific™ Delta Ray™ Isotope Ratio Infrared Spectrometer (IRIS) analyzer with the Universal Reference Interface (URI) Connect and Teledyne Cetac ASX-7100 offers high precision and sample throughput. It employs optical spectroscopy for continuous measurement of isotope ratio values and concentration of carbon dioxide in ambient air, and also for analysis of discrete samples from vials, syringes, bags, or other user-provided sample containers. Test measurements and confirmation of the precision and accuracy of δ18O determination in water samples were carried out in the Thermo Fisher application laboratory with three lab standards, namely ANST, Ocean II and HBW. All laboratory standards were previously calibrated against the international reference materials VSMOW2 and SLAP2 to assure the accuracy of the isotopic values of the water. With the method presented in this work, the achieved repeatability and accuracy are 0.16‰ and 0.71‰, respectively, which fulfills the requirements of the regulatory method for wine and must after equilibration with CO2.
A structural SVM approach for reference parsing.
Zhang, Xiaoli; Zou, Jie; Le, Daniel X; Thoma, George R
2011-06-09
Automated extraction of bibliographic data, such as article titles, author names, abstracts, and references, is essential to the affordable creation of large citation databases. References, typically appearing at the end of journal articles, can also provide valuable information for extracting other bibliographic data. Therefore, parsing individual references to extract author, title, journal, year, etc. is sometimes a necessary preprocessing step in building citation-indexing systems. The regular structure of references enables us to consider reference parsing a sequence learning problem and to study structural Support Vector Machines (structural SVM), a newly developed structured learning algorithm, for parsing references. In this study, we implemented structural SVM and used two types of contextual features to compare structural SVM with conventional SVM. Both methods achieve above 98% token classification accuracy and above 95% overall chunk-level accuracy for reference parsing. We also compared SVM and structural SVM to Conditional Random Fields (CRF). The experimental results show that structural SVM and CRF achieve similar accuracies at the token and chunk levels. When only basic observation features are used for each token, structural SVM achieves higher performance than SVM since it utilizes the contextual label features. However, when the contextual observation features from neighboring tokens are combined, SVM performance improves greatly, and is close to that of structural SVM after adding the second-order contextual observation features. The comparison of these two methods with CRF using the same set of binary features shows that both structural SVM and CRF perform better than SVM, indicating their stronger sequence learning ability in reference parsing.
NASA Astrophysics Data System (ADS)
Chen, Y.; Luo, M.; Xu, L.; Zhou, X.; Ren, J.; Zhou, J.
2018-04-01
The RF method based on grid-search parameter optimization could achieve a classification accuracy of 88.16% in the classification of images with multiple feature variables. This classification accuracy was higher than that of SVM and ANN under the same feature variables. In terms of efficiency, the RF classification method also performs better than SVM and ANN and is more capable of handling multidimensional feature variables. The RF method combined with an object-based analysis approach could improve the classification accuracy further. The multiresolution segmentation approach, with ESP-based scale parameter optimization, was used to obtain six scales for image segmentation; when the segmentation scale was 49, the classification accuracy reached the highest value of 89.58%. The classification accuracy of object-based RF classification was 1.42% higher than that of pixel-based classification (88.16%), so the classification accuracy was further improved. Therefore, the RF classification method combined with an object-based analysis approach could achieve relatively high accuracy in the classification and extraction of land use information for industrial and mining reclamation areas. Moreover, the interpretation of remotely sensed imagery using the proposed method could provide technical support and a theoretical reference for remotely sensed monitoring of land reclamation.
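A compact sketch of random forest classification with grid-search parameter optimization, in the spirit of the workflow described above, using scikit-learn. The feature matrix, class labels, and grid values are placeholders, not the study's feature variables or tuned settings.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.metrics import accuracy_score

    # Placeholder data: rows are image objects or pixels, columns are spectral
    # and textural feature variables; y holds four hypothetical land-use classes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))
    y = rng.integers(0, 4, size=500)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    param_grid = {
        "n_estimators": [100, 300, 500],
        "max_features": ["sqrt", 0.5],
        "min_samples_leaf": [1, 3, 5],
    }
    search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
    search.fit(X_train, y_train)

    y_pred = search.best_estimator_.predict(X_test)
    print(search.best_params_, accuracy_score(y_test, y_pred))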
Improved imputation of low-frequency and rare variants using the UK10K haplotype reference panel.
Huang, Jie; Howie, Bryan; McCarthy, Shane; Memari, Yasin; Walter, Klaudia; Min, Josine L; Danecek, Petr; Malerba, Giovanni; Trabetti, Elisabetta; Zheng, Hou-Feng; Gambaro, Giovanni; Richards, J Brent; Durbin, Richard; Timpson, Nicholas J; Marchini, Jonathan; Soranzo, Nicole
2015-09-14
Imputing genotypes from reference panels created by whole-genome sequencing (WGS) provides a cost-effective strategy for augmenting the single-nucleotide polymorphism (SNP) content of genome-wide arrays. The UK10K Cohorts project has generated a data set of 3,781 whole genomes sequenced at low depth (average 7x), aiming to exhaustively characterize genetic variation down to 0.1% minor allele frequency in the British population. Here we demonstrate the value of this resource for improving imputation accuracy at rare and low-frequency variants in both a UK and an Italian population. We show that large increases in imputation accuracy can be achieved by re-phasing WGS reference panels after initial genotype calling. We also present a method for combining WGS panels to improve variant coverage and downstream imputation accuracy, which we illustrate by integrating 7,562 WGS haplotypes from the UK10K project with 2,184 haplotypes from the 1000 Genomes Project. Finally, we introduce a novel approximation that maintains speed without sacrificing imputation accuracy for rare variants.
The accuracy of prehospital diagnosis of acute cerebrovascular accidents: an observational study.
Karliński, Michał; Gluszkiewicz, Marcin; Członkowska, Anna
2015-06-19
Time to treatment is the key factor in stroke care. Although the initial medical assessment is usually made by a non-neurologist or a paramedic, it should ensure correct identification of all acute cerebrovascular accidents (CVAs). Our aim was to evaluate the accuracy of the physician-made prehospital diagnosis of acute CVA in patients referred directly to the neurological emergency department (ED), and to identify conditions mimicking CVAs. This observational study included consecutive patients referred to our neurological ED by emergency physicians with a suspicion of CVA (acute stroke, transient ischemic attack (TIA) or a syndrome-based diagnosis) during 12 months. Referrals were considered correct if the prehospital diagnosis of CVA proved to be stroke or TIA. The prehospital diagnosis of CVA was correct in 360 of 570 cases. Its positive predictive value ranged from 100% for the syndrome-based diagnosis, through 70% for stroke, to 34% for TIA. Misdiagnoses were less frequent among ambulance physicians compared to primary care and outpatient physicians (33% vs. 52%, p < 0.001). The most frequent mimics were vertigo (19%), electrolyte and metabolic disturbances (12%), seizures (11%), cardiovascular disorders (10%), blood hypertension (8%) and brain tumors (5%). Additionally, 6% of all admitted CVA cases were referred with prehospital diagnoses other than CVA. Emergency physicians appear to be sensitive in diagnosing CVAs but their overall accuracy does not seem high. They tend to overuse the diagnosis of TIA. Constant education and adoption of stroke screening scales may be beneficial for emergency care systems based both on physicians and on paramedics.
Optimization of GPS water vapor tomography technique with radiosonde and COSMIC historical data
NASA Astrophysics Data System (ADS)
Ye, Shirong; Xia, Pengfei; Cai, Changsheng
2016-09-01
Near-real-time, high-spatial-resolution knowledge of the atmospheric water vapor distribution is vital in numerical weather prediction. The GPS tomography technique has been proven effective for three-dimensional water vapor reconstruction. In this study, the tomography processing is optimized in several respects with the aid of radiosonde and COSMIC historical data. Firstly, regional tropospheric zenith hydrostatic delay (ZHD) models are improved, and thus the zenith wet delay (ZWD) can be obtained with higher accuracy. Secondly, the regional conversion factor for converting the ZWD to the precipitable water vapor (PWV) is refined. Next, we develop a new method for dividing the tomography grid with an uneven voxel height and a varied water vapor layer top. Finally, we propose a Gaussian exponential vertical interpolation method which better reflects the vertical variation characteristics of water vapor. GPS datasets collected in Hong Kong in February 2014 are employed to evaluate the optimized tomographic method against the conventional method. The radiosonde-derived and COSMIC-derived water vapor densities are utilized as references to evaluate the tomographic results. Using radiosonde products as references, the results obtained from our optimized method indicate that the water vapor density accuracy is improved by 15 and 12% compared to that derived from the conventional method below and above the height of 3.75 km, respectively. Using the COSMIC products as references, the results indicate that the water vapor density accuracy is improved by 15 and 19% below and above 3.75 km, respectively.
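The ZHD/ZWD/PWV chain mentioned above follows a standard form: ZWD = ZTD - ZHD, and PWV = Pi * ZWD, where the dimensionless factor Pi depends on the weighted-mean temperature Tm of the atmosphere. The sketch below uses commonly quoted refractivity constants as an illustration; the paper's refined regional ZHD model and conversion factor are not reproduced here.

    # Representative constants for the ZWD-to-PWV conversion; the exponent 1e8
    # follows from expressing the refractivity constants per hPa.
    RHO_W = 1000.0      # density of liquid water, kg m^-3
    R_V = 461.5         # specific gas constant of water vapour, J kg^-1 K^-1
    K2_PRIME = 22.1     # K hPa^-1
    K3 = 3.739e5        # K^2 hPa^-1

    def conversion_factor(tm_kelvin):
        """Dimensionless factor Pi mapping zenith wet delay to precipitable water vapour."""
        return 1.0e8 / (RHO_W * R_V * (K3 / tm_kelvin + K2_PRIME))

    def pwv_mm(ztd_mm, zhd_mm, tm_kelvin):
        """PWV (mm) from zenith total delay and zenith hydrostatic delay (both in mm)."""
        zwd = ztd_mm - zhd_mm
        return conversion_factor(tm_kelvin) * zwd

    # Illustrative values: ZTD = 2550 mm, ZHD = 2280 mm, Tm = 280 K -> roughly 43 mm PWV.
    print(pwv_mm(2550.0, 2280.0, 280.0))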
Haufe, William M; Wolfson, Tanya; Hooker, Catherine A; Hooker, Jonathan C; Covarrubias, Yesenia; Schlein, Alex N; Hamilton, Gavin; Middleton, Michael S; Angeles, Jorge E; Hernando, Diego; Reeder, Scott B; Schwimmer, Jeffrey B; Sirlin, Claude B
2017-12-01
To assess and compare the accuracy of magnitude-based magnetic resonance imaging (MRI-M) and complex-based MRI (MRI-C) for estimating hepatic proton density fat fraction (PDFF) in children, using MR spectroscopy (MRS) as the reference standard. A secondary aim was to assess the agreement between MRI-M and MRI-C. This was a HIPAA-compliant, retrospective analysis of data collected in children enrolled in prospective, Institutional Review Board (IRB)-approved studies between 2012 and 2014. Informed consent was obtained from 200 children (ages 8-19 years) who subsequently underwent 3T MR exams that included MRI-M, MRI-C, and T1-independent, T2-corrected, single-voxel stimulated echo acquisition mode (STEAM) MRS. Both MRI methods acquired six echoes at low flip angles. T2*-corrected PDFF parametric maps were generated. PDFF values were recorded from regions of interest (ROIs) drawn on the maps in each of the nine Couinaud segments and three ROIs colocalized to the MRS voxel location. Regression analyses assessing agreement with MRS were performed to evaluate the accuracy of each MRI method, and Bland-Altman and intraclass correlation coefficient (ICC) analyses were performed to assess agreement between the MRI methods. MRI-M and MRI-C PDFF were accurate relative to the colocalized MRS reference standard, with regression intercepts of 0.63% and -0.07%, slopes of 0.998 and 0.975, and proportion-of-explained-variance values (R2) of 0.982 and 0.979, respectively. For individual Couinaud segments and for the whole liver averages, Bland-Altman biases between MRI-M and MRI-C were small (ranging from 0.04 to 1.11%) and ICCs were high (≥0.978). Both MRI-M and MRI-C accurately estimated hepatic PDFF in children, and high intermethod agreement was observed. Level of Evidence: 1. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2017;46:1641-1647. © 2017 International Society for Magnetic Resonance in Medicine.
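A brief sketch of the two intermethod agreement analyses mentioned above (Bland-Altman bias with limits of agreement, and a two-way ICC) on made-up paired PDFF values. The exact ICC formulation used in the study is not specified here, so the consistency variant ICC(3,1) below is an assumption chosen for illustration.

    import numpy as np

    def bland_altman(a, b):
        """Bland-Altman bias and 95% limits of agreement between paired measurements."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        diff = a - b
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        return bias, bias - loa, bias + loa

    def icc_consistency(a, b):
        """Two-way, single-measure, consistency ICC (ICC(3,1)) for two methods."""
        x = np.column_stack([a, b]).astype(float)    # rows: subjects, cols: methods
        n, k = x.shape
        grand = x.mean()
        ms_subjects = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
        ss_raters = n * np.sum((x.mean(axis=0) - grand) ** 2)
        ss_error = np.sum((x - grand) ** 2) - (n - 1) * ms_subjects - ss_raters
        ms_error = ss_error / ((n - 1) * (k - 1))
        return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

    # Hypothetical paired hepatic PDFF estimates (%) from the two MRI methods.
    mri_m = [2.1, 5.4, 11.8, 23.0, 31.5, 8.2]
    mri_c = [1.8, 5.1, 11.2, 22.4, 31.0, 7.6]
    print(bland_altman(mri_m, mri_c), icc_consistency(mri_m, mri_c))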
Accuracy evaluation of 3D lidar data from small UAV
NASA Astrophysics Data System (ADS)
Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav
2015-10-01
A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
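The "local surface smoothness on planar surfaces" metric can be operationalized, for example, as the RMS distance of points from a best-fit plane. The following sketch (SVD plane fit on a simulated patch) illustrates that idea; it is not the authors' exact implementation.

    import numpy as np

    def planar_smoothness_rms(points):
        """RMS distance of 3D points from their best-fit plane (same units as input)."""
        pts = np.asarray(points, float)
        centroid = pts.mean(axis=0)
        # The plane normal is the right singular vector of the centered points
        # associated with the smallest singular value.
        _, _, vt = np.linalg.svd(pts - centroid)
        normal = vt[-1]
        distances = (pts - centroid) @ normal
        return np.sqrt(np.mean(distances ** 2))

    # Simulated patch: points on a tilted plane with a few centimetres of lidar-like noise.
    rng = np.random.default_rng(42)
    xy = rng.uniform(0, 5, size=(200, 2))
    z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0, 0.03, 200)
    patch = np.column_stack([xy, z])
    print(planar_smoothness_rms(patch))   # close to the simulated 0.03 m noise level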
Variance approximations for assessments of classification accuracy
R. L. Czaplewski
1994-01-01
Variance approximations are derived for the weighted and unweighted kappa statistics, the conditional kappa statistic, and conditional probabilities. These statistics are useful to assess classification accuracy, such as accuracy of remotely sensed classifications in thematic maps when compared to a sample of reference classifications made in the field. Published...
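For orientation, the unweighted kappa statistic and one simple first-order variance approximation can be computed from an error (confusion) matrix as in the sketch below; the paper derives more general approximations (weighted kappa, conditional kappa, conditional probabilities) that are not reproduced here, and the error matrix is made up.

    import numpy as np

    def kappa_with_variance(confusion):
        """Unweighted kappa and the first-order approximation
        var(kappa) ~ p_o(1 - p_o) / (N (1 - p_e)^2) for a square error matrix
        whose rows are map classes and columns are reference classes."""
        m = np.asarray(confusion, float)
        n = m.sum()
        p_o = np.trace(m) / n                                   # observed agreement
        p_e = np.sum(m.sum(axis=0) * m.sum(axis=1)) / n ** 2    # chance agreement
        kappa = (p_o - p_e) / (1.0 - p_e)
        var = p_o * (1.0 - p_o) / (n * (1.0 - p_e) ** 2)
        return kappa, var

    # Hypothetical 3-class error matrix (map rows vs. field reference columns).
    cm = [[50, 4, 2],
          [6, 40, 5],
          [3, 7, 33]]
    k, v = kappa_with_variance(cm)
    print(k, v ** 0.5)   # kappa and its approximate standard error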
Zand, Kevin A.; Shah, Amol; Heba, Elhamy; Wolfson, Tanya; Hamilton, Gavin; Lam, Jessica; Chen, Joshua; Hooker, Jonathan C.; Gamst, Anthony C.; Middleton, Michael S.; Schwimmer, Jeffrey B.; Sirlin, Claude B.
2015-01-01
Purpose To assess accuracy of magnitude-based magnetic resonance imaging (M-MRI) in children to estimate hepatic proton density fat fraction (PDFF) using two to six echoes, with magnetic resonance spectroscopy (MRS)-measured PDFF as a reference standard. Materials and Methods This was an IRB-approved, HIPAA-compliant, single-center, cross-sectional, retrospective analysis of data collected prospectively between 2008 and 2013 in children with known or suspected non-alcoholic fatty liver disease (NAFLD). Two hundred and eighty-six children (8 – 20 [mean 14.2 ± 2.5] yrs; 182 boys) underwent same-day MRS and M-MRI. Unenhanced two-dimensional axial spoiled gradient-recalled-echo images at six echo times were obtained at 3T after a single low-flip-angle (10°) excitation with ≥ 120-ms recovery time. Hepatic PDFF was estimated using the first two, three, four, five, and all six echoes. For each number of echoes, accuracy of M-MRI to estimate PDFF was assessed by linear regression with MRS-PDFF as reference standard. Accuracy metrics were regression intercept, slope, average bias, and R2. Results MRS-PDFF ranged from 0.2 – 40.4% (mean 13.1 ± 9.8%). Using three to six echoes, regression intercept, slope, and average bias were 0.46 – 0.96%, 0.99 – 1.01, and 0.57 – 0.89%, respectively. Using two echoes, these values were 2.98%, 0.97, and 2.72%, respectively. R2 ranged 0.98 – 0.99 for all methods. Conclusion Using three to six echoes, M-MRI has high accuracy for hepatic PDFF estimation in children. PMID:25847512
Timsit, E; Dendukuri, N; Schiller, I; Buczinski, S
2016-12-01
Diagnosis of bovine respiratory disease (BRD) in beef cattle placed in feedlots is typically based on clinical illness (CI) detected by pen-checkers. Unfortunately, the accuracy of this diagnostic approach (namely, sensitivity [Se] and specificity [Sp]) remains poorly understood, in part due to the absence of a reference test for ante-mortem diagnosis of BRD. Our objective was to pool available estimates of CI's diagnostic accuracy for BRD diagnosis in feedlot beef cattle while adjusting for the inaccuracy in the reference test. The presence of lung lesions (LU) at slaughter was used as the reference test. A systematic review of the literature was conducted to identify research articles comparing CI detected by pen-checkers during the feeding period to LU at slaughter. A hierarchical Bayesian latent-class meta-analysis was used to model test accuracy. This approach accounted for imperfections of both tests as well as the within- and between-study variability in the accuracy of CI. Furthermore, it also predicted the SeCI and SpCI for future studies. Conditional independence between CI and LU was assumed, as these two tests are not based on similar biological principles. Seven studies were included in the meta-analysis. Estimated pooled SeCI and SpCI were 0.27 (95% Bayesian credible interval: 0.12-0.65) and 0.92 (0.72-0.98), respectively, whereas estimated pooled SeLU and SpLU were 0.91 (0.82-0.99) and 0.67 (0.64-0.79). Predicted SeCI and SpCI for future studies were 0.27 (0.01-0.96) and 0.92 (0.14-1.00), respectively. The wide credible intervals around the predicted SeCI and SpCI estimates indicated considerable heterogeneity among studies, which suggests that the pooled SeCI and SpCI are not generalizable to individual studies. In conclusion, CI appeared to have poor Se but high Sp for BRD diagnosis in feedlots. Furthermore, considerable heterogeneity among studies highlighted an urgent need to standardize BRD diagnosis in feedlots. Copyright © 2016 Elsevier B.V. All rights reserved.
Calhoun, Peter; Lum, John; Beck, Roy W; Kollman, Craig
2013-09-01
Knowledge of the accuracy of continuous glucose monitoring (CGM) devices is important for its use as a management tool for individuals with diabetes and for its use to assess outcomes in clinical studies. Using data from several inpatient studies, we compared the accuracy of two sensors, the Medtronic Enlite™ using MiniMed Paradigm® Veo™ calibration and the Sof-Sensor® glucose sensor using Guardian® REAL-Time CGM calibration (all from Medtronic Diabetes, Northridge, CA). Nocturnal data were analyzed from eight inpatient studies in which both CGM and reference glucose measurements were available. The analyses included 1,666 CGM-reference paired glucose values for the Enlite in 54 participants over 69 nights and 3,627 paired values for the Sof-Sensor in 66 participants over 91 nights. The Enlite sensor tended to report glucose levels lower than the reference over the entire range of glucose values, whereas the Sof-Sensor values tended to be higher than reference values in the hypoglycemic range and lower than reference values in the hyperglycemic range. The overall median sensor-reference difference was -15 mg/dL for the Enlite and -1 mg/dL for the Sof-Sensor (P<0.001). The median relative absolute difference was 15% for the Enlite versus 12% for the Sof-Sensor (P=0.06); 66% of Enlite values and 73% of Sof-Sensor values met International Organization for Standardization criteria. We found that the Enlite tended to be biased low over the entire glucose range, whereas the Sof-Sensor showed the more typical sensor pattern of being biased high in the hypoglycemic range and biased low in the hyperglycemic range.
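The headline CGM statistics above (median sensor-reference bias and median relative absolute difference) are simple to compute from paired values; a sketch on made-up nocturnal pairs follows.

    import numpy as np

    def cgm_agreement(cgm, reference):
        """Median sensor-minus-reference difference (mg/dL) and median relative
        absolute difference (%) for paired CGM and reference glucose values."""
        cgm, ref = np.asarray(cgm, float), np.asarray(reference, float)
        diff = cgm - ref
        rad = 100.0 * np.abs(diff) / ref
        return np.median(diff), np.median(rad)

    # Hypothetical nocturnal CGM-reference pairs (mg/dL).
    cgm = [72, 95, 130, 160, 58, 210]
    ref = [80, 100, 145, 150, 65, 230]
    print(cgm_agreement(cgm, ref))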
Diaz, Naryttza N; Krause, Lutz; Goesmann, Alexander; Niehaus, Karsten; Nattkemper, Tim W
2009-01-01
Background Metagenomics, or the sequencing and analysis of collective genomes (metagenomes) of microorganisms isolated from an environment, promises direct access to the "unculturable majority". This emerging field offers the potential to lay a solid basis for our understanding of the entire living world. However, taxonomic classification, an essential task in the analysis of metagenomics data sets, is still far from being solved. We present a novel strategy to predict the taxonomic origin of environmental genomic fragments. The proposed classifier combines the idea of the k-nearest neighbor with strategies from kernel-based learning. Results Our novel strategy was extensively evaluated using the leave-one-out cross validation strategy on fragments of variable length (800 bp – 50 Kbp) from 373 completely sequenced genomes. TACOA is able to classify genomic fragments of length 800 bp and 1 Kbp with high accuracy down to the rank of class. For longer fragments (≥ 3 Kbp), accurate predictions are made at even deeper taxonomic ranks (order and genus). Remarkably, TACOA also produces reliable results when the taxonomic origin of a fragment is not represented in the reference set, classifying such fragments to their known broader taxonomic class or simply as "unknown". We compared the classification accuracy of TACOA with the latest intrinsic classifier PhyloPythia using 63 recently published complete genomes. For fragments of length 800 bp and 1 Kbp the overall accuracy of TACOA is higher than that obtained by PhyloPythia at all taxonomic ranks. For all fragment lengths, both methods achieved comparably high specificity up to the rank of class, and low false negative rates are also obtained. Conclusion An accurate multi-class taxonomic classifier was developed for environmental genomic fragments. TACOA can predict with high reliability the taxonomic origin of genomic fragments as short as 800 bp. The proposed method is transparent, fast, accurate, and the reference set can be easily updated as newly sequenced genomes become available. Moreover, the method was demonstrated to be competitive when compared to the most recent classifier PhyloPythia and has the advantage that it can be locally installed and the reference set can be kept up-to-date. PMID:19210774
Liu, Shu-Yu; Hu, Chang-Qin
2007-10-17
This study introduces a general method of quantitative nuclear magnetic resonance (qNMR) for the calibration of reference standards of macrolide antibiotics. Several qNMR experimental conditions were optimized, including the delay, which is an important parameter for quantification. Three kinds of macrolide antibiotics were used to validate the accuracy of the qNMR method by comparison with the results obtained by the high performance liquid chromatography (HPLC) method. The purities of five common reference standards of macrolide antibiotics were measured by the 1H qNMR method and the mass balance method, respectively, and the analysis results of the two methods were compared. The qNMR method is quick and simple to use. In new drug research and development, qNMR provides a new and reliable method for purity analysis of reference standards.
Sommargren, Gary E.
1999-01-01
An interferometer which has the capability of measuring optical elements and systems with an accuracy of λ/1000, where λ is the wavelength of visible light. Whereas current interferometers employ a reference surface, which inherently limits the accuracy of the measurement to about λ/50, this interferometer uses an essentially perfect spherical reference wavefront generated by the fundamental process of diffraction. Whereas current interferometers illuminate the optic to be tested with an aberrated wavefront which also limits the accuracy of the measurement, this interferometer uses an essentially perfect spherical measurement wavefront generated by the fundamental process of diffraction. This interferometer is adjustable to give unity fringe visibility, which maximizes the signal-to-noise, and has the means to introduce a controlled prescribed relative phase shift between the reference wavefront and the wavefront from the optics under test, which permits analysis of the interference fringe pattern using standard phase extraction algorithms.
Intonation in unaccompanied singing: accuracy, drift, and a model of reference pitch memory.
Mauch, Matthias; Frieler, Klaus; Dixon, Simon
2014-07-01
This paper presents a study on intonation and intonation drift in unaccompanied singing, and proposes a simple model of reference pitch memory that accounts for many of the effects observed. Singing experiments were conducted with 24 singers of varying ability under three conditions (Normal, Masked, Imagined). Over the duration of a recording, ∼50 s, a median absolute intonation drift of 11 cents was observed. While smaller than the median note error (19 cents), drift was significant in 22% of recordings. Drift magnitude did not correlate with other measures of singing accuracy, singing experience, or the presence of conditions tested. Furthermore, it is shown that neither a static intonation memory model nor a memoryless interval-based intonation model can account for the accuracy and drift behavior observed. The proposed causal model provides a better explanation as it treats the reference pitch as a changing latent variable.
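Intonation error and drift are conventionally expressed in cents relative to a reference pitch (1200·log2(f/f_ref)). The sketch below converts sung note frequencies to per-note errors in cents and reports the median absolute error plus a simple linear drift estimate; it is an illustrative computation, not the paper's reference pitch memory model, and the note values are made up.

    import numpy as np

    def cents(freq_hz, ref_hz):
        """Interval in cents between a sung frequency and its target frequency."""
        return 1200.0 * np.log2(float(freq_hz) / float(ref_hz))

    # Hypothetical sequence of notes: (time in s, sung f0 in Hz, target f0 in Hz).
    notes = [(0.0, 221.0, 220.0), (2.0, 246.2, 246.94), (4.0, 261.0, 261.63),
             (6.0, 293.0, 293.66), (8.0, 328.5, 329.63)]

    times = np.array([t for t, _, _ in notes])
    errors = np.array([cents(f, ref) for _, f, ref in notes])   # per-note error in cents

    median_error = np.median(np.abs(errors))
    drift_per_second = np.polyfit(times, errors, 1)[0]   # linear trend of pitch error
    print(median_error, drift_per_second)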
Improvement in Rayleigh Scattering Measurement Accuracy
NASA Technical Reports Server (NTRS)
Fagan, Amy F.; Clem, Michelle M.; Elam, Kristie A.
2012-01-01
Spectroscopic Rayleigh scattering is an established flow diagnostic that has the ability to provide simultaneous velocity, density, and temperature measurements. The Fabry-Perot interferometer or etalon is a commonly employed instrument for resolving the spectrum of molecular Rayleigh scattered light for the purpose of evaluating these flow properties. This paper investigates the use of an acousto-optic frequency shifting device to improve measurement accuracy in Rayleigh scattering experiments at the NASA Glenn Research Center. The frequency shifting device is used as a means of shifting the incident or reference laser frequency by 1100 MHz to avoid overlap of the Rayleigh and reference signal peaks in the interference pattern used to obtain the velocity, density, and temperature measurements, and also to calibrate the free spectral range of the Fabry-Perot etalon. The measurement accuracy improvement is evaluated by comparison of Rayleigh scattering measurements acquired with and without shifting of the reference signal frequency in a 10 mm diameter subsonic nozzle flow.
Hillebrand, Jennifer; Olszewski, Deborah; Sedefov, Roumen
2010-02-01
This article describes the findings of a descriptive analysis of 27 online drug retailers selling legal alternatives to illegal drugs, commonly referred to as "herbal highs" and "legal highs", in 2008. The study attempted to quantify the online availability of drug retailers and to describe common products and characteristics of EU-based retail sales. The findings highlight the concern about the lack of objective information about the products offered, including their potential risks to health. Systems should be developed to assess the contents of products and the accuracy of information provided on the Internet, alongside continued monitoring of this market for "legal high" substances.
NASA Astrophysics Data System (ADS)
Kankare, Ville; Vauhkonen, Jari; Tanhuanpää, Topi; Holopainen, Markus; Vastaranta, Mikko; Joensuu, Marianna; Krooks, Anssi; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto
2014-11-01
Detailed information about timber assortments and diameter distributions is required in forest management. Forest owners can make better decisions concerning the timing of timber sales and forest companies can utilize more detailed information to optimize their wood supply chain from forest to factory. The objective here was to compare the accuracies of high-density laser scanning techniques for the estimation of tree-level diameter distribution and timber assortments. We also introduce a method that utilizes a combination of airborne and terrestrial laser scanning in timber assortment estimation. The study was conducted in Evo, Finland. Harvester measurements were used as a reference for 144 trees within a single clear-cut stand. The results showed that accurate tree-level timber assortments and diameter distributions can be obtained, using terrestrial laser scanning (TLS) or a combination of TLS and airborne laser scanning (ALS). Saw log volumes were estimated with higher accuracy than pulpwood volumes. The saw log volumes were estimated with relative root-mean-squared errors of 17.5% and 16.8% with TLS and a combination of TLS and ALS, respectively. The respective accuracies for pulpwood were 60.1% and 59.3%. The differences in the bucking method used also caused some large errors. In addition, tree quality factors highly affected the bucking accuracy, especially with pulpwood volume.
Angelides, Kimon; Matsunami, Risë K.; Engler, David A.
2015-01-01
Background: We evaluated the accuracy, precision, and linearity of the In Touch® blood glucose monitoring system (BGMS), a new color touch screen and cellular-enabled blood glucose meter, using a new rapid, highly precise and accurate 13C6 isotope-dilution liquid chromatography-mass spectrometry method (IDLC-MS). Methods: Blood glucose measurements from the In Touch® BGMS were referenced to a validated UPLC-MRM standard reference measurement procedure previously shown to be highly accurate and precise. Readings from the In Touch® BGMS were taken over the blood glucose range of 24-640 mg/dL using 12 concentrations of blood glucose. Ten In Touch® BGMS and 3 lots of test strips were used, with 10 replicates at each concentration. A lay user study was also performed to assess ease of use. Results: At blood glucose concentrations <75 mg/dL, 100% of the measurements are within ±8 mg/dL of the true reference standard; at blood glucose levels >75 mg/dL, 100% of the measurements are within ±15% of the true reference standard. 100% of the results are within category A of the consensus grid. Within-run precision shows CV < 3.72% between 24-50 mg/dL and CV < 2.22% between 500 and 600 mg/dL. The results show that the In Touch® meter exceeds the minimum criteria of both the ISO 15197:2003 and ISO 15197:2013 standards. The results from a user panel show that 100% of the respondents reported that the color touch screen, with its graphical user interface (GUI), is well labeled and easy to navigate. Conclusions: To our knowledge this is the first touch screen glucose meter and the first study where the accuracy of a new BGMS has been measured against a true primary reference standard, namely IDLC-MS. PMID:26002836
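The accuracy criterion reported above (within ±8 mg/dL below 75 mg/dL and within ±15% at or above 75 mg/dL relative to the reference), and the related ISO 15197:2013 limits of ±15 mg/dL / ±15%, can be checked per measurement pair as in this sketch. The paired readings are made up, and the full regulatory evaluation also involves consensus error grid analysis not shown here.

    import numpy as np

    def within_limits(meter, reference, split_mgdl, abs_limit_mgdl, rel_limit_pct):
        """Fraction of meter readings within an absolute limit below `split_mgdl`
        and within a relative limit at or above it, versus the reference values."""
        meter, ref = np.asarray(meter, float), np.asarray(reference, float)
        low = ref < split_mgdl
        ok_low = np.abs(meter[low] - ref[low]) <= abs_limit_mgdl
        ok_high = np.abs(meter[~low] - ref[~low]) <= rel_limit_pct / 100.0 * ref[~low]
        return (ok_low.sum() + ok_high.sum()) / len(ref)

    # Hypothetical paired readings (mg/dL) against an IDLC-MS reference.
    meter = [28, 52, 74, 118, 240, 510]
    ref   = [30, 50, 70, 110, 250, 500]

    # Criterion as reported in the study: +/-8 mg/dL below 75, +/-15% at or above 75.
    print(within_limits(meter, ref, 75, 8, 15))
    # ISO 15197:2013 limits: +/-15 mg/dL below 100, +/-15% at or above 100.
    print(within_limits(meter, ref, 100, 15, 15))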
Metal Standards for Waveguide Characterization of Materials
NASA Technical Reports Server (NTRS)
Lambert, Kevin M.; Kory, Carol L.
2009-01-01
Rectangular-waveguide inserts that are made of non-ferromagnetic metals and are sized and shaped to function as notch filters have been conceived as reference standards for use in the rectangular-waveguide method of characterizing materials with respect to such constitutive electromagnetic properties as permittivity and permeability. Such standards are needed for determining the accuracy of measurements used in the method, as described below. In this method, a specimen of a material to be characterized is cut to a prescribed size and shape and inserted in a rectangular-waveguide test fixture, wherein the specimen is irradiated with a known source signal and detectors are used to measure the signals reflected by, and transmitted through, the specimen. Scattering parameters [also known as "S" parameters (S11, S12, S21, and S22)] are computed from ratios between the transmitted and reflected signals and the source signal. Then the permeability and permittivity of the specimen material are derived from the scattering parameters. Theoretically, the technique for calculating the permeability and permittivity from the scattering parameters is exact, but the accuracy of the results depends on the accuracy of the measurements from which the scattering parameters are obtained. To determine whether the measurements are accurate, it is necessary to perform comparable measurements on reference standards, which are essentially specimens that have known scattering parameters. To be most useful, reference standards should provide the full range of scattering-parameter values that can be obtained from material specimens. Specifically, measurements of the backscattering parameter (S11) from no reflection to total reflection and of the forward-transmission parameter (S21) from no transmission to total transmission are needed. A reference standard that functions as a notch (band-stop) filter can satisfy this need because as the signal frequency is varied across the frequency range for which the filter is designed, the scattering parameters vary over the ranges of values between the extremes of total reflection and total transmission. A notch-filter reference standard in the form of a rectangular-waveguide insert that has a size and shape similar to that of a material specimen is advantageous because the measurement configuration used for the reference standard can be the same as that for a material specimen. Typically a specimen is a block of material that fills a waveguide cross-section but occupies only a small fraction of the length of the waveguide. A reference standard of the present type (see figure) is a metal block that fills part of a waveguide cross section and contains a slot, the long dimension of which can be chosen to tailor the notch frequency to a desired value. The scattering parameters and notch frequency can be estimated with high accuracy by use of commercially available electromagnetic-field-simulating software. The block can be fabricated to the requisite precision by wire electrical-discharge machining. In use, the accuracy of measurements is determined by comparison of (1) the scattering parameters calculated from the measurements with (2) the scattering parameters calculated by the aforementioned software.
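The report does not spell out the extraction step; a common way to derive permittivity and permeability from measured S-parameters in a rectangular guide is the Nicolson-Ross-Weir procedure, sketched below under the assumption of a TE10 single-mode guide. The function and variable names are illustrative, and the branch of the complex logarithm must be chosen with care for electrically thick specimens.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def nrw_extract(s11, s21, f, length, a):
    """Nicolson-Ross-Weir extraction of (eps_r, mu_r) from complex S11/S21
    measured at the specimen faces of a rectangular waveguide (TE10 mode).

    f      : frequency, Hz
    length : specimen length along the guide, m
    a      : broad-wall dimension of the guide, m
    """
    lam0 = C / f          # free-space wavelength
    lamc = 2.0 * a        # TE10 cutoff wavelength

    # Interface reflection coefficient; pick the root with |gamma| <= 1
    k = (s11**2 - s21**2 + 1.0) / (2.0 * s11)
    gamma = k + np.sqrt(k**2 - 1.0)
    if abs(gamma) > 1.0:
        gamma = k - np.sqrt(k**2 - 1.0)

    # Transmission coefficient through the specimen
    t = (s11 + s21 - gamma) / (1.0 - (s11 + s21) * gamma)

    # 1/Lambda^2; np.log uses the principal branch, which is only adequate
    # for specimens much shorter than a guided wavelength
    inv_lambda_sq = -(np.log(1.0 / t) / (2.0 * np.pi * length))**2
    inv_lambda = np.sqrt(inv_lambda_sq)

    root = np.sqrt(1.0 / lam0**2 - 1.0 / lamc**2)
    mu_r = (1.0 + gamma) * inv_lambda / ((1.0 - gamma) * root)
    eps_r = lam0**2 * (1.0 / lamc**2 + inv_lambda_sq) / mu_r
    return eps_r, mu_r
```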
Accurate label-free 3-part leukocyte recognition with single cell lens-free imaging flow cytometry.
Li, Yuqian; Cornelis, Bruno; Dusa, Alexandra; Vanmeerbeeck, Geert; Vercruysse, Dries; Sohn, Erik; Blaszkiewicz, Kamil; Prodanov, Dimiter; Schelkens, Peter; Lagae, Liesbet
2018-05-01
Three-part white blood cell differentials, which are key to routine blood workups, are typically performed in centralized laboratories on conventional hematology analyzers operated by highly trained staff. With the growing trend toward miniaturized point-of-need blood analysis tools, intended to accelerate turnaround times and move routine blood testing away from centralized facilities, our group has developed a highly miniaturized holographic imaging system for generating lens-free images of white blood cells in suspension. Analysis and classification of its output data constitute the final crucial step in ensuring appropriate accuracy of the system. In this work, we implement reference holographic images of single white blood cells in suspension in order to establish an accurate ground truth and increase classification accuracy. We also automate the entire workflow for analyzing the output and demonstrate a clear improvement in the accuracy of the 3-part classification. High-dimensional optical and morphological features are extracted from reconstructed digital holograms of single cells using the ground-truth images, and advanced machine learning algorithms are investigated and implemented to obtain 99% classification accuracy. Representative features of the three white blood cell subtypes are selected and give comparable results, with a focus on rapid cell recognition and decreased computational cost. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Lee, Sang Cheol; Hong, Sung Kyung
2016-12-11
This paper presents an algorithm for velocity-aided attitude estimation for helicopter aircraft using a microelectromechanical system inertial-measurement unit. In general, high-performance gyroscopes are used for estimating the attitude of a helicopter, but this type of sensor is very expensive. When designing a cost-effective attitude system, attitude can be estimated by fusing a low-cost accelerometer and a gyro, but the disadvantage of this method is its relatively low accuracy. The accelerometer output includes a component that occurs primarily as the aircraft turns, as well as the gravitational acceleration. When estimating attitude, the accelerometer measurement terms other than gravitational ones can be considered as disturbances. Therefore, errors increase in accordance with the flight dynamics. The proposed algorithm is designed to use velocity as an aid for high accuracy at low cost. It effectively eliminates the disturbances of accelerometer measurements using the airspeed. The algorithm was verified using helicopter experimental data. The algorithm performance was confirmed through a comparison with an attitude estimate obtained from an attitude heading reference system based on a high accuracy optic gyro, which was employed as core attitude equipment in the helicopter.
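The abstract does not give the filter equations, so the sketch below only illustrates the underlying idea: the turn-induced (centripetal) term ω × v, estimated from the gyro rates and the body-frame airspeed, is subtracted from the accelerometer output before it is used as a gravity reference, and that reference is blended with integrated gyro rates. The complementary-filter structure, gain, and all names are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def velocity_aided_roll_pitch(acc, gyro, airspeed_body):
    """Roll/pitch from the accelerometer after removing the centripetal
    disturbance omega x v estimated with the airspeed aid (body frame, NED)."""
    f_grav = acc - np.cross(gyro, airspeed_body)   # approximate gravity-only specific force
    roll = np.arctan2(-f_grav[1], -f_grav[2])
    pitch = np.arctan2(f_grav[0], np.hypot(f_grav[1], f_grav[2]))
    return roll, pitch

def complementary_update(att, gyro, acc, airspeed_body, dt, k=0.02):
    """Blend integrated gyro rates (small-angle approximation) with the
    velocity-aided accelerometer reference."""
    roll_g, pitch_g = att[0] + gyro[0] * dt, att[1] + gyro[1] * dt
    roll_a, pitch_a = velocity_aided_roll_pitch(acc, gyro, airspeed_body)
    return np.array([(1 - k) * roll_g + k * roll_a,
                     (1 - k) * pitch_g + k * pitch_a])
```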
NASA Astrophysics Data System (ADS)
Gao, Chunfeng; Wei, Guo; Wang, Qi; Xiong, Zhenyu; Wang, Qun; Long, Xingwu
2016-10-01
As an indispensable piece of equipment in inertial technology tests, the three-axis turntable is widely used in the calibration of various types of inertial navigation systems (INS). In order to ensure the calibration accuracy of an INS, the initial state of the turntable must be measured accurately. However, the traditional measuring method requires a lot of external equipment (such as a level instrument, north seeker, autocollimator, etc.), and the test process is complex and inefficient. It is therefore relatively difficult for inertial measurement equipment manufacturers to realize self-inspection of the turntable. Owing to the high-precision attitude information provided by a laser gyro strapdown inertial navigation system (SINS) after fine alignment, it can be used as the attitude reference for measuring the initial state of the three-axis turntable. Based on the principle that a fixed rotation vector increment is not affected by the measuring point, we use the laser gyro INS and the turntable encoder to provide the attitudes of the turntable mounting plate. In this way, high-accuracy measurement of the perpendicularity error and initial attitude of the three-axis turntable has been achieved.
Quantifying the effect of side branches in endothelial shear stress estimates
Giannopoulos, Andreas A.; Chatzizisis, Yiannis S.; Maurovich-Horvat, Pal; Antoniadis, Antonios P.; Hoffmann, Udo; Steigner, Michael L.; Rybicki, Frank J.; Mitsouras, Dimitrios
2016-01-01
Background and aims Low and high endothelial shear stress (ESS) is associated with coronary atherosclerosis progression and high-risk plaque features. Coronary ESS is currently assessed via computational fluid dynamic (CFD) simulation in the lumen geometry determined from invasive imaging such as intravascular ultrasound and optical coherence tomography. This process typically omits side branches of the target vessel in the CFD model as invasive imaging of those vessels is not clinically-indicated. The purpose of this study was to determine the extent to which this simplification affects the determination of those regions of the coronary endothelium subjected to pathologic ESS. Methods We determined the diagnostic accuracy of ESS profiling without side branches to detect pathologic ESS in the major coronary arteries of 5 hearts imaged ex vivo with CT angiography. ESS of the three major coronary arteries was calculated both without (test model), and with (reference model) inclusion of all side branches >1.5 mm in diameter, using previously-validated CFD approaches. Diagnostic test characteristics (accuracy, sensitivity, specificity and negative and positive predictive value [NPV/PPV]) with respect to the reference model were assessed for both the entire length as well as only the proximal portion of each major coronary artery, where the majority of high-risk plaques occur. Results Using the model without side branches overall accuracy, sensitivity, specificity, NPV and PPV were 83.4%, 54.0%, 96%, 95.9% and 55.1%, respectively to detect low ESS, and 87.0%, 67.7%, 90.7%, 93.7% and 57.5%, respectively to detect high ESS. When considering only the proximal arteries, test characteristics differed for low and high ESS, with low sensitivity (67.7%) and high specificity (90.7%) to detect low ESS, and low sensitivity (44.7%) and high specificity (95.5%) to detect high ESS. Conclusions The exclusion of side branches in ESS vascular profiling studies greatly reduces the ability to detect regions of the major coronary arteries subjected to pathologic ESS. Single-conduit models can in general only be used to rule out pathologic ESS. PMID:27372207
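For reference, the test characteristics quoted above follow directly from the 2×2 comparison of the simplified model against the reference model; a minimal helper (hypothetical name) is:

```python
def diagnostic_characteristics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return accuracy, sensitivity, specificity, ppv, npv
```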
The Release 6 reference sequence of the Drosophila melanogaster genome
Hoskins, Roger A.; Carlson, Joseph W.; Wan, Kenneth H.; ...
2015-01-14
Drosophila melanogaster plays an important role in molecular, genetic, and genomic studies of heredity, development, metabolism, behavior, and human disease. The initial reference genome sequence reported more than a decade ago had a profound impact on progress in Drosophila research, and improving the accuracy and completeness of this sequence continues to be important to further progress. We previously described improvement of the 117-Mb sequence in the euchromatic portion of the genome and 21 Mb in the heterochromatic portion, using a whole-genome shotgun assembly, BAC physical mapping, and clone-based finishing. Here, we report an improved reference sequence of the single-copy and middle-repetitive regions of the genome, produced using cytogenetic mapping to mitotic and polytene chromosomes, clone-based finishing and BAC fingerprint verification, ordering of scaffolds by alignment to cDNA sequences, incorporation of other map and sequence data, and validation by whole-genome optical restriction mapping. These data substantially improve the accuracy and completeness of the reference sequence and the order and orientation of sequence scaffolds into chromosome arm assemblies. Representation of the Y chromosome and other heterochromatic regions is particularly improved. The new 143.9-Mb reference sequence, designated Release 6, effectively exhausts clone-based technologies for mapping and sequencing. Highly repeat-rich regions, including large satellite blocks and functional elements such as the ribosomal RNA genes and the centromeres, are largely inaccessible to current sequencing and assembly methods and remain poorly represented. In conclusion, further significant improvements will require sequencing technologies that do not depend on molecular cloning and that produce very long reads.
2011-01-01
Background When a specimen belongs to a species not yet represented in DNA barcode reference libraries there is disagreement over the effectiveness of using sequence comparisons to assign the query accurately to a higher taxon. Library completeness and the assignment criteria used have been proposed as critical factors affecting the accuracy of such assignments but have not been thoroughly investigated. We explored the accuracy of assignments to genus, tribe and subfamily in the Sphingidae, using the almost complete global DNA barcode reference library (1095 species) available for this family. Costa Rican sphingids (118 species), a well-documented, diverse subset of the family, with each of the tribes and subfamilies represented were used as queries. We simulated libraries with different levels of completeness (10-100% of the available species), and recorded assignments (positive or ambiguous) and their accuracy (true or false) under six criteria. Results A liberal tree-based criterion assigned 83% of queries accurately to genus, 74% to tribe and 90% to subfamily, compared to a strict tree-based criterion, which assigned 75% of queries accurately to genus, 66% to tribe and 84% to subfamily, with a library containing 100% of available species (but excluding the species of the query). The greater number of true positives delivered by more relaxed criteria was negatively balanced by the occurrence of more false positives. This effect was most sharply observed with libraries of the lowest completeness where, for example at the genus level, 32% of assignments were false positives with the liberal criterion versus < 1% when using the strict. We observed little difference (< 8% using the liberal criterion) however, in the overall accuracy of the assignments between the lowest and highest levels of library completeness at the tribe and subfamily level. Conclusions Our results suggest that when using a strict tree-based criterion for higher taxon assignment with DNA barcodes, the likelihood of assigning a query a genus name incorrectly is very low, if a genus name is provided it has a high likelihood of being accurate, and if no genus match is available the query can nevertheless be assigned to a subfamily with high accuracy regardless of library completeness. DNA barcoding often correctly assigned sphingid moths to higher taxa when species matches were unavailable, suggesting that barcode reference libraries can be useful for higher taxon assignments long before they achieve complete species coverage. PMID:21806794
Information retrieval from holographic interferograms: Fundamentals and problems
NASA Technical Reports Server (NTRS)
Vest, Charles M.
1987-01-01
Holographic interferograms can contain large amounts of information about flow and temperature fields. Their information content can be very high because they can be viewed from many different directions. This multidirectionality and fringe localization add to the information contained in the fringe pattern if diffuse illumination is used. Additional information and increased accuracy can be obtained through the use of dual-reference-wave holography to add reference fringes or to effect discrete phase-shift or heterodyne interferometry. Automated analysis of fringes is possible if interferograms are of simple structure and good quality. In practice, however, a large number of practical problems can arise, resulting in a difficult image processing task.
Measurement of optical to electrical and electrical to optical delays with ps-level uncertainty.
Peek, H Z; Pinkert, T J; Jansweijer, P P M; Koelemeij, J C J
2018-05-28
We present a new measurement principle to determine the absolute time delay of a waveform from an optical reference plane to an electrical reference plane and vice versa. We demonstrate a method based on this principle with 2 ps uncertainty. This method can be used to perform accurate time delay determinations of optical transceivers used in fiber-optic time-dissemination equipment. As a result the time scales in optical and electrical domain can be related to each other with the same uncertainty. We expect this method will be a new breakthrough in high-accuracy time transfer and absolute calibration of time-transfer equipment.
Video monitoring of oxygen saturation during controlled episodes of acute hypoxia.
Addison, Paul S; Foo, David M H; Jacquel, Dominique; Borg, Ulf
2016-08-01
A method for extracting video photoplethysmographic information from an RGB video stream is tested on data acquired during a porcine model of acute hypoxia. Cardiac pulsatile information was extracted from the acquired signals and processed to determine a continuously reported oxygen saturation (SvidO2). A high degree of correlation was found between the video-derived values and a reference from a pulse oximeter. The calculated mean bias and accuracy across all eight desaturation episodes were -0.03% (range: -0.21% to 0.24%) and 4.90% (range: 3.80% to 6.19%), respectively. The results support the hypothesis that oxygen saturation trending can be evaluated accurately from a video system during acute hypoxia.
NASA Astrophysics Data System (ADS)
Mao, Heng; Wang, Xiao; Zhao, Dazun
2009-05-01
As a wavefront sensing (WFS) tool, the Baseline algorithm, an iterative-transform phase-retrieval algorithm, estimates the phase distribution at the pupil from known PSFs at defocus planes. By using multiple phase diversities and appropriate phase unwrapping methods, the algorithm can achieve a reliable unique solution and high-dynamic-range phase measurement. In this paper, a Baseline-algorithm-based wavefront sensing experiment with a modified phase unwrapping step has been implemented, and corresponding graphical user interface (GUI) software has also been developed. The adaptability and repeatability of the Baseline algorithm have been validated experimentally. Moreover, by reference to ZYGO interferometric results, the WFS accuracy of the algorithm has been calibrated.
Habeck, C; Gazes, Y; Razlighi, Q; Steffener, J; Brickman, A; Barulli, D; Salthouse, T; Stern, Y
2016-01-15
Analyses of large test batteries administered to individuals ranging from young to old have consistently yielded a set of latent variables representing reference abilities (RAs) that capture the majority of the variance in age-related cognitive change: Episodic Memory, Fluid Reasoning, Perceptual Processing Speed, and Vocabulary. In a previous paper (Stern et al., 2014), we introduced the Reference Ability Neural Network Study, which administers 12 cognitive neuroimaging tasks (3 for each RA) to healthy adults age 20-80 in order to derive unique neural networks underlying these 4 RAs and investigate how these networks may be affected by aging. We used a multivariate approach, linear indicator regression, to derive a unique covariance pattern or Reference Ability Neural Network (RANN) for each of the 4 RAs. The RANNs were derived from the neural task data of 64 younger adults of age 30 and below. We then prospectively applied the RANNs to fMRI data from the remaining sample of 227 adults of age 31 and above in order to classify each subject-task map into one of the 4 possible reference domains. Overall classification accuracy across subjects in the sample age 31 and above was 0.80±0.18. Classification accuracy by RA domain was also good, but variable; memory: 0.72±0.32; reasoning: 0.75±0.35; speed: 0.79±0.31; vocabulary: 0.94±0.16. Classification accuracy was not associated with cross-sectional age, suggesting that these networks, and their specificity to the respective reference domain, might remain intact throughout the age range. Higher mean brain volume was correlated with increased overall classification accuracy; better overall performance on the tasks in the scanner was also associated with classification accuracy. For the RANN network scores, we observed for each RANN that a higher score was associated with a higher corresponding classification accuracy for that reference ability. Despite the absence of behavioral performance information in the derivation of these networks, we also observed some brain-behavioral correlations, notably for the fluid-reasoning network whose network score correlated with performance on the memory and fluid-reasoning tasks. While age did not influence the expression of this RANN, the slope of the association between network score and fluid-reasoning performance was negatively associated with higher ages. These results provide support for the hypothesis that a set of specific, age-invariant neural networks underlies these four RAs, and that these networks maintain their cognitive specificity and level of intensity across age. Activation common to all 12 tasks was identified as another activation pattern resulting from a mean-contrast Partial-Least-Squares technique. This common pattern did show associations with age and some subject demographics for some of the reference domains, lending support to the overall conclusion that aspects of neural processing that are specific to any cognitive reference ability stay constant across age, while aspects that are common to all reference abilities differ across age. Copyright © 2015 Elsevier Inc. All rights reserved.
High key rate continuous-variable quantum key distribution with a real local oscillator.
Wang, Tao; Huang, Peng; Zhou, Yingming; Liu, Weiqi; Ma, Hongxin; Wang, Shiyu; Zeng, Guihua
2018-02-05
Continuous-variable quantum key distribution (CVQKD) with a real local oscillator (LO) has been extensively studied recently due to its security and simplicity. In this paper, we propose a novel implementation of high-key-rate CVQKD with a real LO. In particular, with the help of a simultaneously generated reference pulse, the phase drift of the signal is tracked in real time and then compensated. By using time and polarization multiplexing to isolate the reference pulse and by controlling its intensity, contamination from the reference pulse is suppressed and high accuracy of the phase compensation is guaranteed. In addition, we employ homodyne detection on the signal to ensure high quantum efficiency, and heterodyne detection on the reference pulse to acquire its complete phase information. In order to suppress the excess noise, a theoretical noise model for our scheme is established. According to this model, the impact of the modulation variance and of the reference pulse intensity are both analysed theoretically and then optimized using the experimental data. By measuring the excess noise in a 25 km optical fiber transmission system, a 3.14 Mbps key rate in the asymptotic regime proves to be achievable. This work verifies the feasibility of high-key-rate CVQKD with a real LO within the metropolitan area.
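The paper's compensation scheme involves more detail than can be shown here, but the core idea of reference-pulse phase tracking can be sketched as follows: the per-pulse drift is estimated from the heterodyne outcomes of the reference pulse and used to rotate the modulation data before channel-parameter estimation. This sketch, including the choice to remap the sender's quadratures in post-processing, is an assumption for illustration only.

```python
import numpy as np

def remap_quadratures(x_a, p_a, i_ref, q_ref):
    """Rotate prepared quadratures (x_a, p_a) by the phase drift estimated
    from the reference pulse's heterodyne outcomes (i_ref, q_ref)."""
    theta = np.arctan2(q_ref, i_ref)          # per-pulse phase drift estimate
    x_rot = np.cos(theta) * x_a - np.sin(theta) * p_a
    p_rot = np.sin(theta) * x_a + np.cos(theta) * p_a
    return x_rot, p_rot
```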
NASA Technical Reports Server (NTRS)
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low resolution (LR) images differing by subpixel shifts onto a high resolution (HR), also called super resolution (SR), grid. The algorithm is effective in terms of both accuracy and time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance weighted averages, and Radial Basis Functions (RBF), yield comparable results. Best-accuracy reconstruction of the SR image by a factor of two requires four LR images differing by four independent subpixel shifts. The algorithm has two steps: (i) registration of the low resolution images and (ii) shifting the low resolution images to align with the reference image and projecting them onto the high resolution grid, based on the shift of each low resolution image, using different interpolation techniques. Experiments were conducted by simulating low resolution images through subpixel shifting and subsampling of an original high resolution image and then reconstructing the high resolution image from the simulated low resolution images. Reconstruction accuracy is compared using the mean squared error between the original high resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum a Posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
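A minimal sketch of step (ii), assuming the subpixel shifts are already known from step (i), scatters every LR sample onto its shifted position in HR coordinates and interpolates onto the regular HR grid; the function name, the use of scipy's griddata, and the linear interpolant are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def project_to_hr_grid(lr_images, shifts, factor=2, method='linear'):
    """Project registered LR frames onto an HR grid.

    lr_images : list of 2-D arrays of identical shape
    shifts    : list of (dy, dx) subpixel shifts w.r.t. the reference frame, LR pixels
    factor    : upsampling factor (four frames for factor 2, as in the abstract)
    """
    h, w = lr_images[0].shape
    ys, xs, vals = [], [], []
    for img, (dy, dx) in zip(lr_images, shifts):
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        ys.append(((yy + dy) * factor).ravel())   # sample position on the HR grid
        xs.append(((xx + dx) * factor).ravel())
        vals.append(img.ravel())
    points = np.column_stack([np.concatenate(ys), np.concatenate(xs)])
    gy, gx = np.mgrid[0:h * factor, 0:w * factor]
    # Interpolate the scattered samples onto the regular HR grid
    # (points outside the convex hull come back as NaN with 'linear').
    return griddata(points, np.concatenate(vals), (gy, gx), method=method)
```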
The urine dipstick test useful to rule out infections. A meta-analysis of the accuracy
Devillé, Walter LJM; Yzermans, Joris C; van Duijn, Nico P; Bezemer, P Dick; van der Windt, Daniëlle AWM; Bouter, Lex M
2004-01-01
Background Many studies have evaluated the accuracy of dipstick tests as rapid detectors of bacteriuria and urinary tract infections (UTI). The lack of an adequate explanation for the heterogeneity of the dipstick accuracy stimulates an ongoing debate. The objective of the present meta-analysis was to summarise the available evidence on the diagnostic accuracy of the urine dipstick test, taking into account various pre-defined potential sources of heterogeneity. Methods Literature from 1990 through 1999 was searched in Medline and Embase, and by reference tracking. Selected publications should be concerned with the diagnosis of bacteriuria or urinary tract infections, investigate the use of dipstick tests for nitrites and/or leukocyte esterase, and present empirical data. A checklist was used to assess methodological quality. Results 70 publications were included. Accuracy of nitrites was high in pregnant women (Diagnostic Odds Ratio = 165) and elderly people (DOR = 108). Positive predictive values were ≥80% in elderly and in family medicine. Accuracy of leukocyte-esterase was high in studies in urology patients (DOR = 276). Sensitivities were highest in family medicine (86%). Negative predictive values were high in both tests in all patient groups and settings, except for in family medicine. The combination of both test results showed an important increase in sensitivity. Accuracy was high in studies in urology patients (DOR = 52), in children (DOR = 46), and if clinical information was present (DOR = 28). Sensitivity was highest in studies carried out in family medicine (90%). Predictive values of combinations of positive test results were low in all other situations. Conclusions Overall, this review demonstrates that the urine dipstick test alone seems to be useful in all populations to exclude the presence of infection if the results of both nitrites and leukocyte-esterase are negative. Sensitivities of the combination of both tests vary between 68 and 88% in different patient groups, but positive test results have to be confirmed. Although the combination of positive test results is very sensitive in family practice, the usefulness of the dipstick test alone to rule in infection remains doubtful, even with high pre-test probabilities. PMID:15175113
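For reference, the diagnostic odds ratio (DOR) reported for each patient group combines sensitivity and specificity into a single summary figure; a minimal helper is:

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (sens / (1 - sens)) / ((1 - spec) / spec)."""
    return (sensitivity / (1.0 - sensitivity)) / ((1.0 - specificity) / specificity)
```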
USDA-ARS?s Scientific Manuscript database
Small reference populations limit the accuracy of genomic prediction in numerically small breeds, such as the Danish Jersey. The objective of this study was to investigate two approaches to improve genomic prediction by increasing the size of the reference population for Danish Jerseys. The first ap...
Inventory and analysis of rangeland resources of the state land block on Parker Mountain, Utah
NASA Technical Reports Server (NTRS)
Jaynes, R. A. (Principal Investigator)
1983-01-01
High altitude color infrared (CIR) photography was interpreted to provide a 1:24,000 overlay to U.S.G.S. topographic maps. The inventory and analysis of rangeland resources were augmented by the digital analysis of LANDSAT MSS data. Available geology, soils, and precipitation maps were used to sort out areas of confusion on the CIR photography. The map overlay from photo interpretation was also prepared with reference to print maps developed from LANDSAT MSS data. The resulting map overlay has a high degree of interpretive and spatial accuracy. An unacceptable level of confusion between the several sagebrush types in the MSS mapping was largely corrected by introducing ancillary data. Boundaries from geology, soils, and precipitation maps, as well as field observations, were digitized and pixel classes were adjusted according to the location of pixels with particular spectral signatures with respect to such boundaries. The resulting map, with six major cover classes, has an overall accuracy of 89%. Overall accuracy was 74% when these six classes were expanded to 20 classes.
NASA Astrophysics Data System (ADS)
Zayed, M. A.; El-Rasheedy, El-Gazy A.
2012-03-01
Two simple, sensitive, cheap and reliable spectrophotometric methods are suggested for the micro-determination of pseudoephedrine in its pure form and in a pharmaceutical preparation (Sinofree Tablets). The first depends on the reaction of the drug with a sensitive inorganic reagent, the molybdate anion, in aqueous media via an ion-pair formation mechanism. The second depends on the reaction of the drug with a π-acceptor reagent, DDQ, in non-aqueous media via formation of a charge transfer complex. These reactions were studied under various conditions and the optimum parameters were selected. Under the proper conditions the suggested procedures were successfully applied to the micro-determination of pseudoephedrine in pure form and in Sinofree Tablets without interference from excipients. The values of SD, RSD, recovery %, LOD, LOQ and Sandell sensitivity indicate the high accuracy and precision of the applied procedures. The results obtained were compared with data obtained by an official method, showing good agreement with the DDQ procedure results while indicating the higher accuracy of the molybdate data. Therefore, the suggested procedures are now successfully being applied in the routine analysis of this drug in its pharmaceutical formulation (Sinofree) at the Saudi Arabian Pharmaceutical Company (SPIMACO) in Boridah El-Qaseem, Saudi Arabia, instead of the imported kits previously used.
Klippenstein, Stephen J; Harding, Lawrence B; Ruscic, Branko
2017-09-07
The fidelity of combustion simulations is strongly dependent on the accuracy of the underlying thermochemical properties for the core combustion species that arise as intermediates and products in the chemical conversion of most fuels. High level theoretical evaluations are coupled with a wide-ranging implementation of the Active Thermochemical Tables (ATcT) approach to obtain well-validated high fidelity predictions for the 0 K heat of formation for a large set of core combustion species. In particular, high level ab initio electronic structure based predictions are obtained for a set of 348 C, N, O, and H containing species, which corresponds to essentially all core combustion species with 34 or fewer electrons. The theoretical analyses incorporate various high level corrections to base CCSD(T)/cc-pVnZ analyses (n = T or Q) using H 2 , CH 4 , H 2 O, and NH 3 as references. Corrections for the complete-basis-set limit, higher-order excitations, anharmonic zero-point energy, core-valence, relativistic, and diagonal Born-Oppenheimer effects are ordered in decreasing importance. Independent ATcT values are presented for a subset of 150 species. The accuracy of the theoretical predictions is explored through (i) examination of the magnitude of the various corrections, (ii) comparisons with other high level calculations, and (iii) through comparison with the ATcT values. The estimated 2σ uncertainties of the three methods devised here, ANL0, ANL0-F12, and ANL1, are in the range of ±1.0-1.5 kJ/mol for single-reference and moderately multireference species, for which the calculated higher order excitations are 5 kJ/mol or less. In addition to providing valuable references for combustion simulations, the subsequent inclusion of the current theoretical results into the ATcT thermochemical network is expected to significantly improve the thermochemical knowledge base for less-well studied species.
Phase shifting diffraction interferometer
Sommargren, Gary E.
1996-01-01
An interferometer which has the capability of measuring optical elements and systems with an accuracy of λ/1000, where λ is the wavelength of visible light. Whereas current interferometers employ a reference surface, which inherently limits the accuracy of the measurement to about λ/50, this interferometer uses an essentially perfect spherical reference wavefront generated by the fundamental process of diffraction. This interferometer is adjustable to give unity fringe visibility, which maximizes the signal-to-noise, and has the means to introduce a controlled prescribed relative phase shift between the reference wavefront and the wavefront from the optics under test, which permits analysis of the interference fringe pattern using standard phase extraction algorithms.
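The "standard phase extraction algorithms" referred to here include, for example, the classic four-step algorithm, in which the prescribed relative phase shift is stepped through 0°, 90°, 180° and 270°; a minimal sketch (not the patent's own processing chain) is shown below. The wrapped phase it returns still requires spatial unwrapping.

```python
import numpy as np

def four_step_phase(i0, i90, i180, i270):
    """Wrapped test-wavefront phase from four interferograms recorded with
    reference phase shifts of 0, 90, 180 and 270 degrees."""
    return np.arctan2(i270 - i90, i0 - i180)
```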
Faure, Elodie; Danjou, Aurélie M N; Clavel-Chapelon, Françoise; Boutron-Ruault, Marie-Christine; Dossus, Laure; Fervers, Béatrice
2017-02-24
Environmental exposure assessment based on Geographic Information Systems (GIS) and study participants' residential proximity to environmental exposure sources relies on the positional accuracy of subjects' residences to avoid misclassification bias. Our study compared the positional accuracy of two automatic geocoding methods to a manual reference method. We geocoded 4,247 address records representing the residential history (1990-2008) of 1,685 women from the French national E3N cohort living in the Rhône-Alpes region. We compared two automatic geocoding methods, a free-online geocoding service (method A) and an in-house geocoder (method B), to a reference layer created by manually relocating addresses from method A (method R). For each automatic geocoding method, positional accuracy levels were compared according to the urban/rural status of addresses and time-periods (1990-2000, 2001-2008), using Chi Square tests. Kappa statistics were performed to assess agreement of positional accuracy of both methods A and B with the reference method, overall, by time-periods and by urban/rural status of addresses. Respectively 81.4% and 84.4% of addresses were geocoded to the exact address (65.1% and 61.4%) or to the street segment (16.3% and 23.0%) with methods A and B. In the reference layer, geocoding accuracy was higher in urban areas compared to rural areas (74.4% vs. 10.5% addresses geocoded to the address or interpolated address level, p < 0.0001); no difference was observed according to the period of residence. Compared to the reference method, median positional errors were 0.0 m (IQR = 0.0-37.2 m) and 26.5 m (8.0-134.8 m), with positional errors <100 m for 82.5% and 71.3% of addresses, for method A and method B respectively. Positional agreement of method A and method B with method R was 'substantial' for both methods, with kappa coefficients of 0.60 and 0.61 for methods A and B, respectively. Our study demonstrates the feasibility of geocoding residential addresses in epidemiological studies not initially recorded for environmental exposure assessment, for both recent addresses and residence locations more than 20 years ago. Accuracy of the two automatic geocoding methods was comparable. The in-house method (B) allowed a better control of the geocoding process and was less time consuming.
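For reference, the kappa coefficients quoted above measure chance-corrected agreement between the accuracy categories assigned by an automatic geocoder and by the manual reference; a minimal implementation from a square confusion matrix is:

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix of category counts."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_observed = np.trace(confusion) / n
    p_expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return (p_observed - p_expected) / (1.0 - p_expected)
```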
Gerke, Oke; Poulsen, Mads H; Høilund-Carlsen, Poul Flemming
2015-01-01
Diagnostic studies of accuracy targeting sensitivity and specificity are commonly done in a paired design in which all modalities are applied in each patient, whereas cost-effectiveness and cost-utility analyses are usually assessed either directly alongside larger randomized controlled trials (RCTs) or indirectly by means of stochastic modeling based on them. However, the conduct of RCTs is hampered in an environment such as ours, in which technology is rapidly evolving. As such, there is a relatively limited number of RCTs. Therefore, we investigated to what extent paired diagnostic studies of accuracy can also be used to shed light on economic implications when considering a new diagnostic test. We propose a simple decision tree model-based cost-utility analysis of a diagnostic test when compared to the current standard procedure and exemplify this approach with published data from lymph node staging of prostate cancer. Average procedure costs were taken from the Danish Diagnosis Related Groups Tariff in 2013 and life expectancy was estimated for an ideal 60-year-old patient based on prostate cancer stage and prostatectomy or radiation and chemotherapy. Quality-adjusted life-years (QALYs) were deduced from the literature, and an incremental cost-effectiveness ratio (ICER) was used to compare lymph node dissection with respective histopathological examination (reference standard) and (18)F-fluoromethylcholine positron emission tomography/computed tomography (FCH-PET/CT). Lower bounds of sensitivity and specificity of FCH-PET/CT were established at which the replacement of the reference standard by FCH-PET/CT comes with a trade-off between worse effectiveness and lower costs. Compared to the reference standard in a diagnostic accuracy study, any imperfections in accuracy of a diagnostic test imply that replacing the reference standard generates a loss in effectiveness and utility. We conclude that diagnostic studies of accuracy can be put to more extensive use, over and above a mere indication of sensitivity and specificity of an imaging test, and that health economic considerations should be undertaken when planning a prospective diagnostic accuracy study. These endeavors will prove especially fruitful when comparing several imaging techniques with one another, or the same imaging technique using different tracers, with an independent reference standard for the evaluation of results.
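The decision-tree calculation behind such an analysis reduces to expected values over the tree's branches plus an incremental cost-effectiveness ratio; a minimal sketch with hypothetical names is shown below. When the new test is both less effective and cheaper, the denominator is negative and the ICER must be interpreted as a cost saving per QALY lost.

```python
def expected_value(branches):
    """Expected cost or utility of one decision-tree arm, given (probability, value) pairs."""
    return sum(p * v for p, v in branches)

def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio of the new test vs. the reference standard."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)
```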
Relative accuracy of the BD Logic and FreeStyle blood glucose meters.
2007-04-01
The BD Logic® (Becton, Dickinson and Co., Franklin Lakes, NJ) and FreeStyle® (Abbott Diabetes Care, Alameda, CA) meters are used to transmit data directly to insulin pumps for calculation of insulin doses and to calibrate continuous glucose sensors, as well as to monitor blood glucose levels. The accuracy of the two meters was evaluated in two inpatient studies conducted by the Diabetes Research in Children Network (DirecNet). In both studies, meter glucose measurements made with either venous or capillary blood were compared with reference glucose measurements made by the DirecNet Central Laboratory at the University of Minnesota using a hexokinase enzymatic method. The BD Logic tended to read lower than the laboratory reference regardless of whether venous (median difference = -9 mg/dL) or capillary blood (median difference = -7 mg/dL) was used. This resulted in lower accuracy of the BD Logic compared with the FreeStyle meter based on the median relative absolute difference (RAD) for both venous blood (median RAD, 9% vs. 5%, P < 0.001) and capillary blood (median RAD, 11% vs. 6%, P = 0.008). The greatest discrepancy in the performance of the two meters was at higher reference glucose values. Accuracy was not significantly different when the reference was ≤70 mg/dL. The BD Logic meter is less accurate than the FreeStyle meter.
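For reference, the median relative absolute difference (RAD) used above compares each meter reading to the laboratory value; a minimal implementation is:

```python
import numpy as np

def median_rad(meter, reference):
    """Median relative absolute difference (%) between meter and reference glucose values."""
    meter = np.asarray(meter, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.median(np.abs(meter - reference) / reference)
```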
NASA Astrophysics Data System (ADS)
Yao, C.; Zhang, Y.; Zhang, Y.; Liu, H.
2017-09-01
With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images is of significant value for agricultural management and estimation. Because the features and their surroundings are complex and fragmented at high resolution, the accuracy of traditional classification methods has not been able to meet the requirements of agricultural problems. This paper therefore proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNN). For training, a large number of training samples were produced from panchromatic images of China's GF-1 high-resolution satellite. In the experiment, by training and testing the CNN with the MATLAB deep learning toolbox, the crop classification finally achieved a correct rate of 99.66% after gradual optimization of the parameters during training. By improving the accuracy of image classification and recognition, the application of CNN provides a reference for the field of remote sensing in PA.
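The study used MATLAB's deep learning toolbox; the PyTorch sketch below only illustrates the general patch-classification setup implied by the abstract. The layer sizes, patch size, and class count are illustrative assumptions, not the network reported in the paper.

```python
import torch.nn as nn

class CropPatchCNN(nn.Module):
    """Minimal CNN classifying fixed-size panchromatic image patches into crop classes."""
    def __init__(self, n_classes=5, patch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (patch // 4) ** 2, n_classes)

    def forward(self, x):          # x: (batch, 1, patch, patch)
        return self.classifier(self.features(x).flatten(1))
```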
Estimation of genomic breeding values for residual feed intake in a multibreed cattle population.
Khansefid, M; Pryce, J E; Bolormaa, S; Miller, S P; Wang, Z; Li, C; Goddard, M E
2014-08-01
Residual feed intake (RFI) is a measure of the efficiency of animals in feed utilization. The accuracies of GEBV for RFI could be improved by increasing the size of the reference population. Combining RFI records of different breeds is a way to do that. The aims of this study were to 1) develop a method for calculating GEBV in a multibreed population and 2) improve the accuracies of GEBV by using SNP associated with RFI. An alternative method for calculating accuracies of GEBV using genomic BLUP (GBLUP) equations is also described and compared to cross-validation tests. The dataset included RFI records and 606,096 SNP genotypes for 5,614 Bos taurus animals including 842 Holstein heifers and 2,009 Australian and 2,763 Canadian beef cattle. A range of models were tested for combining genotype and phenotype information from different breeds and the best model included an overall effect of each SNP, an effect of each SNP specific to a breed, and a small residual polygenic effect defined by the pedigree. In this model, the Holsteins and some Angus cattle were combined into 1 "breed class" because they were the only cattle measured for RFI at an early age (6-9 mo of age) and were fed a similar diet. The average empirical accuracy (0.31), estimated by calculating the correlation between GEBV and actual phenotypes divided by the square root of estimated heritability in 5-fold cross-validation tests, was near to that expected using the GBLUP equations (0.34). The average empirical and expected accuracies were 0.30 and 0.31, respectively, when the GEBV were estimated for each breed separately. Therefore, the across-breed reference population increased the accuracy of GEBV slightly, although the gain was greater for breeds with smaller number of individuals in the reference population (0.08 in Murray Grey and 0.11 in Hereford for empirical accuracy). In a second approach, SNP that were significantly (P < 0.001) associated with RFI in the beef cattle genomewide association studies were used to create an auxiliary genomic relationship matrix for estimating GEBV in Holstein heifers. The empirical (and expected) accuracy of GEBV within Holsteins increased from 0.33 (0.35) to 0.39 (0.36) and improved even more to 0.43 (0.50) when using a multibreed reference population. Therefore, a multibreed reference population is a useful resource to find SNP with a greater than average association with RFI in 1 breed and use them to estimate GEBV in another breed.
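The multibreed model with breed-specific SNP effects is more elaborate than can be reproduced here, but the GBLUP backbone it builds on can be sketched in a few lines: a VanRaden-style genomic relationship matrix followed by a ridge-type mixed-model solve. The variable names and the single-trait, single-breed simplification are assumptions of this sketch.

```python
import numpy as np

def gblup_gebv(genotypes, y, h2):
    """GEBV from a simplified single-trait GBLUP.

    genotypes : (n_animals, n_snps) array of 0/1/2 allele counts
    y         : phenotypes (here, RFI records)
    h2        : assumed heritability, giving lambda = (1 - h2) / h2
    """
    p = genotypes.mean(axis=0) / 2.0                   # allele frequencies
    z = genotypes - 2.0 * p                            # centred genotype codes
    g = z @ z.T / (2.0 * np.sum(p * (1.0 - p)))        # genomic relationship matrix
    lam = (1.0 - h2) / h2
    y_c = y - y.mean()
    return g @ np.linalg.solve(g + lam * np.eye(len(y)), y_c)
```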
NASA Technical Reports Server (NTRS)
Huang, Dong; Yang, Wenze; Tan, Bin; Rautiainen, Miina; Zhang, Ping; Hu, Jiannan; Shabanov, Nikolay V.; Linder, Sune; Knyazikhin, Yuri; Myneni, Ranga B.
2006-01-01
The validation of moderate-resolution satellite leaf area index (LAI) products such as those operationally generated from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor data requires reference LAI maps developed from field LAI measurements and fine-resolution satellite data. Errors in field measurements and satellite data determine the accuracy of the reference LAI maps. This paper describes a method by which reference maps of known accuracy can be generated with knowledge of errors in fine-resolution satellite data. The method is demonstrated with data from an international field campaign in a boreal coniferous forest in northern Sweden, and Enhanced Thematic Mapper Plus images. The reference LAI map thus generated is used to assess modifications to the MODIS LAI/fPAR algorithm recently implemented to derive the next generation of the MODIS LAI/fPAR product for this important biome type.
Physical examination tests for the diagnosis of femoroacetabular impingement. A systematic review.
Pacheco-Carrillo, Aitana; Medina-Porqueres, Ivan
2016-09-01
Numerous clinical tests have been proposed to diagnose FAI, but little is known about their diagnostic accuracy. To summarize and evaluate research on the accuracy of physical examination tests for the diagnosis of FAI, a search of the PubMed, SPORTDiscus and CINAHL databases was performed. Studies were considered eligible if they compared the results of physical examination tests to those of a reference standard. Methodological quality and internal validity assessment was performed by two independent reviewers using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. The systematic search strategy revealed 298 potential articles, five of which met the inclusion criteria. After assessment using the QUADAS score, four of the five articles were of high quality. Clinical tests included were the Impingement sign, IROP test (Internal Rotation Over Pressure), FABER test (Flexion-Abduction-External Rotation), Stinchfield/RSRL (Resisted Straight Leg Raise) test, Scour test, Maximal squat test, and the Anterior Impingement test. The IROP test, impingement sign, and FABER test showed the most sensitive values for identifying FAI. The diagnostic accuracy of physical examination tests to assess FAI is limited due to their heterogeneity. There is a strong need for sound research of high methodological quality in this area. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Mazurova, Elena; Mikhaylov, Aleksandr
2013-04-01
A selenocentric network of objects defining the coordinate system on the Moon, with its origin at the centre of mass and axes directed along the principal axes of inertia, can become a basic element of the coordinate-time support for lunar navigation using cartographic materials and control objects. The large volume of highly precise, multiparameter information obtained by modern spacecraft allows Lunar Reference Frames (LRF) of substantially different accuracy to be established. A special role here is played by the results of scanning the lunar surface by the American Lunar Reconnaissance Orbiter (LRO) mission. The coordinates of points calculated from laser scanning alone are sufficiently accurate relative to one another, but the real accuracy of the spatial tie can be checked, and the coordinates improved, only with a network of points whose coordinates are computed both from laser scanning and from other methods, for example terrestrial laser ranging or space photogrammetry. The paper presents an algorithm for transforming between selenocentric coordinate systems and an estimate of the accuracy of changing from one lunar coordinate system to another. Keywords: selenocentric coordinate system, coordinate-time support.
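The abstract does not give the transformation model; a common choice for converting Cartesian coordinates between two lunar reference frames is a seven-parameter (Helmert) similarity transformation, sketched below as a generic illustration rather than the authors' algorithm. Sign conventions for the small rotation angles vary between formulations.

```python
import numpy as np

def helmert_transform(xyz, tx, ty, tz, rx, ry, rz, scale_ppm):
    """Apply a 7-parameter similarity transformation to Cartesian coordinates.

    xyz: (N, 3) array of points in the source frame (metres).
    tx, ty, tz: translations (metres); rx, ry, rz: small rotations (radians);
    scale_ppm: scale difference in parts per million.
    """
    s = 1.0 + scale_ppm * 1e-6
    # First-order small-angle rotation matrix (one common convention;
    # the signs of the off-diagonal terms differ between standards).
    R = np.array([[1.0,  rz, -ry],
                  [-rz, 1.0,  rx],
                  [ ry, -rx, 1.0]])
    t = np.array([tx, ty, tz])
    return s * np.asarray(xyz) @ R.T + t
```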
Astrometry for New Reductions: The ANR method
NASA Astrophysics Data System (ADS)
Robert, Vincent; Le Poncin-Lafitte, Christophe
2018-04-01
Accurate positional measurements of planets and satellites are used to improve our knowledge of their orbits and dynamics, and to infer the accuracy of the planet and satellite ephemerides. With the arrival of the Gaia-DR1 reference star catalog, and its complete release to follow, the formal accuracy of existing ground-based astrometric methods has become the limiting factor rather than that of the catalog in use. Systematic and zonal errors of the reference stars are eliminated, and the astrometric process now dominates the error budget. We present a set of algorithms for computing the apparent directions of planets, satellites and stars on any date to micro-arcsecond precision. The expressions are consistent with the ICRS reference system and define the transformation between theoretical reference data and ground-based astrometric observables.
Chaswal, Vibha; Weldon, Michael; Gupta, Nilendu; Chakravarti, Arnab; Rong, Yi
2014-07-08
We present commissioning and a comprehensive evaluation of ArcCHECK as QA equipment for volumetric-modulated arc therapy (VMAT), using the 6 MV photon beam with and without the flattening filter, and the SNC Patient software (version 6.2). In addition to commissioning involving absolute dose calibration, array calibration, and PMMA density verification, ArcCHECK was evaluated for its response dependency on linac dose rate, instantaneous dose rate, radiation field size, beam angle, and couch insertion. Scatter dose characterization, consistency and symmetry of response, and dosimetry accuracy evaluation for fixed-aperture arcs and clinical VMAT patient plans were also investigated. All the evaluation tests were performed with the central plug inserted and the homogeneous PMMA density value. Results of gamma analysis demonstrated an overall agreement between ArcCHECK-measured and TPS-calculated reference doses. The diode-based field size dependency was found to be within 0.5% of the reference. The dose rate-based dependency was well within 1% of the TPS reference, and the angular dependency was found to be within ± 3% of the reference, as tested for BEV angles, for both beams. Dosimetry of fixed arcs, using both narrow and wide field widths, resulted in clinically acceptable global gamma passing rates at the 3%/3 mm level with a 10% threshold. Dosimetry of narrow arcs showed an improvement over published literature. The clinical VMAT cases demonstrated a high level of dosimetry accuracy in gamma passing rates.
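The gamma analysis mentioned above can be illustrated with a brute-force global gamma computation; the sketch below is a generic 2D implementation of the 3%/3 mm criterion with a 10% low-dose threshold, not the SNC Patient software's algorithm, and it assumes the reference and evaluated doses are sampled on the same grid.

```python
import numpy as np

def gamma_passing_rate(dose_ref, dose_eval, spacing_mm,
                       dose_crit=0.03, dta_mm=3.0, threshold=0.10):
    """Brute-force global 2D gamma analysis (e.g. 3%/3 mm, 10% threshold).

    dose_ref, dose_eval: 2D arrays on the same grid; spacing_mm: pixel size.
    Returns the fraction of above-threshold reference points with gamma <= 1.
    """
    ny, nx = dose_ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    coords = np.stack([yy.ravel(), xx.ravel()], axis=1) * spacing_mm
    d_eval = dose_eval.ravel()

    dose_tol = dose_crit * dose_ref.max()   # global dose normalization
    cutoff = threshold * dose_ref.max()     # low-dose threshold

    passed, counted = 0, 0
    for (y, x), d_r in zip(coords, dose_ref.ravel()):
        if d_r < cutoff:
            continue
        counted += 1
        dist2 = ((coords[:, 0] - y) ** 2 + (coords[:, 1] - x) ** 2) / dta_mm ** 2
        ddose2 = (d_eval - d_r) ** 2 / dose_tol ** 2
        gamma = np.sqrt(np.min(dist2 + ddose2))
        if gamma <= 1.0:
            passed += 1
    return passed / counted if counted else float("nan")
```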
Shelton, Joseph H; Santa Ana, Carol A; Thompson, Donald R; Emmett, Michael; Fordtran, John S
2007-01-01
Surreptitious ingestion of laxatives can lead to serious factitious diseases that are difficult to diagnose. Most cases involve ingestion of bisacodyl or senna. Thin layer chromatography (TLC) of urine or stool is the only commercially available test for these laxatives. Such testing is considered highly reliable, but its accuracy in clinical practice is unknown. Our aim was to evaluate the reliability of TLC laxative testing by a clinical reference laboratory in the United States. Diarrhea was induced in healthy volunteers by ingestion of bisacodyl, senna, or a control laxative (n = 11 for each laxative group). Samples of urine and diarrheal stool were sent in blinded fashion to the clinical reference laboratory for bisacodyl and senna analysis. TLC testing for bisacodyl-induced diarrhea revealed a sensitivity of 73% and specificity of 91% when urine was tested and sensitivity and specificity of 91% and 96%, respectively, when stool was analyzed. When diarrhea was induced by senna, the TLC assay for senna failed to identify even a single urine or stool specimen as positive (0% sensitivity). Considering the expected prevalence of surreptitious laxative abuse in patients with chronic idiopathic diarrhea (2.4%-25%, depending on the clinical setting), TLC of urine or stool for bisacodyl by this reference laboratory would often produce misleading results, and testing for senna would have no clinical value. The major problems are false-positive tests for bisacodyl and false-negative tests for senna.
Comparison of tablet-based strategies for incision planning in laser microsurgery
NASA Astrophysics Data System (ADS)
Schoob, Andreas; Lekon, Stefan; Kundrat, Dennis; Kahrs, Lüder A.; Mattos, Leonardo S.; Ortmaier, Tobias
2015-03-01
Recent research has revealed that incision planning in laser surgery using a stylus and tablet outperforms state-of-the-art micro-manipulator-based laser control. To provide more detailed quantitation of that approach, a comparative study of six tablet-based strategies for laser path planning is presented. The reference strategy is defined by monoscopic visualization and continuous path drawing on a graphics tablet. Further concepts, deploying a stereoscopic or a synthesized laser view, point-based path definition, real-time teleoperation or a pen display, are compared with the reference scenario. Volunteers were asked to redraw and ablate stamped lines on a sample. Performance is assessed by measuring planning accuracy, completion time and ease of use. Results demonstrate that significant differences exist between the proposed concepts. The reference strategy provides more accurate incision planning than the stereo or laser view scenario. Real-time teleoperation performs best with respect to completion time without showing any significant deviation in accuracy and usability. Point-based planning as well as the pen display provide the most accurate planning and increased ease of use compared to the reference strategy. As a result, combining the pen display approach with point-based planning has the potential to become a powerful strategy, because it benefits from improved hand-eye coordination on the one hand and from a simple but accurate technique for path definition on the other. These findings, as well as the overall usability scale indicating high acceptance and consistency of the proposed strategies, motivate further advanced tablet-based planning in laser microsurgery.
Devonshire, Alison S; O'Sullivan, Denise M; Honeyborne, Isobella; Jones, Gerwyn; Karczmarczyk, Maria; Pavšič, Jernej; Gutteridge, Alice; Milavec, Mojca; Mendoza, Pablo; Schimmel, Heinz; Van Heuverswyn, Fran; Gorton, Rebecca; Cirillo, Daniela Maria; Borroni, Emanuele; Harris, Kathryn; Barnard, Marinus; Heydenrych, Anthenette; Ndusilo, Norah; Wallis, Carole L; Pillay, Keshree; Barry, Thomas; Reddington, Kate; Richter, Elvira; Mozioğlu, Erkan; Akyürek, Sema; Yalçınkaya, Burhanettin; Akgoz, Muslum; Žel, Jana; Foy, Carole A; McHugh, Timothy D; Huggett, Jim F
2016-08-03
Real-time PCR (qPCR) based methods, such as the Xpert MTB/RIF, are increasingly being used to diagnose tuberculosis (TB). While qualitative methods are adequate for diagnosis, the therapeutic monitoring of TB patients requires quantitative methods currently performed using smear microscopy. The potential use of quantitative molecular measurements for therapeutic monitoring has been investigated but findings have been variable and inconclusive. The lack of an adequate reference method and reference materials is a barrier to understanding the source of such disagreement. Digital PCR (dPCR) offers the potential for an accurate method for quantification of specific DNA sequences in reference materials which can be used to evaluate quantitative molecular methods for TB treatment monitoring. To assess a novel approach for the development of quality assurance materials we used dPCR to quantify specific DNA sequences in a range of prototype reference materials and evaluated accuracy between different laboratories and instruments. The materials were then also used to evaluate the quantitative performance of qPCR and Xpert MTB/RIF in eight clinical testing laboratories. dPCR was found to provide results in good agreement with the other methods tested and to be highly reproducible between laboratories without calibration even when using different instruments. When the reference materials were analysed with qPCR and Xpert MTB/RIF by clinical laboratories, all laboratories were able to correctly rank the reference materials according to concentration, however there was a marked difference in the measured magnitude. TB is a disease where the quantification of the pathogen could lead to better patient management and qPCR methods offer the potential to rapidly perform such analysis. However, our findings suggest that when precisely characterised materials are used to evaluate qPCR methods, the measurement result variation is too high to determine whether molecular quantification of Mycobacterium tuberculosis would provide a clinically useful readout. The methods described in this study provide a means by which the technical performance of quantitative molecular methods can be evaluated independently of clinical variability to improve accuracy of measurement results. These will assist in ultimately increasing the likelihood that such approaches could be used to improve patient management of TB.
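Digital PCR quantification generally relies on Poisson statistics over partition counts; the sketch below illustrates that standard correction and is not necessarily the exact computation used in the study (the partition volume and counts are illustrative).

```python
import math

def dpcr_concentration(n_positive, n_total, partition_volume_ul):
    """Estimate target concentration (copies/uL) from digital PCR partition counts.

    Uses the standard Poisson correction: the mean number of copies per
    partition is lambda = -ln(fraction of negative partitions).
    """
    n_negative = n_total - n_positive
    if n_negative == 0:
        raise ValueError("All partitions positive; concentration cannot be estimated.")
    lam = -math.log(n_negative / n_total)
    return lam / partition_volume_ul

# Illustrative example: 12,000 of 20,000 partitions positive, 0.85 nL each.
print(round(dpcr_concentration(12000, 20000, 0.00085), 1), "copies/uL")
```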
Seismic Motion Stability, Measurement and Precision Control.
1979-12-01
tiltmeter. Tilt was corrected by changing air pressure in one bank of isolators to maintain the reference tiltmeter at null well within the 0.1 arcsecond...frequency rotations (0-0.1 Hz), a high quality, two-axis tiltmeter is used. The azimuth orientation angle could be measured with a four-position gyro...compassing system with considerably less accuracy than the tiltmeters. However, it would provide a continuous automatic azimuth determination update every
2009-07-28
further referred to as normative models of causation. A second type of model, which is based on Pavlovian classical conditioning, is associative... conditions of high cognitive load), the likelihood of the accuracy of the perception is compromised. If an inaccurate perception translates to an inaccurate...correlation and causation detection in specific military operations and under conditions of operational stress. Background Models of correlation
NASA Astrophysics Data System (ADS)
Jawak, Shridhar D.; Jadhav, Ajay; Luis, Alvarinho J.
2016-05-01
Supraglacial debris was mapped in the Schirmacher Oasis, east Antarctica, by using WorldView-2 (WV-2) high resolution optical remote sensing data consisting of 8-band calibrated Gram Schmidt (GS)-sharpened and atmospherically corrected WV-2 imagery. This study is a preliminary attempt to develop an object-oriented rule set to extract supraglacial debris for the Antarctic region using 8-spectral-band imagery. Supraglacial debris was manually digitized from the satellite imagery to generate the ground reference data. Several trials were performed using a few existing traditional pixel-based classification techniques and color-texture based object-oriented classification methods to extract supraglacial debris over a small domain of the study area. Multi-level segmentation and attributes such as scale, shape, size, and compactness, along with spectral information from the data, were used for developing the rule set. The quantitative analysis of error was carried out against the manually digitized reference data to test the practicability of our approach over the traditional pixel-based methods. Our results indicate that the OBIA-based approach (overall accuracy: 93%) for extracting supraglacial debris performed better than all the traditional pixel-based methods (overall accuracy: 80-85%). The present attempt provides a comprehensive improved method for semiautomatic feature extraction in the supraglacial environment and a new direction in cryospheric research.
Technique for Radiometer and Antenna Array Calibration - TRAAC
NASA Technical Reports Server (NTRS)
Meyer, Paul; Sims, William; Varnavas, Kosta; McCracken, Jeff; Srinivasan, Karthik; Limaye, Ashutosh; Laymon, Charles; Richeson, James
2012-01-01
Highly sensitive receivers are used to detect minute amounts of emitted electromagnetic energy. Calibration of these receivers is vital to the accuracy of the measurements. Traditional calibration techniques depend on a calibration reference internal to the receiver for calibrating the observed electromagnetic energy. Such methods can only correct measurement errors introduced by the receiver itself. The disadvantage of these existing methods is that they cannot account for errors introduced by devices, such as antennas, used for capturing electromagnetic radiation. This severely limits the types of antennas that can be used to make measurements with a high degree of accuracy. Complex antenna systems, such as electronically steerable antennas (also known as phased arrays), while offering potentially significant advantages, suffer from a lack of a reliable and accurate calibration technique. The proximity of antenna elements in an array results in interaction between the electromagnetic fields radiated (or received) by the individual elements. This phenomenon is called mutual coupling. The new calibration method uses a known noise source as a calibration load to determine the instantaneous characteristics of the antenna. The noise source is emitted from one element of the antenna array and received by all the other elements due to mutual coupling. This received noise is used as a calibration standard to monitor the stability of the antenna electronics.
Dsm Based Orientation of Large Stereo Satellite Image Blocks
NASA Astrophysics Data System (ADS)
d'Angelo, P.; Reinartz, P.
2012-07-01
High resolution stereo satellite imagery is well suited for the creation of digital surface models (DSM). A system for highly automated and operational DSM and orthoimage generation based on CARTOSAT-1 imagery is presented, with emphasis on fully automated georeferencing. The proposed system processes level-1 stereo scenes using the rational polynomial coefficients (RPC) universal sensor model. The RPC are derived from orbit and attitude information and have a much lower accuracy than the ground resolution of approximately 2.5 m. In order to use the images for orthorectification or DSM generation, an affine RPC correction is required. In this paper, GCP are automatically derived from lower resolution reference datasets (Landsat ETM+ Geocover and SRTM DSM). The traditional method of collecting the lateral position from a reference image and interpolating the corresponding height from the DEM ignores the higher lateral accuracy of the SRTM dataset. Our method avoids this drawback by using an RPC correction based on DSM alignment, resulting in improved geolocation of both DSM and ortho images. A scene-based method and a bundle block adjustment-based correction are developed and evaluated for a test site covering the northern part of Italy, for which 405 Cartosat-1 stereo pairs are available. Both methods are tested against independent ground truth. Checks against this ground truth indicate a lateral error of 10 meters.
Research on a high-precision calibration method for tunable lasers
NASA Astrophysics Data System (ADS)
Xiang, Na; Li, Zhengying; Gui, Xin; Wang, Fan; Hou, Yarong; Wang, Honghai
2018-03-01
Tunable lasers are widely used in the field of optical fiber sensing, but nonlinear tuning exists even for zero external disturbance and limits the accuracy of the demodulation. In this paper, a high-precision calibration method for tunable lasers is proposed. A comb filter is introduced, and the real-time output wavelength and scanning rate of the laser are calibrated by linearly fitting several time-frequency reference points obtained from it. The beat signal generated by the auxiliary interferometer is interpolated and frequency-multiplied to find more accurate zero-crossing points; these points are used as wavelength counters to resample the comb signal and correct the nonlinear effect, which ensures that the time-frequency reference points of the comb filter are linear. A stability experiment and a strain sensing experiment verify the calibration precision of this method. The experimental result shows that the stability and wavelength resolution of the FBG demodulation can reach 0.088 pm and 0.030 pm, respectively, using a tunable laser calibrated by the proposed method. We have also compared the demodulation accuracy in the presence or absence of the comb filter, with the result showing that the introduction of the comb filter results in a 15-fold wavelength resolution enhancement.
Accuracy and coverage of the modernized Polish Maritime differential GPS system
NASA Astrophysics Data System (ADS)
Specht, Cezary
2011-01-01
The DGPS navigation service augments the NAVSTAR Global Positioning System by providing localized pseudorange correction factors and ancillary information which are broadcast from selected marine reference stations. The DGPS service position and integrity information satisfy requirements in coastal navigation and hydrographic surveys. The Polish Maritime DGPS system was established in 1994 and modernized in 2009 to meet the requirements set out in the IMO resolution for a future GNSS, while preserving backward signal compatibility of user equipment. After installation of the new L1/L2 reference equipment, performance tests were carried out. The paper presents results of a coverage-modeling and accuracy-measurement campaign based on long-term signal analyses of the DGPS reference station Rozewie, performed over 26 days in July 2009. The final results made it possible to verify the coverage area of the differential signal from the reference station and to calculate the repeatable and absolute accuracy of the system after the technical modernization. The obtained field strength coverage and position statistics (215,000 fixes) were compared with past measurements performed in 2002 (coverage) and 2005 (accuracy), when the previous system infrastructure was in operation. So far, no campaigns have been performed on differential Galileo. However, its signals, signal processing and receiver techniques are comparable to those known from DGPS, and because all satellite differential GNSS systems use the same transmission standard (RTCM) and maritime DGPS radiobeacons are standardized in all radio communication aspects (frequency, binary rate, modulation), the accuracy of differential Galileo can be expected to be similar to that of DGPS. The coverage of the reference station was calculated with dedicated software, which computes the signal strength level from transmitter parameters or from a field signal strength measurement campaign carried out at representative points. The software is based on a Baltic Sea vector map, ground electrical parameters and a model of the atmospheric noise level in the transmission band.
Yokoo, Takeshi; Bydder, Mark; Hamilton, Gavin; Middleton, Michael S.; Gamst, Anthony C.; Wolfson, Tanya; Hassanein, Tarek; Patton, Heather M.; Lavine, Joel E.; Schwimmer, Jeffrey B.; Sirlin, Claude B.
2009-01-01
Purpose: To assess the accuracy of four fat quantification methods at low-flip-angle multiecho gradient-recalled-echo (GRE) magnetic resonance (MR) imaging in nonalcoholic fatty liver disease (NAFLD) by using MR spectroscopy as the reference standard. Materials and Methods: In this institutional review board–approved, HIPAA-compliant prospective study, 110 subjects (29 with biopsy-confirmed NAFLD, 50 overweight and at risk for NAFLD, and 31 healthy volunteers) (mean age, 32.6 years ± 15.6 [standard deviation]; range, 8–66 years) gave informed consent and underwent MR spectroscopy and GRE MR imaging of the liver. Spectroscopy involved a long repetition time (to suppress T1 effects) and multiple echo times (to estimate T2 effects); the reference fat fraction (FF) was calculated from T2-corrected fat and water spectral peak areas. Imaging involved a low flip angle (to suppress T1 effects) and multiple echo times (to estimate T2* effects); imaging FF was calculated by using four analysis methods of progressive complexity: dual echo, triple echo, multiecho, and multiinterference. All methods except dual echo corrected for T2* effects. The multiinterference method corrected for multiple spectral interference effects of fat. For each method, the accuracy for diagnosis of fatty liver, as defined with a spectroscopic threshold, was assessed by estimating sensitivity and specificity; fat-grading accuracy was assessed by comparing imaging and spectroscopic FF values by using linear regression. Results: Dual-echo, triple-echo, multiecho, and multiinterference methods had a sensitivity of 0.817, 0.967, 0.950, and 0.983 and a specificity of 1.000, 0.880, 1.000, and 0.880, respectively. On the basis of regression slope and intercept, the multiinterference (slope, 0.98; intercept, 0.91%) method had high fat-grading accuracy without statistically significant error (P > .05). Dual-echo (slope, 0.98; intercept, −2.90%), triple-echo (slope, 0.94; intercept, 1.42%), and multiecho (slope, 0.85; intercept, −0.15%) methods had statistically significant error (P < .05). Conclusion: Relaxation- and interference-corrected fat quantification at low-flip-angle multiecho GRE MR imaging provides high diagnostic and fat-grading accuracy in NAFLD. © RSNA, 2009 PMID:19221054
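For orientation, the simplest of the four methods (dual echo) can be written down directly from in-phase and opposed-phase magnitude signals; the sketch below assumes water-dominant voxels and ignores the T2* and spectral-interference corrections that the more complex methods evaluated in the study add.

```python
import numpy as np

def dual_echo_fat_fraction(s_in_phase, s_opposed_phase):
    """Signal fat fraction (percent) from dual-echo magnitude images.

    Assumes water-dominant voxels (fat fraction <= 50%) and neglects T2*
    decay between echoes, which is exactly the limitation the triple-echo,
    multiecho and multiinterference methods are designed to address.
    """
    s_ip = np.asarray(s_in_phase, dtype=float)
    s_op = np.asarray(s_opposed_phase, dtype=float)
    fat = (s_ip - s_op) / 2.0
    water = (s_ip + s_op) / 2.0
    with np.errstate(divide="ignore", invalid="ignore"):
        ff = np.where(s_ip > 0, fat / (fat + water), 0.0)
    return 100.0 * ff
```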
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; Hair, Jason; McAndrew, Brendan; Daw, Adrian; Jennings, Donald; Rabin, Douglas
2012-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change. One of the major objectives of CLARREO is to advance the accuracy of SI traceable absolute calibration at infrared and reflected solar wavelengths. This advance is required to reach the on-orbit absolute accuracy needed to allow climate change observations to survive data gaps while remaining sufficiently accurate to observe climate change to within the uncertainty of the limit of natural variability. While these capabilities exist at NIST in the laboratory, there is a need to demonstrate that they can be transferred successfully from NIST to NASA and/or instrument vendors for future spaceborne instruments. The current work describes the test plan for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The end result of efforts with the SOLARIS CDS will be an SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections.
Kaynar, Ozgur; Karapinar, Tolga; Hayirli, Armagan; Baydar, Ersoy
2015-12-01
Data on accuracy and precision of the Lactate Scout point-of-care (POC) analyzer in ovine medicine are lacking. The purpose of the study was to assess the reliability of the Lactate Scout in sheep. Fifty-seven sheep at varying ages with various diseases were included. Blood lactate concentration in samples collected from the jugular vein was measured immediately on the Lactate Scout. Plasma L-lactate concentration was measured by the Cobas autoanalyzer as the reference method. Data were subjected to Student's t-test, Passing-Bablok regression, and Bland-Altman plot analyses for comparison and assessment of accuracy, precision, and reliability. Plasma l-lactate concentration was consistently lower than blood L-lactate concentration (3.06 ± 0.24 vs 3.3 ± 0.3 mmol/L, P < .0001). There was a positive correlation between plasma and blood L-lactate concentrations (r = .98, P < .0001). The Lactate Scout had 99% accuracy and 98% precision with the reference method. Blood (Y) and plasma (X) L-lactate concentrations were fitted to Y = 0.28 + 1.00 · X, with a residual standard deviation of 0.31 and a negligible deviation from the identity line (P = .93). The bias was fitted to Y = 0.10 + 0.05 · X, with Sy.x of 0.44 (P < .07). The Lactate Scout has high accuracy and precision, with a negligible bias. It is a reliable POC analyzer to assess L-lactate concentration in ovine medicine. © 2015 American Society for Veterinary Clinical Pathology.
NASA Astrophysics Data System (ADS)
Bratic, G.; Brovelli, M. A.; Molinari, M. E.
2018-04-01
The availability of thematic maps has significantly increased over the last few years. Validation of these maps is a key factor in assessing their suitability for different applications. The evaluation of the accuracy of classified data is carried out through a comparison with a reference dataset and the generation of a confusion matrix from which many quality indexes can be derived. In this work, an ad hoc free and open source Python tool was implemented to automatically compute all the confusion matrix-derived accuracy indexes proposed in the literature. The tool was integrated into the GRASS GIS environment and successfully applied to evaluate the quality of three high-resolution global datasets (GlobeLand30, Global Urban Footprint, Global Human Settlement Layer Built-Up Grid) in the Lombardy Region area (Italy). In addition to the most commonly used accuracy measures, e.g. overall accuracy and Kappa, the tool allowed computation and investigation of less known indexes such as the Ground Truth and the Classification Success Index. The promising tool will be further extended with spatial autocorrelation analysis functions and made available to the researcher and user community.
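A minimal version of such a confusion-matrix computation can be sketched in a few lines of Python; the matrix orientation assumed below (rows = map classes, columns = reference classes) is an illustrative convention, not the tool's actual interface.

```python
import numpy as np

def accuracy_indexes(confusion):
    """Compute common accuracy indexes from a square confusion matrix.

    Rows are assumed to be classified (map) classes and columns reference
    classes; conventions differ, so check against your own matrix layout.
    """
    cm = np.asarray(confusion, dtype=float)
    total = cm.sum()
    diag = np.diag(cm)
    overall = diag.sum() / total
    users = diag / cm.sum(axis=1)        # commission side (per map class)
    producers = diag / cm.sum(axis=0)    # omission side (per reference class)
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2
    kappa = (overall - expected) / (1.0 - expected)
    return {"overall": overall, "users": users, "producers": producers, "kappa": kappa}

# Example with a 3-class confusion matrix.
print(accuracy_indexes([[50, 3, 2], [5, 40, 5], [2, 4, 60]]))
```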
Diagnostic Accuracy of the Neck Tornado Test as a New Screening Test in Cervical Radiculopathy.
Park, Juyeon; Park, Woo Young; Hong, Seungbae; An, Jiwon; Koh, Jae Chul; Lee, Youn-Woo; Kim, Yong Chan; Choi, Jong Bum
2017-01-01
The Spurling test, although a highly specific provocative test of the cervical spine in cervical radiculopathy (CR), has low to moderate sensitivity. Thus, we introduced the neck tornado test (NTT) to examine the neck and the cervical spine in CR. The aim of this study was to introduce a new provocative test, the NTT, and compare the diagnostic accuracy with a widely accepted provocative test, the Spurling test. Retrospective study. Medical records of 135 subjects with neck pain (CR, n = 67; without CR, n = 68) who had undergone cervical spine magnetic resonance imaging and been referred to the pain clinic between September 2014 and August 2015 were reviewed. Both the Spurling test and NTT were performed in all patients by expert examiners. Sensitivity, specificity, and accuracy were compared for both the Spurling test and the NTT. The sensitivity of the Spurling test and the NTT was 55.22% and 85.07% (P < 0.0001); specificity, 98.53% and 86.76% (P = 0.0026); accuracy, 77.04% and 85.93% (P = 0.0423), respectively. The NTT is more sensitive with superior diagnostic accuracy for CR diagnosed by magnetic resonance imaging than the Spurling test.
MTO-like reference mask modeling for advanced inverse lithography technology patterns
NASA Astrophysics Data System (ADS)
Park, Jongju; Moon, Jongin; Son, Suein; Chung, Donghoon; Kim, Byung-Gook; Jeon, Chan-Uk; LoPresti, Patrick; Xue, Shan; Wang, Sonny; Broadbent, Bill; Kim, Soonho; Hur, Jiuk; Choo, Min
2017-07-01
Advanced Inverse Lithography Technology (ILT) can result in mask post-OPC databases with very small address units, all-angle figures, and very high vertex counts. This creates mask inspection issues for existing mask inspection database rendering. These issues include: large data volumes, low transfer rate, long data preparation times, slow inspection throughput, and marginal rendering accuracy leading to high false detections. This paper demonstrates the application of a new rendering method including a new OASIS-like mask inspection format, new high-speed rendering algorithms, and related hardware to meet the inspection challenges posed by Advanced ILT masks.
Matrices pattern using FIB; 'Out-of-the-box' way of thinking.
Fleger, Y; Gotlib-Vainshtein, K; Talyosef, Y
2017-03-01
Focused ion beam (FIB) is an extremely valuable tool in nanopatterning and nanofabrication for potentially high-resolution patterning, especially in the case of He ion beam microscopy. The work presented here demonstrates an 'out-of-the-box' method of writing using FIB, which enables creating very large matrices, up to the beam-shift limitation, in short times and with an accuracy unachievable by any other writing technique. The new method allows different shapes to be combined at nanometric dimensions and high resolution over wide ranges. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
NASA Astrophysics Data System (ADS)
Salleh, M. R. M.; Ismail, Z.; Rahman, M. Z. A.
2015-10-01
Airborne Light Detection and Ranging (LiDAR) technology has been widely used in recent years, especially for generating high-accuracy Digital Terrain Models (DTM). High density and good quality of airborne LiDAR data promise a high quality DTM. This study focuses on analysing the error associated with the density of vegetation cover (canopy cover) and terrain slope in a LiDAR-derived DTM in a tropical forest environment in Bentong, State of Pahang, Malaysia. The airborne LiDAR data, captured by a Reigl system mounted on an aircraft, can be considered low density. The ground filtering procedure used the adaptive triangulated irregular network (ATIN) algorithm to produce ground points. Next, ground control points (GCPs) were used to generate the reference DTM, this DTM was used for slope classification, and the non-ground point clouds were then used to determine the relative percentage of canopy cover. The results show that terrain slope is highly correlated (0.993 and 0.870) with the RMSE of the LiDAR-derived DTM for both study areas. This is similar to canopy cover, for which high correlation values (0.989 and 0.924) were obtained. This indicates that the accuracy of the airborne LiDAR-derived DTM is significantly affected by the terrain slope and canopy cover of the study area.
NASA Astrophysics Data System (ADS)
Mandic, M.; Stöbener, N.; Smajgl, D.
2017-12-01
For many decades, different instrumental methods involving generations of isotope ratio mass spectrometers with different peripheral units for sample preparation have provided the scientifically required high precision and high sample throughput for various applications - from geological and hydrological to food and forensic. With this work we introduce automated measurement of δ13C and δ18O from solid carbonate samples, DIC, and δ18O of water. We have demonstrated the use of a Thermo Scientific™ Delta Ray™ IRIS with URI Connect on certified reference materials and confirmed the high achievable accuracy and a precision better than 0.1‰ for both δ13C and δ18O, in the laboratory or in the field, with the same precision and sample throughput. With the equilibration method for determination of δ18O in water samples presented in this work, the achieved repeatability and accuracy are 0.12‰ and 0.68‰, respectively, which fulfills the requirements of regulatory methods. The preparation of the samples for carbonate and DIC analysis on the Delta Ray IRIS with URI Connect is similar to the previously mentioned Gas Bench II methods. Samples are put into vials and phosphoric acid is added. The resulting sample-acid chemical reaction releases CO2 gas, which is then introduced into the Delta Ray IRIS via the Variable Volume. Three international standards of carbonate materials (NBS-18, NBS-19 and IAEA-CO-1) were analyzed. NBS-18 and NBS-19 were used as standards for calibration, and IAEA-CO-1 was treated as unknown. For water sample analysis, an equilibration method with 1% of CO2 in dry air was used. Test measurements and confirmation of the precision and accuracy of the method for determination of δ18O in water samples were done with three laboratory standards, namely ANST, OCEAN 2 and HBW. All laboratory standards were previously calibrated against the international reference materials VSMOW2 and SLAP2 to assure the accuracy of the isotopic values. The Principle of Identical Treatment was applied in sample and standard preparation, in the measurement procedure, and in the evaluation of the results.
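Delta values and their calibration against reference materials follow standard definitions; the sketch below shows delta notation and a two-point calibration of the kind used with NBS-18 and NBS-19. The numbers in the example are illustrative only, and the nominal reference values should be checked against the reference material certificates.

```python
def delta_permil(r_sample, r_reference):
    """Delta notation: relative deviation of an isotope ratio, in per mil."""
    return (r_sample / r_reference - 1.0) * 1000.0

def two_point_calibration(measured_std1, true_std1,
                          measured_std2, true_std2, measured_sample):
    """Calibrate a measured delta value (per mil) against two reference materials.

    Fits a straight line through the two (measured, accepted) points, e.g.
    NBS-18 and NBS-19, and applies it to a sample treated as unknown.
    """
    slope = (true_std2 - true_std1) / (measured_std2 - measured_std1)
    intercept = true_std1 - slope * measured_std1
    return slope * measured_sample + intercept

# Illustrative (not measured) values: nominal delta 13C of NBS-18 about -5.01
# permil and NBS-19 about +1.95 permil on the VPDB scale.
print(two_point_calibration(-5.30, -5.01, 1.60, 1.95, -2.00))
```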
Accuracy of Self-Evaluation in Adults with ADHD: Evidence from a Driving Study
ERIC Educational Resources Information Center
Knouse, Laura E.; Bagwell, Catherine L.; Barkley, Russell A.; Murphy, Kevin R.
2005-01-01
Research on children with ADHD indicates an association with inaccuracy of self-appraisal. This study examines the accuracy of self-evaluations in clinic-referred adults diagnosed with ADHD. Self-assessments and performance measures of driving in naturalistic settings and on a virtual-reality driving simulator are used to assess accuracy of…
A Comparative Study of Precise Point Positioning (PPP) Accuracy Using Online Services
NASA Astrophysics Data System (ADS)
Malinowski, Marcin; Kwiecień, Janusz
2016-12-01
Precise Point Positioning (PPP) is a technique used to determine the position of a receiver antenna without communication with a reference station. It may be an alternative to differential measurements, where maintaining a connection with a single RTK station or a regional network of reference stations (RTN) is necessary. This situation is especially common in areas with poorly developed ground-station infrastructure. Much of the research conducted so far on the PPP technique has concerned the processing of entire-day observation sessions. This paper, however, presents the results of a comparative analysis of the accuracy of absolute position determination from observations lasting between 1 and 7 hours, using four permanent services which perform PPP calculations: Automatic Precise Positioning Service (APPS), Canadian Spatial Reference System Precise Point Positioning (CSRS-PPP), GNSS Analysis and Positioning Software (GAPS) and magicPPP - Precise Point Positioning Solution (magicGNSS). On the basis of the acquired measurement results, it can be concluded that measurements of at least two hours allow an absolute position to be obtained with an accuracy of 2-4 cm. The impact of the simultaneous positioning of the three points of a test network on the determined horizontal distances and relative height differences between the measured triangle vertices was also evaluated. Distances and relative height differences between points of the triangular test network measured with a Leica TDRA6000 laser station were adopted as references. The analyses show that measurement sessions of at least two hours can be used to determine the horizontal distance or the height difference with an accuracy of 1-2 cm. Rapid products employed in the PPP calculations reached nearly the same coordinate accuracy as solutions employing Final products.
NASA Astrophysics Data System (ADS)
Zhou, Lei; Li, Zhengying; Xiang, Na; Bao, Xiaoyi
2018-06-01
A high speed quasi-distributed demodulation method based on microwave photonics and the chromatic dispersion effect is designed and implemented for weak fiber Bragg gratings (FBGs). Due to the effect of the dispersion compensation fiber (DCF), an FBG wavelength shift leads to a change of the difference frequency signal at the mixer. By crossing microwave sweep cycles, all wavelengths of the cascaded FBGs can be obtained at high speed by measuring the frequency changes. Moreover, through the introduction of Chirp-Z and Hanning window algorithms, the difference frequency signal is analyzed accurately. By adopting a single-peak filter as a reference, the length disturbance of the DCF caused by temperature can also be eliminated. Therefore, the accuracy of this novel method is greatly improved, and high speed demodulation of FBGs is easily realized. The feasibility and performance are experimentally demonstrated using 105 FBGs with 0.1% reflectivity and 1 m spatial interval. Results show that each grating can be distinguished well, the demodulation rate is as high as 40 kHz, and the accuracy is about 8 pm.
NASA Astrophysics Data System (ADS)
Bonforte, A.; Casu, F.; de Martino, P.; Guglielmino, F.; Lanari, R.; Manzo, M.; Obrizzo, F.; Puglisi, G.; Sansosti, E.; Tammaro, U.
2009-04-01
Differential Synthetic Aperture Radar Interferometry (DInSAR) is a methodology able to measure ground deformation rates and time series over relatively large areas. Several different approaches have been developed over the past few years: they all have in common the capability to measure deformation over a relatively wide area (say 100 km by 100 km) with a high density of measuring points. For these reasons, DInSAR represents a very useful tool for investigating geophysical phenomena, with particular reference to volcanic areas. As for any measuring technique, knowledge of the attainable accuracy is of fundamental importance. In the case of DInSAR technology, there are several error sources, such as orbital inaccuracies, phase unwrapping errors, atmospheric artifacts, and effects related to the reference point selection, making it very difficult to define a theoretical error model. A practical way to assess the accuracy is to compare DInSAR results with independent measurements, such as GPS or levelling. Here we present an in-depth comparison between the deformation measurements obtained by exploiting the DInSAR technique referred to as the Small BAseline Subset (SBAS) algorithm and by continuous GPS stations. The selected volcanic test sites are Etna, Vesuvio and Campi Flegrei, in Italy. From continuous GPS data, solutions are computed for the same days on which SAR data are acquired, for direct comparison. Moreover, three-dimensional GPS displacement vectors are projected along the radar line of sight of both ascending and descending acquisition orbits. GPS data are then compared with the coherent DInSAR pixels closest to the GPS station. Relevant statistics of the differences between the two measurements are computed and correlated with scene parameters that may affect DInSAR accuracy (altitude, terrain slope, etc.).
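The projection of a three-dimensional GPS displacement onto the radar line of sight can be written as a dot product with the LOS unit vector; the sketch below assumes a right-looking sensor and a sign convention in which positive values mean motion toward the satellite, which may differ from the conventions used in the processing described above.

```python
import numpy as np

def enu_to_los(d_east, d_north, d_up, incidence_deg, heading_deg):
    """Project an ENU displacement (e.g. from a GPS station) onto the radar LOS.

    incidence_deg: local incidence angle; heading_deg: satellite heading
    (azimuth of the flight direction, clockwise from north). A right-looking
    sensor is assumed and positive output means motion toward the satellite;
    check signs against the actual acquisition geometry.
    """
    theta = np.radians(incidence_deg)
    alpha = np.radians(heading_deg)
    # Unit vector from the ground target toward the satellite, ENU components.
    u = np.array([-np.sin(theta) * np.cos(alpha),
                   np.sin(theta) * np.sin(alpha),
                   np.cos(theta)])
    return u[0] * d_east + u[1] * d_north + u[2] * d_up
```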
Automatic alignment of double optical paths in excimer laser amplifier
NASA Astrophysics Data System (ADS)
Wang, Dahui; Zhao, Xueqing; Hua, Hengqi; Zhang, Yongsheng; Hu, Yun; Yi, Aiping; Zhao, Jun
2013-05-01
An automatic beam alignment method for double-path amplification in an electron-pumped excimer laser system is demonstrated. In this way, the beams from the amplifiers can be steered along the designated direction and accordingly irradiate the target with high stability and accuracy. However, owing to the absence of natural alignment references in excimer laser amplifiers, a two-cross-hair structure is used to align the beams. One cross-hair, placed in the input beam, is regarded as the near-field reference, while the other, placed in the output beam, is regarded as the far-field reference. The two cross-hairs are imaged onto charge coupled devices (CCD) by separate image-relaying structures. The errors between the intersection points of the two cross-hair images and the centroid coordinates of the actual beam are recorded automatically and sent to a closed-loop feedback control mechanism. The negative feedback keeps running until a preset accuracy is reached. On the basis of the above-mentioned design, the alignment optical path was built and the software was written, after which the experiment on double-path automatic alignment in an electron-pumped excimer laser amplifier was carried out. Meanwhile, the related influencing factors and the alignment precision were analyzed. Experimental results indicate that the alignment system can automatically align the beams to the aiming direction in a short time. The analysis shows that the accuracy of the alignment system is 0.63 μrad and the maximum beam restoration error is 13.75 μm. Furthermore, the larger the distance between the two cross-hairs, the higher the precision of the system. The automatic alignment system has therefore been used in the angular-multiplexing excimer Main Oscillation Power Amplification (MOPA) system and can satisfy the requirement of beam alignment precision on the whole.
Ni, Jianlong; Li, Dichen; Mao, Mao; Dang, Xiaoqian; Wang, Kunzheng; He, Jiankang; Shi, Zhibin
2018-02-01
To explore a method of bone tunnel placement for anterior cruciate ligament (ACL) reconstruction based on 3-dimensional (3D) printing technology and to assess its accuracy. Twenty human cadaveric knees were scanned by thin-layer computed tomography (CT). To obtain data on bones used to establish a knee joint model by computer software, customized bone anchors were installed before CT. The reference point was determined at the femoral and tibial footprint areas of the ACL. The site and direction of the bone tunnels of the femur and tibia were designed and calibrated on the knee joint model according to the reference point. The resin template was designed and printed by 3D printing. Placement of the bone tunnels was accomplished by use of templates, and the cadaveric knees were scanned again to compare the concordance of the internal opening of the bone tunnels and reference points. The twenty 3D printing templates were designed and printed successfully. CT data analysis between the planned and actual drilled tunnel positions showed mean deviations of 0.57 mm (range, 0-1.5 mm; standard deviation, 0.42 mm) at the femur and 0.58 mm (range, 0-1.5 mm; standard deviation, 0.47 mm) at the tibia. The accuracy of bone tunnel placement for ACL reconstruction in cadaveric adult knees based on 3D printing technology is high. This method can improve the accuracy of bone tunnel placement for ACL reconstruction in clinical sports medicine. Copyright © 2017 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montes-Rodríguez, María de los Ángeles, E-mail: angy24538@yahoo.com; Mitsoura, Eleni; Hernández-Bojórquez, Mariana
2014-11-07
Stereotactic Body Radiation Therapy (SBRT) requires controlled immobilization and position monitoring of the patient and target. The purpose of this work is to analyze the performance of the ExacTrac® (ETX) imaging system using infrared and fiducial markers. Materials and methods: In order to assure the accuracy of isocenter localization, a quality assurance procedure was applied using an infrared marker-based positioning system. Scans were acquired of an in-house agar gel and solid water phantom with infrared spheres. In the inner part of the phantom, three markers were delineated as reference, and one pellet, placed internally, was assigned as the isocenter. The iPlan® RT Dose treatment planning system was used, and images were exported to the ETX console. Images were acquired with the ETX to check the correctness of the isocenter placement. Adjustments were made in 6D, and the reference markers were used to fuse the images. Couch shifts were registered. The procedure was repeated for verification purposes. Results: The data recorded from the verifications of translational and rotational movements showed averaged 3D spatial uncertainties of 0.31 ± 0.42 mm and 0.82° ± 0.46° in the phantom, and the first correction of these uncertainties was 1.51 ± 1.14 mm and 1.37° ± 0.61°, respectively. Conclusions: This study shows high accuracy and repeatability in positioning the selected isocenter. The ETX system for verifying the treatment isocenter position has the ability to monitor the position of interest, making it possible to use it for SBRT positioning within an uncertainty ≤ 1 mm.
Assessment of the Thematic Accuracy of Land Cover Maps
NASA Astrophysics Data System (ADS)
Höhle, J.
2015-08-01
Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes ('building', 'hedge and bush', 'grass', 'road and parking lot', 'tree', 'wall and car port') had to be derived. Two classification methods were applied ('Decision Tree' and 'Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index), both of which are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experiences. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
The system of high accuracy UV spectral radiation system
NASA Astrophysics Data System (ADS)
Lin, Guan-yu; Yu, Lei; Xu, Dian; Cao, Dian-sheng; Yu, Yu-Xiang
2016-10-01
A UV spectral radiation detection and visible observation telescope is designed with coaxial optics. In order to reduce the effect of incident light polarization and improve the detection precision, a polarizer needs to be used in the light path. A high-precision UV depolarizer, made of four quartz retarder pieces stacked together, is placed in front of the Seya-Namioka dispersion unit. Coherent detection, in which the modulated light signal is multiplied with a reference signal, together with an adjustable phase-sensitive detector, ensures the stability of the UV spectral radiation detection. A lock-in amplifier is used in the electrical system to improve the measurement accuracy. To ensure measurement precision, the phase-sensitive detector is adjustable; the output value is kept below 10 mV before each measurement, so that the stability of the measured radiation spectrum is within 1 percent.
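The phase-sensitive (lock-in) detection mentioned above amounts to multiplying the measured signal by quadrature references at the modulation frequency and low-pass filtering; the sketch below is a generic numerical illustration with a simple mean as the low-pass stage, not the instrument's electronics.

```python
import numpy as np

def lock_in_amplitude(signal, t, f_ref, phase_ref=0.0):
    """Phase-sensitive detection: multiply by quadrature references and low-pass.

    Returns the recovered amplitude and phase of the component at f_ref.
    The low-pass filter is approximated by a simple mean over the record.
    """
    x = np.mean(signal * np.cos(2 * np.pi * f_ref * t + phase_ref))
    y = np.mean(signal * np.sin(2 * np.pi * f_ref * t + phase_ref))
    amplitude = 2.0 * np.hypot(x, y)
    phase = np.arctan2(y, x)
    return amplitude, phase

# Example: a 5 mV signal at 1 kHz buried in noise roughly 20 times larger;
# the lock-in recovers approximately the 5 mV amplitude.
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1e-5)
sig = 0.005 * np.cos(2 * np.pi * 1000 * t + 0.3) + rng.normal(0, 0.1, t.size)
print(lock_in_amplitude(sig, t, 1000.0))
```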
Accuracy assessment for a multi-parameter optical calliper in on line automotive applications
NASA Astrophysics Data System (ADS)
D'Emilia, G.; Di Gasbarro, D.; Gaspari, A.; Natale, E.
2017-08-01
In this work, a methodological approach based on the evaluation of the measurement uncertainty is applied to an experimental test case from the automotive sector. The uncertainty model for different measurement procedures of a high-accuracy optical gauge is discussed in order to identify the best measuring performance of the system for on-line applications as measurement requirements become more stringent. In particular, with reference to the industrial production and control strategies of high-performing turbochargers, two uncertainty models to be used with the optical calliper are proposed, discussed and compared. The models are based on an integrated approach between measurement methods and production best practices to emphasize their mutual coherence. The paper shows the possible advantages of measurement uncertainty modelling in keeping control of the uncertainty propagation in all the indirect measurements useful for statistical production control, on which further improvements can be based.
Determination of the accuracy and operating constants in a digitally biased ring core magnetometer
Green, A.W.
1990-01-01
By using a very stable voltage reference and a high precision digital-to-analog converter to set bias in digital increments, the inherently high stability and accuracy of a ring core magnetometer can be significantly enhanced. In this case it becomes possible to measure not only variations about the bias level, but to measure the entire value of the field along each magnetometer sensing axis in a nearly absolute sense. To accomplish this, one must accurately determine the value of the digital bias increment for each axis, the zero field offset value for each axis, the scale values, and the transfer coefficients (or nonorthogonality angles) for pairs of axes. This determination can be carried out very simply, using only the Earth's field, a proton magnetometer, and a tripod-mounted fixture which is capable of rotations about two axes that are mutually perpendicular to the Earth's magnetic field vector. © 1990.
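A generic reconstruction of the field from calibration constants of the kind listed above might look like the sketch below; the linear model and parameter names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def axis_field(bias_counts, bias_step_nt, reading, scale_nt, offset_nt):
    """Total field along one sensing axis of a digitally biased magnetometer.

    bias_counts * bias_step_nt is the field cancelled by the digital bias; the
    scaled residual analog reading plus the bias, minus the zero-field offset,
    recovers the full field. Parameter names are illustrative only.
    """
    return bias_counts * bias_step_nt + reading * scale_nt - offset_nt

def orthogonalize(raw_xyz, transfer):
    """Correct small non-orthogonality between axes with a near-identity 3x3
    transfer matrix determined during calibration (generic model)."""
    return np.linalg.solve(np.asarray(transfer, dtype=float),
                           np.asarray(raw_xyz, dtype=float))
```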
NASA/Cousteau ocean bathymetry experiment. Remote bathymetry using high gain LANDSAT data
NASA Technical Reports Server (NTRS)
Polcyn, F. C.
1976-01-01
Satellite remote bathymetry was verified to 22 m depths where water clarity was defined by alpha = .058 1/m and bottom reflection, r(b), was 26%. High gain band 4 and band 5 CCT data from LANDSAT 1 were used for a test site in the Bahama Islands and near Florida. Near Florida, where alpha = .11 1/m and r(b) = 20%, depths to 10 m were verified. Depth accuracies within 10% rms were achieved. Position accuracies within one LANDSAT pixel were obtained by reference to the Transit navigation satellites. The Calypso and the Beayondan, two ships, were at anchor on each of the seven days during LANDSAT 1 and 2 overpasses: LORAN C position information was used when the ships were underway making depth transects. Results are expected to be useful for updating charts showing shoals hazardous to navigation or in monitoring changes in nearshore topography.
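A textbook single-band attenuation model gives a feel for how depth can be inverted from observed radiance; the sketch below is a simplification with a lumped calibration factor k, not the exact NASA/Cousteau processing.

```python
import numpy as np

def depth_from_radiance(radiance, deep_water_radiance, alpha_per_m,
                        bottom_reflectance, k=1.0):
    """Invert a simple single-band exponential attenuation model for depth.

    Model (a textbook simplification):
        L = L_deep + k * r_b * exp(-2 * alpha * z)
    so  z = -ln((L - L_deep) / (k * r_b)) / (2 * alpha).
    alpha_per_m is the effective attenuation coefficient (1/m), r_b the bottom
    reflectance, and k lumps solar irradiance and sensor calibration factors.
    """
    ratio = (np.asarray(radiance, dtype=float) - deep_water_radiance) / (k * bottom_reflectance)
    return -np.log(ratio) / (2.0 * alpha_per_m)
```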
Ekberg, Peter; Su, Rong; Chang, Ernest W.; Yun, Seok Hyun; Mattsson, Lars
2014-01-01
Optical coherence tomography (OCT) is useful for materials defect analysis and inspection with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and will, thus, introduce a simplification. The algorithm is validated for its capability of measuring the thickness of ceramic layers, extracting the boundaries of embedded features with irregular shapes, and detecting the geometric deformations. The accuracy of the algorithm is very high, and the reliability is better than 1 µm when evaluating with the OCT images using the same gauge block step height reference. The method may be suitable for industrial applications to the rapid inspection of manufactured samples with high accuracy and robustness. PMID:24562018
Michelessi, Manuele; Lucenteforte, Ersilia; Miele, Alba; Oddone, Francesco; Crescioli, Giada; Fameli, Valeria; Korevaar, Daniël A; Virgili, Gianni
2017-01-01
Research has shown a modest adherence of diagnostic test accuracy (DTA) studies in glaucoma to the Standards for Reporting of Diagnostic Accuracy Studies (STARD). We have applied the updated 30-item STARD 2015 checklist to a set of studies included in a Cochrane DTA systematic review of imaging tools for diagnosing manifest glaucoma. Three pairs of reviewers, including one senior reviewer who assessed all studies, independently checked the adherence of each study to STARD 2015. Adherence was analyzed on an individual-item basis. Logistic regression was used to evaluate the effect of publication year and impact factor on adherence. We included 106 DTA studies, published between 2003 and 2014 in journals with a median impact factor of 2.6. Overall adherence was 54.1% for 3,286 individual ratings across 31 items, with a mean of 16.8 (SD: 3.1; range 8-23) items per study. Large variability in adherence to reporting standards was detected across individual STARD 2015 items, ranging from 0 to 100%. Nine items (1: identification as diagnostic accuracy study in title/abstract; 6: eligibility criteria; 10: index test (a) and reference standard (b) definition; 12: cut-off definitions for index test (a) and reference standard (b); 14: estimation of diagnostic accuracy measures; 21a: severity spectrum of diseased; 23: cross-tabulation of the index and reference standard results) were adequately reported in more than 90% of the studies. Conversely, 10 items (3: scientific and clinical background of the index test; 11: rationale for the reference standard; 13b: blinding of index test results; 17: analyses of variability; 18: sample size calculation; 19: study flow diagram; 20: baseline characteristics of participants; 28: registration number and registry; 29: availability of study protocol; 30: sources of funding) were adequately reported in less than 30% of the studies. Only four items showed a statistically significant improvement over time: missing data (16), baseline characteristics of participants (20), estimates of diagnostic accuracy (24) and sources of funding (30). Adherence to STARD 2015 among DTA studies in glaucoma research is incomplete and only modestly increasing over time.
Star Tracker Based ATP System Conceptual Design and Pointing Accuracy Estimation
NASA Technical Reports Server (NTRS)
Orfiz, Gerardo G.; Lee, Shinhak
2006-01-01
A star tracker based beaconless (a.k.a. non-cooperative beacon) acquisition, tracking and pointing concept for precisely pointing an optical communication beam is presented as an innovative approach to extend the range of high bandwidth (> 100 Mbps) deep space optical communication links throughout the solar system and to remove the need for a ground based high power laser as a beacon source. The basic approach for executing the ATP functions involves the use of stars as the reference sources from which the attitude knowledge is obtained and combined with high bandwidth gyroscopes for propagating the pointing knowledge to the beam pointing mechanism. Details of the conceptual design are presented including selection of an orthogonal telescope configuration and the introduction of an optical metering scheme to reduce misalignment error. Also, estimates are presented that demonstrate that aiming of the communications beam to the Earth based receive terminal can be achieved with a total system pointing accuracy of better than 850 nanoradians (3 sigma) from anywhere in the solar system.
Amokrane, S; Ayadim, A; Malherbe, J G
2005-11-01
A simple modification of the reference hypernetted chain (RHNC) closure of the multicomponent Ornstein-Zernike equations with bridge functions taken from Rosenfeld's hard-sphere bridge functional is proposed. Its main effect is to remedy the major limitation of the RHNC closure in the case of highly asymmetric mixtures--the wide domain of packing fractions in which it has no solution. The modified closure is also much faster, while being of similar complexity. This is achieved with a limited loss of accuracy, mainly for the contact value of the big sphere correlation functions. Comparison with simulation shows that inside the RHNC no-solution domain, it provides a good description of the structure, while being clearly superior to all the other closures used so far to study highly asymmetric mixtures. The generic nature of this closure and its good accuracy combined with a reduced no-solution domain open up the possibility to study the phase diagram of complex fluids beyond the hard-sphere model.
Lewis, Jane Ea; Williams, Paul; Davies, Jane H
2016-01-01
This cross-sectional study aimed to individually and cumulatively compare sensitivity and specificity of the (1) ankle brachial index and (2) pulse volume waveform analysis recorded by the same automated device, with the presence or absence of peripheral arterial disease being verified by ultrasound duplex scan. Patients (n=205) referred for lower limb arterial assessment underwent ankle brachial index measurement and pulse volume waveform recording using volume plethysmography, followed by ultrasound duplex scan. The presence of peripheral arterial disease was recorded if ankle brachial index <0.9; pulse volume waveform was graded as 2, 3 or 4; or if haemodynamically significant stenosis >50% was evident with ultrasound duplex scan. Outcome measure was agreement between the measured ankle brachial index and interpretation of pulse volume waveform for peripheral arterial disease diagnosis, using ultrasound duplex scan as the reference standard. Sensitivity of ankle brachial index was 79%, specificity 91% and overall accuracy 88%. Pulse volume waveform sensitivity was 97%, specificity 81% and overall accuracy 85%. The combined sensitivity of ankle brachial index and pulse volume waveform was 100%, specificity 76% and overall accuracy 85%. Combining these two diagnostic modalities within one device provided a highly accurate method of ruling out peripheral arterial disease, which could be utilised in primary care to safely reduce unnecessary secondary care referrals.
Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki
2013-01-01
We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as “our previous method”) using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions, including poor visibility conditions in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth referred to as “our new method”). Our new method detects vehicles based on the thermal energy reflected by the tires. We conducted experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, with 52 false detections in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to automatic traffic flow monitoring and show the effectiveness of our proposal. PMID:23774988
Validation of a 3D CT method for measurement of linear wear of acetabular cups.
Jedenmalm, Anneli; Nilsson, Fritjof; Noz, Marilyn E; Green, Douglas D; Gedde, Ulf W; Clarke, Ian C; Stark, Andreas; Maguire, Gerald Q; Zeleznik, Michael P; Olivecrona, Henrik
2011-02-01
We evaluated the accuracy and repeatability of a 3D method for polyethylene acetabular cup wear measurements using computed tomography (CT). We propose that the method be used for clinical in vivo assessment of wear in acetabular cups. Ultra-high molecular weight polyethylene cups with a titanium mesh molded on the outside were subjected to wear using a hip simulator. Before and after wear, they were (1) imaged with a CT scanner using a phantom model device, (2) measured using a coordinate measurement machine (CMM), and (3) weighed. CMM was used as the reference method for measurement of femoral head penetration into the cup and for comparison with CT, and gravimetric measurements were used as a reference for both CT and CMM. Femoral head penetration and wear vector angle were studied. The head diameters were also measured with both CMM and CT. The repeatability of the method proposed was evaluated with two repeated measurements using different positions of the phantom in the CT scanner. The accuracy of the 3D CT method for evaluation of linear wear was 0.51 mm and the repeatability was 0.39 mm. Repeatability for wear vector angle was 17°. This study of metal-meshed hip-simulated acetabular cups shows that CT has the capacity for reliable measurement of linear wear of acetabular cups at a clinically relevant level of accuracy.
Yoon, Jong Lull; Cho, Jung Jin; Park, Kyung Mi; Noh, Hye Mi; Park, Yong Soon
2015-02-01
Associations between body mass index (BMI), body fat percentage (BF%), and health risks differ between Asian and European populations. BMI is commonly used to diagnose obesity; however, its accuracy in detecting adiposity in Koreans is unknown. The present cross-sectional study aimed at assessing the accuracy of BMI in determining BF%-defined obesity in 6,017 subjects (age 20-69 yr, 43.6% men) from the 2009 Korean National Health and Nutrition Examination Survey. We assessed the diagnostic performance of BMI using the Western Pacific Regional Office of World Health Organization reference standard for BF%-defined obesity by sex and age and identified the optimal BMI cut-off for BF%-defined obesity using receiver operating characteristic curve analysis. BMI-defined obesity (≥25 kg/m²) was observed in 38.7% of men and 28.1% of women, with a high specificity (89%, men; 84%, women) but poor sensitivity (56%, men; 72%, women) for BF%-defined obesity (25.2%, men; 31.1%, women). The optimal BMI cut-off (24.2 kg/m²) had 78% sensitivity and 71% specificity. BMI demonstrated limited diagnostic accuracy for adiposity in Korea. There was a -1.3 kg/m² difference in optimal BMI cut-offs between Korea and America, smaller than the 5-unit difference between the Western Pacific Regional Office and global World Health Organization obesity criteria.
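For readers unfamiliar with how such a cut-off is derived, the sketch below illustrates one common approach: choosing the BMI threshold that maximizes Youden's J along the ROC curve. The data, package choice (scikit-learn) and resulting numbers are hypothetical, not taken from the survey.

```python
# Minimal sketch, assuming synthetic BMI and body-fat-defined obesity labels.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n = 1000
bmi = rng.normal(24, 3.5, n)                          # hypothetical BMI values
obese = (rng.normal(bmi, 2.5) > 27).astype(int)       # hypothetical BF%-defined labels

fpr, tpr, thresholds = roc_curve(obese, bmi)
youden_j = tpr - fpr                                   # Youden's J = sensitivity + specificity - 1
best = int(np.argmax(youden_j))
print(f"optimal BMI cut-off ~ {thresholds[best]:.1f} kg/m^2 "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```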
NASA Astrophysics Data System (ADS)
Hwang, Han-Jeong; Lim, Jeong-Hwan; Kim, Do-Won; Im, Chang-Hwan
2014-07-01
A number of recent studies have demonstrated that near-infrared spectroscopy (NIRS) is a promising neuroimaging modality for brain-computer interfaces (BCIs). So far, most NIRS-based BCI studies have focused on enhancing the accuracy of the classification of different mental tasks. In the present study, we evaluated the performances of a variety of mental task combinations in order to determine the mental task pairs that are best suited for customized NIRS-based BCIs. To this end, we recorded event-related hemodynamic responses while seven participants performed eight different mental tasks. Classification accuracies were then estimated for all possible pairs of the eight mental tasks (8C2 = 28 pairs). Based on this analysis, mental task combinations with relatively high classification accuracies frequently included the following three mental tasks: "mental multiplication," "mental rotation," and "right-hand motor imagery." Specifically, mental task combinations consisting of two of these three mental tasks showed the highest mean classification accuracies. It is expected that our results will be a useful reference to reduce the time needed for preliminary tests when discovering individual-specific mental task combinations.
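The pairwise evaluation described above can be illustrated with a short sketch: enumerate all 28 task pairs and estimate a cross-validated accuracy for each. The task names echo the abstract, but the feature data, classifier (a generic SVM) and trial counts are assumptions, not the study's pipeline.

```python
# Minimal sketch, assuming synthetic hemodynamic feature matrices per task.
from itertools import combinations
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
tasks = ["multiplication", "rotation", "right_hand_MI", "left_hand_MI",
         "singing", "word_generation", "counting", "rest"]
# hypothetical features: 20 trials x 10 hemodynamic features per task
features = {t: rng.normal(i * 0.1, 1.0, size=(20, 10)) for i, t in enumerate(tasks)}

scores = {}
for a, b in combinations(tasks, 2):                    # 8C2 = 28 pairs
    X = np.vstack([features[a], features[b]])
    y = np.r_[np.zeros(20), np.ones(20)]
    scores[(a, b)] = cross_val_score(SVC(), X, y, cv=5).mean()

for pair, acc in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
    print(pair, f"{acc:.2f}")                           # best-performing pairs
```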
Adjustable electronic load-alarm relay
Mason, Charles H.; Sitton, Roy S.
1976-01-01
This invention is an improved electronic alarm relay for monitoring the current drawn by an AC motor or other electrical load. The circuit is designed to measure the load with high accuracy and to have excellent alarm repeatability. Chattering and arcing of the relay contacts are minimal. The operator can adjust the set point easily and can re-set both the high and the low alarm points by means of one simple adjustment. The relay includes means for generating a signal voltage proportional to the motor current. In a preferred form of the invention a first operational amplifier is provided to generate a first constant reference voltage which is higher than a preselected value of the signal voltage. A second operational amplifier is provided to generate a second constant reference voltage which is lower than the aforementioned preselected value of the signal voltage. A circuit comprising a first resistor serially connected to a second resistor is connected across the outputs of the first and second amplifiers, and the junction of the two resistors is connected to the inverting terminal of the second amplifier. Means are provided to compare the aforementioned signal voltage with both the first and second reference voltages and to actuate an alarm if the signal voltage is higher than the first reference voltage or lower than the second reference voltage.
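The comparison logic described above amounts to a window comparator: the alarm trips when the load-derived signal voltage leaves the band between the two reference voltages. A minimal sketch of that logic follows; the threshold values are illustrative and not taken from the patent.

```python
# Minimal sketch of dual-threshold (window comparator) alarm logic, assumed values.
def window_alarm(signal_v: float, v_ref_high: float = 5.2, v_ref_low: float = 4.8) -> bool:
    """Return True when the signal voltage is above the high reference
    or below the low reference (i.e., outside the allowed window)."""
    return signal_v > v_ref_high or signal_v < v_ref_low

for v in (4.5, 5.0, 5.5):
    print(v, "ALARM" if window_alarm(v) else "ok")
```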
Unsupervised change detection in a particular vegetation land cover type using spectral angle mapper
NASA Astrophysics Data System (ADS)
Renza, Diego; Martinez, Estibaliz; Molina, Iñigo; Ballesteros L., Dora M.
2017-04-01
This paper presents a new unsupervised change detection methodology for multispectral images applied to specific land covers. The proposed method compares each image against a reference spectrum, where the reference spectrum is obtained from the spectral signature of the land cover type to be detected. The method was tested using multispectral images (SPOT5) of the Community of Madrid (Spain) and multispectral images (Quickbird) of an area of Indonesia impacted by the December 26, 2004 tsunami; the tests focused on the detection of changes in vegetation. The image comparison is obtained by applying the Spectral Angle Mapper (SAM) between the reference spectrum and each multitemporal image. A threshold is then applied to produce a single change image, which corresponds to the vegetation zones. The results for each multitemporal image are combined through an exclusive-or (XOR) operation that selects vegetation zones that have changed over time. The derived results were compared against a supervised method based on classification with a Support Vector Machine; the NDVI-differencing and Spectral Angle Mapper techniques were selected as unsupervised methods for comparison purposes. The main novelty of the method is the detection of changes in a specific land cover type (vegetation), so the fairest comparison is with methods that also target a specific land cover type; this motivated the choice of the NDVI-based method and the post-classification method (SVM implemented in a standard software tool). To evaluate the improvement from using a reference spectrum vector, the results are also compared with the basic SAM method. For the SPOT5 image, the overall accuracy was 99.36% and the κ index was 90.11%; for the Quickbird image, the overall accuracy was 97.5% and the κ index was 82.16%. The precision of the method is comparable to that of a supervised method, supported by low rates of false positives and false negatives together with high overall accuracy and a high kappa index, while the execution times were comparable to those of unsupervised methods of low computational load.
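A minimal sketch of the core procedure described above follows: a spectral angle map against a reference vegetation spectrum for each date, thresholding, and an XOR of the two masks. The band count, reference signature and threshold value are hypothetical, and this is not the authors' implementation.

```python
# Minimal sketch, assuming synthetic 4-band images and an invented threshold.
import numpy as np

def spectral_angle(image, ref):
    """image: (rows, cols, bands); ref: (bands,). Returns angle in radians per pixel."""
    dot = np.tensordot(image, ref, axes=([2], [0]))
    norms = np.linalg.norm(image, axis=2) * np.linalg.norm(ref)
    return np.arccos(np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0))

rng = np.random.default_rng(2)
ref_veg = np.array([0.05, 0.08, 0.06, 0.45])           # hypothetical vegetation signature
img_t1 = rng.uniform(0, 1, size=(100, 100, 4))         # date 1 (synthetic)
img_t2 = rng.uniform(0, 1, size=(100, 100, 4))         # date 2 (synthetic)

threshold = 0.15                                        # radians, hypothetical
veg_t1 = spectral_angle(img_t1, ref_veg) < threshold    # vegetation mask, date 1
veg_t2 = spectral_angle(img_t2, ref_veg) < threshold    # vegetation mask, date 2
change_mask = np.logical_xor(veg_t1, veg_t2)            # vegetation gained or lost
print("changed pixels:", int(change_mask.sum()))
```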
A new method for measuring the rotational accuracy of rolling element bearings
NASA Astrophysics Data System (ADS)
Chen, Ye; Zhao, Xiangsong; Gao, Weiguo; Hu, Gaofeng; Zhang, Shizhen; Zhang, Dawei
2016-12-01
The rotational accuracy of a machine tool spindle has a critical influence on the geometric shape and surface roughness of the finished workpiece. The rotational performance of the rolling element bearings is a main factor affecting spindle accuracy, especially in ultra-precision machining. In this paper, a new method is developed to measure the rotational accuracy of rolling element bearings of machine tool spindles. A variable and measurable axial preload is applied to seat the rolling elements in the bearing races, simulating the operating conditions. A high-precision (radial error less than 300 nm) and high-stiffness (radial stiffness 600 N/μm) hydrostatic reference spindle is adopted to rotate the inner race of the test bearing. To prevent the outer race from rotating, a 2-degree-of-freedom flexure hinge mechanism (2-DOF FHM) is designed. Correction factors obtained from a stiffness analysis are adopted to eliminate the influence of the 2-DOF FHM in the radial direction. Two capacitive displacement sensors with nano-resolution (the highest resolution is 9 nm) are used to measure the radial error motion of the rolling element bearing, without separating the profile error as in traditional spindle rotational accuracy metrology. Finally, experimental measurements are performed at different spindle speeds (100-4000 rpm) and axial preloads (75-780 N). Synchronous and asynchronous error motion values are evaluated to demonstrate the feasibility and repeatability of the developed method and instrument.
Sim, Ji-Young; Jang, Yeon; Kim, Woong-Chul; Kim, Hae-Young; Lee, Dong-Hwan; Kim, Ji-Hwan
2018-03-31
This study aimed to evaluate and compare the accuracy of stone, digital, and 3D-printed models of prepared teeth. A reference model was prepared with three prepared teeth for three types of restorations: single crown, 3-unit bridge, and inlay. Stone models were fabricated from conventional impressions. Digital impressions of the reference model were created using an intraoral scanner (digital models). Physical models were fabricated using a three-dimensional (3D) printer. Reference, stone, and 3D-printed models were subsequently scanned using an industrial optical scanner, and the files were exported in stereolithography file format. All datasets were superimposed using 3D analysis software to evaluate the accuracy of the complete arch and the trueness of the preparations. One-way and two-way analyses of variance (ANOVA) were performed to compare accuracy among the three model groups and to evaluate trueness among the three types of preparation. For the complete arch, significant intergroup differences in precision were observed among the three groups (p<.001). However, no significant difference in trueness was found between the stone and digital models (p>.05). 3D-printed models had the poorest accuracy. A two-way ANOVA revealed significant differences in trueness among the model groups (p<.001) and types of preparation (p<.001). Digital models had smaller root mean square values for trueness of the complete arch and preparations than stone models. However, the accuracy of the complete arch and trueness of the preparations of 3D-printed models were inferior to those of the other groups.
Lebel, Karina; Hamel, Mathieu; Duval, Christian; Nguyen, Hung; Boissy, Patrick
2018-01-01
Joint kinematics can be assessed using orientation estimates from Attitude and Heading Reference Systems (AHRS). However, magnetically perturbed environments affect the accuracy of the estimated orientations. This study investigates, in both controlled and human mobility conditions, a trial calibration technique based on a 2D photograph with a pose estimation algorithm to correct initial differences between AHRS inertial reference frames and improve joint angle accuracy. In controlled conditions, two AHRS were solidly affixed onto a wooden stick and a series of static and dynamic trials were performed in varying environments. The mean accuracy of the relative orientation between the two AHRS improved from 24.4° to 2.9° using the proposed correction method. In human conditions, AHRS were placed on the shank and the foot of a participant who performed repeated trials of straight walking and walking while turning, varying the level of magnetic perturbation in the starting environment and the walking speed. Mean joint orientation accuracy went from 6.7° to 2.8° using the correction algorithm. The impact of the starting environment was also greatly reduced, to the point where it can be considered non-significant from a clinical point of view (the maximum mean difference went from 8° to 0.6°). These results demonstrate that the proposed method significantly improves the mean accuracy of AHRS joint orientation estimates in magnetically perturbed environments and can be implemented in post-processing of AHRS data collected during biomechanical evaluation of motion.
A traceability procedure has been established which allows specialty gas producers to prepare gaseous pollutant Certified Reference Materials (CRMs). The accuracy, stability and homogeneity of the CRMs approach those of NBS Standard Reference Materials (SRMs). Part of this proced...
A traceability procedure has been established which allows specialty gas producers to prepare gaseous pollutant Certified Reference Materials (CRM's). The accuracy, stability and homogeneity of the CRM's approach those of NBS Standard Reference Materials (SRM's). As of October 19...
NASA Astrophysics Data System (ADS)
Liu, J.-C.; Malkin, Z.; Zhu, Z.
2018-03-01
The International Celestial Reference Frame (ICRF) is currently realized by the very long baseline interferometry (VLBI) observations of extragalactic sources with the zero proper motion assumption, while Gaia will observe proper motions of these distant and faint objects to an accuracy of tens of microarcseconds per year. This paper investigates the difference between VLBI and Gaia quasar proper motions and it aims to understand the impact of quasar proper motions on the alignment of the ICRF and Gaia reference frame. We use the latest time series data of source coordinates from the International VLBI Service analysis centres operated at Goddard Space Flight Center (GSF2017) and Paris observatory (OPA2017), as well as the Gaia auxiliary quasar solution containing 2191 high-probability optical counterparts of the ICRF2 sources. The linear proper motions in right ascension and declination of VLBI sources are derived by least-squares fits while the proper motions for Gaia sources are simulated taking into account the acceleration of the Solar system barycentre and realistic uncertainties depending on the source brightness. The individual and global features of source proper motions in GSF2017 and OPA2017 VLBI data are found to be inconsistent, which may result from differences in VLBI observations, data reduction and analysis. A comparison of the VLBI and Gaia proper motions shows that the accuracies of the components of rotation and glide between the two systems are 2-4 μas yr⁻¹ based on about 600 common sources. For the future alignment of the ICRF and Gaia reference frames at different wavelengths, the proper motions of quasars must necessarily be considered.
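As a simple illustration of the per-source proper-motion estimation mentioned above, the sketch below fits a straight line to a synthetic coordinate time series by least squares; the epochs, noise level and "true" proper motion are invented values, not the GSF2017 or OPA2017 data.

```python
# Minimal sketch, assuming a synthetic right-ascension offset time series in microarcseconds.
import numpy as np

rng = np.random.default_rng(3)
epochs = np.linspace(2000.0, 2017.0, 60)               # decimal years
true_pm = 15.0                                          # µas/yr, hypothetical
ra_offset = true_pm * (epochs - epochs.mean()) + rng.normal(0, 40, epochs.size)

# ordinary least squares: degree-1 polynomial fit gives the linear proper motion
pm_fit, offset_fit = np.polyfit(epochs - epochs.mean(), ra_offset, deg=1)
print(f"fitted proper motion: {pm_fit:.1f} uas/yr")
```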
[The water content reference material of water saturated octanol].
Wang, Haifeng; Ma, Kang; Zhang, Wei; Li, Zhanyuan
2011-03-01
The national standards for biofuels specify the technical specifications and analytical methods. A water content certified reference material based on water-saturated octanol was developed to meet the needs of instrument calibration and method validation and to assure the accuracy and consistency of results in water content measurements of biofuels. Three analytical methods based on different principles were employed to certify the water content of the reference material: Karl Fischer coulometric titration, Karl Fischer volumetric titration and quantitative nuclear magnetic resonance. Consistency between the coulometric and volumetric titrations was achieved through improvements to the methods. The accuracy of the certified result was improved by the introduction of the new quantitative nuclear magnetic resonance method. The certified value of the reference material is 4.76%, with an expanded uncertainty of 0.09%.
SAIP2014, the 59th Annual Conference of the South African Institute of Physics
NASA Astrophysics Data System (ADS)
Engelbrecht, Chris; Karataglidis, Steven
2015-04-01
The International Celestial Reference Frame (ICRF) was adopted by the International Astronomical Union (IAU) in 1997. The current standard, the ICRF-2, is based on Very Long Baseline Interferometric (VLBI) radio observations of positions of 3414 extragalactic radio reference sources. The angular resolution achieved by the VLBI technique is on a scale of milliarcseconds to sub-milliarcseconds and defines the ICRF with the highest accuracy available at present. An ideal reference source used for celestial reference frame work should be unresolved or point-like on these scales. However, extragalactic radio sources, such as those that define and maintain the ICRF, can exhibit spatially extended structures on sub-milliarcsecond scales that may vary both in time and frequency. This variability can introduce a significant error in the VLBI measurements, thereby degrading the accuracy of the estimated source position. Reference source density in the Southern celestial hemisphere is also poor compared to the Northern hemisphere, mainly due to the limited number of radio telescopes in the south. In order to define the ICRF with the highest accuracy, observational efforts are required to find more compact sources and to monitor their structural evolution. In this paper we show that the astrometric VLBI sessions can be used to obtain source structure information and we present preliminary imaging results for the source J1427-4206 at 2.3 and 8.4 GHz frequencies, which show that the source is compact and suitable as a reference source.
Buzayan, Muaiyed; Baig, Mirza Rustum; Yunus, Norsiah
2013-01-01
This in vitro study evaluated the accuracy of multiple-unit dental implant casts obtained from splinted or nonsplinted direct impression techniques using various splinting materials by comparing the casts to the reference models. The effect of two different impression materials on the accuracy of the implant casts was also evaluated for abutment-level impressions. A reference model with six internal-connection implant replicas placed in the completely edentulous mandibular arch and connected to multi-base abutments was fabricated from heat-curing acrylic resin. Forty impressions of the reference model were made, 20 each with polyether (PE) and polyvinylsiloxane (PVS) impression materials using the open tray technique. The PE and PVS groups were further subdivided into four subgroups of five each on the basis of splinting type: no splinting, bite registration PE, bite registration addition silicone, or autopolymerizing acrylic resin. The positional accuracy of the implant replica heads was measured on the poured casts using a coordinate measuring machine to assess linear differences in interimplant distances in all three axes. The collected data (linear and three-dimensional [3D] displacement values) were compared with the measurements calculated on the reference resin model and analyzed with nonparametric tests (Kruskal-Wallis and Mann-Whitney). No significant differences were found between the various splinting groups for both PE and PVS impression materials in terms of linear and 3D distortions. However, small but significant differences were found between the two impression materials (PVS, 91 μm; PE, 103 μm) in terms of 3D discrepancies, irrespective of the splinting technique employed. Casts obtained from both impression materials exhibited differences from the reference model. The impression material influenced impression inaccuracy more than the splinting material for multiple-unit abutment-level impressions.
Communication: Saturated CO2 absorption near 1.6 μm for kilohertz-accuracy transition frequencies
NASA Astrophysics Data System (ADS)
Burkart, Johannes; Sala, Tommaso; Romanini, Daniele; Marangoni, Marco; Campargue, Alain; Kassi, Samir
2015-05-01
Doppler-free saturated-absorption Lamb dips were measured on weak rovibrational lines of ¹²C¹⁶O₂ between 6189 and 6215 cm⁻¹ at sub-Pa pressures using optical feedback frequency stabilized cavity ring-down spectroscopy. By referencing the laser source to an optical frequency comb, transition frequencies for ten lines of the 30013←00001 band P-branch and two lines of the 31113←01101 hot band R-branch were determined with an accuracy of a few parts in 10¹¹. Involving rotational quantum numbers up to 42, the data were used for improving the upper level spectroscopic constants. These results provide a highly accurate reference frequency grid over the spectral interval from 1599 to 1616 nm.
Phase-shifting point diffraction interferometer focus-aid enhanced mask
Naulleau, Patrick
2000-01-01
A phase-shifting point diffraction interferometer system (PS/PDI) employing a PS/PDI mask that includes a PDI focus aid is provided. The PDI focus aid mask includes a large or secondary reference pinhole that is slightly displaced from the true or primary reference pinhole. The secondary pinhole provides a larger capture tolerance for interferometrically performing fine focus. With the focus-aid enhanced mask, conventional methods such as the knife-edge test can be used to perform an initial (or rough) focus and the secondary (large) pinhole is used to perform interferometric fine focus. Once the system is well focused, high accuracy interferometry can be performed using the primary (small) pinhole.
Classification of urban features using airborne hyperspectral data
NASA Astrophysics Data System (ADS)
Ganesh Babu, Bharath
Accurate mapping and modeling of urban environments are critical for their efficient and successful management. Superior understanding of complex urban environments is made possible by using modern geospatial technologies. This research focuses on thematic classification of urban land use and land cover (LULC) using 248 bands of 2.0 meter resolution hyperspectral data acquired from an airborne imaging spectrometer (AISA+) on 24th July 2006 in and near Terre Haute, Indiana. Three distinct study areas including two commercial classes, two residential classes, and two urban parks/recreational classes were selected for classification and analysis. Four commonly used classification methods, maximum likelihood (ML), extraction and classification of homogeneous objects (ECHO), spectral angle mapper (SAM), and iterative self-organizing data analysis (ISODATA), were applied to each data set. Accuracy assessment was conducted and overall accuracies were compared between the twenty-four resulting thematic maps. With the exception of SAM and ISODATA in a complex commercial area, all methods employed classified the designated urban features with more than 80% accuracy. The thematic classification from ECHO showed the best agreement with ground reference samples. The residential area with relatively homogeneous composition was classified consistently with the highest accuracy by all four of the classification methods used; the average accuracy amongst the classifiers was 93.60% for this area. When individually observed, the complex recreational area (Deming Park) was classified with the highest accuracy by ECHO, with an accuracy of 96.80% and 96.10% Kappa; the average accuracy amongst all the classifiers was 92.07%. The commercial area with relatively high complexity was classified with the least accuracy by all classifiers. The lowest accuracy was achieved by SAM at 63.90% with 59.20% Kappa, which was also the lowest accuracy in the entire analysis. This study demonstrates the potential for using the visible and near-infrared (VNIR) bands from AISA+ hyperspectral data in urban LULC classification. Based on their performance, the need for further research using ECHO and SAM is underscored. The importance of incorporating imaging spectrometer data in high resolution urban feature mapping is emphasized.
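The accuracy-assessment step mentioned above is conventionally based on an error matrix of mapped classes versus ground reference samples. The sketch below computes overall accuracy and Cohen's kappa from a hypothetical three-class matrix; it is a generic illustration, not this study's data.

```python
# Minimal sketch, assuming an invented 3-class error (confusion) matrix.
import numpy as np

confusion = np.array([[50,  2,  1],    # rows: reference class, cols: mapped class
                      [ 3, 45,  4],
                      [ 1,  5, 39]])

n = confusion.sum()
overall_accuracy = np.trace(confusion) / n
# chance agreement from row and column marginals
expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
kappa = (overall_accuracy - expected) / (1 - expected)
print(f"overall accuracy = {overall_accuracy:.1%}, kappa = {kappa:.3f}")
```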
Computer-Aided Medical Diagnosis. Literature Review
1978-12-15
Croft found a 13% difference in diagnostic accuracy. He considered this difference insignificant in relation to the diagnostic differences caused ... type of diseases diagnosed probably are the major cause of cross-study variability in diagnostic accuracy. The consistency of diagnostic accuracy ... REFERENCES: ALPEROVITCH, A. and FRAGU, P., A suggestion for an effective use of a computer-aided diagnosis system in screening for hyperthyroidism, Method...
The absolute radiometric calibration of the advanced very high resolution radiometer
NASA Technical Reports Server (NTRS)
Slater, P. N.; Teillet, P. M.; Ding, Y.
1988-01-01
The need for independent, redundant absolute radiometric calibration methods is discussed with reference to the Thematic Mapper. Uncertainty requirements for absolute calibration of between 0.5 and 4 percent are defined based on the accuracy of reflectance retrievals at an agricultural site. It is shown that even very approximate atmospheric corrections can reduce the error in reflectance retrieval to 0.02 over the reflectance range 0 to 0.4.
Wang, Hubiao; Wu, Lin; Chai, Hua; Bao, Lifeng; Wang, Yong
2017-12-20
An experiment comparing the location accuracy of gravity matching-aided navigation in the ocean and in simulation is very important to evaluate the feasibility and the performance of an INS/gravity-integrated navigation system (IGNS) in underwater navigation. Based on a 1' × 1' marine gravity anomaly reference map and a multi-model adaptive Kalman filtering algorithm, a matching location experiment of the IGNS was conducted using data obtained with a marine gravimeter. The location accuracy under actual ocean conditions was 2.83 nautical miles (n miles). Several groups of simulated marine gravity anomaly data were obtained by adding normally distributed random error N(u, σ²) with varying mean u and noise variance σ². Thereafter, the matching location of the IGNS was simulated. The results show that changes in u had little effect on the location accuracy, whereas an increase in σ² resulted in a significant decrease in the location accuracy. A comparison between the actual ocean experiment and the simulation along the same route demonstrated the effectiveness of the proposed simulation method and the quantitative analysis results. In addition, given the gravimeter (1-2 mGal accuracy) and the reference map (resolution 1' × 1'; accuracy 3-8 mGal), the location accuracy of the IGNS reached ~1.0-3.0 n miles in the South China Sea.
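The simulation idea above, adding N(u, σ²) noise to the gravity data and watching the matching degrade as σ² grows, can be illustrated with a toy one-dimensional sketch. The profile, segment length and noise levels are invented, and a brute-force position search stands in for the multi-model adaptive Kalman filter actually used.

```python
# Minimal sketch, assuming a synthetic along-track anomaly profile (arbitrary units).
import numpy as np

rng = np.random.default_rng(4)
true_profile = np.cumsum(rng.normal(0.0, 1.0, 500))    # toy reference anomaly profile

def position_error(u, sigma, shift=25, seg=200):
    """Match a noisy measured segment against the reference profile by brute-force
    RMS search; return the error (in samples) of the recovered position."""
    measured = true_profile[shift:shift + seg] + rng.normal(u, sigma, seg)
    costs = [np.sqrt(np.mean((true_profile[s:s + seg] - measured) ** 2))
             for s in range(true_profile.size - seg + 1)]
    return abs(int(np.argmin(costs)) - shift)

for sigma in (1.0, 3.0, 8.0):
    print(f"sigma = {sigma:>3} -> position error = {position_error(0.0, sigma)} samples")
```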
Automatic Near-Real-Time Image Processing Chain for Very High Resolution Optical Satellite Data
NASA Astrophysics Data System (ADS)
Ostir, K.; Cotar, K.; Marsetic, A.; Pehani, P.; Perse, M.; Zaksek, K.; Zaletelj, J.; Rodic, T.
2015-04-01
In response to the increasing need for automatic and fast satellite image processing, SPACE-SI has developed and implemented a fully automatic image processing chain, STORM, that performs all processing steps from sensor-corrected optical images (level 1) to web-delivered map-ready images and products without operator intervention. Initial development was tailored to high resolution RapidEye images, and all crucial and most challenging parts of the planned full processing chain were developed: a module for automatic image orthorectification based on a physical sensor model and supported by an algorithm for automatic detection of ground control points (GCPs); an atmospheric correction module; a topographic correction module that combines a physical approach with the Minnaert method and utilizes an anisotropic illumination model; and modules for the generation of high-level products. Various parts of the chain were also implemented for WorldView-2, THEOS, Pleiades, SPOT 6, Landsat 5-8, and PROBA-V. Support for a full-frame sensor currently under development by SPACE-SI is planned. The proposed paper focuses on the adaptation of the STORM processing chain to very high resolution multispectral images. The development concentrated on the sub-module for automatic detection of GCPs. The initially implemented two-step algorithm, which worked only with rasterized vector roads and delivered GCPs with sub-pixel accuracy for the RapidEye images, was improved with the introduction of a third step: super-fine positioning of each GCP based on a reference raster chip. The added step exploits the high spatial resolution of the reference raster to improve the final matching results and to achieve pixel accuracy also on very high resolution optical satellite data.
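The "super-fine positioning" step based on a reference raster chip is, in essence, template matching. The sketch below locates a chip inside a search window by normalized cross-correlation; the arrays are synthetic and the code illustrates the general idea, not SPACE-SI's STORM implementation.

```python
# Minimal sketch, assuming a synthetic image and a chip cut from a known location.
import numpy as np

def ncc_match(search_window, chip):
    """Return the (row, col) offset with the best normalized cross-correlation score."""
    ch, cw = chip.shape
    chip_z = (chip - chip.mean()) / chip.std()
    best, best_pos = -2.0, (0, 0)
    for r in range(search_window.shape[0] - ch + 1):
        for c in range(search_window.shape[1] - cw + 1):
            patch = search_window[r:r + ch, c:c + cw]
            patch_z = (patch - patch.mean()) / (patch.std() + 1e-12)
            score = float((chip_z * patch_z).mean())
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

rng = np.random.default_rng(5)
image = rng.uniform(0, 1, size=(64, 64))
chip = image[30:40, 20:30].copy()                       # pretend reference raster chip
print(ncc_match(image, chip))                           # expect a position near (30, 20)
```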
NASA Astrophysics Data System (ADS)
Hasözbek, Altug; Mathew, Kattathu; Wegener, Michael
2013-04-01
Total evaporation (TE) is a well-established analytical method for safeguards measurements of uranium and plutonium isotope-amount ratios using thermal ionization mass spectrometry (TIMS). High-accuracy, high-precision isotopic measurements find many applications in nuclear safeguards, e.g., assay measurements using isotope dilution mass spectrometry. To achieve high accuracy and precision in TIMS measurements, mass-dependent fractionation effects are minimized either by the measurement technique or by changes in the hardware components used to control the sample heating and evaporation process. At NBL, the direct total evaporation (DTE) method on the modified MAT261 instrument uses the data system to read the ion signal intensity, and its difference from a pre-determined target intensity, to control the incremental steps at which the evaporation filament is heated. The feedback and control are achieved by proprietary hardware from SPECTROMAT that uses an analog regulator in the filament power supply with direct feedback of the detector intensity. Compared to the traditional TE method on this instrument, DTE provides better precision (relative standard deviation, expressed as a percent) and accuracy (relative difference, expressed as a percent), of 0.05 to 0.08%, for low-enriched and high-enriched NBL uranium certified reference materials.
Emami, Mohammad Hasan; Ataie-Khorasgani, Masoud; Jafari-Pozve, Nasim
2017-01-01
Early detection of upper gastrointestinal (UGI) cancer has led to organ-preserving endoscopic therapy. Endoscopy is a suitable method for early diagnosis of UGI malignancies. In Iran, exclusion of malignancy is the most important indication for endoscopy. This study was designed to determine whether alarm symptoms can predict the risk of cancer in patients. A total of 3414 patients were referred to a tertiary gastrointestinal (GI) clinic in Isfahan, Iran, from 2009 to 2016 with dyspepsia, gastroesophageal reflux disease (GERD), and alarm symptoms such as weight loss, dysphagia, GI bleeding, vomiting, a positive family history of cancer, and anorexia. Each patient underwent UGI endoscopy, and patient data, including histology results, were recorded. We used logistic regression models to estimate the diagnostic accuracy of each alarm symptom. Of the 3414 patients with alarm symptoms entered in this study, 72 (2.1%) had a UGI malignancy. According to the logistic regression model, dysphagia (P < 0.001) and weight loss (P < 0.001) were significant positive predictive factors for malignancy. Furthermore, males were at a significantly higher risk of developing UGI malignancy. Based on the receiver operating characteristic curve and the area under the curve (AUC), with adequate overall calibration and model fit, dysphagia and weight loss as cancer predictors had high diagnostic accuracy (accuracy = 0.72, AUC = 0.881). Using a combination of age and alarm symptoms leads to a high positive predictive value for cancer. We recommend early endoscopy for any patient with UGI symptoms, with multiple biopsies taken from any rough or suspicious lesion, especially in males older than 50 years with dysphagia or weight loss.
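To make the analysis concrete, the sketch below fits a logistic regression to synthetic alarm-symptom data and reports an AUC, mirroring the type of model described above; the predictors, effect sizes and sample are assumptions, not the Isfahan cohort.

```python
# Minimal sketch, assuming simulated alarm-symptom predictors and malignancy labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 3000
X = np.column_stack([
    rng.integers(0, 2, n),            # dysphagia (0/1)
    rng.integers(0, 2, n),            # weight loss (0/1)
    rng.integers(0, 2, n),            # male sex (0/1)
    rng.normal(55, 12, n),            # age in years
])
logit = -6.0 + 1.8 * X[:, 0] + 1.5 * X[:, 1] + 0.7 * X[:, 2] + 0.03 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # simulated malignancy

model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients:", np.round(model.coef_, 2))
print("AUC:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 3))
```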
Low-power low-voltage superior-order curvature corrected voltage reference
NASA Astrophysics Data System (ADS)
Popa, Cosmin
2010-06-01
A complementary metal oxide semiconductor (CMOS) voltage reference with a logarithmic curvature correction is presented. The first-order compensation is realised using an original offset voltage follower (OVF) block as a proportional to absolute temperature (PTAT) voltage generator, with the advantages of reducing the silicon area and of increasing accuracy by replacing matched resistors with matched transistors. The new logarithmic curvature-correction technique is implemented using an asymmetric differential amplifier (ADA) block that compensates the logarithmic temperature-dependent term of the first-order compensated voltage reference. To increase the circuit accuracy, an original temperature-dependent current generator is designed to implement the exact form of the curvature correction. The relatively low complexity of the current squarer allows a substantial increase in circuit accuracy, which could be improved further by increasing the current generator complexity. Because most of the MOS transistors operate in weak inversion, the proposed voltage reference could be valuable for low-power applications. The circuit is implemented in 0.35 μm CMOS technology and consumes only 60 μA at T = 25°C, while being supplied at the minimal supply voltage VDD = 1.75 V. The temperature coefficient of the reference voltage is 8.7 ppm/°C, while the line sensitivity is 0.75 mV/V for a supply voltage between 1.75 V and 7 V.
Corenman, Donald S; Strauch, Eric L; Dornan, Grant J; Otterstrom, Eric; Zalepa King, Lisa
2017-09-01
Advancements in surgical navigation technology coupled with 3-dimensional (3D) radiographic data have significantly enhanced the accuracy and efficiency of spinal fusion implant placement. Increased usage of such technology has led to rising concerns regarding maintenance of the sterile field, as makeshift drape systems are fraught with breaches, presenting an increased risk of surgical site infections (SSIs). A clinical need exists for a sterile draping solution with these techniques. Our objective was to quantify the expected accuracy error associated with the 2MM and 4MM thickness Sterile-Z Patient Drape® using Medtronic O-Arm® Surgical Imaging with the StealthStation® S7® Navigation System. Camera distance to the reference frame was investigated for its contribution to accuracy error. A testing jig was placed on the radiolucent table and the Medtronic passive reference frame was attached to the jig. The StealthStation® S7® navigation camera was placed at various distances from the testing jig and the geometry error of the reference frame was captured for three different drape configurations: no drape, 2MM drape and 4MM drape. The O-Arm® gantry location and StealthStation® S7® camera position were maintained, and seven 3D acquisitions were measured for each drape configuration. Data were analyzed by a two-factor analysis of variance (ANOVA), and Bonferroni comparisons were used to assess the independent effects of camera distance and drape on accuracy error. Median (and maximum) measurement accuracy error was higher for the 2MM than for the 4MM drape at each camera distance. The most extreme error observed (4.6 mm) occurred when using the 2MM drape at the 'far' camera distance. The 4MM drape was found to induce an accuracy error of 0.11 mm (95% confidence interval, 0.06-0.15; P<0.001) relative to the no-drape testing, regardless of camera distance. The medium camera distance produced lower accuracy error than either the close (additional 0.08 mm error; 95% CI, 0-0.15; P=0.035) or far (additional 0.21 mm error; 95% CI, 0.13-0.28; P<0.001) camera distances, regardless of whether a drape was used. In comparison to the 'no drape' condition, the accuracy error of 0.11 mm when using a 4MM film drape is minimal and clinically insignificant.
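The two-factor analysis reported above can be illustrated with a small statsmodels sketch on synthetic data, with drape condition and camera distance as factors and accuracy error as the response; the cell means and noise level are invented, not the measured values.

```python
# Minimal sketch, assuming invented per-cell accuracy errors (mm) for a 3x3 design.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(7)
drape_effect = {"none": 0.0, "2MM": 0.5, "4MM": 0.1}       # mm, invented
distance_effect = {"close": 0.08, "medium": 0.0, "far": 0.2}

rows = []
for drape, d_eff in drape_effect.items():
    for distance, c_eff in distance_effect.items():
        for _ in range(7):                                  # seven acquisitions per cell
            rows.append({"drape": drape, "distance": distance,
                         "error_mm": d_eff + c_eff + rng.normal(0, 0.05)})
df = pd.DataFrame(rows)

# two-factor ANOVA with main effects for drape and distance
model = ols("error_mm ~ C(drape) + C(distance)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```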
NASA Astrophysics Data System (ADS)
Vuković, Josip; Kos, Tomislav
2017-10-01
The ionosphere introduces positioning error in Global Navigation Satellite Systems (GNSS). There are several approaches for minimizing the error, with various levels of accuracy and different extents of coverage area. To model the state of the ionosphere in a region containing a low number of reference GNSS stations, a locally adapted NeQuick 2 model can be used. Data ingestion updates the model with the local level of ionization, enabling it to follow the observed changes in ionization levels. The NeQuick 2 model was adapted to local reference Total Electron Content (TEC) data using a single-station approach and evaluated using calibrated TEC data derived from 41 testing GNSS stations distributed around the data ingestion point. Its performance was observed in European middle latitudes in different ionospheric conditions of the period between 2011 and 2015. The modelling accuracy was evaluated in four azimuthal quadrants, with coverage radii calculated for three error thresholds: 12, 6 and 3 TEC Units (TECU). Diurnal changes in the radii were observed for groups of days within periods of low and high solar activity and different seasons of the year. The statistical analysis was conducted on those groups of days, revealing trends in each of the groups, similarities between days within groups, and the 95th percentile radii as a practically applicable measure of model performance. In almost all cases the modelling accuracy was better than 12 TECU, with the largest radius from the data ingestion point. Modelling accuracy better than 6 TECU was achieved within a reduced radius in all observed periods, while accuracy better than 3 TECU was reached only in summer. The calculated radii and interpolated error levels were presented on maps. This was especially useful in analyzing the model performance during the strongest geomagnetic storms of the observed period, each of which had a unique development and influence on model accuracy. Although some of the storms severely degraded the model accuracy, during most of the disturbed periods the model could be used, but with lower accuracy than in quiet geomagnetic conditions. The comprehensive analysis of the locally adapted NeQuick 2 model performance highlighted the challenges of applying single-point data ingestion to a large region in middle latitudes and determined the achievable radii for different error thresholds in various ionospheric conditions.
Development of the One Centimeter Accuracy Geoid Model of Latvia for GNSS Measurements
NASA Astrophysics Data System (ADS)
Balodis, J.; Silabriedis, G.; Haritonova, D.; Kaļinka, M.; Janpaule, I.; Morozova, K.; Jumāre, I.; Mitrofanovs, I.; Zvirgzds, J.; Kaminskis, J.; Liepiņš, I.
2015-11-01
There is an urgent need for a highly accurate and reliable geoid model to enable prompt determination of normal heights from GNSS coordinates, given the high precision requirements in geodesy, building and high-precision road construction. Additionally, the Latvian height system is in the process of transition from BAS-77 (Baltic Height System) to the EVRS2007 system. The accuracy of the geoid model must approach ∼1 cm in view of the Baltic Rail and other large projects. The use of all available and verified data sources is planned, including an enlarged set of GNSS/levelling data, gravimetric measurement data and, additionally, vertical deflection measurements over the territory of Latvia. The work is proceeding stepwise; here, only the issue of GNSS reference network stability is discussed. Achieving a ∼1 cm precision geoid requires a homogeneous, high-precision GNSS network as a basis for ellipsoidal height determination at GNSS/levelling points. Both the LatPos and EUPOS®-Riga networks have been examined in this article.
NASA Astrophysics Data System (ADS)
Layne, G. D.
2009-12-01
Today, many areas of geochemical research utilize microanalytical determinations of trace elements in carbonate minerals. In particular, there has been an explosion in the application of Secondary Ion Mass Spectrometry (SIMS) to studies of marine biomineralization. SIMS provides highly precise determinations of Mg and Sr at the concentration levels normally encountered in corals, mollusks or fish otoliths. It is also a highly effective means for determining a wide range of other trace elements at ppm levels (e.g., Na, Fe, Mn, Ba, REE, Pb, Th, and U) in a variety of naturally occurring calcite and aragonite matrices, and so is potentially valuable in studies of diagenesis, hydrothermal fluids and carbonatitic magmas. For SIMS, modest time per spot (often <5 min), lateral spatial resolution (<10 μm), sample volume consumption (<10 ng) and overall reproducibility compare extremely favorably with other microanalytical techniques for these applications. However, accuracy and reproducibility are currently wholly limited by the homogeneity of available solid reference material, which is far inferior to the tenths-of-a-percent precision achieved by SIMS. Because sputtered ion yields of most elements vary with the major element composition of the sample matrix, the accuracy of SIMS depends intimately on matrix-matched solid reference materials. Despite its rapidly increasing use for trace element analyses of carbonates, there remains a dearth of certified reference materials suitable for calibrating SIMS. The pressed powders used by some analysts to calibrate LA-ICP-MS do not perform well for SIMS: they are not perfectly dense or homogeneous at the desired level on the micron scale of sampling. Further, they often prove incompatible with the high-vacuum requirement for stable SIMS analysis (10⁻⁸ to 10⁻⁹ torr). Some naturally occurring calcite has apparent utility as a reference material, for example equigranular calcite from some zones of carbonatite intrusions (sovites) and recrystallized calcites from highly metamorphosed metallic ore deposits. Most calcite marbles, though possibly appropriate as Sr standards, show substantial inhomogeneity in Mg, Mn and Ba. Some hydrothermal “Iceland Spar” calcite may prove useful as a reference for extremely low concentrations of Mg, Sr and Ba. The best carbonatitic calcites currently in use appear homogeneous to better than 2-3% for Sr (and somewhat less homogeneous for Mg), but these standards still require numerous replicate analyses during analytical sessions to reduce the overall uncertainty to <<1.0%. The availability of appropriate certified solid reference materials with a high degree of homogeneity would greatly benefit the utilization and inter-comparison of SIMS determinations in carbonates, while substantially reducing the time consumed in calibration. Some studies would also benefit from the extension of this effort to the characterization of appropriate standards of other rhombohedral carbonates (especially dolomite and Fe-rich calcite).
Frampton, Geoff K; Kalita, Neelam; Payne, Liz; Colquitt, Jill; Loveman, Emma
2016-04-01
Natural fluorescence in the eye may be increased or decreased by diseases that affect the retina. Imaging methods based on confocal scanning laser ophthalmoscopy (cSLO) can detect this 'fundus autofluorescence' (FAF) by illuminating the retina using a specific light 'excitation wavelength'. FAF imaging could assist the diagnosis or monitoring of retinal conditions. However, the accuracy of the method for diagnosis or monitoring is unclear. To conduct a systematic review to determine the accuracy of FAF imaging using cSLO for the diagnosis or monitoring of retinal conditions, including monitoring of response to therapy. Electronic bibliographic databases; scrutiny of reference lists of included studies and relevant systematic reviews; and searches of internet pages of relevant organisations, meetings and trial registries. Databases included MEDLINE, EMBASE, The Cochrane Library, Web of Science and the Medion database of diagnostic accuracy studies. Searches covered 1990 to November 2014 and were limited to the English language. References were screened for relevance using prespecified inclusion criteria to capture a broad range of retinal conditions. Two reviewers assessed titles and abstracts independently. Full-text versions of relevant records were retrieved and screened by one reviewer and checked by a second. Data were extracted and critically appraised using the Quality Assessment of Diagnostic Accuracy Studies criteria (QUADAS) for assessing risk of bias in test accuracy studies by one reviewer and checked by a second. At all stages any reviewer disagreement was resolved through discussion or arbitration by a third reviewer. Eight primary research studies have investigated the diagnostic accuracy of FAF imaging in retinal conditions: choroidal neovascularisation (one study), reticular pseudodrusen (three studies), cystoid macular oedema (two studies) and diabetic macular oedema (two studies). Sensitivity of FAF imaging using an excitation wavelength of 488 nm was generally high (range 81-100%), but was lower (55% and 32%) in two studies using longer excitation wavelengths (514 nm and 790 nm, respectively). Specificity ranged from 34% to 100%. However, owing to limitations of the data, none of the studies provide conclusive evidence of the diagnostic accuracy of FAF imaging. No studies on the accuracy of FAF imaging for monitoring the progression of retinal conditions or response to therapy were identified. Owing to study heterogeneity, pooling of diagnostic outcomes in meta-analysis was not conducted. All included studies had high risk of bias. In most studies the patient spectrum was not reflective of those who would present in clinical practice and no studies adequately reported how FAF images were interpreted. Although already in use in clinical practice, it is unclear whether or not FAF imaging is accurate, and whether or not it is applied and interpreted consistently for the diagnosis and/or monitoring of retinal conditions. Well-designed prospective primary research studies, which conform to the paradigm of diagnostic test accuracy assessment, are required to investigate the accuracy of FAF imaging in diagnosis and monitoring of inherited retinal dystrophies, early age-related macular degeneration, geographic atrophy and central serous chorioretinopathy. This study is registered as PROSPERO CRD42014014997. The National Institute for Health Research Health Technology Assessment programme.
Fast and accurate genotype imputation in genome-wide association studies through pre-phasing
Howie, Bryan; Fuchsberger, Christian; Stephens, Matthew; Marchini, Jonathan; Abecasis, Gonçalo R.
2013-01-01
Sequencing efforts, including the 1000 Genomes Project and disease-specific efforts, are producing large collections of haplotypes that can be used for genotype imputation in genome-wide association studies (GWAS). Imputing from these reference panels can help identify new risk alleles, but the use of large panels with existing methods imposes a high computational burden. To keep imputation broadly accessible, we introduce a strategy called “pre-phasing” that maintains the accuracy of leading methods while cutting computational costs by orders of magnitude. In brief, we first statistically estimate the haplotypes for each GWAS individual (“pre-phasing”) and then impute missing genotypes into these estimated haplotypes. This reduces the computational cost because: (i) the GWAS samples must be phased only once, whereas standard methods would implicitly re-phase with each reference panel update; (ii) it is much faster to match a phased GWAS haplotype to one reference haplotype than to match unphased GWAS genotypes to a pair of reference haplotypes. This strategy will be particularly valuable for repeated imputation as reference panels evolve. PMID:22820512
NASA Astrophysics Data System (ADS)
Hemenway, Paul
1991-07-01
Determination of a non-rotating Reference Frame is crucial to progress in many areas, including: Galactic motions, local (Oort's A and B) and global (R0) parameters derived from them, solar system motion discrepancies (Planet X); and in conjunction with the VLBI radio reference frame, the registration of radio and optical images at an accuracy well below the resolution limit of HST images (0.06 arcsec). The goal of the Program is to tie the HIPPARCOS and Extragalactic Reference Frames together at the 0.0005 arcsec and 0.0005 arcsec/year level. The HST data will allow a determination of the brightness distribution in the stellar and extragalactic objects observed and time dependent changes therein at the 0.001 arcsec/year level. The Program requires targets distributed over the whole sky to define a rigid Reference Frame. GTO observations will provide initial first epoch data and preliminary proper motions. The observations will consist of relative positions of Extragalactic objects (EGOs) and HIPPARCOS stars, measured with the FGSs.
Delatour, Vincent; Lalere, Beatrice; Saint-Albin, Karène; Peignaux, Maryline; Hattchouel, Jean-Marc; Dumont, Gilles; De Graeve, Jacques; Vaslin-Reimann, Sophie; Gillery, Philippe
2012-11-20
The reliability of biological tests is a major public health issue for patient care and involves high economic stakes. Reference methods, as well as regular external quality assessment schemes (EQAS), are needed to monitor the analytical performance of field methods. However, control material commutability is a major concern when assessing method accuracy. To overcome material non-commutability, we investigated the possibility of using lyophilized serum samples together with a limited number of frozen serum samples to assign matrix-corrected target values, taking the example of glucose assays. The trueness of current glucose assays was first measured against a primary reference method using human frozen sera. Methods using hexokinase and glucose oxidase with spectroreflectometric detection proved very accurate, with bias ranging between -2.2% and +2.3%. The bias of methods using glucose oxidase with spectrophotometric detection was +4.5%. The matrix-related bias of the lyophilized materials was then determined and ranged from +2.5% to -14.4%. Matrix-corrected target values were assigned and used to assess the trueness of 22 sub-peer groups. We demonstrated that matrix-corrected target values can be a valuable tool to assess field method accuracy in large-scale surveys where commutable materials are not available in sufficient amounts at acceptable cost.
Artificial intelligence techniques for automatic screening of amblyogenic factors.
Van Eenwyk, Jonathan; Agah, Arvin; Giangiacomo, Joseph; Cibis, Gerhard
2008-01-01
To develop a low-cost automated video system to effectively screen children aged 6 months to 6 years for amblyogenic factors. In 1994 one of the authors (G.C.) described video vision development assessment, a digitizable analog video-based system combining Brückner pupil red reflex imaging and eccentric photorefraction to screen young children for amblyogenic factors. The images were analyzed manually with this system. We automated the capture of digital video frames and pupil images and applied computer vision and artificial intelligence to analyze and interpret results. The artificial intelligence systems were evaluated by a tenfold testing method. The best system was the decision tree learning approach, which had an accuracy of 77%, compared to the "gold standard" specialist examination with a "refer/do not refer" decision. Criteria for referral were strabismus, including microtropia, and refractive errors and anisometropia considered to be amblyogenic. Eighty-two percent of strabismic individuals were correctly identified. High refractive errors were also correctly identified and referred 90% of the time, as was significant anisometropia. The program was less accurate in identifying more moderate refractive errors, below +5 and less than -7. Although we are pursuing a variety of avenues to improve the accuracy of the automated analysis, the program in its present form provides acceptable cost benefits for detecting amblyogenic factors in children aged 6 months to 6 years.
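The tenfold evaluation of the decision-tree screener described above can be illustrated with a small scikit-learn sketch. This is a hypothetical illustration only: the features, data, and tree settings below are placeholders, not the authors' actual pipeline.

```python
# Hypothetical sketch of tenfold evaluation of a decision-tree "refer / do not refer"
# classifier; feature values and labels are simulated placeholders, not study data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n = 200
# Placeholder features standing in for measurements extracted from the
# Brueckner reflex / photorefraction images (e.g., crescent size, reflex asymmetry).
X = rng.normal(size=(n, 4))
y = rng.integers(0, 2, size=n)          # 1 = refer, 0 = do not refer (gold standard)

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"tenfold accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```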
Zayed, M A; El-Rasheedy, El-Gazy A
2012-03-01
Two simple, sensitive, cheap and reliable spectrophotometric methods are suggested for micro-determination of pseudoephedrine in its pure form and in a pharmaceutical preparation (Sinofree Tablets). The first depends on the reaction of the drug with an inorganic sensitive reagent, the molybdate anion, in aqueous media via an ion-pair formation mechanism. The second depends on the reaction of the drug with a π-acceptor reagent, DDQ, in non-aqueous media via formation of a charge transfer complex. These reactions were studied under various conditions and the optimum parameters were selected. Under proper conditions the suggested procedures were successfully applied for micro-determination of pseudoephedrine in pure form and in Sinofree Tablets without interference from excipients. The values of SD, RSD, recovery %, LOD, LOQ and Sandell sensitivity indicate the high accuracy and precision of the applied procedures. The results obtained were compared with data obtained by an official method and showed agreement with the DDQ procedure results, while indicating higher accuracy for the molybdate data. Therefore, the suggested procedures are now successfully applied in routine analysis of this drug in its pharmaceutical formulation (Sinofree) at the Saudi Arabian Pharmaceutical Company (SPIMACO) in Boridah El-Qaseem, Saudi Arabia, replacing the imported kits previously used. Copyright © 2011 Elsevier B.V. All rights reserved.
Reference List Accuracy in Social Work Journals: A Follow-Up Analysis
ERIC Educational Resources Information Center
Mitchell-Williams, Missy T.; Skipper, Antonius D.; Alexander, Marvin C.; Wilks, Scott E.
2017-01-01
Purpose: Following up a "Research on Social Work Practice" article published a decade ago, this study aimed to examine reference error rates among five widely circulated social work journals. Methods: A stratified random sample of references was selected from the year 2013 (N = 500, 100/journal). Each was verified against the original…
Intellectual Functioning and Aging: A Selected Bibliography. Technical Bibliographies on Aging.
ERIC Educational Resources Information Center
Schaie, K. Warner; Zelinski, Elizabeth M.
The selected bibliography contains about 400 references taken from a keysort file of more than 45,000 references, compiled from commercially available data bases and published sources, relevant to gerontology. Those of questionable accuracy were checked or deleted during the verification process. Most references are in English and were selected…
Relativistic theory for picosecond time transfer in the vicinity of Earth
NASA Technical Reports Server (NTRS)
Petit, G.; Wolf, P.
1994-01-01
The problem of light propagation is treated in a geocentric reference system with the goal of ensuring picosecond accuracy for time transfer techniques using electromagnetic signals in the vicinity of the Earth. We give an explicit formula for a one way time transfer, to be applied when the spatial coordinates of the time transfer stations are known in a geocentric reference system rotating with the Earth. This expression is extended, at the same accuracy level of one picosecond, to the special cases of two way and LASSO time transfers via geostationary satellites.
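For orientation, a generic textbook form of the one-way geocentric time transfer at the picosecond level, written for station coordinates expressed in the Earth-fixed rotating frame, is sketched below. This is the standard expression (geometric delay, gravitational Shapiro term, Sagnac term) and not necessarily the exact formulation derived in the paper.

```latex
% Generic one-way geocentric time transfer (textbook form; higher-order terms omitted).
t_B - t_A \simeq \frac{D_{AB}}{c}
            + \frac{2 G M_E}{c^{3}}
              \ln\!\left(\frac{r_A + r_B + D_{AB}}{r_A + r_B - D_{AB}}\right)
            + \frac{\omega_E}{c^{2}}\left(x_A y_B - y_A x_B\right)
```

Here D_AB is the geometric distance between the emission and reception points, r_A and r_B are the geocentric distances of the stations, the logarithmic term is the gravitational (Shapiro) delay, and the last term is the Sagnac correction that appears when the station coordinates are given in the rotating frame.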
Remans, Tony; Keunen, Els; Bex, Geert Jan; Smeets, Karen; Vangronsveld, Jaco; Cuypers, Ann
2014-10-01
Reverse transcription-quantitative PCR (RT-qPCR) has been widely adopted to measure differences in mRNA levels; however, biological and technical variation strongly affects the accuracy of the reported differences. RT-qPCR specialists have warned that, unless researchers minimize this variability, they may report inaccurate differences and draw incorrect biological conclusions. The Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines describe procedures for conducting and reporting RT-qPCR experiments. The MIQE guidelines enable others to judge the reliability of reported results; however, a recent literature survey found low adherence to these guidelines. Additionally, even experiments that use appropriate procedures remain subject to individual variation that statistical methods cannot correct. For example, since ideal reference genes do not exist, the widely used method of normalizing RT-qPCR data to reference genes generates background noise that affects the accuracy of measured changes in mRNA levels. However, current RT-qPCR data reporting styles ignore this source of variation. In this commentary, we direct researchers to appropriate procedures, outline a method to present the remaining uncertainty in data accuracy, and propose an intuitive way to select reference genes to minimize uncertainty. Reporting the uncertainty in data accuracy also serves for quality assessment, enabling researchers and peer reviewers to confidently evaluate the reliability of gene expression data. © 2014 American Society of Plant Biologists. All rights reserved.
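As a minimal sketch of the reference-gene normalization discussed above (the common 2^-ΔΔCq form, with the target normalized to the mean quantification cycle of several reference genes), the snippet below uses invented gene Cq values; it does not reproduce the authors' proposed uncertainty reporting or reference-gene selection method.

```python
# Sketch of 2^-ddCq relative quantification against the geometric mean of several
# reference genes; all Cq values below are invented placeholders.
import numpy as np

def relative_expression(cq_target_ctrl, cq_target_trt, cq_refs_ctrl, cq_refs_trt):
    """Fold change of a target gene (treated vs. control), normalized to reference
    genes; assumes ~100% PCR efficiency for every assay."""
    norm_ctrl = np.mean(cq_refs_ctrl)   # mean Cq == geometric mean of the 2^-Cq quantities
    norm_trt = np.mean(cq_refs_trt)
    d_ctrl = cq_target_ctrl - norm_ctrl          # dCq in control
    d_trt = cq_target_trt - norm_trt             # dCq in treatment
    return 2.0 ** -(d_trt - d_ctrl)              # 2^-ddCq

# Example with three reference genes (placeholder values):
print(relative_expression(24.0, 22.5,
                          cq_refs_ctrl=[18.0, 20.1, 19.4],
                          cq_refs_trt=[18.2, 20.0, 19.6]))
```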
Absolute and relative height-pixel accuracy of SRTM-GL1 over the South American Andean Plateau
NASA Astrophysics Data System (ADS)
Satge, Frédéric; Denezine, Matheus; Pillco, Ramiro; Timouk, Franck; Pinel, Sébastien; Molina, Jorge; Garnier, Jérémie; Seyler, Frédérique; Bonnet, Marie-Paule
2016-11-01
Previously available only over the Continental United States (CONUS), the 1 arc-second mesh size (spatial resolution) SRTM-GL1 (Shuttle Radar Topographic Mission - Global 1) product has been freely available worldwide since November 2014. With a relatively small mesh size, this digital elevation model (DEM) provides valuable topographic information over remote regions. SRTM-GL1 is assessed for the first time over the South American Andean Plateau in terms of both the absolute and relative vertical point-to-point accuracies at the regional scale and for different slope classes. For comparison, SRTM-v4 and the ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) Global DEM version 2 (GDEM-v2) are also considered. A total of approximately 160,000 ICESat/GLAS (Ice, Cloud and Land Elevation Satellite/Geoscience Laser Altimeter System) data are used as ground reference measurements. Relative error is often neglected in DEM assessments due to the lack of reference data. A new methodology is proposed to assess the relative accuracies of SRTM-GL1, SRTM-v4 and GDEM-v2 based on a comparison with ICESat/GLAS measurements. Slope values derived from DEMs and ICESat/GLAS measurements from approximately 265,000 ICESat/GLAS point pairs are compared using quantitative and categorical statistical analysis introducing a new index: the False Slope Ratio (FSR). Additionally, a reference hydrological network is derived from Google Earth and compared with river networks derived from the DEMs to assess each DEM's potential for hydrological applications over the region. In terms of the absolute vertical accuracy on a global scale, GDEM-v2 is the most accurate DEM, while SRTM-GL1 is more accurate than SRTM-v4. However, a simple bias correction makes SRTM-GL1 the most accurate DEM over the region in terms of vertical accuracy. The relative accuracy results generally did not corroborate the absolute vertical accuracy. GDEM-v2 presents the lowest statistical results based on the relative accuracy, while SRTM-GL1 is the most accurate. Vertical accuracy and relative accuracy are two independent components that must be jointly considered when assessing a DEM's potential. DEM accuracies increased with slope. In terms of hydrological potential, SRTM products are more accurate than GDEM-v2. However, the DEMs exhibit river extraction limitations over the region due to the low regional slope gradient.
Chaswal, Vibha; Weldon, Michael; Gupta, Nilendu; Chakravarti, Arnab
2014-01-01
We present commissioning and comprehensive evaluation of ArcCHECK as QA equipment for volumetric-modulated arc therapy (VMAT), using the 6 MV photon beam with and without the flattening filter, and the SNC Patient software (version 6.2). In addition to commissioning involving absolute dose calibration, array calibration, and PMMA density verification, ArcCHECK was evaluated for its response dependency on linac dose rate, instantaneous dose rate, radiation field size, beam angle, and couch insertion. Scatter dose characterization, consistency and symmetry of response, and dosimetry accuracy evaluation for fixed aperture arcs and clinical VMAT patient plans were also investigated. All the evaluation tests were performed with the central plug inserted and the homogeneous PMMA density value. Results of gamma analysis demonstrated an overall agreement between ArcCHECK-measured and TPS-calculated reference doses. The diode-based field size dependency was found to be within 0.5% of the reference. The dose rate-based dependency was well within 1% of the TPS reference, and the angular dependency was found to be ±3% of the reference, as tested for BEV angles, for both beams. Dosimetry of fixed arcs, using both narrow and wide field widths, resulted in clinically acceptable global gamma passing rates at the 3%/3 mm level and 10% threshold. Dosimetry of narrow arcs showed an improvement over published literature. The clinical VMAT cases demonstrated a high level of dosimetry accuracy in gamma passing rates. PACS numbers: 87.56.Fc, 87.55.kh, 87.55.Qr PMID:25207411
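The gamma passing rates quoted above follow the usual gamma-index construction (dose-difference and distance-to-agreement criteria such as 3%/3 mm with a 10% low-dose threshold). A simplified one-dimensional, globally normalized sketch is shown below; it is not the SNC Patient implementation, and real QA software operates on interpolated 2D/3D dose grids.

```python
# Simplified 1D global gamma index (e.g., 3% / 3 mm with a 10% low-dose threshold).
import numpy as np

def gamma_pass_rate(x_ref, d_ref, x_eval, d_eval,
                    dose_crit=0.03, dta_mm=3.0, threshold=0.10):
    d_max = d_ref.max()
    passed, total = 0, 0
    for xe, de in zip(x_eval, d_eval):
        if de < threshold * d_max:                      # skip low-dose points
            continue
        gamma_sq = ((x_ref - xe) / dta_mm) ** 2 \
                 + ((d_ref - de) / (dose_crit * d_max)) ** 2
        total += 1
        passed += np.sqrt(gamma_sq.min()) <= 1.0        # pass if minimum gamma <= 1
    return passed / total if total else float("nan")

x = np.linspace(0.0, 100.0, 401)                        # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)                 # reference (TPS) profile
meas = np.exp(-((x - 50.8) / 20.0) ** 2) * 1.01         # measured profile, slightly shifted/scaled
print(f"gamma pass rate: {gamma_pass_rate(x, ref, x, meas):.3f}")
```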
Kang, Geraldine H.; Cruite, Irene; Shiehmorteza, Masoud; Wolfson, Tanya; Gamst, Anthony C.; Hamilton, Gavin; Bydder, Mark; Middleton, Michael S.; Sirlin, Claude B.
2016-01-01
Purpose To evaluate magnetic resonance imaging (MRI)-determined proton density fat fraction (PDFF) reproducibility across two MR scanner platforms and, using MR spectroscopy (MRS)-determined PDFF as reference standard, to confirm MRI-determined PDFF estimation accuracy. Materials and Methods This prospective, cross-sectional, crossover, observational pilot study was approved by an Institutional Review Board. Twenty-one subjects gave written informed consent and underwent liver MRI and MRS at both 1.5T (Siemens Symphony scanner) and 3T (GE Signa Excite HD scanner). MRI-determined PDFF was estimated using an axial 2D spoiled gradient-recalled echo sequence with low flip-angle to minimize T1 bias and six echo-times to permit correction of T2* and fat-water signal interference effects. MRS-determined PDFF was estimated using a stimulated-echo acquisition mode sequence with long repetition time to minimize T1 bias and five echo times to permit T2 correction. Interscanner reproducibility of MRI determined PDFF was assessed by correlation analysis; accuracy was assessed separately at each field strength by linear regression analysis using MRS-determined PDFF as reference standard. Results 1.5T and 3T MRI-determined PDFF estimates were highly correlated (r = 0.992). MRI-determined PDFF estimates were accurate at both 1.5T (regression slope/intercept = 0.958/−0.48) and 3T (slope/intercept = 1.020/0.925) against the MRS-determined PDFF reference. Conclusion MRI-determined PDFF estimation is reproducible and, using MRS-determined PDFF as reference standard, accurate across two MR scanner platforms at 1.5T and 3T. PMID:21769986
Demura, Shinichi; Sato, Susumu; Nakada, Masakatsu; Minami, Masaki; Kitabayashi, Tamotsu
2003-07-01
This study compared the accuracy of body density (Db) estimation methods using hydrostatic weighing without complete head submersion (HW(withoutHS)), as proposed by Donnelly et al. (1988) and by Donnelly and Sintek (1984), against Goldman and Buskirk's approach (1961) as the reference. Donnelly et al.'s method estimates Db from a regression equation using HW(withoutHS), whereas Donnelly and Sintek's method estimates it from HW(withoutHS) and head anthropometric variables. Fifteen Japanese males (173.8+/-4.5 cm, 63.6+/-5.4 kg, 21.2+/-2.8 years) and fifteen females (161.4+/-5.4 cm, 53.8+/-4.8 kg, 21.0+/-1.4 years) participated in this study. All subjects were measured for head length, head width and HWs under the two conditions, with and without head submersion. To examine the consistency of the Db estimates, the correlation coefficients between the estimated values and the reference (Goldman and Buskirk, 1961) were calculated. The standard errors of estimation (SEE) were calculated by regression analysis using the reference value as the dependent variable and the estimated values as independent variables. In addition, the systematic errors of the two estimation methods were investigated by the Bland-Altman technique (Bland and Altman, 1986). Donnelly and Sintek's equation showed a strong correlation with the reference (r=0.960, p<0.01), but had larger differences from the reference than Donnelly et al.'s equation. Further studies are needed to develop new prediction equations for Japanese subjects that account for sex and individual differences in head anthropometry.
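The Bland-Altman comparison mentioned above reduces to the mean difference (bias) and limits of agreement between estimated and reference body density; a minimal sketch with invented Db values:

```python
# Minimal Bland-Altman style agreement summary; the Db values are invented placeholders.
import numpy as np

db_reference = np.array([1.062, 1.071, 1.055, 1.048, 1.067])   # e.g., reference method (g/cm^3)
db_estimated = np.array([1.060, 1.074, 1.052, 1.050, 1.070])   # e.g., HW-without-head-submersion equation

diff = db_estimated - db_reference
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)                            # 95% limits of agreement half-width
print(f"bias = {bias:+.4f} g/cm^3, "
      f"limits of agreement = [{bias - half_width:+.4f}, {bias + half_width:+.4f}]")
```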
Pai, Madhukar; Kalantri, Shriprakash; Pascopella, Lisa; Riley, Lee W; Reingold, Arthur L
2005-10-01
To summarize, using meta-analysis, the accuracy of bacteriophage-based assays for the detection of rifampicin resistance in Mycobacterium tuberculosis. By searching multiple databases and sources we identified a total of 21 studies eligible for meta-analysis. Of these, 14 studies used phage amplification assays (including eight studies on the commercial FASTPlaque-TB kits), and seven used luciferase reporter phage (LRP) assays. Sensitivity, specificity, and agreement between phage assay and reference standard (e.g. agar proportion method or BACTEC 460) results were the main outcomes of interest. When performed on culture isolates (N=19 studies), phage assays appear to have relatively high sensitivity and specificity. Eleven of 19 (58%) studies reported sensitivity and specificity estimates ≥95%, and 13 of 19 (68%) studies reported ≥95% agreement with reference standard results. Specificity estimates were slightly lower and more variable than sensitivity; 5 of 19 (26%) studies reported specificity <90%. Only two studies performed phage assays directly on sputum specimens; although one study reported sensitivity and specificity of 100% and 99%, respectively, another reported sensitivity of 86% and specificity of 73%. Current evidence is largely restricted to the use of phage assays for the detection of rifampicin resistance in culture isolates. When used on culture isolates, these assays appear to have high sensitivity, but variable and slightly lower specificity. In contrast, evidence is lacking on the accuracy of these assays when they are directly applied to sputum specimens. If phage-based assays can be directly used on clinical specimens and if they are shown to have high accuracy, they have the potential to improve the diagnosis of MDR-TB. However, before phage assays can be successfully used in routine practice, several concerns have to be addressed, including unexplained false positives in some studies, potential for contamination and indeterminate results.
Nedelcu, Robert; Olsson, Pontus; Nyström, Ingela; Thor, Andreas
2018-02-23
Several studies have evaluated the accuracy of intraoral scanners (IOS), but data are lacking regarding variations between IOS systems in the depiction of the critical finish line and the finish line accuracy. The aim of this study was to analyze the level of finish line distinctness (FLD) and finish line accuracy (FLA) in 7 intraoral scanners (IOS) and one conventional impression (IMPR), and furthermore to assess parameters of resolution, tessellation, topography, and color. A dental model with a crown preparation including supra- and subgingival finish lines was reference-scanned with an industrial scanner (ATOS), and scanned with seven IOS: 3M, CS3500 and CS3600, DWIO, Omnicam, Planscan and Trios. An IMPR was taken and poured, and the model was scanned with a laboratory scanner. The ATOS scan was cropped at the finish line and best-fit aligned for 3D Compare Analysis (Geomagic). Accuracy was visualized, and descriptive analysis was performed. All IOS, except Planscan, had comparable overall accuracy; however, FLD and FLA varied substantially. Trios presented the highest FLD and, with CS3600, the highest FLA. 3M and DWIO had low overall FLD and low FLA in subgingival areas, whilst Planscan had overall low FLD and FLA, as well as lower general accuracy. IMPR presented high FLD, except in subgingival areas, and high FLA. Trios had the highest resolution among IOS, by a factor of 1.6 to 3.1, followed by IMPR, DWIO, Omnicam, CS3500, 3M, CS3600 and Planscan. Tessellation was found to be non-uniform except in 3M and DWIO. Topographic variation was found for 3M and Trios, with deviations below ±25 μm for Trios. Inclusion of color enhanced the identification of the finish line in Trios, Omnicam and CS3600, but not in Planscan. There were sizeable variations between IOS, with both higher and lower FLD and FLA than IMPR. High FLD was more related to high localized finish line resolution and non-uniform tessellation than to high overall resolution. Topography variations were low. Color improved finish line identification in some IOS. It is imperative that clinicians critically evaluate the digital impression, being aware of varying technical limitations among IOS, in particular when challenging subgingival conditions apply.
NChina16: A stable geodetic reference frame for geological hazard studies in north China
NASA Astrophysics Data System (ADS)
Wang, G.; Yan, B.; Gan, W.; Geng, J.
2017-12-01
This study established a stable North China Reference Frame 2016 (NChina16) using five years of continuous GPS observations (2011.8 to 2016.8) from 12 continuously operating reference stations (CORS) fixed to the stable interior of the North China Craton. Applications of NChina16 in landslide, subsidence, and post-seismic displacement studies are illustrated. The primary result of this study is the seven parameters for transforming Cartesian ECEF (Earth-Centered, Earth-Fixed) coordinates X, Y, and Z from the International GNSS Service Reference Frame 2008 (IGS08) to NChina16. The seven parameters include the epoch that is used to tie the regional reference frame to IGS08 and the time derivatives of three translations and three rotations. A method for developing a regional geodetic reference frame is introduced in detail. The GIPSY-OASIS (V6.4) software package was used to obtain the precise point positioning (PPP) time series with respect to IGS08. The stability (accuracy) of NChina16 is about 0.5 mm/year in both vertical and horizontal directions. This study also developed a regional seasonal model for correcting vertical displacement time series data derived from the PPP solutions. Long-term GPS observations (1999-2016) from five CORS in north China were used to develop the seasonal model. According to this study, the PPP daily solutions with respect to NChina16 could achieve 2-3 mm horizontal accuracy and 4-5 mm vertical accuracy after being modified by the regional model. NChina16 will be critical to the long-term landslide, subsidence, fault, and structural monitoring in north China and for ongoing post-seismic crustal deformation studies in Japan. NChina16 will be incrementally improved and synchronized with the IGS reference frame update.
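A regional frame tied to a global frame by an epoch plus translation- and rotation-rate parameters, as described above, can be applied with a short routine of the following form. All numeric parameter values here are placeholders, not the published NChina16 parameters.

```python
# Generic epoch-plus-rates transformation from a global frame (e.g., IGS08) to a
# plate-fixed regional frame; all numeric parameter values are placeholders.
import numpy as np

T0 = 2013.0                                              # tie epoch (decimal year), placeholder
TDOT = np.array([0.5e-3, -0.3e-3, 0.2e-3])               # translation rates (m/yr), placeholder
RDOT = np.array([0.1e-9, -0.2e-9, 0.05e-9])              # rotation rates (rad/yr), placeholder

def to_regional_frame(x_global, t):
    """Map an ECEF position (m) given in the global frame at decimal year t to the
    regional frame by removing the translation and small-angle rotation accumulated
    since the tie epoch."""
    dt = t - T0
    rotation_effect = np.cross(RDOT * dt, x_global)      # small-angle rotation contribution
    return x_global - TDOT * dt - rotation_effect

x_igs08 = np.array([-2148744.0, 4426641.0, 4044656.0])   # example ECEF position (m), placeholder
print(to_regional_frame(x_igs08, 2016.5))
```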
Fuller, Douglas O; Parenti, Michael S; Gad, Adel M; Beier, John C
2012-01-01
Irrigation along the Nile River has resulted in dramatic changes in the biophysical environment of Upper Egypt. In this study we used a combination of MODIS 250 m NDVI data and Landsat imagery to identify areas that changed from 2001-2008 as a result of irrigation and water-level fluctuations in the Nile River and nearby water bodies. We used two different methods of time series analysis -- principal components (PCA) and harmonic decomposition (HD), applied to the MODIS 250 m NDVI images to derive simple three-class land cover maps and then assessed their accuracy using a set of reference polygons derived from 30 m Landsat 5 and 7 imagery. We analyzed our MODIS 250 m maps against a new MODIS global land cover product (MOD12Q1 collection 5) to assess whether regionally specific mapping approaches are superior to a standard global product. Results showed that the accuracy of the PCA-based product was greater than the accuracy of either the HD or MOD12Q1 products for the years 2001, 2003, and 2008. However, the accuracy of the PCA product was only slightly better than the MOD12Q1 for 2001 and 2003. Overall, the results suggest that our PCA-based approach produces a high level of user and producer accuracies, although the MOD12Q1 product also showed consistently high accuracy. Overlay of 2001-2008 PCA-based maps showed a net increase of 12 129 ha of irrigated vegetation, with the largest increase found from 2006-2008 around the Districts of Edfu and Kom Ombo. This result was unexpected in light of ambitious government plans to develop 336 000 ha of irrigated agriculture around the Toshka Lakes.
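The PCA-based workflow sketched above (compress each pixel's NDVI time series to a few components, classify, then score against reference labels) can be illustrated compactly. The data below are synthetic, and the study's actual class definitions, MODIS pre-processing, and harmonic-decomposition alternative are not reproduced.

```python
# Toy PCA-based classification of per-pixel NDVI time series into three cover classes,
# scored against reference labels; all data here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(1)
n_pixels, n_dates = 500, 46                         # e.g., 16-day NDVI composites over two years
ndvi = rng.random((n_pixels, n_dates))              # placeholder NDVI stack (pixels x dates)

scores = PCA(n_components=3).fit_transform(ndvi)    # seasonal signal compressed to 3 components
pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

# Stand-in for Landsat-derived reference polygons; in a real workflow the unsupervised
# cluster labels would first be matched to the reference classes before scoring.
reference = rng.integers(0, 3, size=n_pixels)
print(confusion_matrix(reference, pred))
print("overall accuracy:", accuracy_score(reference, pred))
```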
Diagnostic Accuracy of the Slump Test for Identifying Neuropathic Pain in the Lower Limb.
Urban, Lawrence M; MacNeil, Brian J
2015-08-01
Diagnostic accuracy study with nonconsecutive enrollment. To assess the diagnostic accuracy of the slump test for neuropathic pain (NeP) in those with low to moderate levels of chronic low back pain (LBP), and to determine whether accuracy of the slump test improves by adding anatomical or qualitative pain descriptors. Neuropathic pain has been linked with poor outcomes, likely due to inadequate diagnosis, which precludes treatment specific for NeP. Current diagnostic approaches are time consuming or lack accuracy. A convenience sample of 21 individuals with LBP, with or without radiating leg pain, was recruited. A standardized neurosensory examination was used to determine the reference diagnosis for NeP. Afterward, the slump test was administered to all participants. Reports of pain location and quality produced during the slump test were recorded. The neurosensory examination designated 11 of the 21 participants with LBP/sciatica as having NeP. The slump test displayed high sensitivity (0.91), moderate specificity (0.70), a positive likelihood ratio of 3.03, and a negative likelihood ratio of 0.13. Adding the criterion of pain below the knee significantly increased specificity to 1.00 (positive likelihood ratio = 11.9). Pain-quality descriptors did not improve diagnostic accuracy. The slump test was highly sensitive in identifying NeP within the study sample. Adding a pain-location criterion improved specificity. Combining the diagnostic outcomes was very effective in identifying all those without NeP and half of those with NeP. Limitations arising from the small and narrow spectrum of participants with LBP/sciatica sampled within the study prevent application of the findings to a wider population. Diagnosis, level 4-.
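The reported sensitivity, specificity, and likelihood ratios follow directly from a 2×2 table against the neurosensory reference. In the sketch below the cell counts are inferred (10 of 11 NeP cases and 7 of 10 non-NeP cases correctly classified) so as to reproduce the quoted values; the abstract does not list the raw cells.

```python
# Sensitivity, specificity and likelihood ratios from a 2x2 diagnostic table.
# The counts below are inferred, not reported explicitly in the abstract.
def diagnostic_stats(tp, fn, tn, fp):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)          # positive likelihood ratio
    lr_neg = (1 - sens) / spec          # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

sens, spec, lrp, lrn = diagnostic_stats(tp=10, fn=1, tn=7, fp=3)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} LR+={lrp:.2f} LR-={lrn:.2f}")
# -> sensitivity=0.91 specificity=0.70 LR+=3.03 LR-=0.13
```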
[Geographical distribution of the Serum creatinine reference values of healthy adults].
Wei, De-Zhi; Ge, Miao; Wang, Cong-Xia; Lin, Qian-Yi; Li, Meng-Jiao; Li, Peng
2016-11-20
To explore the relationship between serum creatinine (Scr) reference values in healthy adults and geographic factors, and to provide evidence for establishing Scr reference values in different regions. We collected 29 697 Scr reference values from healthy adults measured by 347 medical facilities in 23 provinces, 4 municipalities and 5 autonomous regions. We chose 23 geographical factors and analyzed their correlation with Scr reference values to identify the factors correlated significantly with Scr reference values. Using principal component analysis and ridge regression analysis, two predictive models were constructed, and the optimal model was chosen by comparing how well each model's predictions fit the measured values. The distribution map of Scr reference values was drawn using the Kriging interpolation method. Seven geographic factors, including latitude, annual sunshine duration, annual average temperature, annual average relative humidity, annual precipitation, annual temperature range and topsoil (silt) cation exchange capacity, were found to correlate significantly with Scr reference values. The overall distribution of Scr reference values showed a pattern of high values in the south and low values in the north, varying consistently with latitude. The geographic factor data for a given region allow prediction of the Scr values of healthy adults in that region. Analysis of these geographical factors can facilitate the determination of region-specific reference values and improve the accuracy of clinical diagnoses.
Fananapazir, Ghaneh; Bashir, Mustafa R; Corwin, Michael T; Lamba, Ramit; Vu, Catherine T; Troppmann, Christoph
2017-03-01
To determine the accuracy of ferumoxytol-enhanced magnetic resonance angiography (MRA) in assessing the severity of transplant renal artery stenosis (TRAS), using digital subtraction angiography (DSA) as the reference standard. Our Institutional Review Board approved this retrospective, Health Insurance Portability and Accountability Act-compliant study. Thirty-three patients with documented clinical suspicion for TRAS (elevated serum creatinine, refractory hypertension, edema, and/or audible bruit) and/or concerning sonographic findings (elevated renal artery velocity and/or intraparenchymal parvus tardus waveforms) underwent a 1.5T MRA with ferumoxytol prior to DSA. All DSAs were independently reviewed by an interventional radiologist and served as the reference standard. The MRAs were reviewed by three readers who were blinded to the ultrasound and DSA findings for the presence and severity of TRAS. Sensitivity, specificity, and accuracy for identifying substantial stenoses (>50%) were determined. Intraclass correlation coefficients (ICCs) were calculated among readers. Mean differences between the percent stenosis from each MRA reader and DSA were calculated. On DSA, a total of 42 stenoses were identified in the 33 patients. The sensitivity, specificity, and accuracy of MRA in detecting substantial stenoses were 100%, 75-87.5%, and 95.2-97.6%, respectively, among the readers. There was excellent agreement among readers as to the percent stenosis (ICC = 0.82). MRA overestimated the degree of stenosis by 3.9-9.6% compared to DSA. Ferumoxytol-enhanced MRA provides high sensitivity, specificity, and accuracy for determining the severity of TRAS. Our results suggest that it can potentially be used as a noninvasive examination following ultrasound to reduce the number of unnecessary conventional angiograms. Level of Evidence: 3. J. Magn. Reson. Imaging 2017;45:779-785. © 2016 International Society for Magnetic Resonance in Medicine.
NASA Technical Reports Server (NTRS)
Vessot, Robert F. C.
1989-01-01
Clocks have played a strong role in the development of general relativity. The concept of the proper clock is presently best realized by atomic clocks, whose development as precision instruments has evolved very rapidly in the last decades. To put a historical perspective on this progress since the year AD 1000, the time stability of various clocks expressed in terms of seconds of time error over one day of operation is shown. This stability of operation must not be confused with accuracy. Stability refers to the constancy of a clock's operation as compared to that of some other clocks that serve as time references. Accuracy, on the other hand, is the ability to reproduce a previously defined frequency. The issues are outlined that must be considered when accuracy and stability of clocks and oscillators are studied. In general, the most widely used resonances result from the hyperfine interaction of the nuclear magnetic dipole moment and that of the outermost electron, which is characteristic of hydrogen and the alkali atoms. During the past decade hyperfine resonances of ions have also been used. The principal reason for both the accuracy and the stability of atomic clocks is the ability to obtain very narrow hyperfine transition resonances by isolating the atom in some way so that only the applied stimulating microwave magnetic field is a significant source of perturbation. It is also important to make resonance transitions among hyperfine magnetic sublevels whose separation is independent, at least to first order, of the magnetic field. In the case of ions stored in traps operating at high magnetic fields, one selects the trapping field to be consistent with a field-independent transition of the trapped atoms.
Accuracy assessment of the global TanDEM-X Digital Elevation Model with GPS data
NASA Astrophysics Data System (ADS)
Wessel, Birgit; Huber, Martin; Wohlfart, Christian; Marschalk, Ursula; Kosmann, Detlev; Roth, Achim
2018-05-01
The primary goal of the German TanDEM-X mission is the generation of a highly accurate and global Digital Elevation Model (DEM) with global accuracies of at least 10 m absolute height error (linear 90% error). The global TanDEM-X DEM acquired with single-pass SAR interferometry was finished in September 2016. This paper provides a unique accuracy assessment of the final TanDEM-X global DEM using two different GPS point reference data sets, which are distributed across all continents, to fully characterize the absolute height error. Firstly, the absolute vertical accuracy is examined by about three million globally distributed kinematic GPS (KGPS) points derived from 19 KGPS tracks covering a total length of about 66,000 km. Secondly, a comparison is performed with more than 23,000 "GPS on Bench Marks" (GPS-on-BM) points provided by the US National Geodetic Survey (NGS) scattered across 14 different land cover types of the US National Land Cover Database (NLCD). Both GPS comparisons prove an absolute vertical mean error of the TanDEM-X DEM smaller than ±0.20 m, a Root Mean Square Error (RMSE) smaller than 1.4 m and an excellent absolute 90% linear height error below 2 m. The RMSE values are sensitive to land cover types. For low vegetation the RMSE is ±1.1 m, whereas it is slightly higher for developed areas (±1.4 m) and for forests (±1.8 m). This validation confirms an outstanding absolute height error at the 90% confidence level of the global TanDEM-X DEM, outperforming the requirement by a factor of five. Due to its extensive and globally distributed reference data sets, this study is of considerable interest for scientific and commercial applications.
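The accuracy measures quoted above (mean error, RMSE, 90% linear height error) are simple summary statistics of co-located DEM-minus-GPS height differences; a minimal sketch with placeholder heights:

```python
# Vertical accuracy summary of a DEM against GPS reference heights;
# the arrays below are placeholders, not the TanDEM-X validation data.
import numpy as np

h_dem = np.array([412.3, 388.1, 505.7, 291.0, 477.4])    # DEM heights (m)
h_gps = np.array([412.6, 387.2, 505.1, 291.8, 476.9])    # GPS reference heights (m)

err = h_dem - h_gps
mean_err = err.mean()
rmse = np.sqrt(np.mean(err ** 2))
le90 = np.percentile(np.abs(err), 90)                    # empirical 90% linear height error
print(f"mean error = {mean_err:+.2f} m, RMSE = {rmse:.2f} m, LE90 = {le90:.2f} m")
```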
Mayoral, Víctor; Pérez-Hernández, Concepción; Muro, Inmaculada; Leal, Ana; Villoria, Jesús; Esquivias, Ana
2018-04-27
Based on the clear neuroanatomical delineation of many neuropathic pain (NP) symptoms, a simple tool for performing a short structured clinical encounter based on the IASP diagnostic criteria was developed to identify NP. This study evaluated its accuracy and usefulness. A case-control study was performed in 19 pain clinics within Spain. A pain clinician used the experimental screening tool (the index test, IT) to assign the descriptions of non-neuropathic (nNP), non-localized neuropathic (nLNP), and localized neuropathic (LNP) to the patients' pain conditions. The reference standard was a formal clinical diagnosis provided by another pain clinician. The accuracy of the IT was compared with that of the Douleur Neuropathique en 4 questions (DN4) and the Leeds Assessment of Neuropathic Signs and Symptoms (LANSS). Six-hundred and sixty-six patients were analyzed. There was a good agreement between the IT and the reference standard (kappa =0.722). The IT was accurate in distinguishing between LNP and nLNP (83.2% sensitivity, 88.2% specificity), between LNP and the other pain categories (nLNP + nNP) (80.0% sensitivity, 90.7% specificity), and between NP and nNP (95.5% sensitivity, 89.1% specificity). The accuracy in distinguishing between NP and nNP was comparable with that of the DN4 and the LANSS. The IT took a median of 10 min to complete. A novel instrument based on an operationalization of the IASP criteria can not only discern between LNP and nLNP, but also provide a high level of diagnostic certainty about the presence of NP after a short clinical encounter.
NASA Astrophysics Data System (ADS)
Innerkofler, J.; Pock, C.; Kirchengast, G.; Schwaerz, M.; Jaeggi, A.; Andres, Y.; Marquardt, C.; Hunt, D.; Schreiner, W. S.; Schwarz, J.
2017-12-01
Global Navigation Satellite System (GNSS) radio occultation (RO) is a highly valuable satellite remote sensing technique for atmospheric and climate sciences, including calibration and validation (cal/val) of passive sounding instruments such as radiometers. It is providing accurate and precise measurements in the troposphere and stratosphere regions with global coverage, long-term stability, and virtually all-weather capability since 2001. For fully exploiting the potential of RO data as a cal/val reference and climate data record, uncertainties attributed to the data need to be assessed. Here we focus on the atmospheric excess phase data, based on the raw occultation tracking and orbit data, and its integrated uncertainty estimation within the new Reference Occultation Processing System (rOPS) developed at the WEGC. These excess phases correspond to integrated refractivity, proportional to pressure/temperature and water vapor, and are therefore highly valuable reference data for thermodynamic cal/val of passive (radiometric) sounder data. In order to enable high accuracy of the excess phase profiles, accurate orbit positions and velocities as well as clock estimates of the GNSS transmitter satellites and RO receiver satellites are determined using the Bernese and Napeos orbit determination software packages. We find orbit uncertainty estimates of about 5 cm (position) / 0.05 mm/s (velocity) for daily orbits for the MetOp, GRACE, and CHAMP RO missions, and decreased uncertainty estimates near 20 cm (position) / 0.2 mm/s (velocity) for the COSMIC RO mission. The strict evaluation and quality control of the position, velocity, and clock accuracies of the daily LEO and GNSS orbits assure smallest achievable uncertainties in the excess phase data. We compared the excess phase profiles from WEGC against profiles from EUMETSAT and UCAR. Results show good agreement in line with the estimated uncertainties, with millimetric differences in the upper stratosphere and mesosphere and centimetric differences in the troposphere, where the excess phases amount to beyond 100 m. This underlines the potential for a new fundamental cal/val reference and climate data record based on atmospheric excess phases from RO, given their narrow uncertainty and independence from background data.
GNSS RTK-networks: The significance and issues to realize a recent reference coordinate system
NASA Astrophysics Data System (ADS)
Umnig, Elke; Möller, Gregor; Weber, Robert
2014-05-01
The upcoming release of the new global reference frame ITRF2013 will provide highly accurate reference station positions and station velocities at the mm and mm/year level, respectively. ITRF users benefit from this development in various ways. For example, this new frame allows highly accurate GNSS baseline observations to be embedded in an underlying reference of at least the same accuracy. Another advantage is that the IGS products are fully consistent with this frame, and therefore all GNSS-based zero-difference positioning results (Precise Point Positioning, PPP) will be aligned to the ITRF2013. Unfortunately, the transition to a new frame (or just to a new epoch) also creates issues, in particular for providers and users of real-time positioning services. Providers have to make arrangements such as readjusting the reference station coordinates and updating the transformation parameters from the homogeneous GNSS coordinate frame into the national datum. Finally, providers have to inform their clients appropriately about these changes and significant adjustments. Furthermore, the aspect of the continental reference frame has to be considered: in Europe, the use of the continental reference system/reference frame ETRS89/ETRF2000 is recommended by most national mapping authorities due to cross-national guidelines. Consequently, GNSS post-processing applications are degraded by the concurrent use of the reference systems and reference frames to which terrestrial site coordinates and satellite coordinates are aligned. In this presentation we highlight all significant steps and hurdles that must be overcome when introducing a new reference frame, from the point of view of a typical regional RTK reference station network provider. This network is located in Austria and parts of the neighbouring countries and consists of about 40 reference stations. Moreover, we discuss the significance of permanently monitoring the stability of the reference network sites and of determining station velocities/rates for geodynamical investigations.
Estimation of Center of Mass Trajectory using Wearable Sensors during Golf Swing.
Najafi, Bijan; Lee-Eng, Jacqueline; Wrobel, James S; Goebel, Ruben
2015-06-01
This study proposes a wearable sensor technology to estimate center of mass (CoM) trajectory during a golf swing. Groups of 3, 4, and 18 participants were recruited, respectively, for the purpose of three validation studies. Study 1 examined the accuracy of the system in estimating a 3D body segment angle compared to a camera-based motion analyzer (Vicon®). Study 2 assessed the accuracy of three simplified CoM trajectory models. Finally, Study 3 assessed the accuracy of the proposed CoM model during multiple golf swings. A relatively high agreement was observed between wearable sensors and the reference (Vicon®) for angle measurement (r > 0.99, random error <1.2° (1.5%) for anterior-posterior; <0.9° (2%) for medial-lateral; and <3.6° (2.5%) for internal-external direction). The two-link model yielded better agreement with the reference system than the one-link model (r > 0.93 v. r = 0.52, respectively). Likewise, the proposed two-link model estimated CoM trajectory during the golf swing with relatively good accuracy (r > 0.9, random error <1 cm (7.7%) for A-P and <2 cm (10.4%) for M-L). The proposed system appears to accurately quantify the kinematics of CoM trajectory as a surrogate of dynamic postural control during an athlete's movement, and its portability makes it feasible to use in the competitive environment without restricting surface type. Key points: This study demonstrates that wearable technology based on inertial sensors is accurate for estimating center of mass trajectory in complex athletic tasks (e.g., golf swing). This study suggests that a two-link model of the human body provides an optimum tradeoff between accuracy and the minimum number of sensor modules for estimating center of mass trajectory, in particular during fast movements. Wearable technologies based on inertial sensors are a viable option for assessing dynamic postural control in complex tasks outside of the gait laboratory and the constraints of cameras, surface, and base of support.
Progress toward Brazilian cesium fountain second generation
NASA Astrophysics Data System (ADS)
Bueno, Caio; Rodriguez Salas, Andrés; Torres Müller, Stella; Bagnato, Vanderlei Salvador; Varela Magalhães, Daniel
2018-03-01
The operation of a Cesium fountain primary frequency standard is strongly influenced by the characteristics of two important subsystems. The first is a stable frequency reference and the second is the frequency-transfer system. A stable standard frequency reference is a key factor for experiments that require high accuracy and precision. The frequency stability of this reference has a significant impact on the procedures for evaluating certain systematic biases in frequency standards. This paper presents the second generation of the Brazilian Cesium Fountain (Br-CsF), covering the opto-mechanical assembly and the vacuum chamber used to trap atoms. We used a square-section glass profile to build the region where the atoms are trapped and cooled by the magneto-optical technique. The opto-mechanical system was reduced in size to increase stability and robustness. This new atomic fountain is an essential contribution to the development of time and frequency metrology systems.
On the use of multi-dimensional scaling and electromagnetic tracking in high dose rate brachytherapy
NASA Astrophysics Data System (ADS)
Götz, Th I.; Ermer, M.; Salas-González, D.; Kellermeier, M.; Strnad, V.; Bert, Ch; Hensel, B.; Tomé, A. M.; Lang, E. W.
2017-10-01
High dose rate brachytherapy calls for frequent verification of the precise dwell positions of the radiation source. The current investigation proposes a multi-dimensional scaling transformation of both data sets to estimate dwell positions without any external reference. Furthermore, the related distributions of dwell positions are characterized by uni- or bi-modal heavy-tailed distributions. The latter are well represented by α-stable distributions. The newly proposed data analysis provides dwell position deviations with high accuracy and, furthermore, offers a convenient visualization of the actual shapes of the catheters which guide the radiation source during the treatment.
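The multi-dimensional scaling step referred to above can be sketched with scikit-learn's MDS applied to a precomputed matrix of inter-dwell-position distances. This is a generic illustration; the paper's specific transformation of the two data sets and the subsequent α-stable modelling are not reproduced.

```python
# Generic multi-dimensional scaling embedding of dwell positions from their mutual
# distances; the "dwell positions" below are synthetic placeholders.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
pts = np.cumsum(rng.normal(5.0, 0.5, size=(20, 3)), axis=0)     # fake dwell positions along a catheter
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

embedding = MDS(n_components=3, dissimilarity="precomputed",
                random_state=0).fit_transform(dist)
print(embedding[:3])        # recovered coordinates, up to rotation/translation/reflection
```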
Braun, Tobias; Grüneberg, Christian; Thiel, Christian
2018-04-01
Routine screening for frailty could be used to identify, in a timely manner, older people with increased vulnerability and corresponding medical needs. The aim of this study was the translation and cross-cultural adaptation of the PRISMA-7 questionnaire, the FRAIL scale and the Groningen Frailty Indicator (GFI) into the German language, as well as a preliminary analysis of the diagnostic test accuracy of these instruments used to screen for frailty. A diagnostic cross-sectional study was performed. The instrument translation into German followed a standardized process. Prefinal versions were clinically tested on older adults who gave structured in-depth feedback on the scales in order to compile a final revision of the German language scale versions. For the analysis of diagnostic test accuracy (criterion validity), PRISMA-7, FRAIL scale and GFI were considered the index tests. Two reference tests were applied to assess frailty, either based on Fried's model of a Physical Frailty Phenotype or on the model of deficit accumulation, expressed in a Frailty Index. Prefinal versions of the German translations of each instrument were produced and completed by 52 older participants (mean age: 73 ± 6 years). Some minor issues concerning comprehensibility and semantics of the scales were identified and resolved. Using the Physical Frailty Phenotype (frailty prevalence: 4%) criteria as a reference standard, the accuracy of the instruments was excellent (area under the curve, AUC >0.90). Taking the Frailty Index (frailty prevalence: 23%) as the reference standard, the accuracy was good (AUC between 0.73 and 0.88). German language versions of PRISMA-7, FRAIL scale and GFI have been established, and preliminary results indicate sufficient diagnostic test accuracy that needs to be further established.
Burgmans, Mark Christiaan; den Harder, J Michiel; Meershoek, Philippa; van den Berg, Nynke S; Chan, Shaun Xavier Ju Min; van Leeuwen, Fijs W B; van Erkel, Arian R
2017-06-01
To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
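Point-based co-registration of the kind evaluated above amounts to estimating a rigid transform from matched reference points and then checking the residual displacement. The sketch below uses the standard least-squares (Kabsch/Procrustes) solution with placeholder coordinates; it is not the GE Logiq E9 navigation algorithm.

```python
# Rigid (rotation + translation) registration of matched CT and US reference points
# by SVD, with residual displacement per point; all coordinates are placeholders (mm).
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform mapping src points onto dst points (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

ct_pts = np.array([[10.0, 5.0, 2.0], [40.0, 8.0, 3.0],
                   [25.0, 30.0, 6.0], [15.0, 22.0, 18.0]])
theta = np.deg2rad(2.0)                               # simulate a small in-plane misalignment
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
us_pts = ct_pts @ Rz.T + np.array([1.5, -0.8, 0.3])

R, t = rigid_register(ct_pts, us_pts)
residual = np.linalg.norm((ct_pts @ R.T + t) - us_pts, axis=1)
print("residual displacement per point (mm):", np.round(residual, 3))
```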
Imaging evaluation of non-alcoholic fatty liver disease: focused on quantification.
Lee, Dong Ho
2017-12-01
Non-alcoholic fatty liver disease (NAFLD) has been an emerging major health problem, and the most common cause of chronic liver disease in Western countries. Traditionally, liver biopsy has been the gold standard method for quantification of hepatic steatosis. However, its invasive nature, with potential complications, as well as measurement variability, are major problems. Thus, various imaging studies have been used for evaluation of hepatic steatosis. Ultrasonography provides fairly good accuracy in detecting moderate-to-severe hepatic steatosis, but limited accuracy for mild steatosis. Operator dependency and the subjective/qualitative nature of the examination are other major drawbacks of ultrasonography. Computed tomography can be considered an unsuitable imaging modality for evaluation of NAFLD due to the potential risk of radiation exposure and limited accuracy in detecting mild steatosis. Both magnetic resonance spectroscopy and magnetic resonance imaging using the chemical shift technique provide highly accurate and reproducible diagnostic performance for evaluating NAFLD, and therefore have been used in many clinical trials as a non-invasive reference standard method.
Evaluation of centroiding algorithm error for Nano-JASMINE
NASA Astrophysics Data System (ADS)
Hara, Takuji; Gouda, Naoteru; Yano, Taihei; Yamada, Yoshiyuki
2014-08-01
The Nano-JASMINE mission has been designed to perform absolute astrometric measurements with unprecedented accuracy; the end-of-mission parallax standard error is required to be of the order of 3 milliarcseconds for stars brighter than 7.5 mag in the zw-band (0.6 μm-1.0 μm). These requirements set a stringent constraint on the accuracy of the estimation of the location of the stellar image on the CCD for each observation. However, each stellar image has an individual shape that depends on the spectral energy distribution of the star, the CCD properties, and the optics and its associated wavefront errors. It is therefore necessary that the centroiding algorithm achieve high accuracy for any observable. Following the approach taken for Gaia, we use an LSF fitting method as the centroiding algorithm and investigate the systematic error of the algorithm for Nano-JASMINE. Furthermore, we found that the algorithm can be improved by restricting the sample LSFs using principal component analysis. We show that the centroiding algorithm error decreases after adopting this method.
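The centroiding step discussed above amounts to fitting a model line-spread function to the pixel samples and reading off the fitted centre. The sketch below uses a simple Gaussian LSF and least squares on simulated counts; the mission's tabulated LSF library and the PCA-based restriction of sample LSFs are not reproduced.

```python
# Simplified 1D stellar-image centroiding by least-squares fit of a Gaussian LSF;
# the "observed" pixel counts are simulated, not Nano-JASMINE data.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_lsf(x, amp, x0, sigma, background):
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + background

pixels = np.arange(0, 15, dtype=float)
true_centre = 7.23
rng = np.random.default_rng(3)
counts = gaussian_lsf(pixels, 1200.0, true_centre, 1.4, 50.0) + rng.normal(0.0, 10.0, pixels.size)

p0 = [counts.max(), pixels[np.argmax(counts)], 1.0, counts.min()]   # rough initial guess
popt, _ = curve_fit(gaussian_lsf, pixels, counts, p0=p0)
print(f"estimated centre = {popt[1]:.3f} px (true {true_centre} px)")
```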
Stack Number Influence on the Accuracy of ASTER GDEM (V2)
NASA Astrophysics Data System (ADS)
Mirzadeh, S. M. J.; Alizadeh Naeini, A.; Fatemi, S. B.
2017-09-01
In this research, the influence of stack number (STKN) on the accuracy of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global DEM (GDEM) has been investigated. For this purpose, two data sets of ASTER and reference DEMs from two study areas with different topography (Bomehen and Tazehabad) were used. The results show that in both study areas a STKN of 19 yields the minimum error, although this minimum differs little from the errors at other STKN values. The analysis of slope, STKN, and error values shows that there is no strong correlation between these parameters in either study area. For example, the mean absolute error increases with changing topography and with increasing slope and elevation, but changes in STKN have no important effect on the error values. Furthermore, at high STKN values the effect of slope on elevation accuracy is practically reduced. Also, there is no strong correlation between the residuals and STKN in the ASTER GDEM.
Analysis of RDSS positioning accuracy based on RNSS wide area differential technique
NASA Astrophysics Data System (ADS)
Xing, Nan; Su, RanRan; Zhou, JianHua; Hu, XiaoGong; Gong, XiuQiang; Liu, Li; He, Feng; Guo, Rui; Ren, Hui; Hu, GuangMing; Zhang, Lei
2013-10-01
The BeiDou Navigation Satellite System (BDS) provides a Radio Navigation Satellite Service (RNSS) as well as a Radio Determination Satellite Service (RDSS). RDSS users obtain positioning by responding to Master Control Center (MCC) inquiries via signals transmitted through a GEO satellite transponder. The positioning result is calculated by the MCC with an elevation constraint. The primary error sources affecting RDSS positioning accuracy are the RDSS signal transceiver delay, the atmospheric transmission delay and the GEO satellite position error. During GEO orbit maneuvers, poor orbit forecast accuracy significantly impacts RDSS services. A real-time 3-D orbital correction method based on the wide-area differential technique is proposed to correct the orbital error. Results from observations show that the method can successfully improve positioning precision during orbital maneuvers, independently of the RDSS reference station. This improvement can reach 50% at maximum. Accurate calibration of the RDSS signal transceiver delay and an accurate digital elevation map may play a critical role in high-precision RDSS positioning services.
High Power Laser Processing Of Materials
NASA Astrophysics Data System (ADS)
Martyr, D. R.; Holt, T.
1987-09-01
The first practical demonstration of a laser device was in 1960, and in the years since, the high power carbon dioxide laser has matured as an industrial machine tool. Modern carbon dioxide gas lasers can be used for cutting, welding, heat treatment, drilling, scribing and marking. Since their invention over 25 years ago they have become recognised as highly reliable devices capable of achieving huge savings in production costs in many situations. This paper introduces the basic laser processing techniques of cutting, welding and heat treatment as they apply to the most common engineering materials. Typical processing speeds achieved with a wide range of laser powers are reported. Accuracies achievable and fit-up tolerances required are presented. Methods of integrating lasers with machine tools are described, and their suitability in a wide range of manufacturing industries is illustrated by reference to recent installations. Examples from small batch manufacturing, high volume production using dedicated laser welding equipment, and high volume manufacturing using 'flexible' automated laser welding equipment are described. Future applications of laser processing are suggested by reference to current process developments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petroccia, H; O'Reilly, S; Bolch, W
Purpose: Radiation-induced cancer effects are well-documented following radiotherapy. Further investigation is needed to more accurately determine a dose-response relationship for late radiation effects. Recent dosimetry studies tend to use representative patients (Taylor 2009) or anthropomorphic phantoms (Wirth 2008) for estimating organ mean doses. In this study, we compare hybrid computational phantoms to patient-specific voxel phantoms to test the accuracy of the University of Florida Hybrid Phantom Library (UFHP Library) for historical dose reconstructions. Methods: A cohort of 10 patients with CT images was used to reproduce the data that was collected historically for Hodgkin's lymphoma patients (i.e. caliper measurements and photographs). Four types of phantoms were generated to show a range of refinement from reference hybrid-computational phantom to patient-specific phantoms. Each patient is matched to a reference phantom from the UFHP Library based on height and weight. The reference phantom is refined in the anterior/posterior direction to create a 'caliper-scaled phantom'. A photograph is simulated using a surface rendering from segmented CT images. Further refinement in the lateral direction is performed using ratios from a simulated photograph to create a 'photograph and caliper-scaled phantom'; breast size and position is visually adjusted. Patient-specific hybrid phantoms, with matched organ volumes, are generated and show the capabilities of the UF Hybrid Phantom Library. Reference, caliper-scaled, photograph and caliper-scaled, and patient-specific hybrid phantoms are compared with patient-specific voxel phantoms to determine the accuracy of the study. Results: Progression from reference phantom to patient-specific hybrid shows good agreement with the patient-specific voxel phantoms. Each stage of refinement shows an overall trend of improvement in dose accuracy within the study, which suggests that computational phantoms can show improved accuracy in historical dose estimates. Conclusion: Computational hybrid phantoms show promise for improved accuracy within retrospective studies when CTs and other x-ray images are not available.
Middleton, Michael S; Haufe, William; Hooker, Jonathan; Borga, Magnus; Dahlqvist Leinhard, Olof; Romu, Thobias; Tunón, Patrik; Hamilton, Gavin; Wolfson, Tanya; Gamst, Anthony; Loomba, Rohit; Sirlin, Claude B
2017-05-01
Purpose To determine the repeatability and accuracy of a commercially available magnetic resonance (MR) imaging-based, semiautomated method to quantify abdominal adipose tissue and thigh muscle volume and hepatic proton density fat fraction (PDFF). Materials and Methods This prospective study was institutional review board-approved and HIPAA compliant. All subjects provided written informed consent. Inclusion criteria were age of 18 years or older and willingness to participate. The exclusion criterion was contraindication to MR imaging. Three-dimensional T1-weighted dual-echo body-coil images were acquired three times. Source images were reconstructed to generate water and calibrated fat images. Abdominal adipose tissue and thigh muscle were segmented, and their volumes were estimated by using a semiautomated method and, as a reference standard, a manual method. Hepatic PDFF was estimated by using a confounder-corrected chemical shift-encoded MR imaging method with hybrid complex-magnitude reconstruction and, as a reference standard, MR spectroscopy. Tissue volume and hepatic PDFF intra- and interexamination repeatability were assessed by using intraclass correlation and coefficient of variation analysis. Tissue volume and hepatic PDFF accuracy were assessed by means of linear regression with the respective reference standards. Results Adipose and thigh muscle tissue volumes of 20 subjects (18 women; age range, 25-76 years; body mass index range, 19.3-43.9 kg/m²) were estimated by using the semiautomated method. Intra- and interexamination intraclass correlation coefficients were 0.996-0.998 and coefficients of variation were 1.5%-3.6%. For hepatic MR imaging PDFF, intra- and interexamination intraclass correlation coefficients were greater than or equal to 0.994 and coefficients of variation were less than or equal to 7.3%. In the regression analyses of manual versus semiautomated volume and spectroscopy versus MR imaging PDFF, slopes and intercepts were close to the identity line, and coefficients of determination at multivariate analysis (R²) ranged from 0.744 to 0.994. Conclusion This MR imaging-based, semiautomated method provides high repeatability and accuracy for estimating abdominal adipose tissue and thigh muscle volumes and hepatic PDFF. © RSNA, 2017.
A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application
Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang
2018-01-01
Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing study of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with the KF based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF) based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with the unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine application. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with the traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions by the unified Earth models and the ARKF based hybrid-correction scheme. PMID:29373549
Diagnostic Accuracy of the Neck Tornado Test as a New Screening Test in Cervical Radiculopathy
Park, Juyeon; Park, Woo Young; Hong, Seungbae; An, Jiwon; Koh, Jae Chul; Lee, Youn-Woo; Kim, Yong Chan; Choi, Jong Bum
2017-01-01
Background: The Spurling test, although a highly specific provocative test of the cervical spine in cervical radiculopathy (CR), has low to moderate sensitivity. Thus, we introduced the neck tornado test (NTT) to examine the neck and the cervical spine in CR. Objectives: The aim of this study was to introduce a new provocative test, the NTT, and compare the diagnostic accuracy with a widely accepted provocative test, the Spurling test. Design: Retrospective study. Methods: Medical records of 135 subjects with neck pain (CR, n = 67; without CR, n = 68) who had undergone cervical spine magnetic resonance imaging and been referred to the pain clinic between September 2014 and August 2015 were reviewed. Both the Spurling test and NTT were performed in all patients by expert examiners. Sensitivity, specificity, and accuracy were compared for both the Spurling test and the NTT. Results: The sensitivity of the Spurling test and the NTT was 55.22% and 85.07% (P < 0.0001); specificity, 98.53% and 86.76% (P = 0.0026); accuracy, 77.04% and 85.93% (P = 0.0423), respectively. Conclusions: The NTT is more sensitive with superior diagnostic accuracy for CR diagnosed by magnetic resonance imaging than the Spurling test. PMID:28824298
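The sensitivity, specificity and accuracy figures above follow directly from a 2x2 table against the MRI reference. As a minimal sketch, the counts below are reconstructed to be roughly consistent with the reported NTT percentages and may not match the study's exact tabulation:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)            # positives detected among diseased
    specificity = tn / (tn + fp)            # negatives detected among non-diseased
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Reconstructed, approximate counts for the NTT (CR n = 67, non-CR n = 68):
print(diagnostic_metrics(tp=57, fp=9, fn=10, tn=59))  # ~0.851, ~0.868, ~0.859
```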
Freeman, Karoline; Tsertsvadze, Alexander; Taylor-Phillips, Sian; McCarthy, Noel; Mistry, Hema; Manuel, Rohini; Mason, James
2017-01-01
Multiplex gastrointestinal pathogen panel (GPP) tests simultaneously identify bacterial, viral and parasitic pathogens from the stool samples of patients with suspected infectious gastroenteritis presenting in hospital or the community. We undertook a systematic review to compare the accuracy of GPP tests with standard microbiology techniques. Searches in Medline, Embase, Web of Science and the Cochrane library were undertaken from inception to January 2016. Eligible studies compared GPP tests with standard microbiology techniques in patients with suspected gastroenteritis. Quality assessment of included studies used tailored QUADAS-2. In the absence of a reference standard we analysed test performance taking GPP tests and standard microbiology techniques in turn as the benchmark test, using random effects meta-analysis of proportions. No study provided an adequate reference standard with which to compare the test accuracy of GPP and conventional tests. Ten studies informed a meta-analysis of positive and negative agreement. Positive agreement across all pathogens was 0.93 (95% CI 0.90 to 0.96) when conventional methods were the benchmark and 0.68 (95% CI: 0.58 to 0.77) when GPP provided the benchmark. Negative agreement was high in both instances due to the high proportion of negative cases. GPP testing produced a greater number of pathogen-positive findings than conventional testing. It is unclear whether these additional 'positives' are clinically important. GPP testing has the potential to simplify testing and accelerate reporting when compared to conventional microbiology methods. However the impact of GPP testing upon the management, treatment and outcome of patients is poorly understood and further studies are needed to evaluate the health economic impact of GPP testing compared with standard methods. The review protocol is registered with PROSPERO as CRD42016033320.
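Because no adequate reference standard existed, the review reports positive and negative agreement rather than sensitivity and specificity. A minimal sketch of those quantities for a single study, taking one method as the benchmark, is shown below; the counts are illustrative only:

```python
def percent_agreement(both_pos, bench_pos_only, comp_pos_only, both_neg):
    """Positive/negative percent agreement of a comparator against a benchmark test."""
    positive_agreement = both_pos / (both_pos + bench_pos_only)
    negative_agreement = both_neg / (both_neg + comp_pos_only)
    return positive_agreement, negative_agreement

# Illustrative counts: 93 dual positives, 7 benchmark-only, 20 comparator-only, 880 dual negatives.
print(percent_agreement(both_pos=93, bench_pos_only=7, comp_pos_only=20, both_neg=880))
```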
Bianchi, Vincenza; Ivaldi, Alessandra; Raspagni, Alessia; Arfini, Carlo; Vidali, Matteo
2011-01-01
Contrasting data are available on the diagnostic accuracy of carbohydrate-deficient transferrin (CDT) during pregnancy. These differences may depend in part on how CDT was evaluated and expressed. Here, we report on variations of CDT levels in pregnant women using the high performance liquid chromatography (HPLC) candidate reference method. Alanine aminotransferase, aspartate aminotransferase, gamma-glutamyltransferase, mean corpuscular volume, serum transferrin, urine and serum ethyl glucuronide and CDT were measured in 64 women, self-reporting as non-alcohol abusers (age: median 34, IQR: 28-38), at different stages of normal pregnancy (gestational weeks: median 28, IQR: 8-33). CDT was expressed as percentage of disialotransferrin to total transferrin (%CDT). Transferrin was associated with both %CDT (r = 0.66; P < 0.001) and gestational week (r = 0.68; P < 0.001). Interestingly, %CDT was highly correlated with gestational week (r = 0.77; P < 0.001), even after controlling for the effect of transferrin. Moreover, statistically significant differences in %CDT were also evident between women grouped for pregnancy trimester (first trimester: mean 1.01% (SD 0.19); second trimester: 1.30% (SD 0.14); third trimester: 1.53% (SD 0.22); ANOVA P < 0.001). Trend analysis confirmed a proportional increase of %CDT along with pregnancy trimesters (P < 0.001). %CDT, measured with the HPLC candidate reference method, is independently associated with gestational week. Differently from what has been previously reported or expected, the relationship between pregnancy and CDT could be more complex. The diagnostic accuracy of CDT for detecting alcohol abuse in a legal context may be limited in pregnant women and the effect of gestational age should be considered.
ERIC Educational Resources Information Center
Armstrong, Julie; And Others
For Postpartum Education for Parents (PEP) volunteers, this reference guide provides background information about the common concerns of parents. Extensively reviewed for accuracy and content by pediatricians, psychologists, obstetricians, nurses, and childbirth educators, the guide contains a summary discussion of the postpartum infant and…
Devlin, Hugh; Whelton, Christopher
2015-09-01
The aim of this systematic review was to determine the diagnostic accuracy of the mandibular cortical width measurements and porosity in detecting hip osteoporosis. All of the included studies used measurements on panoramic radiographs. Studies were included if they compared the radiographic measurements (or index tests) with central dual energy X-ray absorptiometry (DXA) of the hip as the reference standard. A measure of diagnostic accuracy such as sensitivity and specificity or area under the receiver operating characteristic curve was also required for inclusion. Seven studies were identified. Meta-analysis was not possible because of the heterogeneity of the studies. The studies all demonstrated moderate diagnostic accuracy. If a patient with a thin or porous mandibular cortex is identified by a chance radiographic finding, additional clinical risk factors need to be considered and the patient referred for further investigation with DXA where necessary. © 2013 John Wiley & Sons A/S and The Gerodontology Society. Published by John Wiley & Sons Ltd.
Thematic accuracy of the 1992 National Land-Cover Data for the western United States
Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Yang, L.
2004-01-01
The MultiResolution Land Characteristics (MRLC) consortium sponsored production of the National Land Cover Data (NLCD) for the conterminous United States, using Landsat imagery collected on a target year of 1992 (1992 NLCD). Here we report the thematic accuracy of the 1992 NLCD for the six western mapping regions. Reference data were collected in each region for a probability sample of pixels stratified by map land-cover class. Results are reported for each of the six mapping regions with agreement defined as a match between the primary or alternate reference land-cover label and a mode class of the mapped 3×3 block of pixels centered on the sample pixel. Overall accuracy at Anderson Level II was low and variable across the regions, ranging from 38% for the Midwest to 70% for the Southwest. Overall accuracy at Anderson Level I was higher and more consistent across the regions, ranging from 82% to 85% for five of the six regions, but only 74% for the South-central region.
Integrative missing value estimation for microarray data.
Hu, Jianjun; Li, Haifeng; Waterman, Michael S; Zhou, Xianghong Jasmine
2006-10-12
Missing value estimation is an important preprocessing step in microarray analysis. Although several methods have been developed to solve this problem, their performance is unsatisfactory for datasets with high rates of missing data, high measurement noise, or limited numbers of samples. In fact, more than 80% of the time-series datasets in the Stanford Microarray Database contain fewer than eight samples. We present the integrative Missing Value Estimation method (iMISS) by incorporating information from multiple reference microarray datasets to improve missing value estimation. For each gene with missing data, we derive a consistent neighbor-gene list by taking the reference datasets into consideration. To determine whether the given reference datasets are sufficiently informative for integration, we use a submatrix imputation approach. Our experiments showed that iMISS can significantly and consistently improve the accuracy of the state-of-the-art Local Least Squares (LLS) imputation algorithm by up to 15% in our benchmark tests. We demonstrated that the order-statistics-based integrative imputation algorithms can achieve significant improvements over state-of-the-art missing value estimation approaches such as LLS and are especially good for imputing microarray datasets with a limited number of samples, high rates of missing data, or very noisy measurements. With the rapid accumulation of microarray datasets, the performance of our approach can be further improved by incorporating larger and more appropriate reference datasets.
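iMISS itself selects neighbour genes across multiple reference datasets; as a rough illustration of the underlying neighbour-based imputation idea only, not the published algorithm, a simple k-nearest-neighbour impute over a single gene-by-sample matrix might look like this:

```python
import numpy as np

def knn_impute(X, k=5):
    """Fill NaNs in a gene-by-sample matrix using the k most similar complete genes.
    Rough sketch of neighbour-based imputation, not the published iMISS method."""
    X = np.array(X, dtype=float)
    for i in range(X.shape[0]):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        # Candidate neighbours: genes fully observed at this gene's observed and missing positions.
        cand = [j for j in range(X.shape[0])
                if j != i
                and not np.isnan(X[j][miss]).any()
                and not np.isnan(X[j][obs]).any()]
        if not cand:
            continue
        # Rank candidates by mean squared distance over the observed entries.
        neighbours = sorted(cand, key=lambda j: np.mean((X[j][obs] - X[i][obs]) ** 2))[:k]
        X[i, miss] = X[neighbours][:, miss].mean(axis=0)
    return X
```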
The diploid genome sequence of an Asian individual
Wang, Jun; Wang, Wei; Li, Ruiqiang; Li, Yingrui; Tian, Geng; Goodman, Laurie; Fan, Wei; Zhang, Junqing; Li, Jun; Zhang, Juanbin; Guo, Yiran; Feng, Binxiao; Li, Heng; Lu, Yao; Fang, Xiaodong; Liang, Huiqing; Du, Zhenglin; Li, Dong; Zhao, Yiqing; Hu, Yujie; Yang, Zhenzhen; Zheng, Hancheng; Hellmann, Ines; Inouye, Michael; Pool, John; Yi, Xin; Zhao, Jing; Duan, Jinjie; Zhou, Yan; Qin, Junjie; Ma, Lijia; Li, Guoqing; Yang, Zhentao; Zhang, Guojie; Yang, Bin; Yu, Chang; Liang, Fang; Li, Wenjie; Li, Shaochuan; Li, Dawei; Ni, Peixiang; Ruan, Jue; Li, Qibin; Zhu, Hongmei; Liu, Dongyuan; Lu, Zhike; Li, Ning; Guo, Guangwu; Zhang, Jianguo; Ye, Jia; Fang, Lin; Hao, Qin; Chen, Quan; Liang, Yu; Su, Yeyang; san, A.; Ping, Cuo; Yang, Shuang; Chen, Fang; Li, Li; Zhou, Ke; Zheng, Hongkun; Ren, Yuanyuan; Yang, Ling; Gao, Yang; Yang, Guohua; Li, Zhuo; Feng, Xiaoli; Kristiansen, Karsten; Wong, Gane Ka-Shu; Nielsen, Rasmus; Durbin, Richard; Bolund, Lars; Zhang, Xiuqing; Li, Songgang; Yang, Huanming; Wang, Jian
2009-01-01
Here we present the first diploid genome sequence of an Asian individual. The genome was sequenced to 36-fold average coverage using massively parallel sequencing technology. We aligned the short reads onto the NCBI human reference genome to 99.97% coverage, and guided by the reference genome, we used uniquely mapped reads to assemble a high-quality consensus sequence for 92% of the Asian individual's genome. We identified approximately 3 million single-nucleotide polymorphisms (SNPs) inside this region, of which 13.6% were not in the dbSNP database. Genotyping analysis showed that SNP identification had high accuracy and consistency, indicating the high sequence quality of this assembly. We also carried out heterozygote phasing and haplotype prediction against HapMap CHB and JPT haplotypes (Chinese and Japanese, respectively), sequence comparison with the two available individual genomes (J. D. Watson and J. C. Venter), and structural variation identification. These variations were considered for their potential biological impact. Our sequence data and analyses demonstrate the potential usefulness of next-generation sequencing technologies for personal genomics. PMID:18987735
Automated color classification of urine dipstick image in urine examination
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Royananda; Muchtar, M. A.; Taqiuddin, R.; Adnan, S.; Anugrahwaty, R.; Budiarto, R.
2018-03-01
Urine examination using urine dipsticks has long been used to determine a person's health status. Their low cost and convenience are among the reasons dipsticks are still used for health screening. In practice, dipsticks are generally read manually by visual comparison with a reference color chart, which leads to differences in perception when reading the examination results. In this research, the authors used a scanner to obtain the urine dipstick color image. Using a scanner is one possible solution for reading dipstick results because the illumination it produces is consistent. A method is then required to replace the manual matching of the dipstick colors against the test reference colors. The method proposed by the authors combines Euclidean distance and Otsu thresholding with RGB color feature extraction to match the colors on the urine dipstick with the standard reference colors of the urine examination. The results show that the proposed approach was able to classify the colors on a urine dipstick with an accuracy of 95.45%. The accuracy of color classification against the standard reference colors is influenced by the scanner resolution used: the higher the scanner resolution, the higher the accuracy.
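The colour-matching step described above reduces to a nearest-reference-colour search in RGB space. A minimal sketch follows; the reference chart values and labels are placeholders, not the paper's calibration:

```python
import math

# Placeholder reference chart: each pad level mapped to a nominal RGB value.
REFERENCE_COLORS = {
    "negative": (248, 240, 130),
    "trace":    (220, 210, 120),
    "positive": (150, 120, 160),
}

def classify_pad(rgb):
    """Assign a dipstick pad to the closest reference colour by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCE_COLORS, key=lambda label: dist(rgb, REFERENCE_COLORS[label]))

print(classify_pad((225, 205, 118)))  # -> "trace" with these placeholder values
```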
Orlando Júnior, Nilton; de Souza Leão, Marcos George; de Oliveira, Nelson Henrique Carvalho
2015-01-01
Objectives To ascertain the sensitivity, specificity, accuracy and concordance of the physical examination (PE) and magnetic resonance imaging (MRI) in comparison with arthroscopy, in diagnosing knee injuries. Methods Prospective study on 72 patients, with evaluation and comparison of PE, MRI and arthroscopic findings, to determine the concordance, accuracy, sensitivity and specificity. Results PE showed sensitivity of 75.00%, specificity of 62.50% and accuracy of 69.44% for medial meniscal (MM) lesions, while it showed sensitivity of 47.82%, specificity of 93.87% and accuracy of 79.16% for lateral meniscal (LM) lesions. For anterior cruciate ligament (ACL) injuries, PE showed sensitivity of 88.67%, specificity of 94.73% and accuracy of 90.27%. For MM lesions, MRI showed sensitivity of 92.50%, specificity of 62.50% and accuracy of 69.44%, while for LM injuries, it showed sensitivity of 65.00%, specificity of 88.46% and accuracy of 81.94%. For ACL injuries, MRI showed sensitivity of 86.79%, specificity of 73.68% and accuracy of 83.33%. For ACL injuries, the best concordance was with PE, while for MM and LM lesions, it was with MRI (p < 0.001). Conclusions Meniscal and ligament injuries can be diagnosed through careful physical examination, while requests for MRI are reserved for complex or doubtful cases. PE and MRI used together have high sensitivity for ACL and MM lesions, while for LM lesions the specificity is higher. Level of evidence II – Development of diagnostic criteria on consecutive patients (with universally applied reference “gold” standard). PMID:27218085
Park, Charlie C; Hooker, Catherine; Hooker, Jonathan C; Bass, Emily; Haufe, William; Schlein, Alexandra; Covarrubias, Yesenia; Heba, Elhamy; Bydder, Mark; Wolfson, Tanya; Gamst, Anthony; Loomba, Rohit; Schwimmer, Jeffrey; Hernando, Diego; Reeder, Scott B; Middleton, Michael; Sirlin, Claude B; Hamilton, Gavin
2018-04-29
Improving the signal-to-noise ratio (SNR) of chemical-shift-encoded MRI acquisition with complex reconstruction (MRI-C) may improve the accuracy and precision of noninvasive proton density fat fraction (PDFF) quantification in patients with hepatic steatosis. To assess the accuracy of high SNR (Hi-SNR) MRI-C versus standard MRI-C acquisition to estimate hepatic PDFF in adult and pediatric nonalcoholic fatty liver disease (NAFLD) using an MR spectroscopy (MRS) sequence as the reference standard. Prospective. In all, 231 adult and pediatric patients with known or suspected NAFLD. PDFF estimated at 3T by three MR techniques: standard MRI-C; a Hi-SNR MRI-C variant with increased slice thickness, decreased matrix size, and no parallel imaging; and MRS (reference standard). MRI-PDFF was measured by image analysts using a region of interest coregistered with the MRS-PDFF voxel. Linear regression analyses were used to assess accuracy and precision of MRI-estimated PDFF for MRS-PDFF as a function of MRI-PDFF using the standard and Hi-SNR MRI-C for all patients and for patients with MRS-PDFF <10%. In all, 271 exams from 231 patients were included (mean MRS-PDFF: 12.6% [SD: 10.4]; range: 0.9-41.9). High agreement between MRI-PDFF and MRS-PDFF was demonstrated across the overall range of PDFF, with a regression slope of 1.035 for the standard MRI-C and 1.008 for Hi-SNR MRI-C. Hi-SNR MRI-C, compared to standard MRI-C, provided small but statistically significant improvements in the slope (respectively, 1.008 vs. 1.035, P = 0.004) and mean bias (0.412 vs. 0.673, P < 0.0001) overall. In the low-fat patients only, Hi-SNR MRI-C provided improvements in the slope (1.058 vs. 1.190, P = 0.002), mean bias (0.168 vs. 0.368, P = 0.007), intercept (-0.153 vs. -0.796, P < 0.0001), and borderline improvement in the R 2 (0.888 vs. 0.813, P = 0.01). Compared to standard MRI-C, Hi-SNR MRI-C provides slightly higher MRI-PDFF estimation accuracy across the overall range of PDFF and improves both accuracy and precision in the low PDFF range. 1 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
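The slope, intercept, mean bias and R² reported above come from regressing MRI-PDFF estimates on the MRS-PDFF reference. A generic sketch of that kind of accuracy assessment is shown below, with illustrative data rather than the study's analysis code:

```python
import numpy as np

def regression_accuracy(reference, estimate):
    """Slope, intercept, R^2 and mean bias of an estimator against a reference standard."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    slope, intercept = np.polyfit(reference, estimate, 1)
    r2 = np.corrcoef(reference, estimate)[0, 1] ** 2
    mean_bias = float(np.mean(estimate - reference))
    return slope, intercept, r2, mean_bias

# Illustrative PDFF pairs (percent), not study data:
mrs = [1.2, 5.4, 10.8, 22.5, 35.0]
mri = [1.5, 5.9, 11.1, 22.9, 35.6]
print(regression_accuracy(mrs, mri))
```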
ERIC Educational Resources Information Center
Horner, Jan; Michaud-Oystryk, Nicole
1995-01-01
An experiment investigated whether the format in which information is stored affects the outcomes of ready reference transactions in terms of efficiency and accuracy. Results indicate that bibliographic questions are more efficiently answered online, while factual questions are more efficiently answered with print sources. Results of the study are…
ERIC Educational Resources Information Center
Harzbecker, Joseph, Jr.
1993-01-01
Describes the National Institute of Health's GenBank DNA sequence database and how it can be accessed through the Internet. A real reference question, which was answered successfully using the database, is reproduced to illustrate and elaborate on the potential of the Internet for information retrieval. (10 references) (KRN)
NASA Astrophysics Data System (ADS)
Suzuki, Yuki; Fung, George S. K.; Shen, Zeyang; Otake, Yoshito; Lee, Okkyun; Ciuffo, Luisa; Ashikaga, Hiroshi; Sato, Yoshinobu; Taguchi, Katsuyuki
2017-03-01
Cardiac motion (or functional) analysis has shown promise not only for non-invasive diagnosis of cardiovascular diseases but also for prediction of future cardiac events. Current imaging modalities have limitations that could degrade the accuracy of the analysis indices. In this paper, we present a projection-based motion estimation method for x-ray CT that estimates cardiac motion with high spatio-temporal resolution using projection data and a reference 3D volume image. The experiment using a synthesized digital phantom showed promising results for motion analysis.
NASA Technical Reports Server (NTRS)
Berendes, Todd; Sengupta, Sailes K.; Welch, Ron M.; Wielicki, Bruce A.; Navar, Murgesh
1992-01-01
A semiautomated methodology is developed for estimating cumulus cloud base heights on the basis of high spatial resolution Landsat MSS data, using various image-processing techniques to match cloud edges with their corresponding shadow edges. The cloud base height is then estimated by computing the separation distance between the corresponding generalized Hough transform reference points. The differences between the cloud base heights computed by these means and a manual verification technique are of the order of 100 m or less; accuracies of 50-70 m may soon be possible via EOS instruments.
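Once a cloud edge is matched to its shadow edge, the base height follows from the horizontal cloud-to-shadow separation and the solar elevation. Under a simplified nadir-viewing, flat-terrain geometry, which is an assumption here rather than necessarily the exact formulation in the paper, the relation is:

```latex
h_{\mathrm{base}} \approx d \,\tan\theta_{s},
\qquad d = \text{cloud-to-shadow ground separation},\quad
\theta_{s} = \text{solar elevation angle}
```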
Kinematics Simulation Analysis of Packaging Robot with Joint Clearance
NASA Astrophysics Data System (ADS)
Zhang, Y. W.; Meng, W. J.; Wang, L. Q.; Cui, G. H.
2018-03-01
Considering the influence of joint clearance on the motion error, repeated positioning accuracy, and overall position of the machine, this paper presents a simulation analysis of a packaging robot, a 2-degree-of-freedom (DOF) planar parallel robot, motivated by the high precision and high speed required of packaging equipment. The motion constraint equation of the mechanism is established, and the motion error is analyzed and simulated for the case of clearance at the revolute joints. The simulation results show that the size of the joint clearance affects the movement accuracy and packaging efficiency of the packaging robot. The analysis provides a reference for packaging equipment design and selection criteria and is of significance for automation in the packaging industry.
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing
1993-01-01
A unique formulation of describing fluid motion is presented. The method, referred to as 'extended Lagrangian method', is interesting from both theoretical and numerical points of view. The formulation offers accuracy in numerical solution by avoiding numerical diffusion resulting from mixing of fluxes in the Eulerian description. Meanwhile, it also avoids the inaccuracy incurred due to geometry and variable interpolations used by the previous Lagrangian methods. The present method is general and capable of treating subsonic flows as well as supersonic flows. The method proposed in this paper is robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and large time step. Moreover, the method is shown to resolve multidimensional discontinuities with a high level of accuracy, similar to that found in 1D problems.
Instrument Pointing Capabilities: Past, Present, and Future
NASA Technical Reports Server (NTRS)
Blackmore, Lars; Murray, Emmanuell; Scharf, Daniel P.; Aung, Mimi; Bayard, David; Brugarolas, Paul; Hadaegh, Fred; Lee, Allan; Milman, Mark; Sirlin, Sam;
2011-01-01
This paper surveys the instrument pointing capabilities of past, present and future space telescopes and interferometers. As an important aspect of this survey, we present a taxonomy for "apples-to-apples" comparisons of pointing performances. First, pointing errors are defined relative to either an inertial frame or a celestial target. Pointing error can then be further sub-divided into DC, that is, steady state, and AC components. We refer to the magnitude of the DC error relative to the inertial frame as absolute pointing accuracy, and we refer to the magnitude of the DC error relative to a celestial target as relative pointing accuracy. The magnitude of the AC error is referred to as pointing stability. While an AC/DC partition is not new, we leverage previous work by some of the authors to quantitatively clarify and compare varying definitions of jitter and time window averages. With this taxonomy and for sixteen past, present, and future missions, pointing accuracies and stabilities, both required and achieved, are presented. In addition, we describe the attitude control technologies used to and, for future missions, planned to achieve these pointing performances.
Uribe, S; Rojas, LA; Rosas, CF
2013-01-01
The objective of this review is to evaluate the diagnostic accuracy of imaging methods for detection of mandibular bone tissue invasion by squamous cell carcinoma (SCC). A systematic review was carried out of studies in MEDLINE, SciELO and ScienceDirect, published between 1960 and 2012, in English, Spanish or German, which compared detection of mandibular bone tissue invasion via different imaging tests against a histopathology reference standard. Sensitivity and specificity data were extracted from each study. The outcome measure was diagnostic accuracy. We found 338 articles, of which 5 fulfilled the inclusion criteria. Tests included were: CT (four articles), MRI (four articles), panoramic radiography (one article), positron emission tomography (PET)/CT (one article) and cone beam CT (CBCT) (one article). The quality of articles was low to moderate and the evidence showed that all tests have a high diagnostic accuracy for detection of mandibular bone tissue invasion by SCC, with sensitivity values of 94% (MRI), 91% (CBCT), 83% (CT) and 55% (panoramic radiography), and specificity values of 100% (CT, MRI, CBCT), 97% (PET/CT) and 91.7% (panoramic radiography). Available evidence is scarce and of only low to moderate quality. However, it is consistently shown that current imaging methods give a moderate to high diagnostic accuracy for the detection of mandibular bone tissue invasion by SCC. Recommendations are given for improving the quality of future reports, in particular provision of a detailed description of the patients' conditions, the imaging instrument and both imaging and histopathological invasion criteria. PMID:23420854
Real-time teleophthalmology versus face-to-face consultation: A systematic review.
Tan, Irene J; Dobson, Lucy P; Bartnik, Stephen; Muir, Josephine; Turner, Angus W
2017-08-01
Introduction Advances in imaging capabilities and the evolution of real-time teleophthalmology have the potential to provide increased coverage to areas with limited ophthalmology services. However, there is limited research assessing the diagnostic accuracy of real-time teleophthalmology against face-to-face consultation. This systematic review aims to determine whether real-time teleophthalmology provides accuracy comparable to face-to-face consultation for the diagnosis of common eye health conditions. Methods A search of PubMed, Embase, Medline and Cochrane databases and manual citation review was conducted on 6 February and 7 April 2016. Included studies involved real-time telemedicine in the field of ophthalmology or optometry, and assessed diagnostic accuracy against gold-standard face-to-face consultation. The revised quality assessment of diagnostic accuracy studies (QUADAS-2) tool assessed risk of bias. Results Twelve studies were included, with participants ranging from four to 89 years old. A broad range of conditions was assessed, including corneal and retinal pathologies, strabismus, oculoplastics and post-operative review. Quality assessment identified a high or unclear risk of bias in patient selection (75%) due to undisclosed recruitment processes. The index test showed high risk of bias in the included studies, due to the varied interpretation and conduct of real-time teleophthalmology methods. Reference standard risk was overall low (75%), as was the risk due to flow and timing (75%). Conclusion In terms of diagnostic accuracy, real-time teleophthalmology was considered superior to face-to-face consultation in one study and comparable in six studies. Store-and-forward image transmission coupled with real-time videoconferencing is a suitable alternative to overcome poor internet transmission speeds.
ANSI/ASHRAE/IES Standard 90.1-2016 Performance Rating Method Reference Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goel, Supriya; Rosenberg, Michael I.; Eley, Charles
This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2016 (Standard 90.1-2016). The PRM can be used to demonstrate compliance with the standard and to rate the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. Use of the PRM for demonstrating compliance with Standard 90.1 is a new feature of the 2016 edition. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM.
A reference tristimulus colorimeter
NASA Astrophysics Data System (ADS)
Eppeldauer, George P.
2002-06-01
A reference tristimulus colorimeter has been developed at NIST with a transmission-type silicon trap detector (1) and four temperature-controlled filter packages to realize the Commission Internationale de l'Eclairage (CIE) x(λ), y(λ) and z(λ) color matching functions (2). Instead of lamp standards, high-accuracy detector standards are used for the colorimeter calibration. A detector-based calibration procedure is suggested for tristimulus colorimeters where the absolute spectral responsivity of the tristimulus channels is determined. Then, color (spectral) correction and peak (amplitude) normalization are applied to minimize uncertainties caused by the imperfect realizations of the CIE functions. As a result of the corrections, the chromaticity coordinates of stable light sources with different spectral power distributions can be measured with uncertainties less than 0.0005 (k=1).
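For reference, the chromaticity coordinates mentioned above are the standard CIE projection of the measured tristimulus values X, Y and Z:

```latex
x = \frac{X}{X + Y + Z}, \qquad y = \frac{Y}{X + Y + Z}
```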
Quantitative 1H NMR: Development and Potential of an Analytical Method – an Update
Pauli, Guido F.; Gödecke, Tanja; Jaki, Birgit U.; Lankin, David C.
2012-01-01
Covering the literature from mid-2004 until the end of 2011, this review continues a previous literature overview on quantitative 1H NMR (qHNMR) methodology and its applications in the analysis of natural products (NPs). Among the foremost advantages of qHNMR is its accurate function with external calibration, the lack of any requirement for identical reference materials, a high precision and accuracy when properly validated, and an ability to quantitate multiple analytes simultaneously. As a result of the inclusion of over 170 new references, this updated review summarizes a wealth of detailed experiential evidence and newly developed methodology that supports qHNMR as a valuable and unbiased analytical tool for natural product and other areas of research. PMID:22482996
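The quantitation behind qHNMR rests on the integrated signal area being proportional to the number of contributing nuclei. In the familiar relative (internal-calibrant) form, which is a generic textbook relation rather than a formula quoted from this review, the analyte mass follows from:

```latex
m_{a} = m_{\mathrm{cal}}\cdot\frac{I_{a}}{I_{\mathrm{cal}}}\cdot\frac{N_{\mathrm{cal}}}{N_{a}}\cdot\frac{M_{a}}{M_{\mathrm{cal}}}
```

Here I is the integrated signal area, N the number of protons giving rise to the signal, M the molar mass, and m the mass of the analyte (a) and calibrant (cal); external calibration, as discussed in the review, replaces the in-sample calibrant signal with a separately acquired, validated reference measurement.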
Selection of reference genes for miRNA qRT-PCR under abiotic stress in grapevine.
Luo, Meng; Gao, Zhen; Li, Hui; Li, Qin; Zhang, Caixi; Xu, Wenping; Song, Shiren; Ma, Chao; Wang, Shiping
2018-03-13
Grapevine is among the fruit crops with high economic value, and because of the economic losses caused by abiotic stresses, the stress resistance of Vitis vinifera has become an increasingly important research area. Among the mechanisms responding to environmental stresses, the role of miRNA has received much attention recently. qRT-PCR is a powerful method for miRNA quantitation, but the accuracy of the method strongly depends on the appropriate reference genes. To determine the most suitable reference genes for grapevine miRNA qRT-PCR, 15 genes were chosen as candidate reference genes. After eliminating 6 candidate reference genes with unsatisfactory amplification efficiency, the expression stability of the remaining candidate reference genes under salinity, cold and drought was analysed using four algorithms, geNorm, NormFinder, deltaCt and Bestkeeper. The results indicated that U6 snRNA was the most suitable reference gene under salinity and cold stresses; whereas miR168 was the best for drought stress. The best reference gene sets for salinity, cold and drought stresses were miR160e + miR164a, miR160e + miR168 and ACT + UBQ + GAPDH, respectively. The selected reference genes or gene sets were verified using miR319 or miR408 as the target gene.
NASA Astrophysics Data System (ADS)
Acero, R.; Santolaria, J.; Pueo, M.; Aguilar, J. J.; Brau, A.
2015-11-01
High-range measuring equipment such as laser trackers needs large-dimension calibrated reference artifacts in calibration and verification procedures. In this paper, a new verification procedure for portable coordinate measuring instruments based on the generation and evaluation of virtual distances with an indexed metrology platform is developed. This methodology enables the definition of an unlimited number of reference distances without materializing them in a physical gauge to be used as a reference. The generation of the virtual points and the reference lengths derived from them is linked to the concept of the indexed metrology platform and the knowledge of the relative position and orientation of its upper and lower platforms with high accuracy. It is the measuring instrument, together with the indexed metrology platform, that remains still, while the virtual mesh rotates around them. As a first step, the virtual distances technique is applied to a laser tracker in this work. The experimental verification procedure of the laser tracker with virtual distances is simulated and further compared with the conventional verification procedure of the laser tracker with the indexed metrology platform. The results obtained in terms of volumetric performance of the laser tracker proved the suitability of the virtual distances methodology in calibration and verification procedures for portable coordinate measuring instruments, broadening and expanding the possibilities for the definition of reference distances in these procedures.
Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.
1999-01-01
Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809
Liu, Xiao-jing; Li, Qian-qian; Pang, Yuan-jie; Tian, Kai-yue; Xie, Zheng; Li, Zi-li
2015-06-01
As computer-assisted surgical design becomes increasingly popular in maxillofacial surgery, recording patients' natural head position (NHP) and reproducing it in the virtual environment are vital for preoperative design and postoperative evaluation. Our objective was to test the repeatability and accuracy of recording NHP using a multicamera system and a laser level. A laser level was used to project a horizontal reference line on a physical model, and a 3-dimensional image was obtained using a multicamera system. In surgical simulation software, the recorded NHP was reproduced in the virtual head position by registering the coordinate axes with the horizontal reference on both the frontal and lateral views. The repeatability and accuracy of the method were assessed using a gyroscopic procedure as the gold standard. The interclass correlation coefficients for pitch and roll were 0.982 (0.966, 0.991) and 0.995 (0.992, 0.998), respectively, indicating a high degree of repeatability. Regarding accuracy, the lack of agreement in orientation between the new method and the gold standard was within the ranges for pitch (-0.69°, 1.71°) and for roll (-0.92°, 1.20°); these have no clinical significance. This method of recording and reproducing NHP with a multicamera system and a laser level is repeatable, accurate, and clinically feasible. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Validation of a 3D CT method for measurement of linear wear of acetabular cups
2011-01-01
Background We evaluated the accuracy and repeatability of a 3D method for polyethylene acetabular cup wear measurements using computed tomography (CT). We propose that the method be used for clinical in vivo assessment of wear in acetabular cups. Material and methods Ultra-high molecular weight polyethylene cups with a titanium mesh molded on the outside were subjected to wear using a hip simulator. Before and after wear, they were (1) imaged with a CT scanner using a phantom model device, (2) measured using a coordinate measurement machine (CMM), and (3) weighed. CMM was used as the reference method for measurement of femoral head penetration into the cup and for comparison with CT, and gravimetric measurements were used as a reference for both CT and CMM. Femoral head penetration and wear vector angle were studied. The head diameters were also measured with both CMM and CT. The repeatability of the method proposed was evaluated with two repeated measurements using different positions of the phantom in the CT scanner. Results The accuracy of the 3D CT method for evaluation of linear wear was 0.51 mm and the repeatability was 0.39 mm. Repeatability for wear vector angle was 17°. Interpretation This study of metal-meshed hip-simulated acetabular cups shows that CT has the capacity for reliable measurement of linear wear of acetabular cups at a clinically relevant level of accuracy. PMID:21281259
Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.
Xue, Wufeng; Zhang, Lei; Mou, Xuanqin; Bovik, Alan C
2014-02-01
It is an important task to faithfully evaluate the perceptual quality of output images in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high quality prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). The image gradients are sensitive to image distortions, while different local structures in a distorted image suffer different degrees of degradations. This motivates us to explore the use of global variation of gradient based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images combined with a novel pooling strategy-the standard deviation of the GMS map-can predict accurately perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm.
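The GMSD computation described above is compact enough to sketch: gradient magnitudes of the reference and distorted images, a pixel-wise gradient-magnitude similarity map, and its standard deviation as the quality score. In the sketch below, the Prewitt kernels, preprocessing and the stabilizing constant c are placeholder choices rather than the published implementation's exact settings:

```python
import numpy as np
from scipy import ndimage

def gmsd(reference, distorted, c=170.0):
    """Gradient Magnitude Similarity Deviation (sketch; c is a placeholder constant)."""
    ref = np.asarray(reference, dtype=float)
    dst = np.asarray(distorted, dtype=float)

    def gradient_magnitude(img):
        gx = ndimage.prewitt(img, axis=0)
        gy = ndimage.prewitt(img, axis=1)
        return np.hypot(gx, gy)

    g_r = gradient_magnitude(ref)
    g_d = gradient_magnitude(dst)
    gms = (2.0 * g_r * g_d + c) / (g_r ** 2 + g_d ** 2 + c)  # per-pixel similarity map
    return float(gms.std())                                   # deviation = quality score
```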
Yuceler, Zeyneb; Kantarci, Mecit; Yuce, Ihsan; Kizrak, Yesim; Bayraktutan, Ummugulsum; Ogul, Hayri; Kiris, Adem; Celik, Omer; Pirimoglu, Berhan; Genc, Berhan; Gundogdu, Fuat
2014-01-01
Our aim was to evaluate the diagnostic accuracy of 256-slice, high-pitch mode multidetector computed tomography (MDCT) for coronary artery bypass graft (CABG) patency. Eighty-eight patients underwent 256-slice MDCT angiography to evaluate their graft patency after CABG surgery using a prospectively synchronized electrocardiogram in the high-pitch spiral acquisition mode. Effective radiation doses were calculated. We investigated the diagnostic accuracy of high-pitch, low-dose, prospective, electrocardiogram-triggering, dual-source MDCT for CABG patency compared with catheter coronary angiography imaging findings. A total of 215 grafts and 645 vessel segments were analyzed. All graft segments had diagnostic image quality. The proximal and middle graft segments had significantly (P < 0.05) better mean image quality scores (1.18 ± 0.4) than the distal segments (1.31 ± 0.5). Using catheter coronary angiography as the reference standard, high-pitch MDCT had the following sensitivity, specificity, positive predictive value, and negative predictive value of per-segment analysis for detecting graft patency: 97.1%, 99.6%, 94.4%, and 99.8%, respectively. In conclusion, MDCT can be used noninvasively with a lower radiation dose for the assessment of restenosis in CABG patients.
Accuracy assessment with complex sampling designs
Raymond L. Czaplewski
2010-01-01
A reliable accuracy assessment of remotely sensed geospatial data requires a sufficiently large probability sample of expensive reference data. Complex sampling designs reduce cost or increase precision, especially with regional, continental and global projects. The General Restriction (GR) Estimator and the Recursive Restriction (RR) Estimator separate a complex...
Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, withi...
Biglands, John D; Ibraheem, Montasir; Magee, Derek R; Radjenovic, Aleksandra; Plein, Sven; Greenwood, John P
2018-05-01
This study sought to compare the diagnostic accuracy of visual and quantitative analyses of myocardial perfusion cardiovascular magnetic resonance against a reference standard of quantitative coronary angiography. Visual analysis of perfusion cardiovascular magnetic resonance studies for assessing myocardial perfusion has been shown to have high diagnostic accuracy for coronary artery disease. However, only a few small studies have assessed the diagnostic accuracy of quantitative myocardial perfusion. This retrospective study included 128 patients randomly selected from the CE-MARC (Clinical Evaluation of Magnetic Resonance Imaging in Coronary Heart Disease) study population such that the distribution of risk factors and disease status was proportionate to the full population. Visual analysis results of cardiovascular magnetic resonance perfusion images, by consensus of 2 expert readers, were taken from the original study reports. Quantitative myocardial blood flow estimates were obtained using Fermi-constrained deconvolution. The reference standard for myocardial ischemia was a quantitative coronary x-ray angiogram stenosis severity of ≥70% diameter in any coronary artery of >2 mm diameter, or ≥50% in the left main stem. Diagnostic performance was calculated using receiver-operating characteristic curve analysis. The area under the curve for visual analysis was 0.88 (95% confidence interval: 0.81 to 0.95) with a sensitivity of 81.0% (95% confidence interval: 69.1% to 92.8%) and specificity of 86.0% (95% confidence interval: 78.7% to 93.4%). For quantitative stress myocardial blood flow the area under the curve was 0.89 (95% confidence interval: 0.83 to 0.96) with a sensitivity of 87.5% (95% confidence interval: 77.3% to 97.7%) and specificity of 84.5% (95% confidence interval: 76.8% to 92.3%). There was no statistically significant difference between the diagnostic performance of quantitative and visual analyses (p = 0.72). Incorporating rest myocardial blood flow values to generate a myocardial perfusion reserve did not significantly increase the quantitative analysis area under the curve (p = 0.79). Quantitative perfusion has a high diagnostic accuracy for detecting coronary artery disease but is not superior to visual analysis. The incorporation of rest perfusion imaging does not improve diagnostic accuracy in quantitative perfusion analysis. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
High-Accuracy HLA Type Inference from Whole-Genome Sequencing Data Using Population Reference Graphs
Dilthey, Alexander T; Gourraud, Pierre-Antoine; Mentzer, Alexander J; Cereb, Nezih; Iqbal, Zamin; McVean, Gil
2016-10-01
Genetic variation at the Human Leucocyte Antigen (HLA) genes is associated with many autoimmune and infectious disease phenotypes, is an important element of the immunological distinction between self and non-self, and shapes immune epitope repertoires. Determining the allelic state of the HLA genes (HLA typing) as a by-product of standard whole-genome sequencing data would therefore be highly desirable and enable the immunogenetic characterization of samples in currently ongoing population sequencing projects. Extensive hyperpolymorphism and sequence similarity between the HLA genes, however, pose problems for accurate read mapping and make HLA type inference from whole-genome sequencing data a challenging problem. We describe how to address these challenges in a Population Reference Graph (PRG) framework. First, we construct a PRG for 46 (mostly HLA) genes and pseudogenes, their genomic context and their characterized sequence variants, integrating a database of over 10,000 known allele sequences. Second, we present a sequence-to-PRG paired-end read mapping algorithm that enables accurate read mapping for the HLA genes. Third, we infer the most likely pair of underlying alleles at G group resolution from the IMGT/HLA database at each locus, employing a simple likelihood framework. We show that HLA*PRG, our algorithm, outperforms existing methods by a wide margin. We evaluate HLA*PRG on six classical class I and class II HLA genes (HLA-A, -B, -C, -DQA1, -DQB1, -DRB1) and on a set of 14 samples (3 samples with 2 x 100bp, 11 samples with 2 x 250bp Illumina HiSeq data). Of 158 alleles tested, we correctly infer 157 alleles (99.4%). We also identify and re-type two erroneous alleles in the original validation data. We conclude that HLA*PRG for the first time achieves accuracies comparable to gold-standard reference methods from standard whole-genome sequencing data, though high computational demands (currently ~30-250 CPU hours per sample) remain a significant challenge to practical application. PMID:27792722
Guidance Provided to Authors on Citing and Formatting References in Nursing Journals
Nicoll, Leslie H.; Oermann, Marilyn H.; Chinn, Peggy L.; Conklin, Jamie L.; Amarasekara, Sathya; McCarty, Midori
2018-01-01
Reference citations should be accurate, complete, and presented in a consistent format. This study analyzed information provided to authors on preparing citations and references for manuscripts submitted to nursing journals (n = 209). Half of the journals used the American Psychological Association reference style. Slightly more than half provided examples of how to cite articles and books; there were fewer examples of citing websites and online journals. Suggestions on improving accuracy of references are discussed. PMID:29346137
A Comparative Study of Different EEG Reference Choices for Diagnosing Unipolar Depression.
Mumtaz, Wajid; Malik, Aamir Saeed
2018-06-02
The choice of an electroencephalogram (EEG) reference has fundamental importance and could be critical during clinical decision-making, because an impure EEG reference could falsify the clinical measurements and subsequent inferences. In this research, the suitability of three EEG references was compared while classifying depressed and healthy brains using a machine-learning (ML)-based validation method. The EEG data of 30 unipolar depressed subjects and 30 age-matched healthy controls were recorded. The EEG data were analyzed under three different EEG references: the linked-ear reference (LE), the average reference (AR), and the reference electrode standardization technique (REST). The EEG-based functional connectivity (FC) was computed. Also, graph-based measures, such as the distances between nodes, minimum spanning tree, and maximum flow between the nodes for each channel pair, were calculated. An ML scheme provided a mechanism to compare the performance of the extracted features within a general framework comprising feature extraction (graph-theoretic measures), feature selection, classification, and validation. For comparison, performance metrics such as classification accuracy, sensitivity, specificity, and F score were computed. When comparing the three references, diagnostic accuracy was best with the REST, while the LE and AR showed less discrimination between the two groups. Based on the results, it can be concluded that the choice of an appropriate reference is critical in the clinical scenario. The REST reference is recommended for future applications of EEG-based diagnosis of mental illnesses.
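Two of the three reference schemes compared here are simple linear transformations of the recorded montage and are easy to sketch for a channels-by-samples array; REST additionally requires a head model and is omitted from this sketch:

```python
import numpy as np

def average_reference(eeg):
    """Re-reference to the instantaneous mean across channels (channels x samples)."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def linked_ear_reference(eeg, left_ear_idx, right_ear_idx):
    """Re-reference each channel to the mean of the two ear/mastoid channels."""
    ears = (eeg[left_ear_idx] + eeg[right_ear_idx]) / 2.0
    return eeg - ears
```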
Heba, Elhamy R.; Desai, Ajinkya; Zand, Kevin A.; Hamilton, Gavin; Wolfson, Tanya; Schlein, Alexandra N.; Gamst, Anthony; Loomba, Rohit; Sirlin, Claude B.; Middleton, Michael S.
2016-01-01
Purpose To determine the accuracy and the effect of possible subject-based confounders of magnitude-based magnetic resonance imaging (MRI) for estimating hepatic proton density fat fraction (PDFF) for different numbers of echoes in adults with known or suspected nonalcoholic fatty liver disease, using MR spectroscopy (MRS) as a reference. Materials and Methods In this retrospective analysis of 506 adults, hepatic PDFF was estimated by unenhanced 3.0T MRI, using right-lobe MRS as reference. Regions of interest placed on source images and on six-echo parametric PDFF maps were colocalized to MRS voxel location. Accuracy using different numbers of echoes was assessed by regression and Bland–Altman analysis; slope, intercept, average bias, and R2 were calculated. The effect of age, sex, and body mass index (BMI) on hepatic PDFF accuracy was investigated using multivariate linear regression analyses. Results MRI closely agreed with MRS for all tested methods. For three- to six-echo methods, slope, regression intercept, average bias, and R2 were 1.01–0.99, 0.11–0.62%, 0.24–0.56%, and 0.981–0.982, respectively. Slope was closest to unity for the five-echo method. The two-echo method was least accurate, underestimating PDFF by an average of 2.93%, compared to an average of 0.23–0.69% for the other methods. Statistically significant but clinically nonmeaningful effects on PDFF error were found for subject BMI (P range: 0.0016 to 0.0783), male sex (P range: 0.015 to 0.037), and no statistically significant effect was found for subject age (P range: 0.18–0.24). Conclusion Hepatic magnitude-based MRI PDFF estimates using three, four, five, and six echoes, and six-echo parametric maps are accurate compared to reference MRS values, and that accuracy is not meaningfully confounded by age, sex, or BMI. PMID:26201284
Reference Gauging System for a Small-Scale Liquid Hydrogen Tank
NASA Technical Reports Server (NTRS)
VanDresar, Neil T.; Siegwarth, James D.
2003-01-01
A system to accurately weigh the fluid contents of a small-scale liquid hydrogen test tank has been experimentally verified. It is intended for use as a reference or benchmark system when testing low-gravity liquid quantity gauging concepts in the terrestrial environment. The reference gauging system has shown a repeatable measurement accuracy of better than 0.5 percent of the full-tank liquid weight. With further refinement, the system accuracy can be improved to within 0.10 percent of full scale. This report describes the weighing system design, calibration, and operational results, and offers suggestions for further refinement of the system. An example is given to illustrate additional sources of uncertainty when mass measurements are converted to volume equivalents. Specifications of the companion test tank and its multi-layer insulation system are provided.
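As a hedged illustration of the mass-to-volume conversion mentioned above (not the report's own example), the sketch below propagates mass and density uncertainties into a volume estimate; the liquid hydrogen density and all numbers are nominal assumptions.

```python
# Hedged sketch: converting a weighed liquid mass to a volume equivalent and
# propagating the uncertainties. Density ~70.8 kg/m^3 near 20.3 K is an assumed
# nominal value; all figures are illustrative, not from the report.
import math

mass_kg, sigma_mass = 20.0, 20.0 * 0.005   # measured liquid mass, 0.5% accuracy
rho, sigma_rho = 70.8, 0.5                 # assumed density and its uncertainty (kg/m^3)

volume = mass_kg / rho                     # m^3
rel_unc = math.sqrt((sigma_mass / mass_kg) ** 2 + (sigma_rho / rho) ** 2)
print(f"volume = {volume:.4f} m^3 +/- {100 * rel_unc:.2f}%")
```

The point of the example is that even a very accurate weight measurement yields a volume whose uncertainty is dominated by how well the liquid density is known.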
Laser Truss Sensor for Segmented Telescope Phasing
NASA Technical Reports Server (NTRS)
Liu, Duncan T.; Lay, Oliver P.; Azizi, Alireza; Erlig, Herman; Dorsky, Leonard I.; Asbury, Cheryl G.; Zhao, Feng
2011-01-01
A paper describes the laser truss sensor (LTS) for detecting piston motion between two adjacent telescope segment edges. The LTS is formed by two point-to-point laser metrology gauges in a crossed geometry, and a high-resolution (<30 nm) LTS can be implemented with existing laser metrology gauges. The distance change between the reference plane and the target plane is measured as a function of the phase change between the reference and target beams. To ease the bandwidth requirements on the phase-detection electronics (or phase meter), homodyne or heterodyne detection techniques have been used. The phase of the target beam also changes with the refractive index of air, which varies with air pressure, temperature, and humidity; this error can be minimized by enclosing the metrology beams in baffles. For longer-term (weeks) tracking, the same gauge can be operated in an absolute metrology mode with micron-level accuracy; to implement absolute metrology, two laser frequencies are used on the same gauge. Absolute metrology using heterodyne laser gauges is a demonstrated technology. The complexity of the laser-source fiber distribution can be reduced using the range-gated metrology (RGM) approach.
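A hedged sketch of the phase-to-displacement relation behind such gauges is shown below, assuming a double-pass (go-and-return) heterodyne gauge; the wavelength and frequency offset are illustrative, not the instrument's actual parameters.

```python
# Hedged sketch: incremental metrology converts measured optical phase change to
# displacement; two-frequency absolute metrology uses a synthetic wavelength to
# set the unambiguous range. Values below are assumed, not from the paper.
import math

LAMBDA = 1.55e-6                      # optical wavelength (m), assumed

def displacement_from_phase(delta_phi_rad):
    """Double-pass gauge: displacement = lambda * delta_phi / (4 * pi)."""
    return LAMBDA * delta_phi_rad / (4.0 * math.pi)

# One milliradian of measured phase corresponds to sub-nanometre motion.
print(f"{displacement_from_phase(1e-3) * 1e9:.3f} nm per mrad of phase")

# Two-frequency absolute metrology: the synthetic wavelength c / delta_f sets
# the unambiguous range; a coarser delta_f gives a longer range.
C = 299_792_458.0
delta_f = 15e9                        # assumed 15 GHz offset between the two frequencies
print(f"synthetic wavelength = {C / delta_f * 100:.1f} cm")
```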
Spatiotemporal Local-Remote Sensor Fusion (ST-LRSF) for Cooperative Vehicle Positioning.
Jeong, Han-You; Nguyen, Hoa-Hung; Bhawiyuga, Adhitya
2018-04-04
Vehicle positioning plays an important role in the design of protocols, algorithms, and applications in intelligent transport systems. In this paper, we present a new framework of spatiotemporal local-remote sensor fusion (ST-LRSF) that cooperatively improves the accuracy of absolute vehicle positioning based on two state estimates of each vehicle in the vicinity: a local sensing estimate, measured by the on-board exteroceptive sensors, and a remote sensing estimate, received from neighboring vehicles via vehicle-to-everything communications. Given both estimates of the vehicle state, the ST-LRSF scheme identifies the set of vehicles in the vicinity, determines the reference vehicle states, defines a spatiotemporal dissimilarity metric between two reference vehicle states, and applies a greedy algorithm to compute a minimum weighted matching (MWM) between them. Given the outcome of the MWM, the theoretical position uncertainty of the proposed refinement algorithm is proven to be inversely proportional to the square root of the matching size. To further reduce the positioning uncertainty, we also develop an extended Kalman filter model with the refined position of ST-LRSF as one of the measurement inputs. The numerical results demonstrate that the proposed ST-LRSF framework can achieve high positioning accuracy in many different scenarios of cooperative vehicle positioning.
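A minimal sketch of the greedy matching step is given below: local sensing estimates are paired with remote (V2X) estimates in order of ascending dissimilarity, each estimate appearing in at most one pair. The Euclidean-distance metric and the gating threshold are illustrative stand-ins for the paper's spatiotemporal dissimilarity metric.

```python
# Hedged sketch: greedy minimum weighted matching between local and remote
# vehicle state estimates. The dissimilarity metric and gate are placeholders.
import math

def dissimilarity(a, b):
    # Positions as (x, y) in metres; the paper's metric also uses temporal terms.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_min_weight_matching(local, remote, gate=5.0):
    pairs = sorted((dissimilarity(l, r), i, j)
                   for i, l in enumerate(local)
                   for j, r in enumerate(remote))
    used_l, used_r, matching = set(), set(), []
    for d, i, j in pairs:
        if d > gate:
            break                                  # remaining pairs are even farther apart
        if i not in used_l and j not in used_r:
            matching.append((i, j, d))
            used_l.add(i)
            used_r.add(j)
    return matching

local = [(0.0, 0.0), (12.0, 3.0), (30.0, -2.0)]    # on-board sensor tracks
remote = [(0.8, -0.4), (29.1, -1.5)]               # states received via V2X
print(greedy_min_weight_matching(local, remote))
```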
Assessment of NPP VIIRS Albedo Over Heterogeneous Crop Land in Northern China
NASA Astrophysics Data System (ADS)
Wu, Xiaodan; Wen, Jianguang; Xiao, Qing; Yu, Yunyue; You, Dongqin; Hueni, Andreas
2017-12-01
In this paper, the accuracy of Suomi National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite (VIIRS) land surface albedo, derived with the direct estimation algorithm, was assessed using ground-based albedo observations from a wireless sensor network over a heterogeneous cropland at the Huailai station in northern China. Data from six nodes spanning 2013–2014 over vegetation, bare soil, and mixed terrain surfaces were used to provide ground reference at the VIIRS pixel scale. The performance of the VIIRS albedo was also compared with the Global LAnd Surface Satellite (GLASS) and Moderate Resolution Imaging Spectroradiometer (MODIS) albedos (Collections 5 and 6). The results indicate that the current granule-level VIIRS albedo has high accuracy, with a root-mean-square error of 0.02 for typical land covers, and is significantly correlated with the ground references, as indicated by a correlation coefficient (R) of 0.73. The VIIRS albedo shows distinct advantages over the GLASS and MODIS albedos for bare soil and mixed-cover surfaces, while it is inferior to the other two products over vegetated surfaces. Furthermore, its temporal continuity and ability to capture abrupt changes in surface albedo are better than those of the GLASS and MODIS albedos.
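The two accuracy metrics quoted above (root-mean-square error and correlation coefficient between satellite retrievals and ground reference) are computed as in the short sketch below; the albedo values are synthetic placeholders, not the Huailai measurements.

```python
# Hedged sketch: RMSE and correlation coefficient R between satellite albedo
# retrievals and ground-based reference albedo. Data are synthetic.
import numpy as np

ground = np.array([0.16, 0.18, 0.21, 0.25, 0.19, 0.14, 0.23])   # ground reference albedo
viirs  = np.array([0.17, 0.20, 0.20, 0.27, 0.17, 0.15, 0.25])   # satellite retrieval

rmse = np.sqrt(np.mean((viirs - ground) ** 2))
r = np.corrcoef(viirs, ground)[0, 1]
print(f"RMSE = {rmse:.3f}, R = {r:.2f}")
```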
Fujisada, H.; Bailey, G.B.; Kelly, Glen G.; Hara, S.; Abrams, M.J.
2005-01-01
The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the National Aeronautics and Space Administration's Terra spacecraft has an along-track stereoscopic capability, acquiring stereo data with its near-infrared spectral band. ASTER has two telescopes, one for nadir viewing and another for backward viewing, with a base-to-height ratio of 0.6; the spatial resolution is 15 m in the horizontal plane. Parameters such as the line-of-sight vectors and the pointing axis were adjusted during the initial operation period to generate Level-1 data products with high-quality stereo system performance. The evaluation of the digital elevation model (DEM) data was carried out separately by the Japanese and U.S. science teams using different DEM generation software and reference databases. The vertical accuracy of DEM data generated from the Level-1A data is 20 m with 95% confidence, without ground control point (GCP) correction, for individual scenes. Geolocation accuracy, which is important for the DEM datasets, is better than 50 m and appears to be limited by the spacecraft position accuracy. In addition, a slight increase in accuracy is observed when GCPs are used to generate the stereo data.
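Vertical accuracy "at 95% confidence" is conventionally reported as a 95% linear error derived from the vertical RMSE; the sketch below shows that standard convention, under a zero-mean normal error model, with illustrative elevation differences rather than the ASTER evaluation data.

```python
# Hedged sketch: vertical RMSE and 95%-confidence linear error (LE95 ~ 1.96 * RMSE
# under a zero-mean normal error model). Elevation differences are illustrative.
import numpy as np

dem_minus_reference = np.array([-4.2, 7.8, -11.5, 3.1, 9.6, -6.4, 12.2, -8.9])  # metres
rmse_z = np.sqrt(np.mean(dem_minus_reference ** 2))
le95 = 1.96 * rmse_z
print(f"vertical RMSE = {rmse_z:.1f} m, LE95 ~= {le95:.1f} m")
```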
Predicting the accuracy of ligand overlay methods with Random Forest models.
Nandigam, Ravi K; Evans, David A; Erickson, Jon A; Kim, Sangtae; Sutherland, Jeffrey J
2008-12-01
The accuracy of binding mode prediction using standard molecular overlay methods (ROCS, FlexS, Phase, and FieldCompare) is studied. Previous work has shown that simple decision tree modeling can be used to improve accuracy by selecting the best overlay template; this concept is extended here to the use of Random Forest (RF) modeling for template and algorithm selection. An extensive data set of 815 ligand-bound X-ray structures representing five gene families was used to generate ca. 70,000 overlays with the four programs. RF models, trained using standard measures of ligand and protein similarity and Lipinski-related descriptors, are used to automatically select the reference ligand and overlay method that maximize the probability of reproducing the overlay deduced from the X-ray structures (i.e., using RMSD ≤ 2 Å as the criterion for success). RF model scores are highly predictive of overlay accuracy, and their use in template and method selection produces correct overlays in 57% of cases for 349 overlay ligands not used to train the RF models. The inclusion of protein sequence similarity in the models enables the use of templates bound to related protein structures, yielding useful results even for proteins having no available X-ray structures.
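A hedged sketch of the selection idea follows: a Random Forest classifier is trained on descriptors of (query ligand, template, method) cases labelled by whether the overlay reproduced the X-ray pose, and its predicted success probability is then used to rank candidate templates and methods. The features, labels, and data below are toy placeholders, not the paper's descriptors.

```python
# Hedged sketch: Random Forest scoring of candidate (template, method) combinations
# by predicted probability of a successful (RMSD <= 2 A) overlay. Toy data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
# Columns (illustrative): ligand 2D similarity, protein sequence similarity, scaled MW.
X_train = rng.uniform(0, 1, size=(1000, 3))
# Toy label model: overlays succeed more often when both similarities are high.
y_train = (0.6 * X_train[:, 0] + 0.4 * X_train[:, 1] + rng.normal(0, 0.1, 1000)) > 0.55

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Rank candidate (template, method) combinations for one query ligand and pick the
# one with the highest predicted probability of reproducing the X-ray overlay.
candidates = rng.uniform(0, 1, size=(8, 3))
scores = rf.predict_proba(candidates)[:, 1]
best = int(np.argmax(scores))
print(f"best candidate index = {best}, predicted success probability = {scores[best]:.2f}")
```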