Science.gov

Sample records for accuracy assessment methods

  1. Robust methods for assessing the accuracy of linear interpolated DEM

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Shi, Wenzhong; Liu, Eryong

    2015-02-01

    Methods for assessing the accuracy of a digital elevation model (DEM), with emphasis on robust methods, are studied in this paper. Based on the squared DEM residual population generated by the bi-linear interpolation method, three average-error statistics, (a) the mean, (b) the median, and (c) an M-estimator, are investigated for measuring interpolated DEM accuracy. A confidence interval is also constructed for each average-error statistic to further evaluate DEM quality. The first method relies on Student's t-distribution, while the second and third are derived from robust statistical theory. These robust methods can counteract the effects of outliers and even skew-distributed residuals in DEM accuracy assessment. Monte Carlo experiments examine the asymptotic convergence of the confidence intervals constructed by the three methods as sample size increases. The robust methods are shown to produce more reliable DEM accuracy assessments than the classical t-distribution-based method, and are therefore strongly recommended for assessing DEM accuracy, particularly where the DEM residual population is evidently non-normal or heavily contaminated with outliers.
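
    The three average-error statistics can be sketched as follows; the residual values, the Huber tuning constant, and the helper names are illustrative, not taken from the paper, and the confidence-interval construction is omitted:

```python
import statistics

def huber_m_estimate(values, k=1.345, tol=1e-6, max_iter=100):
    """Huber M-estimator of location via iteratively reweighted averaging.

    k is the tuning constant in units of the robust scale estimate;
    1.345 gives ~95% efficiency when the residuals are in fact normal.
    """
    mu = statistics.median(values)
    # Robust scale: median absolute deviation, rescaled for consistency
    # with the standard deviation under normality.
    scale = statistics.median(abs(v - mu) for v in values) / 0.6745 or 1.0
    for _ in range(max_iter):
        weights = [min(1.0, k * scale / abs(v - mu)) if v != mu else 1.0
                   for v in values]
        new_mu = sum(w * v for w, v in zip(weights, values)) / sum(weights)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu

# Hypothetical bilinear-interpolation DEM residuals with one gross outlier.
residuals = [0.12, -0.08, 0.05, -0.11, 0.03, 0.07, -0.04, 0.09, -0.06, 9.50]
mean_err = statistics.fmean(residuals)     # dragged toward the outlier
median_err = statistics.median(residuals)  # robust
huber_err = huber_m_estimate(residuals)    # robust, more efficient than median
```

    With the outlier present, the mean is pulled to roughly 0.96 while the median (0.04) and the Huber estimate stay with the bulk of the residuals, which is the behavior the robust methods exploit.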

  2. Survey methods for assessing land cover map accuracy

    USGS Publications Warehouse

    Nusser, S.M.; Klaas, E.E.

    2003-01-01

    The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but also on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.
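
    The 9-pixel cluster observation unit can be sketched as follows; the grids, class names, and helper functions are hypothetical:

```python
def cluster_labels(grid, row, col):
    """Collect labels from the 3x3 (9-pixel) cluster centered on (row, col)."""
    return [grid[r][c]
            for r in range(row - 1, row + 2)
            for c in range(col - 1, col + 2)]

def cluster_accuracy(map_grid, truth_grid, centers):
    """Fraction of cluster pixels where the map agrees with ground truth."""
    agree = total = 0
    for row, col in centers:
        for m, t in zip(cluster_labels(map_grid, row, col),
                        cluster_labels(truth_grid, row, col)):
            agree += (m == t)
            total += 1
    return agree / total

# Toy 4x4 land cover grids (hypothetical classes: "forest", "crop").
map_grid = [["forest"] * 4 for _ in range(4)]
truth = [["forest"] * 4 for _ in range(4)]
truth[1][2] = "crop"   # one misclassified pixel falls inside the cluster
acc = cluster_accuracy(map_grid, truth, centers=[(1, 1)])   # 8/9 agree
```

    Each sampled center contributes nine observations rather than one, which is what drives the precision gains for rare classes noted above.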

  3. Accuracy of a semiquantitative method for Dermal Exposure Assessment (DREAM)

    PubMed Central

    van Wendel, de Joo... B; Vermeulen, R; van Hemmen, J J; Fransman, W; Kromhout, H

    2005-01-01

    Background: The authors recently developed a Dermal Exposure Assessment Method (DREAM), an observational semiquantitative method to assess dermal exposures by systematically evaluating exposure determinants using pre-assigned default values. Aim: To explore the accuracy of the DREAM method by comparing its estimates with quantitative dermal exposure measurements in several occupational settings. Methods: Occupational hygienists filled in the DREAM questionnaire while observing workers performing a task during which exposure of the skin or clothing to chemical agents was simultaneously measured quantitatively. DREAM estimates were compared with measurement data by estimating Spearman correlation coefficients for each task and for individual observations. In addition, mixed linear regression models were used to study the effect of DREAM estimates on the variability in measured exposures between tasks, between workers, and from day to day. Results: For skin exposures, Spearman correlation coefficients for individual observations ranged from 0.19 to 0.82. DREAM estimates for exposure levels on hands and forearms showed a fixed effect between and within surveys, explaining mainly between-task variance. In general, exposure levels on the clothing layer were only predicted in a meaningful way by detailed DREAM estimates, which comprised detailed information on the concentration of the agent in the formulation to which exposure occurred. Conclusions: The authors expect that the DREAM method can be successfully applied for semiquantitative dermal exposure assessment in epidemiological and occupational hygiene surveys of groups of workers with considerable contrast in dermal exposure levels (variability between groups >1.0). For surveys with less contrasting exposure levels, quantitative dermal exposure measurements are preferable. PMID:16109819
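
    The Spearman correlation used above is simply the Pearson correlation computed on ranks; a minimal pure-Python sketch with hypothetical DREAM scores and exposure measurements:

```python
def rank(values):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1   # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical DREAM estimates vs. measured dermal exposure for one task.
dream = [2, 5, 3, 8, 7, 1]
measured = [0.4, 1.9, 0.7, 3.2, 2.5, 0.2]
rho = spearman(dream, measured)   # perfectly monotone here, so rho == 1.0
```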

  4. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  5. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  6. A comparison of two in vitro methods for assessing the fitting accuracy of composite inlays.

    PubMed

    Qualtrough, A J; Piddock, V; Kypreou, V

    1993-06-19

    Composite inlays were fabricated in standardised cavities cut into aluminum and Perspex blocks using a computer-controlled milling process. Four materials were used to construct the inlays. These were fabricated using an indirect technique following the manufacturers' recommendations, where applicable. In addition, for one of the composites, the fabrication procedures were modified. The fitting accuracy of the restorations was assessed by taking elastomeric impression wash replicas of the luting space and by examination of sectioned restored units using image analysis. The former method indicated significantly reduced fitting accuracy when either the use of die spacer or secondary curing was omitted from restoration construction, resulting in incomplete seating. The sectioning technique indicated that more factors appeared to reduce fitting accuracy significantly, including bulk packing, alteration of curing time, omission of die spacer, and the final polishing procedure. This method also provided more specific information concerning sites of premature contact. One material gave rise to significantly greater film thicknesses with both methods of assessment. No direct correlation was found between the two techniques of fit evaluation, but taken together the two methods provided complementary information.

  7. Theory and methods for accuracy assessment of thematic maps using fuzzy sets

    SciTech Connect

    Gopal, S.; Woodcock, C.

    1994-02-01

    The use of fuzzy sets in map accuracy assessment expands the amount of information that can be provided regarding the nature, frequency, magnitude, and source of errors in a thematic map. The need for using fuzzy sets arises from the observation that all map locations do not fit unambiguously in a single map category. Fuzzy sets allow for varying levels of set membership for multiple map categories. A linguistic measurement scale allows the kinds of comments commonly made during map evaluations to be used to quantify map accuracy. Four tables result from the use of fuzzy functions, and when taken together they provide more information than traditional confusion matrices. The use of a hypothetical dataset helps illustrate the benefits of the new methods. It is hoped that the enhanced ability to evaluate maps resulting from the use of fuzzy sets will improve our understanding of uncertainty in maps and facilitate improved error modeling. 40 refs.
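
    The MAX and RIGHT fuzzy match operators described by Gopal and Woodcock can be sketched as follows, assuming the 1-5 linguistic scale (1 = absolutely wrong, 3 = reasonable, 5 = absolutely right); the class names and scores are hypothetical:

```python
def max_match(site_scores, map_label):
    """MAX operator: the map label is a match if no other class scores higher."""
    return site_scores[map_label] == max(site_scores.values())

def right_match(site_scores, map_label, threshold=3):
    """RIGHT operator: the map label is acceptable if rated 'reasonable'
    (3) or better, even when another class would have been a better label."""
    return site_scores[map_label] >= threshold

# Hypothetical evaluation of one site: an analyst rates every candidate class.
scores = {"conifer": 4, "hardwood": 5, "shrub": 2}
best = max_match(scores, "conifer")         # False: hardwood scores higher
acceptable = right_match(scores, "conifer")  # True: conifer is still reasonable
```

    Tabulating MAX and RIGHT matches separately is what yields the richer error description the abstract contrasts with a traditional confusion matrix.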

  8. An improved multivariate analytical method to assess the accuracy of acoustic sediment classification maps.

    NASA Astrophysics Data System (ADS)

    Biondo, M.; Bartholomä, A.

    2014-12-01

    High-resolution hydroacoustic methods have been successfully employed for the detailed classification of sedimentary habitats. The fine-scale mapping of very heterogeneous, patchy sedimentary facies, and the compound effect of multiple non-linear physical processes on the acoustic signal, cause the classification of backscatter images to be subject to a great level of uncertainty. Standard procedures for assessing the accuracy of acoustic classification maps are not yet established. This study applies different statistical techniques to automatically classified acoustic images with the aims of i) quantifying the ability of backscatter to resolve grain size distributions, ii) understanding complex patterns influenced by factors other than grain size variations, and iii) designing innovative, repeatable statistical procedures to spatially assess classification uncertainties. A high-frequency (450 kHz) sidescan sonar survey, carried out in 2012 in the shallow upper-mesotidal inlet of the Jade Bay (German North Sea), allowed mapping of 100 km² of surficial sediment with a resolution and coverage never before acquired in the area. The backscatter mosaic was ground-truthed using a large dataset of sediment grab sample information (2009-2011). Multivariate procedures were employed to model the relationship between acoustic descriptors and granulometric variables in order to evaluate the correctness of acoustic class allocation and sediment group separation. Complex patterns in the acoustic signal appeared to be controlled by the combined effect of surface roughness, sorting, and mean grain size variations. The area is dominated by silt and fine sand in very mixed compositions; in this fine-grained matrix, the percentage of gravel proved to be the prevailing factor affecting backscatter variability. In the absence of coarse material, sorting mostly affected the ability to detect gradual but significant changes in seabed types. Misclassification due to temporal discrepancies

  9. Accuracy Assessment of Crown Delineation Methods for the Individual Trees Using LIDAR Data

    NASA Astrophysics Data System (ADS)

    Chang, K. T.; Lin, C.; Lin, Y. C.; Liu, J. K.

    2016-06-01

    Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm that delineates crown parameters, e.g. tree height and crown radius, from rasterized CHMs, and to assess the accuracy of the extracted volumetric parameters of single trees against manual measurements on corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired with a Leica ALS70-HP is used in this study. Two algorithms, a traditional approach that subtracts a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Two further algorithms, a multilevel morphological active-contour (MMAC) method and a variable window filter (VWF), are then implemented for individual tree delineation. Finally, the results of the two automatic estimation methods are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by simple subtraction is full of empty pixels ("pits") that strongly affect subsequent individual tree delineation. The experimental results indicate that after pit-free processing, more individual trees can be extracted and tree crown shapes appear more complete in the CHM data.
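
    The traditional CHM generation step can be sketched as a per-pixel subtraction followed by a crude pit fill; this is a stand-in illustration, not the actual pit-free algorithm, and the 2 m pit threshold is assumed:

```python
import statistics

def subtract(dsm, dem):
    """Traditional CHM: per-pixel DSM minus DEM."""
    return [[s - e for s, e in zip(srow, erow)]
            for srow, erow in zip(dsm, dem)]

def fill_pits(chm):
    """Replace any interior pixel far below all 8 neighbors with their
    median (a crude stand-in for a true pit-free CHM algorithm)."""
    out = [row[:] for row in chm]
    for r in range(1, len(chm) - 1):
        for c in range(1, len(chm[0]) - 1):
            nbrs = [chm[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)]
            if chm[r][c] < min(nbrs) - 2.0:   # 2 m pit threshold (assumed)
                out[r][c] = statistics.median(nbrs)
    return out

# Toy 3x3 tile: a 1 m "pit" inside an otherwise 10 m canopy.
dsm = [[10.0] * 3, [10.0, 1.0, 10.0], [10.0] * 3]
dem = [[0.0] * 3 for _ in range(3)]
chm = fill_pits(subtract(dsm, dem))   # the pit is filled back to 10.0
```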

  10. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    NASA Astrophysics Data System (ADS)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and polar motion (PM). In this study, a detailed comparison was made of real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method with simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The results show that the proposed method provides better accuracy at different prediction lengths.

  11. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November, 1980 dealing with Landsat classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  12. Assessing the Accuracy of the Tracer Dilution Method with Atmospheric Dispersion Modeling

    NASA Astrophysics Data System (ADS)

    Taylor, D.; Delkash, M.; Chow, F. K.; Imhoff, P. T.

    2015-12-01

    Landfill methane emissions are difficult to estimate due to limited observations and data uncertainty. The mobile tracer dilution method is a widely used and cost-effective approach for predicting landfill methane emissions. The method releases a tracer gas on the surface of the landfill and measures the concentrations of both methane and the tracer gas downwind. Mobile measurements are conducted with a gas analyzer mounted on a vehicle to capture transects of both gas plumes. The idea behind the method is that if the measurements are performed far enough downwind, the methane plume from the large area source of the landfill and the tracer plume from a small number of point sources will be sufficiently well mixed to behave similarly, and the ratio between the concentrations will be a good estimate of the ratio between the two emission rates. The mobile tracer dilution method is sensitive to details of the setup, such as the placement of the tracer release locations and the distance from the landfill to the downwind measurements, which have not been thoroughly examined. In this study, numerical modeling is used as an alternative to field measurements to study the sensitivity of the tracer dilution method and provide estimates of measurement accuracy. Using topography and wind conditions for an actual landfill, a landfill emissions rate is prescribed in the model and compared against the emissions rate predicted by application of the tracer dilution method. Factors tested include the number of tracers, the distance between tracers, and the distance from the landfill to the measurement transect. Two different methane emissions scenarios are simulated: homogeneous emissions over the entire surface of the landfill, and heterogeneous emissions with a hot spot containing 80% of the total emissions where the daily cover area is located. Numerical modeling of the tracer dilution method is a useful tool for evaluating the method without the expense and labor commitment of multiple field campaigns.
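
    The core ratio calculation of the tracer dilution method can be sketched as follows; the function name, background handling, and transect values are illustrative, and a real application would also convert between the molar masses of methane and the tracer gas:

```python
def tracer_dilution_rate(q_tracer, ch4_plume, tracer_plume,
                         ch4_background=0.0, tracer_background=0.0):
    """Estimate the methane emission rate from a known tracer release.

    q_tracer: known tracer release rate (e.g. kg/h).
    ch4_plume / tracer_plume: concentrations sampled across the same
    downwind transect; the above-background integrals are ratioed.
    """
    ch4 = sum(c - ch4_background for c in ch4_plume)
    trc = sum(c - tracer_background for c in tracer_plume)
    return q_tracer * ch4 / trc

# Hypothetical transect: the methane plume integrates to 5x the tracer
# plume, so a 2 kg/h tracer release implies ~10 kg/h of methane (before
# any molar-mass correction).
q_ch4 = tracer_dilution_rate(2.0,
                             ch4_plume=[0.5, 1.5, 2.0, 1.0],
                             tracer_plume=[0.1, 0.3, 0.4, 0.2])
```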

  13. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.

  14. Accuracy of field methods in assessing body fat in collegiate baseball players.

    PubMed

    Loenneke, Jeremy P; Wray, Mandy E; Wilson, Jacob M; Barnes, Jeremy T; Kearney, Monica L; Pujol, Thomas J

    2013-01-01

    When assessing the fitness levels of athletes, body composition is usually estimated, as it may play a role in athletic performance. Therefore, the purpose of this study was to determine the validity of bioelectrical impedance analysis (BIA) and skinfold (SKF) methods compared with dual-energy X-ray absorptiometry (DXA) for estimating percent body fat (%BF) in Division 1 collegiate baseball players (n = 35). The results of this study indicate that the field methods investigated were not valid compared with DXA for estimating %BF. In conclusion, this study does not support the use of the TBF-350, HBF-306, HBF-500, or SKF thickness for estimating %BF in collegiate baseball players. The reliability of these BIA devices remains unknown; therefore, it is currently uncertain if they may be used to track changes over time.
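
    A common way to quantify the validity of a field method against a criterion such as DXA is a Bland-Altman analysis (mean bias and 95% limits of agreement); this is a generic sketch with hypothetical %BF values, not the statistics reported in the paper:

```python
def bland_altman(field, reference):
    """Mean bias and 95% limits of agreement between two %BF methods."""
    diffs = [f - r for f, r in zip(field, reference)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical %BF values: a BIA device vs. DXA for six players.
bia = [12.1, 15.4, 18.2, 10.9, 14.0, 16.5]
dxa = [14.0, 17.1, 19.5, 13.2, 15.8, 18.9]
bias, (lo, hi) = bland_altman(bia, dxa)   # negative bias: BIA reads low
```

    A systematic bias like this one could in principle be corrected, but wide limits of agreement are what make a field method invalid for individual athletes.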

  15. Assessing the accuracy of auralizations computed using a hybrid geometrical-acoustics and wave-acoustics method

    NASA Astrophysics Data System (ADS)

    Summers, Jason E.; Takahashi, Kengo; Shimizu, Yasushi; Yamakawa, Takashi

    2001-05-01

    When based on geometrical acoustics, computational models used for auralization of auditorium sound fields are physically inaccurate at low frequencies. To increase accuracy while keeping computation tractable, hybrid methods using computational wave acoustics at low frequencies have been proposed and implemented in small enclosures such as simplified models of car cabins [Granier et al., J. Audio Eng. Soc. 44, 835-849 (1996)]. The present work extends such an approach to an actual 2400-m3 auditorium using the boundary-element method for frequencies below 100 Hz. The effect of including wave-acoustics at low frequencies is assessed by comparing the predictions of the hybrid model with those of the geometrical-acoustics model and comparing both with measurements. Conventional room-acoustical metrics are used together with new methods based on two-dimensional distance measures applied to time-frequency representations of impulse responses. Despite in situ measurements of boundary impedance, uncertainties in input parameters limit the accuracy of the computed results at low frequencies. However, aural perception ultimately defines the required accuracy of computational models. An algorithmic method for making such evaluations is proposed based on correlating listening-test results with distance measures between time-frequency representations derived from auditory models of the ear-brain system. Preliminary results are presented.

  16. Accuracy of ELISA detection methods for gluten and reference materials: a realistic assessment.

    PubMed

    Diaz-Amigo, Carmen; Popping, Bert

    2013-06-19

    The determination of prolamins by ELISA and subsequent conversion of the resulting concentration to gluten content in food appears to be a comparatively simple and straightforward process with which many laboratories have years of experience. At the end of the process, a value of gluten, expressed in mg/kg or ppm, is obtained. This value is often the basis for deciding whether a product can be labeled gluten-free. On the basis of currently available scientific information, the accuracy of the values obtained with commonly used commercial ELISA kits has to be questioned. Although several multilaboratory studies have recently been conducted in an attempt to emphasize and ensure the accuracy of the results, the data suggest that it was the precision of these assays, not the accuracy, that was confirmed, because some of the underlying assumptions for calculating the gluten content lack scientific data support as well as appropriate reference materials for comparison. This paper discusses the issues of gluten determination and quantification with respect to antibody specificity, extraction procedures, reference materials, and their commutability.
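
    The conversion step in question can be sketched as follows; the factor of 2 reflects the conventional assumption, questioned above, that prolamins constitute half of gluten, and 20 mg/kg is the Codex gluten-free threshold:

```python
def gluten_from_prolamin(prolamin_mg_per_kg, factor=2.0):
    """Convert a measured prolamin concentration to gluten content.

    The default factor of 2 encodes the conventional (and contested)
    assumption that prolamins make up half of total gluten.
    """
    return prolamin_mg_per_kg * factor

gluten = gluten_from_prolamin(8.0)   # 16 mg/kg
label_ok = gluten < 20               # below the Codex 20 mg/kg limit
```

    Any error in the assumed prolamin-to-gluten factor propagates directly into the labeling decision, which is the accuracy concern the paper raises.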

  17. Assessment of accuracy of in-situ methods for measuring building-envelope thermal resistance

    SciTech Connect

    Fang, J.B.; Grot, R.A.; Park, H.S.

    1986-03-01

    A series of field and laboratory tests were conducted to evaluate the accuracy of in-situ thermal-resistance-measurement techniques. The results of thermal-performance evaluation of the exterior walls of six thermal mass test houses situated in Gaithersburg, Maryland are presented. The wall construction of these one-room houses includes insulated light-weight wood frame, uninsulated light-weight wood frame, insulated masonry with outside mass, uninsulated masonry, log, and insulated masonry with inside mass. In-situ measurements of heat transfer through building envelopes were made with heat flux transducers and portable calorimeters.
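
    In-situ thermal resistance from heat flux transducer data is commonly computed with a summation ("average") method; a minimal sketch with hypothetical readings, not data from these test houses:

```python
def r_value(delta_t, heat_flux):
    """In-situ thermal resistance from averaged measurements.

    delta_t: indoor-minus-outdoor temperature differences (K), sampled
    simultaneously with heat_flux, the transducer readings (W/m^2).
    Dividing accumulated temperature difference by accumulated flux
    damps the effect of transient heat storage in the wall.
    """
    return sum(delta_t) / sum(heat_flux)

# Hypothetical hourly readings for one wall section.
R = r_value(delta_t=[20.0, 22.0, 18.0],
            heat_flux=[10.0, 11.0, 9.0])   # 2.0 m^2*K/W
```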

  18. Methods in Use for Sensitivity Analysis, Uncertainty Evaluation, and Target Accuracy Assessment

    SciTech Connect

    G. Palmiotti; M. Salvatores; G. Aliberti

    2007-10-01

    Sensitivity coefficients can be used for different objectives, such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluation of the representativity of an experiment with respect to a reference design configuration. In this paper the theory, based on the adjoint approach, that is implemented in the ERANOS fast reactor code system is presented, along with some unique tools and features related to specific types of problems, as is the case for nuclide transmutation, reactivity loss during the cycle, decay heat, the neutron source associated with fuel fabrication, and experiment representativity.

  19. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA

    SciTech Connect

    Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.

    2015-07-15

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.

  20. Comparative study of application accuracy of two frameless neuronavigation systems: experimental error assessment quantifying registration methods and clinically influencing factors.

    PubMed

    Paraskevopoulos, Dimitrios; Unterberg, Andreas; Metzner, Roland; Dreyhaupt, Jens; Eggers, Georg; Wirtz, Christian Rainer

    2010-04-01

    This study aimed to compare the accuracy of two commercial neuronavigation systems and to assess and quantify the clinical factors and surface registration methods that often decrease accuracy. Active (Stryker Navigation) and passive (VectorVision Sky, BrainLAB) neuronavigation systems were tested with an anthropomorphic phantom with a deformable layer simulating skin and soft tissue. True coordinates measured by computer numerical control were compared with coordinates on image data and during navigation, to calculate software and system accuracy respectively. Comparison of image and navigation coordinates was used to evaluate navigation accuracy. Both systems achieved an overall accuracy of <1.5 mm. Stryker achieved better software accuracy, whereas BrainLAB achieved better system and navigation accuracy. Factors with conspicuous influence (P<0.01) were imaging, instrument replacement, sterile cover drape, and geometry of instruments. Precision data indicated by the systems did not in general reflect measured accuracy. Surface matching resulted in no improvement of accuracy, confirming former studies. Laser registration showed no differences compared to conventional pointers. Differences between the two systems were limited. Surface registration may improve inaccurate point-based registrations but does not in general affect overall accuracy. Accuracy feedback from the systems does not always match true target accuracy and requires critical evaluation by the surgeon.

  1. Disease severity estimates - effects of rater accuracy and assessments methods for comparing treatments

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Assessment of disease is fundamental to the discipline of plant pathology, and estimates of severity are often made visually. However, it is established that visual estimates can be inaccurate and unreliable. In this study estimates of Septoria leaf blotch on leaves of winter wheat from non-treated ...

  2. Assessing the accuracy of the isotropic periodic sum method through Madelung energy computation

    NASA Astrophysics Data System (ADS)

    Ojeda-May, Pedro; Pu, Jingzhi

    2014-04-01

    We tested the isotropic periodic sum (IPS) method for computing Madelung energies of ionic crystals. The performance of the method, both in its nonpolar (IPSn) and polar (IPSp) forms, was compared with that of the zero-charge and Wolf potentials [D. Wolf, P. Keblinski, S. R. Phillpot, and J. Eggebrecht, J. Chem. Phys. 110, 8254 (1999)]. The results show that the IPSn and IPSp methods converge the Madelung energy to its reference value with an average deviation of ~10^-4 and ~10^-7 energy units, respectively, for a cutoff range of 18-24a (a/2 being the nearest-neighbor ion separation). However, minor oscillations were detected for the IPS methods when deviations of the computed Madelung energies were plotted on a logarithmic scale as a function of the cutoff distance. To remove such oscillations, we introduced a modified IPSn potential in which both the local-region and long-range electrostatic terms are damped, in analogy to the Wolf potential. With the damped-IPSn potential, a smoother convergence was achieved. In addition, we observed a better agreement between the damped-IPSn and IPSp methods, which suggests that damping the IPSn potential is in effect similar to adding a screening potential in IPSp.
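
    The Wolf damped, charge-neutralized direct sum used above as a comparison potential can be sketched for the rock-salt Madelung constant; distances are in units of the nearest-neighbor separation, and the damping parameter and cutoff are assumed, not taken from the paper:

```python
import math

def wolf_madelung(alpha=0.375, r_cut=8.0):
    """NaCl Madelung constant via the Wolf damped, shifted direct sum.

    Charges (-1)^(i+j+k) sit on a simple cubic lattice around a +1 ion
    at the origin; erfc damping plus the Wolf self/neutralization term
    makes the conditionally convergent sum converge absolutely.
    """
    n = int(r_cut) + 1
    total = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if (i, j, k) == (0, 0, 0):
                    continue
                r = math.sqrt(i * i + j * j + k * k)
                if r >= r_cut:
                    continue
                q = -1.0 if (i + j + k) % 2 else 1.0
                total += q * math.erfc(alpha * r) / r
    # Self/neutralization correction (Wolf et al., 1999).
    total -= math.erfc(alpha * r_cut) / r_cut + 2.0 * alpha / math.sqrt(math.pi)
    return total

M = wolf_madelung()   # close to the reference value -1.747565
```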

  3. Assessing the accuracy of some popular DFT methods for computing harmonic vibrational frequencies of water clusters

    NASA Astrophysics Data System (ADS)

    Howard, J. Coleman; Enyard, Jordan D.; Tschumper, Gregory S.

    2015-12-01

    A wide range of density functional theory (DFT) methods (37 altogether), including pure, hybrid, range-separated hybrid, double-hybrid, and dispersion-corrected functionals, have been employed to compute the harmonic vibrational frequencies of eight small water clusters ranging in size from the dimer to four different isomers of the hexamer. These computed harmonic frequencies have been carefully compared to recently published benchmark values that are expected to be very close to the CCSD(T) complete basis set limit. Of the DFT methods examined here, ωB97 and ωB97X are the most consistently accurate, deviating from the reference values by less than 20 cm^-1 on average and never more than 60 cm^-1. The performance of double-hybrid methods including B2PLYP and mPW2-PLYP is only slightly better than more economical approaches, such as the M06-L pure functional and the M06-2X hybrid functional. Additionally, dispersion corrections offer very little improvement in computed frequencies.

  4. Assessing accuracy of measurements for a Wingate Test using the Taguchi method.

    PubMed

    Franklin, Kathryn L; Gordon, Rae S; Davies, Bruce; Baker, Julien S

    2008-01-01

    The purpose of this study was to establish the effects of four variables on the results obtained for a Wingate Anaerobic Test (WAnT). The study used a 30 second WAnT and compared data collected and analysed in different ways in order to draw conclusions about the relative importance of the variables to the results. Data were collected simultaneously by a commercially available software correction system manufactured by Cranlea Ltd. (Birmingham, England) and by an alternative method of data collection involving direct measurement of the flywheel velocity and the brake force. The data were compared using a design-of-experiments technique, the Taguchi method. Four variables were examined - flywheel speed, braking force, moment of inertia of the flywheel, and the time interval over which work and power were calculated. The choice of time interval was identified as the most influential variable. While the other factors also influenced the results, the decreased time interval over which the data were averaged gave a 9.8% increase in work done, a 40.75% increase in peak power, and a 13.1% increase in mean power. PMID:18373285
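
    The influence of the averaging interval can be illustrated directly: the peak of short-window mean power is always at least as high as the peak over a longer window, because every long-window mean is an average of shorter-window means. A sketch with hypothetical 1 Hz power samples:

```python
def rolling_peak(power, window):
    """Highest mean power over any contiguous window of samples."""
    return max(sum(power[i:i + window]) / window
               for i in range(len(power) - window + 1))

# Hypothetical 1 Hz power samples (W) from a 30 s Wingate test.
power = [400, 650, 820, 790, 740, 700, 660, 620, 580, 550,
         520, 500, 480, 460, 450, 440, 430, 420, 410, 400,
         395, 390, 385, 380, 375, 370, 365, 360, 355, 350]
peak_1s = rolling_peak(power, 1)   # 820 W: the single best sample
peak_5s = rolling_peak(power, 5)   # lower: the best 5 s stretch averaged
```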

  5. An accuracy assessment of different rigid body image registration methods and robotic couch positional corrections using a novel phantom

    SciTech Connect

    Arumugam, Sankar; Xing Aitang; Jameson, Michael G.; Holloway, Lois

    2013-03-15

    Purpose: Image guided radiotherapy (IGRT) using cone beam computed tomography (CBCT) images greatly reduces interfractional patient positional uncertainties. An understanding of uncertainties in the IGRT process itself is essential to ensure appropriate use of this technology. The purpose of this study was to develop a phantom capable of assessing the accuracy of IGRT hardware and software including a 6 degrees of freedom patient positioning system and to investigate the accuracy of the Elekta XVI system in combination with the HexaPOD robotic treatment couch top. Methods: The constructed phantom enabled verification of the three automatic rigid body registrations (gray value, bone, seed) available in the Elekta XVI software and includes an adjustable mount that introduces known rotational offsets to the phantom from its reference position. Repeated positioning of the phantom was undertaken to assess phantom rotational accuracy. Using this phantom the accuracy of the XVI registration algorithms was assessed considering CBCT hardware factors and image resolution together with the residual error in the overall image guidance process when positional corrections were performed through the HexaPOD couch system. Results: The phantom positioning was found to be within 0.04° (σ = 0.12°), 0.02° (σ = 0.13°), and -0.03° (σ = 0.06°) in the X, Y, and Z directions, respectively, enabling assessment of IGRT with a 6 degrees of freedom patient positioning system. The gray value registration algorithm showed the least error in calculated offsets, with a maximum mean difference of -0.2 mm (σ = 0.4 mm) in translational and -0.1° (σ = 0.1°) in rotational directions for all image resolutions. Bone and seed registration were found to be sensitive to CBCT image resolution. Seed registration was found to be most sensitive, demonstrating a maximum mean error of -0.3 mm (σ = 0.9 mm) and -1.4° (σ = 1.7°) in translational

  6. Skinfold Assessment: Accuracy and Application

    ERIC Educational Resources Information Center

    Ball, Stephen; Swan, Pamela D.; Altena, Thomas S.

    2006-01-01

    Although not perfect, skinfolds (SK), or the measurement of fat under the skin, remains the most popular and practical method available to assess body composition on a large scale (Kuczmarski, Flegal, Campbell, & Johnson, 1994). Even for practitioners who have been using SK for years and are highly proficient at locating the correct anatomical…

  7. When Does Choice of Accuracy Measure Alter Imputation Accuracy Assessments?

    PubMed

    Ramnarine, Shelina; Zhang, Juan; Chen, Li-Shiun; Culverhouse, Robert; Duan, Weimin; Hancock, Dana B; Hartz, Sarah M; Johnson, Eric O; Olfson, Emily; Schwantes-An, Tae-Hwi; Saccone, Nancy L

    2015-01-01

    Imputation, the process of inferring genotypes for untyped variants, is used to identify and refine genetic association findings. Inaccuracies in imputed data can distort the observed association between variants and a disease. Many statistics are used to assess accuracy; some compare imputed to genotyped data and others are calculated without reference to true genotypes. Prior work has shown that the Imputation Quality Score (IQS), which is based on Cohen's kappa statistic and compares imputed genotype probabilities to true genotypes, appropriately adjusts for chance agreement; however, it is not commonly used. To identify differences in accuracy assessment, we compared IQS with concordance rate, squared correlation, and accuracy measures built into imputation programs. Genotypes from the 1000 Genomes reference populations (AFR N = 246 and EUR N = 379) were masked to match the typed single nucleotide polymorphism (SNP) coverage of several SNP arrays and were imputed with BEAGLE 3.3.2 and IMPUTE2 in regions associated with smoking behaviors. Additional masking and imputation was conducted for sequenced subjects from the Collaborative Genetic Study of Nicotine Dependence and the Genetic Study of Nicotine Dependence in African Americans (N = 1,481 African Americans and N = 1,480 European Americans). Our results offer further evidence that concordance rate inflates accuracy estimates, particularly for rare and low frequency variants. For common variants, squared correlation, BEAGLE R2, IMPUTE2 INFO, and IQS produce similar assessments of imputation accuracy. However, for rare and low frequency variants, compared to IQS, the other statistics tend to be more liberal in their assessment of accuracy. IQS is important to consider when evaluating imputation accuracy, particularly for rare and low frequency variants. PMID:26458263
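
The chance-agreement adjustment at the heart of IQS can be illustrated with a small synthetic example (the data below are invented, not from the study): for a rare variant, naive concordance looks excellent even for an imputation that carries no information, while Cohen's kappa correctly reports zero.

```python
from collections import Counter

def concordance(true_g, imputed_g):
    """Fraction of genotypes where the best-guess imputed call matches truth."""
    return sum(t == i for t, i in zip(true_g, imputed_g)) / len(true_g)

def cohens_kappa(true_g, imputed_g):
    """Chance-corrected agreement (the statistic underlying IQS)."""
    n = len(true_g)
    p_o = concordance(true_g, imputed_g)
    ct, ci = Counter(true_g), Counter(imputed_g)
    # Expected agreement by chance, from the marginal genotype frequencies
    p_e = sum(ct[g] * ci[g] for g in set(ct) | set(ci)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# A rare variant: 98 of 100 subjects are homozygous reference (coded 0).
true_g    = [0] * 98 + [1, 1]
imputed_g = [0] * 100          # an "imputation" that just calls everyone 0
print(concordance(true_g, imputed_g))   # 0.98 -- looks excellent
print(cohens_kappa(true_g, imputed_g))  # 0.0  -- no information beyond chance
```

This is why the abstract reports concordance rate inflating accuracy estimates for rare and low-frequency variants.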

  8. Orbit accuracy assessment for Seasat

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.

    1980-01-01

    Laser range measurements are used to determine the orbit of Seasat during the period from July 28, 1978, to Aug. 14, 1978, and the influence of the gravity field, atmospheric drag, and solar radiation pressure on the orbit accuracy is investigated. It is noted that for the orbits of three-day duration, little distinction can be made between the influence of different atmospheric models. It is found that the special Seasat gravity field PGS-S3 is most consistent with the data for three-day orbits, but an unmodeled systematic effect in radiation pressure is noted. For orbits of 18-day duration, little distinction can be made between the results derived from the PGS gravity fields. It is also found that the geomagnetic field is an influential factor in the atmospheric modeling during this time period. Seasat altimeter measurements are used to determine the accuracy of the altimeter measurement time tag and to evaluate the orbital accuracy.

  9. Assessing the Accuracy of Two Enhanced Sampling Methods Using EGFR Kinase Transition Pathways: The Influence of Collective Variable Choice.

    PubMed

    Pan, Albert C; Weinreich, Thomas M; Shan, Yibing; Scarpazza, Daniele P; Shaw, David E

    2014-07-01

    Structurally elucidating transition pathways between protein conformations gives deep mechanistic insight into protein behavior but is typically difficult. Unbiased molecular dynamics (MD) simulations provide one solution, but their computational expense is often prohibitive, motivating the development of enhanced sampling methods that accelerate conformational changes in a given direction, embodied in a collective variable. The accuracy of such methods is unclear for complex protein transitions, because obtaining unbiased MD data for comparison is difficult. Here, we use long-time scale, unbiased MD simulations of epidermal growth factor receptor kinase deactivation as a complex biological test case for two widely used methods: steered molecular dynamics (SMD) and the string method. We found that common collective variable choices, based on the root-mean-square deviation (RMSD) of the entire protein, prevented the methods from producing accurate paths, even in SMD simulations on the time scale of the unbiased transition. Using collective variables based on the RMSD of the region of the protein known to be important for the conformational change, however, enabled both methods to provide a more accurate description of the pathway in a fraction of the simulation time required to observe the unbiased transition. PMID:26586510
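
The collective-variable choice discussed above reduces to which atoms enter the RMSD. A minimal sketch (assuming already-superposed coordinate arrays; a real implementation would first perform a Kabsch alignment):

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Plain RMSD between two conformations given as (N, 3) arrays.
    Assumes the structures are already superposed; no alignment is done."""
    diff = np.asarray(coords_a, float) - np.asarray(coords_b, float)
    return float(np.sqrt((diff ** 2).sum() / len(diff)))

def region_rmsd(coords_a, coords_b, indices):
    """RMSD restricted to a subset of atoms (e.g. the region known to be
    important for the conformational change), as advocated in the abstract."""
    a, b = np.asarray(coords_a, float), np.asarray(coords_b, float)
    return rmsd(a[indices], b[indices])
```

Restricting the collective variable to the mechanistically relevant region is then just a choice of `indices`.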

  11. Positional Accuracy Assessment of the OpenStreetMap Buildings Layer Through Automatic Homologous Pairs Detection: the Method and a Case Study

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.; Zamboni, G.

    2016-06-01

    OpenStreetMap (OSM) is currently the largest openly licensed collection of geospatial data. As OSM is increasingly exploited in a variety of applications, research has paid great attention to the assessment of its quality. This work focuses on assessing the quality of OSM buildings. While most of the studies available in the literature are limited to the evaluation of OSM building completeness, this work proposes an original approach to assess the positional accuracy of OSM buildings based on comparison with a reference dataset. The comparison relies on a quasi-automated detection of homologous pairs on the two datasets. Based on the homologous pairs found, warping algorithms such as affine transformations and multi-resolution splines can be applied to the OSM buildings to generate a new version having an optimal local match to the reference layer. A quality assessment of the OSM buildings of Milan Municipality (Northern Italy), covering an area of about 180 km², is then presented. After computing some measures of completeness, the algorithm based on homologous points is run using the building layer of the official vector cartography of Milan Municipality as the reference dataset. Approximately 100,000 homologous points are found, which show a systematic translation of about 0.4 m in both the X and Y directions and a mean distance of about 0.8 m between the datasets. Besides its efficiency and high degree of automation, the algorithm generates a warped version of the OSM buildings which, having by definition a closest match to the reference buildings, can eventually be integrated into the OSM database.
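
A hypothetical sketch of the warping step: fitting a least-squares affine transform to homologous pairs recovers a systematic shift like the ~0.4 m translation reported above. The point coordinates here are invented, and the paper's multi-resolution splines are beyond this sketch.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst.
    src, dst: (N, 2) arrays of homologous points (N >= 3).
    Returns A (2x2) and t (2,) such that dst ~= src @ A.T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])    # design matrix [x, y, 1]
    coef, *_ = np.linalg.lstsq(X, dst, rcond=None)  # shape (3, 2)
    A, t = coef[:2].T, coef[2]
    return A, t

# Invented pairs related by a pure 0.4 m shift in both X and Y
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dst = src + 0.4
A, t = fit_affine(src, dst)   # A ~ identity, t ~ [0.4, 0.4]
```

Applying the fitted transform to the OSM layer would then produce the locally matched version described in the abstract.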

  12. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
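
For intuition, a precision target like the one above (±10 percent at the 90 percent confidence level) can be checked with a simple error-rate confidence interval. This sketch assumes simple random sampling and invented counts; the study's stratified two-phase cluster design requires a more elaborate variance estimator.

```python
import math

def error_rate_ci(errors, n, z=1.645):
    """Point estimate and 90% CI half-width for a misclassification rate
    under simple random sampling (z = 1.645 for 90% confidence)."""
    p = errors / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, half_width

# Invented example: 29 misclassified plots out of 80 ground-checked plots
p, half = error_rate_ci(29, 80)
# half ~ 0.088, i.e. within the ±10 percent target at 90% confidence
```

The cluster and double-sampling structure of the AVRI design would typically widen this interval relative to the simple-random-sampling formula.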

  13. Validation of selected analytical methods using accuracy profiles to assess the impact of a Tobacco Heating System on indoor air quality.

    PubMed

    Mottier, Nicolas; Tharin, Manuel; Cluse, Camille; Crudo, Jean-René; Lueso, María Gómez; Goujon-Ginglinger, Catherine G; Jaquier, Anne; Mitova, Maya I; Rouget, Emmanuel G R; Schaller, Mathieu; Solioz, Jennifer

    2016-09-01

    Studies in environmentally controlled rooms have been used over the years to assess the impact of environmental tobacco smoke on indoor air quality. As new tobacco products are developed, it is important to determine their impact on air quality when used indoors. Before such an assessment can take place it is essential that the analytical methods used to assess indoor air quality are validated and shown to be fit for their intended purpose. Consequently, for this assessment, an environmentally controlled room was built and seven analytical methods, representing eighteen analytes, were validated. The validations were carried out with smoking machines using a matrix-based approach applying the accuracy profile procedure. The performances of the methods were compared for all three matrices under investigation: background air samples, the environmental aerosol of Tobacco Heating System THS 2.2, a heat-not-burn tobacco product developed by Philip Morris International, and the environmental tobacco smoke of a cigarette. The environmental aerosol generated by the THS 2.2 device did not have any appreciable impact on the performances of the methods. The comparison between the background and THS 2.2 environmental aerosol samples generated by smoking machines showed that only five compounds were higher when THS 2.2 was used in the environmentally controlled room. Regarding environmental tobacco smoke from cigarettes, the yields of all analytes were clearly above those obtained with the other two air sample types. PMID:27343591

  15. Accuracy of quantitative visual soil assessment

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Maricke; Heuvelink, Gerard; Stoorvogel, Jetse; Wallinga, Jakob; de Boer, Imke; van Dam, Jos; van Essen, Everhard; Moolenaar, Simon; Verhoeven, Frank; Stoof, Cathelijne

    2016-04-01

    Visual soil assessment (VSA) is a method to assess soil quality visually, when standing in the field. VSA is increasingly used by farmers, farm organisations and companies, because it is rapid and cost-effective, and because looking at soil provides understanding about soil functioning. Often VSA is regarded as subjective, so there is a need to verify VSA. Also, many VSAs have not been fine-tuned for contrasting soil types. This could lead to wrong interpretation of soil quality and soil functioning when contrasting sites are compared to each other. We wanted to assess the accuracy of VSA, while taking soil type into account. The first objective was to test whether quantitative visual field observations, which form the basis of many VSAs, could be validated with standardized field or laboratory measurements. The second objective was to assess whether quantitative visual field observations are reproducible when used by observers with contrasting backgrounds. For the validation study, we made quantitative visual observations at 26 cattle farms. Farms were located on sand, clay and peat soils in the North Friesian Woodlands, the Netherlands. Quantitative visual observations evaluated were grass cover, number of biopores, number of roots, soil colour, soil structure, number of earthworms, number of gley mottles and soil compaction. Linear regression analysis showed that four out of eight quantitative visual observations could be well validated with standardized field or laboratory measurements. The following quantitative visual observations correlated well with standardized field or laboratory measurements: grass cover with classified images of surface cover; number of roots with root dry weight; amount of large structure elements with mean weight diameter; and soil colour with soil organic matter content. Correlation coefficients were greater than 0.3, and half of the correlations were significant.
For the reproducibility study, a group of 9 soil scientists and 7

  16. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes ('building', 'hedge and bush', 'grass', 'road and parking lot', 'tree', 'wall and car port') had to be derived. Two classification methods were applied ('Decision Tree' and 'Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index), which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experiences. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
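
The accuracy measures named above are all derived from a confusion matrix. A minimal sketch with hypothetical counts (not the paper's data):

```python
import numpy as np

def accuracy_measures(cm):
    """User's/producer's accuracy, overall accuracy and kappa coefficient
    from a confusion matrix cm (rows = mapped class, cols = reference class)."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    users = np.diag(cm) / cm.sum(axis=1)       # 1 - commission error
    producers = np.diag(cm) / cm.sum(axis=0)   # 1 - omission error
    overall = np.trace(cm) / n
    # Expected chance agreement from the row/column marginals
    p_e = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2
    kappa = (overall - p_e) / (1 - p_e)
    return users, producers, overall, kappa

# Toy two-class matrix (hypothetical counts)
cm = [[45, 5],
      [10, 40]]
users, producers, overall, kappa = accuracy_measures(cm)
```

Confidence intervals for each measure can then be attached with the usual binomial formulas, as done in the paper.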

  17. Current methods of assessing the accuracy of three-dimensional soft tissue facial predictions: technical and clinical considerations.

    PubMed

    Khambay, B; Ullah, R

    2015-01-01

    Since the introduction of three-dimensional (3D) orthognathic planning software, studies have reported on their predictive ability. The aim of this study was to highlight the limitations of the current methods of analysis. The predicted 3D soft tissue image was compared to the postoperative soft tissue. For the full face, the maximum and 95th and 90th percentiles, the percentage of 3D mesh points ≤ 2 mm, and the root mean square (RMS) error, were calculated. For specific anatomical regions, the percentage of 3D mesh points ≤ 2 mm and the distance between the two meshes at 10 landmarks were determined. For the 95th and 90th percentiles, the maximum difference ranged from 7.7 mm to 2.2 mm and from 3.7 mm to 1.5 mm, respectively. The absolute mean distance ranged from 0.98 mm to 0.56 mm and from 0.91 mm to 0.50 mm, respectively. The percentage of mesh points within ≤ 2 mm was 94.4-85.2% for the full face and 100-31.3% for anatomical regions. The RMS error ranged from 2.49 mm to 0.94 mm. The majority of mean linear distances between the surfaces were ≤ 0.8 mm, but increased for the mean absolute distance. At present the use of specific anatomical regions is more clinically meaningful than the full face. It is crucial to understand these limitations and adopt a protocol for conducting such studies.
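
The surface-comparison statistics used above can be computed directly from per-vertex distances between the predicted and postoperative meshes. A sketch with invented distances:

```python
import numpy as np

def mesh_error_summary(distances, threshold=2.0):
    """Summary statistics for comparing a predicted to a postoperative
    surface, given per-vertex absolute distances in mm."""
    d = np.abs(np.asarray(distances, float))
    return {
        "max": float(d.max()),
        "p95": float(np.percentile(d, 95)),
        "p90": float(np.percentile(d, 90)),
        "pct_within": 100.0 * float((d <= threshold).mean()),
        "rms": float(np.sqrt((d ** 2).mean())),
    }

# Invented per-vertex distances (mm) for illustration
stats = mesh_error_summary([0.2, 0.5, 0.8, 1.1, 1.9, 2.5, 3.0, 0.4, 0.6, 1.0])
```

Restricting `distances` to the vertices of one anatomical region gives the region-wise figures the paper argues are more clinically meaningful.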

  18. Evaluating the Accuracy of Pharmacy Students' Self-Assessment Skills

    PubMed Central

    Gregory, Paul A. M.

    2007-01-01

    Objectives To evaluate the accuracy of self-assessment skills of senior-level bachelor of science pharmacy students. Methods A method proposed by Kruger and Dunning involving comparisons of pharmacy students' self-assessment with weighted average assessments of peers, standardized patients, and pharmacist-instructors was used. Results Eighty students participated in the study. Differences between self-assessment and external assessments were found across all performance quartiles. These differences were particularly large and significant in the third and fourth (lowest) quartiles and particularly marked in the areas of empathy, and logic/focus/coherence of interviewing. Conclusions The quality and accuracy of pharmacy students' self-assessment skills were not as strong as expected, particularly given recent efforts to include self-assessment in the curriculum. Further work is necessary to ensure this important practice competency and life skill is at the level expected for professional practice and continuous professional development. PMID:17998986

  19. Accuracy of the Kato-Katz method and formalin-ether concentration technique for the diagnosis of Clonorchis sinensis, and implication for assessing drug efficacy

    PubMed Central

    2013-01-01

    Background Clonorchiasis is a chronic neglected disease caused by a liver fluke, Clonorchis sinensis. Chemotherapy is the mainstay of control and treatment efficacy is usually determined by microscopic examination of fecal samples. We assessed the diagnostic accuracy of the Kato-Katz method and the formalin-ether concentration technique (FECT) for C. sinensis diagnosis, and studied the effect of diagnostic approach on drug efficacy evaluation. Methods Overall, 74 individuals aged ≥18 years with a parasitological confirmed C. sinensis infection at baseline were re-examined 3 weeks after treatment. Before and after treatment, two stool samples were obtained from each participant and each sample was subjected to triplicate Kato-Katz thick smears and a single FECT examination. Results Thirty-eight individuals were still positive for C. sinensis according to our diagnostic ‘gold’ standard (six Kato-Katz thick smears plus two FECT). Two FECT had a significantly lower sensitivity than six Kato-Katz thick smears (44.7% versus 92.1%; p <0.001). Examination of single Kato-Katz and single FECT considerably overestimated cure rates. Conclusions In settings where molecular diagnostic assays are absent, multiple Kato-Katz thick smears should be examined for an accurate diagnosis of C. sinensis infection and for assessing drug efficacy against this liver fluke infection. PMID:24499644
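
The sensitivity comparison above follows directly from counts against the combined 'gold' standard. The positive counts below are back-calculated from the reported percentages and the 38 gold-standard positives, so they are illustrative rather than quoted from the paper.

```python
def sensitivity(test_positive, gold_positive):
    """Fraction of gold-standard positives detected by the test method."""
    return test_positive / gold_positive

GOLD_POSITIVE = 38                            # positives by six Kato-Katz + two FECT
six_kato_katz = sensitivity(35, GOLD_POSITIVE)  # 35/38 ~ 92.1% (back-calculated)
two_fect      = sensitivity(17, GOLD_POSITIVE)  # 17/38 ~ 44.7% (back-calculated)
```

The same arithmetic explains why a single Kato-Katz or single FECT, with even fewer detected positives, overestimates cure rates.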

  20. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only at the lowest angles of incidence was the performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value of the precision by means of a confidence interval of the specific measurement. PMID:27044032
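
The ellipse method mentioned above rests on a classical geometric relation: for an idealized elliptical defect, the sine of the angle of incidence equals the ratio of the defect's minor to major axis. A sketch with invented dimensions (real defects deviate from this ideal, which is one source of the errors the study quantifies):

```python
import math

def ellipse_method_angle(minor_axis, major_axis):
    """Estimate the angle of incidence (degrees, measured from the target
    surface) from an elliptical bullet defect: sin(theta) = minor / major."""
    return math.degrees(math.asin(minor_axis / major_axis))

# A defect twice as long as it is wide implies a 30-degree angle of incidence
angle = ellipse_method_angle(9.0, 18.0)
```

Bullet deformation and target-material behavior shift real measurements away from this relation, which is why empirically derived corrections for systematic error are needed.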

  1. Positional Accuracy Assessment of GoogleEarth in Riyadh

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Algarni, Dafer

    2014-06-01

    Google Earth is a virtual globe, map and geographical information program operated by Google. It maps the Earth by the superimposition of images obtained from satellite imagery, aerial photography and GIS data onto a 3D globe. With millions of users all around the globe, GoogleEarth® has become the ultimate source of spatial data and information for private and public decision-support systems, besides many types and forms of social interaction. Many users, mostly in developing countries, are also using it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of GoogleEarth® imagery in Riyadh, capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the GoogleEarth imagery is 2.18 m and 1.51 m for the horizontal and height coordinates, respectively.

  2. The accuracy of breast volume measurement methods: A systematic review.

    PubMed

    Choppin, S B; Wheat, J S; Gee, M; Goyal, A

    2016-08-01

    Breast volume is a key metric in breast surgery and there are a number of different methods which measure it. However, a lack of knowledge regarding a method's accuracy and comparability has made it difficult to establish a clinical standard. We have performed a systematic review of the literature to examine the various techniques for measurement of breast volume and to assess their accuracy and usefulness in clinical practice. Each of the fifteen studies we identified had more than ten live participants and assessed volume measurement accuracy using a gold standard based on the volume, or mass, of a mastectomy specimen. Many of the studies from this review report large (>200 ml) uncertainty in breast volume and many fail to assess measurement accuracy using appropriate statistical tools. Of the methods assessed, MRI scanning consistently demonstrated the highest accuracy, with three studies reporting errors lower than 10% for small (250 ml), medium (500 ml) and large (1000 ml) breasts. However, as MRI is a high-cost, non-routine assessment, other methods may be more appropriate. PMID:27288864

  3. Pulse oximetry: accuracy of methods of interpreting graphic summaries.

    PubMed

    Lafontaine, V M; Ducharme, F M; Brouillette, R T

    1996-02-01

    Although pulse oximetry has been used to determine the frequency and extent of hemoglobin desaturation during sleep, movement artifact can result in overestimation of desaturation unless valid desaturations can be identified accurately. Therefore, we determined the accuracy of pulmonologists' and technicians' interpretations of graphic displays of desaturation events, derived an objective method for interpreting such events, and validated the method on an independent data set. Eighty-seven randomly selected desaturation events were classified as valid (58) or artifactual (29) based on cardiorespiratory recordings (gold standard) that included pulse waveform and respiratory inductive plethysmography signals. Using oximetry recordings (test method), nine pediatric pulmonologists and three respiratory technicians ("readers") averaged 50 +/- 11% (SD) accuracy for event classification. A single variable, the pulse amplitude modulation range (PAMR) prior to desaturation, performed better in discriminating valid from artifactual events with 76% accuracy (P < 0.05). Following a seminar on oximetry and the use of the PAMR method, the readers' accuracy increased to 73 +/- 2%. In an independent set of 73 apparent desaturation events (74% valid, 26% artifactual), the PAMR method of assessing oximetry graphs yielded 82% accuracy; transcutaneous oxygen tension records confirmed a drop in oxygenation during 49 of 54 (89%) valid desaturation events. In conclusion, the most accurate method (91%) of assessing desaturation events requires recording of the pulse and respiratory waveforms. However, a practical, easy-to-use method of interpreting pulse oximetry recordings achieved 76-82% accuracy, which constitutes a significant improvement from previous subjective interpretations.

  4. Accuracy assessment of a marker-free method for registration of CT and stereo images applied in image-guided implantology: a phantom study.

    PubMed

    Mohagheghi, Saeed; Ahmadian, Alireza; Yaghoobee, Siamak

    2014-12-01

    To assess the accuracy of a proposed marker-free registration method as opposed to the conventional marker-based method using an image-guided dental system, and to investigate the best configurations of anatomical landmarks for various surgical fields in a phantom study, a CT-compatible dental phantom containing implanted targets was used. Two marker-free registration methods were evaluated, first using dental anatomical landmarks and second, using a reference marker tool. Six implanted markers, distributed in the inner space of the phantom, were used as the targets; the values of target registration error (TRE) for each target were measured and compared with the marker-based method. Then, the effects of different landmark configurations on TRE values, measured using the Parsiss IV Guided Navigation system (Parsiss, Tehran, Iran), were investigated to find the best landmark arrangement for reaching the minimum registration error in each target region. It was shown that marker-free registration can be as precise as the marker-based method. This has a great impact on image-guided implantology systems, whereby the drawbacks of fiducial markers for the patient and surgeon are removed. It was also shown that smaller values of TRE could be achieved by using appropriate landmark configurations and moving the center of the landmark set closer to the surgery target. Other common factors would not necessarily decrease the TRE value, so the conventional rules accepted in the clinical community about the ways to reduce TRE should be adapted to the selected field of dental surgery.
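
TRE as used above is simply the residual distance at target points not used to drive the registration. A sketch with an invented transform and invented target positions:

```python
import numpy as np

def target_registration_error(transform, source_targets, true_targets):
    """TRE: distance between each target after applying the estimated
    registration transform and its known true position (same units as input)."""
    mapped = np.array([transform(p) for p in np.asarray(source_targets, float)])
    return np.linalg.norm(mapped - np.asarray(true_targets, float), axis=1)

# Hypothetical registration whose only error is a 0.5 mm offset along x
transform = lambda p: p + np.array([0.5, 0.0, 0.0])
tre = target_registration_error(transform,
                                [[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]],
                                [[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]])
```

In the study, these distances were measured at the six implanted markers for each landmark configuration.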

  5. Accuracy Assessment of Altimeter Derived Geostrophic Velocities

    NASA Astrophysics Data System (ADS)

    Leben, R. R.; Powell, B. S.; Born, G. H.; Guinasso, N. L.

    2002-12-01

    Along track sea surface height anomaly gradients are proportional to cross track geostrophic velocity anomalies, allowing satellite altimetry to provide much needed satellite observations of changes in the geostrophic component of surface ocean currents. Often, surface height gradients are computed from altimeter data archives that have been corrected to give the most accurate absolute sea level, a practice that may unnecessarily increase the error in the cross track velocity anomalies and thereby require excessive smoothing to mitigate noise. Because differentiation along track acts as a high-pass filter, many of the path length corrections applied to altimeter data for absolute height accuracy are unnecessary for the corresponding gradient calculations. We report on a study to investigate appropriate altimetric corrections and processing techniques for improving geostrophic velocity accuracy. Accuracy is assessed by comparing cross track current measurements from two moorings placed along the descending TOPEX/POSEIDON ground track number 52 in the Gulf of Mexico to the corresponding altimeter velocity estimates. The buoys are deployed and maintained by the Texas Automated Buoy System (TABS) under Interagency Contracts with Texas A&M University. The buoys telemeter observations in near real-time via satellite to the TABS station located at the Geochemical and Environmental Research Group (GERG) at Texas A&M. Buoy M is located in shelf waters of 57 m depth, with a second buoy, Buoy N, 38 km away on the shelf break at 105 m depth. Buoy N has been operational since the beginning of 2002 and has a current meter at 2 m depth providing in situ measurements of surface velocities coincident with Jason and TOPEX/POSEIDON altimeter overflights. This allows one of the first detailed comparisons of shallow water near surface current meter time series to coincident altimetry.
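
The geostrophy underlying the comparison above relates the along-track height gradient to the cross-track velocity anomaly via v = (g/f) dη/dx, with f the Coriolis parameter. A sketch with illustrative numbers (sign convention ignored):

```python
import math

def cross_track_geostrophic_velocity(d_eta, d_x, lat_deg):
    """Cross-track geostrophic velocity anomaly (m/s) from an along-track
    sea surface height change d_eta (m) over distance d_x (m):
    v = (g / f) * d_eta/d_x, with f = 2 * Omega * sin(latitude)."""
    g = 9.81          # gravitational acceleration, m/s^2
    omega = 7.2921e-5 # Earth's rotation rate, rad/s
    f = 2.0 * omega * math.sin(math.radians(lat_deg))
    return (g / f) * (d_eta / d_x)

# Illustrative: 5 cm of height change over 50 km at 28 N (Gulf of Mexico shelf)
v = cross_track_geostrophic_velocity(0.05, 50e3, 28.0)   # ~0.14 m/s
```

Because the gradient d_eta/d_x is what matters, path-length corrections that only shift absolute height cancel out, which is the point made in the abstract.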

  6. Accuracy Assessment in Structure from Motion 3D Reconstruction from UAV-Born Images: the Influence of the Data Processing Methods

    NASA Astrophysics Data System (ADS)

    Caroti, G.; Martínez-Espejo Zaragoza, I.; Piemonte, A.

    2015-08-01

    The evolution of Structure from Motion (SfM) techniques and their integration with the established procedures of classic stereoscopic photogrammetric survey have provided a very effective tool for the production of three-dimensional textured models. Such models are not only aesthetically pleasing but can also contain metric information, the quality of which depends on both survey type and applied processing methodologies. An open research topic in this area refers to checking attainable accuracy levels. The knowledge of such accuracy is essential, especially in the integration of models obtained through SfM with models derived from different sensors or methods (laser scanning, classic photogrammetry, ...). Accuracy checks may be conducted by either comparing SfM models against a reference model or measuring the deviation of control points identified on the models and measured with classic topographic instrumentation and methodologies. This paper presents an analysis of attainable accuracy levels according to different approaches of survey and data processing. For this purpose, a survey of the Church of San Miniato in Marcianella (Pisa, Italy) has been used. The dataset is an integration of laser scanning with terrestrial and UAV-borne photogrammetric surveys; in addition, a high precision topographic network was established for the specific purpose. In particular, laser scanning has been used for the interior and the exterior of the church, with the exclusion of the roof, while UAVs have been used for the photogrammetric survey of both the roof, with horizontal strips, and the façade, with vertical strips.

  7. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. After applying an electronic literature search we found 661 articles. After reviewing 139 articles, the authors chose 52 articles for full-text evaluation; 24 studies involved accuracy measurements. Fourteen of our selected references were clinical and ten were in vitro (model or cadaver). Variance analysis (Tukey's post-hoc test; p < 0.05) was conducted to summarize the selected publications. Across 2819 results, the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm, while the mean angular deviation was 3.96 degrees. A significant difference could be observed between the two methods of implant placement (partially and fully guided sequence) in terms of deviation at the entry point, apex and angular deviation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and new developments will further improve the accuracy of guided implant placement. In the interest of being able to draw dependable conclusions, and for further evaluation of the parameters used for accuracy measurements, randomized, controlled single- or multi-centered clinical trials are necessary. PMID:27544966

  9. Accuracy assessment of GPS satellite orbits

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

    GPS orbit accuracy is examined using several evaluation procedures. The existence of unmodeled effects which correlate with the eclipsing of the sun is shown. The ability to obtain geodetic results with an accuracy of 1-2 parts in 10^8 or better has not diminished.

  10. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL

  11. Accuracy Assessment for AG500, Electromagnetic Articulograph

    ERIC Educational Resources Information Center

    Yunusova, Yana; Green, Jordan R.; Mefferd, Antje

    2009-01-01

    Purpose: The goal of this article was to evaluate the accuracy and reliability of the AG500 (Carstens Medizinelectronik, Lenglern, Germany), an electromagnetic device developed recently to register articulatory movements in three dimensions. This technology seems to have unprecedented capabilities to provide rich information about time-varying…

  12. Accuracy Of Stereometry In Assessing Orthognathic Surgery

    NASA Astrophysics Data System (ADS)

    King, Geoffrey E.; Bays, R. A.

    1983-07-01

    An X-ray stereometric technique has been developed for the determination of 3-dimensional coordinates of spherical metallic markers previously implanted in monkey skulls. The accuracy of the technique is better than 0.5mm. and uses readily available demountable X-ray equipment. The technique is used to study the effects and stability of experimental orthognathic surgery.

  13. Classification, change-detection and accuracy assessment: Toward fuller automation

    NASA Astrophysics Data System (ADS)

    Podger, Nancy E.

    This research aims to automate methods for conducting change detection studies using remotely sensed images. Five major objectives were tested on two study sites, one encompassing Madison, Wisconsin, and the other Fort Hood, Texas. (Objective 1) Enhance accuracy assessments by estimating standard errors using bootstrap analysis. Bootstrap estimates of the standard errors were found to be comparable to parametric statistical estimates. Also, results show that bootstrapping can be used to evaluate the consistency of a classification process. (Objective 2) Automate the guided clustering classifier. This research shows that the guided clustering classification process can be automated while maintaining highly accurate results. Three different evaluation methods were used. (Evaluation 1) Appraised the consistency of 25 classifications produced from the automated system. The classifications differed from one another by only two to four percent. (Evaluation 2) Compared accuracies produced by the automated system to classification accuracies generated following a manual guided clustering protocol. Results: The automated system produced higher overall accuracies in 50 percent of the tests and was comparable for all but one of the remaining tests. (Evaluation 3) Assessed the time and effort required to produce accurate classifications. Results: The automated system produced classifications in less time and with less effort than the manual 'protocol' method. (Objective 3) Built a flexible, interactive software tool to aid in producing binary change masks. (Objective 4) Reduced by automation the amount of training data needed to classify the second image of a two-time-period change detection project. Locations of the training sites in 'unchanged' areas employed to classify the first image were used to identify sites where spectral information was automatically extracted from the second image. 
Results: The automatically generated training data produces classification accuracies
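
    The bootstrap assessment of classification accuracy described in Objective 1 can be sketched as follows (hypothetical reference sample; the resampled statistic is the overall accuracy):

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_accuracy_se(correct, n_boot=2000):
    """correct: 0/1 array, 1 where map and reference labels agree.

    Returns (overall accuracy, bootstrap standard error of that accuracy).
    """
    correct = np.asarray(correct)
    n = correct.size
    accs = np.empty(n_boot)
    for b in range(n_boot):
        # resample reference pixels with replacement, recompute accuracy
        accs[b] = correct[rng.integers(0, n, n)].mean()
    return correct.mean(), accs.std(ddof=1)

# Toy reference sample: 500 pixels, ~85% classified correctly
sample = rng.random(500) < 0.85
acc, se = bootstrap_accuracy_se(sample)
```

For a simple proportion the bootstrap SE should track the binomial formula sqrt(p(1-p)/n); its value is in handling statistics (e.g. kappa) with no convenient closed form.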

  14. Assessing accuracy of an electronic provincial medication repository

    PubMed Central

    2012-01-01

    Background Jurisdictional drug information systems are being implemented in many regions around the world. British Columbia, Canada, has had a provincial medication dispensing record system, PharmaNet, since 1995. Little is known about how accurately PharmaNet reflects actual medication usage. Methods This prospective, multi-centre study compared pharmacist-collected Best Possible Medication Histories (BPMH) to PharmaNet profiles to assess the accuracy of the PharmaNet profiles for patients receiving a BPMH as part of clinical care. A review panel examined the anonymized BPMHs and discrepancies to estimate the clinical significance of discrepancies. Results 16% of medication profiles were accurate, with 48% of the discrepant profiles considered potentially clinically significant by the clinical review panel. Cardiac medications tended to be more accurate (e.g. ramipril was accurate >90% of the time), while insulin, warfarin, salbutamol and pain relief medications were often inaccurate (80–85% of the time). 1215 sequential BPMHs were collected and reviewed for this study. Conclusions The PharmaNet medication repository has a low accuracy and should be used in conjunction with other sources for medication histories for clinical or research purposes. This finding is consistent with other, smaller medication repository accuracy studies in other jurisdictions. Our study highlights specific medications that tend to be lower in accuracy. PMID:22621690

  15. Accuracy assessment of fluoroscopy-transesophageal echocardiography registration

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.

    2011-03-01

    This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5mm, which is within the clinical accuracy requirements of 5mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.
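
    The registration accuracy reported above is an RMS error over fiducial targets. A minimal sketch of that metric (synthetic point sets in millimetres, not the study's data):

```python
import numpy as np

def rms_error(pts_a, pts_b):
    """RMS of the point-to-point distances between two matched 3D point sets."""
    d = np.linalg.norm(np.asarray(pts_a) - np.asarray(pts_b), axis=1)
    return np.sqrt(np.mean(d**2))

# Fiducial positions (mm) and their registered counterparts, offset by 0.5 mm in x
a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
b = a + np.array([0.5, 0.0, 0.0])
rms = rms_error(a, b)   # uniform 0.5 mm offset -> RMS of 0.5 mm
```

An RMS under the study's 1.5 mm result would sit comfortably inside the stated 5 mm clinical requirement.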

  16. Assessing the accuracy of different simplified frictional rolling contact algorithms

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Iwnicki, S. D.; Xie, G.; Shackleton, P.

    2012-01-01

    This paper presents an approach for assessing the accuracy of different frictional rolling contact theories. The main characteristic of the approach is that it takes a statistically oriented view. This yields a better insight into the behaviour of the methods in diverse circumstances (varying contact patch ellipticities, mixed longitudinal, lateral and spin creepages) than is obtained when only a small number of (basic) circumstances are used in the comparison. The range of contact parameters that occur for realistic vehicles and tracks are assessed using simulations with the Vampire vehicle system dynamics (VSD) package. This shows that larger values for the spin creepage occur rather frequently. Based on this, our approach is applied to typical cases for which railway VSD packages are used. The results show that particularly the USETAB approach but also FASTSIM give considerably better results than the linear theory, Vermeulen-Johnson, Shen-Hedrick-Elkins and Polach methods, when compared with the 'complete theory' of the CONTACT program.

  17. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.
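
    The overall and user's accuracies reported above come from an error (confusion) matrix over the stratified sample. A small sketch with hypothetical counts (rows = map class, columns = reference class; not NLCD's actual matrix):

```python
import numpy as np

cm = np.array([[86,  4,  2],    # mapped as water
               [ 5, 70,  9],    # mapped as forest
               [ 3,  8, 63]])   # mapped as cropland

overall = np.trace(cm) / cm.sum()           # agreement on the diagonal
users = np.diag(cm) / cm.sum(axis=1)        # per map class (commission errors)
producers = np.diag(cm) / cm.sum(axis=0)    # per reference class (omission errors)
```

In a design-based assessment like NLCD's, these cell counts would additionally be weighted by the stratum inclusion probabilities before computing the accuracies.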

  18. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive (independent, well-defined test points must be collected), but quantitative analysis of relative positional error is feasible.

  19. Contemporary flow meters: an assessment of their accuracy and reliability.

    PubMed

    Christmas, T J; Chapple, C R; Rickards, D; Milroy, E J; Turner-Warwick, R T

    1989-05-01

    The accuracy, reliability and cost effectiveness of 5 currently marketed flow meters have been assessed. The mechanics of each meter are briefly described in relation to its accuracy and robustness. The merits and faults of the meters are discussed, and the important features of flow measurements that need to be taken into account when making diagnostic interpretations are emphasised.

  20. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly among the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, its limitations haven’t been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented Gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions including 2 minutes motion trials (2MT) and 12 minutes multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparison of the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for initial Inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and with time can be attributed in part to the dynamic estimation error, but also and foremost, to the ability of AHRS units to locate the same Inertial frame. Conclusions Mean accuracies obtained under the Gimbal table sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvement in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their
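
    Relative accuracy in this study is the drift in relative orientation between AHRS modules. As a hedged sketch with orientations represented as unit quaternions (toy values, not the study's data), the relative rotation angle between two modules is 2·arccos(|⟨q1, q2⟩|):

```python
import numpy as np

def relative_angle_deg(q1, q2):
    """Angle (degrees) of the rotation taking orientation q1 to q2 (w, x, y, z)."""
    q1 = np.asarray(q1, dtype=float) / np.linalg.norm(q1)
    q2 = np.asarray(q2, dtype=float) / np.linalg.norm(q2)
    dot = np.clip(abs(np.dot(q1, q2)), 0.0, 1.0)   # |<q1,q2>|, sign-invariant
    return np.degrees(2.0 * np.arccos(dot))

qa = [1.0, 0.0, 0.0, 0.0]                          # identity orientation
qb = [np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)]  # 90-degree z rotation
ang = relative_angle_deg(qa, qb)
```

Tracking this angle between two rigidly co-mounted modules over a trial, where it should stay constant, is one way to expose the Inertial-frame estimation drift the abstract describes.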

  1. Development and optimization of a method for the determination of Cylindrospermopsin from strains of Aphanizomenon cultures: intra-laboratory assessment of its accuracy by using validation standards.

    PubMed

    Guzmán-Guillén, Remedios; Prieto Ortega, Ana I; Moreno, I; González, Gustavo; Soria-Díaz, M Eugenia; Vasconcelos, Vitor; Cameán, Ana M

    2012-10-15

    The occurrence of cyanobacterial blooms in aquatic environments is increasing in many regions of the world due to progressive eutrophication of water bodies. Because of the production of toxins such as Cylindrospermopsin (CYN), contamination of water with cyanobacteria is a serious health problem around the world. It is therefore necessary to develop and validate analytical methods that allow CYN to be quantified in real samples in order to alert the public to this toxin. In this work, an analytical method has been developed and optimized for the determination of CYN from Aphanizomenon ovalisporum cultures. The analytical procedure is based on solvent extraction followed by a purification step with graphitized cartridges and CYN quantification by LC-MS/MS. The extraction and purification steps were optimized using a two-level full factorial design with replications. A suitable and practical procedure for assessing the trueness and precision of the proposed method has been applied by using validation standards. The method has been suitably validated: the regression equation was calculated from standards prepared in extracts from lyophilized M. aeruginosa PCC7820 (r(2)≥0.9999) and the linear range covered is from 5 to 500 μg CYN/L, equivalent to 0.18-18.00 μg CYN/g dry weight lyophilized cells. Limits of detection and quantification were 0.04 and 0.15 μg CYN/g, respectively; the recovery range (%) oscillated between 83 and 94% and intermediate precision (RSD %) values ranged from 5.6 to 11.0%. Moreover, the present method was shown to be robust for the three factors considered: the batch of the graphitized carbon cartridges, the flow rate of the sample through the cartridge, and the final re-dissolved water volume after SPE treatment, which permits its validation. The validated method has been applied to different lyophilized cultures of A. ovalisporum (LEGE X-001) to evaluate CYN content. This procedure can be used for determining CYN in lyophilized natural blooms samples
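
    Two of the validation figures reported above, recovery (%) and intermediate precision (RSD %), can be sketched with made-up replicate values (the measurements below are illustrative, not the study's):

```python
import numpy as np

def recovery_pct(measured, spiked):
    """Mean measured concentration as a percentage of the spiked level."""
    return 100.0 * np.mean(measured) / spiked

def rsd_pct(values):
    """Relative standard deviation (%), sample standard deviation over the mean."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Five replicate determinations (ug CYN/g) of a sample spiked at 5.0 ug CYN/g
replicates = [4.4, 4.7, 4.5, 4.8, 4.6]
rec = recovery_pct(replicates, 5.0)   # 92.0%, inside the reported 83-94% range
prec = rsd_pct(replicates)            # ~3.4% RSD
```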

  2. Analyses and Comparison of Accuracy of Different Genotype Imputation Methods

    PubMed Central

    Pei, Yu-Fang; Li, Jian; Zhang, Lei; Papasian, Christopher J.; Deng, Hong-Wen

    2008-01-01

    The power of genetic association analyses is often compromised by missing genotypic data which contributes to lack of significant findings, e.g., in in silico replication studies. One solution is to impute untyped SNPs from typed flanking markers, based on known linkage disequilibrium (LD) relationships. Several imputation methods are available and their usefulness in association studies has been demonstrated, but factors affecting their relative performance in accuracy have not been systematically investigated. Therefore, we investigated and compared the performance of five popular genotype imputation methods, MACH, IMPUTE, fastPHASE, PLINK and Beagle, to assess and compare the effects of factors that affect imputation accuracy rates (ARs). Our results showed that a stronger LD and a lower MAF for an untyped marker produced better ARs for all the five methods. We also observed that a greater number of haplotypes in the reference sample resulted in higher ARs for MACH, IMPUTE, PLINK and Beagle, but had little influence on the ARs for fastPHASE. In general, MACH and IMPUTE produced similar results and these two methods consistently outperformed fastPHASE, PLINK and Beagle. Our study is helpful in guiding application of imputation methods in association analyses when genotype data are missing. PMID:18958166
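
    The accuracy rate (AR) used to compare the imputation methods can be sketched as the fraction of masked genotypes that are recovered correctly (toy genotype vectors below, not the study's data):

```python
import numpy as np

def accuracy_rate(true_genotypes, imputed_genotypes, masked):
    """Genotypes coded 0/1/2 (minor-allele counts); `masked` flags hidden sites."""
    t = np.asarray(true_genotypes)[masked]
    i = np.asarray(imputed_genotypes)[masked]
    return np.mean(t == i)

truth   = np.array([0, 1, 2, 1, 0, 2, 1, 1])
imputed = np.array([0, 1, 2, 2, 0, 2, 1, 0])
mask    = np.array([True] * 8)        # pretend all eight sites were untyped
ar = accuracy_rate(truth, imputed, mask)   # 6 of 8 recovered correctly
```

In practice each method (MACH, IMPUTE, etc.) would be scored this way on the same masked sites, letting factors such as LD strength and MAF be varied systematically.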

  3. Assessment of optical localizer accuracy for computer aided surgery systems.

    PubMed

    Elfring, Robert; de la Fuente, Matías; Radermacher, Klaus

    2010-01-01

    The technology for localization of surgical tools with respect to the patient's reference coordinate system in three to six degrees of freedom is one of the key components in computer aided surgery. Several tracking methods are available, of which optical tracking is the most widespread in clinical use. Optical tracking technology has proven to be a reliable method for intra-operative position and orientation acquisition in many clinical applications; however, the accuracy of such localizers is still a topic of discussion. In this paper, the accuracy of three optical localizer systems, the NDI Polaris P4, the NDI Polaris Spectra (in active and passive mode) and the Stryker Navigation System II camera, is assessed and compared critically. Static tests revealed that only the Polaris P4 shows significant warm-up behavior, with a significant shift of accuracy being observed within 42 minutes of being switched on. Furthermore, the intrinsic localizer accuracy was determined for single markers as well as for tools using a volumetric measurement protocol on a coordinate measurement machine. To determine the relative distance error within the measurement volume, the Length Measurement Error (LME) was determined at 35 test lengths. As accuracy depends strongly on the marker configuration employed, the error to be expected in typical clinical setups was estimated in a simulation for different tool configurations. The two active localizer systems, the Stryker Navigation System II camera and the Polaris Spectra (active mode), showed the best results, with trueness values (mean +/- standard deviation) of 0.058 +/- 0.033 mm and 0.089 +/- 0.061 mm, respectively. The Polaris Spectra (passive mode) showed a trueness of 0.170 +/- 0.090 mm, and the Polaris P4 showed the lowest trueness at 0.272 +/- 0.394 mm with a higher number of outliers than for the other cameras. 
The simulation of the different tool configurations in a typical clinical setup revealed that the tracking error can

  4. Accuracy Assessment in rainfall upscaling in multiple time scales

    NASA Astrophysics Data System (ADS)

    Yu, H.; Wang, C.; Lin, Y.

    2008-12-01

    Long-term hydrologic parameters, e.g. annual precipitations, are usually used to represent the general hydrologic characteristics of a region. Recently, analysis of the impact of climate change on hydrological patterns has relied primarily on measurements and/or estimations in long time scales, e.g. a year. Given the general prevalence of short-term measurements, it is therefore important to understand the accuracy of upscaling for long-term estimations of hydrologic parameters. This study applies a spatiotemporal geostatistical method to analyze and discuss the accuracy of precipitation upscaling in Taiwan under different time scales, and also quantifies the uncertainty in the upscaled long-term precipitations. In this study, two space-time upscaling approaches developed with the Bayesian Maximum Entropy (BME) method are presented: 1) UM1, data aggregation followed by BME estimation, and 2) UM2, BME estimation followed by aggregation. An investigation and comparison are also implemented to assess the performance of the rainfall estimations in multiple time scales in Taiwan by the two upscaling approaches. Keywords: upscaling, geostatistics, BME, uncertainty analysis

  5. Accuracies of diagnostic methods for acute appendicitis.

    PubMed

    Park, Jong Seob; Jeong, Jin Ho; Lee, Jong In; Lee, Jong Hoon; Park, Jea Kun; Moon, Hyoun Jong

    2013-01-01

    The objectives were to evaluate the effectiveness of ultrasonography, computed tomography, and physical examination for diagnosing acute appendicitis by analyzing their accuracies and negative appendectomy rates in a clinical rather than research setting. A total of 2763 subjects were enrolled. Sensitivity, specificity, positive predictive value, negative predictive value, and negative appendectomy rate were calculated for ultrasonography, computed tomography, and physical examination. Confirmed positive acute appendicitis was defined based on pathologic findings, and confirmed negative acute appendicitis was defined by pathologic findings as well as by clinical follow-up. Sensitivity, specificity, positive predictive value, and negative predictive value for ultrasonography were 99.1, 91.7, 96.5, and 97.7 per cent, respectively; for computed tomography, 96.4, 95.4, 95.6, and 96.3 per cent, respectively; and for physical examination, 99.0, 76.1, 88.1, and 97.6 per cent, respectively. The negative appendectomy rate was 5.8 per cent (5.2% in the ultrasonography group, 4.3% in the computed tomography group, and 12.2% in the physical examination group). Ultrasonography/computed tomography should be performed routinely for diagnosis of acute appendicitis. However, in view of its advantages, ultrasonography should be performed first. Also, if the result of a physical examination is negative, imaging studies after the physical examination may be unnecessary.
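
    The four diagnostic measures above follow directly from a 2x2 table of test results against confirmed disease status. A sketch with hypothetical counts chosen only to roughly mirror the reported ultrasonography figures:

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Standard 2x2-table accuracy measures for a diagnostic test."""
    sens = tp / (tp + fn)   # sensitivity: positives detected among the diseased
    spec = tn / (tn + fp)   # specificity: negatives among the disease-free
    ppv  = tp / (tp + fp)   # positive predictive value
    npv  = tn / (tn + fn)   # negative predictive value
    return sens, spec, ppv, npv

# Hypothetical counts: true/false positives, false/true negatives
sens, spec, ppv, npv = diagnostic_measures(tp=990, fp=35, fn=9, tn=385)
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence, which is one reason clinical-setting results can differ from research-setting ones.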

  6. Accuracy Analysis of the PIC Method

    NASA Astrophysics Data System (ADS)

    Verboncoeur, J. P.; Cartwright, K. L.

    2000-10-01

    The discretization errors for many steps of the classical Particle-in-Cell (PIC) model have been well-studied (C. K. Birdsall and A. B. Langdon, Plasma Physics via Computer Simulation, McGraw-Hill, New York, NY (1985).) (R. W. Hockney and J. W. Eastwood, Computer Simulation Using Particles, McGraw-Hill, New York, NY (1981).). In this work, the errors in the interpolation algorithms, which provide the connection between continuum particles and discrete fields, are described in greater detail. In addition, the coupling of errors between steps in the method is derived. The analysis is carried out for both electrostatic and electromagnetic PIC models, and the results are demonstrated using a bounded one-dimensional electrostatic PIC code (J. P. Verboncoeur et al., J. Comput. Phys. 104, 321-328 (1993).), as well as a bounded two-dimensional electromagnetic PIC code (J. P. Verboncoeur et al., Comp. Phys. Comm. 87, 199-211 (1995).).
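
    A minimal sketch of the particle-to-grid interpolation step whose errors such analyses examine: first-order (cloud-in-cell) charge deposition in one dimension, a simplified illustration rather than the paper's actual formulation:

```python
import numpy as np

def deposit_cic(x_particles, q, dx, n_nodes):
    """Deposit charge q of each particle onto its two nearest grid nodes,
    weighted linearly by distance (first-order / cloud-in-cell weighting)."""
    rho = np.zeros(n_nodes)
    for x in x_particles:
        j = int(x // dx)              # index of the node to the particle's left
        w = (x - j * dx) / dx         # fractional distance from that node
        rho[j] += q * (1.0 - w)
        rho[j + 1] += q * w
    return rho

# One unit-charge particle a quarter cell from node 0: weights split 0.75 / 0.25
rho = deposit_cic([0.25], q=1.0, dx=1.0, n_nodes=3)
```

The same weights are reused to gather fields back to the particle, and the smoothness of this interpolation (zeroth vs first order) controls the aliasing and self-force errors studied in the paper.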

  7. Assessing the Accuracy of the Precise Point Positioning Technique

    NASA Astrophysics Data System (ADS)

    Bisnath, S. B.; Collins, P.; Seepersad, G.

    2012-12-01

    The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single-receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvements. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in RMS positioning errors of a few millimetres in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity resolution processing is also carried out, producing slight improvements over the float solution results. These results are categorised into quality classes in order to analyse the root error causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. The definition of a PPP error budget is a complex task even with the resulting numerical assessment because, unlike the epoch-by-epoch processing in the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
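
    The first two steps of the error-budget procedure above can be sketched as follows; all error magnitudes and DOP values below are illustrative assumptions, not the study's numbers:

```python
import numpy as np

# Step 1: assumed per-source ranging errors (metres) after PPP corrections
range_errors_m = {
    "orbit": 0.02,
    "clock": 0.03,
    "troposphere": 0.05,
    "ionosphere_residual": 0.02,
    "multipath": 0.10,
    "noise": 0.05,
}

# Root-sum-square the sources into a user equivalent range error (UERE)
uere = np.sqrt(sum(e**2 for e in range_errors_m.values()))

# Step 2: scale ranging error to position error by dilution of precision
hdop, vdop = 1.2, 1.9            # assumed geometry factors
horizontal_err = hdop * uere     # metres, one sigma
vertical_err = vdop * uere
```

Step 3, propagating these through the PPP filter, has no such closed form, which is why the abstract calls the full budget complex.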

  8. ASSESSING ACCURACY OF NET CHANGE DERIVED FROM LAND COVER MAPS

    EPA Science Inventory

    Net change derived from land-cover maps provides important descriptive information for environmental monitoring and is often used as an input or explanatory variable in environmental models. The sampling design and analysis for assessing net change accuracy differ from traditio...

  9. Estimating Classification Consistency and Accuracy for Cognitive Diagnostic Assessment

    ERIC Educational Resources Information Center

    Cui, Ying; Gierl, Mark J.; Chang, Hua-Hua

    2012-01-01

    This article introduces procedures for the computation and asymptotic statistical inference for classification consistency and accuracy indices specifically designed for cognitive diagnostic assessments. The new classification indices can be used as important indicators of the reliability and validity of classification results produced by…

  10. Bringing everyday mind reading into everyday life: assessing empathic accuracy with daily diary data.

    PubMed

    Howland, Maryhope; Rafaeli, Eshkol

    2010-10-01

    Individual differences in empathic accuracy (EA) can be assessed using daily diary methods as a complement to more commonly used lab-based behavioral observations. Using electronic dyadic diaries, we distinguished among elements of EA (i.e., accuracy in levels, scatter, and pattern, regarding both positive and negative moods) and examined them as phenomena at both the day and the person level. In a 3-week diary study of cohabiting partners, we found support for differentiating these elements. The proposed indices reflect differing aspects of accuracy, with considerable similarity among same-valenced accuracy indices. Overall there was greater accuracy regarding negative target moods than positive target moods. These methods and findings take the phenomenon of "everyday mindreading" (Ickes, 2003) into everyday life. We conclude by discussing empathic accuracies as a family of capacities for, or tendencies toward, accurate interpersonal sensitivity. Members of this family may have distinct associations with the perceiver's, target's, and relationship's well-being.

  11. The Attribute Accuracy Assessment of Land Cover Data in the National Geographic Conditions Survey

    NASA Astrophysics Data System (ADS)

    Ji, X.; Niu, X.

    2014-04-01

    With the widespread national survey of geographic conditions, object-based data has become the most common data organization pattern in land cover research. Assessing the accuracy of object-based land cover data bears on many stages of data production, such as the efficiency of in-house production and the quality of the final land cover data. There is therefore great demand for accuracy assessment of object-based classification maps. Traditional accuracy assessment approaches in surveying and mapping were not designed for land cover data, so accuracy assessment methods from imagery classification must be employed. However, traditional pixel-based accuracy assessment methods are inadequate for these requirements. The measures we improve are based on the error matrix and use objects as sample units, because pixel sample units are not suitable for assessing the accuracy of object-based classification results. Compared with pixel samples, the uniformity of object samples changes. In order to make the indices derived from the error matrix reliable, we use the areas of the object samples as weights when establishing the error matrix of the object-based image classification map. We compare the results of two error matrices, one built from the number of object samples and one from their summed areas. The error matrix using the summed area of object samples proves to be an intuitive, useful technique for reflecting the actual accuracy of object-based imagery classification results.
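
An area-weighted error matrix of the kind described can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the class names and areas are hypothetical.

```python
from collections import defaultdict

def area_weighted_accuracy(samples):
    """samples: iterable of (mapped_class, reference_class, object_area).
    Builds an error matrix in which each object sample contributes its
    area rather than a count, and returns the matrix together with the
    overall (area-weighted) accuracy."""
    matrix = defaultdict(float)
    classes = set()
    for mapped, ref, area in samples:
        matrix[(mapped, ref)] += area
        classes.update((mapped, ref))
    total = sum(matrix.values())
    overall = sum(matrix[(c, c)] for c in classes) / total
    return dict(matrix), overall

# Hypothetical object samples: (mapped class, reference class, area in m^2)
samples = [("forest", "forest", 1200.0), ("forest", "water", 300.0),
           ("water", "water", 500.0)]
matrix, overall = area_weighted_accuracy(samples)  # overall = 1700/2000 = 0.85
```

A count-based matrix over the same samples would give 2/3 instead of 0.85, which is the difference the abstract's comparison is about.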

  12. Teacher Compliance and Accuracy in State Assessment of Student Motor Skill Performance

    ERIC Educational Resources Information Center

    Hall, Tina J.; Hicklin, Lori K.; French, Karen E.

    2015-01-01

    Purpose: The purpose of this study was to investigate teacher compliance with state mandated assessment protocols and teacher accuracy in assessing student motor skill performance. Method: Middle school teachers (N = 116) submitted eighth grade student motor skill performance data from 318 physical education classes to a trained monitoring…

  13. Bilingual Language Assessment: A Meta-Analysis of Diagnostic Accuracy

    ERIC Educational Resources Information Center

    Dollaghan, Christine A.; Horner, Elizabeth A.

    2011-01-01

    Purpose: To describe quality indicators for appraising studies of diagnostic accuracy and to report a meta-analysis of measures for diagnosing language impairment (LI) in bilingual Spanish-English U.S. children. Method: The authors searched electronically and by hand to locate peer-reviewed English-language publications meeting inclusion criteria;…

  14. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...

  15. An improved method for determining force balance calibration accuracy

    NASA Astrophysics Data System (ADS)

    Ferris, Alice T.

    The results of an improved statistical method used at Langley Research Center for determining and stating the accuracy of a force balance calibration are presented. The application of the method for initial loads, initial load determination, auxiliary loads, primary loads, and proof loads is described. The data analysis is briefly addressed.

  16. Standardized accuracy assessment of the calypso wireless transponder tracking system.

    PubMed

    Franz, A M; Schmitt, D; Seitel, A; Chatrasingh, M; Echner, G; Oelfke, U; Nill, S; Birkfellner, W; Maier-Hein, L

    2014-11-21

    Electromagnetic (EM) tracking allows localization of small EM sensors in a magnetic field of known geometry without line-of-sight. However, this technique requires a cable connection to the tracked object. A wireless alternative based on magnetic fields, referred to as transponder tracking, has been proposed by several authors. Although most transponder tracking systems are still at an early stage of development and not yet ready for clinical use, Varian Medical Systems Inc. (Palo Alto, California, USA) presented the Calypso system for tumor tracking in radiation therapy, which includes transponder technology. It has, however, so far neither been used for computer-assisted interventions (CAI) in general nor been assessed for accuracy in a standardized manner. In this study, we apply a standardized assessment protocol presented by Hummel et al (2005 Med. Phys. 32 2371-9) to the Calypso system for the first time. The results show that transponder tracking with the Calypso system provides precision and accuracy below 1 mm in ideal clinical environments, comparable with other EM tracking systems. As with other systems, the tracking accuracy was affected by metallic distortion, which led to errors of up to 3.2 mm. The potential of wireless transponder tracking for use in many future CAI applications can be regarded as extremely high.
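
The precision figure in a Hummel-style assessment is essentially the scatter of repeated static measurements about their mean; a minimal sketch of that index (not the published protocol's full procedure, and the readings are hypothetical):

```python
import math

def jitter_precision(positions):
    """RMS distance of repeated static 3D measurements from their mean
    position (a simplified Hummel-style precision index, in the same
    units as the input)."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    cz = sum(p[2] for p in positions) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
                         for x, y, z in positions) / n)

# Hypothetical repeated readings of one fixed transponder (mm)
readings = [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0), (-0.2, 0.0, 0.0)]
precision_mm = jitter_precision(readings)
```

Accuracy, by contrast, is assessed against known reference distances (e.g. a calibrated grid phantom), which is where the metal-induced 3.2 mm errors show up.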

  19. Accuracy Assessment of Digital Elevation Models Using GPS

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Talaat, Ashraf; Farrag, Farrag A.

    2008-01-01

    A Digital Elevation Model (DEM) is a digital representation of ground surface topography or terrain, with different accuracies required for different application fields. DEMs have been applied to a wide range of civil engineering and military planning tasks. A DEM is obtained using a number of techniques such as photogrammetry, digitizing, laser scanning, radar interferometry, classical surveying and GPS techniques. This paper presents an assessment study of DEMs generated using GPS (Stop&Go) and kinematic techniques, compared with classical surveying. The results show that a DEM generated from the (Stop&Go) GPS technique has the highest accuracy, with an RMS error of 9.70 cm. The RMS error of the DEM derived by kinematic GPS is 12.00 cm.
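
RMS errors of the kind quoted are computed from check-point residuals; a minimal sketch with hypothetical heights:

```python
import math

def dem_rmse(dem_heights, reference_heights):
    """Root-mean-square error of DEM heights against check points
    surveyed by a higher-accuracy technique (same units as input)."""
    residuals = [d - r for d, r in zip(dem_heights, reference_heights)]
    return math.sqrt(sum(e * e for e in residuals) / len(residuals))

# Hypothetical check points: DEM heights vs. classical-survey heights (m)
rmse_m = dem_rmse([102.10, 98.95, 100.07], [102.00, 99.00, 100.00])
```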

  20. Accuracy of activPAL Self-Attachment Methods

    ERIC Educational Resources Information Center

    Kringen, Nina L.; Healy, Genevieve N.; Winkler, Elisabeth A. H.; Clark, Bronwyn K.

    2016-01-01

    This study examined the accuracy of self-attachment of the activPAL activity monitor. A convenience sample of 50 participants self-attached the monitor after being presented with written material only (WMO) and then written and video (WV) instructions; and completed a questionnaire regarding the acceptability of the instructional methods.…

  1. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  2. Accuracy Assessment of a Uav-Based Landslide Monitoring System

    NASA Astrophysics Data System (ADS)

    Peppa, M. V.; Mills, J. P.; Moore, P.; Miller, P. E.; Chambers, J. E.

    2016-06-01

    Landslides are hazardous events with often disastrous consequences. Monitoring landslides with observations of high spatio-temporal resolution can help mitigate such hazards. Mini unmanned aerial vehicles (UAVs) complemented by structure-from-motion (SfM) photogrammetry and modern per-pixel image matching algorithms can deliver a time-series of landslide elevation models in an automated and inexpensive way. This research investigates the potential of a mini UAV, equipped with a Panasonic Lumix DMC-LX5 compact camera, to provide surface deformations at acceptable levels of accuracy for landslide assessment. The study adopts a self-calibrating bundle adjustment-SfM pipeline using ground control points (GCPs). It evaluates misalignment biases and unresolved systematic errors that are transferred through the SfM process into the derived elevation models. To cross-validate the research outputs, results are compared to benchmark observations obtained by standard surveying techniques. The data is collected with 6 cm ground sample distance (GSD) and is shown to achieve planimetric and vertical accuracy of a few centimetres at independent check points (ICPs). The co-registration error of the generated elevation models is also examined in areas of stable terrain. Through this error assessment, the study estimates that the vertical sensitivity to real terrain change of the tested landslide is equal to 9 cm.

  3. Accuracy of Revised and Traditional Parallel Analyses for Assessing Dimensionality with Binary Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy

    2016-01-01

    Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…

  4. Accuracy assessment of gridded precipitation datasets in the Himalayas

    NASA Astrophysics Data System (ADS)

    Khan, A.

    2015-12-01

    Accurate precipitation data are vital for hydro-climatic modelling and water resources assessments. Based on mass balance calculations and Turc-Budyko analysis, this study investigates the accuracy of twelve widely used gridded precipitation datasets for sub-basins in the Upper Indus Basin (UIB) in the Himalayas-Karakoram-Hindukush (HKH) region. These datasets are: 1) Global Precipitation Climatology Project (GPCP), 2) Climate Prediction Centre (CPC) Merged Analysis of Precipitation (CMAP), 3) NCEP / NCAR, 4) Global Precipitation Climatology Centre (GPCC), 5) Climatic Research Unit (CRU), 6) Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), 7) Tropical Rainfall Measuring Mission (TRMM), 8) European Reanalysis (ERA) interim data, 9) PRINCETON, 10) European Reanalysis-40 (ERA-40), 11) Willmott and Matsuura, and 12) WATCH Forcing Data based on ERA interim (WFDEI). Precipitation accuracy and consistency were assessed by a physical mass balance involving the sum of annual measured flow, estimated actual evapotranspiration (average of 4 datasets), estimated glacier mass balance melt contribution (average of 4 datasets), and groundwater recharge (average of 3 datasets), during 1999-2010. The mass balance assessment was complemented by Turc-Budyko non-dimensional analysis, where annual precipitation, measured flow and potential evapotranspiration (average of 5 datasets) data were used for the same period. Both analyses suggest that all tested precipitation datasets significantly underestimate precipitation in the Karakoram sub-basins. For the Hindukush and Himalayan sub-basins most datasets underestimate precipitation, except ERA-interim and ERA-40. 
The analysis indicates that for this large region with complicated terrain features and stark spatial precipitation gradients the reanalysis datasets have better consistency with flow measurements than datasets derived from records of only sparsely distributed climatic
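
The mass-balance test described above amounts to comparing each dataset's precipitation with a water-balance estimate; a minimal sketch, with hypothetical values in mm/yr:

```python
def implied_precipitation(runoff, actual_et, gw_recharge, glacier_melt):
    """Water-balance estimate of basin precipitation:
    P ~= Q + AET + groundwater recharge - glacier melt contribution."""
    return runoff + actual_et + gw_recharge - glacier_melt

def relative_bias(dataset_p, implied_p):
    """Negative values indicate the gridded dataset underestimates."""
    return (dataset_p - implied_p) / implied_p

p_implied = implied_precipitation(600.0, 300.0, 50.0, 100.0)  # 850 mm/yr
bias = relative_bias(500.0, p_implied)                        # about -0.41
```

A large negative bias of this kind, appearing across all datasets for the Karakoram sub-basins, is what the abstract describes as significant underestimation.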

  5. A note on the accuracy of spectral method applied to nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    The Fourier spectral method can achieve exponential accuracy both at the approximation level and for solving partial differential equations, provided the solutions are analytic. For a linear partial differential equation with a discontinuous solution, the Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of the Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However, the numerical solution does contain accurate information which can be extracted by post-processing based on Gegenbauer polynomials.

  6. High accuracy operon prediction method based on STRING database scores.

    PubMed

    Taboada, Blanca; Verde, Cristina; Merino, Enrique

    2010-07-01

    We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6% and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when using one organism's data set for training and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even in these cases, the accuracies reached with our method were outstandingly high: 91.5% and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully-sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/. PMID:20385580
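
A toy stand-in for the two-feature classifier described above (the actual method trains a neural network; the threshold rule and its parameters here are illustrative assumptions, not the paper's model):

```python
def same_operon(intergenic_distance_bp, string_score,
                max_distance=50, min_score=0.7):
    """Predict whether two contiguous genes share an operon from their
    intergenic distance (bp) and a STRING functional-association score
    in [0, 1]. A crude threshold rule standing in for the trained
    neural network; thresholds are hypothetical."""
    if intergenic_distance_bp <= max_distance:
        return True
    return string_score >= min_score

# (distance, score): short gap; long gap but strong association; neither
pairs = [(12, 0.9), (300, 0.95), (300, 0.1)]
predictions = [same_operon(d, s) for d, s in pairs]  # [True, True, False]
```

The point of the paper's design is that the two features are complementary: short intergenic distance alone misses operon pairs with long gaps, which the functional-association score recovers.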

  7. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank-order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70 to 0.75. The boundaries of the interval with the most probable correlation values are 0.6 to 0.9 for the decay time of 240 h
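
The accuracy measure used above, Spearman's rank-order correlation between the known and reconstructed source fields, can be computed as follows (a self-contained sketch; the gridded fields are assumed flattened to lists):

```python
def ranks(xs):
    """Ranks of xs (1-based), with ties given their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    out = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of tied rank positions
        for k in range(i, j + 1):
            out[order[k]] = avg
        i = j + 1
    return out

def spearman(x, y):
    """Spearman's rank-order correlation: Pearson correlation of ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it depends only on ranks, the coefficient rewards reconstructing the spatial ordering of source strengths even when absolute magnitudes are off, which suits a source-reconstruction benchmark.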

  8. Increasing accuracy in the assessment of motion sickness: A construct methodology

    NASA Technical Reports Server (NTRS)

    Stout, Cynthia S.; Cowings, Patricia S.

    1993-01-01

    The purpose is to introduce a new methodology that should improve the accuracy of the assessment of motion sickness. This construct methodology utilizes both subjective reports of motion sickness and objective measures of physiological correlates to assess motion sickness. Current techniques and methods used in the framework of a construct methodology are inadequate. Current assessment techniques for diagnosing motion sickness and space motion sickness are reviewed, and attention is called to the problems with the current methods. Further, principles of psychophysiology that when applied will probably resolve some of these problems are described in detail.

  9. Assessment of ambulatory blood pressure recorders: accuracy and clinical performance.

    PubMed

    White, W B

    1991-06-01

    There are now more than ten different manufacturers of non-invasive, portable blood pressure monitors in North America, Europe, and Japan. These ambulatory blood pressure recorders measure blood pressure by either auscultatory or oscillometric methodology. Technologic advances in the recorders have resulted in reduced monitor size, reduction or elimination of motor noise during cuff inflation, the ability to program the recorder without an external computer system, and enhanced precision. Recently, there has been concern that more structured validation protocols were not implemented prior to the widespread marketing of ambulatory blood pressure recorders. There is a need for proper assessment of recorders prior to use in clinical research or practice. Data on several existing recorders suggest that while most are reasonably accurate during resting measurements, many lose this accuracy during motion, and clinical performance may vary among the monitors. Validation studies of ambulatory recorders should include comparison with mercury column and intra-arterial determinations, resting and motion measurements, and assessment of clinical performance in hypertensive patients. PMID:1893652

  10. Accuracy assessment of novel two-axes rotating and single-axis translating calibration equipment

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Ye, Dong; Che, Rensheng

    2009-11-01

    A new method measures rocket nozzle 3D motion with a motion tracking system based on passive optical markers. However, an important issue remains to be resolved: how to assess the accuracy of the rocket nozzle motion test. Therefore, calibration equipment was designed and manufactured to generate ground-truth nozzle model motion such as translation, angle, velocity, and angular velocity. It consists of a base, a lifting platform, a rotary table and a rocket nozzle model of precise geometry. The nozzle model, with markers attached, is installed on the rotary table, which can translate or rotate at a known velocity. The overall accuracy of the rocket nozzle motion test is evaluated by comparing the truth values with the static and dynamic test data. This paper puts emphasis on the accuracy assessment of the novel two-axis rotating and single-axis translating calibration equipment. By substituting measured values of the error sources into the error model, the pointing error is found to be less than 0.005 deg, the rotation centre position error 0.08 mm, and the rate stability better than 10^-3. The calibration equipment's accuracy is much higher than that of the nozzle motion test system, so the former can be used to assess and calibrate the latter.

  11. Factors Governing the Accuracy of Subvisible Particle Counting Methods.

    PubMed

    Ríos Quiroz, Anacelia; Finkler, Christof; Huwyler, Joerg; Mahler, Hanns-Christian; Schmidt, Roland; Koulov, Atanas V

    2016-07-01

    A number of new techniques for subvisible particle characterization in biotechnological products have emerged in the last decade. Although the pharmaceutical community is actively using them, current knowledge about the analytical performance of some of these tools is still inadequate to support their routine use in the development of biopharmaceuticals (especially in the case of submicron methods). With the aim of increasing this knowledge and our understanding of the most prominent techniques for subvisible particle characterization, this study reports the results of a systematic evaluation of their accuracy. Our results showed a marked overcounting effect, especially for low-concentration samples and fragile particles. Furthermore, we established the relative sample size distribution as the most important contributor to an instrument's counting accuracy. The smaller the representation of a particle size within a solution, the more difficulty the instruments had in providing an accurate count. These findings correlate with a recent study examining the principal factors influencing the precision of subvisible particle measurements. The more thorough understanding of the capabilities of the different particle characterization methods provided here will help guide the application of these methods and the interpretation of results in subvisible particle characterization studies.

  13. Accuracy assessment of a surface electromyogram decomposition system in human first dorsal interosseus muscle

    NASA Astrophysics Data System (ADS)

    Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.

    2014-04-01

    Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
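
The two-source comparison can be sketched as a tolerance-window match between the two independently decomposed spike trains; this is a simplified agreement index, not the study's exact formula, and the tolerance value is an assumption:

```python
def spike_agreement(train_a, train_b, tol=0.0005):
    """Fraction of discharges matched within +/- tol seconds between two
    independently decomposed spike trains (simplified two-source index).
    Each spike in train_b is matched at most once."""
    a, b = sorted(train_a), sorted(train_b)
    matched, j = 0, 0
    for t in a:
        while j < len(b) and b[j] < t - tol:
            j += 1                     # skip spikes too early to match t
        if j < len(b) and abs(b[j] - t) <= tol:
            matched += 1
            j += 1                     # consume the matched spike
    return matched / max(len(a), len(b))

# Hypothetical discharge times (s) from intramuscular vs. surface decomposition
agreement = spike_agreement([0.100, 0.200, 0.300], [0.1001, 0.200, 0.400])
```

An index near 0.95 over a motor unit's full spike train corresponds to the roughly 95% decomposition accuracy the study reports.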

  14. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to have: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.

  15. Accuracy of Four Tooth Size Prediction Methods on Malay Population

    PubMed Central

    Mahmoud, Belal Khaled; Abu Asab, Saifeddin Hamed I.; Taib, Haslina

    2012-01-01

    Objective. To examine the accuracy of the Moyers 50%, Tanaka and Johnston, Ling and Wong, and Jaroontham and Godfrey methods in predicting the mesio-distal crown width of the permanent canines and premolars (C + P1 + P2) in a Malay population. Materials and Methods. The study models of 240 Malay children (120 males and 120 females) aged 14 to 18 years, all free of any signs of dental pathology or anomalies, were measured using a digital caliper accurate to 0.01 mm. The predicted widths (C + P1 + P2) in both arches derived from the tested prediction equations were compared with the actual measured widths. Results. The Moyers and the Tanaka and Johnston methods showed significant differences between the actual and predicted widths of (C + P1 + P2) for both sexes. The Ling and Wong method also showed a statistically significant difference for males; however, there was no significant difference for females. The Jaroontham and Godfrey method showed a statistically significant difference for females, but the male values did not show any significant difference. Conclusion. For Malay males, the method proposed by Jaroontham and Godfrey for Thai males proved to be highly accurate. For Malay females, the method proposed by Ling and Wong for southern Chinese females proved to be highly accurate. PMID:23209918

  16. Radiative accuracy assessment of CrIS upper level channels using COSMIC RO data

    NASA Astrophysics Data System (ADS)

    Qi, C.; Weng, F.; Han, Y.; Lin, L.; Chen, Y.; Wang, L.

    2012-12-01

    The Cross-track Infrared Sounder (CrIS) onboard the Suomi National Polar-orbiting Partnership (NPP) satellite is designed to provide high vertical resolution information on the atmosphere's three-dimensional structure of temperature and water vapor. Much work has been done to verify the observational accuracy of CrIS since its launch on Oct. 28, 2011, such as SNO cross comparisons with other hyperspectral infrared instruments and forward simulation comparisons using a radiative transfer model driven by numerical prediction background profiles. The radio occultation (RO) technique can provide profiles of the Earth's ionosphere and neutral atmosphere with high accuracy, high vertical resolution, and global coverage, and has the advantages of all-weather capability, low expense, and long-term stability. CrIS radiative calibration accuracy was assessed by comparing observations with line-by-line simulations based on COSMIC RO data. The main processing steps were: (a) downloading COSMIC RO data and collocating them with CrIS measurements through a weighting-function (WF) peak-altitude-dependent collocation method; (b) high-spectral-resolution line-by-line radiance simulation using the collocated COSMIC RO profiles; (c) generation of CrIS channel radiances by an FFT transform method; and (d) bias analysis. This absolute calibration accuracy assessment method indicated a bias error of around 0.3 K in the CrIS measurements.

  17. An assessment of reservoir storage change accuracy from SWOT

    NASA Astrophysics Data System (ADS)

    Clark, Elizabeth; Moller, Delwyn; Lettenmaier, Dennis

    2013-04-01

    The anticipated Surface Water and Ocean Topography (SWOT) satellite mission will provide water surface height and areal extent measurements for terrestrial water bodies at unprecedented accuracy, with essentially global coverage and a 22-day repeat cycle. These measurements will provide a unique opportunity to observe storage changes in naturally occurring lakes as well as manmade reservoirs. Given political constraints on the sharing of water information, international databases of reservoir characteristics, such as the Global Reservoir and Dam Database, are limited to the largest reservoirs for which countries have voluntarily provided information. Impressive efforts have been made to combine currently available altimetry data with satellite-based imagery of water surface extent; however, these data sets are limited to large reservoirs located on an altimeter's flight track. SWOT's global coverage and simultaneous measurement of height and water surface extent remove, in large part, the constraint of location relative to flight path. Previous studies based on Arctic lakes suggest that SWOT will be able to provide a noisy, but meaningful, storage change signal for lakes as small as 250 m x 250 m. Here, we assess the accuracy of monthly storage change estimates over 10 reservoirs in the U.S. and consider the plausibility of estimating total storage change. Published maps of reservoir bathymetry were combined with a historical time series of daily storage to produce daily time series of maps of water surface elevation. These time series were then sampled based on realistic SWOT orbital parameters and noise characteristics to create a time series of synthetic SWOT observations of water surface elevation and extent for each reservoir. We then plotted area versus elevation for the true values and for the synthetic SWOT observations. For each reservoir, a curve was fit to the synthetic SWOT observations, and its integral was used to estimate total storage.
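
The storage estimate sketched in the abstract, integrating a fitted area-elevation curve, can be illustrated as below. The function name and the trapezoidal integration are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def storage_from_area_elevation(heights_m, areas_m2):
    """Integrate an area-elevation curve between the lowest and highest
    observed water levels (trapezoidal rule) to estimate the storage
    volume change in cubic metres."""
    order = np.argsort(heights_m)
    h = np.asarray(heights_m, dtype=float)[order]
    a = np.asarray(areas_m2, dtype=float)[order]
    # V = integral of A(h) dh, evaluated trapezoid by trapezoid
    return float(np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(h)))
```

For a reservoir whose surface area grows linearly from 1 km² to 2 km² over a 10 m rise, the integral gives 1.5 × 10⁷ m³.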

  18. Assessment Of Accuracies Of Remote-Sensing Maps

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1992-01-01

    Report describes study of accuracies of classifications of picture elements in map derived by digital processing of Landsat-multispectral-scanner imagery of coastal plain of Arctic National Wildlife Refuge. Accuracies of portions of map analyzed with help of statistical sampling procedure called "stratified plurality sampling", in which all picture elements in given cluster classified in stratum to which plurality of them belong.

  19. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
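
The accuracy definition quoted above (absolute mean plus k sigma) is straightforward to compute from a sample of per-axis field errors. The helper below is a hypothetical sketch, not GOES-R flight software.

```python
import numpy as np

def magnetometer_accuracy(errors_nT, n_sigma=3):
    """Absolute mean error plus n_sigma sample standard deviations (nT).
    Quiet-time accuracy uses n_sigma=3; storm-time uses n_sigma=2."""
    e = np.asarray(errors_nT, dtype=float)
    return abs(e.mean()) + n_sigma * e.std(ddof=1)
```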

  20. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.

  1. New Criteria for Assessing the Accuracy of Blood Glucose Monitors meeting, October 28, 2011.

    PubMed

    Walsh, John; Roberts, Ruth; Vigersky, Robert A; Schwartz, Frank

    2012-03-01

    Glucose meters (GMs) are routinely used for self-monitoring of blood glucose by patients and for point-of-care glucose monitoring by health care providers in outpatient and inpatient settings. Although GMs are widely assumed to be accurate, there have been numerous reports of inaccuracies with resulting morbidity and mortality; insulin dosing errors based on inaccurate GMs are the most critical. On October 28, 2011, the Diabetes Technology Society invited 45 diabetes technology clinicians who were attending the 2011 Diabetes Technology Meeting to participate in a closed-door meeting entitled New Criteria for Assessing the Accuracy of Blood Glucose Monitors. This report reflects the opinions of most of the attendees of that meeting. The Food and Drug Administration (FDA), the public, and several medical societies are currently in dialogue to establish a new standard for GM accuracy. This update to the FDA standard is driven by improved meter accuracy, technological advances (pumps, bolus calculators, continuous glucose monitors, and insulin pens), reports of hospital and outpatient deaths, consumer complaints about inaccuracy, and research studies showing that several approved GMs failed to meet FDA or International Organization for Standardization standards in postapproval testing. These circumstances mandate a set of new GM standards that appropriately match the GMs' analytical accuracy to the clinical accuracy required for their intended use, as well as ensuring their ongoing accuracy following approval. The attendees of the New Criteria for Assessing the Accuracy of Blood Glucose Monitors meeting proposed a graduated standard and other methods to improve GM performance, which are discussed in this meeting report.

  2. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and a photogrammetric process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy was estimated by comparison with GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs; accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm), and the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank prevented any accuracy improvement when a finer image resolution (2 cm instead of 4.6 cm) was used.
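
A DSM-versus-GNSS comparison of the kind described reduces to error statistics over checkpoints. A minimal sketch follows; the function name and the choice of mean error plus RMSE as the reported statistics are assumptions.

```python
import numpy as np

def dsm_vertical_errors(dsm_z, gnss_z):
    """Mean error and RMSE (metres) between DSM elevations sampled at
    checkpoint locations and the GNSS-surveyed elevations."""
    diff = np.asarray(dsm_z, dtype=float) - np.asarray(gnss_z, dtype=float)
    mean_error = float(diff.mean())
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    return mean_error, rmse
```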

  3. Diagnostic accuracy of existing methods for identifying diabetic foot ulcers from inpatient and outpatient datasets

    PubMed Central

    2010-01-01

    Background As the number of persons with diabetes is projected to double in the next 25 years in the US, an accurate method of identifying diabetic foot ulcers in population-based data sources is ever more important for disease surveillance and public health purposes. The objectives of this study were to evaluate the accuracy of existing methods and to propose a new method. Methods Four existing methods were used to identify all patients diagnosed with a foot ulcer in a Department of Veterans Affairs (VA) hospital from the inpatient and outpatient datasets for 2003. Their electronic medical records were reviewed to verify whether the medical records positively indicated the presence of a diabetic foot ulcer in diagnoses, medical assessments, or consults. For each method, five measures of accuracy and agreement were evaluated using data from medical records as the gold standard. Results Our medical record reviews show that all methods had sensitivity > 92%, but their specificity varied substantially, between 74% and 91%. A method used in Harrington et al. (2004) was the most accurate, with 94% sensitivity and 91% specificity, and produced an annual prevalence of 3.3% among VA users with diabetes nationwide. A new and simpler method consisting of two codes (707.1× and 707.9) showed equally good accuracy, with 93% sensitivity and 91% specificity, and a 3.1% prevalence. Conclusions Our results indicate that the Harrington and New methods are highly comparable and accurate. We recommend the Harrington method for its accuracy and the New method for its simplicity and comparable accuracy. PMID:21106076
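
The sensitivity and specificity figures above come from comparing each code-based method against chart review as the gold standard. A generic sketch (the function name and argument layout are illustrative):

```python
def sensitivity_specificity(flagged, truth):
    """Sensitivity and specificity of a case-finding method against a
    gold standard (one boolean per patient in each list)."""
    tp = sum(f and t for f, t in zip(flagged, truth))          # true positives
    fn = sum(t and not f for f, t in zip(flagged, truth))      # missed cases
    tn = sum(not f and not t for f, t in zip(flagged, truth))  # true negatives
    fp = sum(f and not t for f, t in zip(flagged, truth))      # false alarms
    return tp / (tp + fn), tn / (tn + fp)
```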

  4. A High-accuracy Micro-deformation Measurement Method

    NASA Astrophysics Data System (ADS)

    Jiang, Li

    2016-07-01

    The requirement for ever-increasing-resolution space cameras drives the focal length and diameter of optical lenses to increase. High-frequency vibration during launch and the complex environmental conditions of outer space generate micro deformations in the components of space cameras. As a result, images from the space cameras are blurred. Therefore, it is necessary to measure the micro deformations of space camera components under various experimental conditions. This paper presents a high-accuracy micro deformation measurement method. The method is implemented as follows: (1) fix tungsten-steel balls onto the space camera being measured and measure the coordinates of each ball under a standard condition; (2) simulate high-frequency vibrations and outer-space-like environmental conditions, measuring the coordinates of each ball under each combination of test conditions; and (3) compute the deviation of each ball's coordinates under a test condition combination from its coordinates under the standard condition; this deviation is the micro deformation of the space camera component associated with the ball. The method was applied to micro deformation measurement for space cameras of different models, and the measurement data validated the proposed method.

  5. The ADI-FDTD method for high accuracy electrophysics applications

    NASA Astrophysics Data System (ADS)

    Haeri Kermani, Mohammad

    The Finite-Difference Time-Domain (FDTD) method is a dependable method to simulate a wide range of problems, from acoustics to electromagnetics to photonics, among others. The execution time of an FDTD simulation is inversely proportional to the time-step size. Since the FDTD method is explicit, its time-step size is limited by the well-known Courant-Friedrichs-Lewy (CFL) stability limit, which can render the simulation inefficient for very fine structures. The Alternating Direction Implicit FDTD (ADI-FDTD) method has been introduced as an unconditionally stable implicit method, and numerous works have shown that it remains stable even when the CFL stability limit is exceeded. Therefore, the ADI-FDTD method can be considered an efficient method for special classes of problems with very fine structures or high-gradient fields. Whenever the ADI-FDTD method is used to simulate open-region radiation or scattering problems, the implementation of a mesh-truncation scheme or absorbing boundary condition becomes an integral part of the simulation. These truncation techniques represent, in essence, differential operators that are discretized using a distinct differencing scheme, which can potentially affect the stability of the scheme used for the interior region. In this work, we show that the ADI-FDTD method can be rendered unstable when higher-order mesh truncation techniques such as Higdon's Absorbing Boundary Condition (ABC) or the Complementary Operators Method (COM) are used. When large field gradients exist within a limited volume, a non-uniform grid can reduce the computational domain and therefore decrease the computational cost of the FDTD method. However, for high-accuracy problems, different grid sizes increase the truncation error at the boundary between domains having different grid sizes. To address this problem, we introduce the Complementary Derivatives Method (CDM), a second-order accurate interpolation scheme. The CDM theory is
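
For reference, the CFL bound that limits the explicit FDTD time step (and that ADI-FDTD is free to exceed) can be computed as below; the helper name is an assumption.

```python
import math

def cfl_max_timestep(dx, dy, dz, c=299792458.0):
    """Largest stable explicit-FDTD time step (seconds) on a uniform
    3-D grid: dt <= 1 / (c * sqrt(1/dx**2 + 1/dy**2 + 1/dz**2))."""
    return 1.0 / (c * math.sqrt(dx ** -2 + dy ** -2 + dz ** -2))
```

For a 1 mm cubic cell the limit is roughly 1.9 ps, which is why very fine structures force tiny time steps in the explicit scheme.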

  6. Assessing and ensuring GOES-R magnetometer accuracy

    NASA Astrophysics Data System (ADS)

    Carter, Delano; Todirita, Monica; Kronenwetter, Jeffrey; Dahya, Melissa; Chu, Donald

    2016-05-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma error per axis. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma error per axis. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. With the proposed calibration regimen, both suggest that the magnetometer subsystem will meet its accuracy requirements.

  7. Standardizing the Protocol for Hemispherical Photographs: Accuracy Assessment of Binarization Algorithms

    PubMed Central

    Glatthorn, Jonas; Beckschäfer, Philip

    2014-01-01

    Hemispherical photography is a well-established method to optically assess ecological parameters related to plant canopies; e.g. ground-level light regimes and the distribution of foliage within the crown space. Interpreting hemispherical photographs involves classifying pixels as either sky or vegetation. A wide range of automatic thresholding or binarization algorithms exists to classify the photographs. The variety in methodology hampers the ability to compare results across studies. To identify an optimal threshold selection method, this study assessed the accuracy of seven binarization methods implemented in software currently available for the processing of hemispherical photographs. Therefore, binarizations obtained by the algorithms were compared to reference data generated through a manual binarization of a stratified random selection of pixels. This approach was adopted from the accuracy assessment of map classifications known from remote sensing studies. Percentage correct (Pc) and kappa-statistics (K) were calculated. The accuracy of the algorithms was assessed for photographs taken with automatic exposure settings (auto-exposure) and photographs taken with settings which avoid overexposure (histogram-exposure). In addition, gap fraction values derived from hemispherical photographs were compared with estimates derived from the manually classified reference pixels. All tested algorithms were shown to be sensitive to overexposure. Three of the algorithms showed an accuracy which was high enough to be recommended for the processing of histogram-exposed hemispherical photographs: “Minimum” (Pc 98.8%; K 0.952), “Edge Detection” (Pc 98.1%; K 0.950), and “Minimum Histogram” (Pc 98.1%; K 0.947). The Minimum algorithm overestimated gap fraction least of all (11%). The overestimation by the algorithms Edge Detection (63%) and Minimum Histogram (67%) was considerably larger. For the remaining four evaluated algorithms (IsoData, Maximum Entropy, MinError, and Otsu) an

  8. Standardizing the protocol for hemispherical photographs: accuracy assessment of binarization algorithms.

    PubMed

    Glatthorn, Jonas; Beckschäfer, Philip

    2014-01-01

    Hemispherical photography is a well-established method to optically assess ecological parameters related to plant canopies; e.g. ground-level light regimes and the distribution of foliage within the crown space. Interpreting hemispherical photographs involves classifying pixels as either sky or vegetation. A wide range of automatic thresholding or binarization algorithms exists to classify the photographs. The variety in methodology hampers the ability to compare results across studies. To identify an optimal threshold selection method, this study assessed the accuracy of seven binarization methods implemented in software currently available for the processing of hemispherical photographs. Therefore, binarizations obtained by the algorithms were compared to reference data generated through a manual binarization of a stratified random selection of pixels. This approach was adopted from the accuracy assessment of map classifications known from remote sensing studies. Percentage correct (Pc) and kappa-statistics (K) were calculated. The accuracy of the algorithms was assessed for photographs taken with automatic exposure settings (auto-exposure) and photographs taken with settings which avoid overexposure (histogram-exposure). In addition, gap fraction values derived from hemispherical photographs were compared with estimates derived from the manually classified reference pixels. All tested algorithms were shown to be sensitive to overexposure. Three of the algorithms showed an accuracy which was high enough to be recommended for the processing of histogram-exposed hemispherical photographs: "Minimum" (Pc 98.8%; K 0.952), "Edge Detection" (Pc 98.1%; K 0.950), and "Minimum Histogram" (Pc 98.1%; K 0.947). The Minimum algorithm overestimated gap fraction least of all (11%). The overestimation by the algorithms Edge Detection (63%) and Minimum Histogram (67%) was considerably larger. For the remaining four evaluated algorithms (IsoData, Maximum Entropy, MinError, and Otsu
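
The Pc and kappa statistics used in this accuracy assessment can be computed from predicted and reference pixel labels as follows. This is a sketch; the function name and label encoding are assumptions.

```python
def pc_and_kappa(pred, ref):
    """Percentage correct (Pc, as a fraction) and Cohen's kappa (K) for a
    pixel classification against manually labelled reference pixels."""
    n = len(pred)
    pc = sum(p == r for p, r in zip(pred, ref)) / n
    # chance agreement expected from the marginal class frequencies
    classes = set(pred) | set(ref)
    pe = sum((sum(p == c for p in pred) / n) * (sum(r == c for r in ref) / n)
             for c in classes)
    return pc, (pc - pe) / (1 - pe)
```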

  9. Accuracy of peak VO2 assessments in career firefighters

    PubMed Central

    2011-01-01

    Background Sudden cardiac death is the leading cause of on-duty death among United States firefighters. Accurately assessing cardiopulmonary capacity is critical to preventing, or reducing, cardiovascular events in this population. Methods A total of 83 male firefighters performed Wellness-Fitness Initiative (WFI) maximal exercise treadmill tests and direct peak VO2 assessments to volitional fatigue. Of the 83, 63 completed WFI sub-maximal exercise treadmill tests for comparison to directly measured peak VO2 and historical estimations. Results Maximal heart rates were overestimated by the traditional 220-age equation by about 5 beats per minute (p < .001). Peak VO2 was overestimated by the WFI maximal exercise treadmill and the historical WFI sub-maximal estimation by ~1 MET and ~2 METs, respectively (p < 0.001). The revised 2008 WFI sub-maximal treadmill estimation was found to accurately estimate peak VO2 when compared to directly measured peak VO2. Conclusion Accurate assessment of cardiopulmonary capacity is critical for determining appropriate duty assignments and identifying potential cardiovascular problems in firefighters. Estimation of cardiopulmonary fitness improves using the revised 2008 WFI sub-maximal equation. PMID:21943154
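
The conversions behind these comparisons are simple; both helpers below are illustrative sketches, not the WFI equations themselves (the revised 2008 WFI sub-maximal equation is not reproduced in this abstract).

```python
def age_predicted_max_hr(age_years):
    """Traditional age-predicted maximal heart rate (bpm); the study
    found it overestimates firefighters' measured values by ~5 bpm."""
    return 220 - age_years

def mets_from_vo2(vo2_ml_kg_min):
    """Convert peak oxygen uptake to METs (1 MET = 3.5 ml O2/kg/min)."""
    return vo2_ml_kg_min / 3.5
```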

  10. The effect of different Global Navigation Satellite System methods on positioning accuracy in elite alpine skiing.

    PubMed

    Gilgien, Matthias; Spörri, Jörg; Limpach, Philippe; Geiger, Alain; Müller, Erich

    2014-10-03

    In sport science, Global Navigation Satellite Systems (GNSS) are frequently applied to capture athletes' position, velocity and acceleration. Applications of GNSS span a large range of different GNSS technologies and methods, yet to date no study has comprehensively compared the different GNSS methods applied. Therefore, the aim of the current study was to investigate the effect of differential and non-differential solutions, different satellite systems and different GNSS signal frequencies on position accuracy. Twelve alpine ski racers were equipped with high-end GNSS devices while performing runs on a giant slalom course. The skiers' GNSS antenna positions were calculated in three satellite signal obstruction conditions using five different GNSS methods. The GNSS antenna positions were compared to a video-based photogrammetric reference system over one turn and against the most valid GNSS method over the entire run. Furthermore, the time to acquire differential GNSS solutions was assessed for four differential methods. The only GNSS method that consistently yielded sub-decimetre position accuracy in typical alpine skiing conditions was a differential method using the American (GPS) and Russian (GLONASS) satellite systems and the satellite signal frequencies L1 and L2. Under conditions of minimal satellite signal obstruction, valid results were also achieved when either the satellite system GLONASS or the frequency L2 was dropped from the best configuration. All other methods failed to fulfill the accuracy requirements needed to detect relevant differences in the kinematics of alpine skiers, even in conditions favorable for GNSS measurements. The methods with good positioning accuracy also had the shortest times to compute differential solutions. This paper highlights the importance of choosing appropriate methods to meet the accuracy requirements of sport applications.

  11. The Effect of Different Global Navigation Satellite System Methods on Positioning Accuracy in Elite Alpine Skiing

    PubMed Central

    Gilgien, Matthias; Spörri, Jörg; Limpach, Philippe; Geiger, Alain; Müller, Erich

    2014-01-01

    In sport science, Global Navigation Satellite Systems (GNSS) are frequently applied to capture athletes' position, velocity and acceleration. Applications of GNSS span a large range of different GNSS technologies and methods, yet to date no study has comprehensively compared the different GNSS methods applied. Therefore, the aim of the current study was to investigate the effect of differential and non-differential solutions, different satellite systems and different GNSS signal frequencies on position accuracy. Twelve alpine ski racers were equipped with high-end GNSS devices while performing runs on a giant slalom course. The skiers' GNSS antenna positions were calculated in three satellite signal obstruction conditions using five different GNSS methods. The GNSS antenna positions were compared to a video-based photogrammetric reference system over one turn and against the most valid GNSS method over the entire run. Furthermore, the time to acquire differential GNSS solutions was assessed for four differential methods. The only GNSS method that consistently yielded sub-decimetre position accuracy in typical alpine skiing conditions was a differential method using the American (GPS) and Russian (GLONASS) satellite systems and the satellite signal frequencies L1 and L2. Under conditions of minimal satellite signal obstruction, valid results were also achieved when either the satellite system GLONASS or the frequency L2 was dropped from the best configuration. All other methods failed to fulfill the accuracy requirements needed to detect relevant differences in the kinematics of alpine skiers, even in conditions favorable for GNSS measurements. The methods with good positioning accuracy also had the shortest times to compute differential solutions. This paper highlights the importance of choosing appropriate methods to meet the accuracy requirements of sport applications. PMID:25285461

  12. Accuracy Assessment of Response Surface Approximations for Supersonic Turbine Design

    NASA Technical Reports Server (NTRS)

    Papila, Nilay; Papila, Melih; Shyy, Wei; Haftka, Raphael T.; FitzCoy, Norman

    2001-01-01

    There is a growing trend to employ CFD tools to supply the information needed for design optimization of fluid dynamic components and systems. Such results are prone to uncertainties due to, among other causes, discretization errors, incomplete convergence of computational procedures, and errors associated with physical models such as turbulence closures. Based on this type of information, gradient-based optimization algorithms often suffer from noisy calculations, which can seriously compromise the outcome; similar problems arise from experimental measurements. Global optimization techniques, such as those based on the response surface (RS) concept, are becoming popular in part because they can overcome some of these barriers. However, there are also fundamental issues with such global optimization techniques. For example, in high-dimensional design spaces, typically only a small number of function evaluations are available due to computational and experimental costs. On the other hand, the complex behavior of the design variables does not allow one to model the global characteristics of the design space with simple quadratic polynomials. Consequently, a main challenge is to reduce the size of the region where we fit the RS, or to make the RS more accurate in the regions where the optimum is likely to reside. Response surface techniques using either polynomial or Neural Network (NN) methods offer designers alternatives for conducting design optimization. The RS technique employs statistical and numerical techniques to establish the relationship between design variables and objective/constraint functions, typically using polynomials. In this study, we aim at addressing issues related to the following questions: (1) How to identify outliers associated with a given RS representation and improve the RS model via appropriate treatments? (2) How to focus on selected design data so that the RS can give better performance in regions critical to design optimization? (3
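
A response surface in the sense discussed above is, in its simplest form, a polynomial fitted by least squares. A one-variable quadratic sketch (the function name is an assumption):

```python
import numpy as np

def fit_quadratic_rs(x, y):
    """Least-squares fit of y ~ b0 + b1*x + b2*x**2, the simplest
    polynomial response surface; returns the coefficients (b0, b1, b2)."""
    x = np.asarray(x, dtype=float)
    # design matrix with constant, linear, and quadratic columns
    A = np.column_stack([np.ones_like(x), x, x ** 2])
    coef, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return coef
```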

  13. Assessing the accuracy and reproducibility of modality independent elastography in a murine model of breast cancer

    PubMed Central

    Weis, Jared A.; Flint, Katelyn M.; Sanchez, Violeta; Yankeelov, Thomas E.; Miga, Michael I.

    2015-01-01

    Cancer progression has been linked to mechanics. Therefore, there has been recent interest in developing noninvasive imaging tools for cancer assessment that are sensitive to changes in tissue mechanical properties. We have developed one such method, modality independent elastography (MIE), that estimates the relative elastic properties of tissue by fitting anatomical image volumes acquired before and after the application of compression to biomechanical models. The aim of this study was to assess the accuracy and reproducibility of the method using phantoms and a murine breast cancer model. Magnetic resonance imaging data were acquired, and the MIE method was used to estimate relative volumetric stiffness. Accuracy was assessed using phantom data by comparing to gold-standard mechanical testing of elasticity ratios. Validation error was <12%. Reproducibility analysis was performed on animal data, and within-subject coefficients of variation ranged from 2 to 13% at the bulk level and 32% at the voxel level. To our knowledge, this is the first study to assess the reproducibility of an elasticity imaging metric in a preclinical cancer model. Our results suggest that the MIE method can reproducibly generate accurate estimates of the relative mechanical stiffness and provide guidance on the degree of change needed in order to declare biological changes rather than experimental error in future therapeutic studies. PMID:26158120

  14. Pixels, Blocks of Pixels, and Polygons: Choosing a Spatial Unit for Thematic Accuracy Assessment

    EPA Science Inventory

    Pixels, polygons, and blocks of pixels are all potentially viable spatial assessment units for conducting an accuracy assessment. We develop a statistical population-based framework to examine how the spatial unit chosen affects the outcome of an accuracy assessment. The populati...

  15. Does it Make a Difference? Investigating the Assessment Accuracy of Teacher Tutors and Student Tutors

    ERIC Educational Resources Information Center

    Herppich, Stephanie; Wittwer, Jörg; Nückles, Matthias; Renkl, Alexander

    2013-01-01

    Tutors often have difficulty with accurately assessing a tutee's understanding. However, little is known about whether the professional expertise of tutors influences their assessment accuracy. In this study, the authors examined the accuracy with which 21 teacher tutors and 25 student tutors assessed a tutee's understanding of the human…

  16. New High-Accuracy Methods for Automatically Detecting & Tracking CMEs

    NASA Astrophysics Data System (ADS)

    Byrne, Jason; Morgan, H.; Habbal, S. R.

    2012-05-01

    With the large amounts of CME image data available from the SOHO and STEREO coronagraphs, manual cataloguing of events can be tedious and subject to user bias. Therefore automated catalogues, such as CACTus and SEEDS, have been developed in an effort to produce a robust method of detection and analysis of events. Here we present the development of a new CORIMP (coronal image processing) CME detection and tracking technique that overcomes many of the drawbacks of previous methods. It works by first employing a dynamic CME separation technique to remove the static background, and then characterizing CMEs via a multiscale edge-detection algorithm. This allows the inherent structure of the CMEs to be revealed in each image, which is usually prone to spatiotemporal crosstalk as a result of traditional image-differencing techniques. Thus the kinematic and morphological information on each event is resolved with higher accuracy than previous catalogues, revealing CME acceleration and expansion profiles otherwise undetected, and enabling a determination of the varying speeds attained across the span of the CME. The potential for a 3D characterization of the internal structure of CMEs is also demonstrated.
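
    The multiscale edge-detection idea above (an edge response that survives across several smoothing scales) can be illustrated by taking the per-pixel maximum gradient magnitude over a few Gaussian scales. This is a generic numpy sketch on a synthetic frame, not the CORIMP implementation:

```python
import numpy as np

def smooth(img, sigma):
    """Separable Gaussian blur using plain numpy convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def multiscale_edges(img, scales=(1.0, 2.0, 4.0)):
    """Per-pixel maximum gradient magnitude over several Gaussian
    scales -- a simple stand-in for multiscale edge detection."""
    responses = []
    for s in scales:
        gy, gx = np.gradient(smooth(img.astype(float), s))
        responses.append(np.hypot(gx, gy))
    return np.max(responses, axis=0)

# Synthetic "coronagraph frame": a vertical step edge at column 32
img = np.zeros((64, 64))
img[:, 32:] = 1.0
edges = multiscale_edges(img)
print("strongest response near column", int(edges[32].argmax()))
```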

  17. Accuracy of virtual models in the assessment of maxillary defects

    PubMed Central

    Kurşun, Şebnem; Kılıç, Cenk; Özen, Tuncer

    2015-01-01

    Purpose This study aimed to assess the reliability of measurements performed on three-dimensional (3D) virtual models of maxillary defects obtained using cone-beam computed tomography (CBCT) and 3D optical scanning. Materials and Methods Mechanical cavities simulating maxillary defects were prepared on the hard palate of nine cadavers. Images were obtained using a CBCT unit at three different fields-of-views (FOVs) and voxel sizes: 1) 60×60 mm FOV, 0.125 mm3 (FOV60); 2) 80×80 mm FOV, 0.160 mm3 (FOV80); and 3) 100×100 mm FOV, 0.250 mm3 (FOV100). Superimposition of the images was performed using software called VRMesh Design. Automated volume measurements were conducted, and differences between surfaces were demonstrated. Silicon impressions obtained from the defects were also scanned with a 3D optical scanner. Virtual models obtained using VRMesh Design were compared with impressions obtained by scanning silicon models. Gold standard volumes of the impression models were then compared with CBCT and 3D scanner measurements. Further, the general linear model was used, and the significance was set to p=0.05. Results A comparison of the results obtained by the observers and methods revealed the p values to be smaller than 0.05, suggesting that the measurement variations were caused by both methods and observers along with the different cadaver specimens used. Further, the 3D scanner measurements were closer to the gold standard measurements when compared to the CBCT measurements. Conclusion In the assessment of artificially created maxillary defects, the 3D scanner measurements were more accurate than the CBCT measurements. PMID:25793180

  18. Comparative Accuracy Assessment of Global Land Cover Datasets Using Existing Reference Data

    NASA Astrophysics Data System (ADS)

    Tsendbazar, N. E.; de Bruin, S.; Mora, B.; Herold, M.

    2014-12-01

    Land cover is a key variable for monitoring the impact of human and natural processes on the biosphere. As one of the Essential Climate Variables, land cover observations are used in climate models and several other applications. Remote sensing technologies have enabled the generation of several global land cover (GLC) products that are based on different data sources and methods (e.g. legends). Moreover, the reported map accuracies result from varying validation strategies. Such differences make the comparison of GLC products challenging and create confusion when selecting suitable datasets for different applications. This study conducts a comparative accuracy assessment of GLC datasets (LC-CCI 2005, MODIS 2005, and Globcover 2005) using the Globcover 2005 reference data, which can represent the thematic differences among these GLC maps. This GLC reference dataset provides LCCS classifier information for the 3 main land cover types of each sample plot. The LCCS classifier information was translated according to the legends of the GLC maps analysed. A preliminary analysis revealed some challenges in LCCS classifier translation arising from missing classifier information, differences in class definitions between the legends, and the absence of class proportions for the main land cover types. To overcome these issues, we consolidated the entire reference dataset (i.e. 3857 samples distributed at global scale). The GLC maps and the reference dataset were then harmonized into 13 general classes to perform the comparative accuracy assessments. To help users select suitable GLC dataset(s) for their applications, we conducted the map accuracy assessments considering different users' perspectives: climate modelling, biodiversity assessments, agriculture monitoring, and map producers. 
This communication will present the method and the results of this study and provide a set of recommendations to the GLC map producers and users with the aim to facilitate the use of GLC maps.
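
    Once maps and reference data are harmonized to a common legend, a comparative assessment like the one described reduces to confusion-matrix statistics. A minimal sketch (the labels and counts below are hypothetical, not the study's data):

```python
import numpy as np

def confusion_matrix(ref, mapped, n_classes):
    """Cross-tabulate reference (rows) vs. map (columns) labels 0..n-1."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for r, m in zip(ref, mapped):
        cm[r, m] += 1
    return cm

def accuracies(cm):
    overall = np.trace(cm) / cm.sum()
    users = np.diag(cm) / cm.sum(axis=0)      # commission errors (map side)
    producers = np.diag(cm) / cm.sum(axis=1)  # omission errors (reference side)
    return overall, users, producers

ref    = [0, 0, 1, 1, 2, 2, 2, 1]   # reference labels at sample plots
mapped = [0, 1, 1, 1, 2, 2, 0, 1]   # labels read from the map
cm = confusion_matrix(ref, mapped, 3)
overall, users, producers = accuracies(cm)
print(f"overall accuracy = {overall:.2f}")
```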

  19. Probabilistic Digital Elevation Model Generation For Spatial Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Jalobeanu, A.

    2008-12-01

    We propose a new method for the measurement of high resolution topography from a stereo pair. The main application area is the study of planetary surfaces. Digital elevation models (DEM) computed from image pairs using state of the art algorithms usually lack quantitative error estimates. This can be a major issue when the result is used to measure actual physical parameters, such as slope or terrain roughness. Thus, we propose a new method to infer a dense bidimensional disparity map from two images, that also estimates the spatial distribution of errors. We adopt a probabilistic approach, which provides a rigorous framework for parameter estimation and uncertainty evaluation. All the parameters are described in terms of random variables within a Bayesian framework. We start by defining a forward model, which mainly consists of warping the observed scene using B-Splines and using a spatially adaptive radiometric change map for robustness purposes. An a priori smoothness model is introduced in order to stabilize the solution. Solving the inverse problem to recover the disparity map requires to optimize a global non-convex energy function, which is difficult in practice due to multiple local optima. A deterministic optimization technique based on a multi-grid strategy, followed by a local energy analysis at the optimum, allows to recover the a posteriori probability density function (pdf) of the disparity, which encodes both the optimal solution and the related error map. Finally, the disparity field is converted into a DEM through a geometric camera model. This camera model is either known initially, or calibrated automatically using the estimated disparity map and available measurements of the topography (existing low-resolution DEM or ground control points). Automatic calibration from uncertain disparity and topography measurements allows for efficient error propagation from the initial data to the generated elevation model. Results from Mars Express HRSC data

  20. Constraint on Absolute Accuracy of Metacomprehension Assessments: The Anchoring and Adjustment Model vs. the Standards Model

    ERIC Educational Resources Information Center

    Kwon, Heekyung

    2011-01-01

    The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…

  1. ASSESSING THE ACCURACY OF NATIONAL LAND COVER DATASET AREA ESTIMATES AT MULTIPLE SPATIAL EXTENTS

    EPA Science Inventory

    Site specific accuracy assessments provide fine-scale evaluation of the thematic accuracy of land use/land cover (LULC) datasets; however, they provide little insight into LULC accuracy across varying spatial extents. Additionally, LULC data are typically used to describe lands...

  2. Technical note: A physical phantom for assessment of accuracy of deformable alignment algorithms

    SciTech Connect

    Kashani, Rojano; Hub, Martina; Kessler, Marc L.; Balter, James M.

    2007-07-15

    The purpose of this study was to investigate the feasibility of a simple deformable phantom as a QA tool for testing and validation of deformable image registration algorithms. A diagnostic thoracic imaging phantom with a deformable foam insert was used in this study. Small plastic markers were distributed through the foam to create a lattice with a measurable deformation as the ground truth data for all comparisons. The foam was compressed in the superior-inferior direction using a one-dimensional drive stage pushing a flat 'diaphragm' to create deformations similar to those from inhale and exhale states. Images were acquired at different compressions of the foam and the location of every marker was manually identified on each image volume to establish a known deformation field with a known accuracy. The markers were removed digitally from corresponding images prior to registration. Different image registration algorithms were tested using this method. Repeat measurement of marker positions showed an accuracy of better than 1 mm in identification of the reference marks. Testing the method on several image registration algorithms showed that the system is capable of evaluating errors quantitatively. This phantom is able to quantitatively assess the accuracy of deformable image registration, using a measure of accuracy that is independent of the signals that drive the deformation parameters.
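
    Evaluating a registration algorithm against a known marker lattice, as in the phantom above, amounts to measuring residual distances at the markers after applying the algorithm's deformation. A hedged sketch (the coordinates and deformation function are illustrative, not the study's data):

```python
import numpy as np

def marker_errors(markers_fixed, markers_moving, deform):
    """Registration error at each marker.

    markers_fixed / markers_moving: (N, 3) marker coordinates (mm)
    identified on the two image volumes; deform maps a fixed-image
    point to its predicted location in the moving image.
    """
    predicted = np.array([deform(p) for p in markers_fixed])
    return np.linalg.norm(predicted - markers_moving, axis=1)

# Hypothetical case: a pure 3 mm superior-inferior compression
fixed  = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 40.0]])
moving = fixed + [0.0, 0.0, -3.0]
perfect = lambda p: p + np.array([0.0, 0.0, -3.0])
errs = marker_errors(fixed, moving, perfect)
print(f"mean error = {errs.mean():.2f} mm")  # 0.00 mm for a perfect match
```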

  3. An assessment of template-guided implant surgery in terms of accuracy and related factors

    PubMed Central

    Lee, Jee-Ho; Park, Ji-Man; Kim, Soung-Min; Kim, Myung-Joo; Lee, Jong-Ho

    2013-01-01

    PURPOSE Template-guided implant therapy has developed hand-in-hand with computed tomography (CT) to improve the accuracy of implant surgery and subsequent prosthodontic treatment. In the present study, the accuracy of computer-assisted implant surgery and its causative factors were assessed to further validate the stable clinical application of this technique. MATERIALS AND METHODS A total of 102 implants in 48 patients were included in this study. Implant surgery was performed with a stereolithographic template. Pre- and post-operative CTs were used to compare the planned and placed implants. Accuracy and related factors were statistically analyzed with the Spearman correlation method and a linear mixed model. Differences were considered statistically significant at P≤.05. RESULTS The mean errors of computer-assisted implant surgery were 1.09 mm at the coronal center and 1.56 mm at the apical center, and the axis deviation was 3.80°. The coronal and apical errors of the implants were strongly correlated: errors at the coronal center were magnified at the apical center by the fixture length. Anterior edentulous areas and longer fixtures reduced the accuracy of the implant template. CONCLUSION Control of errors at the coronal center and stabilization of the anterior part of the template are needed for safe implant surgery and successful prosthodontic treatment. PMID:24353883
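
    The coronal error, apical error, and axis deviation reported above can be computed from the planned and placed implant geometry. A sketch assuming each implant is described by the 3-D coordinates of its coronal and apical centers (the example numbers are hypothetical):

```python
import numpy as np

def implant_deviation(planned_coronal, planned_apical,
                      placed_coronal, placed_apical):
    """Coronal error (mm), apical error (mm), and axis deviation
    (degrees) between a planned and a placed implant."""
    pc = np.asarray(planned_coronal, float)
    pa = np.asarray(planned_apical, float)
    qc = np.asarray(placed_coronal, float)
    qa = np.asarray(placed_apical, float)
    coronal_err = np.linalg.norm(qc - pc)
    apical_err = np.linalg.norm(qa - pa)
    v1, v2 = pa - pc, qa - qc                 # implant axis vectors
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return coronal_err, apical_err, angle

c, a, ang = implant_deviation([0, 0, 0], [0, 0, 10],
                              [1, 0, 0], [1, 2, 10])
print(f"coronal {c:.2f} mm, apical {a:.2f} mm, axis {ang:.2f} deg")
```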

  4. On the accuracy of Whitham's method. [for steady ideal gas flow past cones

    NASA Technical Reports Server (NTRS)

    Zahalak, G. I.; Myers, M. K.

    1974-01-01

    The steady flow of an ideal gas past a conical body is studied by the method of matched asymptotic expansions and by Whitham's method in order to assess the accuracy of the latter. It is found that while Whitham's method does not yield a correct asymptotic representation of the perturbation field to second order in regions where the flow ahead of the Mach cone of the apex is disturbed, it does correctly predict the changes of the second-order perturbation quantities across a shock (the first-order shock strength). The results of the analysis are illustrated by a special case of a flat, rectangular plate at incidence.

  5. Procedural Documentation and Accuracy Assessment of Bathymetric Maps and Area/Capacity Tables for Small Reservoirs

    USGS Publications Warehouse

    Wilson, Gary L.; Richards, Joseph M.

    2006-01-01

    Because of the increasing use and importance of lakes for water supply to communities, a repeatable and reliable procedure to determine lake bathymetry and capacity is needed. A method to determine the accuracy of the procedure will help ensure proper collection and use of the data and resulting products. It is important to clearly define the intended products and desired accuracy before conducting the bathymetric survey to ensure proper data collection. A survey-grade echo sounder and differential global positioning system receivers were used to collect water-depth and position data in December 2003 at Sugar Creek Lake near Moberly, Missouri. Data were collected along planned transects, with an additional set of quality-assurance data collected for use in accuracy computations. All collected data were imported into a geographic information system database. A bathymetric surface model, contour map, and area/capacity tables were created from the geographic information system database. An accuracy assessment was completed on the collected data, bathymetric surface model, area/capacity table, and contour map products. Using established vertical accuracy standards, the accuracy of the collected data, bathymetric surface model, and contour map product was 0.67 foot, 0.91 foot, and 1.51 feet at the 95 percent confidence level. By comparing results from different transect intervals with the quality-assurance transect data, it was determined that a transect interval of 1 percent of the longitudinal length of Sugar Creek Lake produced results nearly as good as a 0.5 percent transect interval for the bathymetric surface model, area/capacity table, and contour map products.
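
    Vertical accuracy at the 95 percent confidence level is conventionally computed as 1.96 times the RMSE of the differences at independent quality-assurance points (the NSSDA convention; the report may use a variant). A sketch with made-up depth readings:

```python
import numpy as np

def vertical_accuracy_95(surveyed, qa):
    """NSSDA-style vertical accuracy at 95% confidence:
    1.96 x RMSE of differences between the tested surface and
    independent quality-assurance depths (both in feet)."""
    d = np.asarray(surveyed, float) - np.asarray(qa, float)
    rmse = np.sqrt(np.mean(d ** 2))
    return 1.96 * rmse

surface = [12.1, 8.4, 15.0, 9.8]   # depths read from the surface model
qa      = [12.4, 8.1, 15.3, 9.5]   # independent check measurements
print(f"accuracy = {vertical_accuracy_95(surface, qa):.2f} ft at 95% confidence")
```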

  6. Combining accuracy assessment of land-cover maps with environmental monitoring programs

    USGS Publications Warehouse

    Stehman, S.V.; Czaplewski, R.L.; Nusser, S.M.; Yang, L.; Zhu, Z.

    2000-01-01

    A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring protocols are described. These strategies range from a fully integrated accuracy assessment and environmental monitoring protocol, to one in which the protocols operate nearly independently. For all three strategies, features critical to using monitoring data for accuracy assessment include compatibility of the land-cover classification schemes, precisely co-registered sample data, and spatial and temporal compatibility of the map and reference data. Two monitoring programs, the National Resources Inventory (NRI) and the Forest Inventory and Monitoring (FIM), are used to illustrate important features for implementing a combined protocol.

  7. Assessing the quality of studies on the diagnostic accuracy of tumor markers

    PubMed Central

    Goebell, Peter J.; Kamat, Ashish M.; Sylvester, Richard J.; Black, Peter; Droller, Michael; Godoy, Guilherme; Hudson, M’Liss A.; Junker, Kerstin; Kassouf, Wassim; Knowles, Margaret A.; Schulz, Wolfgang A.; Seiler, Roland; Schmitz-Dräger, Bernd J.

    2015-01-01

    Objectives With rapidly increasing numbers of publications, assessments of study quality, reporting quality, and classification of studies according to their level of evidence or developmental stage have become key issues in weighing the relevance of new information reported. Diagnostic marker studies are often criticized for yielding highly discrepant and even controversial results. Much of this discrepancy has been attributed to differences in study quality. So far, numerous tools for measuring study quality have been developed, but few of them have been used for systematic reviews and meta-analysis. This is owing to the fact that most tools are complicated and time consuming, suffer from poor reproducibility, and do not permit quantitative scoring. Methods The International Bladder Cancer Network (IBCN) has adopted this problem and has systematically identified the more commonly used tools developed since 2000. Results In this review, those tools addressing study quality (Quality Assessment of Studies of Diagnostic Accuracy and Newcastle-Ottawa Scale), reporting quality (Standards for Reporting of Diagnostic Accuracy), and developmental stage (IBCN phases) of studies on diagnostic markers in bladder cancer are introduced and critically analyzed. Based upon this, the IBCN has launched an initiative to assess and validate existing tools with emphasis on diagnostic bladder cancer studies. Conclusions The development of simple and reproducible tools for quality assessment of diagnostic marker studies permitting quantitative scoring is suggested. PMID:25159014

  8. Vertical root fracture: Biological effects and accuracy of diagnostic imaging methods

    PubMed Central

    Baageel, Turki M.; Allah, Emad Habib; Bakalka, Ghaida T.; Jadu, Fatima; Yamany, Ibrahim; Jan, Ahmed M.; Bogari, Dania F.; Alhazzazi, Turki Y.

    2016-01-01

    This review assessed the most up-to-date literature on the accuracy of detecting vertical root fractures (VRFs) using the currently available diagnostic imaging methods. In addition, an overview of the biological and clinical aspects of VRFs is provided. A systematic review of the literature was initiated in December of 2015 and then updated in May of 2016. The electronic databases searched included PubMed, Embase, Ovid, and Google Scholar. An assessment of the methodological quality was performed using a modified version of the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool. Twenty-two studies were included in this systematic review after applying specific inclusion and exclusion criteria. Of those, 12 favored using cone beam computed tomography (CBCT) for detecting VRFs as compared to periapical radiographs, whereas 5 reported no differences between the two methods. The remaining 5 studies confirmed the advantages associated with using CBCT when diagnosing VRFs and described the parameters and limitations associated with this method, but they were not comparative studies. In conclusion, overwhelming evidence suggests that CBCT is the preferred method for detecting VRFs. Nevertheless, additional well-controlled, high-quality studies are needed to produce solid evidence and guidelines to support the routine use of CBCT in the diagnosis of VRFs as a standard of care. PMID:27652254

  11. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    PubMed Central

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    Introduction An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis and therapeutic action; therefore, the aim of this study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to central nasopharyngeal measurement. Methods In this prospective observational study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients’ body temperatures were measured by four peripheral methods (oral, axillary, tympanic, and forehead) along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, and receiver operating characteristic curve, using the Statistical Package for the Social Sciences (SPSS), version 19. Results There was a significant correlation between all the peripheral methods and the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of the right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-tests demonstrated acceptable precision for the forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membrane, oral (P=1.00), and axillary (P=1.00) methods. The sensitivity and specificity of both the left and right tympanic membranes were higher than those of the other methods. Conclusion The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. The tympanic method (right and left) is recommended for assessing body temperature in intensive care units because of its high accuracy and acceptable precision. PMID:27621673
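
    A common way to summarize agreement between a peripheral method and a reference is a Bland-Altman analysis (bias and 95% limits of agreement); note that the study itself used paired t-tests, kappa coefficients, and ROC curves. A sketch with illustrative readings:

```python
import numpy as np

def agreement(peripheral, core):
    """Bias and 95% limits of agreement (Bland-Altman style) between
    a peripheral method and the nasopharyngeal reference (deg C)."""
    d = np.asarray(peripheral, float) - np.asarray(core, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

tympanic = [37.1, 38.0, 36.8, 37.5, 39.0]   # made-up readings
core     = [37.0, 38.1, 36.9, 37.4, 39.1]
bias, (lo, hi) = agreement(tympanic, core)
print(f"bias {bias:+.2f} deg C, limits of agreement [{lo:+.2f}, {hi:+.2f}]")
```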

  13. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
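
    Design-order behavior of the kind analyzed above is commonly checked by computing the observed order of convergence from errors measured on two grids. A minimal Richardson-style sketch:

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Observed order of accuracy from errors on two grids whose
    spacings differ by the factor `refinement`."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# An error that drops by 16x per mesh doubling indicates fourth order
print(f"observed order = {observed_order(16.0, 1.0):.1f}")
```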

  14. The accuracy of current methods in determining the timing of epiphysiodesis.

    PubMed

    Lee, S C; Shim, J S; Seo, S W; Lim, K S; Ko, K R

    2013-07-01

    We compared the accuracy of the growth-remaining method of assessing leg-length discrepancy (LLD) with the straight-line graph method, the multiplier method, and their variants. We retrospectively reviewed the records of 44 patients treated by percutaneous epiphysiodesis for LLD, all followed up until maturity. We used the modified Green-Anderson growth-remaining method (Method 1) to plan the timing of epiphysiodesis, and then assumed that each of four other methods had been used pre-operatively to calculate the timing: Method 2, the original Green-Anderson growth-remaining method; Method 3, Paley's multiplier method using bone age; Method 4, Paley's multiplier method using chronological age; and Method 5, Moseley's straight-line graph method. We compared 'Expected LLD at maturity with surgery' with 'Final LLD at maturity with surgery' for each method. Statistical analysis revealed that 'Expected LLD at maturity with surgery' differed significantly from 'Final LLD at maturity with surgery'. Method 2 was the most accurate. There was a significant correlation between 'Expected LLD at maturity with surgery' and 'Final LLD at maturity with surgery', the greatest correlation being with Method 2. Generally, all the methods generated an overcorrected value. No method generates a precise 'Expected LLD at maturity with surgery'; it is essential that an analysis of the growth pattern, drawing on as much additional data as possible, is taken into account when predicting final LLD.
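
    Paley's multiplier method projects the discrepancy at maturity by scaling the current discrepancy with an age- and sex-specific multiplier M. A sketch of the core arithmetic (the multiplier value shown is made up; real values come from Paley's published tables):

```python
def projected_lld(current_lld_cm, multiplier):
    """Paley's multiplier method: the discrepancy is assumed to grow
    by the same multiplier M that relates current limb length to limb
    length at maturity, so LLD_maturity = LLD_now x M."""
    return current_lld_cm * multiplier

# Illustrative only: M = 1.2 is a placeholder, not a table value
print(f"projected LLD = {projected_lld(3.0, 1.2):.1f} cm")
```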

  15. Accuracy of audio computer-assisted self-interviewing (ACASI) and self-administered questionnaires for the assessment of sexual behavior.

    PubMed

    Morrison-Beedy, Dianne; Carey, Michael P; Tu, Xin

    2006-09-01

    This study examined the accuracy of two retrospective methods and assessment intervals for recall of sexual behavior and assessed predictors of recall accuracy. Using a 2 [mode: audio-computer assisted self-interview (ACASI) vs. self-administered questionnaire (SAQ)] by 2 (frequency: monthly vs. quarterly) design, young women (N =102) were randomly assigned to one of four conditions. Participants completed baseline measures, monitored their behavior with a daily diary, and returned monthly (or quarterly) for assessments. A mixed pattern of accuracy between the four assessment methods was identified. Monthly assessments yielded more accurate recall for protected and unprotected vaginal sex but quarterly assessments yielded more accurate recall for unprotected oral sex. Mode differences were not strong, and hypothesized predictors of accuracy tended not to be associated with recall accuracy. Choice of assessment mode and frequency should be based upon the research question(s), population, resources, and context in which data collection will occur. PMID:16721506

  16. Accuracy testing of steel and electric groundwater-level measuring tapes: Test method and in-service tape accuracy

    USGS Publications Warehouse

    Fulford, Janice M.; Clayton, Christopher S.

    2015-10-09

    The calibration device and proposed method were used to calibrate a sample of in-service USGS steel and electric groundwater tapes. The sample of in-service groundwater steel tapes were in relatively good condition. All steel tapes, except one, were accurate to ±0.01 ft per 100 ft over their entire length. One steel tape, which had obvious damage in the first hundred feet, was marginally outside the accuracy of ±0.01 ft per 100 ft by 0.001 ft. The sample of in-service groundwater-level electric tapes were in a range of conditions—from like new, with cosmetic damage, to nonfunctional. The in-service electric tapes did not meet the USGS accuracy recommendation of ±0.01 ft. In-service electric tapes, except for the nonfunctional tape, were accurate to about ±0.03 ft per 100 ft. A comparison of new with in-service electric tapes found that steel-core electric tapes maintained their length and accuracy better than electric tapes without a steel core. The in-service steel tapes could be used as is and achieve USGS accuracy recommendations for groundwater-level measurements. The in-service electric tapes require tape corrections to achieve USGS accuracy recommendations for groundwater-level measurement.
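The report's recommendation that in-service electric tapes "require tape corrections" can be illustrated with a minimal sketch. The linear correction below, and the error rate of 0.03 ft per 100 ft, are an assumed calibration result used only for illustration, not values tied to any particular tape in the study.

```python
# Minimal sketch, assuming a tape that reads long by a constant rate
# (here 0.03 ft per 100 ft, a hypothetical calibration result).

def correct_reading(raw_ft, error_per_100ft):
    """Return a corrected water-level reading.

    error_per_100ft: calibration error in ft per 100 ft of tape;
    positive means the tape reads long, so the accumulated error
    over the measured length is subtracted.
    """
    return raw_ft - error_per_100ft * (raw_ft / 100.0)

print(correct_reading(250.0, 0.03))  # -> 249.925
```

A correction of this form brings a tape with a uniform scale error back within the ±0.01 ft recommendation; damage localized to one section of tape would need a piecewise correction instead.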

  18. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, Timothy K.; Chrostowski, Jon D.

    1991-01-01

    Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and for both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, in both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure fell, for the most part, within the ±one-sigma intervals of predicted accuracy, demonstrating the validity of the methodology and computer code.

  19. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variations was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 μL L⁻¹ SO₂ or 0.3 μL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.
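The per-reader bias adjustment described above (regressing visual scores against unbiased grid scores) can be sketched in a few lines. The score values below are invented for illustration; the regression direction (predicting the grid "truth" from a reader's visual score) is one reasonable reading of the abstract, not the paper's exact procedure.

```python
# Minimal sketch of a reader-bias adjustment via ordinary least squares,
# using invented assessment data (percent leaf area injured).

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (pure Python)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

grid   = [5, 10, 20, 40, 60, 80]   # unbiased transparent-grid assessments (%)
visual = [8, 14, 26, 48, 66, 90]   # one reader's visual assessments (%)

a, b = fit_line(visual, grid)          # predict grid value from visual score
adjusted = [a + b * v for v in visual]  # bias-corrected assessments
```

With a fitted line per reader, each reader's scores can be mapped onto the grid scale before pooling, removing the systematic over- or under-reading that the study detected.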

  20. Vestibular and Oculomotor Assessments May Increase Accuracy of Subacute Concussion Assessment.

    PubMed

    McDevitt, J; Appiah-Kubi, K O; Tierney, R; Wright, W G

    2016-08-01

    In this study, we collected and analyzed preliminary data on the internal consistency of a new condensed model for assessing vestibular and oculomotor impairments following a concussion. We also examined this model's ability to discriminate concussed athletes from healthy controls. Each participant was tested in a concussion assessment protocol that consisted of Neurocom's Sensory Organization Test (SOT), the Balance Error Scoring System exam, and a series of 8 vestibular and oculomotor assessments. Of these 10 assessments, only the SOT, near point convergence, and the signs and symptoms (S/S) scores collected following optokinetic stimulation, the horizontal eye saccades test, and the gaze stabilization test were significantly correlated with health status, and these were used in further analyses. Multivariate logistic regression for binary outcomes was employed, and the beta weights were used to calculate the area under the receiver operating characteristic curve (AUC). The best model supported by our findings suggests that an exam consisting of the 4 SOT sensory ratios, near point convergence, and the optokinetic stimulation S/S score is sensitive in discriminating concussed athletes from healthy controls (accuracy = 98.6%, AUC = 0.983). However, an even more parsimonious model consisting of only the optokinetic stimulation and gaze stabilization test S/S scores and near point convergence was also found to discriminate concussed athletes from healthy controls (accuracy = 94.4%, AUC = 0.951) without the need for expensive equipment. Although more investigation is needed, these findings may provide health professionals with a sensitive and specific battery of simple vestibular and oculomotor assessments for concussion management. PMID:27176886
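The AUC statistic reported in this abstract has a simple rank-based interpretation: the probability that a randomly chosen concussed athlete scores higher than a randomly chosen control. A minimal sketch, with invented labels and impairment scores (not the study's data):

```python
# AUC via the rank-sum (Mann-Whitney) formulation, pure Python.
# Labels and scores below are invented for illustration.

def auc(labels, scores):
    """AUC: fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = concussed, 0 = healthy control
scores = [0.9, 0.8, 0.75, 0.4, 0.5, 0.3, 0.2, 0.1]
print(auc(labels, scores))  # -> 0.9375
```

An AUC of 0.5 means chance-level discrimination and 1.0 perfect separation, which is why the reported values near 0.95-0.98 indicate a strongly discriminating battery.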

  1. Accuracy of Four Dental Age Estimation Methods in Southern Indian Children

    PubMed Central

    Sanghvi, Praveen; Perumalla, Kiran Kumar; Srinivasaraju, D.; Srinivas, Jami; Kalyan, U. Siva; Rasool, SK. Md. Iftekhar

    2015-01-01

    Introduction: For various forensic investigations of both living and dead individuals, knowledge of the actual age or date of birth of the subject is of utmost importance. In recent years, age estimation has gained importance for a variety of reasons, including identifying criminal and legal responsibility, and for many other social events such as obtaining a birth certificate, marriage, beginning a job, joining the army and retirement. Developing teeth are used to assess maturity and estimate age in a number of disciplines; however, the accuracy of the different methods has not been assessed systematically. The aim of this study was to determine the accuracy of four dental age estimation methods. Materials and Methods: Digital orthopantomographs (OPGs) of South Indian children between the ages of 6 and 16 years, of similar ethnic origin, who visited the Department of Oral Medicine and Radiology of GITAM Dental College, Visakhapatnam, Andhra Pradesh, India, were assessed. Dental age was calculated using the Demirjian, Willems, Nolla, and adapted Haavikko methods, and the differences between estimated dental age and chronological age were compared with the paired t-test and Wilcoxon signed rank test. Results: An overestimation of dental age was observed with the Demirjian and Nolla methods (0.1±1.63 and 0.47±0.83 years in the total sample, respectively) and an underestimation with the Willems and Haavikko methods (-0.4±1.53 and -2.9±1.41 years in the total sample, respectively). Conclusion: Nolla's method was the most accurate in estimating dental age. Moreover, all four methods were found to be reliable in estimating the age of individuals of unknown chronological age in South Indian children. PMID:25738008
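The core comparison in this abstract (estimated dental age minus chronological age, tested with a paired statistic) can be sketched with the standard library alone. The ages below are invented; a real analysis would use scipy.stats for the t-test p-value and the Wilcoxon signed rank test.

```python
# Minimal sketch of the paired comparison: mean signed bias and the
# paired t statistic, computed on invented age data (years).

import math

def paired_t(est, chrono):
    """Return (mean difference, t statistic) for paired samples."""
    d = [e - c for e, c in zip(est, chrono)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean, mean / math.sqrt(var / n)

estimated     = [7.2, 9.1, 10.5, 12.0, 13.8, 15.1]  # dental age (invented)
chronological = [7.0, 8.8, 10.0, 11.6, 13.5, 14.5]
bias, t_stat = paired_t(estimated, chronological)
```

A positive mean difference indicates overestimation (as reported for Demirjian and Nolla) and a negative one underestimation (Willems and Haavikko); the t statistic tests whether that bias differs significantly from zero.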

  2. Assessing Data Accuracy When Involving Students in Authentic Paleontological Research.

    ERIC Educational Resources Information Center

    Harnik, Paul G.; Ross, Robert M.

    2003-01-01

    Regards Student-Scientist Partnerships (SSPs) as beneficial collaborations for both students and researchers. Introduces the Paleontological Research Institution (PRI), which developed and pilot tested an SSP that involved grade 4-9 students in paleontological research on Devonian marine fossil assemblages. Reports formative data assessment and…

  3. An assessment of the accuracy of orthotropic photoelasticity

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Liu, D. H.

    1984-01-01

    The accuracy of orthotropic photoelasticity was studied. The study consisted of both theoretical and experimental phases. In the theoretical phase a stress-optic law was developed. The stress-optic law included the effects of residual birefringence in the relation between applied stress and the material's optical response. The experimental phase had several portions. First, it was shown that four-point bending tests and the concept of an optical neutral axis could be conveniently used to calibrate the stress-optic behavior of the material. Second, the actual optical response of an orthotropic disk in diametral compression was compared with theoretical predictions. Third, the stresses in the disk were determined from the observed optical response, the stress-optic law, and a finite-difference form of the plane stress equilibrium equations. It was concluded that orthotropic photoelasticity is not as accurate as isotropic photoelasticity. This is believed to be due to the lack of good fringe resolution and the low sensitivity of most orthotropic photoelastic materials.

  4. Evaluation of TDRSS-user orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Hodjatzadeh, M.; Samii, M. V.; Doll, C. E.; Hart, R. C.; Mistretta, G. D.

    1991-01-01

    The development of the Real-Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination on a Disk Operating System (DOS) based Personal Computer (PC) is addressed. The results of a study to compare the orbit determination accuracy of a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E with the accuracy of an established batch least squares system, the Goddard Trajectory Determination System (GTDS), is addressed. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for the Earth Radiation Budget Satellite (ERBS); the maximum solution differences were less than 25 m after the filter had reached steady state.

  5. Rectal cancer staging: Multidetector-row computed tomography diagnostic accuracy in assessment of mesorectal fascia invasion

    PubMed Central

    Ippolito, Davide; Drago, Silvia Girolama; Franzesi, Cammillo Talei; Fior, Davide; Sironi, Sandro

    2016-01-01

    AIM: To assess the diagnostic accuracy of multidetector-row computed tomography (MDCT), as compared with conventional magnetic resonance imaging (MRI), in identifying mesorectal fascia (MRF) invasion in rectal cancer patients. METHODS: Ninety-one patients with biopsy-proven rectal adenocarcinoma referred for thoracic and abdominal CT staging were enrolled in this study. The contrast-enhanced MDCT scans were performed on a 256-row scanner (iCT, Philips) with the following acquisition parameters: tube voltage 120 kV, tube current 150-300 mAs. Imaging data were reviewed as axial and as multiplanar reconstruction (MPR) images along the rectal tumor axis. The MRI study, performed at 1.5 T with a dedicated phased-array multicoil, included multiplanar T2 and axial T1 sequences and diffusion-weighted images (DWI). Axial and MPR CT images were independently compared to MRI and MRF involvement was determined. The diagnostic accuracy of both modalities was compared and statistically analyzed. RESULTS: According to MRI, the MRF was involved in 51 patients and not involved in 40 patients. DWI allowed the tumor to be recognized as a focal mass with high signal intensity on high b-value images, compared with the signal of the normal adjacent rectal wall or the lower-intensity tissue background. The number of patients correctly staged by the native axial CT images was 71 out of 91 (41 with involved MRF; 30 with uninvolved MRF), while using the MPR images 80 patients were correctly staged (45 with involved MRF; 35 with uninvolved MRF). Local tumor staging by MDCT agreed with that of MRI: axial CT images obtained sensitivity and specificity of 80.4% and 75%, positive predictive value (PPV) 80.4%, negative predictive value (NPV) 75% and accuracy 78%, while with MPR the sensitivity and specificity increased to 88% and 87.5%, PPV was 90%, NPV 85.36% and accuracy 88%. MPR images showed higher diagnostic accuracy, in terms of MRF involvement, than native axial images.
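The MPR figures quoted above follow directly from the counts the abstract reports (45 of 51 involved and 35 of 40 uninvolved patients correctly staged). A minimal sketch recomputing them from the 2×2 table:

```python
# Recomputing the abstract's MPR diagnostic metrics from its counts:
# TP = 45, FN = 6 (of 51 MRF-involved); TN = 35, FP = 5 (of 40 uninvolved).

def diagnostics(tp, fn, tn, fp):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }

mpr = diagnostics(tp=45, fn=6, tn=35, fp=5)
print(round(100 * mpr["sensitivity"], 1))  # -> 88.2 (reported as 88%)
print(round(100 * mpr["npv"], 2))          # -> 85.37 (reported as 85.36%)
```

The same function applied to the axial counts (tp=41, fn=10, tn=30, fp=10) reproduces the 80.4%/75% sensitivity/specificity pair.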

  6. A retrospective study to validate an intraoperative robotic classification system for assessing the accuracy of kirschner wire (K-wire) placements with postoperative computed tomography classification system for assessing the accuracy of pedicle screw placements

    PubMed Central

    Tsai, Tai-Hsin; Wu, Dong-Syuan; Su, Yu-Feng; Wu, Chieh-Hsin; Lin, Chih-Lung

    2016-01-01

    Abstract The purpose of this retrospective study was to validate an intraoperative robotic grading classification system for assessing the accuracy of Kirschner-wire (K-wire) placements against the postoperative computed tomography (CT)-based classification system for assessing the accuracy of pedicle screw placements. We conducted a retrospective review of prospectively collected data from 35 consecutive patients who underwent 176 robot-assisted pedicle screw instrumentations at Kaohsiung Medical University Hospital from September 2014 to November 2015. During the operation, we used a robotic grading classification system to verify the intraoperative accuracy of K-wire placements. Three months after surgery, we used the common CT-based classification system to assess the postoperative accuracy of pedicle screw placements. The distributions of accuracy between the intraoperative robot-assisted and various postoperative CT-based classification systems were compared using kappa statistics of agreement. The intraoperative accuracies of K-wire placements before and after repositioning were classified as excellent (131/176, 74.4% and 133/176, 75.6%, respectively), satisfactory (36/176, 20.5% and 41/176, 23.3%, respectively), and malpositioned (9/176, 5.1% and 2/176, 1.1%, respectively). Postoperative accuracy was then evaluated with the CT-based classification systems; no screw placements were evaluated as unacceptable under any of these systems. Kappa statistics revealed no significant differences between the proposed system and the aforementioned classification systems (P < 0.001). Our results revealed no significant differences between the intraoperative robotic grading system and various postoperative CT-based grading systems. The robotic grading classification system is a feasible method for evaluating the accuracy of K-wire placements. Using the intraoperative robot grading system to classify the accuracy of K-wire placements enables predicting the postoperative accuracy of pedicle screw placements.
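The agreement analysis in this study rests on Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. A minimal sketch; the ten paired grades below are invented, not the study's 176 placements.

```python
# Cohen's kappa between two graded classifications, pure Python.
# The paired grades are invented example data.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (po - pe) / (1 - pe)

robot = ["excellent"] * 7 + ["satisfactory"] * 2 + ["malpositioned"]
ct    = ["excellent"] * 6 + ["satisfactory"] * 3 + ["malpositioned"]
print(cohens_kappa(robot, ct))
```

Kappa near 1 indicates that the intraoperative and postoperative gradings classify placements almost identically, which is the property the study needed to show before the robotic grades could stand in for the CT-based ones.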

  7. A retrospective study to validate an intraoperative robotic classification system for assessing the accuracy of kirschner wire (K-wire) placements with postoperative computed tomography classification system for assessing the accuracy of pedicle screw placements.

    PubMed

    Tsai, Tai-Hsin; Wu, Dong-Syuan; Su, Yu-Feng; Wu, Chieh-Hsin; Lin, Chih-Lung

    2016-09-01

    The purpose of this retrospective study was to validate an intraoperative robotic grading classification system for assessing the accuracy of Kirschner-wire (K-wire) placements against the postoperative computed tomography (CT)-based classification system for assessing the accuracy of pedicle screw placements. We conducted a retrospective review of prospectively collected data from 35 consecutive patients who underwent 176 robot-assisted pedicle screw instrumentations at Kaohsiung Medical University Hospital from September 2014 to November 2015. During the operation, we used a robotic grading classification system to verify the intraoperative accuracy of K-wire placements. Three months after surgery, we used the common CT-based classification system to assess the postoperative accuracy of pedicle screw placements. The distributions of accuracy between the intraoperative robot-assisted and various postoperative CT-based classification systems were compared using kappa statistics of agreement. The intraoperative accuracies of K-wire placements before and after repositioning were classified as excellent (131/176, 74.4% and 133/176, 75.6%, respectively), satisfactory (36/176, 20.5% and 41/176, 23.3%, respectively), and malpositioned (9/176, 5.1% and 2/176, 1.1%, respectively). Postoperative accuracy was then evaluated with the CT-based classification systems; no screw placements were evaluated as unacceptable under any of these systems. Kappa statistics revealed no significant differences between the proposed system and the aforementioned classification systems (P < 0.001). Our results revealed no significant differences between the intraoperative robotic grading system and various postoperative CT-based grading systems. The robotic grading classification system is a feasible method for evaluating the accuracy of K-wire placements. Using the intraoperative robot grading system to classify the accuracy of K-wire placements enables predicting the postoperative accuracy of pedicle screw placements.

  8. Mapping with Small UAS: A Point Cloud Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Toth, Charles; Jozkow, Grzegorz; Grejner-Brzezinska, Dorota

    2015-12-01

    Interest in using inexpensive Unmanned Aerial System (UAS) technology for topographic mapping has recently increased significantly. Small UAS platforms equipped with consumer-grade cameras can easily acquire high-resolution aerial imagery allowing for dense point cloud generation, followed by surface model creation and orthophoto production. In contrast to conventional airborne mapping systems, UAS has limited ground coverage due to low flying height and limited flying time, yet it offers an attractive alternative to high-performance airborne systems, as the cost of the sensors and platform, and the flight logistics, is relatively low. In addition, UAS is better suited for small-area data acquisitions and for acquiring data in difficult-to-access areas, such as urban canyons or densely built-up environments. The main question with respect to the use of UAS is whether the inexpensive consumer sensors installed in UAS platforms can provide geospatial data quality comparable to that provided by conventional systems. This study aims at the performance evaluation of the current practice of UAS-based topographic mapping by reviewing the practical aspects of sensor configuration, georeferencing and point cloud generation, including comparisons between sensor types and processing tools. The main objective is to provide accuracy characterization and practical information for selecting and using UAS solutions in general mapping applications. The analysis is based on statistical evaluation as well as visual examination of experimental data acquired by a Bergen octocopter with three different image sensor configurations, including a GoPro HERO3+ Black Edition, a Nikon D800 DSLR and a Velodyne HDL-32. In addition, georeferencing data of varying quality were acquired and evaluated. The optical imagery was processed by using three commercial point cloud generation tools, and point clouds created by active and passive sensors of differing quality were compared.

  9. The influence of sampling interval on the accuracy of trail impact assessment

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
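The census-resampling idea above (resample a complete census at increasing systematic intervals and compare the estimates against the census values) can be sketched directly. The simulated trail below is invented; it is not the Great Smoky Mountains data.

```python
# Minimal sketch of resampling a trail-impact census at increasing
# systematic point-sampling intervals. Trail data are simulated.

import random

random.seed(1)
TRAIL_M = 5000                                    # trail length in metres
census = sorted(random.sample(range(TRAIL_M), 60))  # metres with impact present
impacted = set(census)

def frequency_estimate(impacted, interval, trail_m=TRAIL_M):
    """Proportion of systematic sample points falling on an impacted metre."""
    points = list(range(0, trail_m, interval))
    hits = sum(1 for p in points if p in impacted)
    return hits / len(points)

for interval in (10, 50, 100, 500):
    print(interval, frequency_estimate(impacted, interval))
```

At interval 1 the estimate equals the census proportion exactly; as the interval grows, the estimate becomes noisier, which mirrors the accuracy loss the study measured against its complete census.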

  10. Attribute-Level and Pattern-Level Classification Consistency and Accuracy Indices for Cognitive Diagnostic Assessment

    ERIC Educational Resources Information Center

    Wang, Wenyi; Song, Lihong; Chen, Ping; Meng, Yaru; Ding, Shuliang

    2015-01-01

    Classification consistency and accuracy are viewed as important indicators for evaluating the reliability and validity of classification results in cognitive diagnostic assessment (CDA). Pattern-level classification consistency and accuracy indices were introduced by Cui, Gierl, and Chang. However, the indices at the attribute level have not yet…

  11. Assessing the accuracy of quantitative molecular microbial profiling.

    PubMed

    O'Sullivan, Denise M; Laver, Thomas; Temisak, Sasithon; Redshaw, Nicholas; Harris, Kathryn A; Foy, Carole A; Studholme, David J; Huggett, Jim F

    2014-01-01

    The application of high-throughput sequencing in profiling microbial communities is providing an unprecedented ability to investigate microbiomes. Such studies typically apply one of two methods: amplicon sequencing using PCR to target a conserved orthologous sequence (typically the 16S ribosomal RNA gene) or whole (meta)genome sequencing (WGS). Both methods have been used to catalog the microbial taxa present in a sample and quantify their respective abundances. However, a comparison of the inherent precision or bias of the different sequencing approaches has not been performed. We previously developed a metagenomic control material (MCM) to investigate error when performing different sequencing strategies. Amplicon sequencing using four different primer strategies and two 16S rRNA regions was examined (Roche 454 Junior) and compared to WGS (Illumina HiSeq). All sequencing methods generally performed comparably and in good agreement with organism specific digital PCR (dPCR); WGS notably demonstrated very high precision. Where discrepancies between relative abundances occurred they tended to differ by less than twofold. Our findings suggest that when alternative sequencing approaches are used for microbial molecular profiling they can perform with good reproducibility, but care should be taken when comparing small differences between distinct methods. This work provides a foundation for future work comparing relative differences between samples and the impact of extraction methods. We also highlight the value of control materials when conducting microbial profiling studies to benchmark methods and set appropriate thresholds.

  12. Assessing the Accuracy of Quantitative Molecular Microbial Profiling

    PubMed Central

    O’Sullivan, Denise M.; Laver, Thomas; Temisak, Sasithon; Redshaw, Nicholas; Harris, Kathryn A.; Foy, Carole A.; Studholme, David J.; Huggett, Jim F.

    2014-01-01

    The application of high-throughput sequencing in profiling microbial communities is providing an unprecedented ability to investigate microbiomes. Such studies typically apply one of two methods: amplicon sequencing using PCR to target a conserved orthologous sequence (typically the 16S ribosomal RNA gene) or whole (meta)genome sequencing (WGS). Both methods have been used to catalog the microbial taxa present in a sample and quantify their respective abundances. However, a comparison of the inherent precision or bias of the different sequencing approaches has not been performed. We previously developed a metagenomic control material (MCM) to investigate error when performing different sequencing strategies. Amplicon sequencing using four different primer strategies and two 16S rRNA regions was examined (Roche 454 Junior) and compared to WGS (Illumina HiSeq). All sequencing methods generally performed comparably and in good agreement with organism specific digital PCR (dPCR); WGS notably demonstrated very high precision. Where discrepancies between relative abundances occurred they tended to differ by less than twofold. Our findings suggest that when alternative sequencing approaches are used for microbial molecular profiling they can perform with good reproducibility, but care should be taken when comparing small differences between distinct methods. This work provides a foundation for future work comparing relative differences between samples and the impact of extraction methods. We also highlight the value of control materials when conducting microbial profiling studies to benchmark methods and set appropriate thresholds. PMID:25421243

  13. Method of questioning and the accuracy of eyewitness testimony.

    PubMed

    Venter, A; Louw, D A

    2005-03-01

    System variables are among the factors that can be controlled by the legal system to enhance the accuracy of eyewitness testimony. Apart from examining the relationship between questioning, as a system variable, and the accuracy of testimony, the present study also explores the relationship between type of questioning and certain biographical variables (occupation, age, gender and race). To achieve the aim of the study, 412 respondents consisting of 11- to 14-year-olds, university students, members of the public and Police College students participated and were exposed to open-ended or closed-ended questions. It was found that the participants who responded to the closed-ended questions were significantly more accurate than those who answered the open-ended questions. All the biographical groups, except the public, were more accurate in responding to the closed-ended questions. The scholars obtained the lowest scores (although not always significantly so) for both the open-ended and closed-ended questions. With respect to age, the 18- to 25-year-olds obtained significantly higher scores than the other groups for the closed-ended questions. Whites performed significantly better than blacks in response to the open-ended and closed-ended questions. PMID:15887614

  14. Does DFT-SAPT method provide spectroscopic accuracy?

    SciTech Connect

    Shirkov, Leonid; Makarewicz, Jan

    2015-02-14

    Ground state potential energy curves for homonuclear and heteronuclear dimers consisting of noble gas atoms from He to Kr were calculated within symmetry-adapted perturbation theory based on density functional theory (DFT-SAPT). These potentials, together with spectroscopic data derived from them, were compared to previous high-precision coupled-cluster calculations with single and double excitations and connected triples (or better, where available), as well as to experimental data used as the benchmark. The impact of midbond functions on DFT-SAPT results was tested to study the convergence of the interaction energies. It was shown that, for most of the complexes, the DFT-SAPT potential calculated at the complete basis set (CBS) limit is lower than the corresponding benchmark potential in the region near its minimum, and hence spectroscopic accuracy cannot be achieved. The influence of the residual term δ(HF) on the interaction energy was also studied. We found that this term improves the agreement with the benchmark in the repulsive region for the dimers considered, but leads to an even larger overestimation of the potential depth De. Although the standard hybrid exchange-correlation (xc) functionals with asymptotic correction within second-order DFT-SAPT do not provide spectroscopic accuracy at the CBS limit, it is possible to adjust basis sets empirically to yield highly accurate results.

  15. Accuracy assessment, using stratified plurality sampling, of portions of a LANDSAT classification of the Arctic National Wildlife Refuge Coastal Plain

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1989-01-01

    An application of a classification accuracy assessment procedure is described for a vegetation and land cover map prepared by digital image processing of LANDSAT multispectral scanner data. A statistical sampling procedure called Stratified Plurality Sampling was used to assess the accuracy of portions of a map of the Arctic National Wildlife Refuge coastal plain. Results are tabulated as percent correct classification overall as well as per category with associated confidence intervals. Although values of percent correct were disappointingly low for most categories, the study was useful in highlighting sources of classification error and demonstrating shortcomings of the plurality sampling method.
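The "percent correct with associated confidence intervals" reported above is commonly computed with a normal approximation to the binomial proportion. A minimal sketch; the counts below are invented, not values from the ANWR assessment, and a real study with plurality sampling would also weight by stratum.

```python
# Percent correct for one map category with an approximate 95% CI,
# using invented counts (42 of 60 reference sites correctly classified).

import math

def percent_correct_ci(correct, n, z=1.96):
    """Return (proportion, lower, upper) via the normal approximation."""
    p = correct / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

p, lo, hi = percent_correct_ci(42, 60)
print(f"{p:.1%} correct, 95% CI [{lo:.1%}, {hi:.1%}]")
```

Wide intervals for rare categories, where n is small, are one reason such studies report per-category confidence bounds rather than overall percent correct alone.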

  16. Assessment of RFID Read Accuracy for ISS Water Kit

    NASA Technical Reports Server (NTRS)

    Chu, Andrew

    2011-01-01

    The Space Life Sciences Directorate/Medical Informatics and Health Care Systems Branch (SD4) is assessing the benefits of Radio Frequency Identification (RFID) technology for tracking items flown onboard the International Space Station (ISS). As an initial study, the Avionic Systems Division Electromagnetic Systems Branch (EV4) is collaborating with SD4 to affix RFID tags to a water kit supplied by SD4 and to study the read success rate of the tagged items. The tagged water kit inside a Cargo Transfer Bag (CTB) was inventoried using three different RFID technologies: the Johnson Space Center Building 14 Wireless Habitat Test Bed RFID portal, an RFID hand-held reader being targeted for use on board the ISS, and an RFID enclosure designed and prototyped by EV4.

  17. Evaluating the Effect of Learning Style and Student Background on Self-Assessment Accuracy

    ERIC Educational Resources Information Center

    Alaoutinen, Satu

    2012-01-01

    This study evaluates a new taxonomy-based self-assessment scale and examines factors that affect assessment accuracy and course performance. The scale is based on Bloom's Revised Taxonomy and is evaluated by comparing students' self-assessment results with course performance in a programming course. Correlation has been used to reveal possible…

  18. 12 CFR 630.5 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CREDIT SYSTEM General § 630.5 Accuracy of reports and assessment of internal control over financial... assessment of internal control over financial reporting. (1) Annual reports must include a report by the Funding Corporation's management assessing the effectiveness of the internal control over...

  19. Update and review of accuracy assessment techniques for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Heinen, J. T.; Oderwald, R. G.

    1983-01-01

    Research performed in the accuracy assessment of remotely sensed data is updated and reviewed. The use of discrete multivariate analysis techniques for the assessment of error matrices, the use of computer simulation for assessing various sampling strategies, and an investigation of spatial autocorrelation techniques are examined.

  20. Geometric calibration and accuracy assessment of a multispectral imager on UAVs

    NASA Astrophysics Data System (ADS)

    Zheng, Fengjie; Yu, Tao; Chen, Xingfeng; Chen, Jiping; Yuan, Guoti

    2012-11-01

    The increasing development of Unmanned Aerial Vehicle (UAV) platforms and associated sensing technologies has widely promoted UAV remote sensing applications. UAVs, especially low-cost UAVs, limit the sensor payload in weight and dimension. Cameras on UAVs are mostly panoramic, fisheye-lens, or small-format CCD planar array cameras; unknown intrinsic parameters and lens optical distortion cause serious image aberrations, leading to errors of a few meters or even tens of meters per ground pixel. The high spatial resolution, however, makes accurate geolocation all the more critical for UAV quantitative remote sensing research. A geometric calibration method for the MCC4-12F Multispectral Imager, designed to be carried on UAVs, has been developed and implemented. A multi-image space resection algorithm was used to estimate geometric calibration parameters at random positions and different photogrammetric altitudes in a 3D test field, an approach suitable for multispectral cameras. Both theoretical and practical accuracy assessments were carried out. The theoretical strategy, which resolves object-space and image-point coordinate differences by space intersection, showed object-space RMSEs of 0.2 and 0.14 pixels in the X and Y directions, and image-space RMSEs better than 0.5 pixels. To verify the accuracy and reliability of the calibration parameters, a practical study was carried out in UAV flight experiments in Tianjin; the corrected accuracy, validated by ground checkpoints, was better than 0.3 m. Typical surface reflectance retrieved from the geo-rectified data was compared with ground ASD measurements, resulting in a 4% discrepancy. Hence, the approach presented here is suitable for UAV multispectral imagers.

  1. Online Medical Device Use Prediction: Assessment of Accuracy.

    PubMed

    Maktabi, Marianne; Neumuth, Thomas

    2016-01-01

    Cost-intensive units in the hospital, such as the operating room, require effective resource management to improve surgical workflow and patient care. To maximize efficiency, online management systems should accurately forecast the use of technical resources (medical instruments and devices). We model several surgical activities, such as use of the coagulator, with spectral analysis and a linear time-variant system to predict future technical resource usage. In our study we examine the influence of the duration of usage and the total usage rate of the technical equipment on the prediction performance over several time intervals. A cross-validation was conducted with sixty-two neck dissections to evaluate the prediction performance. The performance of a use-state forecast does not change whether or not duration is considered, but decreases with lower total usage rates of the observed instruments. A minimum number of surgical workflow recordings (here: 62) and >5-minute time intervals for the use-state forecast are required for applying the described method in surgical practice. The work presented here might support the reduction of resource conflicts when resources are shared among different operating rooms. PMID:27577445

  2. Assessing the accuracy of Landsat Thematic Mapper classification using double sampling

    USGS Publications Warehouse

    Kalkhan, M.A.; Reich, R.M.; Stohlgren, T.J.

    1998-01-01

    Double sampling was used to provide a cost-efficient estimate of the accuracy of a Landsat Thematic Mapper (TM) classification map of a scene located in Rocky Mountain National Park, Colorado. In the first phase, 200 sample points were randomly selected to assess the accuracy of the Landsat TM classification against aerial photography. The overall accuracy and Kappa statistic were 49.5% and 32.5%, respectively. In the second phase, 25 sample points identified in the first phase were selected using stratified random sampling and located in the field. This information was used to correct for misclassification errors associated with the first-phase samples. The overall accuracy and Kappa statistic increased to 59.6% and 45.6%, respectively.
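    The overall accuracy and Kappa statistic reported in this and several other studies listed here come straight from the error (confusion) matrix. A small illustration with a hypothetical 3-class matrix (the counts are invented, not from the study):

    ```python
    def overall_accuracy_and_kappa(matrix):
        """Overall accuracy and Cohen's kappa from a square error
        (confusion) matrix: rows = map classes, cols = reference classes."""
        k = len(matrix)
        n = sum(sum(row) for row in matrix)
        diag = sum(matrix[i][i] for i in range(k))
        po = diag / n  # observed agreement = overall accuracy
        # expected chance agreement from the row and column marginals
        pe = sum(sum(matrix[i]) * sum(r[i] for r in matrix) for i in range(k)) / n ** 2
        kappa = (po - pe) / (1 - pe)
        return po, kappa

    # Hypothetical 3-class error matrix (illustrative counts only).
    m = [[30, 5, 5],
         [4, 25, 6],
         [6, 4, 15]]
    acc, kappa = overall_accuracy_and_kappa(m)
    ```

    Kappa discounts the agreement expected by chance given the marginals, which is why it is always lower than overall accuracy for an imperfect map.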

  3. Study of accuracy of precipitation measurements using simulation method

    NASA Astrophysics Data System (ADS)

    Nagy, Zoltán; Lajos, Tamás; Morvai, Krisztián

    2013-04-01

    … Does the use of a wind shield improve the accuracy of precipitation measurements? · What is the source of the error observed with heated tipping-bucket raingauges in winter? On our poster we present the answers to the questions listed above.

  4. Multipolar Ewald Methods, 1: Theory, Accuracy, and Performance

    PubMed Central

    2015-01-01

    The Ewald, Particle Mesh Ewald (PME), and Fast Fourier–Poisson (FFP) methods are developed for systems composed of spherical multipole moment expansions. A unified set of equations is derived that takes advantage of a spherical tensor gradient operator formalism in both real space and reciprocal space to allow extension to arbitrary multipole order. The implementation of these methods into a novel linear-scaling modified “divide-and-conquer” (mDC) quantum mechanical force field is discussed. The evaluation times and relative force errors are compared between the three methods, as a function of multipole expansion order. Timings and errors are also compared within the context of the quantum mechanical force field, which encounters primary errors related to the quality of reproducing electrostatic forces for a given density matrix and secondary errors resulting from the propagation of the approximate electrostatics into the self-consistent field procedure, which yields a converged, variational, but nonetheless approximate density matrix. Condensed-phase simulations of an mDC water model are performed with the multipolar PME method and compared to an electrostatic cutoff method, which is shown to artificially increase the density of water and heat of vaporization relative to full electrostatic treatment. PMID:25691829

  5. Multipolar Ewald methods, 1: theory, accuracy, and performance.

    PubMed

    Giese, Timothy J; Panteva, Maria T; Chen, Haoyuan; York, Darrin M

    2015-02-10

    The Ewald, Particle Mesh Ewald (PME), and Fast Fourier–Poisson (FFP) methods are developed for systems composed of spherical multipole moment expansions. A unified set of equations is derived that takes advantage of a spherical tensor gradient operator formalism in both real space and reciprocal space to allow extension to arbitrary multipole order. The implementation of these methods into a novel linear-scaling modified “divide-and-conquer” (mDC) quantum mechanical force field is discussed. The evaluation times and relative force errors are compared between the three methods, as a function of multipole expansion order. Timings and errors are also compared within the context of the quantum mechanical force field, which encounters primary errors related to the quality of reproducing electrostatic forces for a given density matrix and secondary errors resulting from the propagation of the approximate electrostatics into the self-consistent field procedure, which yields a converged, variational, but nonetheless approximate density matrix. Condensed-phase simulations of an mDC water model are performed with the multipolar PME method and compared to an electrostatic cutoff method, which is shown to artificially increase the density of water and heat of vaporization relative to full electrostatic treatment.

  6. Enhancing the accuracy of knowledge discovery: a supervised learning method

    PubMed Central

    2014-01-01

    Background The amount of biomedical literature available is growing at an explosive speed, but a large amount of useful information remains undiscovered in it. Researchers can form informed biomedical hypotheses by mining this literature. Unfortunately, popular mining methods based on co-occurrence produce too many target concepts, leading to a declining relevance ranking of the potential target concepts. Methods This paper presents a new method for selecting linking concepts which exploits statistical and textual features to represent each linking concept, and then classifies them as relevant or irrelevant to the starting concepts. Relevant linking concepts are then used to discover target concepts. Results Through an evaluation, it is observed that textual features improve the results obtained with only statistical features. We successfully replicate Swanson's two classic discoveries and find that the rankings of potentially relevant target concepts are relatively high. Conclusions The number of target concepts is greatly reduced and potentially relevant target concepts gain a higher ranking by adopting only relevant linking concepts. Thus, the proposed method has the potential to help biomedical experts find the most useful and valuable target concepts effectively. PMID:25474584

  7. Assessing the accuracy of selectivity as a basis for solvent screening in extractive distillation processes

    SciTech Connect

    Momoh, S.O.

    1991-01-01

    An important parameter for consideration in the screening of solvents for an extractive distillation process is selectivity at infinite dilution. The higher the selectivity, the better the solvent. This paper assesses the accuracy of using selectivity as a basis for solvent screening in extractive distillation processes. Three types of binary mixtures that are usually separated by an extractive distillation process are chosen for investigation. Having determined the optimum solvent feed rate to be two times the feed rate of the binary mixture, the total annual costs of extractive distillation processes for each of the chosen mixtures and for various solvents are computed. The solvents are ranked on the basis of the total annual cost (obtained by design and costing equations) for the extractive distillation processes, and this ranking order is compared with that of selectivity at infinite dilution as determined by the UNIFAC method. This matching of selectivity with total annual cost does not produce a very good correlation.
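    The comparison described here, between a ranking of solvents by selectivity at infinite dilution and a ranking by total annual cost, can be quantified with a rank correlation. A sketch using Spearman's rho (the solvent values are invented for illustration; the original study compares ranking orders rather than reporting a coefficient):

    ```python
    def spearman_rho(x, y):
        """Spearman rank correlation between two value lists (no ties assumed):
        rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
        def ranks(v):
            order = sorted(range(len(v)), key=lambda i: v[i])
            r = [0] * len(v)
            for rank, i in enumerate(order, start=1):
                r[i] = rank
            return r
        rx, ry = ranks(x), ranks(y)
        n = len(x)
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n * n - 1))

    # Hypothetical solvents: selectivity at infinite dilution vs. total annual cost.
    selectivity = [2.4, 1.9, 3.1, 1.5, 2.8]
    annual_cost = [1.10, 1.25, 1.02, 1.30, 1.18]  # illustrative units
    # Negate cost so that "high selectivity" should align with "low cost".
    rho = spearman_rho(selectivity, [-c for c in annual_cost])
    ```

    A rho near 1 would mean selectivity is a reliable proxy for cost ranking; the study's point is that in practice the correspondence is weak.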

  8. Accuracy assessment of Kinect for Xbox One in point-based tracking applications

    NASA Astrophysics Data System (ADS)

    Goral, Adrian; Skalski, Andrzej

    2015-12-01

    We present the accuracy assessment of a point-based tracking system built on Kinect v2. In our approach, color, IR and depth data were used to determine the positions of spherical markers. To accomplish this task, we calibrated the depth/infrared and color cameras using a custom method. As a reference tool we used Polaris Spectra optical tracking system. The mean error obtained within the range from 0.9 to 2.9 m was 61.6 mm. Although the depth component of the error turned out to be the largest, the random error of depth estimation was only 1.24 mm on average. Our Kinect-based system also allowed for reliable angular measurements within the range of ±20° from the sensor's optical axis.

  9. Integrated three-dimensional digital assessment of accuracy of anterior tooth movement using clear aligners

    PubMed Central

    Zhang, Xiao-Juan; He, Li; Tian, Jie; Bai, Yu-Xing; Li, Song

    2015-01-01

    Objective To assess the accuracy of anterior tooth movement using clear aligners in integrated three-dimensional digital models. Methods Cone-beam computed tomography was performed before and after treatment with clear aligners in 32 patients. Plaster casts were laser-scanned for virtual setup and aligner fabrication. Differences in predicted and achieved root and crown positions of anterior teeth were compared on superimposed maxillofacial digital images and virtual models and analyzed by Student's t-test. Results The mean discrepancies in maxillary and mandibular crown positions were 0.376 ± 0.041 mm and 0.398 ± 0.037 mm, respectively. Maxillary and mandibular root positions differed by 2.062 ± 0.128 mm and 1.941 ± 0.154 mm, respectively. Conclusions Crowns but not roots of anterior teeth can be moved to designated positions using clear aligners, because these appliances cause tooth movement by tilting motion. PMID:26629473

  10. Accuracy assessment of the integration of GNSS and a MEMS IMU in a terrestrial platform.

    PubMed

    Madeira, Sergio; Yan, Wenlin; Bastos, Luísa; Gonçalves, José A

    2014-11-04

    MEMS Inertial Measurement Units are available at low cost and can replace expensive units in mobile mapping platforms which need direct georeferencing. This is done through integration with GNSS measurements in order to achieve a continuous positioning solution and to obtain orientation angles. This paper presents the results of an assessment of the accuracy of a system that integrates GNSS and a MEMS IMU in a terrestrial platform. We describe the methodology used and the tests performed, in which the accuracy of the position and orientation parameters was assessed using an independent photogrammetric technique employing cameras that are part of the mobile mapping system developed by the authors. Results for the accuracy of attitude angles and coordinates show that accuracies better than a decimeter in position, and under a degree in angle, can be achieved even when the terrestrial platform is operating in less than favorable environments.
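    Accuracy figures such as "better than a decimeter in position" in assessments like this are typically root-mean-square errors of residuals against the independent reference. A minimal sketch (the residuals below are hypothetical, not from the study):

    ```python
    from math import sqrt

    def rmse(errors):
        """Root-mean-square error of residuals between a navigation solution
        and an independent reference (e.g. photogrammetric check points)."""
        return sqrt(sum(e * e for e in errors) / len(errors))

    # Hypothetical position residuals in meters and heading residuals in degrees.
    pos_residuals = [0.04, -0.07, 0.05, -0.03, 0.06]
    heading_residuals = [0.4, -0.6, 0.3, 0.5, -0.2]
    pos_rmse = rmse(pos_residuals)          # sub-decimeter
    heading_rmse = rmse(heading_residuals)  # sub-degree
    ```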

  11. Accuracy Assessment of the Integration of GNSS and a MEMS IMU in a Terrestrial Platform

    PubMed Central

    Madeira, Sergio; Yan, Wenlin; Bastos, Luísa; Gonçalves, José A.

    2014-01-01

    MEMS Inertial Measurement Units are available at low cost and can replace expensive units in mobile mapping platforms which need direct georeferencing. This is done through integration with GNSS measurements in order to achieve a continuous positioning solution and to obtain orientation angles. This paper presents the results of an assessment of the accuracy of a system that integrates GNSS and a MEMS IMU in a terrestrial platform. We describe the methodology used and the tests performed, in which the accuracy of the position and orientation parameters was assessed using an independent photogrammetric technique employing cameras that are part of the mobile mapping system developed by the authors. Results for the accuracy of attitude angles and coordinates show that accuracies better than a decimeter in position, and under a degree in angle, can be achieved even when the terrestrial platform is operating in less than favorable environments. PMID:25375757

  12. Assessing the accuracy of satellite derived global and national urban maps in Kenya.

    PubMed

    Tatem, A J; Noor, A M; Hay, S I

    2005-05-15

    Ninety percent of projected global urbanization will be concentrated in low-income countries (United Nations, 2004). This will have considerable environmental, economic and public health implications for those populations. Objective and efficient methods of delineating urban extent are a cross-sectoral need complicated by a diversity of urban definition rubrics world-wide. Large-area maps of urban extents are becoming increasingly available in the public domain, as is a wide range of medium-spatial-resolution satellite imagery. Here we describe the extension of a methodology based on Landsat ETM and Radarsat imagery to the production of a human settlement map of Kenya. This map, together with five satellite-imagery-derived global maps of urban extent at the Kenya national level, was then compared against an expert-opinion coverage for accuracy assessment. The results showed that the map produced using medium-spatial-resolution satellite imagery was of comparable accuracy to the expert-opinion coverage. The five global urban maps exhibited a range of inaccuracies, emphasising that care should be taken with use of these maps at national and sub-national scale.

  13. The Eye Phone Study: reliability and accuracy of assessing Snellen visual acuity using smartphone technology

    PubMed Central

    Perera, C; Chakrabarti, R; Islam, F M A; Crowston, J

    2015-01-01

    Purpose Smartphone-based Snellen visual acuity charts have become popular; however, their accuracy has not been established. This study aimed to evaluate the equivalence of a smartphone-based visual acuity chart and a standard 6-m Snellen visual acuity (6SVA) chart. Methods First, a review of available Snellen chart applications on the iPhone was performed to determine the most accurate application based on optotype size. Subsequently, a prospective comparative study was performed by measuring conventional 6SVA and then iPhone visual acuity using the 'Snellen' application on an Apple iPhone 4. Results Eleven applications were identified, with the accuracy of optotype size ranging from 4.4% to 39.9%. Eighty-eight patients from general medical and surgical wards in a tertiary hospital took part in the second part of the study. The mean difference in logMAR visual acuity between the two charts was 0.02 logMAR (95% limits of agreement -0.332, 0.372 logMAR). The largest mean difference in logMAR acuity was noted in the subgroup of patients with 6SVA worse than 6/18 (n=5), who had a mean difference of two Snellen visual acuity lines between the charts (0.276 logMAR). Conclusion At the time of the study, we did not identify a Snellen visual acuity app which could predict a patient's standard Snellen visual acuity within one line. There was considerable variability in the optotype accuracy of the apps. Further validation is required for assessment of acuity in patients with severe vision impairment. PMID:25931170
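    The "95% limits of agreement" quoted in the abstract are Bland-Altman bounds: the mean paired difference plus or minus 1.96 standard deviations of the differences. A sketch with hypothetical paired logMAR readings (not the study's data):

    ```python
    from statistics import mean, stdev

    def limits_of_agreement(a, b, z=1.96):
        """Bland-Altman comparison of two paired measurement series:
        returns (mean difference, lower LoA, upper LoA)."""
        diffs = [x - y for x, y in zip(a, b)]
        d = mean(diffs)
        s = stdev(diffs)  # sample standard deviation of the differences
        return d, d - z * s, d + z * s

    # Hypothetical paired logMAR readings (standard chart vs. app).
    chart = [0.00, 0.18, 0.30, 0.48, 0.60]
    app   = [0.00, 0.10, 0.40, 0.48, 0.70]
    d, lo, hi = limits_of_agreement(chart, app)
    ```

    Wide limits of agreement, as in the study, indicate that the two charts can disagree substantially for an individual patient even when the mean difference is near zero.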

  14. Quantitative Assessment of Shockwave Lithotripsy Accuracy and the Effect of Respiratory Motion*

    PubMed Central

    Bailey, Michael R.; Shah, Anup R.; Hsi, Ryan S.; Paun, Marla; Harper, Jonathan D.

    2012-01-01

    Abstract Background and Purpose Effective stone comminution during shockwave lithotripsy (SWL) is dependent on precise three-dimensional targeting of the shockwave. Respiratory motion, imprecise targeting or shockwave alignment, and stone movement may compromise treatment efficacy. The purpose of this study was to evaluate the accuracy of shockwave targeting during SWL treatment and the effect of motion from respiration. Patients and Methods Ten patients underwent SWL for the treatment of 13 renal stones. Stones were targeted fluoroscopically using a Healthtronics Lithotron (five cases) or Dornier Compact Delta II (five cases) shockwave lithotripter. Shocks were delivered at a rate of 1 to 2 Hz with ramping shockwave energy settings of 14 to 26 kV or level 1 to 5. After the low-energy pretreatment and protective pause, a commercial diagnostic ultrasound (US) imaging system was used to record images of the stone during active SWL treatment. Shockwave accuracy, defined as the proportion of shockwaves that resulted in stone motion with shockwave delivery, and respiratory stone motion were determined by two independent observers who reviewed the ultrasonographic videos. Results Mean age was 51±15 years with 60% men, and mean stone size was 10.5±3.7 mm (range 5-18 mm). A mean of 2675±303 shocks was delivered. Shockwave-induced stone motion was observed with every stone. Accurate targeting of the stone occurred in 60%±15% of shockwaves. Conclusions US imaging during SWL revealed that 40% of shockwaves miss the stone and contribute solely to tissue injury, primarily because of movement with respiration. These data support the need for a device to deliver shockwaves only when the stone is on target. US imaging provides real-time assessment of stone targeting and accuracy of shockwave delivery. PMID:22471349

  15. Accuracy of velocity and power determination by the Doppler method

    NASA Technical Reports Server (NTRS)

    Rottger, J.

    1984-01-01

    When designing a Mesosphere-Stratosphere-Troposphere (MST) radar antenna, one has to trade off between optimizing the effective aperture and optimizing sidelobe suppression. Optimizing the aperture increases the sensitivity. Suppressing sidelobes by tapering attenuates undesirable signals which spoil the estimates of reflectivity and velocity. Generally, any sidelobe effect is equivalent to a broadening of the antenna beam. The return signal is due to a product of the antenna pattern with the varying atmospheric reflectivity structures. Thus, knowing the antenna pattern, it is in principle possible to recover the signal spectra, although this may be a tedious computational and ambiguous procedure. For vertically pointing main beams, the sidelobe effects are efficiently suppressed because of the aspect sensitivity. It follows that sidelobes are a minor problem for spaced antenna methods. However, they can be crucial for Doppler methods, which need off-vertical beams. If a sidelobe is pointing towards the zenith, a larger power may be received from the vertical than from off-vertical directions, but quantitative estimates of this effect are not yet known. To estimate the error due to sidelobe effects with an off-vertical main beam, a one-dimensional example is considered.

  16. Quality and accuracy assessment of nutrition information on the Web for cancer prevention.

    PubMed

    Shahar, Suzana; Shirley, Ng; Noah, Shahrul A

    2013-01-01

    This study aimed to assess the quality and accuracy of nutrition information about cancer prevention available on the Web. The keywords 'nutrition + diet + cancer + prevention' were submitted to the Google search engine. Out of 400 websites evaluated, 100 met the inclusion and exclusion criteria and were selected as the sample for the assessment of quality and accuracy. Overall, 54% of the studied websites had low quality, 48% and 57% had no author's name or information, respectively, 100% were not updated within 1 month during the study period, and 86% did not have the Health on the Net seal. When the websites were assessed for readability using the Flesch Reading Ease test, nearly 44% of the websites were categorised as 'quite difficult'. With regard to accuracy, 91% of the websites did not precisely follow the latest WCRF/AICR 2007 recommendation. The quality scores correlated significantly with the accuracy scores (r = 0.250, p < 0.05). Professional websites (n = 22) had the highest mean quality scores, whereas government websites (n = 2) had the highest mean accuracy scores. The quality of the websites selected in this study was not satisfactory, and there is great concern about the accuracy of the information being disseminated. PMID:22957981
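    The Flesch Reading Ease test used for the readability assessment is a simple formula over sentence, word, and syllable counts: 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/word). A rough sketch with a crude vowel-group syllable counter (real implementations use better syllable rules; the sample sentence is invented):

    ```python
    import re

    def flesch_reading_ease(text):
        """Flesch Reading Ease score. Higher = easier; roughly 30-50
        corresponds to the 'difficult'/'quite difficult' band."""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        def syllables(w):
            # Crude heuristic: count runs of vowels as syllables.
            return max(1, len(re.findall(r"[aeiouy]+", w.lower())))
        total_syll = sum(syllables(w) for w in words)
        return 206.835 - 1.015 * len(words) / sentences - 84.6 * total_syll / len(words)

    score = flesch_reading_ease("Eat more vegetables. They may reduce cancer risk.")
    ```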

  17. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, Jon D.

    1990-01-01

    Two topics are emphasized with respect to large space structures: uncertainty of the frequency response using the fuzzy set method, and on-orbit response prediction using laboratory test data to refine an analytical model. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to help facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.

  18. Proposed triage categories for large-scale radiation incidents using high-accuracy biodosimetry methods.

    PubMed

    Rea, Michael E; Gougelet, Robert M; Nicolalde, Roberto J; Geiling, James A; Swartz, Harold M

    2010-02-01

    A catastrophic event such as a nuclear device detonation in a major U.S. city would cause a mass-casualty event affecting millions. Such a disaster would require screening to accurately and effectively identify patients likely to develop acute radiation syndrome (ARS). A primary function of such screening is to sort the unaffected, or worried-well, from those patients who will truly become symptomatic. This paper reviews the current capability of high-accuracy biodosimetry methods as screening tools for populations and reviews the current triage and medical guidelines for diagnosing and managing ARS. This paper proposes that current triage categories, which broadly categorize patients by likelihood of survival based on current symptoms, be replaced with new triage categories that use high-accuracy biodosimetry methods. Using accurate whole-body exposure dose assessment to predict ARS symptoms and subsyndromes, clinical decision-makers can designate the appropriate care setting, initiate treatment and therapies, and best allocate limited clinical resources, facilitating mass-casualty care following a nuclear disaster. PMID:20065675

  19. An accuracy measurement method for star trackers based on direct astronomic observation.

    PubMed

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-03-07

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine satellite performance. A new and robust accuracy measurement method for star trackers based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method uses real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, accounting for the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed that can directly determine the pointing and rolling accuracy of a star tracker. Experimental measurements confirm that this method is effective and convenient to implement. The measurement environment is close to in-orbit conditions and can satisfy the stringent requirements of high-accuracy star trackers.

  20. An accuracy measurement method for star trackers based on direct astronomic observation

    PubMed Central

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine satellite performance. A new and robust accuracy measurement method for star trackers based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method uses real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, accounting for the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed that can directly determine the pointing and rolling accuracy of a star tracker. Experimental measurements confirm that this method is effective and convenient to implement. The measurement environment is close to in-orbit conditions and can satisfy the stringent requirements of high-accuracy star trackers. PMID:26948412

  2. Improving the assessment of ICESat water altimetry accuracy accounting for autocorrelation

    NASA Astrophysics Data System (ADS)

    Abdallah, Hani; Bailly, Jean-Stéphane; Baghdadi, Nicolas; Lemarquand, Nicolas

    2011-11-01

    Given that water resources are scarce and strained by competing demands, it has become crucial to develop and improve techniques for observing the temporal and spatial variations in inland water volume. Due to the lack of data and the heterogeneity of water level stations, remote sensing, and especially altimetry from space, appears as a complementary technique for water level monitoring. In addition to spatial resolution and sampling rates in space or time, one of the most relevant criteria for satellite altimetry over inland water is the accuracy of the elevation data. Here, the accuracy of the ICESat LIDAR altimetry product is assessed over the Great Lakes in North America. The accuracy assessment method used in this paper emphasizes autocorrelation in high temporal frequency ICESat measurements and also considers uncertainties resulting from the in situ lake level reference data. A probabilistic upscaling process was developed, based on several successive ICESat shots averaged over a spatial transect while accounting for autocorrelation between successive shots. The method also applies pre-processing of the ICESat data: saturation correction of the ICESat waveforms, spatial filtering to avoid measurement disturbance from land-water transition effects on waveform saturation, and data selection to avoid trends in water elevations across space. This paper first analyzes 237 collected ICESat transects, consistent with the available hydrometric ground stations for four of the Great Lakes. By adapting a geostatistical framework, high-frequency autocorrelation between successive shot elevation values was observed and then modeled for 45% of the 237 transects. The modeled autocorrelation was then used to estimate water elevations at the transect scale, and the resulting uncertainty, for the 117 transects without trend. This uncertainty was 8 times greater than the uncertainty computed in the usual way, i.e., when no temporal correlation is taken into account.
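    The central point of this record, that autocorrelation between successive shots inflates the uncertainty of a transect mean, can be illustrated with a toy AR(1)-style correlation structure (an assumed model for illustration, not the paper's fitted geostatistical variogram):

```python
import numpy as np

def mean_uncertainty(values, rho):
    """Standard error of the mean of n equally spaced, autocorrelated samples.

    Assumes an AR(1) correlation structure with lag-1 coefficient rho;
    rho = 0 recovers the usual sigma / sqrt(n).
    """
    values = np.asarray(values, dtype=float)
    n = len(values)
    sigma = values.std(ddof=1)
    # variance inflation from pairwise correlations rho**|i - j|
    lags = np.arange(1, n)
    infl = 1.0 + 2.0 / n * np.sum((n - lags) * rho ** lags)
    return sigma / np.sqrt(n) * np.sqrt(infl)
```

    With positive rho the inflation factor exceeds 1, which is the mechanism behind the "8 times greater" uncertainty reported above.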

  3. Accuracy of Optimized Branched Algorithms to Assess Activity-Specific PAEE

    PubMed Central

    Edwards, Andy G.; Hill, James O.; Byrnes, William C.; Browning, Raymond C.

    2009-01-01

    PURPOSE To assess the activity-specific accuracy achievable by branched algorithm (BA) analysis of simulated daily-living physical activity energy expenditure (PAEE) within a sedentary population. METHODS Sedentary men (n=8) and women (n=8) first performed a treadmill calibration protocol, during which heart rate (HR), accelerometry (ACC), and PAEE were measured in 1-minute epochs. From these data, HR-PAEE and ACC-PAEE regressions were constructed and used in each of six analytic models to predict PAEE from ACC and HR data collected during a subsequent simulated daily-living protocol. Criterion PAEE was measured during both protocols via indirect calorimetry. The accuracy achieved by each model was assessed by the root mean square of the difference between model-predicted daily-living PAEE and the criterion daily-living PAEE (expressed here as % of mean daily-living PAEE). RESULTS Across the range of activities, an unconstrained post hoc optimized branched algorithm best predicted criterion PAEE. Estimates using individual calibration were generally more accurate than those using group calibration (14% vs. 16% error, respectively). These analyses also performed well within each of the six daily-living activities, but systematic errors appeared for several of those activities, which may be explained by an inability of the algorithm to simultaneously accommodate a heterogeneous range of activities. Analyses of mean square error by subject and activity suggest that optimization involving minimization of RMS for total daily-living PAEE is associated with decreased error between subjects but increased error between activities. CONCLUSION The performance of post hoc optimized branched algorithms may be limited by heterogeneity in the daily-living activities being performed. PMID:19952842
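    The accuracy metric defined in METHODS, the RMS prediction difference expressed as a percent of mean criterion PAEE, is straightforward to compute (a hypothetical helper, mirroring the metric description only):

```python
import numpy as np

def rms_percent_error(predicted, criterion):
    """Root-mean-square prediction error as a percentage of mean criterion value."""
    predicted = np.asarray(predicted, dtype=float)
    criterion = np.asarray(criterion, dtype=float)
    rms = np.sqrt(np.mean((predicted - criterion) ** 2))
    return 100.0 * rms / criterion.mean()
```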

  4. Using composite images to assess accuracy in personality attribution to faces.

    PubMed

    Little, Anthony C; Perrett, David I

    2007-02-01

    Several studies have demonstrated some accuracy in personality attribution using only visual appearance. Using composite images of those scoring high and low on a particular trait, the current study shows that judges perform better than chance in guessing others' personality, particularly for the traits conscientiousness and extraversion. This study also shows that attractiveness, masculinity, and age may all provide cues for assessing personality accurately, and that accuracy is affected by the sex both of those judging and of those being judged. Individuals do perform better than chance at guessing another's personality from facial information alone, providing some support for the popular belief that it is possible to accurately assess personality from faces. PMID:17319053

  5. Assessment of the accuracy of pharmacy students' compounded solutions using vapor pressure osmometry.

    PubMed

    Kolling, William M; McPherson, Timothy B

    2013-04-12

    OBJECTIVE. To assess the effectiveness of using a vapor pressure osmometer to measure the accuracy of pharmacy students' compounding skills. DESIGN. Students calculated the theoretical osmotic pressure (mmol/kg) of a solution as a pre-laboratory exercise, compared their calculations with actual values, and then attempted to determine the cause of any errors found. ASSESSMENT. After the introduction of the vapor pressure osmometer, the first-time pass rate for solution compounding has varied from 85% to 100%. Approximately 85% of students surveyed reported that the instrument was valuable as a teaching tool because it objectively assessed their work and provided immediate formative assessment. CONCLUSIONS. This simple technique of measuring compounding accuracy using a vapor pressure osmometer allowed students to see the importance of quality control and assessment in practice for both pharmacists and technicians.

  6. Assessment of the Accuracy of Pharmacy Students’ Compounded Solutions Using Vapor Pressure Osmometry

    PubMed Central

    McPherson, Timothy B.

    2013-01-01

    Objective. To assess the effectiveness of using a vapor pressure osmometer to measure the accuracy of pharmacy students’ compounding skills. Design. Students calculated the theoretical osmotic pressure (mmol/kg) of a solution as a pre-laboratory exercise, compared their calculations with actual values, and then attempted to determine the cause of any errors found. Assessment. After the introduction of the vapor pressure osmometer, the first-time pass rate for solution compounding has varied from 85% to 100%. Approximately 85% of students surveyed reported that the instrument was valuable as a teaching tool because it objectively assessed their work and provided immediate formative assessment. Conclusions. This simple technique of measuring compounding accuracy using a vapor pressure osmometer allowed students to see the importance of quality control and assessment in practice for both pharmacists and technicians. PMID:23610476
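    The pre-laboratory calculation these two records describe, the theoretical osmolality of a compounded solution, can be sketched as follows (assumes ideal, complete dissociation; the function and the normal-saline example are illustrative, not taken from the article):

```python
def theoretical_osmolality(grams_solute, mw, particles_per_formula, kg_solvent):
    """Theoretical osmolality in mmol of particles per kg of solvent,
    assuming ideal behavior and complete dissociation."""
    moles = grams_solute / mw
    return moles * particles_per_formula / kg_solvent * 1000.0

# Illustrative check: ~0.9% NaCl (9 g in 1 kg water, MW 58.44, 2 particles)
saline = theoretical_osmolality(9.0, 58.44, 2, 1.0)   # roughly 308 mmol/kg
```

    Students would compare such a value against the osmometer reading to flag compounding errors.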

  7. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-01-01

    The Flight Dynamics Div. (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS-based personal computer (PC). An overview of RTOD/E capabilities is presented, along with the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and GTDS was used to perform the batch least-squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistencies of results obtained by the batch and sequential methods. Comparisons were made between the forward-filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

  8. Evaluation of Landsat-4 orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Feiertag, R.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1993-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite (TDRS) System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the May 18-24, 1992, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. During this period, there were two separate orbit-adjust maneuvers on one of the TDRSS spacecraft (TDRS-East) and one small orbit-adjust maneuver for Landsat-4. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 30 meters after the filter had reached steady state.
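    The contrast between the two estimation styles compared in these two records can be seen on a toy constant-state problem, where a sequential (Kalman-style) filter with a diffuse prior converges to the batch least-squares solution (a deliberately simplified sketch, not GTDS or RTOD/E):

```python
import numpy as np

def batch_estimate(measurements):
    """Batch least-squares estimate of a constant scalar state: the mean."""
    return float(np.mean(measurements))

def sequential_estimate(measurements, meas_var=1.0, prior_var=1e6):
    """Sequential (Kalman-style) estimate of the same constant state.
    With a diffuse prior it converges to the batch solution."""
    x, p = 0.0, prior_var
    for z in measurements:
        k = p / (p + meas_var)   # Kalman gain
        x = x + k * (z - x)      # measurement update
        p = (1.0 - k) * p        # covariance update
    return x
```

    Real orbit determination replaces the constant state with nonlinear dynamics and tracking measurement models, but the batch-vs-sequential structure is the same.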

  9. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods

    SciTech Connect

    Narita, Y.; Eberl, S.; Nakamura, T.

    1996-12-31

    Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter, and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs. -7.2% in myocardium, and -3.7% vs. -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
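    The TEW method evaluated above estimates scatter in the main energy window by trapezoidal interpolation from two narrow flanking windows; a generic sketch (the default window widths in keV are typical values, not those used in the simulation):

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_main):
    """Triple-energy-window scatter estimate for the main window:
    trapezoidal interpolation between the two narrow flanking windows."""
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

def tew_primary(c_main, c_lower, c_upper, w_lower=7.0, w_upper=7.0, w_main=20.0):
    """Scatter-corrected (primary) counts in the main window, floored at zero."""
    scatter = tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_main)
    return max(c_main - scatter, 0.0)
```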

  10. Assessing map accuracy in a remotely sensed, ecoregion-scale cover map

    USGS Publications Warehouse

    Edwards, T.C.; Moisen, G.G.; Cutler, D.R.

    1998-01-01

    Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed thousands of square kilometers in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for the Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs compared to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE=1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man-modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach. In regard to precision, our mixed design was more precise than a simple random design, given fixed sample costs. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.
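    For reference, the simplest form of the map-accuracy estimate, overall accuracy from an error (confusion) matrix with its simple-random-sampling standard error, looks like this; the mixed cluster design above requires design-based estimators instead, so this is only the baseline case:

```python
import numpy as np

def overall_accuracy(confusion):
    """Overall accuracy and its standard error from an error matrix,
    assuming simple random sampling of reference sites."""
    m = np.asarray(confusion, dtype=float)
    n = m.sum()
    p = np.trace(m) / n                    # proportion correctly classified
    se = np.sqrt(p * (1.0 - p) / n)        # binomial standard error
    return p, se
```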

  11. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

    We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media

  12. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

    Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Due to the low peak intensity, the accuracy and precision of such analyses rely critically on background measurements, and on the accuracy of any pertinent peak interference corrections. A linear regression between two points selected at appropriate off-peak positions is the classical approach for background characterization in microprobe analysis. However, this approach disallows an accurate assessment of background curvature (usually exponential). Moreover, if present, background interferences can dramatically affect the results if underestimated or ignored. The acquisition of a quantitative WDS scan over the spectral region of interest is still a valuable option for determining the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity, as the analyst has to select areas of the scan that appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that allows the acquisition of up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. Overall efficiency is improved because no systematic WDS scan needs to be acquired in order to check for the presence of possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite
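    The "statistical exclusion of erroneous background measurements" step can be illustrated with a robust z-score filter over many off-peak readings (an illustrative reduction to a single background level, not the published MPB regression over wavelength positions):

```python
import numpy as np

def multipoint_background(bg_counts, z_cut=3.0):
    """Combine many off-peak background measurements, excluding outliers
    (e.g. unrecognized interferences) by a robust z-score before averaging."""
    x = np.asarray(bg_counts, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1e-12   # guard against zero spread
    keep = np.abs(x - med) / (1.4826 * mad) <= z_cut
    return float(x[keep].mean())
```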

  13. Accuracy Evaluation of a Mobile Mapping System with Advanced Statistical Methods

    NASA Astrophysics Data System (ADS)

    Toschi, I.; Rodríguez-Gonzálvez, P.; Remondino, F.; Minto, S.; Orlandini, S.; Fuller, A.

    2015-02-01

    This paper discusses a methodology to evaluate the precision and the accuracy of a commercial Mobile Mapping System (MMS) with advanced statistical methods. So far, the metric potential of this emerging mapping technology has been studied in only a few papers, which generally assume that errors follow a normal distribution. In fact, this hypothesis should be carefully verified in advance, in order to test how well classic Gaussian statistics can adapt to datasets that are usually affected by asymmetrical gross errors. The workflow adopted in this study relies on a Gaussian assessment, followed by an outlier filtering process. Finally, non-parametric statistical models are applied in order to achieve a robust estimation of the error dispersion. Among the different MMSs available on the market, the latest solution provided by RIEGL is tested here, i.e., the VMX-450 Mobile Laser Scanning System. The test area is the historic city centre of Trento (Italy), selected in order to assess the system performance in dealing with a challenging and historic urban scenario. Reference measures are derived from photogrammetric and Terrestrial Laser Scanning (TLS) surveys. All datasets show a large lack of symmetry that leads to the conclusion that the standard normal parameters are not adequate to assess this type of data. The use of non-normal statistics thus gives a more appropriate description of the data and yields results that meet the quoted a priori errors.
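    The robust, non-parametric dispersion estimate this record argues for can be illustrated with the median/NMAD pair, a common alternative to mean/standard deviation for skewed, outlier-prone error samples (a generic sketch; the paper's exact estimators may differ):

```python
import numpy as np

def robust_error_stats(errors):
    """Non-parametric error summary: median (bias) and NMAD (dispersion).
    NMAD = 1.4826 * median(|e - median(e)|) matches the standard deviation
    for Gaussian errors but resists asymmetric outliers."""
    e = np.asarray(errors, dtype=float)
    med = np.median(e)
    nmad = 1.4826 * np.median(np.abs(e - med))
    return med, nmad
```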

  14. Assessing the Accuracy of MODIS-NDVI Derived Land-Cover Across the Great Lakes Basin

    EPA Science Inventory

    This research describes the accuracy assessment process for a land-cover dataset developed for the Great Lakes Basin (GLB). This land-cover dataset was developed from the 2007 MODIS Normalized Difference Vegetation Index (NDVI) 16-day composite (MOD13Q) 250 m time-series data. Tr...

  15. A PIXEL COMPOSITION-BASED REFERENCE DATA SET FOR THEMATIC ACCURACY ASSESSMENT

    EPA Science Inventory

    Developing reference data sets for accuracy assessment of land-cover classifications derived from coarse spatial resolution sensors such as MODIS can be difficult due to the large resolution differences between the image data and available reference data sources. Ideally, the spa...

  16. The Word Writing CAFE: Assessing Student Writing for Complexity, Accuracy, and Fluency

    ERIC Educational Resources Information Center

    Leal, Dorothy J.

    2005-01-01

    The Word Writing CAFE is a new assessment tool designed for teachers to evaluate objectively students' word-writing ability for fluency, accuracy, and complexity. It is designed to be given to the whole class at one time. This article describes the development of the CAFE and provides directions for administering and scoring it. The author also…

  17. Gender Differences in Structured Risk Assessment: Comparing the Accuracy of Five Instruments

    ERIC Educational Resources Information Center

    Coid, Jeremy; Yang, Min; Ullrich, Simone; Zhang, Tianqiang; Sizmur, Steve; Roberts, Colin; Farrington, David P.; Rogers, Robert D.

    2009-01-01

    Structured risk assessment should guide clinical risk management, but it is uncertain which instrument has the highest predictive accuracy among men and women. In the present study, the authors compared the Psychopathy Checklist-Revised (PCL-R; R. D. Hare, 1991, 2003); the Historical, Clinical, Risk Management-20 (HCR-20; C. D. Webster, K. S.…

  18. In the Right Ballpark? Assessing the Accuracy of Net Price Calculators

    ERIC Educational Resources Information Center

    Anthony, Aaron M.; Page, Lindsay C.; Seldin, Abigail

    2016-01-01

    Large differences often exist between a college's sticker price and net price after accounting for financial aid. Net price calculators (NPCs) were designed to help students more accurately estimate their actual costs to attend a given college. This study assesses the accuracy of information provided by net price calculators. Specifically, we…

  19. APPLICATION OF A "VIRTUAL FIELD REFERENCE DATABASE" TO ASSESS LAND-COVER MAP ACCURACIES

    EPA Science Inventory

    An accuracy assessment was performed for the Neuse River Basin, NC land-cover/use (LCLU) mapping results using a "Virtual Field Reference Database (VFRDB)". The VFRDB was developed using field measurement and digital imagery (camera) data collected at 1,409 sites over a perio...

  20. Modifications to the accuracy assessment analysis routine MLTCRP to produce an output file

    NASA Technical Reports Server (NTRS)

    Carnes, J. G.

    1978-01-01

    Modifications are described that were made to the analysis program MLTCRP in the accuracy assessment software system to produce a disk output file. The output files produced by this modified program are used to aggregate data for regions greater than a single segment.

  1. 12 CFR 620.3 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CREDIT SYSTEM DISCLOSURE TO SHAREHOLDERS General § 620.3 Accuracy of reports and assessment of internal... shall make any disclosure to shareholders or the general public concerning any matter required to be... person shall make such additional or corrective disclosure as is necessary to provide shareholders...

  2. An interpolation method for stream habitat assessments

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2015-01-01

    Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, achieving accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
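    Of the interpolators compared above, inverse distance weighting is the simplest to sketch (a generic single-point implementation, not the GIS package used in the study):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse-distance-weighted prediction at one query point."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    d = np.linalg.norm(xy_known - np.asarray(xy_query, dtype=float), axis=1)
    if np.any(d == 0):                      # query coincides with a sample
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power                    # nearer samples weigh more
    return float(np.sum(w * values) / np.sum(w))
```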

  3. Accuracy Assessment and Correction of Vaisala RS92 Radiosonde Water Vapor Measurements

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Miloshevich, Larry M.; Vomel, Holger; Leblanc, Thierry

    2008-01-01

    Relative humidity (RH) measurements from Vaisala RS92 radiosondes are widely used in both research and operational applications, although the measurement accuracy is not well characterized as a function of its known dependences on height, RH, and time of day (or solar altitude angle). This study characterizes RS92 mean bias error as a function of its dependences by comparing simultaneous measurements from RS92 radiosondes and from three reference instruments of known accuracy. The cryogenic frostpoint hygrometer (CFH) gives the RS92 accuracy above the 700 mb level; the ARM microwave radiometer gives the RS92 accuracy in the lower troposphere; and the ARM SurTHref system gives the RS92 accuracy at the surface using 6 RH probes with NIST-traceable calibrations. These RS92 assessments are combined using the principle of Consensus Referencing to yield a detailed estimate of RS92 accuracy from the surface to the lowermost stratosphere. An empirical bias correction is derived to remove the mean bias error, yielding corrected RS92 measurements whose mean accuracy is estimated to be +/-3% of the measured RH value for nighttime soundings and +/-4% for daytime soundings, plus an RH offset uncertainty of +/-0.5%RH that is significant for dry conditions. The accuracy of individual RS92 soundings is further characterized by the 1-sigma "production variability," estimated to be +/-1.5% of the measured RH value. The daytime bias correction should not be applied to cloudy daytime soundings, because clouds affect the solar radiation error in a complicated and uncharacterized way.
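    Removing a mean relative bias of the kind estimated in this record is a one-line correction (illustrative only; the paper's actual correction varies with height, RH, and solar altitude angle):

```python
def correct_rh(rh_measured, mean_bias_percent):
    """Remove a mean relative bias from an RH reading: a +5% (of value)
    bias means the sonde reads 5% high, so divide it out."""
    return rh_measured / (1.0 + mean_bias_percent / 100.0)
```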

  4. The dimensional accuracy of rectangular acrylic resin specimens cured by three denture base processing methods.

    PubMed

    Salim, S; Sadamori, S; Hamada, T

    1992-06-01

    The dimensional accuracy of rectangular acrylic resin specimens was examined when they were processed by three methods: a conventional method, the SR-Ivocap system, and a microwave curing method. The dimensional accuracy was evaluated by the change of the distance vector V, calculated from measurements of the distances between fixed points on the specimens. Specimens cured by the SR-Ivocap system exhibited less dimensional change (p < 0.05) than those cured by the conventional and microwave curing methods. The SR-Ivocap system might therefore produce a more accurate denture base than the conventional and microwave curing methods.
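    A distance-based change measure in the spirit of the distance vector V mentioned above can be sketched as the mean change in pairwise distances between fixed reference points (our simplification for illustration, not the paper's exact formula):

```python
import numpy as np

def dimensional_change(points_before, points_after):
    """Mean absolute change in pairwise distances between fixed points
    measured before and after processing."""
    a = np.asarray(points_before, dtype=float)
    b = np.asarray(points_after, dtype=float)
    n = len(a)
    deltas = [abs(np.linalg.norm(b[i] - b[j]) - np.linalg.norm(a[i] - a[j]))
              for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(deltas))
```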

  5. An automated method for the evaluation of the pointing accuracy of sun-tracking devices

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dietmar J.; Rieder, Harald E.; Pötzi, Werner; Freislich, Heinrich; Strutzmann, Heinz

    2016-04-01

    The accuracy of measurements of solar radiation (direct and diffuse radiation) depends significantly on the accuracy of the operational sun-tracking device. Thus, rigid targets for instrument performance and operation are specified for international monitoring networks, such as the Baseline Surface Radiation Network (BSRN) operating under the auspices of the World Climate Research Program (WCRP). Sun-tracking devices fulfilling these accuracy targets are available from various instrument manufacturers; however, none of the commercially available systems includes a secondary accuracy control system allowing platform operators to independently validate the pointing accuracy of sun-tracking sensors during operation. Here we present KSO-STREAMS (KSO-SunTRackEr Accuracy Monitoring System), a fully automated, system-independent and cost-effective method for evaluating the pointing accuracy of sun-tracking devices. We detail the monitoring system setup, its design and specifications, and results from its application to the sun-tracking system operated at the Austrian RADiation network (ARAD) site Kanzelhöhe Observatory (KSO). Results from KSO-STREAMS (for mid-March to mid-June 2015) show that the tracking accuracy of the device operated at KSO lies well within BSRN specifications (i.e., 0.1 degree accuracy). We contrast results during clear-sky and partly cloudy conditions, documenting sun-tracking performance at manufacturer-specified accuracies for active tracking (0.02 degrees), and highlight accuracies achieved during passive tracking, i.e., periods with less than 300 W m-2 direct radiation. Furthermore, we detail limitations to tracking surveillance during overcast conditions and periods of partial solar limb coverage by clouds.
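    A simple surveillance summary in the spirit of the system described, though not part of it, is the fraction of pointing-error samples meeting the BSRN 0.1 degree target (an illustrative helper only):

```python
import numpy as np

def within_spec_fraction(pointing_errors_deg, spec_deg=0.1):
    """Fraction of absolute pointing-error samples at or below the spec."""
    e = np.abs(np.asarray(pointing_errors_deg, dtype=float))
    return float(np.mean(e <= spec_deg))
```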

  7. Statistical downscaling of precipitation using local regression and high accuracy surface modeling method

    NASA Astrophysics Data System (ADS)

    Zhao, Na; Yue, Tianxiang; Zhou, Xun; Zhao, Mingwei; Liu, Yu; Du, Zhengping; Zhang, Lili

    2016-03-01

Downscaling precipitation is required in local-scale climate impact studies. In this paper, a statistical downscaling scheme is presented that combines a geographically weighted regression (GWR) model with a recently developed method, the high accuracy surface modeling method (HASM). The proposed method was compared with another downscaling method using the Coupled Model Intercomparison Project Phase 5 (CMIP5) database and ground-based data from 732 stations across China for the period 1976-2005. The residuals produced by GWR were modified by comparing different interpolators, including HASM, Kriging, the inverse distance weighted method (IDW), and Spline. Spatial downscaling from 1° to 1-km grids for the period 1976-2005 and for future scenarios was achieved using the proposed method. Prediction accuracy was assessed at two separate sets of validation sites, across China and in Jiangxi Province, on both annual and seasonal scales, using the root mean square error (RMSE), mean relative error (MRE), and mean absolute error (MAE). The results indicate that the model developed in this study outperforms the method that builds a transfer function using the gauge values. There is a large improvement in the results when using a residual correction with meteorological station observations. In comparison with the other three classical interpolators, HASM shows better performance in modifying the residuals produced by the local regression method. The success of the developed technique lies in the effective use of the datasets and the modification of the residuals using HASM. The results for the future climate scenarios show that precipitation exhibits an overall increasing trend from T1 (2011-2040) to T2 (2041-2070) and from T2 to T3 (2071-2100) under the RCP2.6, RCP4.5, and RCP8.5 emission scenarios. The most significant increase occurs in RCP8.5 from T2 to T3, while the lowest increase is found in RCP2.6 from T2 to T3, increases of 47.11 and 2.12 mm, respectively.
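The three validation metrics named in the abstract (RMSE, MRE, MAE) have standard definitions; a minimal generic sketch (not the authors' code) is:

```python
import math

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(p - o) for o, p in zip(obs, pred)) / len(obs)

def mre(obs, pred):
    """Mean relative error, expressed as a fraction of the observed values."""
    return sum(abs(p - o) / abs(o) for o, p in zip(obs, pred)) / len(obs)
```

RMSE penalizes large residuals more heavily than MAE, which is why the two are usually reported together when comparing interpolators such as HASM, Kriging, IDW, and Spline.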

  8. Subglacial bedform orientation, one-dimensional size, and directional shape measurement method accuracy

    NASA Astrophysics Data System (ADS)

    Jorge, Marco G.; Brennand, Tracy A.

    2016-04-01

This study is an assessment of previously reported automated methods and of a new method for measuring longitudinal subglacial bedform (LSB) morphometry. It evaluates the adequacy (accuracy and precision) of orientation, length and longitudinal asymmetry data derived from the longest straight line (LSL) enclosed by the LSB's footprint, the footprint's minimum bounding rectangle longitudinal axis (RLA) and the footprint's standard deviational ellipse (SDE) longitudinal axis (LA) (new method), and the adequacy of length based on an ellipse fitted to the area and perimeter of the footprint (elliptical length). Tests are based on 100 manually mapped drumlins and mega-scale glacial lineations representing the size and shape range of LSBs in the Puget Lowland drumlin field, WA, USA. Data from manually drawn LAs are used as the reference for method evaluation. With the exception of elliptical length, errors decrease rapidly with increasing footprint elongation (decreasing potential angular divergence between LAs). For LSBs with elongation <5 and excluding the 5% largest errors (outliers), 1) the LSL, RLA and SDE methods had very small mean absolute error (MAE) in all measures (e.g., MAE <5° in orientation and <5 m in length); they can be confidently used to characterize the central tendency of LSB samples. 2) When analyzing data spatially at large cartographic scales, the LSL method should be avoided for orientation (36% of the errors were larger than 5°). 3) Elliptical length was the least accurate of all methods (MAE of 56.1 m and 15% of the errors larger than 5%); its use should be discontinued. 4) The relative adequacy of the LSL and RLA depends on footprint shape; the SDE computed with the footprint's structural vertices is relatively shape-independent and is the preferred method. This study is also significant for negative-relief bedforms and for fluvial and aeolian bedforms.
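The orientation of a standard-deviational-ellipse longitudinal axis can be derived from footprint vertex coordinates via the principal direction of their covariance; the sketch below is an illustrative implementation in that spirit, not the authors' GIS workflow:

```python
import math

def sde_axis_orientation(xs, ys):
    """Orientation (degrees from the x-axis, in [0, 180)) of the major
    axis of the standard deviational ellipse of a set of vertices."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Second central moments of the vertex cloud.
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # Principal-axis angle of the 2x2 covariance matrix.
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return math.degrees(theta) % 180.0
```

Vertices spread along the line y = x yield 45 degrees; vertices spread along the x-axis yield 0.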

  9. Limb volume measurements: comparison of accuracy and decisive parameters of the most used present methods.

    PubMed

    Chromy, Adam; Zalud, Ludek; Dobsak, Petr; Suskevic, Igor; Mrkvicova, Veronika

    2015-01-01

Limb volume measurements are used for evaluating the growth of muscle mass and the effectiveness of strength training. Beyond sport sciences, they are used, for example, for the detection of oedemas, lymphedemas or carcinomas, or for examinations of muscle atrophy. There are several commonly used methods, but a clear comparison showing their advantages and limits has been lacking, and the accuracy of each method has only been roughly estimated. The aim of this paper is to determine and experimentally verify their accuracies and compare them with each other. The Water Displacement method (WD), three methods based on circumferential measures (the Frustum Sign Model (FSM), the Disc Model (DM) and the Partial Frustum Model (PFM)), and two 3D-scan-based methods, Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), were compared. Precise reference cylinders and the limbs of two human subjects were measured 10 times by each method. The person-dependency of each method was also tested by having 3 different people measure the same object 10 times. Accuracies: WD 0.3 %; FSM 2-8 % depending on the person; DM and PFM 1-8 %; MRI 2 % (hand) or 8 % (finger); CT 0.5 % (hand) or 2 % (finger). Times: FSM 1 min; CT 7 min; WD, DM and PFM 15 min; MRI 19 min. WD was found to be the best method for most uses, with the best accuracy. CT offers almost the same accuracy and, like MRI, allows measurements of specific regions (e.g. particular muscles); MRI's accuracy is worse, though it is not harmful. The Frustum Sign Model is usable for very fast estimation of limb volume, but with lower accuracy, while the Disc Model and Partial Frustum Model are useful in cases where Water Displacement cannot be used. PMID:26618096
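The circumference-based models rest on simple solid geometry: the disc model stacks cylinders, and the frustum model joins consecutive circumference measurements with conical frustums. A hedged sketch using the standard formulas (not the authors' code), assuming equally spaced circumference measurements at spacing h:

```python
import math

def disc_model_volume(circumferences, h):
    """Disc model: each segment of height h is a cylinder whose
    circumference is the measured value C, so V = h * C^2 / (4*pi)."""
    return sum(h * c ** 2 / (4.0 * math.pi) for c in circumferences)

def frustum_model_volume(circumferences, h):
    """Frustum model: consecutive circumference pairs (C1, C2) bound a
    conical frustum, V = h * (C1^2 + C1*C2 + C2^2) / (12*pi)."""
    return sum(h * (c1 ** 2 + c1 * c2 + c2 ** 2) / (12.0 * math.pi)
               for c1, c2 in zip(circumferences, circumferences[1:]))
```

For a perfect cylinder of radius 1 and height 1 (circumference 2*pi), both models recover the exact volume pi, which is why reference cylinders make good accuracy standards.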

  10. Apical transportation: two assessment methods.

    PubMed

    López, Fernanda Ullmann; Travessas, Juliana Andréa Corrêa; Fachin, Elaine; Fontanella, Vania; Grecca, Fabiana

    2009-08-01

    Root canal transportation can lead to treatment failure. A large number of methodologies for assessing root canal preparation have been tried in the past. This study compared two methods for apical transportation measurement: digitised images of longitudinal root sections and radiographs. Sixty upper molar mesiobuccal root canals prepared for endodontic treatment were assessed. The results did not demonstrate statistically significant differences between the two imaging methods used to evaluate root canal transportation. The two methods were proven to be equally reliable. PMID:19703081

  11. An Automated Grass-Based Procedure to Assess the Geometrical Accuracy of the Openstreetmap Paris Road Network

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.

    2016-06-01

OpenStreetMap (OSM) is the largest spatial database in the world. One of the most frequently occurring geospatial elements within this database is the road network, whose quality is crucial for applications such as routing and navigation. Several methods have been proposed for the assessment of OSM road network quality; however, they are often tightly coupled to the characteristics of the authoritative dataset involved in the comparison, which makes them hard to replicate and extend. This study relies on an automated procedure which was recently developed for comparing OSM with any road network dataset. It is based on three Python modules for the open source GRASS GIS software and provides measures of OSM road network spatial accuracy and completeness. Provided that users are familiar with the authoritative dataset used, they can adjust the values of the parameters involved thanks to the flexibility of the procedure. The method is applied to assess the quality of the Paris OSM road network dataset through a comparison against the French official dataset provided by the French National Institute of Geographic and Forest Information (IGN). The results show that the Paris OSM road network has both high completeness and high spatial accuracy. It has a greater length than the IGN road network, and is found to be suitable for applications requiring spatial accuracies up to 5-6 m. The results also confirm the flexibility of the procedure for supporting users in carrying out their own comparisons between OSM and reference road datasets.
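A toy version of a buffer-style positional comparison between an OSM layer and a reference network might look like this; it is an illustrative sketch only (the actual procedure uses GRASS GIS modules), with planar coordinates in metres and all names my own:

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment a-b (planar coords)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to the endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def fraction_within(osm_pts, ref_segments, tol):
    """Share of sampled OSM road points lying within tol metres of the
    reference network -- a simple stand-in for a buffer-based comparison."""
    hits = sum(1 for p in osm_pts
               if min(point_segment_dist(p, a, b) for a, b in ref_segments) <= tol)
    return hits / len(osm_pts)
```

Sweeping `tol` from, say, 1 m to 10 m and watching where the fraction saturates is one simple way to arrive at a statement like "suitable for applications requiring spatial accuracies up to 5-6 m".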

  12. A Comparison of the Accuracy of Four Age Estimation Methods Based on Panoramic Radiography of Developing Teeth

    PubMed Central

    Javadinejad, Shahrzad; Sekhavati, Hajar; Ghafari, Roshanak

    2015-01-01

Background and aims. Tooth development is widely used in determining age and state of maturity. Dental age is of high importance in forensic and pediatric dentistry and also in orthodontic treatment planning. The aim of this study was to compare the accuracy of four radiographic age estimation methods. Materials and methods. Orthopantomographic images of 537 healthy children (aged 3.9-14.5 years) were evaluated. Dental age of the subjects was determined through Demirjian’s, Willem’s, Cameriere’s, and Smith’s methods. Differences and correlations between chronological and dental ages were assessed by paired t-tests and Pearson’s correlation analysis, respectively. Results. The mean chronological age of the subjects was 8.93 ± 2.04 years. Overestimations of age were observed following the use of Demirjian’s method (0.87 ± 1.00 years), Willem’s method (0.36 ± 0.87 years), and Smith’s method (0.06 ± 0.63 years). However, Cameriere’s method underestimated age by 0.19 ± 0.86 years. While paired t-tests revealed significant differences between the mean chronological age and ages determined by Demirjian’s, Willem’s, and Cameriere’s methods (P < 0.001), such a significant difference was absent between chronological age and dental age based on Smith’s method (P = 0.079). Pearson’s correlation analysis suggested linear correlations between chronological age and dental age determined by all four methods. Conclusion. Our findings indicated Smith’s method to have the highest accuracy among the four assessed methods. However, all four methods can be used with acceptable accuracy. PMID:26236431
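The paired t-test used to compare chronological and dental ages can be sketched as follows; this is a generic implementation of the standard statistic (the study presumably used statistical software), with a positive t indicating overestimation:

```python
import math

def paired_t(chronological, dental):
    """Paired t statistic for the per-subject difference between dental
    and chronological age: t = mean(d) / (sd(d) / sqrt(n))."""
    d = [da - ca for ca, da in zip(chronological, dental)]
    n = len(d)
    mean = sum(d) / n
    # Sample variance (n - 1 in the denominator).
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)
```

The resulting t is compared against the t distribution with n - 1 degrees of freedom to obtain the P values reported in the abstract.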

  13. Evaluating the effect of learning style and student background on self-assessment accuracy

    NASA Astrophysics Data System (ADS)

    Alaoutinen, Satu

    2012-06-01

This study evaluates a new taxonomy-based self-assessment scale and examines factors that affect assessment accuracy and course performance. The scale is based on Bloom's Revised Taxonomy and is evaluated by comparing students' self-assessment results with course performance in a programming course. Correlation was used to reveal possible connections between student information and both self-assessment and course performance. The results show that students can place their knowledge along the taxonomy-based scale quite well, and the scale seems to fit engineering students' learning styles. Advanced students assess themselves more accurately than novices. The results also show that reflective students were better at programming than active students. The scale used in this study gives a more objective picture of students' knowledge than general scales, and with modifications it can be used in classes other than programming.

  14. [A method for improving measuring accuracy in multi-channel impedance spectroscopy (MIS)].

    PubMed

    Thiel, F; Hartung, C

    2004-08-01

The use of impedance spectroscopy as a diagnostic tool for the investigation of biological objects involves the consideration of numerous parameters impacting measuring accuracy. This paper describes a calibration method for multi-channel instruments that reduces the not inconsiderable influence of frequency response variations between the channels, thus significantly increasing measuring accuracy. The method is tested in a recently developed, high-resolution, multi-channel bio-impedance analyser. Reduction of the measuring error is demonstrated, and the magnitude and phase resolution are quantified. The advantage of this method lies in its applicability to existing systems. Furthermore, an additional calibration impedance is not needed. PMID:15481406

  15. An effective timing characterization method for an accuracy-proved VLSI standard cell library

    NASA Astrophysics Data System (ADS)

    Jianhua, Jiang; Man, Liang; Lei, Wang; Yumei, Zhou

    2014-02-01

This paper presents a method for tailoring the timing characterization and modeling of a VLSI standard cell library, together with a method to validate the reasonableness of the characterized values through accuracy analysis. This method was applied to characterize the cell library during the design of a standard cell library. In addition, the errors in some simple circuit path delays were calculated and compared between the characterization file and an HSPICE simulation. The comparison results demonstrate the accuracy of the generated timing library file.

  16. Accuracy assessment of the large-scale dynamic ocean topography from TOPEX/POSEIDON altimetry

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Chambers, D. P.; Shum, C. K.; Eanes, R. J.; Ries, J. C.; Stewart, R. H.

    1994-01-01

The quality of TOPEX/POSEIDON determinations of the global-scale dynamic ocean topography has been assessed by determining mean topography solutions for successive 10-day repeat cycles and by examining the temporal changes in the sea surface topography to identify known features. The assessment is based on the analysis of TOPEX altimeter data cycles 1 through 36. Important errors in the tide model used to correct the altimeter data have been identified. The errors were reduced significantly by use of a new tide model derived with the TOPEX/POSEIDON measurements. Maps of the global 1-year mean topography, produced using four of the most accurate models of the marine geoid, show expected features in the dynamic ocean topography, such as the known annual hemispherical sea surface rise and fall and the seasonal variability due to monsoon influence in the Indian Ocean. Changes in the sequence of 10-day topography maps show the development and propagation of an equatorial Kelvin wave in the Pacific beginning in December 1992 with a propagation velocity of approximately 3 m/s. The observations are consistent with observed changes in the equatorial trade winds, and with tide gauge and other in situ observations of the strengthening of the El Nino. Comparison of TOPEX-determined sea surface heights at points near oceanic tide gauges shows agreement at the 4 cm root-mean-square (RMS) level over the tropical Pacific. The results show that the TOPEX altimeter data set can be used to map the ocean surface with a temporal resolution of 10 days and an accuracy which is consistent with traditional in situ methods for the determination of sea level variations.

  17. Double hybrid functionals and the Π-system bond length alternation challenge: rivaling accuracy of post-HF methods.

    PubMed

    Wykes, Michael; Su, Neil Qiang; Xu, Xin; Adamo, Carlo; Sancho-García, Juan-Carlos

    2015-02-10

Predicting accurate bond length alternations (BLAs) in long conjugated oligomers has been a significant challenge for electronic-structure methods for many decades, made particularly important by the close relationships between BLA and the rich optoelectronic properties of π-delocalized systems. Here, we test the accuracy of recently developed, and increasingly popular, double hybrid (DH) functionals, positioned at the top of Jacob's Ladder of DFT methods of increasing sophistication, computational cost, and accuracy, due to incorporation of MP2 correlation energy. Our test systems comprise oligomeric series of polyacetylene, polymethineimine, and polysilaacetylene up to six units long. MP2 calculations reveal a pronounced shift in BLAs between the 6-31G(d) basis set used in many studies of BLA to date and the larger cc-pVTZ basis set, but only modest shifts between cc-pVTZ and aug-cc-pVQZ results. We hence perform new reference CCSD(T)/cc-pVTZ calculations for all three series of oligomers against which we assess the performance of several families of DH functionals based on BLYP, PBE, and TPSS, along with lower-rung relatives including global- and range-separated hybrids. Our results show that DH functionals systematically improve the accuracy of BLAs relative to single hybrid functionals. xDH-PBE0 (N(4) scaling using SOS-MP2) emerges as a DH functional rivaling the BLA accuracy of SCS-MP2 (N(5) scaling), which was found to offer the best compromise between computational cost and accuracy the last time the BLA accuracy of DFT- and wave function-based methods was systematically investigated. Interestingly, xDH-PBE0 (XYG3), which differs from other DHs in that its MP2 term uses PBE0 (B3LYP) orbitals that are not self-consistent with the DH functional, is an outlier of the trend of decreasing average BLA errors with increasing fractions of MP2 correlation and HF exchange. PMID:26579607

  18. Assessment Methods in Medical Education

    ERIC Educational Resources Information Center

    Norcini, John J.; McKinley, Danette W.

    2007-01-01

    Since the 1950s, there has been rapid and extensive change in the way assessment is conducted in medical education. Several new methods of assessment have been developed and implemented over this time and they have focused on clinical skills (taking a history from a patient and performing a physical examination), communication skills, procedural…

  19. Methods for Aquatic Resource Assessment

    EPA Science Inventory

The Methods for Aquatic Resource Assessment (MARA) project consists of three main activities in support of assessing the conditions of the nation’s aquatic resources: 1) scientific support for EPA Office of Water’s national aquatic resource surveys; 2) spatial predictions of riv...

  20. Accuracy assessment of topographic mapping using UAV image integrated with satellite images

    NASA Astrophysics Data System (ADS)

    Azmi, S. M.; Ahmad, Baharin; Ahmad, Anuar

    2014-02-01

Unmanned Aerial Vehicles (UAVs) are extensively applied in various fields such as military applications, archaeology, agriculture and scientific research. This study focuses on topographic mapping and map updating. UAVs offer an alternative way to ease the process of acquiring data, with low manufacturing and operational costs, and they are easy to operate. Furthermore, UAV images were integrated with QuickBird images that are used as base maps. The objective of this study is to carry out an accuracy assessment and comparison of topographic mapping using UAV images integrated with aerial photographs and satellite images. The main purpose of using UAV images is as a replacement for cloud-covered areas, which commonly exist in aerial photographs and satellite images, and for updating topographic maps. Meanwhile, spatial resolution, pixel size, scale, geometric accuracy and correction, image quality and information content are important requirements for the generation of a topographic map from these kinds of data. In this study, ground control points (GCPs) and check points (CPs) were established using the real-time kinematic Global Positioning System (RTK-GPS) technique. Two types of analysis were carried out in this study: quantitative and qualitative assessments. The quantitative assessment was carried out by calculating the root mean square error (RMSE). The outputs of this study include a topographic map and an orthophoto. From this study, the accuracy of the UAV image is ± 0.460 m. In conclusion, UAV images have the potential to be used for updating topographic maps.
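The quantitative assessment against RTK-GPS check points reduces to a horizontal RMSE over the checkpoint set. A minimal sketch (illustrative only; coordinates and function names are assumptions, with positions as (x, y) tuples in metres):

```python
import math

def horizontal_rmse(checkpoints, mapped):
    """Root mean square horizontal error of mapped positions against
    surveyed check points (both sequences of (x, y) tuples in metres)."""
    sq = [(mx - cx) ** 2 + (my - cy) ** 2
          for (cx, cy), (mx, my) in zip(checkpoints, mapped)]
    return math.sqrt(sum(sq) / len(sq))
```

A figure such as the ± 0.460 m reported in the abstract would be the output of this kind of computation over all check points.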

  1. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  3. Accuracy of the domain method for the material derivative approach to shape design sensitivities

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Botkin, M. E.

    1987-01-01

    Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.

  4. The analysis accuracy assessment of CORINE land cover in the Iberian coast

    NASA Astrophysics Data System (ADS)

    Grullón, Yraida R.; Alhaddad, Bahaaeddin; Cladera, Josep R.

    2009-09-01

CORINE land cover 2000 (CLC2000) is a project jointly managed by the Joint Research Centre (JRC) and the European Environment Agency (EEA). Its aim is to update the CORINE land cover database in Europe for the year 2000. Landsat-7 Enhanced Thematic Mapper (ETM) satellite images were used for the update and were acquired within the framework of the Image2000 project. Knowledge of land status through CORINE Land Cover mapping is of great importance for studying the interaction of land cover and land use categories at the European scale. This paper presents the accuracy assessment methodology designed and implemented to validate the Iberian coast CORINE Land Cover 2000 cartography. It presents an implementation of a new methodological concept for land cover data production, object-based classification with automatic generalization, to assess the thematic accuracy of CLC2000 by means of an independent data source, based on comparison of the land cover database with reference data derived from visual interpretation of high-resolution satellite imagery for sample areas. In our case study, the existing object-based classifications are supported with digital maps and attribute databases. From the quality tests performed, we computed the overall accuracy and the Kappa coefficient. We focus on the development of a methodology based on classification and generalization analysis for built-up areas that may improve the investigation. This study can be divided into these fundamental steps: -Extract artificial areas from land use classifications based on Landsat and SPOT images. -Manually interpret high-resolution multispectral images. -Determine the homogeneity of artificial areas by a generalization process. -Apply overall accuracy, Kappa coefficient and spatial grid (fishnet) tests as quality tests. Finally, this paper concentrates on illustrating the accuracy of the CORINE dataset based on the above steps.
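Overall accuracy and the Kappa coefficient follow directly from a confusion matrix of map classes against reference classes; a minimal generic sketch (not the authors' implementation):

```python
def overall_accuracy(cm):
    """Fraction of correctly classified samples: trace over grand total."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def kappa(cm):
    """Cohen's kappa from a square confusion matrix (rows = map classes,
    columns = reference classes): agreement corrected for chance."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(n)) / total          # observed agreement
    pe = sum(sum(cm[i]) * sum(cm[j][i] for j in range(n)) # chance agreement
             for i in range(n)) / total ** 2
    return (po - pe) / (1.0 - pe)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy in thematic map validation.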

  5. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method

    PubMed Central

    HosseiniAliabadi, S. J.; Hosseini Pooya, S. M.; Afarideh, H.; Mianji, F.

    2015-01-01

Introduction The angular dependency of the response of TLD cards may cause deviations from the true value in the results of environmental dosimetry, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. Objective A 3D arrangement of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in the accuracy of measurement for environmental dosimetry. Method Three personal TLD cards were rectangularly placed in a cylindrical holder and calibrated using 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose within a time interval. Result The results show that the accuracy of measurement was improved by 6.5% using the 3D calibration factor in comparison with that of the normal 1D calibration method. Conclusion This system can be utilized in large-scale environmental monitoring with higher accuracy. PMID:26157729
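The difference between the 1D and 3D (isotropic) calibration factors can be sketched under the simplifying assumption that a factor is the delivered dose divided by the mean card reading; the data layout and names below are hypothetical, not from the paper:

```python
def calibration_factor_1d(readings_normal, delivered_dose):
    """Conventional 1D factor: readings at normal incidence only."""
    return delivered_dose / (sum(readings_normal) / len(readings_normal))

def calibration_factor_3d(readings_by_angle, delivered_dose):
    """Isotropic (3D) factor: average the card response over all
    irradiation angles before relating it to the delivered dose."""
    all_readings = [r for rs in readings_by_angle.values() for r in rs]
    return delivered_dose / (sum(all_readings) / len(all_readings))
```

If the card under-responds at oblique angles, the 3D factor comes out larger than the 1D one, compensating for the angular dependency when the card is irradiated from all directions in the field.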

  6. Improving the accuracy of convexity splitting methods for gradient flow equations

    NASA Astrophysics Data System (ADS)

    Glasner, Karl; Orizaga, Saulo

    2016-06-01

    This paper introduces numerical time discretization methods which significantly improve the accuracy of the convexity-splitting approach of Eyre (1998) [7], while retaining the same numerical cost and stability properties. A first order method is constructed by iteration of a semi-implicit method based upon decomposing the energy into convex and concave parts. A second order method is also presented based on backwards differentiation formulas. Several extrapolation procedures for iteration initialization are proposed. We show that, under broad circumstances, these methods have an energy decreasing property, leading to good numerical stability. The new schemes are tested using two evolution equations commonly used in materials science: the Cahn-Hilliard equation and the phase field crystal equation. We find that our methods can increase accuracy by many orders of magnitude in comparison to the original convexity-splitting algorithm. In addition, the optimal methods require little or no iteration, making their computation cost similar to the original algorithm.
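The flavor of Eyre-type convexity splitting can be shown on the simplest possible gradient flow, the double-well ODE u' = -(u^3 - u), rather than the full Cahn-Hilliard or phase field crystal PDEs; treating a linear stabilizing term a*u implicitly and the remainder explicitly gives an energy-decreasing linear update (an illustrative sketch, not the paper's schemes):

```python
def convexity_split_step(u, dt, a=2.0):
    """One semi-implicit step of the gradient flow u' = -(u^3 - u).
    The convex stabilizing term a*u is implicit, the rest explicit,
    so each step needs only a division, never a nonlinear solve."""
    return [(ui + dt * (a * ui - (ui ** 3 - ui))) / (1.0 + a * dt) for ui in u]

def energy(u):
    """Double-well energy sum of (u^2 - 1)^2 / 4, decreased by the flow."""
    return sum((ui ** 2 - 1.0) ** 2 / 4.0 for ui in u)
```

Starting from u = 0.5 with a large step dt = 0.5, the iteration decreases the energy monotonically and converges to the well at u = 1; this unconditional stability at large dt is the selling point of the splitting approach, and the paper's contribution is recovering higher accuracy on top of it.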

  7. Assessing the Accuracy of Alaska National Hydrography Data for Mapping and Science

    NASA Astrophysics Data System (ADS)

    Arundel, S. T.; Yamamoto, K. H.; Mantey, K.; Vinyard-Houx, J.; Miller-Corbett, C. D.

    2012-12-01

    In July, 2011, the National Geospatial Program embarked on a large-scale Alaska Topographic Mapping Initiative. Maps will be published through the USGS US Topo program. Mapping of the state requires an understanding of the spatial quality of the National Hydrography Dataset (NHD), which is the hydrographic source for the US Topo. The NHD in Alaska was originally produced from topographic maps at 1:63,360 scale. It is critical to determine whether the NHD is accurate enough to be represented at the targeted map scale of the US Topo (1:25,000). Concerns are the spatial accuracy of data and the density of the stream network. Unsuitably low accuracy can be a result of the lower positional accuracy standards required for the original 1:63,360 scale mapping, temporal changes in water features, or any combination of these factors. Insufficient positional accuracy results in poor vertical integration with data layers of higher positional accuracy. Poor integration is readily apparent on the US Topo, particularly relative to current imagery and elevation data. In Alaska, current IFSAR-derived digital terrain models meet positional accuracy requirements for 1:24,000-scale mapping. Initial visual assessments indicate a wide range in the quality of fit between features in NHD and the IFSAR. However, no statistical analysis had been performed to quantify NHD feature accuracy. Determining the absolute accuracy is cost prohibitive, because of the need to collect independent, well-defined test points for such analysis; however, quantitative analysis of relative positional error is a feasible alternative. The purpose of this study is to determine the baseline accuracy of Alaska NHD pertinent to US Topo production, and to recommend reasonable guidelines and costs for NHD improvement and updates. A second goal is to detect error trends that might help identify areas or features where data improvements are most needed. There are four primary objectives of the study: 1. Choose study

  8. How Can We Evaluate the Accuracy of Small Stream Maps? -Focusing on Sampling Method and Statistical Analysis -

    NASA Astrophysics Data System (ADS)

    Park, J.

    2010-12-01

The Washington State Department of Natural Resources’ (DNR) Forest Practices Habitat Conservation Plan (FPHCP) requires establishment of riparian management zones (RMZs) or equipment limitation zones (ELZs). In order to establish RMZs and ELZs, the DNR is required to update GIS-based stream maps showing the locations of type Ns (Non-fish seasonal) streams as well as type S (Shorelines of the state), type F (Fish habitat), and type Np (Non-fish perennial) streams. While there are few disputes over the positional accuracy of large streams, the representation of small streams such as type Ns and small type S or F streams (less than 10’ width) has been considered to need more improvement in positional accuracy. Numerous remotely sensed stream-mapping methods have been developed in the last several decades using an array of remote sensing data such as aerial photography, satellite optical imagery, and Digital Elevation Model (DEM) topographic data. While the positional accuracy of the final stream map products has been considered essential in determining map quality, the estimation or comparison of the positional accuracy of small stream map products has not been well studied, and is rarely attempted by remotely sensed stream map developers. Assessments of the positional accuracy of stream maps are not covered properly because it is not easy to acquire field reference data, especially for small streams under the canopy located in remote forest areas. More importantly, as of this writing, we are not aware of any prominent method to estimate or compare the positional accuracy of stream maps. Since general positional accuracy assessment methods for remotely sensed map products are designed for at least two-dimensional features, they are not suitable for linear features such as streams. Due to the difficulties inherent in stream features, estimation methods for stream maps' accuracy have not dealt with the positional accuracy itself but the hydrological

  9. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, the emergent behavior of the complex, adaptive, nonlinear organizations responsible for monitoring the success of ecorestoration projects tends to unconsciously minimize disorder, and QA/QC is an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although assessing the precision and accuracy of ecorestoration field data is conceptually the same as for laboratory data, the manner in which these data quality attributes are assessed differs. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess the precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  10. Accuracy assessment of minimum control points for UAV photography and georeferencing

    NASA Astrophysics Data System (ADS)

    Skarlatos, D.; Procopiou, E.; Stavrou, G.; Gregoriou, M.

    2013-08-01

    In recent years, Autonomous Unmanned Aerial Vehicles (AUAVs) have become popular among researchers across disciplines because they combine many advantages. One major application is monitoring and mapping. Their ability to fly beyond eyesight autonomously, collecting data over large areas whenever and wherever needed, makes them an excellent platform for monitoring hazardous areas or disasters. In both cases rapid mapping is needed while human access is not always possible. Indeed, current automatic processing of aerial photos using photogrammetry and computer vision algorithms allows rapid orthophotomap production and Digital Surface Model (DSM) generation, as tools for monitoring and damage assessment. In such cases, control point measurement using GPS is impossible, time-consuming, or costly. This work investigates the accuracies that can be attained using few or no control points over areas of one square kilometer, in two test sites: a typical block and a corridor survey. On-board GPS data logged during the AUAV's flight are used for direct georeferencing, while ground check points are used for evaluation. In addition, various control point layouts are tested using bundle adjustment for accuracy evaluation. Results indicate that it is possible to use on-board single-frequency GPS for direct georeferencing in cases of disaster management, areas without easy access, or even featureless areas. Due to the large number of tie points in the bundle adjustment, horizontal accuracy requirements can be fulfilled with a rather small number of control points, but vertical accuracy requirements may not.
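
    Check-point evaluation of direct georeferencing is usually summarized as per-axis RMSE over independent ground check points. A minimal sketch with hypothetical residuals (not the study's data):

```python
import numpy as np

def rmse(residuals):
    """Root-mean-square error of a 1-D array of check-point residuals (metres)."""
    r = np.asarray(residuals, dtype=float)
    return float(np.sqrt(np.mean(r ** 2)))

# Hypothetical residuals (measured minus surveyed) at ground check points, metres.
de = [0.04, -0.06, 0.05, -0.03, 0.07]   # easting
dn = [0.09, -0.10, 0.08, -0.11, 0.09]   # northing
dh = [0.05, -0.04, 0.06, -0.05, 0.04]   # height
print(rmse(de), rmse(dn), rmse(dh))
```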

  11. Studies of the accuracy of time integration methods for reaction-diffusion equations

    NASA Astrophysics Data System (ADS)

    Ropp, David L.; Shadid, John N.; Ober, Curtis C.

    2004-03-01

    In this study we present numerical experiments of time integration methods applied to systems of reaction-diffusion equations. Our main interest is in evaluating the relative accuracy and asymptotic order of accuracy of the methods on problems which exhibit an approximate balance between the competing component time scales. Nearly balanced systems can produce a significant coupling of the physical mechanisms and introduce a slow dynamical time scale of interest. These problems provide a challenging test for this evaluation and tend to reveal subtle differences between the various methods. The methods we consider include first- and second-order semi-implicit, fully implicit, and operator-splitting techniques. The test problems include a prototype propagating nonlinear reaction-diffusion wave, a non-equilibrium radiation-diffusion system, a Brusselator chemical dynamics system and a blow-up example. In this evaluation we demonstrate a "split personality" for the operator-splitting methods that we consider. While operator-splitting methods often obtain very good accuracy, they can also manifest a serious degradation in accuracy due to stability problems.
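
    The first-order behavior of operator splitting can be seen even on a toy linear system u' = (A + B)u with non-commuting A and B, where each substep is solved exactly yet the composed map is only first-order accurate. An illustrative sketch (not one of the paper's test problems):

```python
import numpy as np

# Non-commuting nilpotent generators: exp(t*A) = I + t*A exactly (A @ A = 0).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
I = np.eye(2)

def exact(t, u0):
    # A + B = [[0,1],[1,0]], so exp(t(A+B)) = [[cosh t, sinh t], [sinh t, cosh t]].
    c, s = np.cosh(t), np.sinh(t)
    return np.array([[c, s], [s, c]]) @ u0

def lie_split(n, u0, t=1.0):
    """n steps of first-order Lie splitting: solve with A, then with B."""
    dt = t / n
    u = u0.copy()
    for _ in range(n):
        u = (I + dt * B) @ ((I + dt * A) @ u)   # both substep flows are exact
    return u

u0 = np.array([1.0, 0.0])
ref = exact(1.0, u0)
e1 = np.linalg.norm(lie_split(100, u0) - ref)
e2 = np.linalg.norm(lie_split(200, u0) - ref)
print(e1 / e2)  # close to 2: halving dt halves the error, i.e. first order
```

    The splitting error here comes entirely from the non-zero commutator [A, B]; with exact substeps and commuting operators the split solution would be exact.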

  12. Thermal radiation view factor: Methods, accuracy and computer-aided procedures

    NASA Technical Reports Server (NTRS)

    Kadaba, P. V.

    1982-01-01

    Computer-aided thermal analysis programs that predict whether orbiting equipment will remain within a predetermined acceptable temperature range, in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and standard methods, which form the basis for various digital computer methods, and various numerical methods are presented. The physical model and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. Situations where accuracies are important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for efficient use of digital computers are included in the recommendations.
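
    The numerical schemes alluded to typically discretize the double-area view factor integral F12 = (1/A1) ∫∫ cosθ1 cosθ2 / (π r²) dA2 dA1. A midpoint-rule sketch for two directly opposed unit squares (the grid resolution is an arbitrary choice; the tabulated value for this geometry is about 0.1998):

```python
import numpy as np

def view_factor_parallel_squares(h=1.0, n=24):
    """Midpoint-rule estimate of F12 between two directly opposed unit squares
    separated by distance h. For this geometry cos(theta1) = cos(theta2) = h/r."""
    xc = (np.arange(n) + 0.5) / n          # cell centres on [0, 1]
    dA = (1.0 / n) ** 2
    x1, y1, x2, y2 = np.meshgrid(xc, xc, xc, xc, indexing="ij")
    r2 = (x1 - x2) ** 2 + (y1 - y2) ** 2 + h ** 2
    # Integrand cos1 * cos2 / (pi r^2) reduces to h^2 / (pi r^4); A1 = 1.
    integrand = h ** 2 / (np.pi * r2 ** 2)
    return float(np.sum(integrand) * dA * dA)

F = view_factor_parallel_squares()
print(F)  # should approach the tabulated value of about 0.1998
```

    Doubling n illustrates the accuracy/cost trade-off the abstract discusses: the cost grows as n⁴ while the midpoint error shrinks only quadratically.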

  13. Estimating orientation using magnetic and inertial sensors and different sensor fusion approaches: accuracy assessment in manual and locomotion tasks.

    PubMed

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-10-09

    Magnetic and inertial measurement units are an emerging technology to obtain 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of the stochastic (Extended Kalman Filter) and complementary (Non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading and when the movement exhibited stationary phases, evenly distributed 3D rotations, it occurred in a small volume, and its duration was greater than approximately 20 s. These results were independent from the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided.
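
    The complementary-filtering idea referenced here (trust the gyro at short time scales, the drift-free aiding sensor at long ones) can be sketched in scalar form for a single inclination angle. The gains and signals below are hypothetical, and this is a far simpler filter than the paper's nonlinear observer:

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse a gyro rate (rad/s) with an accelerometer-derived angle (rad).
    alpha near 1 trusts the short-term gyro integration; (1 - alpha) slowly
    pulls the estimate toward the drift-free accelerometer angle."""
    angle = accel_angles[0]
    out = []
    for w, a in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        out.append(angle)
    return out

# Static pose at 0.3 rad; the gyro reports a constant 0.01 rad/s bias.
dt, n = 0.01, 2000
gyro = [0.01] * n       # bias only: pure integration would drift by 0.2 rad in 20 s
accel = [0.3] * n       # noisy in practice; noise-free here for clarity
est = complementary_filter(gyro, accel, dt)
print(est[-1])          # stays near 0.3 instead of drifting toward 0.5
```

    This mirrors the abstract's finding that fusion mainly limits integration drift; heading needs a magnetometer in the same role, since accelerometers cannot observe yaw.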

  14. Estimating Orientation Using Magnetic and Inertial Sensors and Different Sensor Fusion Approaches: Accuracy Assessment in Manual and Locomotion Tasks

    PubMed Central

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-01-01

    Magnetic and inertial measurement units are an emerging technology to obtain 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of the stochastic (Extended Kalman Filter) and complementary (Non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading and when the movement exhibited stationary phases, evenly distributed 3D rotations, it occurred in a small volume, and its duration was greater than approximately 20 s. These results were independent from the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided. PMID:25302810

  15. Effect of Flexural Rigidity of Tool on Machining Accuracy during Microgrooving by Ultrasonic Vibration Cutting Method

    NASA Astrophysics Data System (ADS)

    Furusawa, Toshiaki

    2010-12-01

    It is necessary to form fine holes and grooves by machining in the manufacture of equipment in the medical and information fields, and the establishment of such a machining technology is required. In micromachining, the use of the ultrasonic vibration cutting method is promising and is examined here. In this study, I experimentally form microgrooves in stainless steel SUS304 by the ultrasonic vibration cutting method and examine the effects of the shape and material of the tool on the machining accuracy. As a result, the following are clarified. The evaluation of the machining accuracy in terms of the straightness of the finished surface revealed that there is an optimal tool rake angle, related to the increase in cutting resistance caused by work hardening and the enlarged cutting area. The straightness is improved by using a tool with low flexural rigidity. In particular, Young's modulus affects the cutting accuracy more significantly than the shape of the tool.

  16. Accuracy assessment of modeling architectural structures and details using terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Orych, A.; Czarnecka, P.

    2015-08-01

    One of the most important aspects when performing architectural documentation of cultural heritage structures is the accuracy of both the data and the products which are generated from these data: documentation in the form of 3D models or vector drawings. The paper describes an assessment of the accuracy of modelling data acquired using a terrestrial phase scanner in relation to the density of a point cloud representing the surface of different types of construction materials typical for cultural heritage structures. This analysis includes the impact of the scanning geometry: the incidence angle of the laser beam and the scanning distance. For the purposes of this research, a test field consisting of samples of different types of construction materials (brick, wood, plastic, plaster, a ceramic tile, sheet metal) was built. The study involved conducting measurements at different angles and from a range of distances for chosen scanning densities. Data, acquired in the form of point clouds, were then filtered and modelled. An accuracy assessment of the 3D model was conducted by fitting it to the point cloud. The reflection intensity of each type of material was also analyzed, to determine which construction materials have the highest and which the lowest reflectance coefficients, and how this variable changes for different scanning parameters. Additionally, measurements were taken of a fragment of a building in order to compare the results obtained in laboratory conditions with those taken in field conditions.

  17. Effects of CT image segmentation methods on the accuracy of long bone 3D reconstructions.

    PubMed

    Rathnayaka, Kanchana; Sahama, Tony; Schuetz, Michael A; Schmutz, Beat

    2011-03-01

    An accurate and accessible image segmentation method is in high demand for generating 3D bone models from CT scan data, as such models are required in many areas of medical research. Even though numerous sophisticated segmentation methods have been published over the years, most of them are not readily available to the general research community. Therefore, this study aimed to quantify the accuracy of three popular image segmentation methods, two implementations of intensity thresholding and Canny edge detection, for generating 3D models of long bones. In order to reduce user dependent errors associated with visually selecting a threshold value, we present a new approach of selecting an appropriate threshold value based on the Canny filter. A mechanical contact scanner in conjunction with a microCT scanner was utilised to generate the reference models for validating the 3D bone models generated from CT data of five intact ovine hind limbs. When the overall accuracy of the bone model is considered, the three investigated segmentation methods generated comparable results with mean errors in the range of 0.18-0.24 mm. However, for the bone diaphysis, Canny edge detection and Canny filter based thresholding generated 3D models with a significantly higher accuracy compared to those generated through visually selected thresholds. This study demonstrates that 3D models with sub-voxel accuracy can be generated utilising relatively simple segmentation methods that are available to the general research community.
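
    The paper's idea of deriving a threshold from edge information rather than visual choice can be illustrated with a plain gradient-magnitude edge map standing in for the Canny filter; the synthetic image and the 5% edge fraction below are assumptions for illustration, not the authors' parameters:

```python
import numpy as np

def edge_based_threshold(img, edge_frac=0.05):
    """Pick an intensity threshold as the mean intensity of the strongest-
    gradient pixels (a crude stand-in for Canny edge localization)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    cutoff = np.quantile(mag, 1.0 - edge_frac)
    return float(img[mag >= cutoff].mean())

# Synthetic "CT slice": a bright bone-like disk (about 1000) on a 0 background,
# box-blurred so the bone boundary spans several pixels.
y, x = np.mgrid[0:100, 0:100]
img = np.where((x - 50) ** 2 + (y - 50) ** 2 < 30 ** 2, 1000.0, 0.0)
kernel = np.ones(5) / 5.0
img = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 0, img)
img = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, img)
thr = edge_based_threshold(img)
print(thr)  # lands between background (0) and bone (1000) intensities
```

    Because the strongest gradients sit mid-ramp on the blurred boundary, the derived threshold falls near the half-maximum edge location rather than wherever a user happens to place it.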

  18. Gender differences in structured risk assessment: comparing the accuracy of five instruments.

    PubMed

    Coid, Jeremy; Yang, Min; Ullrich, Simone; Zhang, Tianqiang; Sizmur, Steve; Roberts, Colin; Farrington, David P; Rogers, Robert D

    2009-04-01

    Structured risk assessment should guide clinical risk management, but it is uncertain which instrument has the highest predictive accuracy among men and women. In the present study, the authors compared the Psychopathy Checklist-Revised (PCL-R; R. D. Hare, 1991, 2003); the Historical, Clinical, Risk Management-20 (HCR-20; C. D. Webster, K. S. Douglas, D. Eaves, & S. D. Hart, 1997); the Risk Matrix 2000-Violence (RM2000[V]; D. Thornton et al., 2003); the Violence Risk Appraisal Guide (VRAG; V. L. Quinsey, G. T. Harris, M. E. Rice, & C. A. Cormier, 1998); the Offenders Group Reconviction Scale (OGRS; J. B. Copas & P. Marshall, 1998; R. Taylor, 1999); and the total number of previous convictions among prisoners, prospectively assessed prerelease. The authors compared predischarge measures with subsequent offending and ranked the instruments using multivariate regression. Most instruments demonstrated significant but moderate predictive ability. The OGRS ranked highest for violence among men, and the PCL-R and HCR-20 H subscale ranked highest for violence among women. The OGRS and total previous acquisitive convictions demonstrated the greatest accuracy in predicting acquisitive offending among men and women. Actuarial instruments requiring no training to administer performed as well as personality assessment and structured risk assessment, and were superior among men for violence.

  19. Assessing the accuracy of the International Classification of Diseases codes to identify abusive head trauma: a feasibility study

    PubMed Central

    Berger, Rachel P; Parks, Sharyn; Fromkin, Janet; Rubin, Pamela; Pecora, Peter J

    2016-01-01

    Objective To assess the accuracy of an International Classification of Diseases (ICD) code-based operational case definition for abusive head trauma (AHT). Methods Subjects were children <5 years of age evaluated for AHT by a hospital-based Child Protection Team (CPT) at a tertiary care paediatric hospital with a completely electronic medical record (EMR) system. Subjects were designated as non-AHT traumatic brain injury (TBI) or AHT based on whether the CPT determined that the injuries were due to AHT. The sensitivity and specificity of the ICD-based definition were calculated. Results There were 223 children evaluated for AHT: 117 AHT and 106 non-AHT TBI. The sensitivity and specificity of the ICD-based operational case definition were 92% (95% CI 85.8 to 96.2) and 96% (95% CI 92.3 to 99.7), respectively. All errors in sensitivity and three of the four specificity errors were due to coder error; one specificity error was a physician error. Conclusions In a paediatric tertiary care hospital with an EMR system, the accuracy of an ICD-based case definition for AHT was high. Additional studies are needed to assess the accuracy of this definition in all types of hospitals in which children with AHT are cared for. PMID:24167034
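
    Sensitivity, specificity, and their confidence intervals follow directly from the 2×2 table. The counts below are hypothetical, chosen only to be roughly consistent with the reported totals (117 AHT, 106 non-AHT) and rates; the Wilson score interval is used as one common interval choice:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts for the ICD-based case definition.
tp, fn, tn, fp = 108, 9, 102, 4
sens, spec = sens_spec(tp, fn, tn, fp)
print(round(sens, 3), round(spec, 3))   # 0.923 0.962
print(wilson_ci(tp, tp + fn))           # interval for sensitivity
```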

  20. Application of bias correction methods to improve the accuracy of quantitative radar rainfall in Korea

    NASA Astrophysics Data System (ADS)

    Lee, J.-K.; Kim, J.-H.; Suk, M.-K.

    2015-04-01

    There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases in the rainfall estimation process into reflectivity (Z) measurement bias and QPE model bias, and applied bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias correction, this study utilized a bias correction algorithm for reflectivity, in which the reflectivity of the target single-pol radars is corrected against a reference dual-pol radar whose hardware and software biases have already been corrected. The study then applied two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall bias. The Z bias and rainfall-bias correction methods were applied to the RAR system, and its accuracy improved after correcting the Z bias. By rainfall type, although the accuracy for Changma-front and local torrential cases was slightly improved even without the Z bias correction, the accuracy for typhoon cases became worse than the existing results. As a result of the rainfall-bias correction, the RAR system with Z bias_LGC applied was clearly superior to the MFBC method, because the LGC method applies a different rainfall bias to each grid rainfall amount. By rainfall type, the Z bias_LGC produced more accurate rainfall estimates for all types than the Z bias correction alone, and outcomes in typhoon cases in particular were vastly superior to the others.
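
    The contrast drawn in the abstract is that MFBC applies a single multiplicative factor to the whole radar field, while LGC lets the factor vary by grid cell. A minimal MFBC sketch with hypothetical data:

```python
import numpy as np

def mean_field_bias(gauge, radar_at_gauge):
    """Single multiplicative bias factor: sum(G) / sum(R) over gauge sites."""
    return float(np.sum(gauge) / np.sum(radar_at_gauge))

# Hypothetical event: the radar systematically reports half the gauge rainfall.
gauge = np.array([10.0, 6.0, 8.0, 12.0])          # mm, at four gauge sites
radar_at_gauge = gauge / 2.0                      # collocated radar estimates
radar_field = np.array([[2.0, 4.0], [3.0, 5.0]])  # full radar grid, mm

f = mean_field_bias(gauge, radar_at_gauge)
corrected = f * radar_field
print(f)          # 2.0
print(corrected)  # every grid value doubled
```

    An LGC-style scheme would instead interpolate per-gauge ratios (or differences) to each grid cell, which is why it can outperform a single mean-field factor when the bias varies in space.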

  1. Analysis and improvement of accuracy, sensitivity, and resolution of the coherent gradient sensing method.

    PubMed

    Dong, Xuelin; Zhang, Changxing; Feng, Xue; Duan, Zhiyin

    2016-06-10

    The coherent gradient sensing (CGS) method, one kind of shear interferometry sensitive to surface slope, has been applied to full-field curvature measuring for decades. However, its accuracy, sensitivity, and resolution have not been studied clearly. In this paper, we analyze the accuracy, sensitivity, and resolution for the CGS method based on the derivation of its working principle. The results show that the sensitivity is related to the grating pitch and distance, and the accuracy and resolution are determined by the wavelength of the laser beam and the diameter of the reflected beam. The sensitivity is proportional to the ratio of grating distance to its pitch, while the accuracy will decline as this ratio increases. In addition, we demonstrate that using phase gratings as the shearing element can improve the interferogram and enhance accuracy, sensitivity, and resolution. The curvature of a spherical reflector is measured by CGS with Ronchi gratings and phase gratings under different experimental parameters to illustrate this analysis. All of the results are quite helpful for CGS applications. PMID:27409035

  2. An accuracy assessment of Cartesian-mesh approaches for the Euler equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.

  3. Assessing the Accuracy of Cone-Beam Computerized Tomography in Measuring Thinning Oral and Buccal Bone.

    PubMed

    Raskó, Zoltán; Nagy, Lili; Radnai, Márta; Piffkó, József; Baráth, Zoltán

    2016-06-01

    The aim of this study was to assess the accuracy and reliability of cone-beam computerized tomography (CBCT) in measuring thinning bone surrounding dental implants. Three implants were inserted into the mandible of a domestic pig at 6 different bone thicknesses on the vestibular and the lingual sides, and measurements were recorded using CBCT. The results were obtained, analyzed, and compared with areas without implants. Our results indicated that decreasing bone thickness and the presence of neighboring implants reduced the accuracy and reliability of CBCT for measuring bone volume around dental implants. We concluded that CBCT slightly undermeasured the bone thickness around the implant, both buccally and orally, compared with the same thickness without the implant. These results support the conclusion that the i-CAT NG with a 0.2 voxel size is not accurate for either qualitative or quantitative bone evaluation, especially when the bone is thinner than 0.72 mm in the horizontal dimension.

  4. Predictive accuracy of the Miller assessment for preschoolers in children with prenatal drug exposure.

    PubMed

    Fulks, Mary-Ann L; Harris, Susan R

    2005-01-01

    The Miller Assessment for Preschoolers (MAP) is a standardized test purported to identify preschool-aged children at risk for later learning difficulties. We evaluated the predictive validity of the MAP Total Score, relative to later cognitive performance and across a range of possible cut-points, in 37 preschool-aged children with prenatal drug exposure. Criterion measures were the Wechsler Preschool & Primary Scale of Intelligence-Revised (WPPSI-R), Test of Early Reading Ability-2, Peabody Picture Vocabulary Test-Revised, and Developmental Test of Visual Motor Integration. The highest predictive accuracy was demonstrated when the WPPSI-R was the criterion measure. The 14th percentile cutoff point demonstrated the highest predictive accuracy across all measures.

  5. Accuracy assessment of a mobile terrestrial lidar survey at Padre Island National Seashore

    USGS Publications Warehouse

    Lim, Samsung; Thatcher, Cindy A.; Brock, John C.; Kimbrow, Dustin R.; Danielson, Jeffrey J.; Reynolds, B.J.

    2013-01-01

    The higher point density and mobility of terrestrial laser scanning (light detection and ranging (lidar)) is desired when extremely detailed elevation data are needed for mapping vertically orientated complex features such as levees, dunes, and cliffs, or when highly accurate data are needed for monitoring geomorphic changes. Mobile terrestrial lidar scanners have the capability for rapid data collection on a larger spatial scale compared with tripod-based terrestrial lidar, but few studies have examined the accuracy of this relatively new mapping technology. For this reason, we conducted a field test at Padre Island National Seashore of a mobile lidar scanner mounted on a sport utility vehicle and integrated with a position and orientation system. The purpose of the study was to assess the vertical and horizontal accuracy of data collected by the mobile terrestrial lidar system, which is georeferenced to the Universal Transverse Mercator coordinate system and the North American Vertical Datum of 1988. To accomplish the study objectives, independent elevation data were collected by conducting a high-accuracy global positioning system survey to establish the coordinates and elevations of 12 targets spaced throughout the 12 km transect. These independent ground control data were compared to the lidar scanner-derived elevations to quantify the accuracy of the mobile lidar system. The performance of the mobile lidar system was also tested at various vehicle speeds and scan density settings (e.g. field of view and linear point spacing) to estimate the optimal parameters for desired point density. After adjustment of the lever arm parameters, the final point cloud accuracy was 0.060 m (east), 0.095 m (north), and 0.053 m (height). The very high density of the resulting point cloud was sufficient to map fine-scale topographic features, such as the complex shape of the sand dunes.

  6. Calibration of ground-based microwave radiometers - Accuracy assessment and recommendations for network users

    NASA Astrophysics Data System (ADS)

    Pospichal, Bernhard; Küchler, Nils; Löhnert, Ulrich; Crewell, Susanne; Czekala, Harald; Güldner, Jürgen

    2016-04-01

    Ground-based microwave radiometers (MWR) are becoming widely used in atmospheric remote sensing and are starting to be routinely operated by national weather services and other institutions. However, common standards for the calibration of these radiometers and detailed knowledge of their error characteristics are needed in order to assimilate the data into models. Intercomparisons of calibrations by different MWRs have rarely been done. Therefore, two calibration experiments were performed in Lindenberg (2014) and Meckenheim (2015) in the frame of TOPROF (COST Action ES1303) in order to assess uncertainties and differences between various instruments. In addition, a series of experiments was conducted in Oklahoma in autumn 2014. The focus lay on the performance of the two main instrument types currently used operationally: the MP-Profiler series by Radiometrics Corporation and the HATPRO series by Radiometer Physics GmbH (RPG). Both instrument types operate in two frequency bands, one along the 22 GHz water vapour line, the other on the lower wing of the 60 GHz oxygen absorption complex. The goal was to establish protocols for providing quality-controlled (QC) MWR data and their uncertainties. To this end, standardized calibration procedures for MWR were developed and recommendations for radiometer users were compiled. We focus here mainly on data types, integration times, and optimal settings for calibration intervals, for both absolute (liquid nitrogen, tipping curve) and relative (hot load, noise diode) calibrations. Besides the recommendations for ground-based MWR operators, we present methods to determine the accuracy of the calibration as well as means for automatic data quality control. In addition, some results from the intercomparison of different radiometers are discussed.
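
    The tipping-curve calibration mentioned above exploits the fact that, for a horizontally uniform atmosphere, the zenith opacity is the slope of opacity versus airmass. A noise-free synthetic sketch (the radiative-transfer constants and elevation angles here are assumptions for illustration):

```python
import numpy as np

T_MR, T_BG = 275.0, 2.7   # assumed mean radiating and cosmic background temperatures (K)

def opacity(tb):
    """Invert the simplified radiative transfer TB = Tmr - (Tmr - Tbg) * exp(-tau)."""
    return np.log((T_MR - T_BG) / (T_MR - tb))

# Synthetic scan: brightness temperatures at several elevation angles generated
# from a known zenith opacity tau_z (plane-parallel, horizontally uniform sky).
tau_z = 0.05
elev = np.radians([90.0, 42.0, 30.0, 19.5])
airmass = 1.0 / np.sin(elev)
tb = T_MR - (T_MR - T_BG) * np.exp(-tau_z * airmass)

# Tipping retrieval: least-squares slope of tau versus airmass through the origin.
tau = opacity(tb)
tau_z_est = float(np.sum(airmass * tau) / np.sum(airmass ** 2))
print(tau_z_est)  # recovers 0.05 on this noise-free synthetic scan
```

    In practice the linearity of tau versus airmass doubles as a quality-control check on the calibration itself.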

  7. Accuracy of Panoramic Radiograph in Assessment of the Relationship Between Mandibular Canal and Impacted Third Molars

    PubMed Central

    Tantanapornkul, Weeraya; Mavin, Darika; Prapaiphittayakun, Jaruthai; Phipatboonyarat, Natnicha; Julphantong, Wanchanok

    2016-01-01

    Background: The relationship between impacted mandibular third molar and mandibular canal is important for removal of this tooth. Panoramic radiography is one of the commonly used diagnostic tools for evaluating the relationship of these two structures. Objectives: To evaluate the accuracy of panoramic radiographic findings in predicting direct contact between mandibular canal and impacted third molars on 3D digital images, and to define panoramic criterion in predicting direct contact between the two structures. Methods: Two observers examined panoramic radiographs of 178 patients (256 impacted mandibular third molars). Panoramic findings of interruption of mandibular canal wall, isolated or with darkening of third molar root, diversion of mandibular canal and narrowing of third molar root were evaluated for 3D digital radiography. Direct contact between mandibular canal and impacted third molars on 3D digital images was then correlated with panoramic findings. Panoramic criterion was also defined in predicting direct contact between the two structures. Results: Panoramic findings of interruption of mandibular canal wall, isolated or with darkening of third molar root were statistically significantly correlated with direct contact between mandibular canal and impacted third molars on 3D digital images (p < 0.005), and were defined as panoramic criteria in predicting direct contact between the two structures. Conclusion: Interruption of mandibular canal wall, isolated or with darkening of third molar root observed on panoramic radiographs were effective in predicting direct contact between mandibular canal and impacted third molars on 3D digital images. Panoramic radiography is one of the efficient diagnostic tools for pre-operative assessment of impacted mandibular third molars. PMID:27398105

  8. Accuracy of three age estimation methods in children by measurements of developing teeth and carpals and epiphyses of the ulna and radius.

    PubMed

    Cameriere, Roberto; De Luca, Stefano; Biagi, Roberto; Cingolani, Mariano; Farronato, Giampietro; Ferrante, Luigi

    2012-09-01

    The aim of this study was to compare the accuracy of three methods for age estimation in children: measurement of open apices in tooth roots (T), the ratio between the total area of carpal bones and epiphyses of the ulna and radius (HW), and the combined method (THW). The sample consisted of 288 Caucasian Italian children (152 boys and 136 girls) aged between 5 and 15 years. Accuracy was determined as the difference between estimated and chronological age, assessed by analyzing individuals' orthopantomograms and hand-wrist radiographs. Accuracies were 0.41 years for girls and 0.54 years for boys with the THW method; 1.00 years for girls and 0.92 years for boys with the HW method; and 0.62 years for girls and 0.71 years for boys with the T method. THW is the most accurate technique for age estimation in these children.

  9. Estimation of diagnostic test accuracy without full verification: a review of latent class methods

    PubMed Central

    Collins, John; Huynh, Minh

    2014-01-01

    The performance of a diagnostic test is best evaluated against a reference test that is without error. For many diseases, this is not possible, and an imperfect reference test must be used. However, diagnostic accuracy estimates may be biased if inaccurately verified status is used as the truth. Statistical models have been developed to handle this situation by treating disease as a latent variable. In this paper, we conduct a systematized review of statistical methods using latent class models for estimating test accuracy and disease prevalence in the absence of complete verification. PMID:24910172
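
    The bias the abstract warns about can be made concrete: when an imperfect reference test is treated as truth, the "apparent" sensitivity P(test+ | reference+) differs from the true sensitivity P(test+ | disease+). A small sketch under a conditional-independence assumption, with illustrative parameter values:

```python
# Why an imperfect reference biases accuracy estimates: assuming the index
# test and the reference are conditionally independent given disease, the
# apparent sensitivity P(T+ | R+) can be computed in closed form and
# compared with the true sensitivity. All parameter values are illustrative.

def apparent_sensitivity(prev, se_t, sp_t, se_r, sp_r):
    """P(index test + | reference +) under conditional independence."""
    joint_pos = se_t * se_r * prev + (1 - sp_t) * (1 - sp_r) * (1 - prev)
    ref_pos = se_r * prev + (1 - sp_r) * (1 - prev)
    return joint_pos / ref_pos

true_se = 0.90
apparent = apparent_sensitivity(prev=0.3, se_t=0.90, sp_t=0.95,
                                se_r=0.80, sp_r=0.90)
print(f"true Se = {true_se:.2f}, apparent Se = {apparent:.3f}")  # biased downward
```

    Latent class models avoid this bias by treating the unobserved disease status as a latent variable rather than equating it with the reference result.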

  10. ESA ExoMars: Pre-launch PanCam Geometric Modeling and Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Li, D.; Li, R.; Yilmaz, A.

    2014-08-01

    ExoMars is the flagship mission of the European Space Agency (ESA) Aurora Programme. The mobile scientific platform, or rover, will carry a drill and a suite of instruments dedicated to exobiology and geochemistry research. As the ExoMars rover is designed to travel kilometres over the Martian surface, high-precision rover localization and topographic mapping will be critical for traverse path planning and safe planetary surface operations. For such purposes, the ExoMars rover Panoramic Camera system (PanCam) will acquire images that are processed into an imagery network providing vision information for photogrammetric algorithms to localize the rover and generate 3-D mapping products. Since the design of the ExoMars PanCam will influence localization and mapping accuracy, quantitative error analysis of the PanCam design will improve scientists' awareness of the achievable level of accuracy, and enable the PanCam design team to optimize its design to achieve the highest possible level of localization and mapping accuracy. Based on photogrammetric principles and uncertainty propagation theory, we have developed a method to theoretically analyze how mapping and localization accuracy would be affected by various factors, such as length of stereo hard-baseline, focal length, and pixel size, etc.
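
    The dependence on stereo baseline, focal length, and pixel size can be illustrated with the standard first-order propagation of stereo matching error into range error (a generic photogrammetric formula, not the paper's full error model; the parameter values are illustrative, not actual PanCam specifications):

```python
# First-order stereo range uncertainty: sigma_Z = Z**2 * sigma_p / (B * f),
# with target distance Z (m), baseline B (m), focal length f expressed in
# pixels, and disparity matching uncertainty sigma_p (pixels).
# Parameter values below are illustrative assumptions.

def range_uncertainty(Z, baseline, focal_px, sigma_px):
    return Z**2 * sigma_px / (baseline * focal_px)

for Z in (5.0, 20.0, 50.0):  # target distance in metres
    s = range_uncertainty(Z, baseline=0.5, focal_px=1200.0, sigma_px=0.3)
    print(f"Z = {Z:5.1f} m -> sigma_Z = {s:.3f} m")
```

    The quadratic growth with distance is why baseline length is such a strong design lever for mapping accuracy.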

  11. Enhancing the accuracy of the Fowler method for monitoring non-constant work functions

    NASA Astrophysics Data System (ADS)

    Friedl, R.

    2016-04-01

    The Fowler method is a prominent non-invasive technique to determine the absolute work function of a surface based on the photoelectric effect. The evaluation procedure relies on the correlation of the photocurrent with the incident photon energy hν which is mainly dependent on the surface work function χ. Applying Fowler's theory of the photocurrent, the measurements can be fitted by the theoretical curve near the threshold hν⪆χ yielding the work function χ and a parameter A. The straightforward experimental implementation of the Fowler method is to use several particular photon energies, e.g. via interference filters. However, with a realization like that the restriction hν ≈ χ can easily be violated, especially when the work function of the material is decreasing during the measurements as, for instance, with coating or adsorption processes. This can lead to an overestimation of the evaluated work function value of typically some 0.1 eV, reaching up to more than 0.5 eV in an unfavorable case. A detailed analysis of the Fowler theory now reveals the background of that effect and shows that the fit-parameter A can be used to assess the accuracy of the determined value of χ conveniently during the measurements. Moreover, a scheme is introduced to quantify a potential overestimation and to perform a correction to χ to a certain extent. The issues are demonstrated exemplarily at the monitoring of the work function reduction of a stainless steel sample surface due to caesiation.
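
    The fitting idea can be sketched in the zero-temperature limit of Fowler's theory, where the photocurrent scales as I ∝ (hν − χ)² just above threshold, so that √I is linear in photon energy and its x-intercept estimates χ. This is a simplified stand-in for the full Fowler fit (which also yields the parameter A); the data below are synthetic.

```python
# A minimal sketch of threshold extraction in the zero-temperature limit
# of Fowler's theory: fit a line to (h*nu, sqrt(I)) and take its
# x-intercept as the work function chi. Illustrative synthetic data only.

def estimate_work_function(energies_eV, currents):
    """Least-squares line through (h*nu, sqrt(I)); returns its x-intercept."""
    ys = [c**0.5 for c in currents]
    n = len(energies_eV)
    mx = sum(energies_eV) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(energies_eV, ys))
    sxx = sum((x - mx)**2 for x in energies_eV)
    slope = sxy / sxx
    return mx - my / slope  # where the fitted line crosses sqrt(I) = 0

# Synthetic photocurrents generated with chi = 4.4 eV (illustrative)
hv = [4.6, 4.8, 5.0, 5.2]
I = [(e - 4.4)**2 for e in hv]
print(round(estimate_work_function(hv, I), 2))
```

    The overestimation effect discussed in the abstract arises precisely when the photon energies used stray too far above the (possibly decreasing) threshold, violating the near-threshold assumption behind such fits.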

  13. Evaluating IRT- and CTT-Based Methods of Estimating Classification Consistency and Accuracy Indices from Single Administrations

    ERIC Educational Resources Information Center

    Deng, Nina

    2011-01-01

    Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, LEE method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…

  14. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    SciTech Connect

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher fidelity data for calibration of material models used in Glass-To-Metal (GTM) seal analyses. Specifically, a Thermo-Multi-Linear Elastic Plastic (thermo-MLEP) material model has been defined for SS304L and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  15. Assessing the accuracy and repeatability of automated photogrammetrically generated digital surface models from unmanned aerial system imagery

    NASA Astrophysics Data System (ADS)

    Chavis, Christopher

    Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method to model one location over a period of time. Finally, this study determined if the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observance of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.

  16. Application of a Monte Carlo accuracy assessment tool to TDRS and GPS

    NASA Technical Reports Server (NTRS)

    Pavloff, Michael S.

    1994-01-01

    In support of a NASA study on the application of radio interferometry to satellite orbit determination, MITRE developed a simulation tool for assessing interferometric tracking accuracy. Initially, the tool was applied to the problem of determining optimal interferometric station siting for orbit determination of the Tracking and Data Relay Satellite (TDRS). Subsequently, the Orbit Determination Accuracy Estimator (ODAE) was expanded to model the general batch maximum likelihood orbit determination algorithms of the Goddard Trajectory Determination System (GTDS) with measurement types including not only group and phase delay from radio interferometry, but also range, range rate, angular measurements, and satellite-to-satellite measurements. The user of ODAE specifies the statistical properties of error sources, including inherent observable imprecision, atmospheric delays, station location uncertainty, and measurement biases. Upon Monte Carlo simulation of the orbit determination process, ODAE calculates the statistical properties of the error in the satellite state vector and any other parameters for which a solution was obtained in the orbit determination. This paper presents results from ODAE application to two different problems: (1) determination of optimal geometry for interferometric tracking of TDRS, and (2) expected orbit determination accuracy for Global Positioning System (GPS) tracking of low-earth orbit (LEO) satellites. Conclusions about optimal ground station locations for TDRS orbit determination by radio interferometry are presented, and the feasibility of GPS-based tracking for IRIDIUM, a LEO mobile satellite communications (MOBILSATCOM) system, is demonstrated.
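
    The Monte Carlo idea behind a tool like ODAE can be shown in miniature: draw random errors for each measurement source, run the estimator on each trial, and report the empirical statistics of the estimation error. In this toy sketch the "estimator" is an ordinary least-squares mean of noisy range measurements; the names and noise levels are illustrative, not ODAE's.

```python
# Monte Carlo assessment of estimator accuracy, in miniature: repeat the
# estimation on randomly perturbed measurements and compute the empirical
# standard deviation of the resulting errors. Illustrative values only.

import random
import statistics

def monte_carlo_error_std(n_trials, n_meas, sigma, seed=1):
    rng = random.Random(seed)
    errors = []
    for _ in range(n_trials):
        true_value = 100.0  # arbitrary truth; cancels in the error
        meas = [true_value + rng.gauss(0.0, sigma) for _ in range(n_meas)]
        estimate = sum(meas) / len(meas)      # stand-in for the full estimator
        errors.append(estimate - true_value)  # error of this trial
    return statistics.pstdev(errors)

# Theory predicts sigma / sqrt(n_meas) = 0.5 / 5 = 0.1 for these values.
print(round(monte_carlo_error_std(n_trials=2000, n_meas=25, sigma=0.5), 3))
```

    ODAE does the same thing at scale, with the batch orbit determination in place of the simple mean and a state vector in place of a scalar.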

  17. Qualitative methods for assessing risk

    SciTech Connect

    Mahn, J.A.; Hannaman, G.W.; Kryska, P.

    1995-03-01

    The purpose of this document is to describe a qualitative risk assessment process that supplements the requirements of DOE/AL 5481.1B. Although facility managers have a choice of assessing risk either quantitatively or qualitatively, trade-offs are involved in making the most appropriate choice for a given application. The results that can be obtained from a quantitative risk assessment are significantly more robust than those derived from a qualitative approach. However, the advantages of quantitative risk assessment are achieved at a greater expenditure of money, time and convenience. This document provides the elements of a framework for performing a much less costly qualitative risk assessment, while retaining the best attributes of quantitative methods. The approach discussed herein will (1) provide facility managers with the tools to prepare consistent, site-wide assessments, and (2) aid the reviewers who may be tasked to evaluate the assessments. Added cost/benefit measures of the qualitative methodology include the identification of mechanisms for optimally allocating resources for minimizing risk in an expeditious and fiscally responsible manner.

  18. Application of bias correction methods to improve the accuracy of quantitative radar rainfall in Korea

    NASA Astrophysics Data System (ADS)

    Lee, J.-K.; Kim, J.-H.; Suk, M.-K.

    2015-11-01

    There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases in the rainfall estimation process into the reflectivity measurement bias and the rainfall estimation bias of the Quantitative Precipitation Estimation (QPE) model, and applied bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias correction, which addresses reflectivity biases arising when measuring rainfall, this study utilized a bias correction algorithm whose concept is that the reflectivity of the target single-pol radars is corrected against a reference dual-pol radar that has itself been corrected for hardware and software biases. This study then applied two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall estimation bias of the QPE model. The Z bias and rainfall estimation bias correction methods were applied to the RAR system, and the accuracy of the RAR system improved after correcting the Z bias. Among rainfall types, although the accuracy for the Changma front and local torrential cases was only slightly improved, without the Z bias correction the accuracy for the typhoon cases in particular was worse than the existing results. As a result of the rainfall estimation bias correction, the Z bias_LGC was superior to the MFBC method because the LGC method applies a different rainfall bias to each grid rainfall amount. Across rainfall types, the Z bias_LGC produced more accurate rainfall estimates for all types than the Z bias alone, and its outcomes for the typhoon cases were vastly superior to the others.
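
    The Mean Field Bias correction described above reduces to a single multiplicative factor, the ratio of summed gauge rainfall to summed radar rainfall at the gauge sites, applied uniformly to the radar field; the Local Gauge Correction instead varies the factor from grid cell to grid cell. A minimal sketch with illustrative values:

```python
# Mean Field Bias (MFB) correction: one factor, the ratio of summed gauge
# rainfall to summed radar rainfall at collocated sites, applied uniformly
# to the whole radar rainfall field. Values are illustrative.

def mean_field_bias(gauge_mm, radar_mm):
    return sum(gauge_mm) / sum(radar_mm)

gauge = [12.0, 8.0, 20.0]      # rain-gauge totals (mm)
radar = [10.0, 6.0, 16.0]      # collocated radar estimates (mm)
factor = mean_field_bias(gauge, radar)
corrected_field = [r * factor for r in [10.0, 6.0, 16.0, 4.0]]
print(round(factor, 3), [round(v, 2) for v in corrected_field])
```

    The abstract's finding that LGC outperforms MFBC reflects the limitation visible here: one global factor cannot capture spatially varying bias.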

  19. Assessment of the Geodetic and Color Accuracy of Multi-Pass Airborne/Mobile Lidar Data

    NASA Astrophysics Data System (ADS)

    Pack, R. T.; Petersen, B.; Sunderland, D.; Blonquist, K.; Israelsen, P.; Crum, G.; Fowles, A.; Neale, C.

    2008-12-01

    The ability to merge lidar and color image data acquired by multiple passes of an aircraft or van is largely dependent on the accuracy of the navigation system that estimates the dynamic position and orientation of the sensor. We report an assessment of the performance of a Riegl Q560 lidar transceiver combined with a Litton LN-200 inertial measurement unit (IMU) based NovAtel SPAN GPS/IMU system and a Panasonic HD Video Camera system. Several techniques are reported that were used to maximize the performance of the GPS/IMU system in generating precisely merged point clouds. The airborne data used included eight flight lines all overflying the same building on the campus at Utah State University. These lines were flown at the FAA minimum altitude of 1000 feet for fixed-wing aircraft. The mobile data was then acquired with the same system mounted to look sideways out of a van several months later. The van was driven around the same building at variable speed in order to avoid pedestrians. An absolute accuracy of about 6 cm and a relative accuracy of less than 2.5 cm one-sigma are documented for the merged data. Several techniques are also reported for merging of the color video data stream with the lidar point cloud. A technique for back-projecting and burning lidar points within the video stream enables the verification of co-boresighting accuracy. The resulting pixel-level alignment is accurate to within the size of a lidar footprint. The techniques described in this paper enable the display of high-resolution colored points of high detail and color clarity.

  20. Accuracy and repeatability of two methods of gait analysis - GaitRite™ and Mobility Lab™ - in subjects with cerebellar ataxia.

    PubMed

    Schmitz-Hübsch, Tanja; Brandt, Alexander U; Pfueller, Caspar; Zange, Leonora; Seidel, Adrian; Kühn, Andrea A; Paul, Friedemann; Minnerop, Martina; Doss, Sarah

    2016-07-01

    Instrumental gait analysis is increasingly recognized as a useful tool for the evaluation of movement disorders. The various assessment devices available to date have mostly been evaluated in healthy populations only. We aimed to explore whether reliability and validity seen in healthy subjects can also be assumed in subjects with cerebellar ataxic gait. Gait was recorded simultaneously with two devices - a sensor-embedded walkway and an inertial sensor based system - to explore test accuracy in two groups of subjects: one with mild to moderate cerebellar ataxia due to a subtype of autosomal-dominantly inherited neurodegenerative disorder (SCA14), the other were healthy subjects matched for age and height (CTR). Test precision was assessed by retest within session for each device. In conclusion, accuracy and repeatability of gait measurements were not compromised by ataxic gait disorder. The accuracy of spatial measures was speed-dependent and a direct comparison of stride length from both devices will be most reliably made at comfortable speed. Measures of stride variability had low agreement between methods in CTR and at retest in both groups. However, the marked increase of stride variability in ataxia outweighs the observed amount of imprecision. PMID:27289221

  2. Testing the accuracy of the retrospective recall method used in expertise research.

    PubMed

    Howard, Robert W

    2011-12-01

    Expertise typically develops slowly over years, and controlled experiments to study its development may be impractical. Researchers often use a correlational, retrospective recall method in which participants recall career data, sometimes over many years before. However, recall accuracy is uncertain. The present study investigated the accuracy of recalled career data for up to 38 years, in over 600 international chess players. Participants' estimates of their entry year into international chess, total career games played, and number of games in a typical year were compared with the known true values. Entry year typically was recalled fairly accurately, and accuracy did not diminish systematically with time since list entry from 10 years earlier to 25 or more years earlier. On average, games-count estimates were reasonably accurate. However, some participants were very inaccurate, and some were more inaccurate in their total-games counts and entry-year estimates. The retrospective recall method yields usable data but may have some accuracy problems. Possible remedies are outlined. PMID:21671138

  3. Accuracy, precision, usability, and cost of free chlorine residual testing methods.

    PubMed

    Murray, Anna; Lantagne, Daniele

    2015-03-01

    Chlorine is the most widely used disinfectant worldwide, partially because residual protection is maintained after treatment. This residual is measured using colorimetric test kits varying in accuracy, precision, training required, and cost. Seven commercially available colorimeters, color wheel and test tube comparator kits, pool test kits, and test strips were evaluated for use in low-resource settings by: (1) measuring in quintuplicate 11 samples from 0.0-4.0 mg/L free chlorine residual in laboratory and natural light settings to determine accuracy and precision; (2) conducting volunteer testing where participants used and evaluated each test kit; and (3) comparing costs. Laboratory accuracy ranged from 5.1-40.5% measurement error, with colorimeters the most accurate and test strip methods the least. Variation between laboratory and natural light readings occurred with one test strip method. Volunteer participants found test strip methods easiest and color wheel methods most difficult, and were most confident in the colorimeter and least confident in test strip methods. Costs range from 3.50-444 USD for 100 tests. Application of a decision matrix found colorimeters and test tube comparator kits were most appropriate for use in low-resource settings; it is recommended users apply the decision matrix themselves, as the appropriate kit might vary by context.

  4. Shuttle radar topography mission accuracy assessment and evaluation for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Mercuri, Pablo Alberto

    Digital Elevation Models (DEMs) are increasingly used even in low relief landscapes for multiple mapping applications and modeling approaches such as surface hydrology, flood risk mapping, agricultural suitability, and generation of topographic attributes. The National Aeronautics and Space Administration (NASA) has produced a nearly global database of highly accurate elevation data, the Shuttle Radar Topography Mission (SRTM) DEM. The main goals of this thesis were to investigate quality issues of SRTM, provide measures of vertical accuracy with emphasis on low relief areas, and to analyze the performance for the generation of physical boundaries and streams for watershed modeling and characterization. The absolute and relative accuracy of the two SRTM resolutions, at 1 and 3 arc-seconds, were investigated to generate information that can be used as a reference in areas with similar characteristics in other regions of the world. The absolute accuracy was obtained from accurate point estimates using the best available federal geodetic network in Indiana. The SRTM root mean square error for this area of the Midwest US surpassed data specifications. It was on the order of 2 meters for the 1 arc-second resolution in flat areas of the Midwest US. Estimates of error were smaller for the global coverage 3 arc-second data with very similar results obtained in the flat plains in Argentina. In addition to calculating the vertical accuracy, the impacts of physiography and terrain attributes, like slope, on the error magnitude were studied. The assessment also included analysis of the effects of land cover on vertical accuracy. Measures of local variability were described to identify the adjacency effects produced by surface features in the SRTM DEM, like forests and manmade features near the geodetic point. 
Spatial relationships among the bare-earth National Elevation Data and SRTM were also analyzed to assess the relative accuracy that was 2.33 meters in terms of the total
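
    The vertical accuracy measure used throughout this assessment is the root mean square error (RMSE) of DEM elevations against higher-accuracy geodetic check points. A minimal sketch with illustrative elevations, not the thesis data:

```python
# Vertical accuracy of a DEM as the RMSE of its elevations against
# geodetic check points. All elevation values below are illustrative.

def vertical_rmse(dem_z, checkpoint_z):
    n = len(dem_z)
    return (sum((d - c)**2 for d, c in zip(dem_z, checkpoint_z)) / n) ** 0.5

dem_z        = [231.4, 228.9, 240.2, 235.0]   # metres, sampled from the DEM
checkpoint_z = [230.1, 229.5, 241.8, 234.2]   # metres, geodetic control
print(round(vertical_rmse(dem_z, checkpoint_z), 2))
```

    Stratifying this computation by slope class or land cover, as the thesis does, just means applying it separately to each subset of check points.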

  5. Qualitative methods for assessing risk

    SciTech Connect

    Mahn, J.A.; Hannaman, G.W.; Kryska, P.

    1995-04-01

    The Department of Energy's (DOE) non-nuclear facilities generally require only a qualitative accident analysis to assess facility risks in accordance with DOE Order 5481.1B, Safety Analysis and Review System. Achieving a meaningful qualitative assessment of risk necessarily requires the use of suitable non-numerical assessment criteria. Typically, the methods and criteria for assigning facility-specific accident scenarios to the qualitative severity and likelihood classification system in the DOE order require significant judgment in many applications. Systematic methods for more consistently assigning the total accident scenario frequency and associated consequences are required to substantiate and enhance future risk ranking between various activities at Sandia National Laboratories (SNL). SNL's Risk Management and National Environmental Policy Act (NEPA) Department has developed an improved methodology for performing qualitative risk assessments in accordance with the DOE order requirements. Products of this effort are an improved set of qualitative descriptions that permit (1) definition of the severity for both technical and programmatic consequences that may result from a variety of accident scenarios, and (2) qualitative representation of the likelihood of occurrence. These sets of descriptions are intended to facilitate proper application of DOE criteria for assessing facility risks.

  6. Accuracy assessment of 3D bone reconstructions using CT: an in vitro comparison.

    PubMed

    Lalone, Emily A; Willing, Ryan T; Shannon, Hannah L; King, Graham J W; Johnson, James A

    2015-08-01

    Computed tomography provides high contrast imaging of the joint anatomy and is used routinely to reconstruct 3D models of the osseous and cartilage geometry (CT arthrography) for use in the design of orthopedic implants, for computer assisted surgeries, and for computational dynamic and structural analysis. The objective of this study was to assess the accuracy of bone and cartilage surface model reconstructions by comparing reconstructed geometries with bone digitizations obtained using an optical tracking system. Bone surface digitizations obtained in this study determined the ground truth measure for the underlying geometry. We evaluated the use of a commercially available reconstruction technique with clinical CT scanning protocols, using the elbow joint as an example of a surface with complex geometry. To assess the accuracies of the reconstructed models (8 fresh frozen cadaveric specimens) against the ground truth bony digitization, as defined by this study, proximity mapping was used to calculate residual error. The overall mean error was less than 0.4 mm in the cortical region and 0.3 mm in the subchondral region of the bone. Similarly, creating 3D cartilage surface models from CT scans using air contrast had a mean error of less than 0.3 mm. Results from this study indicate that clinical CT scanning protocols and commonly used, commercially available reconstruction algorithms can create models which accurately represent the true geometry.
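
    A toy version of the proximity mapping used above: for each vertex of the reconstructed model, find the distance to the nearest ground-truth digitization point, then summarize the residuals. Brute force suffices at this scale (real point clouds would use a k-d tree); the coordinates are illustrative.

```python
# Proximity mapping in miniature: residual error of each model point is
# its distance to the nearest ground-truth digitization point; report the
# mean residual. Coordinates below are illustrative.

def nearest_distance(p, cloud):
    return min(sum((a - b)**2 for a, b in zip(p, q))**0.5 for q in cloud)

def mean_residual(model_pts, truth_pts):
    return sum(nearest_distance(p, truth_pts) for p in model_pts) / len(model_pts)

truth = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # digitized points
model = [(0.1, 0.0, 0.0), (1.0, 0.2, 0.0), (0.0, 1.0, 0.3)]  # reconstructed vertices
print(round(mean_residual(model, truth), 3))
```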

  7. Accuracy of the Generalized Self-Consistent Method in Modelling the Elastic Behaviour of Periodic Composites

    NASA Technical Reports Server (NTRS)

    Walker, Kevin P.; Freed, Alan D.; Jordan, Eric H.

    1993-01-01

    Local stress and strain fields in the unit cell of an infinite, two-dimensional, periodic fibrous lattice have been determined by an integral equation approach. The effect of the fibres is assimilated to an infinite two-dimensional array of fictitious body forces in the matrix constituent phase of the unit cell. By subtracting a volume averaged strain polarization term from the integral equation we effectively embed a finite number of unit cells in a homogenized medium in which the overall stress and strain correspond to the volume averaged stress and strain of the constrained unit cell. This paper demonstrates that the zeroth term in the governing integral equation expansion, which embeds one unit cell in the homogenized medium, corresponds to the generalized self-consistent approximation. By comparing the zeroth term approximation with higher order approximations to the integral equation summation, both the accuracy of the generalized self-consistent composite model and the rate of convergence of the integral summation can be assessed. Two example composites are studied. For a tungsten/copper elastic fibrous composite the generalized self-consistent model is shown to provide accurate, effective, elastic moduli and local field representations. The local elastic transverse stress field within the representative volume element of the generalized self-consistent method is shown to be in error by much larger amounts for a composite with periodically distributed voids, but homogenization leads to a cancelling of errors, and the effective transverse Young's modulus of the voided composite is shown to be in error by only 23% at a void volume fraction of 75%.

  8. Accuracy and feasibility of video analysis for assessing hamstring flexibility and validity of the sit-and-reach test.

    PubMed

    Mier, Constance M

    2011-12-01

    The accuracy of video analysis of the passive straight-leg raise test (PSLR) and the validity of the sit-and-reach test (SR) were tested in 60 men and women. Computer software measured static hip-joint flexion accurately. High within-session reliability of the PSLR was demonstrated (R > .97). Test-retest (separate days) reliability for SR was high in men (R = .97) and women (R = .98), and moderate for PSLR in men (R = .79) and women (R = .89). SR validity (PSLR as criterion) was higher in women (Day 1, r = .69; Day 2, r = .81) than men (Day 1, r = .64; Day 2, r = .66). In conclusion, video analysis is accurate and feasible for assessing static joint angles, the PSLR and SR tests are very reliable methods for assessing flexibility, and the SR validity for hamstring flexibility was found to be moderate in women and low in men.

  9. Improving the accuracy of multiple integral evaluation by applying Romberg's method

    NASA Astrophysics Data System (ADS)

    Zhidkov, E. P.; Lobanov, Yu. Yu.; Rushai, V. D.

    2009-02-01

    Romberg’s method, which is used to improve the accuracy of one-dimensional integral evaluation, is extended to multiple integrals if they are evaluated using the product of composite quadrature formulas. Under certain conditions, the coefficients of the Romberg formula are independent of the integral’s multiplicity, which makes it possible to use a simple evaluation algorithm developed for one-dimensional integrals. As examples, integrals of multiplicity two to six are evaluated by Romberg’s method and the results are compared with other methods.
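
    Romberg's method in one dimension repeatedly halves the trapezoidal step and Richardson-extrapolates the results; as the abstract notes, under certain conditions the same extrapolation coefficients carry over to multiple integrals built from products of composite quadrature formulas. A one-dimensional sketch:

```python
# Romberg integration: build the triangular table R[i][j], where column 0
# holds composite trapezoid values at step h/2**i and each further column
# is a Richardson extrapolation of the previous one.

import math

def romberg(f, a, b, levels=5):
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h *= 0.5
        # composite trapezoid, reusing R[i-1][0] and adding the new midpoints
        total = sum(f(a + (2 * k - 1) * h) for k in range(1, 2**(i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * total
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4**j - 1)
    return R[levels - 1][levels - 1]

# Error for the integral of sin on [0, pi], whose exact value is 2.
print(abs(romberg(math.sin, 0.0, math.pi) - 2.0))
```

    For a product rule over a multiple integral, each refinement level halves the step in every dimension, and the extrapolation across levels uses the same 1/(4**j - 1) coefficients as above.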

  10. A PRIOR EVALUATION OF TWO-STAGE CLUSTER SAMPLING FOR ACCURACY ASSESSMENT OF LARGE-AREA LAND-COVER MAPS

    EPA Science Inventory

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, withi...

  11. The accuracy of three methods of age estimation using radiographic measurements of developing teeth.

    PubMed

    Liversidge, H M; Lyons, F; Hector, M P

    2003-01-01

    The accuracy of age estimation using three quantitative methods of developing permanent teeth was investigated. These were Mörnstad et al. [Scand. J. Dent. Res. 102 (1994) 137], Liversidge and Molleson [J. For. Sci. 44 (1999) 917] and Carels et al. [J. Biol. Bucc. 19 (1991) 297]. The sample consisted of 145 white Caucasian children (75 girls, 70 boys) aged between 8 and 13 years. Tooth length and apex width of the mandibular canine, premolars and first and second molars were measured from orthopantomographs using a digitiser. These data were substituted into the equations of the three methods, and estimated age was calculated and compared to chronological age. Age was under-estimated in boys and girls using all three methods; the mean difference between chronological and estimated ages for method I was -0.83 (standard deviation +/-0.96) years for boys and -0.67 (+/-0.76) years for girls; method II -0.79 (+/-0.93) and -0.63 (+/-0.92); method III -1.03 (+/-1.48) and -1.35 (+/-1.11) for boys and girls, respectively. Further analysis of age cohorts found the most accurate method to be method I for the age group 8.00-8.99 years, where age could be predicted to 0.14+/-0.44 years (boys) and 0.10+/-0.32 years (girls). Accuracy was greater for younger children than for older children and decreased with age.

  12. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to implement lunar surface sampling and to return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a rotating camera platform. Optical images of the sampling area can be obtained by PCAM as two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images. Lunar terrain can then be reconstructed based on photogrammetry. Installation parameters of PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied in this work. The observation program and specific solution methods for the installation parameters are then introduced. Parametric solution accuracy is analyzed using observations obtained from the PCAM scientific validation experiment, which was used to test the authenticity of the PCAM detection process, ground data processing methods, product quality and so on. Analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images to within 1 pixel. Thus, the measurement methods and parameter accuracy studied in this paper meet the needs of engineering and scientific applications. Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  13. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties and is calculated in the same manner. Generally, a small number of arithmetic operations, resulting in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  14. An assessment of vapour pressure estimation methods.

    PubMed

    O'Meara, Simon; Booth, Alastair Murray; Barley, Mark Howard; Topping, David; McFiggans, Gordon

    2014-09-28

    Laboratory measurements of vapour pressures for atmospherically relevant compounds were collated and used to assess the accuracy of vapour pressure estimates generated by seven estimation methods and impacts on predicted secondary organic aerosol. Of the vapour pressure estimation methods that were applicable to all the test set compounds, the Lee-Kesler [Reid et al., The Properties of Gases and Liquids, 1987] method showed the lowest mean absolute error and the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2008, 269, 117-133] method showed the lowest mean bias error (when both used normal boiling points estimated using the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2004, 226, 45-63] method). The effect of varying vapour pressure estimation methods on secondary organic aerosol (SOA) mass loading and composition was investigated using an absorptive partitioning equilibrium model. The Myrdal and Yalkowsky [Myrdal and Yalkowsky, Ind. Eng. Chem. Res., 1997, 36, 2494-2499] vapour pressure estimation method using the Nannoolal et al. [Nannoolal et al., Fluid Phase Equilib., 2004, 226, 45-63] normal boiling point gave the most accurate estimation of SOA loading despite not being the most accurate for vapour pressures alone. PMID:25105180

  15. Dietary assessment methods: dietary records.

    PubMed

    Ortega, Rosa M; Pérez-Rodrigo, Carmen; López-Sobaler, Ana M

    2015-02-26

    Among methods for assessing current diet, dietary records or food diaries stand out for their interest and validity. It is a prospective, open-ended survey method collecting data about the foods and beverages consumed over a previously specified period of time. Dietary records can be used to estimate the current diet of individuals and population groups, as well as to identify groups at risk of inadequacy. The method is of interest for use in epidemiological and clinical studies. High validity and precision have been reported for the method when used following adequate procedures and considering a sufficient number of days. Thus, dietary records are often considered a reference method in validation studies. Nevertheless, the method is affected by error and has limitations, due mainly to the tendency of subjects to report food consumption close to what is socially desirable. Additional problems are related to the high burden posed on respondents. Respondents may also change their food behavior in order to simplify the recording of food intake, and some subjects can experience difficulties in writing down the foods and beverages consumed or in describing portion sizes. Increasing the number of days observed reduces the quality of completed diet records. The high cost of coding and processing the information collected in diet records should also be considered. One of the main advantages of the method is the recording of foods and beverages as they are consumed, thus reducing the problem of food omissions due to memory failure. Weighed food records provide more precise estimates of consumed portions. New technologies can help improve and ease respondent collaboration, as well as the precision of estimates, although it would be desirable to evaluate their advantages and limitations in order to optimize implementation.

  16. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    SciTech Connect

    Joint Graduate Group in Bioengineering, University of California, San Francisco and University of California, Berkeley; Department of Radiology, University of California; Gullberg, Grant T; Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-02-15

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy.
We also found that an improvement of the spatial resolution through the
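The energy dependence of attenuation reported above follows directly from the Beer-Lambert law. A minimal sketch: the attenuation coefficients below are approximate textbook values for water (an assumption of this illustration, not numbers from the study).

```python
import math

# Illustrative narrow-beam linear attenuation coefficients of water, in 1/cm.
# Approximate values assumed for this sketch, not taken from the study above.
MU_WATER = {"Tc-99m (140 keV)": 0.15, "I-125 (~28 keV)": 0.38}


def attenuated_fraction(mu, depth_cm):
    """Fraction of photons lost to attenuation over depth_cm of water,
    from the Beer-Lambert law I = I0 * exp(-mu * x)."""
    return 1.0 - math.exp(-mu * depth_cm)
```

For a source a couple of centimetres deep, the lower-energy I-125 photons lose roughly twice the fraction that Tc-99m photons do, consistent with the trend the abstract reports.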

  17. Cascade impactor (CI) mensuration--an assessment of the accuracy and precision of commercially available optical measurement systems.

    PubMed

    Chambers, Frank; Ali, Aziz; Mitchell, Jolyon; Shelton, Christopher; Nichols, Steve

    2010-03-01

    Multi-stage cascade impactors (CIs) are the preferred measurement technique for characterizing the aerodynamic particle size distribution of an inhalable aerosol. Stage mensuration is the recommended pharmacopeial method for monitoring CI "fitness for purpose" within a GxP environment. The Impactor Sub-Team of the European Pharmaceutical Aerosol Group has undertaken an inter-laboratory study to assess both the precision and accuracy of a range of makes and models of instruments currently used for optical inspection of impactor stages. Measurement of two Andersen 8-stage 'non-viable' cascade impactor "reference" stages that were representative of jet sizes for this instrument type (stages 2 and 7) confirmed that all instruments evaluated were capable of reproducible jet measurement, with the overall capability being within the current pharmacopeial stage specifications for both stages. In the assessment of absolute accuracy, small but consistent differences (ca. 0.6% of the certified value) were observed between 'dots' and 'spots' of a calibrated chromium-plated reticule, most likely the result of the treatment of partially lit pixels along the circumference of this calibration standard. Measurements of three certified ring gauges, the smallest having a nominal diameter of 1.0 mm, were consistent with the observation that treatment of partially illuminated pixels at the periphery of the projected image can result in undersizing. However, the bias was less than 1% of the certified diameter. The optical inspection instruments evaluated are fully capable of confirming cascade impactor suitability in accordance with pharmacopeial practice.

  18. Accuracy of least-squares methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bochev, Pavel B.; Gunzburger, Max D.

    1993-01-01

    Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well-documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. It is found, despite the failure of standard methods for deriving error estimates, that computational evidence suggests that these methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.

  19. Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach.

    PubMed

    de Jesus, Kelly; de Jesus, Karla; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo Jorge; Machado, Leandro José

    2015-01-01

    This study assessed accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above and 88 underwater control points--with 8 common points at water surface--and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. Planar homography estimation for each calibration plane was computed to perform image rectification. The direct linear transformation algorithm for 3D reconstruction was applied, using 1,600,000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers (resp.). Root Mean Square (RMS) error with homography of control and validation points was lower than without it for surface and underwater cameras (P ≤ 0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P ≥ 0.47). Without homography, RMS error of control points was greater for underwater than surface cameras (P ≤ 0.04) and the opposite was observed for validation points (P ≤ 0.04). It is recommended that future studies using 3D reconstruction should include homography to improve swimming movement analysis accuracy.
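The planar homography estimation step mentioned above can be sketched with the standard direct linear transform. This is a minimal illustration assuming ideal point correspondences, not the authors' calibration code; all names are our own.

```python
import numpy as np


def estimate_homography(src, dst):
    """Direct linear transform (DLT) estimate of the 3x3 planar homography H
    mapping src -> dst in homogeneous coordinates (dst ~ H @ src).
    src, dst: (N, 2) point correspondences, N >= 4, no 3 collinear."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]


def apply_homography(H, pt):
    """Map a 2-D point through H (the image rectification step)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With four exact correspondences the homography is recovered exactly; with more, the SVD gives a least-squares estimate.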

  20. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    SciTech Connect

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  1. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    PubMed

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary, however test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strip easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration. 
Given the
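The accuracy and precision metrics compared above can be sketched as percent error against the reference concentration and coefficient of variation across replicates. The quintuplicate readings below are hypothetical, not the study's data.

```python
import statistics

def accuracy_percent_error(replicates, reference):
    """Mean absolute error of replicate measurements, as a percentage of the
    reference concentration (an approximation of the paper's accuracy metric)."""
    return statistics.mean(abs(m - reference) / reference * 100 for m in replicates)

def precision_cv(replicates):
    """Coefficient of variation (%) across replicates, as a precision measure."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100

# Hypothetical quintuplicate readings of a nominal 0.5% chlorine solution.
readings = [0.49, 0.51, 0.50, 0.52, 0.48]
```

For these illustrative readings the percent error is 2.4% and the coefficient of variation about 3.2%, i.e. within the <10% band the study used to flag acceptable methods.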

  4. The diagnostic accuracy of pharmacological stress echocardiography for the assessment of coronary artery disease: a meta-analysis

    PubMed Central

    Picano, Eugenio; Molinaro, Sabrina; Pasanisi, Emilio

    2008-01-01

    Background Recent American Heart Association/American College of Cardiology guidelines state that "dobutamine stress echo has substantially higher sensitivity than vasodilator stress echo for detection of coronary artery stenosis" while the European Society of Cardiology guidelines and the European Association of Echocardiography recommendations conclude that "the two tests have very similar applications". Who is right? Aim To evaluate the diagnostic accuracy of dobutamine versus dipyridamole stress echocardiography through an evidence-based approach. Methods From a PubMed search, we identified all papers with coronary angiographic verification and head-to-head comparison of dobutamine stress echo (40 mcg/kg/min ± atropine) versus dipyridamole stress echo performed with state-of-the-art protocols (either 0.84 mg/kg in 10' plus atropine, or 0.84 mg/kg in 6' without atropine). A total of 5 papers were found. A pooled weighted meta-analysis was performed. Results The 5 analyzed papers recruited 435 patients, 299 with and 136 without angiographically assessed coronary artery disease (quantitatively assessed stenosis > 50%). Dipyridamole and dobutamine showed similar accuracy (87%, 95% confidence intervals, CI, 83–90, vs. 84%, CI, 80–88, p = 0.48), sensitivity (85%, CI 80–89, vs. 86%, CI 78–91, p = 0.81) and specificity (89%, CI 82–94 vs. 86%, CI 75–89, p = 0.15). Conclusion When state-of-the-art protocols are considered, dipyridamole and dobutamine stress echo have similar accuracy, specificity and – most importantly – sensitivity for detection of CAD. European recommendations concluding that "dobutamine and vasodilators (at appropriately high doses) are equally potent ischemic stressors for inducing wall motion abnormalities in presence of a critical coronary artery stenosis" are evidence-based. PMID:18565214
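A pooled proportion with a confidence interval, of the kind reported above for sensitivity and specificity, can be sketched as below. The Wilson score interval is one common choice (an assumption of this sketch, not necessarily the meta-analysis' method), and the per-study counts are hypothetical.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion, one common way to
    attach a confidence interval to a pooled sensitivity or specificity."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Hypothetical per-study counts (true positives / diseased), not the counts
# from the meta-analysis above.
tp = [50, 62, 48, 55, 40]
diseased = [60, 70, 55, 65, 50]
pooled = sum(tp) / sum(diseased)          # pooled sensitivity
lo, hi = wilson_ci(sum(tp), sum(diseased))
```

Pooling the raw counts before computing the interval weights each study by its size, which is the simplest fixed-effect style of pooling.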

  5. Complex shape product tolerance and accuracy control method for virtual assembly

    NASA Astrophysics Data System (ADS)

    Ma, Huiping; Jin, Yuanqiang; Zhang, Xiaoguang; Zhou, Hai

    2015-02-01

    At present, simulation of the virtual assembly process for engineering design lacks accuracy in three-dimensional CAD software. Product modeling technology with tolerance, an assembly precision pre-analysis technique, and a precision control method are developed. To address the lack of precision information transmission in CAD, a tolerance mathematical model based on the Small Displacement Torsor (SDT) is presented, which enables a digital control function for geometric elements from definition, description, and specification through to actual inspection and evaluation. Tolerance optimization design methods for complex shape products are proposed to optimize machining technology, control cost effectively, and ensure assembly quality.

  6. High accuracy position response calibration method for a micro-channel plate ion detector

    NASA Astrophysics Data System (ADS)

    Hong, R.; Leredde, A.; Bagdasarova, Y.; Fléchard, X.; García, A.; Müller, P.; Knecht, A.; Liénard, E.; Kossin, M.; Sternberg, M. G.; Swanson, H. E.; Zumwalt, D. W.

    2016-11-01

    We have developed a position response calibration method for a micro-channel plate (MCP) detector with a delay-line anode position readout scheme. Using an in situ calibration mask, an accuracy of 8 μm and a resolution of 85 μm (FWHM) have been achieved for MeV-scale α particles and ions with energies of ∼10 keV. At this level of accuracy, the difference between the MCP position responses to high-energy α particles and low-energy ions is significant. The improved performance of the MCP detector can find applications in many fields of AMO and nuclear physics. In our case, it helps reduce systematic uncertainties in a high-precision nuclear β-decay experiment.

  7. Accuracy assessment of human trunk surface 3D reconstructions from an optical digitising system.

    PubMed

    Pazos, V; Cheriet, F; Song, L; Labelle, H; Dansereau, J

    2005-01-01

    The lack of reliable techniques to follow up scoliotic deformity from the external asymmetry of the trunk leads to a general use of X-rays and indices of spinal deformity. Young adolescents with idiopathic scoliosis need intensive follow-ups for many years and, consequently, they are repeatedly exposed to ionising radiation, which is hazardous to their long-term health. Furthermore, treatments attempt to improve both spinal and surface deformities, but internal indices do not describe the external asymmetry. The purpose of this study was to assess a commercial, optical 3D digitising system for the 3D reconstruction of the entire trunk for clinical assessment of external asymmetry. The resulting surface is a textured, high-density polygonal mesh. The accuracy assessment was based on repeated reconstructions of a manikin with markers fixed on it. The average normal distance between the reconstructed surfaces and the reference data (markers measured with CMM) was 1.1 +/- 0.9 mm. PMID:15742714

  8. Accuracy Assessment of GPS Buoy Sea Level Measurements for Coastal Applications

    NASA Astrophysics Data System (ADS)

    Chiu, S.; Cheng, K.

    2008-12-01

    The GPS buoy in this study contains a geodetic antenna and a compact floater with the GPS receiver and power supply tethered to a boat. Coastal applications using GPS include monitoring of sea level and its change, calibration of satellite altimeters, hydrological or geophysical parameter modeling, seafloor geodesy, and others. For these applications, understanding the overall data or model quality requires knowledge of the position accuracy of GPS buoys or GPS-equipped vessels. Newer GPS data processing techniques, e.g., Precise Point Positioning (PPP) and virtual reference stations (VRS), require a priori information obtained from a regional GPS network; while such a priori information can be obtained on land, it may not be available at sea. Hence, in this study, the GPS buoy was positioned with respect to an onshore GPS reference station using the traditional double-difference technique. Since the atmosphere starts to decorrelate as the baseline (the distance between the buoy and the reference station) increases, the positioning accuracy consequently decreases. Therefore, this study aims to assess the buoy position accuracy as the baseline increases, in order to quantify the upper accuracy limit of sea level measured by the GPS buoy. A GPS buoy campaign was conducted by National Chung Cheng University in An Ping, Taiwan, with an 8-hour GPS buoy data collection. In addition, a GPS network containing 4 continuous GPS (CGPS) stations in Taiwan was established to enable baselines of different lengths for buoy data processing. A vector relation from the network was utilized in order to find the correct ambiguities, which were applied to the long-baseline solution to eliminate the position error caused by incorrect ambiguities. After this procedure, a 3.6-cm discrepancy was found in the mean sea level solution between the long (~80 km) and the short (~1.5 km) baselines. 
The discrepancy between a
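Conceptually, the double-difference combination used to position the buoy relative to the onshore station can be sketched in a few lines; this is a textbook illustration with made-up observation values, not the study's processing code.

```python
def double_difference(phi):
    """Between-receiver, between-satellite double difference of carrier-phase
    observations. phi[r][s] is the observation from receiver r to satellite s;
    the combination cancels both receiver and satellite clock errors, which is
    the core idea of the traditional double-difference technique."""
    single_sat0 = phi[0][0] - phi[1][0]  # between-receiver difference, satellite 0
    single_sat1 = phi[0][1] - phi[1][1]  # between-receiver difference, satellite 1
    return single_sat0 - single_sat1
```

Adding a common clock offset to all observations of one receiver leaves the double difference unchanged, which is exactly the error-cancellation property the technique relies on.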

  9. SU-E-J-117: Verification Method for the Detection Accuracy of Automatic Winston Lutz Test

    SciTech Connect

    Tang, A; Chan, K; Fee, F; Chau, R

    2014-06-01

    Purpose: The Winston-Lutz test (WLT) is a standard QA procedure performed prior to SRS treatment to verify mechanical iso-center setup accuracy under different gantry/couch movements. Several detection algorithms exist for analyzing the ball-radiation field alignment automatically. However, the accuracy of these algorithms has not been fully addressed. Here, we reveal the possible errors arising from each step in WLT, and verify the software detection accuracy with the Rectilinear Phantom Pointer (RLPP), a tool commonly used for aligning the treatment plan coordinate with the mechanical iso-center. Methods: WLT was performed with the radio-opaque ball mounted on a MIS and irradiated onto EDR2 films. The films were scanned and processed with an in-house Matlab program for automatic iso-center detection. Tests were also performed to identify the errors arising from setup, film development, and the scanning process. The radio-opaque ball was then mounted onto the RLPP and offset laterally and longitudinally in 7 known positions (0, ±0.2, ±0.5, ±0.8 mm) manually for irradiations. The gantry and couch were set to zero degrees for all irradiations. The same scanned images were processed repeatedly to check the repeatability of the software. Results: Minimal discrepancies (mean = 0.05 mm) were detected with 2 films overlapped and irradiated together but developed separately. This reveals the error arising from the film processor and scanner alone. Maximum setup errors were found to be around 0.2 mm, by analyzing data collected from 10 irradiations over 2 months. For the known shifts introduced using the RLPP, the results agree with the manual offsets and fit linearly (R²>0.99) when plotted relative to the first ball position with zero shift. Conclusion: We systematically reveal the possible errors arising from each step in WLT, and introduce a simple method to verify the detection accuracy of our in-house software using a clinically available tool.
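A common first step in automatic Winston-Lutz analysis is locating the centroids of the ball shadow and the radiation field on the scanned film. The sketch below is a simplified illustration (threshold values, bright-blob modelling, and names are our assumptions), not the in-house Matlab program described above.

```python
import numpy as np


def weighted_centroid(image, threshold):
    """Intensity-weighted centroid (row, col) of pixels above threshold --
    a typical way to locate a ball shadow or field region on a scanned film.
    For simplicity both features are modelled as bright blobs here."""
    w = np.where(image > threshold, image, 0.0)
    total = w.sum()
    rows, cols = np.indices(image.shape)
    return (rows * w).sum() / total, (cols * w).sum() / total


def centre_offset_mm(c_ball, c_field, mm_per_px):
    """Euclidean ball-field centroid offset, converted to millimetres."""
    return mm_per_px * float(np.hypot(c_ball[0] - c_field[0],
                                      c_ball[1] - c_field[1]))
```

Sub-pixel centroid estimates are what make detection accuracies below the scanner's pixel pitch (as in the 0.05 mm discrepancies above) possible.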

  10. The influence of feature selection methods on accuracy, stability and interpretability of molecular signatures.

    PubMed

    Haury, Anne-Claire; Gestraud, Pierre; Vert, Jean-Philippe

    2011-01-01

    Biomarker discovery from high-dimensional data is a crucial problem with many applications in biology and medicine. It is also extremely challenging from a statistical viewpoint, yet surprisingly few studies have investigated the relative strengths and weaknesses of the plethora of existing feature selection methods. In this study we compare 32 feature selection methods on 4 public gene expression datasets for breast cancer prognosis, in terms of predictive performance, stability and functional interpretability of the signatures they produce. We observe that the feature selection method has a significant influence on the accuracy, stability and interpretability of signatures. Surprisingly, complex wrapper and embedded methods generally do not outperform simple univariate feature selection methods, and ensemble feature selection generally has no positive effect. Overall, a simple Student's t-test seems to provide the best results.
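
As a rough illustration of the univariate approach the study favours, t-test feature ranking can be sketched in a few lines. This is a generic sketch on synthetic data, not the authors' pipeline; the planted effect in feature 3 is purely illustrative.

```python
import numpy as np
from scipy import stats

def t_test_ranking(X, y, k):
    """Rank features by |Welch t-statistic| between two classes; keep top k."""
    t, _ = stats.ttest_ind(X[y == 0], X[y == 1], axis=0, equal_var=False)
    return np.argsort(-np.abs(t))[:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))          # 100 samples, 50 features
y = np.repeat([0, 1], 50)
X[y == 1, 3] += 2.0                     # plant a genuinely differential feature
selected = t_test_ranking(X, y, k=5)
print(selected)                         # feature 3 ranks first
```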

  11. Accuracy assessment of land cover/land use classifiers in dry and humid areas of Iran.

    PubMed

    Yousefi, Saleh; Khatami, Reza; Mountrakis, Giorgos; Mirzaee, Somayeh; Pourghasemi, Hamid Reza; Tazeh, Mehdi

    2015-10-01

    Land cover/land use (LCLU) maps are essential inputs for environmental analysis. Remote sensing provides an opportunity to construct LCLU maps of large geographic areas in a timely fashion. Knowing the most accurate classification method for producing LCLU maps based on site characteristics is necessary for environmental managers. The aim of this research is to examine the performance of various classification algorithms for LCLU mapping in dry and humid climates (from June to August). Testing is performed in three case studies from each of the two climates in Iran. The reference dataset of each image was randomly selected from the entire image and randomly divided into training and validation sets. Training sets included 400 pixels, and validation sets 200 pixels, of each LCLU class. Results indicate that the support vector machine (SVM) and neural network methods can achieve higher overall accuracy (86.7% and 86.6%, respectively) than the other examined algorithms, with a slight advantage for the SVM. Dry areas exhibit higher classification difficulty, as man-made features often have spectral responses overlapping those of soil. A further observation is that spatial segregation and a lower mixture of LCLU classes can increase overall classification accuracy.

  12. Assessment of Classification Accuracies of SENTINEL-2 and LANDSAT-8 Data for Land Cover / Use Mapping

    NASA Astrophysics Data System (ADS)

    Hale Topaloğlu, Raziye; Sertel, Elif; Musaoğlu, Nebiye

    2016-06-01

    This study aims to compare the classification accuracies of land cover/use maps created from Sentinel-2 and Landsat-8 data. The Istanbul metropolitan city of Turkey, with a population of around 14 million and diverse landscape characteristics, was selected as the study area. Water, forest, agricultural areas, grasslands, transport network, urban, airport-industrial units and barren land-mine land cover/use classes adapted from the CORINE nomenclature were used as the main land cover/use classes to identify. To fulfil the aims of this research, recently acquired Sentinel-2 (dated 08/02/2016) and Landsat-8 (dated 22/02/2016) images of Istanbul were obtained, and image pre-processing steps like atmospheric and geometric correction were employed. Both Sentinel-2 and Landsat-8 images were resampled to 30m pixel size after geometric correction, and similar spectral bands for both satellites were selected to create a similar base for these multi-sensor data. Maximum Likelihood (MLC) and Support Vector Machine (SVM) supervised classification methods were applied to both data sets to accurately identify eight different land cover/use classes. Error matrices were created using the same reference points for the Sentinel-2 and Landsat-8 classifications. After the accuracy assessment, results were compared to find out the best approach for creating a current land cover/use map of the region. The results of the MLC and SVM classification methods were compared for both images.

  13. Assessing the accuracy of the Second Military Survey for the Doren Landslide (Vorarlberg, Austria)

    NASA Astrophysics Data System (ADS)

    Zámolyi, András.; Székely, Balázs; Biszak, Sándor

    2010-05-01

    Reconstruction of the early and long-term evolution of landslide areas is especially important for determining the proportion of anthropogenic influence on the evolution of a region affected by mass movements. The recent geologic and geomorphological setting of the prominent Doren landslide in Vorarlberg (Western Austria) has been studied extensively by various research groups and civil engineering companies. Civil aerial imaging of the area dates back to the 1950s. Modern monitoring techniques include aerial imaging as well as airborne and terrestrial laser scanning (LiDAR), providing an almost yearly assessment of the changing geomorphology of the area. However, the initiation of the landslide most probably predates the application of these methods, since there is evidence that the landslide was already active in the 1930s. For studying the initial phase of landslide formation, one possibility is to fall back on information recorded in historic photographs or on historic maps. In this case study we integrated topographic information from the map sheets of the Second Military Survey of the Habsburg Empire, conducted in Vorarlberg during the years 1816-1821 (Kretschmer et al., 2004), into a comprehensive GIS. The region of interest around the Doren landslide was georeferenced using the method of Timár et al. (2006), refined by Molnár (2009), thus providing geodetically correct positioning and the possibility of matching topographic features from the historic map with features recognized in the LiDAR DTM. The landslide of Doren is clearly visible in the historic map. Additionally, prominent geomorphologic features such as morphological scarps, rills and gullies, mass movement lobes and the course of the Weißach rivulet can be matched. Not only can the shape and character of these elements be recognized and matched, but the positional accuracy is also adequate for geomorphological studies. Since the settlement structure is very stable in the

  14. LNG Safety Assessment Evaluation Methods

    SciTech Connect

    Muna, Alice Baca; LaFleur, Angela Christine

    2015-05-01

    Sandia National Laboratories evaluated published safety assessment methods across a variety of industries including liquefied natural gas (LNG), hydrogen, land and marine transportation, as well as the US Department of Defense (DOD). All the methods were evaluated for their potential applicability to the LNG railroad application. After reviewing the documents included in this report, as well as others not included because of repetition, the Department of Energy (DOE) Hydrogen Safety Plan Checklist was found most suitable to be adapted to the LNG railroad application. This report was developed to survey industries related to rail transportation for methodologies and tools that can be used by the Federal Railroad Administration (FRA) to review and evaluate safety assessments submitted by the railroad industry as part of their implementation plans for liquefied or compressed natural gas storage (on-board or tender) and engine fueling delivery systems. The main sections of this report provide an overview of various methods found during this survey. In most cases, the reference document is quoted directly. The final section provides discussion and a recommendation for the most appropriate methodology that will allow efficient and consistent evaluations to be made. The DOE Hydrogen Safety Plan Checklist was then revised to adapt it as a methodology for the FRA's use in evaluating safety plans submitted by the railroad industry.

  15. Application of Digital Image Correlation Method to Improve the Accuracy of Aerial Photo Stitching

    NASA Astrophysics Data System (ADS)

    Tung, Shih-Heng; Jhou, You-Liang; Shih, Ming-Hsiang; Hsiao, Han-Wei; Sung, Wen-Pei

    2016-04-01

    Satellite images and traditional aerial photos have long been used in remote sensing. However, there are some problems with these images: the resolution of satellite images is insufficient, the cost of obtaining traditional aerial photos is relatively high, and traditional flights carry a human safety risk. These factors limit the application of such images. In recent years, the control technology of unmanned aerial vehicles (UAVs) has developed rapidly, making UAVs widely used for obtaining aerial photos. Compared to satellite images and traditional aerial photos, aerial photos obtained using a UAV have the advantages of higher resolution and lower cost. Because there is no crew aboard a UAV, it is still possible to take aerial photos under unstable weather conditions. Images first have to be orthorectified and their distortion corrected. Then, with the help of image matching techniques and control points, these images can be stitched or used to establish a DEM of the ground surface. These images or DEM data can be used to monitor a landslide or estimate its volume. For image matching, methods such as the Harris corner method, SIFT or SURF can be used to extract and match feature points. However, the matching accuracy of these methods is at about the pixel or sub-pixel level, whereas the accuracy of the digital image correlation method (DIC) can reach about 0.01 pixel. Therefore, this study applies the digital image correlation method to match extracted feature points, and the stitched images are then inspected to judge the improvement. This study uses aerial photos of a reservoir area, stitched with and without the help of DIC. The results show that the misplacement in the stitched image using DIC to match feature points is significantly improved. This shows that the use of DIC to match feature points can actually improve the accuracy of
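
The matching idea can be illustrated with a zero-normalized cross-correlation search over integer shifts plus a parabolic sub-pixel peak fit — a simplified, horizontal-shift-only stand-in for full DIC, not the authors' implementation.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_subpixel(template, image, search=5):
    """Integer-pixel ZNCC search over horizontal shifts, then a 3-point
    parabolic fit around the peak for a sub-pixel estimate."""
    h, w = template.shape
    scores = {}
    for dx in range(-search, search + 1):
        patch = image[:h, search + dx: search + dx + w]
        scores[dx] = zncc(template, patch)
    best = max(scores, key=scores.get)
    if -search < best < search:   # refine with a parabola through 3 scores
        c0, c1, c2 = scores[best - 1], scores[best], scores[best + 1]
        best = best + 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)
    return best

# Synthetic test: a Gaussian blob shifted 2 pixels to the right.
x = np.arange(40)
template = np.tile(np.exp(-((x - 20.0) ** 2) / 18.0), (5, 1))
xi = np.arange(50)
image = np.tile(np.exp(-((xi - 27.0) ** 2) / 18.0), (5, 1))
shift = match_subpixel(template, image)
print(shift)   # ≈ 2.0
```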

  16. Individual identification from genetic marker data: developments and accuracy comparisons of methods.

    PubMed

    Wang, Jinliang

    2016-01-01

    Genetic marker-based identification of distinct individuals and recognition of duplicated individuals has important applications in many research areas in ecology, evolutionary biology, conservation biology and forensics. The widely applied genotype mismatch (MM) method, however, is inaccurate because it relies on a fixed and suboptimal threshold number (TM) of mismatches, and often yields self-inconsistent pairwise inferences. In this study, I improved the MM method by calculating an optimal TM to accommodate the number, mistyping rates, missing data and allele frequencies of the markers. I also developed a pairwise likelihood relationship (LR) method and a likelihood clustering (LC) method for individual identification, using poor-quality data that may have high and variable rates of allelic dropouts and false alleles at genotyped loci. The 3 methods, together with the relatedness (RL) method, were then compared in accuracy by analysing an empirical frog data set and many simulated data sets generated under different parameter combinations. The results showed that LC is generally one or two orders of magnitude more accurate for individual identification than the other methods. Its accuracy is especially superior when the sampled multilocus genotypes are of poor quality (i.e. teeming with genotyping errors and missing data) and highly replicated, a situation typical of the noninvasive sampling used in estimating population size. Importantly, LC is the only method guaranteed to produce self-consistent results, by partitioning the entire set of multilocus genotypes into distinct clusters, each cluster containing one or more genotypes that all represent the same individual. The LC and LR methods were implemented in the computer program COLONY, freely downloadable from the Internet.

  17. An epidemiologic critique of current microbial risk assessment practices: the importance of prevalence and test accuracy data.

    PubMed

    Gardner, Ian A

    2004-09-01

    Data deficiencies are impeding the development and validation of microbial risk assessment models. One such deficiency is the failure to adjust test-based (apparent) prevalence estimates to true prevalence estimates by correcting for the imperfect accuracy of tests that are used. Such adjustments will facilitate comparability of data from different populations and from the same population over time as tests change and the unbiased quantification of effects of mitigation strategies. True prevalence can be estimated from apparent prevalence using frequentist and Bayesian methods, but the latter are more flexible and can incorporate uncertainty in test accuracy and prior prevalence data. Both approaches can be used for single or multiple populations, but the Bayesian approach can better deal with clustered data, inferences for rare events, and uncertainty in multiple variables. Examples of prevalence inferences based on results of Salmonella culture are presented. The opportunity to adjust test-based prevalence estimates is predicated on the availability of sensitivity and specificity estimates. These estimates can be obtained from studies using archived gold standard (reference) samples, by screening with the new test and follow-up of test-positive and test-negative samples with a gold standard test, and by use of latent class methods, which make no assumptions about the true status of each sampling unit. Latent class analysis can be done with maximum likelihood and Bayesian methods, and an example of their use in the evaluation of tests for Toxoplasma gondii in pigs is presented. Guidelines are proposed for more transparent incorporation of test data into microbial risk assessments.
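
The apparent-to-true prevalence adjustment described here is, in its simplest frequentist form, the Rogan-Gladen estimator. A minimal sketch follows; the sensitivity/specificity values are illustrative, not taken from the paper.

```python
def true_prevalence(apparent, sensitivity, specificity):
    """Rogan-Gladen adjustment: convert apparent (test-based) prevalence
    to an estimate of true prevalence, clamped to the valid range [0, 1]."""
    tp = (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(tp, 0.0), 1.0)

# 20% culture-positive samples with an imperfect test (Se=0.70, Sp=0.98):
est = true_prevalence(0.20, 0.70, 0.98)
print(round(est, 3))   # → 0.265
```

Note how the adjustment raises the estimate here: the test's low sensitivity means many truly positive samples were missed by culture.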

  18. Creating a Standard Set of Metrics to Assess Accuracy of Solar Forecasts: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Banunarayanan, V.; Brockway, A.; Marquis, M.; Haupt, S. E.; Brown, B.; Fowler, T.; Jensen, T.; Hamann, H.; Lu, S.; Hodge, B.; Zhang, J.; Florita, A.

    2013-12-01

    The U.S. Department of Energy (DOE) SunShot Initiative, launched in 2011, seeks to reduce the cost of solar energy systems by 75% from 2010 to 2020. In support of the SunShot Initiative, the DOE Office of Energy Efficiency and Renewable Energy (EERE) is partnering with the National Oceanic and Atmospheric Administration (NOAA) and solar energy stakeholders to improve solar forecasting. Through a funding opportunity announcement issued in April 2012, DOE is funding two teams - led by the National Center for Atmospheric Research (NCAR) and by IBM - to perform three key activities in order to improve solar forecasts. The teams will: (1) with DOE and NOAA's leadership and significant stakeholder input, develop a standardized set of metrics to evaluate forecast accuracy, and determine the baseline and target values for these metrics; (2) conduct research that yields a transformational improvement in weather models and methods for forecasting solar irradiance and power; and (3) incorporate solar forecasts into the system operations of the electric power grid, and evaluate the impact of forecast accuracy on the economics and reliability of operations using the defined standard metrics. This paper will present preliminary results on the first activity: the development of a standardized set of metrics, baselines and target values. The results will include a proposed framework for metrics development, key categories of metrics, descriptions of each of the proposed set of specific metrics to measure forecast accuracy, feedback gathered from a range of stakeholders on the metrics, and processes to determine baselines and target values for each metric. The paper will also analyze the temporal and spatial resolutions under which these metrics would apply, and conclude with a summary of the work in progress on solar forecasting activities funded by DOE.
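
While the standardized metric set was still under development at the time of this abstract, the familiar deterministic accuracy measures such sets typically build on (mean bias error, mean absolute error, root-mean-square error) can be sketched as follows; the irradiance values are illustrative only.

```python
import numpy as np

def forecast_metrics(forecast, observed):
    """Mean bias error, mean absolute error and root-mean-square error."""
    err = np.asarray(forecast, float) - np.asarray(observed, float)
    return {"MBE": float(err.mean()),
            "MAE": float(np.abs(err).mean()),
            "RMSE": float(np.sqrt((err ** 2).mean()))}

# Hour-ahead irradiance forecasts vs. observations (W/m^2, synthetic):
m = forecast_metrics([510.0, 480.0, 650.0], [500.0, 500.0, 600.0])
print(m)
```

MBE exposes systematic over/under-forecasting, while RMSE penalizes large misses more heavily than MAE — which is why metric frameworks usually report several measures side by side.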

  19. Comparative adaptation accuracy of acrylic denture bases evaluated by two different methods.

    PubMed

    Lee, Chung-Jae; Bok, Sung-Bem; Bae, Ji-Young; Lee, Hae-Hyoung

    2010-08-01

    This study examined the adaptation accuracy of acrylic denture bases processed using fluid-resin (PERform), injection-molding (SR-Ivocap, Success, Mak Press), and two compression-molding techniques. The adaptation accuracy was measured primarily by the posterior border gaps at the mid-palatal area using a microscope, and subsequently by weighing the impression material between the denture base and master cast using hand-mixed and automixed silicone. The correlation between the data measured using these two test methods was examined. PERform and Mak Press produced significantly smaller maximum palatal gap dimensions than the other groups (p<0.05). Mak Press also showed a significantly smaller weight of automixed silicone material than the other groups (p<0.05), while SR-Ivocap and Success showed adaptation accuracy similar to the compression-molding dentures. The correlation between the magnitude of the posterior border gap and the weight of the silicone impression material was affected by either the material or mixing variables.

  20. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. To date, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can intuitively and systematically build an error model. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment was designed and conducted, and excellent calibration results were achieved based on the calibration model. In summary, the error analysis approach and the calibration method are shown to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  1. Accuracy comparison of spatial interpolation methods for estimation of air temperatures in South Korea

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Shim, K.; Jung, M.; Kim, S.

    2013-12-01

    Because of complex terrain, micro- as well as meso-climate variability in Korea is extreme across locations. In particular, the air temperature of agricultural fields is influenced by the topographic features of the surroundings, making accurate interpolation of regional meteorological data from point-measured data difficult. This study was conducted to compare the accuracy of spatial interpolation methods for estimating air temperature over the rugged terrain of South Korea. Four spatial interpolation methods, including Inverse Distance Weighting (IDW), Spline, Kriging and Cokriging, were tested to estimate the monthly air temperature of unobserved stations. Monthly measured data sets (minimum and maximum air temperature) from 456 automatic weather station (AWS) locations in South Korea were used to generate the gridded air temperature surface. Cross validation showed that, for Kriging and Cokriging, the Exponential theoretical model produced a lower root mean square error (RMSE) than the Gaussian theoretical model, and that Spline produced the lowest RMSE of the spatial interpolation methods for both maximum and minimum air temperature estimation. In conclusion, Spline showed the best accuracy among the methods, but further experiments that reflect topographic effects such as the temperature lapse rate are necessary to improve the prediction.
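
Of the four methods compared, IDW is the simplest to sketch: each estimate is a weighted mean of station values with weights falling off as an inverse power of distance. The station coordinates and temperatures below are synthetic, not the AWS data used in the study.

```python
import numpy as np

def idw(stations, values, points, power=2.0):
    """Inverse Distance Weighting: weights proportional to 1/distance**power."""
    stations = np.asarray(stations, float)
    values = np.asarray(values, float)
    estimates = []
    for p in np.asarray(points, float):
        d = np.linalg.norm(stations - p, axis=1)
        if d.min() < 1e-12:                 # point coincides with a station
            estimates.append(values[d.argmin()])
            continue
        w = d ** -power
        estimates.append(float((w * values).sum() / w.sum()))
    return np.array(estimates)

# Three stations with monthly mean temperatures (degrees C, illustrative):
stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
temps = [12.0, 14.0, 10.0]
print(idw(stations, temps, [(0.5, 0.5)]))   # equidistant point -> plain mean
```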

  2. Complexity and accuracy of image registration methods in SPECT-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Yin, L. S.; Tang, L.; Hamarneh, G.; Gill, B.; Celler, A.; Shcherbinin, S.; Fua, T. F.; Thompson, A.; Liu, M.; Duzenli, C.; Sheehan, F.; Moiseenko, V.

    2010-01-01

    The use of functional imaging in radiotherapy treatment (RT) planning requires accurate co-registration of functional imaging scans to CT scans. We evaluated six methods of image registration for use in SPECT-guided radiotherapy treatment planning. The methods varied in complexity from a 3D affine transform based on control points to diffeomorphic demons and level set non-rigid registration. Ten lung cancer patients underwent perfusion SPECT scans prior to their radiotherapy. CT images from a hybrid SPECT/CT scanner were registered to a planning CT, and the same transformation was then applied to the SPECT images. According to registration evaluation measures computed from the intensity difference between the registered CT images or from the target registration error, non-rigid registrations provided a higher degree of accuracy than rigid methods. However, due to irregularities in some of the obtained deformation fields, warping the SPECT using these fields may result in unacceptable changes to the SPECT intensity distribution that would preclude use in RT planning. Moreover, the differences between intensity histograms in the original and registered SPECT image sets were largest for the diffeomorphic demons and level set methods. In conclusion, the use of intensity-based validation measures alone is not sufficient for SPECT/CT registration for RT planning. It was also found that proper evaluation of image registration requires the use of several accuracy metrics.
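
The simplest method in the comparison, a control-point 3D affine transform, can be sketched as a least-squares fit in homogeneous coordinates. This is a generic sketch with synthetic points, not the registration code evaluated in the study.

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 4x3 affine transform mapping control points src to dst."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def apply_affine_3d(M, pts):
    """Apply a fitted 4x3 affine transform to a list of 3D points."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Synthetic check: uniform scaling by 2 plus a translation, 5 control points.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
dst = src * 2.0 + np.array([10.0, -5.0, 3.0])
M = fit_affine_3d(src, dst)
print(apply_affine_3d(M, [[0.5, 0.5, 0.5]]))   # ≈ [[11. -4.  4.]]
```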

  3. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    The powerful nondestructive characteristics of computed tomography (CT) are attracting more and more research into its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology, due to many factors, among which the beam hardening (BH) effect plays a vital role. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to cases where there is a large difference in gray value between material and background. Spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is generally feasible.

  4. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    PubMed

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method for Fengshi (GB 31). Through the study of ancient and modern data, analysis and integration of acupuncture books, comparison of the locations of Fengshi (GB 31) given by doctors of all dynasties, and integration of modern anatomy, the modern simple selection method for Fengshi (GB 31) is made definite, and it agrees with the traditional way. It is believed that the simple selection method accords with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. Also, it is proposed that Fengshi (GB 31) should be located by combining the simple method with body surface anatomical marks.

  5. Accuracy of methods of age estimation in predicting dental age of preadolescents in South Indian children.

    PubMed

    Balla, Sudheer B; Venkat Baghirath, P; Hari Vinay, B; Vijay Kumar, J; Babu, D B Gandhi

    2016-10-01

    Age estimation in the forensic context is of prime importance for criminal, civil and administrative law. The objective of this study is to test the accuracy of 3 methods of age estimation in South Indian children (preadolescents) aged between 7 and 15 years. It is a retrospective study of orthopantomograms (OPGs) of 150 children, of whom 79 were boys and 71 were girls. Cameriere's, Willems and Acharya's age estimation methods were used to predict chronological age. A paired t-test was used to compare all data, and relationships between continuous variables were examined using Pearson's correlation coefficient. Cameriere's method underestimated the real age by 0.62 years in boys and 0.54 years in girls. Both Willems and Acharya's methods overestimated age in both sexes, by 0.41 and 0.18 years and by 0.41 and 0.47 years respectively. PMID:27428567


  7. On the convergence and accuracy of the FDTD method for nanoplasmonics.

    PubMed

    Lesina, Antonino Calà; Vaccari, Alessandro; Berini, Pierre; Ramunno, Lora

    2015-04-20

    Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise - more than 2700 papers were published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths, along with three "standard" nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna - for the first structure, comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size to very small dimensions, we compare the simple Drude model with the Drude model augmented with a two-critical-point correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with (at least) the two-critical-point correction must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ∼ 1%. We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near-field does not necessarily produce large

  8. Violence risk assessment and women: predictive accuracy of the HCR-20 in a civil psychiatric sample.

    PubMed

    Garcia-Mansilla, Alexandra; Rosenfeld, Barry; Cruise, Keith R

    2011-01-01

    Research to date has not adequately demonstrated whether the HCR-20 Violence Risk Assessment Scheme (HCR-20; Webster, Douglas, Eaves, & Hart, 1997), a structured violence risk assessment measure with a robust literature supporting its validity in male samples, is a valid indicator of violence risk in women. This study utilized data from the MacArthur Study of Mental Disorder and Violence to retrospectively score an abbreviated version of HCR-20 in 827 civil psychiatric patients. HCR-20 scores and predictive accuracy of community violence were compared for men and women. Results suggested that the HCR-20 is slightly, but not significantly, better for evaluating future risk for violence in men than in women, although the magnitude of the gender differences was small and was largely limited to historical factors. The results do not indicate that the HCR-20 needs to be tailored for use in women or that it should not be used in women, but they do highlight that the HCR-20 should be used cautiously and with full awareness of its potential limitations in women.

  9. Accuracy of GIPSY PPP from version 6.2: a robust method to remove outliers

    NASA Astrophysics Data System (ADS)

    Hayal, Adem G.; Ugur Sanli, D.

    2014-05-01

    In this paper, we assess the accuracy of GIPSY PPP from the latest version, version 6.2. As the research community prepares for real-time PPP, it is worth revising the accuracy of static GPS from the latest version of this well-established research software, the first among its kind. Although the results do not differ significantly from the previous version, version 6.1.1, we still observe a slight improvement in the vertical component due to the enhanced second-order ionospheric modeling introduced with the latest version. In this study, however, we turned our attention to outlier detection. Outliers usually occur among the solutions from shorter observation sessions and degrade the quality of the accuracy modeling. In our previous analysis from version 6.1.1, we argued that the elimination of outliers was cumbersome with the traditional method, since repeated trials were needed, and subjectivity that could affect the statistical significance of the solutions might have existed among the results (Hayal and Sanli, 2013). Here we overcome this problem using a robust outlier elimination method. The median is perhaps the simplest of the robust outlier detection methods in terms of applicability, and at the same time it might be considered the most efficient, with its highest breakdown point. In our analysis, we used a slightly different version of the median as introduced in Tut et al. 2013. Hence, we were able to remove suspected outliers in one run; with the traditional method, these were more problematic to remove from the solutions produced using the latest version of the software. References: Hayal, AG, Sanli DU, Accuracy of GIPSY PPP from version 6, GNSS Precise Point Positioning Workshop: Reaching Full Potential, Vol. 1, pp. 41-42, (2013); Tut, İ., Sanli D.U., Erdogan B., Hekimoglu S., Efficiency of BERNESE single baseline rapid static positioning solutions with SEARCH strategy, Survey Review, Vol. 45, Issue 331
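
A common robust variant of the median approach described here flags outliers by their distance from the median in units of the scaled MAD (median absolute deviation). This is a generic sketch, not the exact method of Tut et al. 2013, and the height values are synthetic.

```python
import numpy as np

def mad_filter(x, k=3.5):
    """Split data into inliers and outliers: a value is an outlier if it
    lies more than k robust sigmas from the median. The MAD is scaled by
    1.4826 so it estimates the standard deviation for Gaussian data."""
    x = np.asarray(x, float)
    med = np.median(x)
    sigma = 1.4826 * np.median(np.abs(x - med))
    keep = np.abs(x - med) <= k * sigma
    return x[keep], x[~keep]

# Station-height solutions from short sessions, one blunder (metres, synthetic):
h = [102.1, 102.3, 102.2, 102.4, 102.2, 109.8]
clean, outliers = mad_filter(h)
print(outliers)   # → [109.8]
```

Because both the centre (median) and the spread (MAD) are robust, a single pass suffices; a mean/standard-deviation rule would be inflated by the blunder itself and could require the repeated trials the abstract mentions.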

  10. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Psychas, Dimitrios-Vasileios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time required to reach geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn.
As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in increased position accuracy (mostly in the less favorable East direction) and a large reduction in convergence time.

  11. Computational Performance and Statistical Accuracy of *BEAST and Comparisons with Other Methods.

    PubMed

    Ogilvie, Huw A; Heled, Joseph; Xie, Dong; Drummond, Alexei J

    2016-05-01

    Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behavior of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow-up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent. PMID:26821913

  12. Computational Performance and Statistical Accuracy of *BEAST and Comparisons with Other Methods

    PubMed Central

    Ogilvie, Huw A.; Heled, Joseph; Xie, Dong; Drummond, Alexei J.

    2016-01-01

    Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behavior of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow-up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent. PMID:26821913

  13. Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment

    NASA Astrophysics Data System (ADS)

    Helmholz, P.; Long, J.; Munsie, T.; Belton, D.

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than 500. A possible application of such action cameras is in the field of underwater photogrammetry, especially since the change of medium to water can in turn counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates. For this paper a series of image sequences was captured in a water tank. A calibration frame was placed in the water tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included handling the camera in a controlled manner, where the camera was only dunked into the water tank, using 7MP and 12MP resolution, and rough handling, where the camera was shaken as well as being removed from the waterproof case, using 12MP resolution. The tests showed that camera stability was maintained, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7MP (for an average c of 2.720 mm) and 0.0072 mm for 12MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7MP test series, a largest rms value of only 0.450 mm and a largest maximum residual of only 2.5 mm. For the 12MP test series the maximum rms value was 0.653 mm.

  14. Diagnostic accuracy of refractometry for assessing bovine colostrum quality: A systematic review and meta-analysis.

    PubMed

    Buczinski, S; Vandeweerd, J M

    2016-09-01

    Provision of good quality colostrum [i.e., immunoglobulin G (IgG) concentration ≥50g/L] is the first step toward ensuring proper passive transfer of immunity for young calves. Precise quantification of colostrum IgG levels cannot be easily performed on the farm. Assessment of the refractive index using a Brix scale with a refractometer has been described as being highly correlated with IgG concentration in colostrum. The aim of this study was to perform a systematic review of the diagnostic accuracy of Brix refractometry to diagnose good quality colostrum. From 101 references initially obtained, 11 were included in the systematic review and meta-analysis, representing 4,251 colostrum samples. The prevalence of good colostrum samples with IgG ≥50g/L varied from 67.3 to 92.3% (median 77.9%). Specific estimates of accuracy [sensitivity (Se) and specificity (Sp)] were obtained for different reported cut-points using a hierarchical summary receiver operating characteristic curve model. For the cut-point of 22% (n=8 studies), Se=80.2% (95% CI: 71.1-87.0%) and Sp=82.6% (71.4-90.0%). Decreasing the cut-point to 18% increased Se [96.1% (91.8-98.2%)] and decreased Sp [54.5% (26.9-79.6%)]. Modeling the effect of these Brix accuracy estimates using a stochastic simulation and Bayes theorem showed that a positive result with the 22% Brix cut-point can be used to diagnose good quality colostrum [posttest probability of good colostrum: 94.3% (90.7-96.9%)]. The posttest probability of good colostrum with a Brix value <18% was only 22.7% (12.3-39.2%). Based on this study, the 2 cut-points could be alternatively used to select good quality colostrum (sample with Brix ≥22%) or to discard poor quality colostrum (sample with Brix <18%). When sample results are between these 2 values, colostrum supplementation should be considered. PMID:27423958
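
The posttest probabilities quoted above follow from Bayes' theorem. A minimal sketch using the abstract's point estimates (Se, Sp, and the median prevalence of 77.9%) reproduces them approximately; the paper's stochastic simulation additionally propagates the uncertainty in Se and Sp, which this sketch ignores.

```python
def posttest_positive(se, sp, prev):
    """P(condition present | test positive) via Bayes' theorem."""
    return se * prev / (se * prev + (1 - sp) * (1 - prev))

def posttest_negative(se, sp, prev):
    """P(condition present | test negative) via Bayes' theorem."""
    return (1 - se) * prev / ((1 - se) * prev + sp * (1 - prev))

prev = 0.779  # median prevalence of good colostrum across studies

# Brix >= 22% cut-point: Se = 80.2%, Sp = 82.6%
p_pos_22 = posttest_positive(0.802, 0.826, prev)
# Brix < 18% cut-point: Se = 96.1%, Sp = 54.5%
p_neg_18 = posttest_negative(0.961, 0.545, prev)
```

The point estimates give roughly 94% for a positive result at the 22% cut-point and roughly 20% for a negative result at the 18% cut-point, in line with the simulated values of 94.3% and 22.7%.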

  15. Accuracy of pattern detection methods in the performance of golf putting.

    PubMed

    Couceiro, Micael S; Dias, Gonçalo; Mendes, Rui; Araújo, Duarte

    2013-01-01

    The authors present a comparison of the classification accuracy of 5 pattern detection methods in the performance of golf putting. The detection of the position of the golf club was performed using a computer vision technique, followed by the estimation algorithm Darwinian particle swarm optimization, to obtain a kinematical model of each trial. The estimated parameters of the models were subsequently used as samples for five classification algorithms: (a) linear discriminant analysis, (b) quadratic discriminant analysis, (c) naive Bayes with normal distribution, (d) naive Bayes with kernel smoothing density estimate, and (e) least squares support vector machines. Beyond testing the performance of each classification method, it was also possible to identify a putting signature that characterized each golf player. It may be concluded that these methods can be applied to the study of coordination and motor control in putting performance, allowing for the analysis of the intra- and interpersonal variability of motor behavior in performance contexts.
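
One of the five classifiers, naive Bayes with normal densities, is simple enough to sketch from scratch. The two-class data below are synthetic stand-ins for the estimated kinematic parameters (e.g., two players whose putting signatures differ in mean), not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "kinematic parameter" vectors for two classes (players).
X0 = rng.normal([0.0, 0.0], 0.8, size=(100, 2))
X1 = rng.normal([2.0, 1.5], 0.8, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

def fit_gaussian_nb(X, y):
    """Per-class feature means, variances, and priors for naive Bayes
    with normal (Gaussian) class-conditional densities."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return stats

def predict_gaussian_nb(stats, X):
    """Assign each row to the class with the highest log posterior."""
    scores = []
    for c, (mu, var, prior) in stats.items():
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var,
                           axis=1)
        scores.append(ll + np.log(prior))
    return np.argmax(np.stack(scores, axis=1), axis=1)

stats = fit_gaussian_nb(X, y)
acc = float(np.mean(predict_gaussian_nb(stats, X) == y))
```

Swapping in the other four classifiers over the same samples and comparing the resulting accuracies mirrors the comparison performed in the paper.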

  16. Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement

    DOE PAGES

    Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; Alder, Berni J.

    2013-12-10

    A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.

  17. Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement

    SciTech Connect

    Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; Alder, Berni J.

    2013-12-10

    A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.

  18. Assessment of Required Accuracy of Digital Elevation Data for Hydrologic Modeling

    NASA Technical Reports Server (NTRS)

    Kenward, T.; Lettenmaier, D. P.

    1997-01-01

    The effect of the vertical accuracy of Digital Elevation Models (DEMs) on hydrologic models is evaluated by comparing three DEMs and the resulting hydrologic model predictions applied to a 7.2 sq km USDA-ARS watershed at Mahantango Creek, PA. The high-resolution (5 m) DEM was resampled to a 30 m resolution using a method that constrained the spatial structure of the elevations to be comparable with the USGS and SIR-C DEMs. The resulting 30 m DEM was used as the reference product for subsequent comparisons. Spatial fields of directly derived quantities, such as elevation differences, slope, and contributing area, were compared to the reference product, as were hydrologic model output fields derived using each of the three DEMs at the common 30 m spatial resolution.

  19. A simple method for improving the time-stepping accuracy in atmosphere and ocean models

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2012-12-01

    In contemporary numerical simulations of the atmosphere and ocean, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. A common time-stepping method in atmosphere and ocean models is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter, which has become known as the RAW filter (Williams 2009, 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other atmosphere and ocean models. References: PD Williams (2009) A proposed modification to the Robert-Asselin time filter, Monthly Weather Review, 137, 2538-2546.
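
The RAW modification is compact: after each leapfrog step, the RA filter displacement d = (nu/2)(x(n-1) - 2x(n) + x(n+1)) is split between the middle and newest time levels with a weight alpha, where alpha = 1 recovers the classical RA filter and alpha ≈ 0.53 is the value recommended by Williams (2009). A minimal sketch on the test oscillation dx/dt = ix (the test problem and the parameter values nu = 0.2, dt = 0.1 are illustrative):

```python
def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog time stepping with the Robert-Asselin-Williams (RAW) filter.

    The filter displacement d is split between the middle and newest
    time levels with weight alpha; alpha=1.0 recovers the RA filter.
    """
    x_prev = x0                       # filtered value at level n-1
    x_curr = x0 + dt * f(x0)          # bootstrap with one forward-Euler step
    for _ in range(nsteps):
        x_next = x_prev + 2 * dt * f(x_curr)           # leapfrog step
        d = 0.5 * nu * (x_prev - 2 * x_curr + x_next)  # filter displacement
        x_prev = x_curr + alpha * d        # filtered middle level
        x_curr = x_next + (alpha - 1) * d  # filtered newest level
    return x_curr

# dx/dt = i*x has exact amplitude 1; compare the two filters' amplitude errors.
f = lambda x: 1j * x
amp_raw = abs(leapfrog_raw(f, 1 + 0j, 0.1, 500, alpha=0.53))
amp_ra = abs(leapfrog_raw(f, 1 + 0j, 0.1, 500, alpha=1.0))
```

On this test problem the RA-filtered amplitude decays noticeably over 500 steps, while the RAW-filtered amplitude stays close to 1, consistent with the third-order amplitude accuracy claimed above.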

  20. Assessing the prediction accuracy of cure in the Cox proportional hazards cure model: an application to breast cancer data.

    PubMed

    Asano, Junichi; Hirakawa, Akihiro; Hamada, Chikuma

    2014-01-01

    A cure rate model is a survival model incorporating the cure rate with the assumption that the population contains both uncured and cured individuals. It is a powerful statistical tool for prognostic studies, especially in cancer. The cure rate is important for making treatment decisions in clinical practice. The proportional hazards (PH) cure model can predict the cure rate for each patient. The model contains a logistic regression component for the cure rate and a Cox regression component to estimate the hazard for uncured patients. A measure for quantifying the predictive accuracy of the cure rate estimated by the Cox PH cure model is required, as there has been a lack of previous research in this area. We used the Cox PH cure model for the breast cancer data; however, the area under the receiver operating characteristic curve (AUC) could not be estimated because many patients were censored. In this study, we used imputation-based AUCs to assess the predictive accuracy of the cure rate from the PH cure model. We examined the precision of these AUCs using simulation studies. The results demonstrated that the imputation-based AUCs were estimable and their biases were negligibly small in many cases, although the ordinary AUC could not be estimated. Additionally, we introduced a bias-correction method for the imputation-based AUCs and found that the bias-corrected estimate successfully compensated for the overestimation in the simulation studies. We also illustrated the estimation of the imputation-based AUCs using breast cancer data.
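
A generic way to obtain an imputation-based AUC, sketched here with invented data, is to draw each censored subject's unknown cure status from its model-estimated cure probability and average the AUC over many imputed data sets. This illustrates the idea only; it is not the paper's exact estimator or its bias correction.

```python
import random

def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney statistic: the probability that a cured
    subject receives a higher predicted cure probability than an uncured
    one (ties count one half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def imputed_auc(prob_cure, cured, censored, n_imp=200, seed=1):
    """Average AUC over imputations in which each censored subject's
    unknown cure status is drawn Bernoulli(prob_cure)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_imp):
        pos, neg = [], []
        for p, c, cens in zip(prob_cure, cured, censored):
            status = (rng.random() < p) if cens else c
            (pos if status else neg).append(p)
        total += auc(pos, neg)
    return total / n_imp
```

With no censoring the estimator reduces to the ordinary AUC, which is the sanity check used below.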

  1. Accuracy Assessment of Three-dimensional Surface Reconstructions of In vivo Teeth from Cone-beam Computed Tomography

    PubMed Central

    Sang, Yan-Hui; Hu, Hong-Cheng; Lu, Song-He; Wu, Yu-Wei; Li, Wei-Ran; Tang, Zhi-Hui

    2016-01-01

    Background: The accuracy of three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) has been particularly important in dentistry, which will affect the effectiveness of diagnosis, treatment plan, and outcome in clinical practice. The aims of this study were to assess the linear, volumetric, and geometric accuracy of 3D reconstructions from CBCT and to investigate the influence of voxel size and CBCT system on the reconstruction results. Methods: Fifty teeth from 18 orthodontic patients were assigned to three groups as NewTom VG 0.15 mm group (NewTom VG; voxel size: 0.15 mm; n = 17), NewTom VG 0.30 mm group (NewTom VG; voxel size: 0.30 mm; n = 16), and VATECH DCTPRO 0.30 mm group (VATECH DCTPRO; voxel size: 0.30 mm; n = 17). The 3D reconstruction models of the teeth were segmented from CBCT data manually using Mimics 18.0 (Materialise Dental, Leuven, Belgium), and the extracted teeth were scanned by 3Shape optical scanner (3Shape A/S, Denmark). Linear and volumetric deviations were separately assessed by comparing the length and volume of the 3D reconstruction model with physical measurement by paired t-test. Geometric deviations were assessed by the root mean square value of the superimposed 3D reconstruction and optical models by one-sample t-test. To assess the influence of voxel size and CBCT system on 3D reconstruction, analysis of variance (ANOVA) was used (α = 0.05). Results: The linear, volumetric, and geometric deviations were −0.03 ± 0.48 mm, −5.4 ± 2.8%, and 0.117 ± 0.018 mm for NewTom VG 0.15 mm group; −0.45 ± 0.42 mm, −4.5 ± 3.4%, and 0.116 ± 0.014 mm for NewTom VG 0.30 mm group; and −0.93 ± 0.40 mm, −4.8 ± 5.1%, and 0.194 ± 0.117 mm for VATECH DCTPRO 0.30 mm group, respectively. There were statistically significant differences between groups in terms of linear measurement (P < 0.001), but no significant difference in terms of volumetric measurement (P = 0.774). No statistically significant differences were

  2. A high accuracy multi-image registration method for tracking MRI-guided robots

    NASA Astrophysics Data System (ADS)

    Shang, Weijian; Fischer, Gregory S.

    2012-02-01

    Recent studies have demonstrated an increasing number of functional surgical robots and other devices operating in the Magnetic Resonance Imaging (MRI) environment. Calibration and tracking of the robotic device is essential during such MRI-guided procedures. A fiducial tracking module is placed on the base or the end effector of the robot to localize it within the scanner, and thus within the patient coordinate system. The fiducial frame has a Z shape and is made of seven tubes filled with high-contrast fluid. The frame is highlighted in the MR images and is used in localization. Compared to the earlier single-image registration method, multiple images are used in this algorithm to calculate the position and orientation of the frame, and thus of the robot. By using multiple images together, measurement error is reduced and the strict requirement for slowly acquired, high-quality images is relaxed. Accuracy and performance were evaluated in experiments performed with a Philips 3T MRI scanner. Presented is an accuracy comparison of the new method with a varied number of images, and a comparison to more traditional single-image registration techniques.

  3. Brief inhalation method to measure cerebral oxygen extraction fraction with PET: Accuracy determination under pathologic conditions

    SciTech Connect

    Altman, D.I.; Lich, L.L.; Powers, W.J.

    1991-09-01

    The initial validation of the brief inhalation method to measure cerebral oxygen extraction fraction (OEF) with positron emission tomography (PET) was performed in non-human primates with predominantly normal cerebral oxygen metabolism (CMRO2). Sensitivity analysis by computer simulation, however, indicated that this method may be subject to increasing error as CMRO2 decreases. The accuracy of the method under pathologic conditions of reduced CMRO2 had not been determined. Since reduced CMRO2 values are observed frequently in newborn infants and in regions of ischemia and infarction in adults, we determined the accuracy of the brief inhalation method in non-human primates by comparing OEF measured with PET to OEF measured by arteriovenous oxygen difference (A-VO2) under pathologic conditions of reduced CMRO2 (0.27-2.68 ml 100g-1 min-1). A regression equation of OEF (PET) = 1.07 × OEF (A-VO2) + 0.017 (r = 0.99, n = 12) was obtained. The absolute error in oxygen extraction measured with PET was small (mean 0.03 ± 0.04, range -0.03 to 0.12) and was independent of cerebral blood flow, cerebral blood volume, CMRO2, or OEF. The percent error was higher (19 ± 37), particularly when OEF is below 0.15. These data indicate that the brief inhalation method can be used for measurement of cerebral oxygen extraction and cerebral oxygen metabolism under pathologic conditions of reduced cerebral oxygen metabolism, with these limitations borne in mind.

  4. Increasing the range accuracy of three-dimensional ghost imaging ladar using optimum slicing number method

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Xu, Lu; Yang, Cheng-Hua; Wang, Qiang; Liu, Yue-Hao; Zhao, Yuan

    2015-12-01

    The range accuracy of three-dimensional (3D) ghost imaging is derived. Based on the derived range accuracy equation, the relationship between the slicing number and the range accuracy is analyzed and an optimum slicing number (OSN) is determined. According to the OSN, an improved 3D ghost imaging algorithm is proposed to increase the range accuracy. Experimental results indicate that the slicing number can affect the range accuracy significantly and the highest range accuracy can be achieved if the 3D ghost imaging system works with OSN. Project supported by the Young Scientist Fund of the National Natural Science Foundation of China (Grant No. 61108072).

  5. VIKOR method with enhanced accuracy for multiple criteria decision making in healthcare management.

    PubMed

    Zeng, Qiang-Lin; Li, Dan-Dan; Yang, Yi-Bin

    2013-04-01

    The Višekriterijumsko kompromisno rangiranje (VIKOR) method is one of the commonly used multi-criteria decision making (MCDM) methods for improving the quality of decision making. VIKOR has the advantage of providing a ranking procedure for positive and negative attributes when it is used and examined in decision support. However, we noticed that this method may fail to produce an objective result in the medical field, because most medical data have normal reference ranges (e.g., for normally distributed data: NRR ∈ [μ ± 1.96σ]); this limitation has a negative effect on its acceptance as an effective decision-supporting method in medical decision making. This paper proposes an improved VIKOR method with enhanced accuracy (ea-VIKOR) to make it suitable for such data in the medical field by introducing a new data normalization method that takes the original distance to the normal reference range (ODNRR) into account. In addition, an experimental example is presented to demonstrate the efficiency and feasibility of the ea-VIKOR method; the results demonstrate the ability of ea-VIKOR to deal with such data and to support decision making in healthcare management. For this reason, the ea-VIKOR should be considered for use as a decision support tool in future studies.
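
For reference, the classical VIKOR ranking that ea-VIKOR modifies can be sketched as follows. The ODNRR normalization introduced by the paper is not reproduced here (this sketch uses the standard best/worst normalization for benefit criteria), and the decision matrix is illustrative.

```python
def vikor(matrix, weights, v=0.5):
    """Classical VIKOR compromise ranking (benefit criteria assumed).

    Returns the Q value of each alternative (lower = better compromise).
    S is the weighted sum of normalized regrets, R the maximum regret,
    and Q blends them with the strategy weight v.
    """
    ncrit = len(weights)
    best = [max(row[j] for row in matrix) for j in range(ncrit)]
    worst = [min(row[j] for row in matrix) for j in range(ncrit)]
    S, R = [], []
    for row in matrix:
        terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j] or 1)
                 for j in range(ncrit)]
        S.append(sum(terms))
        R.append(max(terms))
    s_best, s_worst = min(S), max(S)
    r_best, r_worst = min(R), max(R)
    return [v * (S[i] - s_best) / (s_worst - s_best or 1)
            + (1 - v) * (R[i] - r_best) / (r_worst - r_best or 1)
            for i in range(len(matrix))]

# Three alternatives scored on two equally weighted criteria.
q = vikor([[0.9, 0.8], [0.5, 0.4], [0.1, 0.2]], [0.5, 0.5])
```

The ea-VIKOR variant replaces the (best - value)/(best - worst) terms with a normalization based on the distance to the normal reference range, which is what makes it suitable for medical data.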

  6. Improving the Accuracy of Early Diagnosis of Thyroid Nodule Type Based on the SCAD Method.

    PubMed

    Shahraki, Hadi Raeisi; Pourahmad, Saeedeh; Paydar, Shahram; Azad, Mohsen

    2016-01-01

    Although early diagnosis of thyroid nodule type is very important, the diagnostic accuracy of standard tests is a challenging issue. We here aimed to find an optimal combination of factors to improve diagnostic accuracy for distinguishing malignant from benign thyroid nodules before surgery. In a prospective study from 2008 to 2012, 345 patients referred for thyroidectomy were enrolled. The sample was split into a training set and a testing set in a ratio of 7:3. The former was used for estimation, variable selection, and obtaining a linear combination of factors. We utilized smoothly clipped absolute deviation (SCAD) logistic regression to achieve the sparse optimal combination of factors. To evaluate the performance of the estimated model in the testing set, a receiver operating characteristic (ROC) curve was utilized. The mean age of the examined patients (66 male and 279 female) was 40.9 ± 13.4 years (range 15-90 years). Some 54.8% of the patients (24.3% male and 75.7% female) had benign and 45.2% (14% male and 86% female) malignant thyroid nodules. In addition to maximum diameters of nodules and lobes, their volumes were considered as related factors for malignancy prediction (a total of 16 factors). However, the SCAD method estimated the coefficients of 8 factors to be zero and eliminated them from the model. Hence a sparse model which combined the effects of 8 factors to distinguish malignant from benign thyroid nodules was generated. An optimal cut-off point of the ROC curve for our estimated model was obtained (p=0.44) and the area under the curve (AUC) was equal to 77% (95% CI: 68%-85%). Sensitivity, specificity, positive predictive value and negative predictive values for this model were 70%, 72%, 71% and 76%, respectively. An increase of 10 percent and a greater accuracy rate in early diagnosis of thyroid nodule type by statistical methods (SCAD and ANN methods) compared with the results of FNA testing revealed that the statistical modeling
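
The variable selection above rests on the SCAD penalty of Fan and Li (2001), which shrinks small coefficients exactly to zero while leaving large coefficients nearly unpenalized. The penalty itself is a simple piecewise function (a = 3.7 is the conventional choice; the example values are illustrative):

```python
def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty p_lambda(theta) of Fan & Li (2001).

    Linear (lasso-like) near zero, quadratic blending in the middle,
    and constant beyond a*lam, so large coefficients are not shrunk.
    """
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2

# With lam = 1: small coefficients are penalized linearly, large ones
# hit the constant ceiling lam^2 * (a + 1) / 2.
p_small = scad_penalty(0.5, 1.0)
p_large = scad_penalty(10.0, 1.0)
```

In SCAD logistic regression this penalty is added to the negative log-likelihood for each coefficient, which is what drove 8 of the 16 factor coefficients to exactly zero.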

  7. Method for improving terahertz band absorption spectrum measurement accuracy using noncontact sample thickness measurement.

    PubMed

    Li, Zhi; Zhang, Zhaohui; Zhao, Xiaoyan; Su, Haixia; Yan, Fang; Zhang, Han

    2012-07-10

    The terahertz absorption spectrum has a complex nonlinear relationship with sample thickness, which is normally measured mechanically with limited accuracy. As a result, the terahertz absorption spectrum is usually determined incorrectly. In this paper, an iterative algorithm is proposed to accurately determine sample thickness. This algorithm is independent of the initial value used and results in convergent calculations. Precision in sample thickness can be improved to 0.1 μm. A more precise absorption spectrum can then be extracted. By comparing the proposed method with the traditional method based on mechanical thickness measurements, quantitative analysis experiments on a three-component amino acid mixture show that the global error decreased from 0.0338 to 0.0301.

  8. Age estimation by measurements of developing teeth: accuracy of Cameriere's method on a Brazilian sample.

    PubMed

    Fernandes, Mário Marques; Tinoco, Rachel Lima Ribeiro; de Braganca, Daniel Pereira Parreiras; de Lima, Silas Henrique Rabelo; Francesquini Junior, Luiz; Daruge Junior, Eduardo

    2011-11-01

    Developing teeth are commonly used as criteria for age estimation in children and young adults. The method developed by Cameriere et al. (Int J Legal Med 2006;120:49-52) is based on measurements of teeth with open apices and the application of a formula to estimate the chronological age of children. The present study assessed a sample of panoramic radiographs of Brazilian children from 5 to 15 years of age to evaluate the accuracy of the method proposed by Cameriere et al. The results proved the method reliable for age estimation, with a median residual error of -0.014 years between chronological and estimated ages (p = 0.603). There was a slight tendency to overestimate the ages of 5-10 years and underestimate the ages of 11-15 years.

  9. Improving the Accuracy of the Boundary Integral Method Based on the Helmholtz Integral

    NASA Technical Reports Server (NTRS)

    Koopmann, G. H.; Brod, K.

    1985-01-01

    Several recent papers in the literature have been based on various forms of the Helmholtz integral to compute the radiation fields of vibrating bodies. The surface integral form is given, where the symbols P, Rμ, ω, ρ, G, R, V, and Sμ denote acoustic pressure, source coordinate, angular frequency, fluid density, Green function, field coordinate, surface velocity, and body surface, respectively. A discretized form of the surface integral is also given. Solutions to the surface integral are complicated by the singularity of the Green function at R = Rμ and by the uniqueness problem at interior eigenfrequencies of the enclosed space. The use of the interior integral circumvents the singularity problem, since the field points are chosen in the interior space of the vibrating body, where a zero-pressure condition exists. The interior integral form is given. The method to improve the accuracy is detailed. Examples of the method are presented for a variety of radiators.

  10. Accuracy of DXA in estimating body composition changes in elite athletes using a four compartment model as the reference method

    PubMed Central

    2010-01-01

    Background Dual-energy x-ray absorptiometry (DXA) provides an affordable and practical assessment of whole-body and regional body composition. However, little information is available on the assessment of changes in body composition in top-level athletes using DXA. The present study aimed to assess the accuracy of DXA in tracking body composition changes (relative fat mass [%FM], absolute fat mass [FM], and fat-free mass [FFM]) of elite male judo athletes from a period of weight stability to prior to a competition, compared to a four compartment model (4C model) as the criterion method. Methods A total of 27 elite male judo athletes (age, 22.2 ± 2.8 yrs) were evaluated. Measures of body volume by air displacement plethysmography, bone mineral content assessed by DXA, and total-body water assessed by deuterium dilution were used in a 4C model. Statistical analyses included examination of the coefficient of determination (r2), standard error of estimation (SEE), slope, intercept, and agreement between models. Results At a group level analysis, changes in %FM, FM, and FFM estimates by DXA were not significantly different from those by the 4C model. Though the regression between DXA and the 4C model did not differ from the line of identity, DXA %FM, FM, and FFM changes only explained 29%, 36%, and 38% of the 4C reference values, respectively. Individual results showed that the 95% limits of agreement were -3.7 to 5.3 for %FM, -2.6 to 3.7 for FM, and -3.7 to 2.7 for FFM. The relation between the difference and the mean of the methods indicated a significant trend for %FM and FM changes, with DXA overestimating at the lower ends and underestimating at the upper ends of FM changes. Conclusions Our data indicate that both at group and individual levels DXA did not present the expected accuracy in tracking changes in adiposity in elite male judo athletes. PMID:20307312
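
The 95% limits of agreement reported above are the standard Bland-Altman quantities: the mean of the paired differences (bias) plus or minus 1.96 times their standard deviation. A minimal sketch with invented paired changes (not the study's data):

```python
from statistics import mean, stdev

def limits_of_agreement(method_a, method_b):
    """Bland-Altman bias and 95% limits of agreement between two methods.

    Returns (bias, lower, upper), where the limits are
    bias +/- 1.96 * SD of the paired differences.
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)
    return bias, bias - spread, bias + spread

# Hypothetical %FM changes measured by DXA and by the 4C reference model.
dxa_changes = [1.0, 2.0, 3.0, 4.0]
ref_changes = [1.5, 1.5, 3.5, 3.5]
bias, lo, hi = limits_of_agreement(dxa_changes, ref_changes)
```

In the study the same computation on the 27 athletes yields, for example, -3.7 to 5.3 %FM: a spread wide enough to question individual-level tracking even though the group-level bias was not significant.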

  11. Prostate Localization on Daily Cone-Beam Computed Tomography Images: Accuracy Assessment of Similarity Metrics

    SciTech Connect

    Kim, Jinkoo; Hammoud, Rabih; Pradhan, Deepak; Zhong Hualiang; Jin, Ryan Y.; Movsas, Benjamin; Chetty, Indrin J.

    2010-07-15

    Purpose: To evaluate different similarity metrics (SM) using natural calcifications and observation-based measures to determine the most accurate prostate and seminal vesicle localization on daily cone-beam CT (CBCT) images. Methods and Materials: CBCT images of 29 patients were retrospectively analyzed; 14 patients with prostate calcifications (calcification data set) and 15 patients without calcifications (no-calcification data set). Three groups of test registrations were performed. Test 1: 70 CT/CBCT pairs from the calcification data set were registered using 17 SMs (6,580 registrations) and compared using the calcification mismatch error as an endpoint. Test 2: Using the four best SMs from Test 1, 75 CT/CBCT pairs in the no-calcification data set were registered (300 registrations). Accuracy of contour overlays was ranked visually. Test 3: For the best SM from Tests 1 and 2, accuracy was estimated using 356 CT/CBCT registrations. Additionally, target expansion margins were investigated for generating registration regions of interest. Results: Test 1: Incremental sign correlation (ISC), gradient correlation (GC), gradient difference (GD), and normalized cross correlation (NCC) showed the smallest errors (μ ± σ: 1.6 ± 0.9 to 2.9 ± 2.1 mm). Test 2: Two of the three reviewers ranked GC higher. Test 3: Using GC, 96% of registrations showed <3-mm error when calcifications were filtered. Errors were left/right: 0.1 ± 0.5 mm, anterior/posterior: 0.8 ± 1.0 mm, and superior/inferior: 0.5 ± 1.1 mm. The existence of calcifications increased the success rate to 97%. Expansion margins of 4-10 mm were equally successful. Conclusion: Gradient-based SMs were most accurate. Estimated error was found to be <3 mm (1.1 mm SD) in 96% of the registrations. Results suggest that the contour expansion margin should be no less than 4 mm.
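    Of the metrics compared above, normalized cross correlation (NCC) is the simplest to state: the Pearson correlation between the intensities of two equally sized patches. A minimal sketch (illustrative only, not the study's registration pipeline):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two equally sized image patches."""
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# NCC is invariant to global brightness/contrast changes, one reason
# intensity-based metrics can work across CT and CBCT
patch = np.arange(16.0).reshape(4, 4)
same = ncc(patch, patch)
rescaled = ncc(patch, 2.0 * patch + 5.0)
```

    In a registration loop, a metric like this is evaluated over candidate rigid shifts and the shift maximizing the score is taken as the alignment.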

  12. Accuracy of task recall for epidemiological exposure assessment to construction noise

    PubMed Central

    Reeb-Whitaker, C; Seixas, N; Sheppard, L; Neitzel, R

    2004-01-01

    Aims: To validate the accuracy of construction worker recall of task and environment based information; and to evaluate the effect of task recall on estimates of noise exposure. Methods: A cohort of 25 construction workers recorded tasks daily and had dosimetry measurements weekly for six weeks. Worker recall of tasks reported on the daily activity cards was validated with research observations and compared directly to task recall at a six month interview. Results: The mean LEQ noise exposure level (dBA) from dosimeter measurements was 89.9 (n = 61) and 83.3 (n = 47) for carpenters and electricians, respectively. The percentage time at tasks reported during the interview was compared to that calculated from daily activity cards; only 2/22 tasks were different at the nominal 5% significance level. The accuracy, based on bias and precision, of percentage time reported for tasks from the interview was 53–100% (median 91%). For carpenters, the difference in noise estimates derived from activity cards (mean 91.9 dBA) was not different from those derived from the questionnaire (mean 91.7 dBA). This trend held for electricians as well. For all subjects, noise estimates derived from the activity card and the questionnaire were strongly correlated with dosimetry measurements. The average difference between the noise estimate derived from the questionnaire and dosimetry measurements was 2.0 dBA, and was independent of the actual exposure level. Conclusions: Six months after tasks were performed, construction workers were able to accurately recall the percentage time they spent at various tasks. Estimates of noise exposure based on long term recall (questionnaire) were no different from estimates derived from daily activity cards and were strongly correlated with dosimetry measurements, overestimating the level on average by 2.0 dBA. PMID:14739379
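    The LEQ values above are energy averages: under the equal-energy principle, dBA readings are averaged on a power basis rather than arithmetically, which is why a few loud tasks dominate the result. A small sketch of that calculation (illustrative only; the study's dosimeters compute this internally):

```python
import math

def leq(levels_dba):
    """Equivalent continuous sound level: the energy (power) average of a
    set of equal-duration dBA readings, 10*log10(mean(10**(L/10)))."""
    n = len(levels_dba)
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_dba) / n)

# two equal levels average to themselves; mixed levels are pulled toward
# the louder one, unlike an arithmetic mean
steady = leq([90.0, 90.0])
mixed = leq([89.9, 83.3])
```

    Here `mixed` comes out near 87.8 dBA, well above the arithmetic mean of 86.6 dBA.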

  13. Assessing the impact of measurement frequency on accuracy and uncertainty of water quality data

    NASA Astrophysics Data System (ADS)

    Helm, Björn; Schiffner, Stefanie; Krebs, Peter

    2014-05-01

    Physico-chemical water quality is a major criterion for evaluating the ecological state of a river water body. Physical and chemical water properties are measured to assess the river state, identify prevalent pressures and develop mitigating measures. Water quality is regularly assessed on the basis of weekly to quarterly grab samples. The increasing availability of online sensor data measured at high frequency allows for an enhanced understanding of emission and transport dynamics, as well as the identification of typical and critical states. In this study we present a systematic approach to assess the impact of measurement frequency on the accuracy and uncertainty of derived aggregate indicators of environmental quality. Data on water temperature, pH, turbidity, electric conductivity and concentrations of dissolved oxygen, nitrate, ammonia and phosphate, measured at high frequency (10- and 15-minute intervals), are assessed in resampling experiments. The data were collected at 14 sites in eastern and northern Germany, representing catchments between 40 km2 and 140 000 km2 with varying properties. Resampling is performed to create series of hourly to quarterly frequency, including special restrictions such as sampling during working hours or discharge compensation. Statistical properties and their confidence intervals are determined in a bootstrapping procedure and evaluated along a gradient of sampling frequency. For all variables, the range of the aggregate indicators across the bootstrap realizations increases greatly with decreasing sampling frequency. Mean values of electric conductivity, pH and water temperature obtained at monthly frequency differ on average by less than five percent from the original data. Mean dissolved oxygen, nitrate and phosphate showed less than 15% bias at most stations. Ammonia and turbidity are most sensitive to reduced sampling frequency, with average bias up to 30% and maximum bias up to 250% at monthly sampling. A systematic bias is recognized.
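    The bootstrapping step described above can be sketched as follows: resample a series with replacement, recompute the aggregate indicator each time, and take percentiles of the resampled statistics as the confidence interval. A minimal sketch with synthetic conductivity-like readings (all names and data hypothetical); thinning the series widens the interval, mirroring the paper's frequency result:

```python
import random
import statistics

def bootstrap_ci(series, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for the mean of a series."""
    rng = random.Random(seed)
    n = len(series)
    means = sorted(
        statistics.fmean(rng.choices(series, k=n)) for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# synthetic "hourly" conductivity readings, then thinned to one value per day
hourly = [500 + 20 * ((i * 7919) % 13 - 6) for i in range(24 * 30)]
daily = hourly[::24]
ci_hourly = bootstrap_ci(hourly)
ci_daily = bootstrap_ci(daily)
```

    The daily interval is several times wider than the hourly one, which is the uncertainty penalty of low-frequency sampling the study quantifies.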

  14. Towards an assessment of the accuracy of density functional theory for first principles simulations of water

    NASA Astrophysics Data System (ADS)

    Grossman, Jeffrey C.; Schwegler, Eric; Draeger, Erik W.; Gygi, François; Galli, Giulia

    2004-01-01

    A series of Car-Parrinello (CP) molecular dynamics simulations of water are presented, aimed at assessing the accuracy of density functional theory in describing the structural and dynamical properties of water at ambient conditions. We found negligible differences in structural properties obtained using the Perdew-Burke-Ernzerhof or the Becke-Lee-Yang-Parr exchange and correlation energy functionals; we also found that size effects, although not fully negligible when using 32 molecule cells, are rather small. In addition, we identified a wide range of values of the fictitious electronic mass (μ) entering the CP Lagrangian for which the electronic ground state is accurately described, yielding trajectories and average properties that are independent of the value chosen. However, care must be exercised not to carry out simulations outside this range, where structural properties may artificially depend on μ. In the case of an accurate description of the electronic ground state, and in the absence of proton quantum effects, we obtained an oxygen-oxygen correlation function that is overstructured compared to experiment, and a diffusion coefficient which is approximately ten times smaller.

  15. Cold pressor stress induces opposite effects on cardioceptive accuracy dependent on assessment paradigm.

    PubMed

    Schulz, André; Lass-Hennemann, Johanna; Sütterlin, Stefan; Schächinger, Hartmut; Vögele, Claus

    2013-04-01

    Interoception depends on visceral afferent neurotraffic and central control processes. Physiological arousal and organ activation provide the biochemical and mechanical basis for visceral afferent neurotraffic. Perception of visceral symptoms occurs when attention is directed toward body sensations. Clinical studies suggest that stress contributes to the generation of visceral symptoms. However, during stress exposure attention is normally shifted away from bodily signals. Therefore, the net effects of stress on interoception remain unclear. We, therefore, investigated the impact of the cold pressor test or a control intervention (each n=21) on three established laboratory paradigms to assess cardioceptive accuracy (CA): for the Schandry-paradigm, participants were asked to count heartbeats, while during the Whitehead-tasks subjects were asked to rate whether a cardiac sensation appeared simultaneously with an auditory or visual stimulus. CA was increased by stress when attention was focused on visceral sensations (Schandry), while it decreased when attention was additionally directed toward external stimuli (visual Whitehead). Explanations for these results are offered in terms of internal versus external deployment of attention, as well as specific effects of the cold pressor on the cardiovascular system.
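    In the Schandry heartbeat-counting paradigm mentioned above, cardioceptive accuracy is conventionally scored by comparing counted to actually recorded heartbeats over several counting intervals. A sketch of the standard scoring formula (the interval counts below are invented):

```python
def schandry_accuracy(recorded, counted):
    """Heartbeat perception score for the Schandry mental-tracking task:
    mean of 1 - |recorded - counted| / recorded across counting intervals.
    A score of 1.0 means perfect counting."""
    scores = [1.0 - abs(r - c) / r for r, c in zip(recorded, counted)]
    return sum(scores) / len(scores)

# hypothetical heartbeat counts over three intervals (e.g., 25 s, 35 s, 45 s)
score = schandry_accuracy(recorded=[28, 39, 51], counted=[25, 36, 44])
```

    Under-counting, the typical pattern, lowers the score; the stress manipulation in the study shifts such scores up or down depending on where attention is directed.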

  16. Assessment of the Accuracy of the Bethe-Salpeter (BSE/GW) Oscillator Strengths.

    PubMed

    Jacquemin, Denis; Duchemin, Ivan; Blondel, Aymeric; Blase, Xavier

    2016-08-01

    Aiming to assess the accuracy of the oscillator strengths determined at the BSE/GW level, we performed benchmark calculations using three complementary sets of molecules. In the first, we considered ∼80 states in Thiel's set of compounds and compared the BSE/GW oscillator strengths to recently determined ADC(3/2) and CC3 reference values. The second set includes the oscillator strengths of the low-lying states of 80 medium to large dyes for which we have determined CC2/aug-cc-pVTZ values. The third set contains 30 anthraquinones for which experimental oscillator strengths are available. We find that BSE/GW accurately reproduces the trends for all series with excellent correlation coefficients to the benchmark data and generally very small errors. Indeed, for Thiel's sets, the BSE/GW values are more accurate (using CC3 references) than both CC2 and ADC(3/2) values on both absolute and relative scales. For all three sets, BSE/GW errors also tend to be nicely spread with almost equal numbers of positive and negative deviations as compared to reference values. PMID:27403612

  17. The FES2014 tidal atlas, accuracy assessment for satellite altimetry and other geophysical applications

    NASA Astrophysics Data System (ADS)

    Lyard, Florent Henri; Carrère, Loren; Cancet, Mathilde; Boy, Jean-Paul; Gégout, Pascal; Lemoine, Jean-Michel

    2016-04-01

    The FES2014 tidal atlas (elaborated in a CNES-supported joint project involving the LEGOS laboratory, CLS and Noveltis) is the latest release in the FES atlas series. Based on finite element hydrodynamic modelling with data assimilation, the FES atlases are routinely improved by taking advantage of the increasing duration of satellite altimetry missions. However, the most remarkable improvement in the FES2014 atlas is the unprecedentedly low level of prior misfits (i.e. between the hydrodynamic simulations and data), typically less than 1.3 centimeters RMS for the ocean M2 tide. This makes the data assimilation step much more reliable and more consistent with the true tidal dynamics, especially in shelf and coastal seas, and diminishes the sensitivity of the accuracy to the observation distribution (extremely sparse or nonexistent at high latitudes). The FES2014 atlas has been validated and assessed in various geophysical applications (satellite altimetry corrections, gravimetry, etc.), showing significant improvements over previous FES releases and other state-of-the-art tidal atlases (such as DTU10, GOT4.8 and TPXO8).

  18. Accuracy assessment and automation of free energy calculations for drug design.

    PubMed

    Christ, Clara D; Fox, Thomas

    2014-01-27

    As the free energy of binding of a ligand to its target is one of the crucial optimization parameters in drug design, its accurate prediction is highly desirable. In the present study we have assessed the average accuracy of free energy calculations for a total of 92 ligands binding to five different targets. To make this study and future larger scale applications possible we automated the setup procedure. Starting from user defined binding modes, the procedure decides which ligands to connect via a perturbation based on maximum common substructure criteria and produces all necessary parameter files for free energy calculations in AMBER 11. For the systems investigated, errors due to insufficient sampling were found to be substantial in some cases whereas differences in estimators (thermodynamic integration (TI) versus multistate Bennett acceptance ratio (MBAR)) were found to be negligible. Analytical uncertainty estimates calculated from a single free energy calculation were found to be much smaller than the sample standard deviation obtained from two independent free energy calculations. Agreement with experiment was found to be system dependent ranging from excellent to mediocre (RMSE = [0.9, 8.2, 4.7, 5.7, 8.7] kJ/mol). When restricting analyses to free energy calculations with sample standard deviations below 1 kJ/mol agreement with experiment improved (RMSE = [0.8, 6.9, 1.8, 3.9, 5.6] kJ/mol).
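    Thermodynamic integration, one of the two estimators compared above, integrates the ensemble average of dU/dλ over the coupling parameter. A minimal sketch using trapezoidal quadrature over an invented ⟨dU/dλ⟩ profile (not the AMBER implementation, and the numbers are purely illustrative):

```python
def ti_free_energy(lambdas, dudl_means):
    """Thermodynamic integration: trapezoidal quadrature of <dU/dlambda>
    over lambda gives the free energy difference between the end states."""
    dg = 0.0
    for i in range(len(lambdas) - 1):
        dg += 0.5 * (dudl_means[i] + dudl_means[i + 1]) * (
            lambdas[i + 1] - lambdas[i]
        )
    return dg

# hypothetical <dU/dlambda> averages (kJ/mol) at five lambda windows
lam = [0.0, 0.25, 0.5, 0.75, 1.0]
dudl = [-40.0, -22.0, -10.0, -4.0, -1.0]
delta_g = ti_free_energy(lam, dudl)
```

    Insufficient sampling shows up as noisy ⟨dU/dλ⟩ window averages, which is why the authors compare independent repeat calculations rather than relying on analytical uncertainty estimates alone.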

  19. Accuracy of Cameriere's cut-off value for third molar in assessing 18 years of age.

    PubMed

    De Luca, S; Biagi, R; Begnoni, G; Farronato, G; Cingolani, M; Merelli, V; Ferrante, L; Cameriere, R

    2014-02-01

    Due to increasingly numerous international migrations, estimating the age of unaccompanied minors is becoming of enormous significance for forensic professionals who are required to deliver expert opinions. The third molar tooth is one of the few anatomical sites available for estimating the age of individuals in late adolescence. This study verifies the accuracy of Cameriere's cut-off value of the third molar index (I3M) in assessing 18 years of age. For this purpose, a sample of orthopantomographs (OPTs) of 397 living subjects aged between 13 and 22 years (192 female and 205 male) was analyzed. Age distribution gradually decreases as I3M increases in both males and females. The results show that the sensitivity of the test was 86.6%, with a 95% confidence interval of (80.8%, 91.1%), and its specificity was 95.7%, with a 95% confidence interval of (92.1%, 98%). The proportion of correctly classified individuals was 91.4%. The estimated post-test probability was 95.6%, with a 95% confidence interval of (92%, 98%). Hence, the probability that a subject positive on the test (i.e., I3M<0.08) was 18 years of age or older was 95.6%.
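    Sensitivity, specificity, overall accuracy and post-test probability all derive from a 2×2 table of test outcome against true age status. A sketch using invented counts chosen only to give rates close to those reported (this is not the study's actual table):

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy and positive predictive value
    from a 2x2 classification table. With test-positive meaning I3M < 0.08
    and disease meaning age >= 18, the PPV is the post-test probability
    of adulthood at the sample's prevalence."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + fn + tn)
    ppv = tp / (tp + fp)
    return sens, spec, acc, ppv

# hypothetical counts summing to n = 397 (illustrative only)
sens, spec, acc, ppv = diagnostic_summary(tp=181, fp=8, fn=28, tn=180)
```

    Note that, unlike sensitivity and specificity, the post-test probability depends on the age mix of the sample, so it transfers to casework only if the case population is similar.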


  1. Methods of geodiversity assessment and their application

    NASA Astrophysics Data System (ADS)

    Zwoliński, Zbigniew; Najwer, Alicja; Giardino, Marco

    2016-04-01

    analysis is not as simple as in the case of direct methods. Indirect methods make it possible to assess and map the geodiversity of large or not easily accessible research areas. The foregoing examples mainly refer to areas at the local or regional scale. Such analyses are nevertheless also possible for large spatial units, such as entire countries or states (Zwoliński 2007, Benito-Calvo et al. 2009, Pereira et al. 2013, 2015). A fundamental difference lies in selecting assessment criteria and, above all, input geodata appropriate to the spatial scale and the character of the study areas. In geodiversity assessment, access to data of adequate resolution and accuracy is especially important. Acquisition and integration of geodata often require considerable financial and temporal outlay, and can be a serious limitation on some analyses. The proposed geomorphometry-based indirect method of assessing landform geodiversity, together with its single, easily obtained source dataset (a digital elevation model), might create new opportunities for broad implementation across numerous disciplines. Research on the assessment of geodiversity must at present be regarded as being at an initial stage. While the concept of geodiversity itself has a reliable theoretical foundation, no universal method of its assessment has been developed yet. Only the adoption of a generally accepted and clear methodology of geodiversity evaluation will make it possible to implement it widely in many fields of science, administration and the management of geospace. Then geodiversity can become as important an indicator as biodiversity is today.

  2. A Comparative Accuracy Analysis of Classification Methods in Determination of Cultivated Lands with Spot 5 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    kaya, S.; Alganci, U.; Sertel, E.; Ustundag, B.

    2013-12-01

    Determining cultivated lands and estimating their areas are important tasks for agricultural management. The derived information is mostly used in agricultural policy and precision agriculture, specifically in yield estimation, irrigation and fertilization management, and verification of farmers' declarations. Satellite imagery has been commonly used in crop type identification and area estimation for two decades, owing to its capability for monitoring large areas, rapid data acquisition and spectral response to crop properties. With the launch of high and very high spatial resolution optical satellites in the last decade, such analyses have gained importance as they provide information at large scale. With increasing spatial resolution of satellite images, the classification methods used to derive information from them have become more important, as spectral heterogeneity within land objects increases. In this research, pixel-based classification with the maximum likelihood algorithm and object-based classification with the nearest neighbor algorithm were applied to 2.5 m resolution SPOT 5 satellite images from 2012, in order to investigate the accuracy of these methods in determining cotton- and corn-planted lands and estimating their areas. The study area was selected in Sanliurfa Province in southeastern Turkey, a major contributor to Turkey's agricultural production. Classification results were compared in terms of crop type identification using
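    Pixel-based maximum likelihood classification assigns each pixel to the class whose multivariate Gaussian model, fit from training statistics, gives the highest likelihood. A toy sketch with invented two-band class statistics (not the study's training data or software):

```python
import numpy as np

def max_likelihood_classify(pixel, class_stats):
    """Gaussian maximum likelihood classifier for one multispectral pixel.
    class_stats maps class name -> (mean vector, covariance matrix)."""
    x = np.asarray(pixel, float)
    best, best_ll = None, -np.inf
    for name, (mu, cov) in class_stats.items():
        mu = np.asarray(mu, float)
        cov = np.asarray(cov, float)
        d = x - mu
        # log-likelihood up to a constant: -0.5 * (log|C| + Mahalanobis^2)
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))
        if ll > best_ll:
            best, best_ll = name, ll
    return best

# hypothetical 2-band reflectance statistics for two crop classes
stats = {
    "cotton": ([0.30, 0.55], [[0.01, 0.0], [0.0, 0.01]]),
    "corn":   ([0.45, 0.40], [[0.01, 0.0], [0.0, 0.01]]),
}
label = max_likelihood_classify([0.32, 0.52], stats)
```

    Object-based nearest neighbor classification differs in that it first segments the image and classifies whole segments using segment-level features, which helps when within-object spectral heterogeneity is high.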

  3. Estimating covariate-adjusted measures of diagnostic accuracy based on pooled biomarker assessments.

    PubMed

    McMahan, Christopher S; McLain, Alexander C; Gallagher, Colin M; Schisterman, Enrique F

    2016-07-01

    There is a need for epidemiological and medical researchers to identify new biomarkers (biological markers) that are useful in determining exposure levels and/or for the purposes of disease detection. Often this process is stunted by the high testing costs associated with evaluating new biomarkers. Traditionally, biomarker assessments are individually tested within a target population. Pooling has been proposed to help alleviate the testing costs, where pools are formed by combining several individual specimens. Methods for using pooled biomarker assessments to estimate discriminatory ability have been developed. However, all these procedures have failed to acknowledge confounding factors. In this paper, we propose a regression methodology based on pooled biomarker measurements that allows the assessment of the discriminatory ability of a biomarker of interest. In particular, we develop covariate-adjusted estimators of the receiver-operating characteristic curve, the area under the curve, and Youden's index. We establish the asymptotic properties of these estimators and develop inferential techniques that allow one to assess whether a biomarker is a good discriminator between cases and controls, while controlling for confounders. The finite sample performance of the proposed methodology is illustrated through simulation. We apply our methods to analyze myocardial infarction (MI) data, with the goal of determining whether the pro-inflammatory cytokine interleukin-6 is a good predictor of MI after controlling for the subjects' cholesterol levels. PMID:26927583
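    For unpooled, covariate-free data, the quantities the authors adjust (ROC curve, AUC, Youden's index) have simple empirical versions. A sketch of those baselines (the biomarker values are invented; the paper's covariate-adjusted, pooled estimators are substantially more involved):

```python
import numpy as np

def empirical_roc_youden(cases, controls):
    """Empirical AUC and Youden's index J = max(sens + spec - 1) over all
    thresholds, for a biomarker measured in cases and controls."""
    cases = np.asarray(cases, float)
    controls = np.asarray(controls, float)
    best_j = 0.0
    for t in np.unique(np.concatenate([cases, controls])):
        sens = (cases >= t).mean()
        spec = (controls < t).mean()
        best_j = max(best_j, sens + spec - 1.0)
    # AUC = probability a random case exceeds a random control (ties count half)
    diff = cases[:, None] - controls[None, :]
    auc = (diff > 0).mean() + 0.5 * (diff == 0).mean()
    return float(auc), float(best_j)

cases = [2.1, 3.4, 2.8, 4.0, 3.1]
controls = [1.0, 1.8, 2.2, 1.5, 2.6]
auc, j = empirical_roc_youden(cases, controls)
```

    The threshold maximizing J is the usual Youden-optimal cut-point; covariate adjustment replaces these empirical rates with model-based ones evaluated at fixed confounder values.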

  4. Using Vicon system and optical method to evaluate inertial and magnetic system accuracy

    NASA Astrophysics Data System (ADS)

    Grenet, P.; Mansour, F.

    2010-06-01

    MEMS sensors are becoming more accessible thanks to falling prices and energy consumption, which explains the spread of attitude estimation systems. The sensors found in such systems are accelerometers, magnetometers and gyroscopes. Commercial solutions are expensive and not optimized in terms of sensor count and energy consumption. Movea's system is based on low-cost sensors, an optimized sensor count and low energy consumption. It consists of wireless attitude sensing nodes named MotionPOD. The system has two modes, simulation and reconstruction, both available even for kinematic motions. Movea's system can be seen as a black box that takes measurements as inputs in reconstruction mode and body segment orientations in simulation mode. For body motion reconstruction the Vicon system is usually used because of its accuracy: from marker positions it is easy to compute the orientation of each body segment, provided enough markers are placed. The Vicon system can therefore serve as a reference for Movea's system. We present a practical approach to the characterization and validation of Movea's system, and present criteria useful for evaluating reconstruction accuracy. Moreover, we detail the optical methods used to extract the data of interest from Vicon.

  5. Accuracies of southwell and force/stiffness methods in the prediction of buckling strength of hypersonic aircraft wing tubular panels

    NASA Technical Reports Server (NTRS)

    Ko, William L.

    1987-01-01

    Accuracies of the Southwell method and the force/stiffness (F/S) method are examined as used to predict buckling loads of hypersonic aircraft wing tubular panels from nondestructive buckling test data. Various factors affecting the accuracies of the two methods are discussed. Effects of the load cutoff point in the nondestructive buckling tests on the accuracies of the two methods are discussed in detail. For the tubular panels under pure compression, the F/S method was found to give more accurate buckling load predictions than the Southwell method, which excessively overpredicts the buckling load. The Southwell method was found to require a higher load cutoff point than the F/S method. In using the F/S method to predict the buckling load of tubular panels under pure compression, a load cutoff point of approximately 50 percent of the critical load could give reasonably accurate predictions.
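    The Southwell method itself is a linear extrapolation: for an imperfect panel loaded below buckling, deflection data satisfy δ/P = δ/Pcr + const, so a line fit to (δ, δ/P) has slope 1/Pcr. A sketch on synthetic data (the hyperbolic response model and numbers are illustrative, not test data from the study):

```python
import numpy as np

def southwell_pcr(loads, deflections):
    """Estimate the critical buckling load from sub-critical (P, delta) pairs
    via the Southwell plot: delta/P vs. delta is linear with slope 1/Pcr."""
    p = np.asarray(loads, float)
    d = np.asarray(deflections, float)
    slope, _intercept = np.polyfit(d, d / p, 1)
    return 1.0 / slope

# synthetic imperfect-column response delta = d0 * P / (Pcr - P), Pcr = 100
pcr_true = 100.0
p = np.array([20.0, 35.0, 50.0, 65.0, 80.0])
d = 0.1 * p / (pcr_true - p)
pcr_est = southwell_pcr(p, d)
```

    Because the fit relies on the hyperbolic growth of deflection near Pcr, data cut off at too low a load fraction carry little curvature information, which is one reason the Southwell method needs a higher load cutoff point than the F/S method.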

  6. Assessing the Accuracy of Sentinel-3 SLSTR Sea-Surface Temperature Retrievals Using High Accuracy Infrared Radiometers on Ships of Opportunity

    NASA Astrophysics Data System (ADS)

    Minnett, P. J.; Izaguirre, M. A.; Szcszodrak, M.; Williams, E.; Reynolds, R. M.

    2015-12-01

    The assessment of errors and uncertainties in satellite-derived SSTs can be achieved by comparisons with independent measurements of skin SST of high accuracy. Such validation measurements are provided by well-calibrated infrared radiometers mounted on ships. The second generation of Marine-Atmospheric Emitted Radiance Interferometers (M-AERIs) have recently been developed and two are now deployed on cruise ships of Royal Caribbean Cruise Lines that operate in the Caribbean Sea, North Atlantic and Mediterranean Sea. In addition, two Infrared SST Autonomous Radiometers (ISARs) are mounted alternately on a vehicle transporter of NYK Lines that crosses the Pacific Ocean between Japan and the USA. Both M-AERIs and ISARs are self-calibrating radiometers having two internal blackbody cavities to provide at-sea calibration of the measured radiances, and the accuracy of the internal calibration is periodically determined by measurements of a NIST-traceable blackbody cavity in the laboratory. This provides SI-traceability for the at-sea measurements. It is anticipated that these sensors will be deployed during the next several years and will be available for the validation of the SLSTRs on Sentinel-3a and -3b.

  7. In vitro assessment of the accuracy of extraoral periapical radiography in root length determination

    PubMed Central

    Nazeer, Muhammad Rizwan; Khan, Farhan Raza; Rahman, Munawwar

    2016-01-01

    Objective: To determine the accuracy of extraoral periapical radiography in obtaining root length by comparing it with radiographs obtained using the standard intraoral approach and an extended-distance intraoral approach. Materials and Methods: It was an in vitro comparative study conducted at the dental clinics of Aga Khan University Hospital. ERC exemption was obtained for this work, ref number 3407Sur-ERC-14. We included premolars and molars of a standard phantom head mounted with metal and radiopaque teeth. Exposures were made using three radiographic approaches: standard intraoral, extended-length intraoral and extraoral. Since the unit of analysis was the individual root, we had a total of 24 images. The images were stored in VixWin software. Root lengths were determined using the scale function of the measuring tool built into the software. Data were analyzed using SPSS version 19.0 and GraphPad software. The Pearson correlation coefficient and the Bland-Altman test were applied to determine whether the tooth length readings obtained from the three approaches were correlated. P = 0.05 was taken as the threshold for statistical significance. Results: The correlation between standard intraoral and extended intraoral was 0.97; the correlation between standard intraoral and extraoral was 0.82; and the correlation between extended intraoral and extraoral was 0.76. The Bland-Altman test showed that the average discrepancy between these methods was not large enough to be considered significant. Conclusions: It appears that the extraoral radiographic method can be used for root length determination in subjects where intraoral radiography is not possible. PMID:27011737

  8. Using Generalizability Theory to Examine the Accuracy and Validity of Large-Scale ESL Writing Assessment

    ERIC Educational Resources Information Center

    Huang, Jinyan

    2012-01-01

    Using generalizability (G-) theory, this study examined the accuracy and validity of the writing scores assigned to secondary school ESL students in the provincial English examinations in Canada. The major research question that guided this study was: Are there any differences between the accuracy and construct validity of the analytic scores…

  9. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

    Lamb wave techniques have been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to their multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting position accuracy and shape accuracy, are first identified. Consequently, two quantitative indices, the GVE (group velocity error) and the MACCC (maximum absolute value of cross correlation coefficient), derived from cross correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. In order to apply this proposed method to select appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. Then, the proper element size considering different element types, and the proper time step considering different time integration schemes, are selected. These results show that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
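    The cross-correlation index can be sketched directly: normalize both waveforms, cross-correlate at all lags, and report the peak magnitude (shape agreement) and its lag (position offset). This is a generic sketch of the idea, not the paper's exact implementation:

```python
import numpy as np

def maccc(simulated, reference):
    """Maximum absolute normalized cross-correlation coefficient between a
    simulated and a reference waveform, plus the lag (in samples) at which
    it occurs. The peak value measures shape agreement; the lag measures
    the arrival-time (position) offset."""
    s = np.asarray(simulated, float)
    r = np.asarray(reference, float)
    s = (s - s.mean()) / np.linalg.norm(s - s.mean())
    r = (r - r.mean()) / np.linalg.norm(r - r.mean())
    cc = np.correlate(s, r, mode="full")
    k = int(np.abs(cc).argmax())
    return float(np.abs(cc)[k]), k - (len(r) - 1)

# a synthetic tone-burst "reference" and a delayed copy as the "simulation"
t = np.linspace(0.0, 1.0, 200)
ref = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)
sim = np.roll(ref, 10)  # same shape, delayed by 10 samples
score, lag = maccc(sim, ref)
```

    A score near 1 with a nonzero lag indicates a waveform of correct shape arriving at the wrong time, i.e., a group-velocity (position) error rather than a shape error.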

  10. Use of Selected Goodness-of-Fit Statistics to Assess the Accuracy of a Model of Henry Hagg Lake, Oregon

    NASA Astrophysics Data System (ADS)

    Rounds, S. A.; Sullivan, A. B.

    2004-12-01

    Assessing a model's ability to reproduce field data is a critical step in the modeling process. For any model, some method of determining goodness-of-fit to measured data is needed to aid in calibration and to evaluate model performance. Visualizations and graphical comparisons of model output are an excellent way to begin that assessment. At some point, however, model performance must be quantified. Goodness-of-fit statistics, including the mean error (ME), mean absolute error (MAE), root mean square error, and coefficient of determination, typically are used to measure model accuracy. Statistical tools such as the sign test or Wilcoxon test can be used to test for model bias. The runs test can detect phase errors in simulated time series. Each statistic is useful, but each has its limitations. None provides a complete quantification of model accuracy. In this study, a suite of goodness-of-fit statistics was applied to a model of Henry Hagg Lake in northwest Oregon. Hagg Lake is a man-made reservoir on Scoggins Creek, a tributary to the Tualatin River. Located on the west side of the Portland metropolitan area, the Tualatin Basin is home to more than 450,000 people. Stored water in Hagg Lake helps to meet the agricultural and municipal water needs of that population. Future water demands have caused water managers to plan for a potential expansion of Hagg Lake, doubling its storage to roughly 115,000 acre-feet. A model of the lake was constructed to evaluate the lake's water quality and estimate how that quality might change after raising the dam. The laterally averaged, two-dimensional, U.S. Army Corps of Engineers model CE-QUAL-W2 was used to construct the Hagg Lake model. Calibrated for the years 2000 and 2001 and confirmed with data from 2002 and 2003, modeled parameters included water temperature, ammonia, nitrate, phosphorus, algae, zooplankton, and dissolved oxygen. Several goodness-of-fit statistics were used to quantify model accuracy and bias. Model
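    The first three statistics named above have one-line definitions over the model residuals. A sketch with invented temperature values (not Hagg Lake model output):

```python
import math

def fit_stats(observed, modeled):
    """Mean error (bias), mean absolute error and root mean square error
    of modeled values against observations."""
    resid = [m - o for o, m in zip(observed, modeled)]
    n = len(resid)
    me = sum(resid) / n
    mae = sum(abs(r) for r in resid) / n
    rmse = math.sqrt(sum(r * r for r in resid) / n)
    return me, mae, rmse

# hypothetical daily water temperatures (deg C)
obs = [12.0, 13.5, 15.0, 14.2, 13.0]
mod = [12.4, 13.1, 15.6, 14.0, 13.5]
me, mae, rmse = fit_stats(obs, mod)
```

    The three answer different questions: ME detects systematic bias (signed errors cancel), MAE gives typical error magnitude, and RMSE penalizes large misses more heavily; this is why no single statistic fully quantifies model accuracy.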

  11. Accuracy Assessment of Mobile Mapping Point Clouds Using the Existing Environment as Terrestrial Reference

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Brenner, C.

    2016-06-01

    Mobile mapping data is widely used in various applications, which makes it especially important for data users to get a statistically verified quality statement on the geometric accuracy of the acquired point clouds or their processed products. The accuracy of point clouds can be divided into an absolute and a relative quality, where the absolute quality describes the position of the point cloud in a world coordinate system such as WGS84 or UTM, whereas the relative accuracy describes the accuracy within the point cloud itself. Furthermore, the quality of processed products such as segmented features depends on the global accuracy of the point cloud but mainly on the quality of the processing steps. Several data sources with different characteristics and quality can be considered as potential reference data, such as cadastral maps, orthophotos, artificial control objects, or terrestrial surveys using a total station. In this work a test field in a selected residential area was acquired as reference data in a terrestrial survey using a total station. In order to reach high accuracy, the stationing of the total station was based on a newly established geodetic network with a local accuracy of less than 3 mm. The global position of the network was determined using a long-term GNSS survey, reaching an accuracy of 8 mm. Based on this geodetic network, a 3D test field with facades and street profiles was measured with a total station, each point with a two-dimensional position and altitude. In addition, the surfaces of poles of street lights, traffic signs and trees were acquired using the scanning mode of the total station. By comparing this reference data to the mobile mapping point clouds acquired in several measurement campaigns, a detailed quality statement on the accuracy of the point cloud data is made. Additionally, the advantages and disadvantages of the described reference data source concerning availability, cost, accuracy and applicability are discussed.

  12. Method for improving accuracy of virus titration: standardization of plaque assay for Junin virus.

    PubMed

    Bushar, G; Sagripanti, J L

    1990-10-01

    Titrating infective virus is one of the most important and common techniques in virology. However, after many years of widespread use, the parameters governing the accuracy of titration values are still not well understood. It was found that under conditions currently used for virus titration, only a small percentage of the virus in the inoculum is adsorbed onto the cells and thereby detected in the titration assay. The objective of our work was to establish the conditions for a plaque assay that could estimate the titer of Junin virus more accurately. Two different staining methods were compared and several parameters governing plaque formation were studied. The volume of the inoculum appeared to be the most important factor affecting the observed titer. A linear relationship between the volume of inoculum and the reciprocal apparent titer allowed us to estimate an absolute titer by extrapolation. The approach described here is likely to be applicable to the more accurate estimation of the titer of a wide range of viruses.
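
    The extrapolation described can be sketched as a least-squares fit of the reciprocal apparent titer against inoculum volume, with the intercept at zero volume giving the absolute titer. The linear model form 1/T = a + b·V and all numbers below are our reading of the abstract, not the paper's data:

```python
# Fit 1/titer against inoculum volume; extrapolate to vanishing volume.
def linear_fit(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return intercept, slope

volumes = [0.1, 0.2, 0.4, 0.8]               # inoculum volume, mL (hypothetical)
apparent = [2.00e6, 1.67e6, 1.25e6, 8.33e5]  # observed titer, PFU/mL (hypothetical)
recip = [1.0 / t for t in apparent]

intercept, slope = linear_fit(volumes, recip)
absolute_titer = 1.0 / intercept             # extrapolated titer at zero volume
```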

  13. An investigation of the accuracy of finite difference methods in the solution of linear elasticity problems

    NASA Technical Reports Server (NTRS)

    Bauld, N. R., Jr.; Goree, J. G.

    1983-01-01

    The accuracy of the finite difference method in the solution of linear elasticity problems that involve either a stress discontinuity or a stress singularity is considered. Solutions to three elasticity problems are discussed in detail: a semi-infinite plane subjected to a uniform load over a portion of its boundary; a bimetallic plate under uniform tensile stress; and a long, midplane symmetric, fiber reinforced laminate subjected to uniform axial strain. Finite difference solutions to the three problems are compared with finite element solutions to corresponding problems. For the first problem a comparison with the exact solution is also made. The finite difference formulations for the three problems are based on second order finite difference formulas that provide for variable spacings in two perpendicular directions. Forward and backward difference formulas are used near boundaries where their use eliminates the need for fictitious grid points.

  14. [Research on Accuracy and Stability of Inversing Vegetation Chlorophyll Content by Spectral Index Method].

    PubMed

    Jiang, Hai-ling; Yang, Hang; Chen, Xiao-ping; Wang, Shu-dong; Li, Xue-ke; Liu, Kai; Cen, Yi

    2015-04-01

    Spectral index methods are widely applied to the inversion of crop chlorophyll content. In the present study, a PSR3500 spectrometer and a SPAD-502 chlorophyll meter were used to acquire the spectra and relative chlorophyll content (SPAD value) of winter wheat leaves on May 2nd, 2013, at the jointing stage. The measured spectra were then resampled to simulate TM multispectral data and Hyperion hyperspectral data, respectively, using the Gaussian spectral response function. We chose four typical spectral indices constructed from feature bands sensitive to vegetation chlorophyll: the normalized difference vegetation index (NDVI), the triangle vegetation index (TVI), the ratio of the modified transformed chlorophyll absorption ratio index (MCARI) to the optimized soil adjusted vegetation index (OSAVI) (MCARI/OSAVI), and the vegetation index based on universal pattern decomposition (VIUPD). After calculating these spectral indices from the resampled TM and Hyperion data, regression equations between the spectral indices and chlorophyll content were established. For TM, the results indicate that VIUPD has the best correlation with chlorophyll (R2 = 0.819 7), followed by NDVI (R2 = 0.791 8), while MCARI/OSAVI and TVI also show a good correlation with R2 higher than 0.5. For the simulated Hyperion data, VIUPD again ranks first with R2 = 0.817 1, followed by MCARI/OSAVI (R2 = 0.658 6), while NDVI and TVI show very low values with R2 less than 0.2. It was demonstrated that VIUPD has the best accuracy and stability for estimating chlorophyll of winter wheat whether using simulated TM or Hyperion data, which reaffirms that VIUPD is comparatively sensor independent. The estimation accuracy and stability of MCARI/OSAVI also work well, partly because OSAVI reduces the influence of backgrounds. The two broadband spectral indices NDVI and TVI are weak for chlorophyll estimation from the simulated Hyperion data mainly because of
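
    As a concrete instance of the spectral-index approach, NDVI is computed from red and near-infrared reflectance and regressed against chlorophyll readings. The reflectances and SPAD values below are hypothetical stand-ins, not the study's measurements:

```python
# NDVI = (NIR - Red) / (NIR + Red), then R^2 of a linear fit against SPAD.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

red  = [0.08, 0.06, 0.05, 0.04]   # TM band 3 reflectance (hypothetical)
nir  = [0.40, 0.45, 0.50, 0.55]   # TM band 4 reflectance (hypothetical)
spad = [35.0, 42.0, 48.0, 55.0]   # SPAD chlorophyll readings (hypothetical)

indices = [ndvi(n, r) for n, r in zip(nir, red)]
fit = r_squared(indices, spad)    # strength of the index-chlorophyll relation
```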

  15. An experimental study of the accuracy in measurement of modulation transfer function using an edge method

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Hoon; Kim, Ye-seul; Park, Hye-Suk; Lee, Young-Jin; Kim, Hee-Joung

    2015-03-01

    Image evaluation is necessary in digital radiography (DR), which is widely used in medical imaging. Among image evaluation parameters, the modulation transfer function (MTF) is an important factor in the field of medical imaging and is necessary to obtain the detective quantum efficiency (DQE), which represents the overall signal-to-noise performance of the detector. However, accurate measurement of the MTF is still not easy because of geometric effects, electronic noise, quantum noise, and truncation error. Therefore, in order to improve the accuracy of the MTF, four experimental methods were tested in this study: changing the tube current, applying a smoothing method to the edge spread function (ESF), adjusting the line spread function (LSF) range, and changing the tube angle. Our results showed that fluctuation in the MTF was reduced by a higher tube current and by smoothing. However, the tube current should not exceed detector saturation, and smoothing the ESF causes distortion in the ESF and MTF. In addition, decreasing the LSF range diminished both the fluctuation and the number of samples in the MTF, and a high tube angle degrades the MTF. Based on these results, excessively low tube current and the smoothing method should be avoided. Also, an optimal LSF range balancing the reduction of fluctuation against the number of samples in the MTF is necessary, and a precise tube angle is essential to obtain an accurate MTF. In conclusion, our results demonstrate the conditions under which an accurate MTF can be acquired.
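
    The ESF-to-MTF chain described here (edge profile, derivative to obtain the LSF, Fourier transform, normalization at zero frequency) can be sketched with a synthetic Gaussian-blurred edge standing in for detector data:

```python
import math
import numpy as np

# Edge method in miniature: a Gaussian-blurred step edge is the ESF; its
# derivative is the LSF; the normalized FFT magnitude of the LSF is the MTF.
x = np.linspace(-5.0, 5.0, 512)   # position across the edge, mm
sigma = 0.5                       # edge blur, mm (synthetic)
esf = np.array([0.5 * (1 + math.erf(xi / (sigma * math.sqrt(2)))) for xi in x])

lsf = np.gradient(esf, x)         # line spread function
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                     # normalize so MTF(0) = 1
```

    Smoothing the ESF before differentiation reduces noise-driven fluctuation in a real measurement but, as the study notes, also distorts the ESF and hence the MTF.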

  16. Designing a Multi-Objective Multi-Support Accuracy Assessment of the 2001 National Land Cover Data (NLCD 2001) of the Conterminous United States

    EPA Science Inventory

    The database design and diverse application of NLCD 2001 pose significant challenges for accuracy assessment because numerous objectives are of interest, including accuracy of land cover, percent urban imperviousness, percent tree canopy, land-cover composition, and net change. ...

  17. Accuracy of the third molar index for assessing the legal majority of 18 years in Turkish population.

    PubMed

    Gulsahi, Ayse; De Luca, Stefano; Cehreli, S Burcak; Tirali, R Ebru; Cameriere, Roberto

    2016-09-01

    In the last few years, forced and unregistered child marriage has increased widely in Turkey. The aim of this study was to test the accuracy of the cut-off value of 0.08 for the third molar index (I3M) in assessing the legal adult age of 18 years. Digital panoramic images of 293 Turkish children and young adults (165 girls and 128 boys), aged between 14 and 22 years, were analysed. Age distribution gradually decreases as I3M increases in both girls and boys. For girls, the sensitivity was 85.9% (95% CI 77.1-92.8%) and the specificity was 100%. The proportion of correctly classified individuals was 92.7%. For boys, the sensitivity was 94.6% (95% CI 88.1-99.8%) and the specificity was 100%. The proportion of correctly classified individuals was 97.6%. The cut-off value of 0.08 is a useful method for assessing whether a subject is older than 18 years of age.
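
    Sensitivity, specificity, and the correctly-classified proportion come straight from the 2×2 table of I3M calls against true age. The counts below are invented to show the arithmetic, not the study's table:

```python
# tp: adults flagged as adults (I3M < 0.08), fn: adults missed,
# tn: minors correctly passed, fp: minors wrongly flagged as adults.
def binary_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

sens, spec, acc = binary_metrics(tp=79, fn=13, tn=73, fp=0)
```

    A specificity of 100% corresponds to fp = 0: in these studies no minor was wrongly classified as an adult, which is the legally critical error.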

  19. Accuracy assessment of building point clouds automatically generated from iphone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud against a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone-generated point clouds. For the chosen example showcase, we classified 1.23% of the iPhone point cloud points as outliers, and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate the possible use of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancement, and for quick and real-time change detection. However, further insight is needed first into the circumstances required to guarantee successful point cloud generation from smartphone images.
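
    The cloud-to-cloud comparison described here reduces to nearest-neighbour distances between a test cloud and a reference cloud, with a distance threshold separating outliers. The two random clouds below are stand-ins for the iPhone and TLS data, not the paper's point sets:

```python
import numpy as np

# For each point in the test cloud, distance to its nearest reference point;
# points beyond a threshold are flagged as outliers.
rng = np.random.default_rng(0)
reference = rng.uniform(0, 10, size=(500, 3))           # "TLS" cloud, metres
test = reference[:400] + rng.normal(0, 0.05, (400, 3))  # noisy "iPhone" cloud

diffs = test[:, None, :] - reference[None, :, :]        # shape (400, 500, 3)
nn_dist = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)

threshold = 0.5                                         # metres
outlier_ratio = (nn_dist > threshold).mean()
mean_c2c = nn_dist[nn_dist <= threshold].mean()         # mean cloud-to-cloud distance
```

    The brute-force distance matrix is fine at this size; a k-d tree would replace it for clouds of realistic density.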

  20. Accuracy of Cameriere's third molar maturity index in assessing legal adulthood on Serbian population.

    PubMed

    Zelic, Ksenija; Galic, Ivan; Nedeljkovic, Nenad; Jakovljevic, Aleksandar; Milosevic, Olga; Djuric, Marija; Cameriere, Roberto

    2016-02-01

    At the moment, a large number of asylum seekers from the Middle East are passing through Serbia. Most of them do not have identification documents. Also, the past wars in the Balkan region have left many unidentified victims and missing persons. From a legal point of view, it is crucial to determine whether a person is a minor or an adult (≥18 years of age). In recent years, methods based on the third molar development have been used for this purpose. The present article aims to verify the third molar maturity index (I3M) based on the correlation between the chronological age and normalized measures of the open apices and height of the third mandibular molar. The sample consisted of 598 panoramic radiographs (290 males and 299 females) from 13 to 24 years of age. The cut-off value of I3M=0.08 was used to discriminate adults and minors. The results demonstrated high sensitivity (0.96, 0.86) and specificity (0.94, 0.98) in males and females, respectively. The proportion of correctly classified individuals was 0.95 in males and 0.91 in females. In conclusion, the suggested value of I3M=0.08 can be used on Serbian population with high accuracy.

  2. Assessment of Completeness and Positional Accuracy of Linear Features in Volunteered Geographic Information (vgi)

    NASA Astrophysics Data System (ADS)

    Eshghi, M.; Alesheikh, A. A.

    2015-12-01

    Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, and its dissemination on the internet, has led to a huge source of spatial data termed Volunteered Geographic Information (VGI) by Mike Goodchild. Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and challengeable. This presents several challenges to potential end users who are concerned about the validation and quality assurance of the collected data. Existing research assesses VGI quality either by (a) comparing the VGI data with accurate official data, or (b) where no such reference data are available, seeking alternative indicators of the quality of the VGI data. In this paper we attempt to develop a useful method toward this goal. In this process, the positional accuracy of linear features in the OSM data of Tehran, Iran has been analyzed.
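
    A common positional-accuracy measure for linear features, in the spirit of this comparison, is the share of digitized vertices lying within a buffer distance of the reference polyline. The coordinates and tolerance below are made up for illustration:

```python
import math

# Distance from a point to a polyline, via per-segment projection.
def point_segment_dist(p, a, b):
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def point_line_dist(p, line):
    return min(point_segment_dist(p, a, b) for a, b in zip(line, line[1:]))

reference = [(0.0, 0.0), (50.0, 0.0), (100.0, 10.0)]               # surveyed axis, m
digitized = [(0.0, 1.0), (25.0, -2.0), (60.0, 1.5), (98.0, 16.0)]  # VGI trace, m

tolerance = 5.0  # buffer half-width, metres
within = sum(point_line_dist(p, reference) <= tolerance for p in digitized)
share_within = within / len(digitized)
```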

  3. How Nonrecidivism Affects Predictive Accuracy: Evidence from a Cross-Validation of the Ontario Domestic Assault Risk Assessment (ODARA)

    ERIC Educational Resources Information Center

    Hilton, N. Zoe; Harris, Grant T.

    2009-01-01

    Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…

  4. A TECHNIQUE FOR ASSESSING THE ACCURACY OF SUB-PIXEL IMPERVIOUS SURFACE ESTIMATES DERIVED FROM LANDSAT TM IMAGERY

    EPA Science Inventory

    We developed a technique for assessing the accuracy of sub-pixel derived estimates of impervious surface extracted from LANDSAT TM imagery. We utilized spatially coincident sub-pixel derived impervious surface estimates, high-resolution planimetric GIS data, vector-to-r...

  5. Diagnostic Accuracy of Computer-Aided Assessment of Intranodal Vascularity in Distinguishing Different Causes of Cervical Lymphadenopathy.

    PubMed

    Ying, Michael; Cheng, Sammy C H; Ahuja, Anil T

    2016-08-01

    Ultrasound is useful in assessing cervical lymphadenopathy. Advancement of computer science technology allows accurate and reliable assessment of medical images. The aim of the study described here was to evaluate the diagnostic accuracy of computer-aided assessment of the intranodal vascularity index (VI) in differentiating the various common causes of cervical lymphadenopathy. Power Doppler sonograms of 347 patients (155 with metastasis, 23 with lymphoma, 44 with tuberculous lymphadenitis, 125 reactive) with palpable cervical lymph nodes were reviewed. Ultrasound images of cervical nodes were evaluated, and the intranodal VI was quantified using a customized computer program. The diagnostic accuracy of using the intranodal VI to distinguish different disease groups was evaluated and compared. Metastatic and lymphomatous lymph nodes tend to be more vascular than tuberculous and reactive lymph nodes. The intranodal VI had the highest diagnostic accuracy in distinguishing metastatic and tuberculous nodes with a sensitivity of 80%, specificity of 73%, positive predictive value of 91%, negative predictive value of 51% and overall accuracy of 68% when a cutoff VI of 22% was used. Computer-aided assessment provides an objective and quantitative way to evaluate intranodal vascularity. The intranodal VI is a useful parameter in distinguishing certain causes of cervical lymphadenopathy and is particularly useful in differentiating metastatic and tuberculous lymph nodes. However, it has limited value in distinguishing lymphomatous nodes from metastatic and reactive nodes.
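
    Applying a vascularity-index cut-off to two diagnostic groups and deriving the indices quoted above is a small calculation; the VI values here are invented stand-ins, not the study's measurements:

```python
# Intranodal vascularity index (VI, %) per node, by true diagnosis (hypothetical).
metastatic  = [35.0, 28.0, 45.0, 19.0, 30.0, 52.0, 24.0, 16.0]
tuberculous = [10.0, 18.0, 25.0, 8.0, 14.0, 12.0]

cutoff = 22.0                                  # VI at or above -> call metastatic
tp = sum(v >= cutoff for v in metastatic)      # metastatic correctly called
fn = len(metastatic) - tp
fp = sum(v >= cutoff for v in tuberculous)     # tuberculous wrongly called
tn = len(tuberculous) - fp

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                           # positive predictive value
npv = tn / (tn + fn)                           # negative predictive value
```

    Sweeping the cut-off over a grid and plotting sensitivity against 1 - specificity would give the ROC curve from which such an operating point is usually chosen.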

  6. Classification Accuracy of Oral Reading Fluency and Maze in Predicting Performance on Large-Scale Reading Assessments

    ERIC Educational Resources Information Center

    Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria

    2014-01-01

    The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…

  7. An innovative high accuracy autonomous navigation method for the Mars rovers

    NASA Astrophysics Data System (ADS)

    Guan, Xujun; Wang, Xinlong; Fang, Jiancheng; Feng, Shaojun

    2014-11-01

    Autonomous navigation is an important function for a Mars rover to fulfill its missions successfully. It is a critical technique for overcoming the limitations of the ground tracking and control traditionally used. This paper proposes an innovative method based on a SINS (Strapdown Inertial Navigation System) aided by star sensors to accurately determine the rover's position and attitude. The method consists of two parts: initial alignment and navigation. The alignment comprises a coarse initial alignment approach, which determines an approximate position and attitude for the rover, followed by a fine alignment that tunes the approximate solution into an accurate one. Upon completion of the initial alignment, the system can provide real-time navigation solutions for the rover. An autonomous navigation algorithm is proposed to estimate and compensate for the accumulated errors of the SINS in real time; high-accuracy attitude information from the star sensor is used to correct errors in the SINS. Simulation results demonstrate that the proposed methods can achieve high-precision autonomous navigation for Mars rovers.

  8. Accuracy of methods for calculating volumetric wear from coordinate measuring machine data of retrieved metal-on-metal hip joint implants.

    PubMed

    Lu, Zhen; McKellop, Harry A

    2014-03-01

    This study compared the accuracy and sensitivity of several numerical methods employing spherical or plane triangles for calculating the volumetric wear of retrieved metal-on-metal hip joint implants from coordinate measuring machine measurements. Five methods, one using spherical triangles and four using plane triangles to represent the bearing and the best-fit surfaces, were assessed and compared on a perfect hemisphere model and a hemi-ellipsoid model (i.e. unworn models), computer-generated wear models and wear-tested femoral balls, with point spacings of 0.5, 1, 2 and 3 mm. The results showed that the algorithm (Method 1) employing spherical triangles to represent the bearing surface and to scale the mesh to the best-fit surfaces produced adequate accuracy for the wear volume with point spacings of 0.5, 1, 2 and 3 mm. The algorithms (Methods 2-4) using plane triangles to represent the bearing surface and to scale the mesh to the best-fit surface also produced accuracies that were comparable to that with spherical triangles. In contrast, if the bearing surface was represented with a mesh of plane triangles and the best-fit surface was taken as a smooth surface without discretization (Method 5), the algorithm produced much lower accuracy with a point spacing of 0.5 mm than Methods 1-4 with a point spacing of 3 mm. PMID:24531891
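
    The flavour of the spherical-triangle approach can be illustrated by integrating the radial deviation from the best-fit sphere over the solid angle of each mesh triangle. This is our sketch of the idea, not the paper's algorithm; the solid angle uses the Van Oosterom-Strackee formula, and the mesh and radii are synthetic:

```python
import numpy as np

def solid_angle(a, b, c):
    """Solid angle subtended at the origin by the triangle with vertices a, b, c."""
    num = np.dot(a, np.cross(b, c))
    na, nb, nc = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
    den = (na * nb * nc + np.dot(a, b) * nc
           + np.dot(a, c) * nb + np.dot(b, c) * na)
    return 2.0 * np.arctan2(num, den)

# One octant of a femoral head treated as uniformly worn by 0.1 mm.
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = np.array([0.0, 0.0, 1.0])
omega = solid_angle(A, B, C)       # pi/2 steradians for a full octant

R_fit, r_worn = 14.0, 13.9         # best-fit vs worn radius, mm (hypothetical)
wear_volume = omega / 3.0 * (R_fit ** 3 - r_worn ** 3)  # mm^3 over this patch
```

    Summing such contributions over a triangulation of the bearing surface, with a per-triangle worn radius, gives the total volumetric wear.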

  10. Structural health monitoring ultrasonic thickness measurement accuracy and reliability of various time-of-flight calculation methods

    NASA Astrophysics Data System (ADS)

    Eason, Thomas J.; Bond, Leonard J.; Lozev, Mark G.

    2016-02-01

    The accuracy, precision, and reliability of ultrasonic thickness structural health monitoring systems are discussed, including the influence of systematic and environmental factors. To quantify some of these factors, a compression-wave ultrasonic thickness structural health monitoring experiment was conducted on a flat calibration block at ambient temperature with forty-four thin-film sol-gel transducers and various time-of-flight thickness calculation methods. As an initial calibration, the voltage response signals from each sensor are used to determine the common material velocity as well as the signal offset unique to each calculation method. Next, the measurement precision of the thickness error of each method is determined with a proposed weighted censored relative maximum likelihood analysis technique incorporating the propagation of asymmetric measurement uncertainty. The results are presented as upper and lower confidence limits analogous to the a90/95 terminology used in industry-recognized Probability-of-Detection assessments. Future work is proposed to apply the statistical analysis technique to quantify the measurement precision of various thickness calculation methods under different environmental conditions such as high temperature, rough back-wall surfaces, and system degradation, with an intended application of monitoring naphthenic acid corrosion in oil refineries.
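
    The underlying time-of-flight calculation is d = v·t/2, with the inter-echo delay commonly estimated from the autocorrelation of the pulse-echo signal. The sketch below uses a synthetic two-echo signal and a typical steel velocity; it is one of several possible time-of-flight methods, not the paper's specific algorithm:

```python
import numpy as np

fs = 100e6                       # sample rate, Hz
v = 5900.0                       # compression wave speed in steel, m/s (typical)
thickness_true = 0.010           # 10 mm plate (synthetic)
tof = 2 * thickness_true / v     # round-trip delay between back-wall echoes

t = np.arange(0, 2e-5, 1 / fs)
def echo(t0):
    # Gaussian-windowed 5 MHz tone burst centred at t0.
    return np.exp(-((t - t0) / 2e-7) ** 2) * np.cos(2 * np.pi * 5e6 * (t - t0))

signal = echo(2e-6) + 0.6 * echo(2e-6 + tof)

corr = np.correlate(signal, signal, mode="full")
acf = corr[len(signal) - 1:]                 # autocorrelation, non-negative lags
lag = np.argmax(acf[100:]) + 100             # strongest peak away from zero lag
thickness_est = v * (lag / fs) / 2           # d = v * t / 2
```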

  11. Geostatistical radar-raingauge merging: A novel method for the quantification of rain estimation accuracy

    NASA Astrophysics Data System (ADS)

    Delrieu, Guy; Wijbrans, Annette; Boudevillain, Brice; Faure, Dominique; Bonnifait, Laurent; Kirstetter, Pierre-Emmanuel

    2014-09-01

    Compared to other estimation techniques, one advantage of geostatistical techniques is that they provide an index of the estimation accuracy of the variable of interest through the kriging estimation standard deviation (ESD). In the context of radar-raingauge quantitative precipitation estimation (QPE), we address in this article the question of how the kriging ESD can be transformed into a local spread of error by using the dependency of radar errors on the rain amount analyzed in previous work. The proposed approach is implemented for the most significant rain events observed in 2008 in the Cévennes-Vivarais region, France, considering both the kriging with external drift (KED) and the ordinary kriging (OK) methods. A two-step procedure is implemented for estimating the rain estimation accuracy: (i) first, normalized kriging ESDs are computed by using normalized variograms (sill equal to 1) to account for the observation system configuration and the spatial structure of the variable of interest (rainfall amount, residuals to the drift); (ii) then, based on the assumption of a linear relationship between the standard deviation and the mean of the variable of interest, a denormalization of the kriging ESDs is performed globally for a given rain event by using a cross-validation procedure. Despite the fact that the KED normalized ESDs are usually greater than the OK ones (due to an additional constraint in the kriging system and a weaker spatial structure of the residuals to the drift), the KED denormalized ESDs are generally smaller than the OK ones, a result consistent with the better performance observed for the KED technique. The evolution of the mean and the standard deviation of the rainfall-scaled ESDs over a range of spatial (5-300 km2) and temporal (1-6 h) scales demonstrates a clear added value of the radar with respect to the raingauge network at the shortest scales, which are those of interest for flash-flood prediction in the considered region.
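
    The kriging ESD at the heart of this procedure falls out of the ordinary kriging system itself. A toy example with an exponential, unit-sill variogram, corresponding to the "normalized" case of step (i); the gauge layout, values, and variogram parameters are invented:

```python
import numpy as np

def gamma(h, sill=1.0, rng_km=20.0):
    """Exponential variogram; unit sill gives the normalized ESD."""
    return sill * (1.0 - np.exp(-h / rng_km))

gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 12.0], [8.0, 9.0]])  # km
values = np.array([4.0, 6.0, 5.0, 7.0])                                # mm
target = np.array([5.0, 5.0])                                          # km

n = len(gauges)
A = np.ones((n + 1, n + 1))           # OK system: variogram block + unbiasedness row
A[n, n] = 0.0
for i in range(n):
    for j in range(n):
        A[i, j] = gamma(np.linalg.norm(gauges[i] - gauges[j]))
b = np.ones(n + 1)
b[:n] = gamma(np.linalg.norm(gauges - target, axis=1))

sol = np.linalg.solve(A, b)
weights, mu = sol[:n], sol[n]          # kriging weights, Lagrange multiplier
estimate = weights @ values
esd = np.sqrt(b[:n] @ weights + mu)    # kriging variance = sum(w_i * g_i0) + mu
```

    Step (ii) of the paper would then rescale this normalized ESD by an event-specific factor derived from the linear sigma-versus-mean relationship fitted in cross-validation.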

  12. Accuracy, precision, and method detection limits of quantitative PCR for airborne bacteria and fungi.

    PubMed

    Hospodsky, Denina; Yamamoto, Naomichi; Peccia, Jordan

    2010-11-01

    Real-time quantitative PCR (qPCR) for rapid and specific enumeration of microbial agents is finding increased use in aerosol science. The goal of this study was to determine qPCR accuracy, precision, and method detection limits (MDLs) within the context of indoor and ambient aerosol samples. Escherichia coli and Bacillus atrophaeus vegetative bacterial cells and Aspergillus fumigatus fungal spores loaded onto aerosol filters were considered. Efficiencies associated with recovery of DNA from aerosol filters were low, and excluding these efficiencies in quantitative analysis led to underestimating the true aerosol concentration by 10 to 24 times. Precision near detection limits ranged from a 28% to 79% coefficient of variation (COV) for the three test organisms, and the majority of this variation was due to instrument repeatability. Depending on the organism and sampling filter material, precision results suggest that qPCR is useful for determining dissimilarity between two samples only if the true differences are greater than 1.3 to 3.2 times (95% confidence level at n = 7 replicates). For MDLs, qPCR was able to produce a positive response with 99% confidence from the DNA of five B. atrophaeus cells and less than one A. fumigatus spore. Overall MDL values that included sample processing efficiencies ranged from 2,000 to 3,000 B. atrophaeus cells per filter and 10 to 25 A. fumigatus spores per filter. Applying the concepts of accuracy, precision, and MDL to qPCR aerosol measurements demonstrates that sample processing efficiencies must be accounted for in order to accurately estimate bioaerosol exposure, provides guidance on the necessary statistical rigor required to understand significant differences among separate aerosol samples, and prevents undetected (i.e., nonquantifiable) values for true aerosol concentrations that may be significant.
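
    The underestimation described above is simply the reciprocal of the neglected recovery efficiency: dividing the detected copy number by the overall processing efficiency recovers the true loading. The numbers below are hypothetical, chosen only to land inside the 10- to 24-fold range reported:

```python
# Efficiency-corrected bioaerosol concentration from a qPCR count.
measured_copies = 1.2e4       # genome copies detected per filter (hypothetical)
recovery_efficiency = 0.08    # fraction of DNA surviving recovery (hypothetical)
air_volume_m3 = 1.5           # air drawn through the filter, m^3 (hypothetical)

corrected = measured_copies / recovery_efficiency
concentration = corrected / air_volume_m3       # copies per m^3 of air
understatement = corrected / measured_copies    # factor lost if uncorrected
```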

  13. Multinomial tree models for assessing the status of the reference in studies of the accuracy of tools for binary classification

    PubMed Central

    Botella, Juan; Huang, Huiling; Suero, Manuel

    2013-01-01

    Studies that evaluate the accuracy of binary classification tools are needed. Such studies provide 2 × 2 cross-classifications of test outcomes and the categories according to an unquestionable reference (or gold standard). However, sometimes a suboptimal reliability reference is employed. Several methods have been proposed to deal with studies where the observations are cross-classified with an imperfect reference. These methods require that the status of the reference, as a gold standard or as an imperfect reference, is known. In this paper a procedure for determining whether it is appropriate to maintain the assumption that the reference is a gold standard or an imperfect reference, is proposed. This procedure fits two nested multinomial tree models, and assesses and compares their absolute and incremental fit. Its implementation requires the availability of the results of several independent studies. These should be carried out using similar designs to provide frequencies of cross-classification between a test and the reference under investigation. The procedure is applied in two examples with real data. PMID:24106484
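
    Comparing the absolute and incremental fit of two nested multinomial models typically rests on the likelihood-ratio statistic G² = 2·Σ obs·ln(obs/exp), referred to a chi-square distribution with df equal to the difference in free parameters. A miniature example with invented counts and expected frequencies (not the paper's tree models):

```python
import math

def g2(observed, expected):
    """Likelihood-ratio statistic for a multinomial fit."""
    return 2 * sum(o * math.log(o / e) for o, e in zip(observed, expected) if o > 0)

obs = [40, 10, 8, 42]   # 2x2 test-by-reference cross-classification counts
n = sum(obs)

# Restricted model: reference treated as a gold standard (fewer parameters).
exp_restricted = [n * p for p in (0.45, 0.05, 0.05, 0.45)]
# General model: extra parameter allowing reference misclassification.
exp_general = [n * p for p in (0.40, 0.10, 0.08, 0.42)]

delta_g2 = g2(obs, exp_restricted) - g2(obs, exp_general)
# Compare delta_g2 against the chi-square critical value (3.84 for 1 df at .05):
# a significant difference argues against maintaining the gold-standard assumption.
```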

  15. Accuracy of standard measures of family planning service quality: findings from the simulated client method.

    PubMed

    Tumlinson, Katherine; Speizer, Ilene S; Curtis, Siân L; Pence, Brian W

    2014-12-01

    In the field of international family planning, quality of care as a reproductive right is widely endorsed, yet we lack validated data-collection instruments that can accurately assess quality in terms of its public health importance. This study, conducted within 19 public and private facilities in Kisumu, Kenya, used the simulated client method to test the validity of three standard data-collection instruments used in large-scale facility surveys: provider interviews, client interviews, and observation of client-provider interactions. Results found low specificity and low positive predictive values in each of the three instruments for a number of quality indicators, suggesting that the quality of care provided may be overestimated by traditional methods of measurement. Revised approaches to measuring family planning service quality may be needed to ensure accurate assessment of programs and to better inform quality-improvement interventions.
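
    The validity measures the study reports (specificity, positive predictive value) come from a 2 × 2 cross-classification of each instrument against the simulated-client reference. A minimal sketch with hypothetical cell counts:

```python
def validation_metrics(tp, fp, fn, tn):
    # 2x2 agreement between an instrument (test) and the
    # simulated-client reference for one binary quality indicator.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for one quality indicator across facilities.
m = validation_metrics(tp=40, fp=10, fn=5, tn=45)
```

    Low specificity and low PPV, as found here, mean the instrument frequently reports good-quality care that the reference method did not observe.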

  17. High-accuracy measurement of low-water-content in liquid using NIR spectral absorption method

    NASA Astrophysics Data System (ADS)

    Peng, Bao-Jin; Wan, Xu; Jin, Hong-Zhen; Zhao, Yong; Mao, He-Fa

    2005-01-01

    Water content measurement technologies are very important for quality inspection of food, medicine, chemical products and many other industrial fields. In recent years, demand for accurate low-water-content measurement in liquids has become increasingly urgent, and great interest has been shown in the related research and experimental work. With the development and advancement of modern production and control technologies, more accurate water content measurement is needed. In this paper, a novel experimental setup based on near-infrared (NIR) spectral technology and a fiber-optic sensor (OFS) is presented. It achieves a measurement accuracy of about ±0.01%, which, to our knowledge, is better than that of most other published methods. It has a high measurement resolution of 0.001% over the range from zero to 0.05% for water-in-alcohol measurement, and water-in-oil measurement is carried out as well. In addition, the method does not contaminate the measured liquid and provides fast measurement.

  18. A Method to Improve the Accuracy of Particle Diameter Measurements from Shadowgraph Images

    NASA Astrophysics Data System (ADS)

    Erinin, Martin A.; Wang, Dan; Liu, Xinan; Duncan, James H.

    2015-11-01

    A method to improve the accuracy of the measurement of the diameter of particles using shadowgraph images is discussed. To obtain data for analysis, a transparent glass calibration reticle, marked with black circular dots of known diameters, is imaged with a high-resolution digital camera using backlighting separately from both a collimated laser beam and diffuse white light. The diameter and intensity of each dot is measured by fitting an inverse hyperbolic tangent function to the particle image intensity map. Using these calibration measurements, a relationship between the apparent diameter and intensity of the dot and its actual diameter and position relative to the focal plane of the lens is determined. It is found that the intensity decreases and apparent diameter increases/decreases (for collimated/diffuse light) with increasing distance from the focal plane. Using the relationships between the measured properties of each dot and its actual size and position, an experimental calibration method has been developed to increase the particle-diameter-dependent range of distances from the focal plane for which accurate particle diameter measurements can be made. The support of the National Science Foundation under grant OCE0751853 from the Division of Ocean Sciences is gratefully acknowledged.
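
    The calibration relies on fitting a sigmoidal function to the intensity profile across each dot's edge (the abstract describes fitting an inverse hyperbolic tangent to the intensity map; the equivalent tanh edge form is used below since arctanh of the normalized intensity is then linear in position). This is a one-dimensional sketch with synthetic data, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def edge_profile(r, i_bg, radius, width):
    # Intensity across a dark dot's edge: ~0 inside the dot, i_bg in
    # the background, with a smooth transition of characteristic width.
    return 0.5 * i_bg * (1.0 + np.tanh((r - radius) / width))

# Synthetic radial profile: a dot of radius 10 px imaged with noise.
r = np.linspace(0.0, 20.0, 200)
truth = edge_profile(r, 255.0, 10.0, 1.5)
noisy = truth + np.random.default_rng(0).normal(0.0, 2.0, r.size)

popt, _ = curve_fit(edge_profile, r, noisy, p0=(200.0, 8.0, 1.0))
diameter = 2.0 * popt[1]   # fitted radius -> apparent diameter
```

    In the actual method, the fitted apparent diameter and intensity are then mapped to the true diameter and defocus distance via the reticle calibration.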

  19. Improved Accuracy of the Inherent Shrinkage Method for Fast and More Reliable Welding Distortion Calculations

    NASA Astrophysics Data System (ADS)

    Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.

    2016-07-01

    This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.

  20. Impedance cardiography using the Sramek-Bernstein method: accuracy and variability at rest and during exercise.

    PubMed Central

    Thomas, S H

    1992-01-01

    1. Sramek and Bernstein's method of impedance cardiography is a simple, non-invasive and inexpensive computerised way of measuring stroke volume and systolic time intervals. In this study, measurements made using the method were compared with those obtained simultaneously using established reference techniques. 2. In healthy volunteers there was no significant bias (d) and narrow 95% limits of agreement (d ± 2s) when impedance and mechanophonocardiographic measurements of pre-ejection period (PEP, d = 0.3, d + 2s = 7.3, d - 2s = -6.6 ms), ventricular ejection time (VET, d = 1.5, d + 2s = 17.7, d - 2s = -14.6 ms) and the PEP/VET ratio were compared. 3. In critically ill patients there was moderate agreement between impedance and thermodilution measurements of stroke volume (d = 8.1 (P < 0.05), d + 2s = 35.5, d - 2s = -19.4 ml), and drug-induced changes in stroke volume were accurately detected. 4. In healthy volunteers agreement between impedance and dye dilution measurements of stroke volume was moderate, and similar at rest and during exercise (d = 3.4, d - 2s = -31.1, d + 2s = 37.9 ml); however, impedance underestimated exercise-induced increases in stroke volume (P < 0.05). 5. In patients with coronary heart disease, the impedance measurements that correlated with angiographic left ventricular ejection fraction included the PEP/VET ratio (r = -0.81), stroke volume index (r = 0.65) and the Heather index (r = 0.58, all P < 0.001); however, the PEP/VET ratio could not be used to estimate the left ventricular ejection fraction with sufficient accuracy. 6. This impedance method provides reproducible semi-quantitative measurements of cardiac performance and blood flow. Its use for making pharmacodynamic measurements can be justified when invasive methods are considered inappropriate. PMID:1493078
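
    The bias d and limits of agreement d ± 2s used throughout this abstract are the Bland-Altman statistics for comparing two measurement methods. A minimal sketch with hypothetical paired stroke-volume readings:

```python
import statistics

def limits_of_agreement(method_a, method_b):
    # Bland-Altman style comparison: bias d is the mean paired
    # difference; the 95% limits of agreement are d +/- 2s, where s is
    # the standard deviation of the differences.
    diffs = [a - b for a, b in zip(method_a, method_b)]
    d = statistics.mean(diffs)
    s = statistics.stdev(diffs)
    return d, d - 2.0 * s, d + 2.0 * s

# Hypothetical paired readings from two methods (ml).
bias, lower, upper = limits_of_agreement([11.0, 13.0, 16.0], [10.0, 11.0, 13.0])
```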

  1. A Method of Determining Accuracy and Precision for Dosimeter Systems Using Accreditation Data

    SciTech Connect

    Rick Cummings and John Flood

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively.
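
    The core of the technique is a Model II (random-effects) one-way ANOVA, which separates between-group variance (e.g. across irradiation sessions) from within-group repeatability variance in the accreditation test results. The sketch below shows the balanced-design variance-component estimates under that assumption; it is a generic illustration, not the paper's analysis code.

```python
def variance_components(groups):
    # One-way random-effects (Model II) ANOVA for a balanced design:
    # MS_within estimates the repeatability variance; MS_between
    # estimates sigma_within^2 + n * sigma_between^2.
    k = len(groups)
    n = len(groups[0])            # assume every group has n observations
    grand = sum(sum(g) for g in groups) / (k * n)
    ms_between = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum((x - sum(g) / n) ** 2 for g in groups for x in g) / (k * (n - 1))
    var_within = ms_within
    var_between = max(0.0, (ms_between - ms_within) / n)
    return var_between, var_within

# Hypothetical dose-response ratios from two test groups.
vb, vw = variance_components([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```

    The combined standard deviation sqrt(vb + vw) is what feeds an expanded-uncertainty statement at the 95% confidence level.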

  3. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.

  4. Effects of tangential-type boundary condition discontinuities on the accuracy of the lattice Boltzmann method for heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Li, Like; AuYeung, Nick; Mei, Renwei; Klausner, James F.

    2016-08-01

    We present a systematic study on the effects of tangential-type boundary condition discontinuities on the accuracy of the lattice Boltzmann equation (LBE) method for Dirichlet and Neumann problems in heat and mass transfer modeling. The second-order accurate boundary condition treatments for continuous Dirichlet and Neumann problems are directly implemented for the corresponding discontinuous boundary conditions. Results from three numerical tests, including both straight and curved boundaries, are presented to show the accuracy and order of convergence of the LBE computations. Detailed error assessments are conducted for the interior temperature or concentration (denoted as a scalar ϕ) and the interior derivatives of ϕ for both types of boundary conditions, for the boundary flux in the Dirichlet problem and for the boundary ϕ values in the Neumann problem. When the discontinuity point on the straight boundary is placed at the center of the unit lattice in the Dirichlet problem, it yields only first-order accuracy for the interior distribution of ϕ, first-order accuracy for the boundary flux, and zeroth-order accuracy for the interior derivatives compared with the second-order accuracy of all quantities of interest for continuous boundary conditions. On the lattice scale, the LBE solution for the interior derivatives near the singularity is largely independent of the resolution and correspondingly the local distribution of the absolute errors is almost invariant with the changing resolution. For Neumann problems, when the discontinuity is placed at the lattice center, second-order accuracy is preserved for the interior distribution of ϕ; and a "superlinear" convergence order of 1.5 for the boundary ϕ values and first-order accuracy for the interior derivatives are obtained. For straight boundaries with the discontinuity point arbitrarily placed within the lattice and curved boundaries, the boundary flux becomes zeroth-order accurate for Dirichlet problems
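
    The convergence orders quoted here (second order, first order, a "superlinear" 1.5) are the standard observed orders computed from errors on successively refined lattices. A minimal sketch of that calculation, with hypothetical error values:

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    # If the error behaves like e ~ C * h^p, then comparing two grids
    # whose spacings differ by `refinement` gives
    # p = log(e_coarse / e_fine) / log(refinement).
    return math.log(err_coarse / err_fine) / math.log(refinement)

# Hypothetical: halving the lattice spacing cuts the error 4x -> p = 2.
p = observed_order(0.04, 0.01)
```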

  5. Development of a Haptic Elbow Spasticity Simulator (HESS) for Improving Accuracy and Reliability of Clinical Assessment of Spasticity

    PubMed Central

    Park, Hyung-Soon; Kim, Jonghyun; Damiano, Diane L.

    2013-01-01

    This paper presents the framework for developing a robotic system to improve the accuracy and reliability of clinical assessment. Clinical assessment of spasticity tends to have poor reliability because of the nature of in-person assessment. To improve the accuracy and reliability of spasticity assessment, a haptic device named the HESS (Haptic Elbow Spasticity Simulator) has been designed and constructed to recreate the clinical “feel” of elbow spasticity based on quantitative measurements. A mathematical model representing the spastic elbow joint was proposed based on clinical assessment using the Modified Ashworth Scale (MAS) and quantitative data (position, velocity, and torque) collected on subjects with elbow spasticity. Four haptic models (HMs) were created to represent the haptic feel of MAS 1, 1+, 2, and 3. The four HMs were assessed by experienced clinicians: three performed both in-person and haptic assessments and had 100% agreement in MAS scores, and eight clinicians experienced with the MAS assessed the four HMs without receiving any training prior to the test. Inter-rater reliability among the eight clinicians showed substantial agreement (κ = 0.626). The eight clinicians also rated the level of realism (7.63 ± 0.92 out of 10) compared to their experience with real patients. PMID:22562769
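
    The κ statistic reported here is a chance-corrected agreement measure; the study's value is computed across eight raters (a multi-rater statistic such as Fleiss' kappa), but the idea is easiest to see in the two-rater Cohen's kappa sketched below with hypothetical MAS ratings.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    # kappa = (observed agreement - chance agreement) / (1 - chance),
    # where chance agreement comes from each rater's marginal
    # category frequencies.
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[cat] * c2[cat] for cat in c1) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Hypothetical MAS scores assigned by two clinicians to four models.
kappa = cohens_kappa(["1", "1", "2", "2"], ["1", "1", "2", "1"])
```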

  6. Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs

    NASA Astrophysics Data System (ADS)

    Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.

    2016-06-01

    Recent advances in automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single media cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated by using images acquired from a system camera mounted in an underwater housing and the popular GoPro cameras respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in the air and underwater and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.

  7. The Accuracy of a Method for Printing Three-Dimensional Spinal Models

    PubMed Central

    Wang, Jian-Shun; Yang, Xin-Dong; Weng, Wan-Qing; Wang, Xiang-Yang; Xu, Hua-Zi; Chi, Yong-Long; Lin, Zhong-Ke

    2015-01-01

    Background To study the morphology of the human spine and new spinal fixation methods, scientists require cadaveric specimens, which are dependent on donation. However, in most countries, the number of people willing to donate their body is low. A 3D printed model could be an alternative for morphology research, but the accuracy of the morphology of a 3D printed model has not been determined. Methods Forty-five computed tomography (CT) scans of cervical, thoracic and lumbar spines were obtained, and 44 parameters of the cervical spine, 120 parameters of the thoracic spine, and 50 parameters of the lumbar spine were measured. The CT scan data in DICOM format were imported into Mimics software v10.01 for 3D reconstruction, and the data were saved in .STL format and imported into Cura software. After a 3D digital model was formed, it was saved in Gcode format and exported to a 3D printer for printing. After the 3D printed models were obtained, the above-referenced parameters were measured again. Results Paired t-tests (significance level P<0.05) were used to compare all parameters between the radiographic images and the 3D printed models. Furthermore, 88.6% of all parameters of the cervical spine, 90% of all parameters of the thoracic spine, and 94% of all parameters of the lumbar spine had Intraclass Correlation Coefficient (ICC) values >0.800. The remaining ICC values were between 0.600 and 0.800; none were <0.600. Conclusion In this study, we provide a protocol for printing accurate 3D spinal models for surgeons and researchers. The resulting 3D printed model is inexpensive and easily obtained for spinal fixation research. PMID:25915641
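
    The paired t-test comparison described in the Results can be sketched as follows; the measurement values are hypothetical, standing in for one anatomical parameter measured on the CT images and on the corresponding printed models.

```python
from scipy import stats

# Hypothetical paired measurements (mm) of one spinal parameter taken
# from CT images and from the matching 3D printed models.
ct_values      = [34.1, 25.6, 41.0, 29.8, 36.2]
printed_values = [34.4, 25.2, 41.3, 30.1, 36.0]

t_stat, p_value = stats.ttest_rel(ct_values, printed_values)
# A p-value above 0.05 indicates no significant systematic difference
# between the printed model and the radiographic measurements.
```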

  8. A new method for the accuracy evaluation of a manufactured piece

    NASA Astrophysics Data System (ADS)

    Oniga, E. V.; Cardei, M.

    2015-11-01

    To evaluate the accuracy of a manufactured piece, it must be measured and compared with a reference model, namely the designed 3D model, based on geometrical elements. In this paper a new method for the precision evaluation of a manufactured piece is proposed, which involves creating a digital 3D model of the piece from digital images and transforming it into a 3D mesh surface. The differences between the two models, the designed model and the newly created one, are calculated using the Hausdorff distance. The aim of this research is to determine the differences between two 3D models, especially CAD models, with high precision, in a completely automated way. To obtain the results, a small piece was photographed with a digital camera that was calibrated using a 3D calibration object: a target consisting of 42 points, 36 placed at the corners of 9 wooden cubes of different heights and 6 placed at the midpoints between the cubes, on a board. This target was previously tested, the tests showing that using this calibration target instead of a 2D calibration grid improves the precision of the final 3D model by approximately 50%. The 3D model of the manufactured piece was created using two methods. First, a point cloud was automatically generated from the digital images, and after filtering, the remaining points were interpolated to obtain the piece's 3D model as a mesh surface. Second, the piece's 3D model was also created from the digital images based on its characteristic points, resulting in a CAD model that was then transformed into a mesh surface. Finally, the two 3D models were compared with the designed model using the CloudCompare software, revealing the imperfections of the manufactured piece. The proposed method highlights the differences between the two models using a color palette, offering at the same time a global comparison.
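
    The Hausdorff distance used to compare the reconstructed model with the designed one is the largest nearest-neighbour distance between the two point sets, taken in both directions. A brute-force 2D sketch (real point clouds would use a spatial index, e.g. scipy.spatial's directed Hausdorff routine):

```python
import math

def hausdorff_distance(cloud_a, cloud_b):
    # Symmetric Hausdorff distance between two point sets: for each
    # point, find its nearest neighbour in the other set; report the
    # worst such distance over both directions.
    def directed(a, b):
        return max(min(math.dist(p, q) for q in b) for p in a)
    return max(directed(cloud_a, cloud_b), directed(cloud_b, cloud_a))

# Tiny hypothetical example in 2D.
d = hausdorff_distance([(0.0, 0.0), (1.0, 0.0)], [(0.0, 0.0), (3.0, 0.0)])
```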

  9. The Diagnostic Accuracy of Serologic and Molecular Methods for Detecting Visceral Leishmaniasis in HIV Infected Patients: Meta-Analysis

    PubMed Central

    Cota, Gláucia Fernandes; de Sousa, Marcos Roberto; Demarqui, Fábio Nogueira; Rabello, Ana

    2012-01-01

    Background Human visceral leishmaniasis (VL), a potentially fatal disease, has emerged as an important opportunistic condition in HIV infected patients. In immunocompromised patients, serological investigation is not considered an accurate diagnostic method for VL, and molecular techniques seem especially promising. Objective This work is a comprehensive systematic review and meta-analysis to evaluate the accuracy of serologic and molecular tests for VL diagnosis specifically in HIV-infected patients. Methods Two independent reviewers searched the PubMed and LILACS databases. The quality of studies was assessed by QUADAS score. Sensitivity and specificity were pooled separately and compared with overall accuracy measures: diagnostic odds ratio (DOR) and symmetric summary receiver operating characteristic (sROC). Results Thirty-three studies recruiting 1,489 patients were included. The following tests were evaluated: immunofluorescence antibody test (IFAT), enzyme-linked immunosorbent assay (ELISA), immunoblotting (Blot), direct agglutination test (DAT) and polymerase chain reaction (PCR) in whole blood and bone marrow. Most studies were carried out in Europe. Serological tests varied widely in performance, but with overall limited sensitivity. IFAT had poor sensitivity, ranging from 11% to 82%. The DOR (95% confidence interval) was higher for DAT, 36.01 (9.95–130.29), and Blot, 27.51 (9.27–81.66), than for IFAT, 7.43 (3.08–17.91), and ELISA, 3.06 (0.71–13.10). PCR in whole blood had the highest DOR: 400.35 (58.47–2741.42). The accuracy of PCR based on its Q-point was 0.95 (95% CI 0.92–0.97), indicating good overall performance. Conclusion Based mainly on evidence gained by infection with Leishmania infantum chagasi, serological tests should not be used to rule out a diagnosis of VL among the HIV-infected, but a positive test at even low titers has diagnostic value when combined with the clinical case definition. Considering the available evidence, tests based on DNA
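
    The diagnostic odds ratio (DOR) with its 95% confidence interval, the summary measure used throughout these results, can be computed from a study's 2 × 2 table as sketched below. The counts are hypothetical; a 0.5 continuity correction is applied, a common (assumed) convention for zero cells.

```python
import math

def diagnostic_odds_ratio(tp, fp, fn, tn):
    # DOR = (TP*TN)/(FP*FN); its log is approximately normal with
    # standard error sqrt(1/TP + 1/FP + 1/FN + 1/TN).
    tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))  # continuity correction
    dor = (tp * tn) / (fp * fn)
    se_log = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)
    lo = math.exp(math.log(dor) - 1.96 * se_log)
    hi = math.exp(math.log(dor) + 1.96 * se_log)
    return dor, lo, hi

# Hypothetical balanced 2x2 table: an uninformative test has DOR = 1.
dor, lo, hi = diagnostic_odds_ratio(10, 10, 10, 10)
```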

  10. Accuracy of qualitative analysis for assessment of skilled baseball pitching technique.

    PubMed

    Nicholls, Rochelle; Fleisig, Glenn; Elliott, Bruce; Lyman, Stephen; Osinski, Edmund

    2003-07-01

    Baseball pitching must be performed with correct technique if injuries are to be avoided and performance maximized. High-speed video analysis is accepted as the most accurate and objective method for evaluation of baseball pitching mechanics. The aim of this research was to develop an equivalent qualitative analysis method for use with standard video equipment. A qualitative analysis protocol (QAP) was developed for 24 kinematic variables identified as important to pitching performance. Twenty male baseball pitchers were videotaped using 60 Hz camcorders, and their technique evaluated using the QAP, by two independent raters. Each pitcher was also assessed using a 6-camera 200 Hz Motion Analysis system (MAS). Four QAP variables (22%) showed significant similarity with MAS results. Inter-rater reliability showed agreement on 33% of QAP variables. It was concluded that a complete and accurate profile of an athlete's pitching mechanics cannot be made using the QAP in its current form, but it is possible that such simple forms of biomechanical analysis could yield accurate results before 3-D methods become obligatory. PMID:14737929

  12. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique

    NASA Astrophysics Data System (ADS)

    Velazquez, L.; Castro-Palacio, J. C.

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistograms method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site qL during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law qL(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. The compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞ is discussed.
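
    The paper applies the Ferrenberg-Swendsen multihistogram method; the underlying idea is easiest to see in its single-histogram form, sketched below: samples drawn at one inverse temperature are reweighted by exp(-(β' - β)E) to estimate averages at a nearby β'. This is an illustrative minimal variant, not the multihistogram combination itself.

```python
import math

def reweighted_mean_energy(energies, beta_sim, beta_new):
    # Single-histogram reweighting: estimate <E> at beta_new from
    # energy samples drawn at beta_sim, weighting each sample by
    # exp(-(beta_new - beta_sim) * E).  The maximum log-weight is
    # subtracted for numerical stability before exponentiating.
    logw = [-(beta_new - beta_sim) * e for e in energies]
    shift = max(logw)
    w = [math.exp(x - shift) for x in logw]
    z = sum(w)
    return sum(wi * e for wi, e in zip(w, energies)) / z

# Sanity check: at beta_new == beta_sim this reduces to the plain mean.
e_same = reweighted_mean_energy([1.0, 2.0, 3.0], 1.0, 1.0)
```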

  13. Accuracy analysis of the Null-Screen method for the evaluation of flat heliostats

    NASA Astrophysics Data System (ADS)

    Cebrian-Xochihuila, P.; Huerta-Carranza, O.; Díaz-Uribe, R.

    2016-04-01

    In this work we develop an algorithm to determine the accuracy of the Null-Screen Method used for testing flat heliostats employed as solar concentrators in a central tower configuration. We simulate the image obtained on a CCD camera when an orderly distribution of points is displayed on a Null-Screen perpendicular to the heliostat under test. The deformations present in the heliostat are represented as a cosine function of position with different periods and amplitudes. As a resolution criterion, a deformation of the mirror can be detected when the difference in position between the spots on the image plane for the deformed surface and those obtained for an ideally flat heliostat equals one pixel. For a 6.4 μm pixel size and an 18 mm focal length, the minimum deformation we can measure in the heliostat corresponds to an amplitude of 122 μm for a period of 1 m; this is equivalent to 0.8 mrad in slope. This result depends on the particular configuration used during the test and the size of the heliostat.
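
    The quoted slope equivalence follows directly from the cosine deformation model: a surface A·cos(2πx/P) has maximum slope 2πA/P. A one-line check of the abstract's numbers:

```python
import math

def max_slope_mrad(amplitude_m, period_m):
    # Max slope of the deformation A*cos(2*pi*x/P) is 2*pi*A/P (radians);
    # multiply by 1e3 to express it in milliradians.
    return 2.0 * math.pi * amplitude_m / period_m * 1e3

# Amplitude 122 um over a 1 m period, as in the abstract.
slope = max_slope_mrad(122e-6, 1.0)   # ~0.77 mrad, i.e. the ~0.8 mrad quoted
```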

  14. A Method for Evaluating Timeliness and Accuracy of Volitional Motor Responses to Vibrotactile Stimuli.

    PubMed

    Leineweber, Matthew J; Shi, Sam; Andrysek, Jan

    2016-01-01

    Artificial sensory feedback (ASF) systems can be used to compensate for lost proprioception in individuals with lower-limb impairments. Effective design of these ASF systems requires an in-depth understanding of how the parameters of a specific feedback mechanism affect user perception of and reaction to stimuli. This article presents a method for applying vibrotactile stimuli to human participants and measuring their response. Rotating-mass vibratory motors are placed at pre-defined locations on the participant's thigh and controlled through custom hardware and software. The speed and accuracy of participants' volitional responses to vibrotactile stimuli are measured for researcher-specified combinations of motor placement and vibration frequency. While the protocol described here uses push-buttons to collect a simple binary response to the vibrotactile stimuli, the technique can be extended to other response mechanisms using inertial measurement units or pressure sensors to measure joint angle and weight-bearing ratios, respectively. Similarly, the application of vibrotactile stimuli can be explored for body segments other than the thigh. PMID:27585366

  16. Understanding the accuracy of parental perceptions of child physical activity: a mixed methods analysis

    PubMed Central

    Kesten, Joanna M.; Jago, Russell; Sebire, Simon J.; Edwards, Mark J.; Pool, Laura; Zahra, Jesmond; Thompson, Janice L.

    2016-01-01

    Background Interventions to increase children’s physical activity (PA) have achieved limited success. This may be attributed to inaccurate parental perceptions of their children’s PA and a lack of recognition of a need to change activity levels. Methods Fifty-three parents participated in semi-structured interviews to determine perceptions of child PA. Perceptions were compared to children’s measured moderate-to-vigorous PA (MVPA, classified as meeting or not meeting UK guidelines) to produce three categories: “accurate”, “over-estimate”, and “under-estimate”. Deductive content analysis was performed to understand the accuracy of parental perceptions. Results All parents of children meeting the PA guidelines accurately perceived their child’s PA, whilst the majority of parents whose child did not meet the guidelines overestimated their PA. Most parents were unconcerned about their child’s PA level, viewing them as naturally active and willing to be active. Qualitative explanations for perceptions of insufficient activity included children having health problems and preferences for inactive pursuits, and parents having difficulty facilitating PA in poor weather and not always observing their child’s PA level. Social comparisons also influenced parental perceptions. Conclusions Strategies to improve parental awareness of child PA are needed. Perceptions of child PA may be informed by child “busyness”, being unaware of activity levels, and social comparisons. PMID:25872227
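
    The three-category comparison described in the Methods can be sketched as a single classification function (the function and parameter names are ours, for illustration):

```python
def categorize_perception(parent_thinks_active: bool, meets_guidelines: bool) -> str:
    """Classify a parent's perception of their child's physical activity
    against the child's measured guideline status."""
    if parent_thinks_active == meets_guidelines:
        return "accurate"
    return "over-estimate" if parent_thinks_active else "under-estimate"

# A parent who believes their child is active enough, when measured MVPA
# falls short of the guidelines, over-estimates:
print(categorize_perception(True, False))  # → over-estimate
```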

  17. Comparison of reconstruction methods and quantitative accuracy in Siemens Inveon PET scanner

    NASA Astrophysics Data System (ADS)

    Ram Yu, A.; Kim, Jin Su; Kang, Joo Hyun; Moo Lim, Sang

    2015-04-01

    PET reconstruction is key to the quantification of PET data. To our knowledge, no comparative study of reconstruction methods has been performed to date. In this study, we compared reconstruction methods with various filters in terms of their spatial resolution, non-uniformities (NU), recovery coefficients (RCs), and spillover ratios (SORs). In addition, the linearity between measured and true radioactivity concentrations was also assessed. A Siemens Inveon PET scanner was used in this study. Spatial resolution was measured according to the NEMA standard using a 1 mm3 18F point source. Image quality was assessed in terms of NU, RC, and SOR. To measure the effect of reconstruction algorithms and filters, data were reconstructed using filtered backprojection (FBP), the 3D reprojection algorithm (3DRP), ordered-subset expectation maximization 2D (OSEM 2D), and maximum a posteriori (MAP) with various filters or smoothing factors (β). To assess the linearity of reconstructed radioactivity, an image-quality phantom filled with 18F was imaged and reconstructed using FBP, OSEM 2D, and MAP (β = 1.5 and 5 × 10⁻⁵). The highest achievable volumetric resolution was 2.31 mm3, and the highest RCs were obtained when OSEM 2D was used. SOR was 4.87% for air and 3.97% for water when OSEM 2D reconstruction was used. The measured radioactivity of the reconstructed image was proportional to the injected one for radioactivity below 16 MBq/ml when the FBP or OSEM 2D reconstruction methods were used. By contrast, when the MAP reconstruction method was used, the activity of the reconstructed image increased proportionally, regardless of the amount of injected radioactivity. When OSEM 2D or FBP were used, the measured radioactivity concentration was reduced by 53% compared with the true injected radioactivity for radioactivity <16 MBq/ml. The OSEM 2D reconstruction method provides the highest achievable volumetric resolution and highest RC among all the tested methods and yields a linear relation between the measured and true

  18. Assessment of fine motor skill in musicians and nonmusicians: differences in timing versus sequence accuracy in a bimanual fingering task.

    PubMed

    Kincaid, Anthony E; Duncan, Scott; Scott, Samuel A

    2002-08-01

    While professional musicians are generally considered to possess better control of finger movements than nonmusicians, relatively few reports have experimentally addressed the nature of this discrepancy in fine motor skills. For example, it is unknown whether musicians perform with greater skill than control subjects in all aspects of different types of fine motor activities. More specifically, it is not known whether musicians perform better than control subjects on a fine motor task that is similar, but not identical, to the playing of their primary instrument. The purpose of this study was to examine the accuracy of finger placement and accuracy of timing in professional musicians and nonmusicians using a simple, rhythmical, bilateral fingering pattern and the technology that allowed separate assessment of these two parameters. Professional musicians (other than pianists) and nonmusicians were given identical, detailed and explicit instructions but not allowed physically to practice the finger pattern. After verbally repeating the correct pattern for the investigator, subjects performed the task on an electric keyboard with both hands simultaneously. Each subject's performance was then converted to a numerical score. While musicians clearly demonstrated better accuracy in timing, no significant difference was found between the groups in their finger placement scores. These findings were not correlated with subjects' age, sex, limb dominance, or primary instrument (for the professional musicians). This study indicates that professional musicians perform better in timing accuracy but not spatial accuracy while executing a simple, novel, bimanual motor sequence. PMID:12365261

  19. Accuracy of Cameriere, Haavikko, and Willems radiographic methods on age estimation on Bosnian-Herzegovian children age groups 6-13.

    PubMed

    Galić, Ivan; Vodanović, Marin; Cameriere, Roberto; Nakaš, Enita; Galić, Elizabeta; Selimović, Edin; Brkić, Hrvoje

    2011-03-01

    The aim of this cross-sectional study was to compare the accuracy of the Cameriere European formula (Cameriere), the adopted Haavikko method from 1974 (Haavikko), and the Demirjian method as revised by Willems (Willems) for age estimation on orthopantomograms (OPGs) of Bosnian-Herzegovian (BH) children in age groups 6-13 years. The accuracy was determined as the difference between estimated dental age (DA) and chronological age (CA), and the absolute accuracy (absolute difference) was assessed by analyzing OPGs of 591 girls and 498 boys. The Cameriere method overestimated the mean age by 0.09 year for girls and underestimated it by -0.02 year for boys. The Haavikko method underestimated the mean age by -0.29 year for girls and -0.09 year for boys. The Willems method overestimated the mean age by 0.24 year in girls and by 0.42 year in boys. The absolute accuracies were 0.53 year for girls and 0.55 year for boys for the Cameriere method; 0.59 year for girls and 0.62 year for boys for the Haavikko method; and 0.69 year for girls and 0.67 year for boys for the Willems method. In conclusion, the Cameriere method is the most accurate for estimating the age of BH children in age groups 6-13 years using OPGs, followed by the adopted Haavikko method and the Willems method.
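
    The accuracy and absolute accuracy used above are the mean signed and mean absolute differences between dental and chronological age; a minimal sketch with hypothetical ages (illustration only, not the study's data):

```python
def age_accuracy(estimated, chronological):
    """Mean signed error (accuracy) and mean absolute error (absolute accuracy)
    of estimated dental ages against chronological ages, in years."""
    diffs = [da - ca for da, ca in zip(estimated, chronological)]
    n = len(diffs)
    return sum(diffs) / n, sum(abs(d) for d in diffs) / n

# Hypothetical dental (DA) vs chronological (CA) ages in years.
da = [7.2, 9.8, 11.5, 6.9]
ca = [7.0, 10.1, 11.0, 7.2]
bias, mae = age_accuracy(da, ca)
print(round(bias, 3), round(mae, 3))  # → 0.025 0.325
```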

  20. Accuracy of a Low-Cost Novel Computer-Vision Dynamic Movement Assessment: Potential Limitations and Future Directions

    NASA Astrophysics Data System (ADS)

    McGroarty, M.; Giblin, S.; Meldrum, D.; Wetterling, F.

    2016-04-01

    The aim of the study was to perform a preliminary validation of a low-cost markerless motion capture system (CAPTURE) against an industry gold standard (Vicon). Measurements of knee valgus and flexion during the performance of a countermovement jump (CMJ) were compared between CAPTURE and Vicon. After correction algorithms were applied to the raw CAPTURE data, acceptable levels of accuracy and precision were achieved. The knee flexion angle measured over three trials using CAPTURE deviated by -3.8° ± 3° (left) and 1.7° ± 2.8° (right) compared to Vicon. The findings suggest that low-cost markerless motion capture has the potential to provide an objective method for assessing lower-limb jump and landing mechanics in an applied sports setting. Furthermore, the outcome of the study warrants future research to examine more fully the potential implications of the use of low-cost markerless motion capture in the evaluation of dynamic movement for injury prevention.

  1. Accuracy and uncertainty assessment on geostatistical simulation of soil salinity in a coastal farmland using auxiliary variable.

    PubMed

    Yao, R J; Yang, J S; Shao, H B

    2013-06-01

    Understanding the spatial distribution of soil salinity aids farmers and researchers in identifying areas in the field where special management practices are required. Apparent electrical conductivity, measured by an electromagnetic induction instrument in a fairly quick manner, has been widely used to estimate spatial soil salinity. However, the methods used for this purpose have mostly been interpolation algorithms. In this study, sequential Gaussian simulation (SGS) and sequential Gaussian co-simulation (SGCS) algorithms were applied to assess the prediction accuracy and uncertainty of soil salinity with apparent electrical conductivity as an auxiliary variable. Results showed that the spatial patterns of soil salinity generated by the SGS and SGCS algorithms were consistent with the measured values. The profile distribution of soil salinity was characterized by an increase with depth, with medium salinization (ECe 4-8 dS/m) as the predominant salinization class. The SGCS algorithm outperformed the SGS algorithm, yielding a smaller root mean square error over the generated realizations. In addition, the SGCS algorithm had larger proportions of true values falling within probability intervals and a narrower range of probability intervals than the SGS algorithm. We concluded that the SGCS algorithm performed better in modeling local uncertainty and propagating spatial uncertainty. The inclusion of the auxiliary variable contributed to prediction capability and uncertainty modeling when a densely sampled auxiliary variable was used as the covariate to predict the sparsely sampled target variable.

  2. Accuracy Assessment of a Canal-Tunnel 3d Model by Comparing Photogrammetry and Laserscanning Recording Techniques

    NASA Astrophysics Data System (ADS)

    Charbonnier, P.; Chavant, P.; Foucher, P.; Muzet, V.; Prybyla, D.; Perrin, T.; Grussenmeyer, P.; Guillemin, S.

    2013-07-01

    With recent developments in the field of technology and computer science, conventional surveying methods are being supplanted by laser scanning and digital photogrammetry. These two different surveying techniques generate 3-D models of real-world objects or structures. In this paper, we consider the application of terrestrial laser scanning (TLS) and photogrammetry to the surveying of canal tunnels. The inspection of such structures requires time, safe access, specific processing, and professional operators. Therefore, a French partnership proposes to develop dedicated equipment based on image processing for the visual inspection of canal tunnels. A 3D model of the vault and side walls of the tunnel is constructed from images recorded onboard a boat moving inside the tunnel. To assess the accuracy of this photogrammetric model (PM), a reference model is built using static TLS. We here address the problem of comparing the resulting point clouds. Difficulties arise because of the highly differentiated acquisition processes, which result in very different point densities. We propose a new tool designed to compare differences between pairs of point clouds or surfaces (triangulated meshes). Moreover, dealing with huge datasets requires the implementation of appropriate structures and algorithms. Several techniques are presented: point-to-point, cloud-to-cloud and cloud-to-mesh. In addition, farthest-point resampling, an octree structure and the Hausdorff distance are adopted and described. Experimental results are shown for a 475 m long canal tunnel located in France.
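
    Of the comparison measures listed, the symmetric Hausdorff distance between two point clouds can be sketched as follows. This is a brute-force version for small clouds only; tunnel-scale datasets need the octree or k-d tree structures the paper describes:

```python
def hausdorff(A, B):
    """Symmetric Hausdorff distance between two small 3-D point clouds
    (brute force, O(|A|*|B|); fine for toy data, not for full scans)."""
    def d(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    def directed(P, Q):
        # For each point of P, distance to its nearest neighbour in Q;
        # the directed Hausdorff distance is the worst such distance.
        return max(min(d(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

A = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
B = [(0, 0, 0.1), (1, 0, 0), (0, 1, 0)]
print(round(hausdorff(A, B), 3))  # → 0.1
```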

  3. Use of measurement uncertainty analysis to assess accuracy of carbon mass balance closure for a cellulase production process.

    PubMed

    Schell, Daniel J; Sáez, Juan Carlos; Hamilton, Jenny; Tholudur, Arun; McMillan, James D

    2002-01-01

    Closing carbon mass balances is a critical and necessary step for verifying the performance of any conversion process. We developed a methodology for calculating carbon mass balance closures for a cellulase production process and then applied measurement uncertainty analysis to calculate 95% confidence limits to assess the accuracy of the results. Cellulase production experiments were conducted in 7-L fermentors using Trichoderma reesei grown on pure cellulose (Solka-floc), glucose, or lactose. All input and output carbon-containing streams were measured and carbon dioxide in the exhaust gas was quantified using a mass spectrometer. On Solka-floc, carbon mass balances ranged from 90 to 100% closure for the first 48 h but increased to 101 to 135% closure from 72 h to the end of the cultivation at 168 h. Carbon mass balance closures for soluble sugar substrates ranged from 92 to 127% over the entire course of the cultivations. The 95% confidence intervals (CIs) for carbon mass balance closure were typically +/-11 to 12 percentage points after 48 h of cultivation. Many of the carbon mass balance results did not bracket 100% closure within the 95% CIs. These results suggest that measurement problems with the experimental or analytical methods may exist. This work shows that uncertainty analysis can be a useful diagnostic tool for identifying measurement problems in complex biochemical systems.
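
    Closure and its confidence interval can be sketched as a ratio of carbon totals with first-order propagation of uncertainty; the totals and standard uncertainties below are hypothetical, and the paper's actual uncertainty analysis is more detailed:

```python
def closure_with_ci(c_in, u_in, c_out, u_out, z=1.96):
    """Carbon balance closure (%) with an approximate 95% confidence half-width.
    Assumes independent measurement errors, so the relative variances
    of numerator and denominator add for the ratio."""
    closure = 100.0 * c_out / c_in
    rel_u = ((u_in / c_in) ** 2 + (u_out / c_out) ** 2) ** 0.5
    half_width = z * closure * rel_u
    return closure, half_width

# Hypothetical total carbon in and out (g) with standard uncertainties.
closure, hw = closure_with_ci(c_in=100.0, u_in=2.0, c_out=95.0, u_out=2.5)
print(f"{closure:.1f}% +/- {hw:.1f} points")
```

    A closure whose interval does not bracket 100% flags a measurement problem, which is how the study uses the analysis diagnostically.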

  4. Accuracy of forced oscillation technique to assess lung function in geriatric COPD population

    PubMed Central

    Tse, Hoi Nam; Tseng, Cee Zhung Steven; Wong, King Ying; Yee, Kwok Sang; Ng, Lai Yun

    2016-01-01

    Introduction Performing lung function tests in geriatric patients has never been an easy task. With well-established evidence indicating impaired small airway function and air trapping in patients with geriatric COPD, utilizing the forced oscillation technique (FOT) as a supplementary tool may aid in the assessment of lung function in this population. Aims To study the use of FOT in the assessment of airflow limitation and air trapping in geriatric COPD patients. Study design A cross-sectional study in a public hospital in Hong Kong. ClinicalTrials.gov ID: NCT01553812. Methods Geriatric patients who had spirometry-diagnosed COPD were recruited, and both FOT and plethysmography were performed. “Resistance” and “reactance” FOT parameters were compared to plethysmography for the assessment of air trapping and airflow limitation. Results In total, 158 COPD subjects with a mean age of 71.9±0.7 years and a percentage of forced expiratory volume in 1 second of 53.4±1.7% were recruited. FOT values had a good correlation (r=0.4–0.7) with spirometric data. In general, X values (reactance) were better than R values (resistance), showing a higher correlation with spirometric data in airflow limitation (r=0.07–0.49 vs 0.61–0.67), small airway (r=0.05–0.48 vs 0.56–0.65), and lung volume (r=0.12–0.29 vs 0.43–0.49). In addition, resonance frequency (Fres) and frequency dependence (FDep) could well identify the severe type (percentage of forced expiratory volume in 1 second <50%) of COPD with high sensitivity (0.76, 0.71) and specificity (0.72, 0.64) (area under the curve: 0.8 and 0.77, respectively). Moreover, X values could stratify different severities of air trapping, while R values could not. Conclusion FOT may act as a simple and accurate tool in the assessment of severity of airflow limitation, small and central airway function, and air trapping in patients with geriatric COPD who have difficulties performing conventional lung function tests. Moreover, reactance

  5. Interrater Reliability Estimators Commonly Used in Scoring Language Assessments: A Monte Carlo Investigation of Estimator Accuracy

    ERIC Educational Resources Information Center

    Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J.

    2014-01-01

    Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…
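
    Two of the estimators named above can be sketched in a few lines (the rater scores are hypothetical; the generalizability coefficient requires a variance-components model and is omitted). Ties in the Spearman computation are handled with average ranks:

```python
def pearson(x, y):
    """Pearson product-moment correlation of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank-order correlation: Pearson on average (tie-adjusted) ranks."""
    def ranks(v):
        s = sorted(v)
        return [s.index(val) + (s.count(val) - 1) / 2.0 + 1.0 for val in v]
    return pearson(ranks(x), ranks(y))

# Hypothetical scores from two raters on five essays (illustration only).
r1 = [3, 4, 2, 5, 4]
r2 = [2, 4, 2, 5, 3]
print(round(pearson(r1, r2), 2), round(spearman(r1, r2), 2))  # → 0.91 0.95
```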

  6. Comparative analysis of Worldview-2 and Landsat 8 for coastal saltmarsh mapping accuracy assessment

    NASA Astrophysics Data System (ADS)

    Rasel, Sikdar M. M.; Chang, Hsing-Chung; Diti, Israt Jahan; Ralph, Tim; Saintilan, Neil

    2016-05-01

    Coastal saltmarshes and their constituent components and processes are of scientific interest due to their ecological functions and services. However, the heterogeneity and seasonal dynamics of the coastal wetland system make it challenging to map saltmarshes with remotely sensed data. This study selected four important saltmarsh species, Phragmites australis, Sporobolus virginicus, Ficinia nodosa and Schoenoplectus sp., as well as a mangrove and a pine tree species, Avicennia and Casuarina sp. respectively. High spatial resolution Worldview-2 data and coarse spatial resolution Landsat 8 imagery were selected for this study. Among the selected vegetation types, some patches were fragmented and close to the spatial resolution of the Worldview-2 data, while some patches were larger than the 30 meter resolution of the Landsat 8 data. This study aims to test the effectiveness of different classifiers for imagery with various spatial and spectral resolutions. Three different classification algorithms, Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM) and Artificial Neural Network (ANN), were tested and compared in terms of the mapping accuracy of the results derived from both satellite images. For the Worldview-2 data, SVM gave the highest overall accuracy (92.12%, kappa = 0.90), followed by ANN (90.82%, kappa = 0.89) and MLC (90.55%, kappa = 0.88). For the Landsat 8 data, MLC (82.04%) showed the highest classification accuracy compared to SVM (77.31%) and ANN (75.23%). The producer accuracy of the classification results is also presented in the paper.
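
    The overall accuracy and kappa figures quoted above are summary statistics of a confusion matrix; a minimal sketch of both (the 3-class matrix is hypothetical, not the study's data):

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(k)) / n                  # observed agreement
    # Chance agreement: product of matching row and column totals, summed.
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(k)) / n ** 2
    return po, (po - pe) / (1 - pe)

# Hypothetical 3-class confusion matrix, for illustration only.
cm = [[50, 3, 2],
      [4, 45, 6],
      [1, 5, 44]]
acc, kappa = accuracy_and_kappa(cm)
print(round(acc, 3), round(kappa, 3))  # → 0.869 0.803
```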

  7. Applying Signal-Detection Theory to the Study of Observer Accuracy and Bias in Behavioral Assessment

    ERIC Educational Resources Information Center

    Lerman, Dorothea C.; Tetreault, Allison; Hovanetz, Alyson; Bellaci, Emily; Miller, Jonathan; Karp, Hilary; Mahmood, Angela; Strobel, Maggie; Mullen, Shelley; Keyl, Alice; Toupard, Alexis

    2010-01-01

    We evaluated the feasibility and utility of a laboratory model for examining observer accuracy within the framework of signal-detection theory (SDT). Sixty-one individuals collected data on aggression while viewing videotaped segments of simulated teacher-child interactions. The purpose of Experiment 1 was to determine if brief feedback and…

  8. Portable device to assess dynamic accuracy of global positioning systems (GPS) receivers used in agricultural aircraft

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A device was designed to test the dynamic accuracy of Global Positioning System (GPS) receivers used in aerial vehicles. The system works by directing a sun-reflected light beam from the ground to the aircraft using mirrors. A photodetector is placed pointing downward from the aircraft and circuitry...

  9. An Accuracy--Response Time Capacity Assessment Function that Measures Performance against Standard Parallel Predictions

    ERIC Educational Resources Information Center

    Townsend, James T.; Altieri, Nicholas

    2012-01-01

    Measures of human efficiency under increases in mental workload or attentional limitations are vital in studying human perception, cognition, and action. Assays of efficiency as workload changes have typically been confined to either reaction times (RTs) or accuracy alone. Within the realm of RTs, a nonparametric measure called the "workload…

  10. Assessing the Accuracy and Consistency of Language Proficiency Classification under Competing Measurement Models

    ERIC Educational Resources Information Center

    Zhang, Bo

    2010-01-01

    This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…

  11. Accuracy, Confidence, and Calibration: How Young Children and Adults Assess Credibility

    ERIC Educational Resources Information Center

    Tenney, Elizabeth R.; Small, Jenna E.; Kondrad, Robyn L.; Jaswal, Vikram K.; Spellman, Barbara A.

    2011-01-01

    Do children and adults use the same cues to judge whether someone is a reliable source of information? In 4 experiments, we investigated whether children (ages 5 and 6) and adults used information regarding accuracy, confidence, and calibration (i.e., how well an informant's confidence predicts the likelihood of being correct) to judge informants'…

  12. Spatial and Temporal Analysis on the Distribution of Active Radio-Frequency Identification (RFID) Tracking Accuracy with the Kriging Method

    PubMed Central

    Liu, Xin; Shannon, Jeremy; Voun, Howard; Truijens, Martijn; Chi, Hung-Lin; Wang, Xiangyu

    2014-01-01

    Radio frequency identification (RFID) technology has already been applied in a number of areas to facilitate the tracking process. However, the insufficient tracking accuracy of RFID is one of the problems that impede its wider application. Previous studies focus on examining the accuracy of RFID at discrete points, thereby leaving the tracking accuracy of the areas between the observed points unpredictable. In this study, spatial and temporal analysis is applied to interpolate the continuous distribution of RFID tracking accuracy based on the Kriging method. An implementation trial was conducted in the loading and docking area in front of a warehouse to validate this approach. The results show that weak signal areas can be easily identified by the approach developed in the study. The optimum distance between two RFID readers and the effect of the sudden removal of readers are also presented by analysing the spatial and temporal variation of RFID tracking accuracy. This study reveals the correlation between the testing time and the stability of RFID tracking accuracy. Experimental results show that the proposed approach can be used to assist the RFID system setup process to increase tracking accuracy. PMID:25356648
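
    Ordinary kriging, the interpolation underlying the study, can be sketched compactly. The Gaussian variogram parameters, reader positions, and accuracy values below are all hypothetical:

```python
import math

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ordinary_krige(pts, vals, target, sill=1.0, rng=5.0):
    """Ordinary kriging prediction at `target` with a Gaussian variogram
    (no nugget), enforcing unbiasedness via a Lagrange multiplier."""
    def gamma(h):
        return sill * (1.0 - math.exp(-(h / rng) ** 2))
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    n = len(pts)
    A = [[gamma(dist(pts[i], pts[j])) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])                    # unbiasedness constraint row
    b = [gamma(dist(p, target)) for p in pts] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * vi for wi, vi in zip(w, vals))

# Hypothetical reader positions (m) and measured tracking-accuracy values.
pts = [(0, 0), (4, 0), (0, 4), (4, 4)]
vals = [0.9, 0.7, 0.8, 0.4]
print(round(ordinary_krige(pts, vals, (0, 0)), 3))  # exact at a sample point: 0.9
```

    Without a nugget, kriging is an exact interpolator, which is why the prediction at a sampled location returns the measured value.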

  14. Three-dimensional accuracy of different correction methods for cast implant bars

    PubMed Central

    Kwon, Ji-Yung; Kim, Chang-Whe; Lim, Young-Jun; Kwon, Ho-Beom

    2014-01-01

    PURPOSE The aim of the present study was to evaluate the accuracy of three techniques for the correction of cast implant bars. MATERIALS AND METHODS Thirty cast implant bars were fabricated on a metal master model. All cast implant bars were sectioned at 5 mm from the left gold cylinder using a disk of 0.3 mm thickness, and each group of ten specimens was then corrected by gas-air torch soldering, laser welding, or an additional casting technique. Three-dimensional evaluation including horizontal, vertical, and twisting measurements was based on measurement and comparison of (1) gap distances of the right abutment replica-gold cylinder interface at the buccal, distal, and lingual sides, (2) changes in bar length, and (3) axis angle changes of the right gold cylinders at the post-correction measurement step in the three groups, using contact and non-contact coordinate measuring machines. One-way analysis of variance (ANOVA) and paired t-tests were performed at the significance level of 5%. RESULTS Gap distances of the cast implant bars after the correction procedure showed no statistically significant difference among groups. Changes in bar length between the pre-casting and post-correction measurements were statistically significant among groups. Axis angle changes of the right gold cylinders were not statistically significant among groups. CONCLUSION There was no statistically significant difference among the three techniques in horizontal, vertical, and axial errors. However, the gas-air torch soldering technique showed the most consistent and accurate trend in the correction of implant bar error, whereas the laser welding technique showed a large mean and standard deviation in the vertical and twisting measurements and might be a technique-sensitive method. PMID:24605205

  15. Accuracy Assessment of Direct Georeferencing for Photogrammetric Applications on Small Unmanned Aerial Platforms

    NASA Astrophysics Data System (ADS)

    Mian, O.; Lutes, J.; Lipa, G.; Hutton, J. J.; Gavelle, E.; Borghini, S.

    2016-03-01

    Microdrones md4-1000 quad-rotor VTOL UAV. The Sony A7R and each lens combination were focused and calibrated terrestrially using the Applanix camera calibration facility, and then integrated with the APX-15 GNSS-Inertial system using a custom mount specifically designed for UAV applications. The mount is constructed in such a way as to maintain the stability of both the interior orientation and IMU boresight calibration over shock and vibration, thus turning the Sony A7R into a metric imaging solution. In July and August 2015, Applanix and Avyon carried out a series of test flights of this system. The goal of these test flights was to assess the performance of DMS APX-15 direct georeferencing system under various scenarios. Furthermore, an examination of how DMS APX-15 can be used to produce accurate map products without the use of ground control points and with reduced sidelap was also carried out. Reducing the side lap for survey missions performed by small UAVs can significantly increase the mapping productivity of these platforms. The area mapped during the first flight campaign was a 250m x 300m block and a 775m long railway corridor in a rural setting in Ontario, Canada. The second area mapped was a 450m long corridor over a dam known as Fryer Dam (over Richelieu River in Quebec, Canada). Several ground control points were distributed within both test areas. The flight over the block area included 8 North-South lines and 1 cross strip flown at 80m AGL, resulting in a ~1cm GSD. The flight over the railway corridor included 2 North-South lines also flown at 80m AGL. Similarly, the flight over the dam corridor included 2 North-South lines flown at 50m AGL. The focus of this paper was to analyse the results obtained from the two corridors. Test results from both areas were processed using Direct Georeferencing techniques, and then compared for accuracy against the known positions of ground control points in each test area. 
The GNSS-Inertial data collected by the APX-15 was
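
    Comparing directly georeferenced coordinates against surveyed check points, as done for both corridors, typically reduces to an RMSE computation; a minimal sketch with hypothetical coordinates (not the campaign's data):

```python
def check_point_rmse(predicted, surveyed):
    """Horizontal RMSE of directly georeferenced coordinates against
    surveyed ground control points (both as (E, N) tuples, metres)."""
    sq = [(pe - se) ** 2 + (pn - sn) ** 2
          for (pe, pn), (se, sn) in zip(predicted, surveyed)]
    return (sum(sq) / len(sq)) ** 0.5

# Hypothetical easting/northing pairs (m), for illustration only.
pred = [(100.02, 200.01), (149.98, 250.03), (199.99, 300.00)]
gcp = [(100.00, 200.00), (150.00, 250.00), (200.00, 300.00)]
print(round(check_point_rmse(pred, gcp), 3))  # → 0.025
```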

  17. Accuracy of CBCT images in the assessment of buccal marginal alveolar peri-implant defects: effect of field of view

    PubMed Central

    Murat, S; Kılıç, C; Yüksel, S; Avsever, H; Farman, A; Scarfe, W C

    2014-01-01

Objectives: To investigate the reliability and accuracy of cone beam CT (CBCT) images obtained at different fields of view in detecting and quantifying simulated buccal marginal alveolar peri-implant defects. Methods: Simulated buccal defects were prepared in 69 implants inserted into cadaver mandibles. CBCT images at three different fields of view were acquired: 40 × 40, 60 × 60 and 100 × 100 mm. The presence or absence of defects was assessed on three sets of images using a five-point scale by three observers. Observers also measured the depth, width and volume of defects on CBCT images, which were compared with physical measurements. The kappa value was calculated to assess intra- and interobserver agreement. Six-way repeated-measures analysis of variance was used to evaluate treatment effects on the diagnosis. Pairwise comparisons of median true-positive and true-negative rates were calculated by the χ² test. Pearson's correlation coefficient was used to determine the relationship between measurements. The significance level was set at p < 0.05. Results: All observers had excellent intra-observer agreement. Defect status (p < 0.001) and defect size (p < 0.001) factors were statistically significant. Pairwise interactions were found between defect status and defect size (p = 0.001). No differences in median true-positive or true-negative values were found between CBCT fields of view (p > 0.05). Significant correlations were found between physical and CBCT measurements (p < 0.001). Conclusions: All CBCT images performed similarly for the detection of simulated buccal marginal alveolar peri-implant defects. Depth, width and volume measurements of the defects from the various CBCT images correlated highly with physical measurements. PMID:24645965
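The kappa statistic used above to score intra- and interobserver agreement can be sketched in a few lines. This is a generic illustration, not the study's own analysis code; the function name and the small contingency table are ours:

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for agreement between two observers.

    `table` is an R x R contingency table of ratings
    (rows: observer 1, columns: observer 2).
    """
    t = np.asarray(table, float)
    n = t.sum()
    po = np.trace(t) / n                                  # observed agreement
    pe = (t.sum(axis=1) * t.sum(axis=0)).sum() / n ** 2   # chance agreement
    return float((po - pe) / (1.0 - pe))
```

Kappa is 1.0 for perfect agreement and 0.0 when agreement is no better than chance, which is why it is preferred over raw percent agreement for observer studies like this one.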

  18. Accuracy assessment of airborne photogrammetrically derived high-resolution digital elevation models in a high mountain environment

    NASA Astrophysics Data System (ADS)

    Müller, Johann; Gärtner-Roer, Isabelle; Thee, Patrick; Ginzler, Christian

    2014-12-01

High-resolution digital elevation models (DEMs) generated by airborne remote sensing are frequently used to analyze landform structures (monotemporal) and geomorphological processes (multitemporal) in remote areas or areas of extreme terrain. In order to assess and quantify such structures and processes it is necessary to know the absolute accuracy of the available DEMs. This study assesses the absolute vertical accuracy of DEMs generated by the High Resolution Stereo Camera-Airborne (HRSC-A), the Leica Airborne Digital Sensors 40/80 (ADS40 and ADS80) and the analogue camera system RC30. The study area is located in the Turtmann valley, Valais, Switzerland, a glacially and periglacially formed hanging valley stretching from 2400 m to 3300 m a.s.l. The photogrammetrically derived DEMs are evaluated against geodetic field measurements and an airborne laser scan (ALS). Traditional and robust global and local accuracy measures are used to describe the vertical quality of the DEMs, which show a non-Gaussian distribution of errors. The results show that all four sensor systems produce DEMs with similar accuracy despite their different setups and generations. The ADS40 and ADS80 (both with a ground sampling distance of 0.50 m) generate the most accurate DEMs in complex high mountain areas, with an RMSE of 0.8 m and an NMAD of 0.6 m. They also show the highest accuracy relative to flying height (0.14‰). The pushbroom scanning system HRSC-A produces an RMSE of 1.03 m and an NMAD of 0.83 m (0.21‰ of the flying height and 10 times the ground sampling distance). The analogue camera system RC30 produces DEMs with a vertical accuracy of 1.30 m RMSE and 0.83 m NMAD (0.17‰ of the flying height and two times the ground sampling distance). It is also shown that the performance of the DEMs strongly depends on the inclination of the terrain. The RMSE of areas with an inclination <40° is better than 1 m. In more inclined areas the error and outlier occurrence
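The RMSE and NMAD statistics reported above are straightforward to reproduce from a sample of elevation residuals. A minimal sketch (the function name is ours; the 1.4826 factor is the standard normal-consistency constant for the NMAD):

```python
import numpy as np

def dem_vertical_accuracy(dem_z, ref_z):
    """Vertical accuracy of DEM heights against reference measurements.

    Returns (RMSE, NMAD). NMAD is a robust spread estimate that is far
    less sensitive to outliers and non-Gaussian residual distributions.
    """
    dh = np.asarray(dem_z, float) - np.asarray(ref_z, float)
    rmse = float(np.sqrt(np.mean(dh ** 2)))
    # NMAD = 1.4826 * median(|dh - median(dh)|); equals sigma for normal errors
    nmad = float(1.4826 * np.median(np.abs(dh - np.median(dh))))
    return rmse, nmad
```

For normally distributed errors the two agree; when residuals are heavy-tailed, as in the high-mountain DEMs above, the NMAD is the more trustworthy figure.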

  19. Power Series Approximation for the Correlation Kernel Leading to Kohn-Sham Methods Combining Accuracy, Computational Efficiency, and General Applicability

    NASA Astrophysics Data System (ADS)

    Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas

    2016-09-01

    A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.

  20. Accuracy of a Mitral Valve Segmentation Method Using J-Splines for Real-Time 3D Echocardiography Data

    PubMed Central

    Siefert, Andrew W.; Icenogle, David A.; Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Rossignac, Jarek; Lerakis, Stamatios; Yoganathan, Ajit P.

    2013-01-01

Patient-specific models of the heart's mitral valve (MV) exhibit potential for surgical planning. While advances in 3D echocardiography (3DE) have provided adequate resolution to extract MV leaflet geometry, no study has quantitatively assessed the accuracy of the modeled leaflets against a ground-truth standard for temporal frames beyond systolic closure or for differing valvular dysfunctions. The accuracy of a 3DE-based segmentation methodology based on J-splines was assessed for porcine MVs with known 4D leaflet coordinates within a pulsatile simulator during closure, peak closure, and opening for a control, prolapsed, and billowing MV model. Across all time points, the mean distance errors between the segmented models and the ground-truth data were 0.40±0.32 mm, 0.52±0.51 mm, and 0.74±0.69 mm for the control, flail, and billowing models, respectively. For all models and temporal frames, 95% of the distance errors were below 1.64 mm. When applied to a patient data set, the segmentation was able to confirm a regurgitant orifice and post-operative improvements in coaptation. This study provides an experimental platform for assessing the accuracy of an MV segmentation methodology at phases beyond systolic closure and for differing MV dysfunctions. The results demonstrate the accuracy of an MV segmentation methodology for the development of future surgical planning tools. PMID:23460042
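A simple surrogate for the mean distance error quoted above is the mean nearest-neighbour distance from the segmented point cloud to the ground-truth leaflet coordinates. A brute-force NumPy sketch (function name and toy points are illustrative, not from the study):

```python
import numpy as np

def mean_distance_error(model_pts, truth_pts):
    """Mean nearest-neighbour distance from each segmented-model point
    to the ground-truth point cloud (same units as the coordinates)."""
    m = np.asarray(model_pts, float)[:, None, :]   # shape (M, 1, 3)
    t = np.asarray(truth_pts, float)[None, :, :]   # shape (1, N, 3)
    d = np.sqrt(((m - t) ** 2).sum(axis=2))        # (M, N) pairwise distances
    return float(d.min(axis=1).mean())
```

For large clouds a k-d tree (e.g. `scipy.spatial.cKDTree`) avoids the O(M·N) pairwise matrix, but the brute-force form keeps the metric's definition explicit.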

  1. Methods of airway resistance assessment.

    PubMed

    Urbankowski, Tomasz; Przybyłowski, Tadeusz

    2016-01-01

Airway resistance is the ratio of driving pressure to the rate of airflow in the airways. The methods most frequently used to measure airway resistance are whole-body plethysmography, the interrupter technique and the forced oscillation technique. All these methods measure resistance during respiration at volumes close to tidal volume, and none requires forced or deep breathing manoeuvres during measurement. The most popular method for measuring airway resistance is whole-body plethysmography. The results of plethysmography include, among others, the following parameters: airway resistance (Raw), airway conductance (Gaw), specific airway resistance (sRaw) and specific airway conductance (sGaw). The interrupter technique is based on the assumption that, at the moment of airway occlusion, air pressure in the mouth is equal to the alveolar pressure. In the forced oscillation technique (FOT), airway resistance is calculated based on the changes in pressure and flow caused by air vibration. The methods for measuring airway resistance described in the present paper are a useful alternative to the most common lung function test, spirometry. The target group in which these methods may be widely used is, in particular, patients who are unable to perform spirometry.

  2. Assessing the accuracy of software predictions of mammalian and microbial metabolites

    EPA Science Inventory

    New chemical development and hazard assessments benefit from accurate predictions of mammalian and microbial metabolites. Fourteen biotransformation libraries encoded in eight software packages that predict metabolite structures were assessed for their sensitivity (proportion of ...

  3. An assessment of accuracy, error, and conflict with support values from genome-scale phylogenetic data.

    PubMed

    Taylor, Derek J; Piel, William H

    2004-08-01

    Despite the importance of molecular phylogenetics, few of its assumptions have been tested with real data. It is commonly assumed that nonparametric bootstrap values are an underestimate of the actual support, Bayesian posterior probabilities are an overestimate of the actual support, and among-gene phylogenetic conflict is low. We directly tested these assumptions by using a well-supported yeast reference tree. We found that bootstrap values were not significantly different from accuracy. Bayesian support values were, however, significant overestimates of accuracy but still had low false-positive error rates (0% to 2.8%) at the highest values (>99%). Although we found evidence for a branch-length bias contributing to conflict, there was little evidence for widespread, strongly supported among-gene conflict from bootstraps. The results demonstrate that caution is warranted concerning conclusions of conflict based on the assumption of underestimation for support values in real data. PMID:15140947

  4. Accuracy in Student Self-Assessment: Directions and Cautions for Research

    ERIC Educational Resources Information Center

    Brown, Gavin T. L.; Andrade, Heidi L.; Chen, Fei

    2015-01-01

    Student self-assessment is a central component of current conceptions of formative and classroom assessment. The research on self-assessment has focused on its efficacy in promoting both academic achievement and self-regulated learning, with little concern for issues of validity. Because reliability of testing is considered a sine qua non for the…

  5. Accuracy assessment of high frequency 3D ultrasound for digital impression-taking of prepared teeth

    NASA Astrophysics Data System (ADS)

    Heger, Stefan; Vollborn, Thorsten; Tinschert, Joachim; Wolfart, Stefan; Radermacher, Klaus

    2013-03-01

Silicone-based impression-taking of prepared teeth followed by plaster casting is well established but potentially unreliable, error-prone and inefficient, particularly in combination with emerging techniques like computer-aided design and manufacturing (CAD/CAM) of dental prostheses. Intra-oral optical scanners for digital impression-taking have been introduced, but some drawbacks still exist. Because optical waves can hardly penetrate liquids or soft tissues, sub-gingival preparations still need to be uncovered invasively prior to scanning. High frequency ultrasound (HFUS) based micro-scanning has recently been investigated as an alternative to optical intra-oral scanning. Ultrasound is less sensitive to oral fluids and in principle able to penetrate gingiva without invasively exposing sub-gingival preparations. Nevertheless, the spatial resolution as well as the digitization accuracy of an ultrasound-based micro-scanning system remains a critical parameter, because the ultrasound wavelength in water-like media such as gingiva is typically larger than that of optical waves. In this contribution, the in-vitro accuracy of ultrasound-based micro-scanning for tooth geometry reconstruction is investigated and compared to its extra-oral optical counterpart. In order to increase the spatial resolution of the system, 2nd harmonic frequencies from a mechanically driven focused single-element transducer were separated, and corresponding 3D surface models were calculated for both the fundamentals and the 2nd harmonics. Measurements on phantoms, model teeth and human teeth were carried out to evaluate spatial resolution and surface detection accuracy. Comparison of optical and ultrasound digital impression-taking indicates that, in terms of accuracy, ultrasound-based tooth digitization can be an alternative to optical impression-taking.

  6. Future dedicated Venus-SGG flight mission: Accuracy assessment and performance analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Houtse; Zhong, Min; Yun, Meijuan

    2016-01-01

This study concentrates principally on the systematic requirements analysis for the future dedicated Venus-SGG (spacecraft gravity gradiometry) flight mission in China, with respect to the matching measurement accuracies of the spacecraft-based scientific instruments and the orbital parameters of the spacecraft. Firstly, we created and proved single and combined analytical error models of the cumulative Venusian geoid height as influenced by the gravity gradient error of the spacecraft-borne atom-interferometer gravity gradiometer (AIGG) and by the orbital position and velocity errors tracked by the deep space network (DSN) on Earth. Secondly, weighing the advantages and disadvantages of the electrostatically suspended gravity gradiometer, the superconducting gravity gradiometer and the AIGG, the ultra-high-precision spacecraft-borne AIGG is well suited to making a significant contribution to globally mapping the Venusian gravitational field and modeling the geoid with unprecedented accuracy and spatial resolution. Finally, the future dedicated Venus-SGG spacecraft should adopt the optimal matching accuracy indices of 3 × 10⁻¹³/s² in gravity gradient, 10 m in orbital position and 8 × 10⁻⁴ m/s in orbital velocity, and the preferred orbital parameters comprising an orbital altitude of 300 ± 50 km, an observation time of 60 months and a sampling interval of 1 s.

  7. Diagnostic accuracy of emergency-performed focused assessment with sonography for trauma (FAST) in blunt abdominal trauma

    PubMed Central

    Ghafouri, Hamed Basir; Zare, Morteza; Bazrafshan, Azam; Modirian, Ehsan; Farahmand, Shervin; Abazarian, Niloofar

    2016-01-01

Introduction Intra-abdominal hemorrhage due to blunt abdominal trauma is a major cause of trauma-related mortality. Therefore, any measure that facilitates the diagnosis of intra-abdominal hemorrhage could save patients' lives more effectively. The aim of this study was to determine the accuracy of focused assessment with sonography for trauma (FAST) performed by emergency physicians. Methods In this cross-sectional study from February 2011 to January 2012 at 7th Tir Hospital in Tehran (Iran), 120 patients with blunt abdominal trauma were enrolled and evaluated for abdominal fluid. FAST sonography was performed for all subjects by emergency residents and radiologists, each blind to the other's findings. Abdominal CT, the gold standard, was performed for all cases. SPSS 20.0 was used to analyze the results. Results During the study, 120 patients with blunt abdominal trauma were evaluated; the mean age of the patients was 33.0 ± 16.6 years and the gender ratio was 3/1 (M/F). FAST sonography by emergency physicians showed free fluid in the abdominal or pelvic spaces in 33 patients (27.5%), six of whom were not confirmed by CT; sensitivity and specificity were 93.1% and 93.4%, respectively. For the examinations performed by radiology residents, sensitivity was slightly higher (96.5%) with lower specificity (92.3%). Conclusion The results suggest that emergency physicians can use ultrasonography as a safe and reliable method for evaluating blunt abdominal trauma. PMID:27790349
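The sensitivity and specificity figures above follow directly from a 2 × 2 confusion table scored against the CT reference. A minimal sketch; the counts below are reconstructed by us to be consistent with the reported 120 patients and rates, not taken verbatim from the paper:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity and specificity (in %) from a 2x2 confusion table,
    e.g. FAST findings scored against CT as the reference standard."""
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts: 33 FAST-positive scans (27 confirmed by CT, 6 not),
# 2 missed effusions, 85 true negatives -- 120 patients in total.
sens, spec = diagnostic_accuracy(tp=27, fp=6, fn=2, tn=85)
```

With these counts the sketch returns 93.1% sensitivity and 93.4% specificity, matching the values reported for the emergency physicians.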

  8. Enhancing Institutional Assessment Efforts through Qualitative Methods

    ERIC Educational Resources Information Center

    Van Note Chism, Nancy; Banta, Trudy W.

    2007-01-01

    Qualitative methods can do much to describe context and illuminate the why behind patterns encountered in institutional assessment. Alone, or in combination with quantitative methods, they should be the approach of choice for many of the most important assessment questions. (Contains 1 table.)

  9. Constraining OCT with Knowledge of Device Design Enables High Accuracy Hemodynamic Assessment of Endovascular Implants

    PubMed Central

    Brown, Jonathan; Lopes, Augusto C.; Kunio, Mie; Kolachalama, Vijaya B.; Edelman, Elazer R.

    2016-01-01

Background Stacking cross-sectional intravascular images permits three-dimensional rendering of endovascular implants, yet introduces between-frame uncertainties that limit characterization of device placement and the hemodynamic microenvironment. In a porcine coronary stent model, we demonstrate enhanced OCT reconstruction with preservation of between-frame features through fusion with angiography and a priori knowledge of stent design. Methods and Results Strut positions were extracted from sequential OCT frames. Reconstruction with standard interpolation generated discontinuous stent structures. By computationally constraining interpolation to known stent skeletons fitted to 3D ‘clouds’ of OCT-Angio-derived struts, implant anatomy was resolved, accurately rendering features from implant diameter and curvature (n = 1 vessel; r² = 0.91 and 0.90, respectively) to individual strut-wall configurations (average displacement error ~15 μm). This framework facilitated hemodynamic simulation (n = 1 vessel), showing the critical importance of accurate anatomic rendering in characterizing both quantitative and basic qualitative flow patterns. Discontinuities with standard approaches systematically introduced noise and bias, poorly capturing regional flow effects. In contrast, the enhanced method preserved multi-scale (local strut to regional stent) flow interactions, demonstrating the impact of regional context in defining the hemodynamic consequence of local deployment errors. Conclusion Fusion of planar angiography and knowledge of device design permits enhanced OCT image analysis of in situ tissue-device interactions. Given emerging interest in simulation-derived hemodynamic assessment as a surrogate measure of biological risk, such fused modalities offer a new window into patient-specific implant environments. PMID:26906566

  10. On the Spatial and Temporal Accuracy of Overset Grid Methods for Moving Body Problems

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1996-01-01

    A study of numerical attributes peculiar to an overset grid approach to unsteady aerodynamics prediction is presented. Attention is focused on the effect of spatial error associated with interpolation of intergrid boundary conditions and temporal error associated with explicit update of intergrid boundary points on overall solution accuracy. A set of numerical experiments are used to verify whether or not the use of simple interpolation for intergrid boundary conditions degrades the formal accuracy of a conventional second-order flow solver, and to quantify the error associated with explicit updating of intergrid boundary points. Test conditions correspond to the transonic regime. The validity of the numerical results presented here are established by comparison with existing numerical results of documented accuracy, and by direct comparison with experimental results.

  11. Screeners and brief assessment methods.

    PubMed

    Pérez Rodrigo, Carmen; Morán Fagúndez, Luis Juan; Riobó Serván, Pilar; Aranceta Bartrina, Javier

    2015-02-26

In the last two decades, easy-to-use, simple instruments have been developed and validated to assess specific aspects of the diet, or a general profile that can be compared with a reference dietary pattern such as the Mediterranean Diet or with the recommendations of the Dietary Guidelines. Brief instruments are rapid, simple and easy-to-use tools that can be implemented by unskilled personnel without specific training. These tools are useful in clinical settings, in Primary Health Care and in the community as a means of triage or as a screening tool to identify individuals or groups of people at risk who require further care; they have also been used in studies to investigate associations between specific aspects of the diet and health outcomes. They are likewise used in interventions focused on changing eating behaviors, as a diagnostic tool, for self-evaluation purposes, or to provide tailored advice in web-based interventions or mobile apps. There are also specific instruments for use in children, adults, the elderly or specific population groups.

  12. Increased Throwing Accuracy Improves Children's Catching Performance in a Ball-Catching Task from the Movement Assessment Battery (MABC-2)

    PubMed Central

    Dirksen, Tim; De Lussanet, Marc H. E.; Zentgraf, Karen; Slupinski, Lena; Wagner, Heiko

    2016-01-01

    The Movement Assessment Battery for Children (MABC-2) is a functional test for identifying deficits in the motor performance of children. The test contains a ball-catching task that requires the children to catch a self-thrown ball with one hand. As the task can be executed with a variety of different catching strategies, it is assumed that the task success can also vary considerably. Even though it is not clear, whether the performance merely depends on the catching skills or also to some extent on the throwing skills, the MABC-2 takes into account only the movement outcome. Therefore, the purpose of the current study was to examine (1) to what extent the throwing accuracy has an effect on the children's catching performance and (2) to what extent the throwing accuracy influences their choice of catching strategy. In line with the test manual, the children's catching performance was quantified on basis of the number of correctly caught balls. The throwing accuracy and the catching strategy were quantified by applying a kinematic analysis on the ball's trajectory and the hand movements. Based on linear regression analyses, we then investigated the relation between throwing accuracy, catching performance and catching strategy. The results show that an increased throwing accuracy is significantly correlated with an increased catching performance. Moreover, a higher throwing accuracy is significantly correlated with a longer duration of the hand on the ball's parabola, which indicates that throwing the ball more accurately could enable the children to effectively reduce the requirements on temporal precision. As the children's catching performance and their choice of catching strategy in the ball-catching task of the MABC-2 are substantially determined by their throwing accuracy, the test evaluation should not be based on the movement outcome alone, but should also take into account the children's throwing performance. 
Our findings could be of particular value for the

  14. Pléiades Project: Assessment of Georeferencing Accuracy, Image Quality, Pansharpening Performance and DSM/DTM Quality

    NASA Astrophysics Data System (ADS)

    Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha

    2016-06-01

Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first European satellites with sub-metre resolution. Airbus DS (formerly Astrium Geo) runs the MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality are investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y and ±1.82 m in Z were reached in spite of the very narrow angle of convergence by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39 and 0.46 for the triplet panchromatic images, indicating satisfying image quality. The SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of the Pléiades images. The pansharpened images were generated by various methods, and are validated by most common statistical

  15. Analysis of accuracy of digital elevation models created from captured data by digital photogrammetry method

    NASA Astrophysics Data System (ADS)

    Hudec, P.

    2011-12-01

A digital elevation model (DEM) is an important part of many geoinformatic applications. For the creation of a DEM, spatial data collected by geodetic measurements in the field, photogrammetric processing of aerial survey photographs, laser scanning and secondary sources (analogue maps) are used. From a user's point of view, it is very important to know the vertical accuracy of a DEM. The article describes the verification, against geodetic measurements in the field, of the vertical accuracy of a DEM of the Medzibodrožie region, which was created using digital photogrammetry for the purposes of water resources management and the modeling and resolution of flood cases.

  16. Assessment of User Home Location Geoinference Methods

    SciTech Connect

    Harrison, Joshua J.; Bell, Eric B.; Corley, Courtney D.; Dowling, Chase P.; Cowell, Andrew J.

    2015-05-29

    This study presents an assessment of multiple approaches to determine the home and/or other important locations to a Twitter user. In this study, we present a unique approach to the problem of geotagged data sparsity in social media when performing geoinferencing tasks. Given the sparsity of explicitly geotagged Twitter data, the ability to perform accurate and reliable user geolocation from a limited number of geotagged posts has proven to be quite useful. In our survey, we have achieved accuracy rates of over 86% in matching Twitter user profile locations with their inferred home locations derived from geotagged posts.

  17. Assessing the impacts of precipitation bias on distributed hydrologic model calibration and prediction accuracy

    NASA Astrophysics Data System (ADS)

    Looper, Jonathan P.; Vieux, Baxter E.; Moreno, Maria A.

    2012-02-01

Physics-based distributed (PBD) hydrologic models predict runoff throughout a basin using the laws of conservation of mass and momentum, and benefit from more accurate and representative precipitation input. V flo™ is a gridded distributed hydrologic model that predicts runoff and continuously updates soil moisture. As a participating model in the second Distributed Model Intercomparison Project (DMIP2), V flo™ is applied to the Illinois and Blue River basins in Oklahoma. Model parameters are derived from geospatial data for initial setup, and then adjusted to reproduce the observed flow under continuous time-series simulations and on an event basis. Simulation results demonstrate that certain runoff events are governed by saturation excess processes, while in others, infiltration-rate excess processes dominate. Streamflow prediction accuracy is enhanced when multi-sensor precipitation estimates (MPE) are bias corrected through re-analysis of the MPE provided in the DMIP2 experiment, resulting in gauge-corrected precipitation estimates (GCPE). Model calibration identified a set of parameters that minimized objective functions for errors in runoff volume and instantaneous discharge. Simulated streamflow for the Blue and Illinois River basins have Nash-Sutcliffe efficiency coefficients between 0.61 and 0.68, respectively, for the 1996-2002 period using GCPE. The streamflow prediction accuracy improves by 74% in terms of Nash-Sutcliffe efficiency when GCPE is used during the calibration period. Without model calibration, excellent agreement between hourly simulated and observed discharge is obtained for the Illinois, whereas in the Blue River, adjustment of parameters affecting both saturation and infiltration-rate excess processes was necessary. During the 1996-2002 period, GCPE input was more important than model calibration for the Blue River, while model calibration proved more important for the Illinois River. During the verification period (2002
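The Nash-Sutcliffe efficiency used above to score streamflow predictions has a simple closed form; a generic sketch (not the DMIP2 or V flo™ code):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    predicts no better than the mean of the observations."""
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    return float(1.0 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))
```

Because the denominator is the variance of the observations, NSE can go negative, flagging a model that is worse than simply reporting the observed mean flow.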

  18. Assessing Accuracy of Exchange-Correlation Functionals for the Description of Atomic Excited States

    NASA Astrophysics Data System (ADS)

    Makowski, Marcin; Hanas, Martyna

    2016-09-01

    The performance of exchange-correlation functionals for the description of atomic excitations is investigated. A benchmark set of excited states is constructed and experimental data are compared to Time-Dependent Density Functional Theory (TDDFT) calculations. The benchmark results show that for the selected group of functionals good accuracy may be achieved, and that the quality of the predictions is competitive with computationally more demanding coupled-cluster approaches. Beyond testing the standard TDDFT approaches, some insight is also given into the role of the self-interaction error that plagues DFT calculations and into the adiabatic approximation to the exchange-correlation kernels.

  19. Assessment of the accuracy of infrared and electromagnetic navigation using an industrial robot: Which factors are influencing the accuracy of navigation?

    PubMed

    Liodakis, Emmanouil; Chu, Kongfai; Westphal, Ralf; Krettek, Christian; Citak, Musa; Gosling, Thomas; Kenawey, Mohamed

    2011-10-01

    Our objectives were to detect factors that influence the accuracy of surgical navigation (magnitude of deformity, plane of deformity, position of the navigation bases) and to compare the accuracy of infrared with electromagnetic navigation. Human cadaveric femora were used. A robot connected to a computer moved one of the bony fragments in a desired direction. The bases of the infrared navigation system (BrainLab) and the receivers of the electromagnetic device (Fastrak, Polhemus) were attached to the proximal and distal parts of the bone. For the first part of the study, deformities were classified into eight groups (e.g., 0° to 5°). For the second part, the bases were initially placed near the osteotomy and then far away. The mean absolute differences between both navigation system measurements and the robotic angles were significantly affected by the magnitude of angulation, with better accuracy for smaller angulations (p < 0.001). The accuracy of infrared navigation was significantly better in the frontal and sagittal planes. Changing the position of the navigation bases near and far away from the deformity apex had no significant effect on the accuracy of infrared navigation; however, it influenced the accuracy of electromagnetic navigation in the frontal plane (p < 0.001). In conclusion, the use of infrared navigation systems for corrections of small angulation-deformities in the frontal or sagittal plane provides the most accurate results, irrespective of the positioning of the navigation bases.

  20. Presentation accuracy of the web revisited: animation methods in the HTML5 era.

    PubMed

    Garaizar, Pablo; Vadillo, Miguel A; López-de-Ipiña, Diego

    2014-01-01

    Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of using the Web for these objectives. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about presentation accuracy and precision of the Web and extends the study of the accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision in the presentation of visual content in classic web technologies is acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes their use preferable to that of more obsolete technologies.

  1. Accuracy of plant specimen disease severity estimates: concepts, history, methods, ramifications and challenges for the future

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Knowledge of the extent of the symptoms of a plant disease, generally referred to as severity, is key to both fundamental and applied aspects of plant pathology. Most commonly, severity is obtained visually and the accuracy of each estimate (closeness to the actual value) by individual raters is par...

  4. Assessment methods for the evaluation of vitiligo.

    PubMed

    Alghamdi, K M; Kumar, A; Taïeb, A; Ezzedine, K

    2012-12-01

    There is no standardized method for assessing vitiligo. In this article, we review the literature from 1981 to 2011 on different vitiligo assessment methods. We aim to classify the techniques available for vitiligo assessment as subjective, semi-objective or objective; microscopic or macroscopic; and as based on morphometry or colorimetry. Macroscopic morphological measurements include visual assessment, photography in natural or ultraviolet light, photography with computerized image analysis and tristimulus colorimetry or spectrophotometry. Non-invasive micromorphological methods include confocal laser microscopy (CLM). Subjective methods include clinical evaluation by a dermatologist and a vitiligo disease activity score. Semi-objective methods include the Vitiligo Area Scoring Index (VASI) and point-counting methods. Objective methods include software-based image analysis, tristimulus colorimetry, spectrophotometry and CLM. Morphometry is the measurement of the vitiliginous surface area, whereas colorimetry quantitatively analyses skin colour changes caused by erythema or pigment. Most methods involve morphometry, except for the chromameter method, which assesses colorimetry. Some image analysis software programs can assess both morphometry and colorimetry. The details of these programs (Corel Draw, Image Pro Plus, AutoCad and Photoshop) are discussed in the review. Reflectance confocal microscopy provides real-time images and has great potential for the non-invasive assessment of pigmentary lesions. In conclusion, there is no single best method for assessing vitiligo. This review revealed that VASI, the rule of nine and Wood's lamp are likely to be the best techniques available for assessing the degree of pigmentary lesions and measuring the extent and progression of vitiligo in the clinic and in clinical trials. PMID:22416879
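
As a rough illustration of the semi-objective VASI scoring mentioned above, the sketch below sums hand units weighted by residual depigmentation; the grade set and the patient values are illustrative assumptions, not clinical data:

```python
# Residual-depigmentation grades commonly allowed in VASI scoring
# (an assumption of this sketch, not a normative list).
VASI_GRADES = {0.0, 0.10, 0.25, 0.50, 0.75, 0.90, 1.0}

def vasi(regions):
    """regions: list of (hand_units, depigmentation_fraction) tuples,
    one per body region; one hand unit is roughly 1% of body surface."""
    for units, frac in regions:
        if frac not in VASI_GRADES:
            raise ValueError(f"non-standard depigmentation grade: {frac}")
    return sum(units * frac for units, frac in regions)

# Hypothetical patient: 3 hand units at 75% and 5 hand units at 25%.
print(vasi([(3, 0.75), (5, 0.25)]))  # 3*0.75 + 5*0.25 = 3.5
```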

  6. Assessment of mesoscopic particle-based methods in microfluidic geometries

    NASA Astrophysics Data System (ADS)

    Zhao, Tongyang; Wang, Xiaogong; Jiang, Lei; Larson, Ronald G.

    2013-08-01

    We assess the accuracy and efficiency of two particle-based mesoscopic simulation methods, namely, Dissipative Particle Dynamics (DPD) and Stochastic Rotation Dynamics (SRD) for predicting a complex flow in a microfluidic geometry. Since both DPD and SRD use soft or weakly interacting particles to carry momentum, both methods contain unavoidable inertial effects and unphysically high fluid compressibility. To assess these effects, we compare the predictions of DPD and SRD for both an exact Stokes-flow solution and nearly exact solutions at finite Reynolds numbers from the finite element method for flow in a straight channel with periodic slip boundary conditions. This flow represents a periodic electro-osmotic flow, which is a complex flow with an analytical solution for zero Reynolds number. We find that SRD is roughly ten-fold faster than DPD in predicting the flow field, with better accuracy at low Reynolds numbers. However, SRD has more severe problems with compressibility effects than does DPD, which limits the Reynolds numbers attainable in SRD to around 25-50, while DPD can achieve Re higher than this before compressibility effects become too large. However, since the SRD method runs much faster than DPD does, we can afford to enlarge the number of grid cells in SRD to reduce the fluid compressibility at high Reynolds number. Our simulations provide a method to estimate the range of conditions for which SRD or DPD is preferable for mesoscopic simulations.

  7. Accuracy Assessment of Geostationary-Earth-Orbit with Simplified Perturbations Models

    NASA Astrophysics Data System (ADS)

    Ma, Lihua; Xu, Xiaojun; Pang, Feng

    2016-06-01

    A two-line element set (TLE) is a data format encoding orbital elements of an Earth-orbiting object for a given epoch. With a suitable prediction formula, the motion state of the object can be obtained at any time. The TLE data representation is specific to the simplified perturbations models, so any algorithm using a TLE as a data source must implement one of these models to correctly compute the state at a specific time. Accurate adjustment of the antenna direction at an earth station is key to satellite communications. With the TLE set, topocentric elevation and azimuth direction angles can be calculated. The accuracy of the perturbations models directly affects communication signal quality; therefore, finding the error variations of the satellite orbits is meaningful. In this paper, the authors investigate the accuracy of the geostationary Earth orbit (GEO) with simplified perturbations models. The coordinate residuals of the simplified perturbations models in this paper can serve as references for engineers predicting satellite orbits with TLEs.
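
The topocentric elevation and azimuth angles mentioned above can be sketched as the final step after a TLE propagator has produced a satellite position; this minimal version assumes ECEF vectors in kilometres in a common frame and a spherical-Earth station, and is not the SGP4 model itself:

```python
import math

def topocentric_az_el(station_lat_deg, station_lon_deg, station_ecef, sat_ecef):
    """Azimuth/elevation of a satellite as seen from a ground station,
    given ECEF position vectors (km) for the station and the satellite."""
    lat = math.radians(station_lat_deg)
    lon = math.radians(station_lon_deg)
    rx, ry, rz = (s - t for s, t in zip(sat_ecef, station_ecef))
    # Rotate the range vector into the local east-north-up (ENU) frame.
    east = -math.sin(lon) * rx + math.cos(lon) * ry
    north = (-math.sin(lat) * math.cos(lon) * rx
             - math.sin(lat) * math.sin(lon) * ry + math.cos(lat) * rz)
    up = (math.cos(lat) * math.cos(lon) * rx
          + math.cos(lat) * math.sin(lon) * ry + math.sin(lat) * rz)
    azimuth = math.degrees(math.atan2(east, north)) % 360.0
    elevation = math.degrees(math.asin(up / math.hypot(east, north, up)))
    return azimuth, elevation

# Example: a GEO satellite directly overhead an equatorial station (km, ECEF):
az, el = topocentric_az_el(0.0, 0.0, (6378.0, 0.0, 0.0), (42164.0, 0.0, 0.0))
print(round(az, 1), round(el, 1))  # 0.0 90.0
```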

  8. Group-Item and Directed Scanning: Examining Preschoolers' Accuracy and Efficiency in Two Augmentative Communication Symbol Selection Methods

    ERIC Educational Resources Information Center

    White, Aubrey Randall; Carney, Edward; Reichle, Joe

    2010-01-01

    Purpose: The current investigation compared directed scanning and group-item scanning among typically developing 4-year-old children. Of specific interest were their accuracy, selection speed, and efficiency of cursor movement in selecting colored line drawn symbols representing object vocabulary. Method: Twelve 4-year-olds made selections in both…

  9. On the accuracy of density functional theory and wave function methods for calculating vertical ionization energies

    SciTech Connect

    McKechnie, Scott; Booth, George H.; Cohen, Aron J.; Cole, Jacqueline M.

    2015-05-21

    The best practice in computational methods for determining vertical ionization energies (VIEs) is assessed, via reference to experimentally determined VIEs that are corroborated by highly accurate coupled-cluster calculations. These reference values are used to benchmark the performance of density functional theory (DFT) and wave function methods: Hartree-Fock theory, second-order Møller-Plesset perturbation theory, and Electron Propagator Theory (EPT). The core test set consists of 147 small molecules. An extended set of six larger molecules, from benzene to hexacene, is also considered to investigate the dependence of the results on molecule size. The closest agreement with experiment is found for ionization energies obtained from total energy difference calculations. In particular, DFT calculations using exchange-correlation functionals with either a large amount of exact exchange or long-range correction perform best. The results from these functionals are also the least sensitive to an increase in molecule size. In general, ionization energies calculated directly from the orbital energies of the neutral species are less accurate and more sensitive to an increase in molecule size. For the single-calculation approach, the EPT calculations are in closest agreement for both sets of molecules. For the orbital energies from DFT functionals, only those with long-range correction give quantitative agreement with dramatic failing for all other functionals considered. The results offer a practical hierarchy of approximations for the calculation of vertical ionization energies. In addition, the experimental and computational reference values can be used as a standardized set of benchmarks, against which other approximate methods can be compared.
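
The total-energy-difference (ΔSCF) approach that performs best here amounts to subtracting two separately converged total energies at the fixed neutral geometry; the energies in the example are hypothetical placeholders, not benchmark values:

```python
HARTREE_TO_EV = 27.211386  # CODATA conversion factor

def vertical_ie_delta_scf(e_neutral_hartree, e_cation_hartree):
    """Vertical ionization energy from a total-energy difference:
    both energies are evaluated at the neutral geometry (no relaxation)."""
    return (e_cation_hartree - e_neutral_hartree) * HARTREE_TO_EV

# Hypothetical total energies for a small molecule (hartree):
print(round(vertical_ie_delta_scf(-76.400, -75.950), 3))  # 0.45 Ha -> ~12.245 eV
```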

  11. Assessment of accuracy of adopted centre of mass corrections for the Etalon geodetic satellites

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Dunn, Peter; Otsubo, Toshimichi; Rodriguez, Jose

    2016-04-01

    Accurate centre-of-mass corrections are key parameters in the analysis of satellite laser ranging observations. In order to meet current accuracy requirements, the vector from the reflection point of a laser retroreflector array to the centre of mass of the orbiting spacecraft must be known with mm-level accuracy. In general, the centre-of-mass correction will be dependent on the characteristics of the target (geometry, construction materials, type of retroreflectors), the hardware employed by the tracking station (laser system, detector type), the intensity of the returned laser pulses, and the post-processing strategy employed to reduce the observations [1]. For the geodetic targets used by the ILRS to produce the SLR contribution to the ITRF, the LAGEOS and Etalon satellite pairs, there are centre-of-mass correction tables available for each tracking station [2]. These values are based on theoretical considerations, empirical determination of the optical response functions of each satellite, and knowledge of the tracking technology and return intensity employed [1]. Here we present results that call into question the accuracy of some of the current values for the centre-of-mass corrections of the Etalon satellites. We have computed weekly reference frame solutions using LAGEOS and Etalon observations for the period 1996-2014, estimating range bias parameters for each satellite type along with station coordinates. Analysis of the range bias time series reveals an unexplained, cm-level positive bias for the Etalon satellites in the case of most stations operating at high energy return levels. The time series of tracking stations that have undergone a transition between different modes of operation provide evidence pointing to inadequate centre-of-mass modelling. [1] Otsubo, T., and G.M. Appleby, System-dependent centre-of-mass correction for spherical geodetic satellites, J. Geophys. Res., 108(B4), 2201, 2003 [2] Appleby, G.M., and T. Otsubo, Centre of Mass

  12. Accuracy assessment of photogrammetric digital elevation models generated for the Schultz Fire burn area

    NASA Astrophysics Data System (ADS)

    Muise, Danna K.

    This paper evaluates the accuracy of two digital photogrammetric software programs (ERDAS Imagine LPS and PCI Geomatica OrthoEngine) with respect to high-resolution terrain modeling in a complex topographic setting affected by fire and flooding. The site investigated is the 2010 Schultz Fire burn area, situated on the eastern edge of the San Francisco Peaks approximately 10 km northeast of Flagstaff, Arizona. Here, the fire coupled with monsoon rains typical of northern Arizona drastically altered the terrain of the steep mountainous slopes and residential areas below the burn area. To quantify these changes, high resolution (1 m and 3 m) digital elevation models (DEMs) were generated of the burn area using color stereoscopic aerial photographs taken at a scale of approximately 1:12000. Using a combination of pre-marked and post-marked ground control points (GCPs), I first used ERDAS Imagine LPS to generate a 3 m DEM covering 8365 ha of the affected area. This DEM was then compared to a reference DEM (USGS 10 m) to evaluate its accuracy. Findings were then divided into blunders (errors) and bias (slight differences) and further analyzed to determine whether different factors (elevation, slope, aspect and burn severity) affected the accuracy of the DEM. Results indicated that both blunders and bias increased with an increase in slope, elevation and burn severity. It was also found that south-facing slopes contained the highest amount of bias while north-facing slopes contained the highest proportion of blunders. Further investigations compared a 1 m DEM generated using ERDAS Imagine LPS with a 1 m DEM generated using PCI Geomatica OrthoEngine for a specific region of the burn area. This area was limited to the overlap of two images due to OrthoEngine requiring at least three GCPs to be located in the overlap of the imagery. Results indicated that although LPS produced a less accurate DEM, it was much more flexible than OrthoEngine. It was also
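
The division of DEM differences into bias and blunders described above can be sketched as simple thresholding of residuals; the 10 m cutoff and the elevation values are illustrative assumptions, not the thresholds used in the study:

```python
def classify_dem_residuals(dem_a, dem_b, blunder_threshold_m=10.0):
    """Difference two co-registered elevation samples and split the
    residuals into 'bias' (slight differences) and 'blunders' (gross
    errors beyond a threshold). The threshold here is an assumption;
    an operational study would derive it from the data."""
    bias, blunders = [], []
    for za, zb in zip(dem_a, dem_b):
        residual = za - zb
        (blunders if abs(residual) > blunder_threshold_m else bias).append(residual)
    return bias, blunders

# Hypothetical elevations (m) from the test DEM and the reference DEM:
bias, blunders = classify_dem_residuals(
    [100.0, 205.0, 310.0, 455.0], [101.5, 204.0, 295.0, 454.0])
print(len(bias), len(blunders))  # 3 1
```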

  13. A SUB-PIXEL ACCURACY ASSESSMENT FRAMEWORK FOR DETERMINING LANDSAT TM DERIVED IMPERVIOUS SURFACE ESTIMATES.

    EPA Science Inventory

    The amount of impervious surface in a watershed is a landscape indicator integrating a number of concurrent interactions that influence a watershed's hydrology. Remote sensing data and techniques are viable tools to assess anthropogenic impervious surfaces. However a fundamental ...

  14. Assessment of surgical wounds in the home health patient: definitions and accuracy with OASIS-C.

    PubMed

    Trexler, Rhonda A

    2011-10-01

    The number of surgical patients receiving home care continues to grow as hospitals discharge patients sooner. Home health clinicians must gain knowledge of the wound healing stages and surgical wound classification to collect accurate data in the Outcome and Assessment Information Set-C (OASIS-C). This article provides the information clinicians need to accurately assess surgical wounds and implement best practices for improving surgical wounds in the home health patient.

  15. Assessment of the labelling accuracy of Spanish semipreserved anchovy products by FINS (forensically informative nucleotide sequencing).

    PubMed

    Velasco, Amaya; Aldrey, Anxela; Pérez-Martín, Ricardo I; Sotelo, Carmen G

    2016-06-01

    Anchovies have been traditionally captured and processed for human consumption for millennia. In Spain, ripened and salted anchovies are a delicacy which, in some cases, can reach high commercial value. Although there have been a number of studies presenting DNA methodologies for the identification of anchovies, this is one of the first studies investigating the level of mislabelling of this kind of product in Europe. Sixty-three commercial semipreserved anchovy products were collected in different types of food markets in four Spanish cities to check labelling accuracy. Species determination in these commercial products was performed by sequencing two different cyt-b mitochondrial DNA fragments. Results revealed mislabelling levels higher than 15%, which the authors consider relatively high given the importance of the product. The most frequent substitute species was the Argentine anchovy, Engraulis anchoita, which can be interpreted as an economic fraud.

  16. Mathematical accuracy of Aztec land surveys assessed from records in the Codex Vergara.

    PubMed

    Jorge, María del Carmen; Williams, Barbara J; Garza-Hume, C E; Olvera, Arturo

    2011-09-13

    Land surveying in ancient states is documented not only for Eurasia but also for the Americas, amply attested by two Acolhua-Aztec pictorial manuscripts from the Valley of Mexico. The Codex Vergara and the Códice de Santa María Asunción consist of hundreds of drawings of agricultural fields that uniquely record surface areas as well as perimeter measurements. A previous study of the Codex Vergara examines how Acolhua-Aztecs determined field area by reconstructing their calculation procedures. Here we evaluate the accuracy of their area values using modern mathematics. The findings verify the overall mathematical validity of the codex records. Three-quarters of the areas are within 5% of the maximum possible value, and 85% are within 10%, which compares well with reported errors by Western surveyors that postdate Aztec-Acolhua work by several centuries. PMID:21876138
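
The within-5% and within-10% tallies reported above amount to a relative-deviation check of each recorded area against the maximum area its perimeter could enclose; the field records below are invented for illustration:

```python
def within_tolerance(recorded_area, max_possible_area, tolerance):
    """Check a recorded field area against the maximum area the
    measured perimeter could enclose, as a relative deviation."""
    return abs(recorded_area - max_possible_area) / max_possible_area <= tolerance

# Hypothetical codex records: (recorded, maximum possible) in square units.
fields = [(980, 1000), (1040, 1000), (850, 1000), (995, 1000)]
share_within_5pct = sum(within_tolerance(r, m, 0.05) for r, m in fields) / len(fields)
print(share_within_5pct)  # 0.75
```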

  18. The short- to medium-term predictive accuracy of static and dynamic risk assessment measures in a secure forensic hospital.

    PubMed

    Chu, Chi Meng; Thomas, Stuart D M; Ogloff, James R P; Daffern, Michael

    2013-04-01

    Although violence risk assessment knowledge and practice has advanced over the past few decades, it remains practically difficult to decide which measures clinicians should use to assess and make decisions about the violence potential of individuals on an ongoing basis, particularly in the short to medium term. Within this context, this study sought to compare the predictive accuracy of dynamic risk assessment measures for violence with static risk assessment measures over the short term (up to 1 month) and medium term (up to 6 months) in a forensic psychiatric inpatient setting. Results showed that dynamic measures were generally more accurate than static measures for short- to medium-term predictions of inpatient aggression. These findings highlight the necessity of using risk assessment measures that are sensitive to important clinical risk state variables to improve the short- to medium-term prediction of aggression within the forensic inpatient setting. Such knowledge can assist with the development of more accurate and efficient risk assessment procedures, including the selection of appropriate risk assessment instruments to manage and prevent the violence of offenders with mental illnesses during inpatient treatment.

  19. Assessment of the relationship between lesion segmentation accuracy and computer-aided diagnosis scheme performance

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Pu, Jiantao; Park, Sang Cheol; Zuley, Margarita; Gur, David

    2008-03-01

    In this study we randomly selected 250 malignant and 250 benign mass regions as a training dataset. The boundary contours of these regions were manually identified and marked. Twelve image features were computed for each region. An artificial neural network (ANN) was trained as a classifier. To select a specific testing dataset, we applied a topographic multi-layer region growth algorithm to detect boundary contours of 1,903 mass regions in an initial pool of testing regions. All processed regions were sorted based on a size difference ratio between manual and automated segmentation. We selected a testing dataset involving 250 malignant and 250 benign mass regions with larger size difference ratios. Using the area under the ROC curve (A_z value) as the performance index, we investigated the relationship between the accuracy of mass segmentation and the performance of a computer-aided diagnosis (CAD) scheme. CAD performance degrades as the size difference ratio increases. We then developed and tested a hybrid region growth algorithm that combined the topographic region growth with an active contour approach. In this hybrid algorithm, the boundary contour detected by the topographic region growth is used as the initial contour of the active contour algorithm. The algorithm iteratively searches for the optimal region boundaries. A CAD likelihood score of the growth region being a true-positive mass is computed in each iteration. The region growth is automatically terminated once the first maximum CAD score is reached. This hybrid region growth algorithm reduces the size difference ratios between the two areas segmented automatically and manually to less than ±15% for all testing regions, and the testing A_z value increases from 0.63 to 0.90. The results indicate that CAD performance heavily depends on the accuracy of mass segmentation. In order to achieve robust CAD performance, reducing lesion segmentation error is important.
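
The A_z value used above as the performance index can be computed without an explicit ROC curve via the Mann-Whitney formulation; the CAD likelihood scores below are hypothetical:

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """Area under the ROC curve (A_z) as the probability that a
    malignant (positive) case scores above a benign (negative) one,
    counting ties as one half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical CAD likelihood scores:
malignant = [0.9, 0.8, 0.7, 0.6]
benign = [0.5, 0.4, 0.7, 0.2]
print(auc_mann_whitney(malignant, benign))  # 0.90625
```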

  20. SherLoc2: a high-accuracy hybrid method for predicting subcellular localization of proteins.

    PubMed

    Briesemeister, Sebastian; Blum, Torsten; Brady, Scott; Lam, Yin; Kohlbacher, Oliver; Shatkay, Hagit

    2009-11-01

    SherLoc2 is a comprehensive high-accuracy subcellular localization prediction system. It is applicable to animal, fungal, and plant proteins and covers all main eukaryotic subcellular locations. SherLoc2 integrates several sequence-based features as well as text-based features. In addition, we incorporate phylogenetic profiles and Gene Ontology (GO) terms derived from the protein sequence to considerably improve the prediction performance. SherLoc2 achieves an overall classification accuracy of up to 93% in 5-fold cross-validation. A novel feature, DiaLoc, allows users to manually provide their current background knowledge by describing a protein in a short abstract which is then used to improve the prediction. SherLoc2 is available both as a free Web service and as a stand-alone version at http://www-bs.informatik.uni-tuebingen.de/Services/SherLoc2.

  1. Accuracy of Prediction Method of Cryogenic Tensile Strength for Austenitic Stainless Steels in ITER Toroidal Field Coil Structure

    NASA Astrophysics Data System (ADS)

    Sakurai, Takeru; Icuchi, Masahide; Nakahira, Masatake; Saito, Toru; Morimoto, Masaaki; Inagaki, Takashi; Hong, Yunseok; Matsui, Kunihiro; Hemmi, Tsutomu; Kajitani, Hideki; Koizumi, Norikiyo

    The Japan Atomic Energy Agency (JAEA) has developed a method for predicting yield stress and ultimate tensile strength at liquid helium temperature (4 K) using a quadratic curve as a function of carbon and nitrogen content. The method was formulated from tensile-strength data for materials of rectangular shape. In this study, the tensile strengths of round-bar and complex-shape forged materials were measured and compared with the predicted values. When applied to round-bar forged materials, the accuracy of the prediction method was 10.2% for yield strength (YS) and 2.5% for ultimate tensile strength (UTS); when applied to complex-shape forged materials, it was 1.8% for YS and -0.8% for UTS. The method can thus estimate the tensile-strength tendency of materials other than rectangular shapes, although the error for round bars was found to be larger than for the other materials because of differences in the forging method. "The views and opinions expressed herein do not necessarily reflect those of the ITER Organization."
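    A quadratic prediction curve of the kind described can be sketched as follows. The coefficients below are illustrative placeholders, not JAEA's fitted values:

```python
def predict_4k_strength(c_wt, n_wt, coeffs=(500.0, 4000.0, -2000.0)):
    """Quadratic prediction of a 4 K strength value (MPa) as a function of
    combined carbon + nitrogen content (wt%). The coefficients (a0, a1, a2)
    are hypothetical placeholders for the fitted curve."""
    x = c_wt + n_wt
    a0, a1, a2 = coeffs
    return a0 + a1 * x + a2 * x ** 2


def relative_error_pct(predicted, measured):
    """Prediction accuracy as a signed relative error in percent,
    matching how the abstract reports accuracy figures."""
    return (predicted - measured) / measured * 100.0
```

    For example, with the placeholder coefficients, `predict_4k_strength(0.01, 0.09)` evaluates the curve at a combined C+N content of 0.10 wt%.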

  2. Accuracy of Orthomosaic Generated by Different Methods in Example of UAV Platform MUST Q

    NASA Astrophysics Data System (ADS)

    Liba, N.; Berg-Jürgens, J.

    2015-11-01

    The development of photogrammetry has reached a new level due to the use of unmanned aerial vehicles (UAVs). In Estonia, the main uses of UAVs are monitoring overhead power lines for energy companies, monitoring agricultural fields, and estimating stockpile volumes in mining. The project was carried out at the order of the City of Tartu for future road construction. This research studies the automation of aerial image processing from the UAV platform MUST Q and the reduction of time spent on ground control points (GCPs). Two projects were created with the software Pix4D: the first was processed fully automatically without GCPs; the second used GCPs, but all other processing was done automatically. As a result, two orthomosaics with a pixel size of 5 cm were composed, against an accuracy limit of three times the pixel size. The most accurate project was the one using ground control points for levelling: it remained within the allowed error limit, with an orthomosaic accuracy of 0.132 m. The project without ground control points had an accuracy of 1.417 m.
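    The stated accuracy criterion is simple arithmetic and can be checked directly against the figures reported in the abstract:

```python
# Check the reported orthomosaic accuracies against the stated limit of
# three times the pixel size (all values taken from the abstract).
pixel_size = 0.05                  # m (5 cm ground sampling distance)
tolerance = 3 * pixel_size         # allowed error limit: 0.15 m
accuracy_with_gcp = 0.132          # m, project using ground control points
accuracy_without_gcp = 1.417       # m, fully automatic project without GCPs

gcp_within_limit = accuracy_with_gcp <= tolerance        # True
no_gcp_within_limit = accuracy_without_gcp <= tolerance  # False
```

    Only the GCP-based project meets the 0.15 m limit, consistent with the abstract's conclusion.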

  3. Fixation method with high-orientation accuracy for optical terminals in space

    NASA Astrophysics Data System (ADS)

    Bauer, Dietrich; Lober, K.; Seeliger, Reinhard

    1991-12-01

    The LEO terminal of the ESA project SILEX (Semiconductor Intersatellite Link EXperiment) must provide high pointing accuracy to be able to establish an optical inter-orbit communication link. An alignment accuracy of 0.01° is required at the interface to the host satellite, including the interface structure of the terminal, to provide the open-loop pointing accuracy necessary for acquisition of the counter terminal. Perturbations during launch, thermoelastic deformation, and long-term shrinkage of the interface structure material (CFRP) are the determining sources of misalignment. The critical tilting of the interface structure can be reduced by means of slots for the fixation bolts, allowing translational movement at the interface fixation points in the proper directions. Movements perpendicular to the selected direction (e.g., from thermoelastic deformation and vibration during launch) must be strictly suppressed to avoid tilting in the plane of the attachment points at the satellite-to-terminal interface. Interface adapters at each connection are used to overcome the problem of integrating the terminal with the tightly toleranced fixation slots on the host satellite. The side of each adapter fixed to the terminal interface structure provides the tightly toleranced slots; the other side provides holes with sufficiently large tolerances for the bolts used to fix the terminal to the host satellite. These bolts are designed to hold the terminal rigidly by friction under all environmental conditions.

  4. Assessing weight perception accuracy to promote weight loss among U.S. female adolescents: A secondary analysis

    PubMed Central

    2010-01-01

    Background Overweight and obesity have become a global epidemic. The prevalence of overweight and obesity among U.S. adolescents has almost tripled in the last 30 years. Results from recent systematic reviews demonstrate that no single intervention or strategy successfully assists overweight or obese adolescents in losing weight. An understanding of the factors that influence healthy weight-loss behaviors among overweight and obese female adolescents would inform effective, multi-component weight-loss interventions. There is limited evidence of associations between demographic variables, body-mass index, and weight perception among female adolescents trying to lose weight, and a lack of previous studies examining the association between the accuracy of female adolescents' weight perception and their efforts to lose weight. This study therefore examined the associations of body-mass index, weight perception, and weight-perception accuracy with trying to lose weight and engaging in exercise as a weight-loss method among a representative sample of U.S. female adolescents. Methods A nonexperimental, descriptive, comparative secondary analysis was conducted using data from Wave II (1996) of the National Longitudinal Study of Adolescent Health (Add Health). Data representative of U.S. female adolescents (N = 2216) were analyzed using STATA statistical software. Descriptive statistics and survey-weighted logistic regression were performed to determine whether demographic and independent variables (body-mass index, weight perception, and weight-perception accuracy) were associated with trying to lose weight and engaging in exercise as a weight-loss method. Results Age, Black or African American race, body-mass index, weight perception, and weight-perception accuracy were consistently associated with the likelihood of trying to lose weight among U.S. female adolescents. Age, body-mass index, weight perception, and weight-perception accuracy were

  5. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique, by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated from a selected standard master model using conventional impressions, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated along the x, y, and z axes using a non-contact white light scanner. Accuracy was assessed using the mean differences between the two sets of measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6%, and intraclass correlation coefficients ranged from 0.93 to 0.96 when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models fabricated with the oral scanner and subtractive RP technology were acceptable. Because of recent improvements in block materials and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
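    The Bland-Altman analysis mentioned above computes the mean difference (bias) between paired measurements and the 95% limits of agreement. A minimal sketch, with hypothetical measurement values in the usage example:

```python
import statistics


def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two measurement methods.
    Returns (bias, (lower_limit, upper_limit)). Generic implementation,
    not the study's statistical code."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)                  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

    For example, paired measurements `[1.1, 2.2, 3.3]` vs. `[1.0, 2.0, 3.0]` give a bias of 0.2 mm with limits of agreement of roughly (0.004, 0.396) mm.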

  6. Modeling and Accuracy Assessment for 3D-VIRTUAL Reconstruction in Cultural Heritage Using Low-Cost Photogrammetry: Surveying of the "santa MARÍA Azogue" Church's Front

    NASA Astrophysics Data System (ADS)

    Robleda Prieto, G.; Pérez Ramos, A.

    2015-02-01

    It can be difficult to represent an architectural idea, a solution, a detail, or a newly created element "on paper," depending on the complexity of what is to be conveyed through its graphical representation, and it may be even harder to represent existing reality (a building, a detail, ...), at least with an acceptable degree of definition and accuracy. As a solution to this problem, this paper presents a methodology for collecting measurement data by combining different methods and techniques in order to obtain the characteristic geometry of architectural elements, especially those that are highly decorated and/or geometrically complex, and for assessing the accuracy of the results obtained, at a sufficient accuracy level and at modest cost. In addition, a 3D recovery model is obtained that provides strong support for producing orthoimages, beyond the point clouds obtained through more expensive methods such as laser scanning. This methodology was applied in the case study of the 3D virtual reconstruction of a main medieval church façade, chosen because of the geometrical complexity of many of its elements, such as the main doorway with its archivolts and many details, as well as the rose window located above it, which is inaccessible due to its height.

  7. Expanding Assessment Methods and Moments in History

    ERIC Educational Resources Information Center

    Frost, Jennifer; de Pont, Genevieve; Brailsford, Ian

    2012-01-01

    History courses at The University of Auckland are typically assessed at two or three moments during a semester. The methods used normally employ two essays and a written examination answering questions set by the lecturer. This study describes an assessment innovation in 2008 that expanded both the frequency and variety of activities completed by…

  8. Personality, Assessment Methods and Academic Performance

    ERIC Educational Resources Information Center

    Furnham, Adrian; Nuygards, Sarah; Chamorro-Premuzic, Tomas

    2013-01-01

    This study examines the relationship between personality and two different academic performance (AP) assessment methods, namely exams and coursework. It aimed to examine whether the relationship between traits and AP was consistent across self-reported versus documented exam results, two different assessment techniques and across different…

  9. [A New Method to Decline the SWC Effect on the Accuracy for Monitoring SOM with Hyperspectral Technology].

    PubMed

    Wang, Chao; Feng, Mei-chen; Yang, Wu-de; Xiao, Lu-jie; Li, Guang-xin; Zhao, Jia-jia; Ren, Peng

    2015-12-01

    Soil organic matter (SOM) is one of the most important indicators of soil fertility, and soil moisture is a main factor limiting the application of hyperspectral technology to the monitoring of soil attributes. To study the effect of soil moisture on the accuracy of SOM monitoring with hyperspectral remote sensing, and to monitor SOM quickly and accurately, SOM, soil water content (SWC), and soil spectra were measured for 151 natural soil samples from a winter wheat field. The samples were classified both by the traditional SWC classification and by the Normalized Difference Soil Moisture Index (NSMI) derived from the hyperspectral data, and the relationships among SWC, SOM, and NSMI were analyzed. The results showed that the accuracy of spectral SOM monitoring differed significantly among the classes and was higher than for the unclassified soils (5%-25% SWC). This indicates that soil moisture affects the accuracy of hyperspectral SOM monitoring, and the study showed that the most favorable soil water contents for monitoring SOM were below 10% and above 20%. In addition, four hyperspectral SOM monitoring models were constructed using the NSMI classification, and their accuracy was higher than that of the SWC classification. The NSMI-based models were validated using R², RMSE, and RPD, which showed that the four models were usable and reliable for quick and convenient hyperspectral SOM monitoring. However, the different classification approaches described in the study are inherently similar, as all soil samples could be reclassified in another way; that is, there may be another optimal classification method that overcomes and eliminates the effect of SWC on SOM monitoring accuracy. The study provides a theoretical and technical basis for monitoring SWC and SOM by remote sensing.
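    The validation statistics named in the abstract (R², RMSE, and RPD, the ratio of the standard deviation of the observations to the RMSE) can be computed as follows. This is a generic implementation, not the authors' code:

```python
import math


def validation_metrics(observed, predicted):
    """R^2, RMSE and RPD for a calibration/validation set.
    RPD = SD(observed) / RMSE, a standard chemometric figure of merit."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    rmse = math.sqrt(ss_res / n)
    sd = math.sqrt(ss_tot / (n - 1))              # sample SD of observations
    return 1 - ss_res / ss_tot, rmse, sd / rmse
```

    As a rule of thumb in soil spectroscopy, models with RPD above about 2 are commonly considered reliable for quantitative prediction.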

  10. Scientific method, adversarial system, and technology assessment

    NASA Technical Reports Server (NTRS)

    Mayo, L. H.

    1975-01-01

    A basic framework is provided for the consideration of the purposes and techniques of scientific method and adversarial systems. Similarities and differences in these two techniques of inquiry are considered with reference to their relevance in the performance of assessments.

  11. EMERGY METHODS: VALUABLE INTEGRATED ASSESSMENT TOOLS

    EPA Science Inventory

    NHEERL's Atlantic Ecology Division is investigating emergy methods as tools for integrated assessment in several projects evaluating environmental impacts, policies, and alternatives for remediation and intervention. Emergy accounting is a methodology that provides a quantitative...

  12. Methods for assessment of keel bone damage in poultry.

    PubMed

    Casey-Trott, T; Heerkens, J L T; Petrik, M; Regmi, P; Schrader, L; Toscano, M J; Widowski, T

    2015-10-01

    Keel bone damage (KBD) is a critical issue facing the laying hen industry today because of the likely pain, compromised welfare, and potential for reduced productivity it entails. Recent reports suggest that damage, while highly variable and likely dependent on a host of factors, extends to all systems (including battery cages, furnished cages, and non-cage systems), genetic lines, and management styles. Despite the extent of the problem, the research community remains uncertain as to the causes and influencing factors of KBD. Although progress has been made investigating these factors, the overall effort is hindered by several issues related to the assessment of KBD, including the quality of and variation in the methods used between research groups. These issues prevent effective comparison of studies and create difficulties in identifying the presence of damage, leading to poor accuracy and reliability. The current manuscript seeks to resolve these issues by offering precise definitions for types of KBD, reviewing methods for assessment, and providing recommendations that can improve the accuracy and reliability of those assessments. PMID:26287001

  13. Accuracy of Phenotypic Methods for Identification of Streptococcus pneumoniae Isolates Included in Surveillance Programs▿

    PubMed Central

    Richter, Sandra S.; Heilmann, Kristopher P.; Dohrn, Cassie L.; Riahi, Fathollah; Beekmann, Susan E.; Doern, Gary V.

    2008-01-01

    Similarities between Streptococcus pneumoniae and viridans group streptococci may result in misidentification of these organisms. In surveillance programs which assess antimicrobial resistance rates among respiratory tract pathogens, such identification errors could lead to overestimates of pneumococcal resistance rates. DNA probe analysis (Gen-Probe, San Diego, CA), the bile solubility test, optochin susceptibility, colony morphology, and the capsular swelling reaction with Omni serum (Staten Serum Institut, Copenhagen, Denmark) were used to characterize 1,733 organisms provisionally identified as S. pneumoniae in a 2004 to 2005 antimicrobial resistance surveillance program. These organisms were obtained in 41 U.S. medical centers. Among these, 1,647 (95%) were determined to be S. pneumoniae by DNA probe. Elimination of those isolates found not to be S. pneumoniae resulted in 1 to 2% decreases in resistance rate estimates with penicillin, erythromycin, tetracycline, and trimethoprim-sulfamethoxazole. With AccuProbe as a reference standard, the sensitivities and specificities of each phenotypic method for the identification of S. pneumoniae were, respectively, 98.8% and 82.6% for bile solubility, 99.3% and 74.4% for the capsular swelling reaction with Omni serum, and 87.9% and 59.3% for optochin susceptibility. Colony morphology was of limited value, as 391 (23.7%) isolates lacked the typical button or mucoid colony appearance of S. pneumoniae. PMID:18495854
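    Sensitivity and specificity against a reference standard are computed from confusion-matrix counts. A minimal sketch; the counts in the usage example are illustrative round numbers, not the study's raw data:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    with the DNA probe result taken as the reference standard."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```

    With hypothetical counts of 988 true positives, 12 false negatives, 826 true negatives, and 174 false positives, `sens_spec(988, 12, 826, 174)` yields the 98.8% sensitivity and 82.6% specificity reported for the bile solubility test.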

  14. Influence of River Bed Elevation Survey Configurations and Interpolation Methods on the Accuracy of LIDAR Dtm-Based River Flow Simulations

    NASA Astrophysics Data System (ADS)

    Santillan, J. R.; Serviano, J. L.; Makinano-Santillan, M.; Marqueso, J. T.

    2016-09-01

    In this paper, we investigated how survey configuration and the type of interpolation method affect the accuracy of river flow simulations that use a LIDAR DTM integrated with an interpolated river bed as their main source of topographic information. Aside from determining the accuracy of the individually generated river bed topographies, we also assessed the overall accuracy of the river flow simulations in terms of maximum flood depth and extent. Four survey configurations consisting of river bed elevation data points arranged as cross-sections (XS), zig-zag (ZZ), river banks-centerline (RBCL), and river banks-centerline-zig-zag (RBCLZZ), and two interpolation methods (Inverse Distance-Weighted and Ordinary Kriging) were considered. The major results show that the choice of survey configuration, rather than the interpolation method, has a significant effect on the accuracy of interpolated river bed surfaces, and subsequently on the accuracy of river flow simulations. The RMSEs of the interpolated surfaces and the model results vary from one configuration to another, and depend on how evenly each configuration collects river bed elevation data points. The large RMSEs for the RBCL configuration and the low RMSEs for the XS configuration confirm that as the data points become more evenly spaced and cover more of the river, the resulting interpolated surface and the river flow simulation in which it is used also become more accurate. The XS configuration with Ordinary Kriging (OK) as the interpolation method provided the best river bed interpolation and river flow simulation results. The RBCL configuration, regardless of the interpolation algorithm used, resulted in the least accurate river bed surfaces and simulation results. Based on the accuracy analysis, collecting river bed data points in the XS configuration and applying the OK method to interpolate the river bed topography are the best choices for producing satisfactory river flow simulation outputs.
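    Inverse Distance-Weighted interpolation, one of the two methods compared, can be sketched as follows. This is a minimal illustrative implementation; the study's actual GIS tooling is not specified:

```python
def idw(x, y, samples, power=2):
    """Inverse Distance-Weighted interpolation of river-bed elevation at
    point (x, y) from surveyed points given as [(xi, yi, zi), ...]."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return zi                      # query point coincides with a sample
        w = 1.0 / d2 ** (power / 2)        # weight = 1 / distance**power
        num += w * zi
        den += w
    return num / den
```

    Midway between two samples the estimate is their average: `idw(0.5, 0.0, [(0.0, 0.0, 1.0), (1.0, 0.0, 3.0)])` returns 2.0. Ordinary Kriging differs in deriving its weights from a fitted variogram rather than from distance alone.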

  15. Assessment of Geometrical Accuracy of Multimodal Images Used for Treatment Planning in Stereotactic Radiotherapy and Radiosurgery: CT, MRI and PET

    SciTech Connect

    Garcia-Garduno, O. A.; Larraga-Gutierrez, J. M.; Celis, M. A.; Suarez-Campos, J. J.; Rodriguez-Villafuerte, M.; Martinez-Davalos, A.

    2006-09-08

    An acrylic phantom was designed and constructed to assess the geometrical accuracy of CT, MRI and PET images for stereotactic radiotherapy (SRT) and radiosurgery (SRS) applications. The phantom was adapted to each imaging modality with a specific tracer, and the resulting images were compared with CT images to measure the radial deviation between the reference marks in the phantom. It was found that for MRI the maximum mean deviation is 1.9 ± 0.2 mm, compared to 2.4 ± 0.3 mm for PET. These results will be used for margin outlining in SRS and SRT treatment planning.

  16. Do Students Know What They Know? Exploring the Accuracy of Students' Self-Assessments

    ERIC Educational Resources Information Center

    Lindsey, Beth A.; Nagel, Megan L.

    2015-01-01

    We have conducted an investigation into how well students in introductory science classes (both physics and chemistry) are able to predict which questions they will or will not be able to answer correctly on an upcoming assessment. An examination of the data at the level of students' overall scores reveals results consistent with the…

  17. Exploring Writing Accuracy and Writing Complexity as Predictors of High-Stakes State Assessments

    ERIC Educational Resources Information Center

    Edman, Ellie Whitner

    2012-01-01

    The advent of No Child Left Behind led to increased teacher accountability for student performance and placed strict sanctions in place for failure to meet a certain level of performance each year. With instructional time at a premium, it is imperative that educators have brief academic assessments that accurately predict performance on…

  18. Image intensifier distortion correction for fluoroscopic RSA: the need for independent accuracy assessment.

    PubMed

    Kedgley, Angela E; Fox, Anne-Marie V; Jenkyn, Thomas R

    2012-01-01

    Fluoroscopic images suffer from multiple modes of image distortion. Therefore, the purpose of this study was to compare the effects of correction using a range of two-dimensional polynomials and a global approach. The primary measure of interest was the average error in the distances between four beads of an accuracy phantom, as measured using RSA. Secondary measures of interest were the root mean squared errors of the fit of the chosen polynomial to the grid of beads used for correction, and the errors in the corrected distances between the points of the grid in a second position. Based upon the two-dimensional measures, a polynomial of order three in the axis of correction and two in the perpendicular axis was preferred. However, based upon the RSA reconstruction, a polynomial of order three in the axis of correction and one in the perpendicular axis was preferred. The use of a calibration frame for these three-dimensional applications most likely tempers the effects of distortion. This study suggests that distortion correction should be validated for each of its applications with an independent "gold standard" phantom.
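    A polynomial distortion correction of the kind compared in this study can be fitted by least squares. The sketch below assumes the orders preferred for RSA reconstruction (three in the correction axis, one in the perpendicular axis); the bead grid in the test is synthetic, not the study's data:

```python
import numpy as np


def fit_distortion_poly(u, v, du, order_u=3, order_v=1):
    """Least-squares fit of a 2D polynomial mapping observed bead
    positions (u, v) to their measured displacements du.
    Returns a correction function for new (u, v) positions."""
    def design(uu, vv):
        # One column per monomial term u^i * v^j
        return np.column_stack([uu ** i * vv ** j
                                for i in range(order_u + 1)
                                for j in range(order_v + 1)])
    coeffs, *_ = np.linalg.lstsq(design(u, v), du, rcond=None)
    return lambda uu, vv: design(uu, vv) @ coeffs
```

    In practice the fit would be performed against a calibration grid of beads with known true positions, and validated, as the study recommends, against an independent phantom.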

  19. Accuracy Assessments of Cloud Droplet Size Retrievals from Polarized Reflectance Measurements by the Research Scanning Polarimeter

    NASA Technical Reports Server (NTRS)

    Alexandrov, Mikhail Dmitrievic; Cairns, Brian; Emde, Claudia; Ackerman, Andrew S.; vanDiedenhove, Bastiaan

    2012-01-01

    We present an algorithm for the retrieval of cloud droplet size distribution parameters (effective radius and variance) from Research Scanning Polarimeter (RSP) measurements. The RSP is an airborne prototype for the Aerosol Polarimetry Sensor (APS), which was on board the NASA Glory satellite. This instrument measures both polarized and total reflectance in 9 spectral channels with central wavelengths ranging from 410 to 2260 nm. The cloud droplet size retrievals use the polarized reflectance in the scattering angle range between 135° and 165°, where it exhibits the sharply defined structure known as the rain- or cloud-bow. The shape of the rainbow is determined mainly by the single scattering properties of cloud particles. This significantly simplifies both forward modeling and inversions, while also substantially reducing uncertainties caused by aerosol loading and the possible presence of undetected clouds nearby. In this study we present an accuracy evaluation of our algorithm based on the results of sensitivity tests performed using realistic simulated cloud radiation fields.

  20. Assessment of the accuracy of density functional theory for first principles simulations of water

    NASA Astrophysics Data System (ADS)

    Grossman, J. C.; Schwegler, E.; Draeger, E.; Gygi, F.; Galli, G.

    2004-03-01

    We present a series of Car-Parrinello (CP) molecular dynamics simulations in order to better understand the accuracy of density functional theory for calculating the properties of water [1]. Through 10 separate ab initio simulations, each with 20 ps of "production" time, a number of approximations are tested by varying the density functional employed, the fictitious electron mass μ in the CP Lagrangian, the system size, and the ionic mass M (we considered both H_2O and D_2O). We present the impact of these approximations on properties such as the radial distribution function g(r), the structure factor S(k), the diffusion coefficient, and the dipole moment. Our results show that structural properties may artificially depend on μ, and that with an accurate description of the electronic ground state, and in the absence of proton quantum effects, we obtain an oxygen-oxygen correlation function that is over-structured compared to experiment and a diffusion coefficient that is approximately 10 times smaller. ^1 J.C. Grossman et al., J. Chem. Phys. (in press, 2004).
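    The radial distribution function g(r) reported in the abstract is computed from particle positions by histogramming pair distances under the minimum-image convention. A minimal sketch for a cubic periodic box (an illustration of the standard analysis, not the authors' code):

```python
import math


def radial_distribution(positions, box, dr, r_max):
    """Histogram-based g(r) for particles in a cubic periodic box of side
    `box`. `positions` is a list of (x, y, z) tuples; bins have width dr."""
    n = len(positions)
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a, b in zip(positions[i], positions[j]):
                delta = b - a
                delta -= box * round(delta / box)   # minimum-image convention
                d2 += delta * delta
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2              # count pair both ways
    rho = n / box ** 3                              # number density
    g = []
    for k, h in enumerate(hist):
        r_lo = k * dr
        shell = 4.0 / 3.0 * math.pi * ((r_lo + dr) ** 3 - r_lo ** 3)
        g.append(h / (n * rho * shell))             # normalize by ideal gas
    return g
```

    For water, "over-structured" means the first peak of the oxygen-oxygen g(r) computed this way is too high and too narrow relative to the experimental curve.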