Science.gov

Sample records for accuracy assessment methods

  1. Survey methods for assessing land cover map accuracy

    USGS Publications Warehouse

    Nusser, S.M.; Klaas, E.E.

    2003-01-01

    The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but also on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.
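The misclassification rates such a survey estimates are conventionally summarized from a confusion matrix. As a minimal sketch (the 3-class matrix below is invented, not the Iowa GAP data, and ignores the survey weighting a stratified design would require):

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = mapped class, columns = reference class.
cm = np.array([[50,  3,  2],
               [ 4, 60,  6],
               [ 1,  5, 70]])

overall = np.trace(cm) / cm.sum()         # fraction of sites classified correctly
users = np.diag(cm) / cm.sum(axis=1)      # per mapped class: complement of commission error
producers = np.diag(cm) / cm.sum(axis=0)  # per reference class: complement of omission error
```

In a real design-based assessment each cell would be a weighted estimate rather than a raw count.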

  2. Accuracy of a semiquantitative method for Dermal Exposure Assessment (DREAM)

    PubMed Central

    van Wendel, de Joo... B; Vermeulen, R; van Hemmen, J J; Fransman, W; Kromhout, H

    2005-01-01

    Background: The authors recently developed a Dermal Exposure Assessment Method (DREAM), an observational semiquantitative method to assess dermal exposures by systematically evaluating exposure determinants using pre-assigned default values. Aim: To explore the accuracy of the DREAM method by comparing its estimates with quantitative dermal exposure measurements in several occupational settings. Methods: Occupational hygienists observed workers performing a certain task and filled in the DREAM questionnaire while the workers' exposure to chemical agents on skin or clothing was simultaneously measured quantitatively. DREAM estimates were compared with measurement data by estimating Spearman correlation coefficients for each task and for individual observations. In addition, mixed linear regression models were used to study the effect of DREAM estimates on the variability in measured exposures between tasks, between workers, and from day to day. Results: For skin exposures, Spearman correlation coefficients for individual observations ranged from 0.19 to 0.82. DREAM estimates for exposure levels on hands and forearms showed a fixed effect between and within surveys, explaining mainly between-task variance. In general, exposure levels on the clothing layer were only predicted in a meaningful way by detailed DREAM estimates, which comprised detailed information on the concentration of the agent in the formulation to which exposure occurred. Conclusions: The authors expect that the DREAM method can be successfully applied for semiquantitative dermal exposure assessment in epidemiological and occupational hygiene surveys of groups of workers with considerable contrast in dermal exposure levels (variability between groups >1.0). For surveys with less contrasting exposure levels, quantitative dermal exposure measurements are preferable. PMID:16109819
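The Spearman coefficients used above are simply Pearson correlations computed on ranks. A self-contained sketch with average ranks for ties (the data passed in the example are invented, not the DREAM measurements):

```python
def ranks(xs):
    # Assign average ranks (1-based), sharing the mean rank across ties.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Pearson correlation of the two rank vectors.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because only ranks matter, any monotonic relation between semiquantitative scores and measured exposures yields a coefficient of 1.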

  3. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  4. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  5. A novel method for assessing the 3-D orientation accuracy of inertial/magnetic sensors.

    PubMed

    Faber, Gert S; Chang, Chien-Chi; Rizun, Peter; Dennerlein, Jack T

    2013-10-18

    A novel method for assessing the accuracy of inertial/magnetic sensors is presented. The method, referred to as the "residual matrix" method, is advantageous because it decouples the sensor's error with respect to Earth's gravity vector (attitude residual error: pitch and roll) from the sensor's error with respect to magnetic north (heading residual error), while remaining insensitive to singularity problems when the second Euler rotation is close to ±90°. As a demonstration, the accuracy of an inertial/magnetic sensor mounted to a participant's forearm was evaluated during a reaching task in a laboratory. Sensor orientation was measured internally (by the inertial/magnetic sensor) and externally using an optoelectronic measurement system with a marker cluster rigidly attached to the sensor's enclosure. Roll, pitch and heading residuals were calculated using the proposed novel method, as well as using a common orientation assessment method where the residuals are defined as the difference between the Euler angles measured by the inertial sensor and those measured by the optoelectronic system. Using the proposed residual matrix method, the roll and pitch residuals remained less than 1° and, as expected, no statistically significant difference between these two measures of attitude accuracy was found; the heading residuals were significantly larger than the attitude residuals but remained below 2°. Using the direct Euler angle comparison method, the residuals were in general larger due to singularity issues, and the expected significant difference between inertial/magnetic sensor attitude and heading accuracy was not present. PMID:24016678
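The core idea can be sketched as follows (this is my reading of the abstract, not the authors' code): form the residual rotation between the reference and sensor orientations, then read the heading error and the attitude (pitch, roll) errors from that matrix. The residual rotation is small even when the absolute pitch is near ±90°, which is why the decomposition stays well behaved:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def residual_angles(R_ref, R_meas):
    """Decompose the residual rotation R_ref^T R_meas into heading (yaw)
    and attitude (pitch, roll) errors, ZYX Euler convention."""
    R = R_ref.T @ R_meas
    heading = np.arctan2(R[1, 0], R[0, 0])
    pitch = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))
    roll = np.arctan2(R[2, 1], R[2, 2])
    return heading, pitch, roll

# Example: sensor estimate differs from the optical reference by small known
# errors while the absolute pose has pitch near 90 deg, a singular pose for
# direct Euler-angle differencing.
R_err = rot_z(0.02) @ rot_y(0.01) @ rot_x(-0.005)
R_ref = rot_z(1.0) @ rot_y(np.deg2rad(89.5))
R_meas = R_ref @ R_err
h, p, r = residual_angles(R_ref, R_meas)
```

Differencing the absolute Euler angles directly would blow up at this pose; the residual-matrix route recovers the injected errors.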

  6. Classification method, spectral diversity, band combination and accuracy assessment evaluation for urban feature detection

    NASA Astrophysics Data System (ADS)

    Erener, A.

    2013-04-01

    Automatic extraction of urban features from high resolution satellite images is one of the main applications in remote sensing. It is useful for wide scale applications, namely: urban planning, urban mapping, disaster management, GIS (geographic information systems) updating, and military target detection. One common approach to detecting urban features from high resolution images is to use automatic classification methods. This paper has four main objectives with respect to detecting buildings. The first objective is to compare the performance of the most notable supervised classification algorithms, including the maximum likelihood classifier (MLC) and the support vector machine (SVM). In this experiment the primary consideration is the impact of kernel configuration on the performance of the SVM. The second objective of the study is to explore the suitability of integrating additional bands, namely the first principal component (1st PC) and the intensity image, with the original data for the classification approaches. The performance evaluation of classification results is done using two different accuracy assessment methods: pixel-based and object-based approaches, which reflect the third aim of the study. The objective here is to demonstrate the differences in the evaluation of accuracies of classification methods. For consistency, the same set of ground truth data, produced by labeling the building boundaries in the GIS environment, is used for accuracy assessment. Lastly, the fourth aim is to experimentally evaluate variation in the accuracy of classifiers for six different real situations in order to identify the impact of spatial and spectral diversity on results. The method is applied to QuickBird images for various urban complexity levels, extending from simple to complex urban patterns. The simple surface type includes a regular urban area with low density and systematic buildings with brick rooftops.
The complex surface type involves almost all
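The gap between pixel-based and object-based evaluation can be illustrated with a toy building mask (the masks below are invented, not the paper's data): pixel-based accuracy counts every pixel of the scene, while an overlap measure such as intersection-over-union scores the detected object itself:

```python
import numpy as np

# Hypothetical 6x6 scene: a reference building and a classification of it
# shifted down by one row.
truth = np.zeros((6, 6), dtype=bool)
truth[1:4, 1:4] = True
pred = np.zeros((6, 6), dtype=bool)
pred[2:5, 1:4] = True

pixel_accuracy = (truth == pred).mean()            # pixel-based: all 36 pixels, both classes
iou = (truth & pred).sum() / (truth | pred).sum()  # object-overlap score for the building
```

The large background inflates the pixel-based score (≈0.83 here) while the object-based overlap is only 0.5, which is the kind of divergence the paper's two assessment methods expose.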

  7. A method to assess the accuracy of sonotubometry for detecting Eustachian tube openings.

    PubMed

    Swarts, J Douglas; Teixeira, Miriam S; Banks, Juliane; El-Wagaa, Jenna; Doyle, William J

    2015-09-01

    Sonotubometry is a simple test for Eustachian tube (ET) opening during a maneuver. Different sonotubometry configurations were suggested to maximize test accuracy, but no method has been described for comparing sonotubometry test results with those for a definitive measure of ET opening. Here, we present such a method and exemplify its use with an accuracy assessment of a simple sonotubometry configuration. A total of 502 data-sequences from 168 test sessions in 103 adult subjects were analyzed. For each session, subjects were seated in a pressure chamber and relative middle ear over- and under-pressures created by changing chamber pressure. At each pressure, the test sequence of bilateral tympanometry, bilateral sonotubometry while the subject swallowed twice, and bilateral tympanometry was done. Tympanometric data were expressed as the fractional gradient equilibrated (FGE) by swallowing and sonotubometric signals were analyzed to record the shape of detected sound signals. Tympanometric and sonotubometric tubal opening assignments were analyzed by cross-correlation. For the data sequences with FGE = 0 (n = 32) evidencing no tubal opening and FGE = 1 (n = 249) evidencing definitive tubal opening, detection of a sonotubometry sound signal during a swallow had a sensitivity and specificity of 74.2% and 65.6% for identifying ET openings and an accuracy of 73.3% for assigning ET opening/non-opening by swallowing. Measures of sound signal shape were significantly different between those groups. This protocol allows a sonotubometry accuracy assessment for detecting ET openings. For the test configuration used, accuracy was moderate, but this should improve as more sophisticated sonotubometry test configurations are evaluated. PMID:24710849

  8. Theory and methods for accuracy assessment of thematic maps using fuzzy sets

    SciTech Connect

    Gopal, S.; Woodcock, C.

    1994-02-01

    The use of fuzzy sets in map accuracy assessment expands the amount of information that can be provided regarding the nature, frequency, magnitude, and source of errors in a thematic map. The need for using fuzzy sets arises from the observation that all map locations do not fit unambiguously in a single map category. Fuzzy sets allow for varying levels of set membership for multiple map categories. A linguistic measurement scale allows the kinds of comments commonly made during map evaluations to be used to quantify map accuracy. Four tables result from the use of fuzzy functions, and when taken together they provide more information than traditional confusion matrices. The use of a hypothetical dataset helps illustrate the benefits of the new methods. It is hoped that the enhanced ability to evaluate maps resulting from the use of fuzzy sets will improve our understanding of uncertainty in maps and facilitate improved error modeling. 40 refs.
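Two of the fuzzy evaluation functions this line of work is known for can be sketched as follows (the class names, linguistic scores, and threshold below are invented for illustration): a MAX rule counts a site as matched when the mapped class holds the highest linguistic score, while a RIGHT rule accepts any mapped class rated acceptable or better on the scale:

```python
# Hypothetical linguistic scores per site (1 = absolutely wrong ... 5 = absolutely right).
sites = [
    {"mapped": "forest", "scores": {"forest": 5, "shrub": 2, "urban": 1}},
    {"mapped": "shrub",  "scores": {"forest": 4, "shrub": 3, "urban": 1}},
    {"mapped": "urban",  "scores": {"forest": 2, "shrub": 2, "urban": 2}},
]

def max_correct(site):
    # Mapped class must carry the highest score at the site (ties count as matches).
    return site["scores"][site["mapped"]] == max(site["scores"].values())

def right_correct(site, threshold=3):
    # Mapped class need only be rated "acceptable" or better.
    return site["scores"][site["mapped"]] >= threshold
```

Tabulating both rules over many sites, rather than a single hard match, is what expands the usual confusion matrix into the richer accuracy tables the abstract describes.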

  9. Accuracy assessment of high resolution satellite imagery orientation by leave-one-out method

    NASA Astrophysics Data System (ADS)

    Brovelli, Maria Antonia; Crespi, Mattia; Fratarcangeli, Francesca; Giannone, Francesca; Realini, Eugenio

    Interest in high-resolution satellite imagery (HRSI) is spreading in several application fields, at both scientific and commercial levels. Fundamental and critical goals for the geometric use of this kind of imagery are their orientation and orthorectification, processes able to georeference the imagery and correct the geometric deformations they undergo during acquisition. In order to exploit the actual potentialities of orthorectified imagery in Geomatics applications, the definition of a methodology to assess the spatial accuracy achievable from oriented imagery is a crucial topic. In this paper we propose a new method for accuracy assessment based on the Leave-One-Out Cross-Validation (LOOCV), a model validation method already applied in different fields such as machine learning, bioinformatics and generally in any other field requiring an evaluation of the performance of a learning algorithm (e.g. in geostatistics), but never applied to HRSI orientation accuracy assessment. The proposed method exhibits interesting features which are able to overcome the most remarkable drawbacks of the commonly used method (Hold-Out Validation — HOV), based on the partitioning of the known ground points in two sets: the first is used in the orientation-orthorectification model (GCPs — Ground Control Points) and the second is used to validate the model itself (CPs — Check Points). In fact, the HOV is generally not reliable, and it is not applicable when only a low number of ground points is available. To test the proposed method we implemented a new routine that performs the LOOCV in the software SISAR, developed by the Geodesy and Geomatics Team at the Sapienza University of Rome to perform the rigorous orientation of HRSI; this routine was tested on some EROS-A and QuickBird images. Moreover, these images were also oriented using the widely used commercial software OrthoEngine v. 10 (included in the Geomatica suite by PCI), manually performing the LOOCV
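The LOOCV scheme itself is simple: each known ground point serves once as a check point for a model fitted on all the others. A generic sketch (the stand-in "model" below is a trivial bias correction, invented for illustration, not the SISAR rigorous orientation model):

```python
def loocv_residuals(points, fit, residual):
    """points: list of observations; fit: builds a model from a training list;
    residual: error of the fitted model at the held-out point."""
    out = []
    for i in range(len(points)):
        train = points[:i] + points[i + 1:]
        model = fit(train)
        out.append(residual(model, points[i]))
    return out

# Stand-in model: estimate a constant offset between measured and true coordinates.
data = [(10.2, 10.0), (20.3, 20.0), (30.1, 30.0)]        # (measured, true) pairs
fit = lambda pts: sum(m - t for m, t in pts) / len(pts)  # mean bias of the training set
residual = lambda bias, p: (p[0] - bias) - p[1]          # leftover error at the held-out point

res = loocv_residuals(data, fit, residual)
```

Unlike hold-out validation, every available ground point contributes a check residual, which is why the approach remains usable when only a handful of points exist.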

  10. A bootstrap method for assessing classification accuracy and confidence for agricultural land use mapping in Canada

    NASA Astrophysics Data System (ADS)

    Champagne, Catherine; McNairn, Heather; Daneshfar, Bahram; Shang, Jiali

    2014-06-01

    Land cover and land use classifications from remote sensing are increasingly becoming institutionalized framework data sets for monitoring environmental change. As such, the need for robust statements of classification accuracy is critical. This paper describes a method to estimate confidence in classification model accuracy using a bootstrap approach. Using this method, it was found that classification accuracy and confidence, while closely related, can be used in complementary ways to provide additional information on map accuracy, to define groups of classes, and to inform future reference sampling strategies. Overall classification accuracy increases with an increase in the number of fields surveyed, while the width of the classification confidence bounds decreases. Individual class accuracies and confidence were non-linearly related to the number of fields surveyed. Results indicate that some classes can be estimated accurately and confidently with fewer samples, whereas others require larger reference data sets to achieve satisfactory results. This approach is an improvement over other approaches for estimating class accuracy and confidence as it uses repetitive sampling to produce a more realistic estimate of the range in classification accuracy and confidence that can be obtained with different reference data inputs.
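A minimal percentile-bootstrap sketch of the idea (the per-site correctness flags are invented; the paper's implementation details may differ):

```python
import random

def bootstrap_accuracy_ci(correct, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for overall accuracy,
    given 0/1 correctness flags for the reference sites."""
    rng = random.Random(seed)
    n = len(correct)
    accs = sorted(
        sum(correct[rng.randrange(n)] for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = accs[int(n_boot * alpha / 2)]
    hi = accs[int(n_boot * (1 - alpha / 2)) - 1]
    return sum(correct) / n, (lo, hi)

flags = [1] * 80 + [0] * 20  # invented reference data: 80 of 100 fields correct
acc, (lo, hi) = bootstrap_accuracy_ci(flags)
```

Resampling the reference sites with replacement yields a distribution of accuracies, so the interval width shrinks as more fields are surveyed, exactly the behavior the abstract reports.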

  11. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

    The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target using camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with results from the Australis photogrammetry software, which simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect the system accuracy to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  12. An improved multivariate analytical method to assess the accuracy of acoustic sediment classification maps.

    NASA Astrophysics Data System (ADS)

    Biondo, M.; Bartholomä, A.

    2014-12-01

    High-resolution hydroacoustic methods have been successfully employed for the detailed classification of sedimentary habitats. The fine-scale mapping of very heterogeneous, patchy sedimentary facies, and the compound effect of multiple non-linear physical processes on the acoustic signal, cause the classification of backscatter images to be subject to a great level of uncertainty. Standard procedures for assessing the accuracy of acoustic classification maps are not yet established. This study applies different statistical techniques to automatically classified acoustic images with the aim of (i) quantifying the ability of backscatter to resolve grain size distributions, (ii) understanding complex patterns influenced by factors other than grain size variations, and (iii) designing innovative, repeatable statistical procedures to spatially assess classification uncertainties. A high-frequency (450 kHz) sidescan sonar survey, carried out in 2012 in the shallow upper-mesotidal inlet of the Jade Bay (German North Sea), allowed 100 km2 of surficial sediment to be mapped at a resolution and coverage never before acquired in the area. The backscatter mosaic was ground-truthed using a large dataset of sediment grab sample information (2009-2011). Multivariate procedures were employed for modelling the relationship between acoustic descriptors and granulometric variables in order to evaluate the correctness of acoustic class allocation and sediment group separation. Complex patterns in the acoustic signal appeared to be controlled by the combined effect of surface roughness, sorting, and mean grain size variations. The area is dominated by silt and fine sand in very mixed compositions; in this fine-grained matrix, the percentage of gravel proved to be the prevailing factor affecting backscatter variability. In the absence of coarse material, sorting mostly affected the ability to detect gradual but significant changes in seabed types. Misclassification due to temporal discrepancies

  13. Accuracy Assessment of Crown Delineation Methods for the Individual Trees Using LIDAR Data

    NASA Astrophysics Data System (ADS)

    Chang, K. T.; Lin, C.; Lin, Y. C.; Liu, J. K.

    2016-06-01

    Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm that delineates crown parameters, e.g. tree height and crown radii, based on the rasterized CHMs. An accuracy assessment for the extraction of volumetric parameters of single trees is also performed via manual measurement using corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by a Leica ALS70-HP is used in this study. Two algorithms, i.e. a traditional one that subtracts a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Two further algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are then implemented for individual tree delineation. Finally, the results of the two automatic estimation methods are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by simple subtraction is full of empty pixels (called "pits") that strongly affect subsequent analysis for individual tree delineation. The experimental results indicate that more individual trees can be extracted, and tree crown shapes become more complete, in the CHM data after the pit-free process.
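The subtraction CHM and a naive pit repair can be sketched as follows (the median-fill below is only a toy stand-in for the published pit-free algorithm, and the 3x3 rasters are invented):

```python
import numpy as np

# Invented 3x3 rasters: a tree crown with one dropped LiDAR return in the middle.
dsm = np.array([[12., 13., 12.],
                [13.,  3., 14.],
                [12., 13., 12.]])
dem = np.full((3, 3), 2.0)

chm = dsm - dem  # simple subtraction CHM; the centre pixel is a "pit"

# Naive pit mitigation: replace pixels far below their neighbourhood median.
filled = chm.copy()
pad = np.pad(chm, 1, mode="edge")
for row in range(chm.shape[0]):
    for col in range(chm.shape[1]):
        med = np.median(pad[row:row + 3, col:col + 3])
        if chm[row, col] < 0.5 * med:
            filled[row, col] = med
```

Left unfilled, that single pit would split or shrink the crown during the delineation step, which is the effect the pit-free CHM is designed to avoid.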

  14. Accuracy assessment of the ERP prediction method based on analysis of 100-year ERP series

    NASA Astrophysics Data System (ADS)

    Malkin, Z.; Tissen, V. M.

    2012-12-01

    A new method has been developed at the Siberian Research Institute of Metrology (SNIIM) for highly accurate prediction of UT1 and polar motion (PM). In this study, a detailed comparison was made of real-time UT1 predictions made in 2006-2011 and PM predictions made in 2009-2011 using the SNIIM method with simultaneous predictions computed at the International Earth Rotation and Reference Systems Service (IERS), USNO. The obtained results show that the proposed method provides better accuracy at different prediction lengths.

  15. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November 1980, dealing with Landsat classification accuracy assessment procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  16. Assessing the Accuracy of the Tracer Dilution Method with Atmospheric Dispersion Modeling

    NASA Astrophysics Data System (ADS)

    Taylor, D.; Delkash, M.; Chow, F. K.; Imhoff, P. T.

    2015-12-01

    Landfill methane emissions are difficult to estimate due to limited observations and data uncertainty. The mobile tracer dilution method is a widely used and cost-effective approach for predicting landfill methane emissions. The method uses a tracer gas released on the surface of the landfill and measures the concentrations of both methane and the tracer gas downwind. Mobile measurements are conducted with a gas analyzer mounted on a vehicle to capture transects of both gas plumes. The idea behind the method is that if the measurements are performed far enough downwind, the methane plume from the large area source of the landfill and the tracer plume from a small number of point sources will be sufficiently well-mixed to behave similarly, and the ratio between the concentrations will be a good estimate of the ratio between the two emissions rates. The mobile tracer dilution method is sensitive to different factors of the setup such as placement of the tracer release locations and distance from the landfill to the downwind measurements, which have not been thoroughly examined. In this study, numerical modeling is used as an alternative to field measurements to study the sensitivity of the tracer dilution method and provide estimates of measurement accuracy. Using topography and wind conditions for an actual landfill, a landfill emissions rate is prescribed in the model and compared against the emissions rate predicted by application of the tracer dilution method. Two different methane emissions scenarios are simulated: homogeneous emissions over the entire surface of the landfill, and heterogeneous emissions with a hot spot containing 80% of the total emissions where the daily cover area is located. Numerical modeling of the tracer dilution method is a useful tool for evaluating the method without having the expense and labor commitment of multiple field campaigns. 
Factors tested include number of tracers, distance between tracers, distance from landfill to transect
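The core ratio behind the tracer dilution method can be written down directly (the numbers below are invented, and unit conversions between the gases are glossed over; real surveys integrate background-corrected concentrations across each plume transect):

```python
def tracer_dilution_emission(q_tracer, ch4, tracer, ch4_bg=0.0, tracer_bg=0.0):
    """Q_CH4 = Q_tracer * sum(CH4 - background) / sum(tracer - background),
    with both sums taken over the same downwind transect samples."""
    num = sum(c - ch4_bg for c in ch4)
    den = sum(c - tracer_bg for c in tracer)
    return q_tracer * num / den

# Invented transect: tracer released at 2.0 kg/h; plumes sampled downwind.
q = tracer_dilution_emission(
    2.0,
    ch4=[2.1, 2.4, 2.5, 2.2, 2.0],      # methane mixing ratios, background 1.9
    tracer=[0.1, 0.25, 0.3, 0.15, 0.2],  # tracer mixing ratios, zero background
    ch4_bg=1.9,
)
```

The estimate is only unbiased if the two plumes are equally well mixed at the transect, which is exactly the assumption the modeling study above is designed to test.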

  17. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy procedures developed for nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.

  18. Assessing the accuracy of auralizations computed using a hybrid geometrical-acoustics and wave-acoustics method

    NASA Astrophysics Data System (ADS)

    Summers, Jason E.; Takahashi, Kengo; Shimizu, Yasushi; Yamakawa, Takashi

    2001-05-01

    When based on geometrical acoustics, computational models used for auralization of auditorium sound fields are physically inaccurate at low frequencies. To increase accuracy while keeping computation tractable, hybrid methods using computational wave acoustics at low frequencies have been proposed and implemented in small enclosures such as simplified models of car cabins [Granier et al., J. Audio Eng. Soc. 44, 835-849 (1996)]. The present work extends such an approach to an actual 2400-m3 auditorium using the boundary-element method for frequencies below 100 Hz. The effect of including wave-acoustics at low frequencies is assessed by comparing the predictions of the hybrid model with those of the geometrical-acoustics model and comparing both with measurements. Conventional room-acoustical metrics are used together with new methods based on two-dimensional distance measures applied to time-frequency representations of impulse responses. Despite in situ measurements of boundary impedance, uncertainties in input parameters limit the accuracy of the computed results at low frequencies. However, aural perception ultimately defines the required accuracy of computational models. An algorithmic method for making such evaluations is proposed based on correlating listening-test results with distance measures between time-frequency representations derived from auditory models of the ear-brain system. Preliminary results are presented.

  19. Accuracy assessment system and operation

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Houston, A. G.; Badhwar, G.; Bender, M. J.; Rader, M. L.; Eppler, W. G.; Ahlers, C. W.; White, W. P.; Vela, R. R.; Hsu, E. M. (Principal Investigator)

    1979-01-01

    The accuracy and reliability of LACIE estimates of wheat production, area, and yield are determined at regular intervals throughout the year by the accuracy assessment subsystem, which also investigates the various LACIE error sources, quantifies the errors, and relates them to their causes. Timely feedback of these error evaluations to the LACIE project was the only mechanism by which improvements in the crop estimation system could be made during the short 3-year experiment.

  20. Methods in Use for Sensitivity Analysis, Uncertainty Evaluation, and Target Accuracy Assessment

    SciTech Connect

    G. Palmiotti; M. Salvatores; G. Aliberti

    2007-10-01

    Sensitivity coefficients can be used for different objectives, such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluations of the representativity of an experiment with respect to a reference design configuration. In this paper the theory, based on the adjoint approach, that is implemented in the ERANOS fast reactor code system is presented along with some unique tools and features related to specific types of problems, as is the case for nuclide transmutation, reactivity loss during the cycle, decay heat, neutron source associated with fuel fabrication, and experiment representativity.

  1. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA.

    PubMed

    Miyata, Y; Suzuki, T; Takechi, M; Urano, H; Ide, S

    2015-07-01

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA. PMID:26233387

  2. Assessment of the accuracy of plasma shape reconstruction by the Cauchy condition surface method in JT-60SA

    SciTech Connect

    Miyata, Y.; Suzuki, T.; Takechi, M.; Urano, H.; Ide, S.

    2015-07-15

    For the purpose of stable plasma equilibrium control and detailed analysis, it is essential to reconstruct an accurate plasma boundary on the poloidal cross section in tokamak devices. The Cauchy condition surface (CCS) method is a numerical approach for calculating the spatial distribution of the magnetic flux outside a hypothetical surface and reconstructing the plasma boundary from the magnetic measurements located outside the plasma. The accuracy of the plasma shape reconstruction has been assessed by comparing the CCS method and an equilibrium calculation in JT-60SA with a high elongation and triangularity of plasma shape. The CCS, on which both Dirichlet and Neumann conditions are unknown, is defined as a hypothetical surface located inside the real plasma region. The accuracy of the plasma shape reconstruction is sensitive to the CCS free parameters such as the number of unknown parameters and the shape in JT-60SA. It is found that the optimum number of unknown parameters and the size of the CCS that minimizes errors in the reconstructed plasma shape are in proportion to the plasma size. Furthermore, it is shown that the accuracy of the plasma shape reconstruction is greatly improved using the optimum number of unknown parameters and shape of the CCS, and the reachable reconstruction errors in plasma shape and locations of strike points are within the target ranges in JT-60SA.

  3. Assessing the accuracy of the isotropic periodic sum method through Madelung energy computation.

    PubMed

    Ojeda-May, Pedro; Pu, Jingzhi

    2014-04-28

We tested the isotropic periodic sum (IPS) method for computing Madelung energies of ionic crystals. The performance of the method, both in its nonpolar (IPSn) and polar (IPSp) forms, was compared with that of the zero-charge and Wolf potentials [D. Wolf, P. Keblinski, S. R. Phillpot, and J. Eggebrecht, J. Chem. Phys. 110, 8254 (1999)]. The results show that the IPSn and IPSp methods converge the Madelung energy to its reference value with an average deviation of ∼10⁻⁴ and ∼10⁻⁷ energy units, respectively, for a cutoff range of 18-24a (a/2 being the nearest-neighbor ion separation). However, minor oscillations were detected for the IPS methods when deviations of the computed Madelung energies were plotted on a logarithmic scale as a function of the cutoff distance. To remove such oscillations, we introduced a modified IPSn potential in which both the local-region and long-range electrostatic terms are damped, in analogy to the Wolf potential. With the damped-IPSn potential, a smoother convergence was achieved. In addition, we observed a better agreement between the damped-IPSn and IPSp methods, which suggests that damping the IPSn potential is in effect similar to adding a screening potential in IPSp. PMID:24784252

  4. Accuracy and Usefulness of Select Methods for Assessing Complete Collection of 24-Hour Urine: A Systematic Review.

    PubMed

    John, Katherine A; Cogswell, Mary E; Campbell, Norm R; Nowson, Caryl A; Legetic, Branka; Hennis, Anselm J M; Patel, Sheena M

    2016-05-01

Twenty-four-hour urine collection is the recommended method for estimating sodium intake. To investigate the strengths and limitations of methods used to assess completion of 24-hour urine collection, the authors systematically reviewed the literature on the accuracy and usefulness of methods vs para-aminobenzoic acid (PABA) recovery (referent). The percentage of incomplete collections, based on PABA, was 6% to 47% (n=8 studies). The sensitivity and specificity for identifying incomplete collection using creatinine criteria (n=4 studies) were 6% to 63% and 57% to 99.7%, respectively. The most sensitive method for removing incomplete collections was a creatinine index <0.7. In pooled analysis (≥2 studies), mean urine creatinine excretion and volume were higher among participants with complete collection (P<.05), whereas self-reported collection time did not differ by completion status. Compared with participants with incomplete collection, mean 24-hour sodium excretion was 19.6 mmol higher (n=1781 specimens, 5 studies) in participants with complete collection. Sodium excretion may be underestimated by inclusion of incomplete 24-hour urine collections. None of the current approaches reliably assess completion of 24-hour urine collection. PMID:26726000
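The creatinine-index exclusion rule mentioned in the abstract can be sketched in a few lines. This is an illustrative sketch only, not the reviewed studies' code: the function names are invented, and the expected creatinine value is assumed to come from some external prediction (e.g. from sex and body weight):

```python
def creatinine_index(measured_cr_mmol, expected_cr_mmol):
    """Ratio of measured 24-h urinary creatinine to the value expected
    for the participant. Values well below 1 suggest under-collection."""
    return measured_cr_mmol / expected_cr_mmol

def is_complete(measured_cr_mmol, expected_cr_mmol, threshold=0.7):
    """Flag a 24-h collection as complete using the creatinine
    index < 0.7 exclusion rule discussed in the review."""
    return creatinine_index(measured_cr_mmol, expected_cr_mmol) >= threshold
```

For example, a participant with 6.0 mmol measured against 12.0 mmol expected (index 0.5) would be flagged as incomplete under this rule.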

  5. An accuracy assessment of different rigid body image registration methods and robotic couch positional corrections using a novel phantom

    SciTech Connect

    Arumugam, Sankar; Xing Aitang; Jameson, Michael G.; Holloway, Lois

    2013-03-15

Purpose: Image guided radiotherapy (IGRT) using cone beam computed tomography (CBCT) images greatly reduces interfractional patient positional uncertainties. An understanding of uncertainties in the IGRT process itself is essential to ensure appropriate use of this technology. The purpose of this study was to develop a phantom capable of assessing the accuracy of IGRT hardware and software, including a 6 degrees of freedom patient positioning system, and to investigate the accuracy of the Elekta XVI system in combination with the HexaPOD robotic treatment couch top. Methods: The constructed phantom enabled verification of the three automatic rigid body registrations (gray value, bone, seed) available in the Elekta XVI software and includes an adjustable mount that introduces known rotational offsets to the phantom from its reference position. Repeated positioning of the phantom was undertaken to assess phantom rotational accuracy. Using this phantom the accuracy of the XVI registration algorithms was assessed considering CBCT hardware factors and image resolution together with the residual error in the overall image guidance process when positional corrections were performed through the HexaPOD couch system. Results: The phantom positioning was found to be within 0.04° (σ = 0.12°), 0.02° (σ = 0.13°), and -0.03° (σ = 0.06°) in the X, Y, and Z directions, respectively, enabling assessment of IGRT with a 6 degrees of freedom patient positioning system. The gray value registration algorithm showed the least error in calculated offsets, with a maximum mean difference of -0.2 mm (σ = 0.4 mm) in translational and -0.1° (σ = 0.1°) in rotational directions for all image resolutions. Bone and seed registration were found to be sensitive to CBCT image resolution. Seed registration was found to be most sensitive, demonstrating a maximum mean error of -0.3 mm (σ = 0.9 mm) and -1.4° (σ = 1.7°) in translational

  6. Dynamic Accuracy of GPS Receivers for Use in Health Research: A Novel Method to Assess GPS Accuracy in Real-World Settings.

    PubMed

    Schipperijn, Jasper; Kerr, Jacqueline; Duncan, Scott; Madsen, Thomas; Klinker, Charlotte Demant; Troelsen, Jens

    2014-01-01

The emergence of portable global positioning system (GPS) receivers over the last 10 years has provided researchers with a means to objectively assess spatial position in free-living conditions. However, the use of GPS in free-living conditions is not without challenges and the aim of this study was to test the dynamic accuracy of a portable GPS device under real-world environmental conditions, for four modes of transport, and using three data collection intervals. We selected four routes on different bearings, passing through a variation of environmental conditions in the City of Copenhagen, Denmark, to test the dynamic accuracy of the Qstarz BT-Q1000XT GPS device. Each route consisted of a walk, bicycle, and vehicle lane in each direction. The actual width of each walking, cycling, and vehicle lane was digitized as accurately as possible using ultra-high-resolution aerial photographs as background. For each trip, we calculated the percentage of points that actually fell within the lane polygon, and within the 2.5, 5, and 10 m buffers respectively, as well as the mean and median error in meters. Our results showed that 49.6% of all ≈68,000 GPS points fell within 2.5 m of the expected location, 78.7% fell within 10 m, and the median error was 2.9 m. The median error was 3.9 m for walking trips, 2.0 m for bicycle trips, 1.5 m for bus, and 0.5 m for car. The different area types showed considerable variation in the median error: 0.7 m in open areas, 2.6 m in half-open areas, and 5.2 m in urban canyons. The dynamic spatial accuracy of the tested device is not perfect, but we feel that it is within acceptable limits for larger population studies. Longer recording periods, for a larger population, are likely to reduce the potentially negative effects of measurement inaccuracy. Furthermore, special care should be taken when the environment in which the study takes place could compromise the GPS signal. PMID:24653984
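The buffer percentages and error statistics reported above are straightforward to compute once each GPS fix has been reduced to a distance-to-expected-location error. A minimal sketch (function name and input format are assumptions, not the study's code):

```python
import statistics

def accuracy_summary(errors_m, thresholds=(2.5, 5.0, 10.0)):
    """Summarise GPS positional errors: the fraction of fixes falling
    within each buffer distance, plus mean and median error (metres)."""
    n = len(errors_m)
    within = {t: sum(e <= t for e in errors_m) / n for t in thresholds}
    return {
        "within": within,
        "mean": statistics.mean(errors_m),
        "median": statistics.median(errors_m),
    }
```

For instance, errors of [1, 2, 3, 6, 12] m give 40% within 2.5 m, 80% within 10 m, and a median error of 3 m.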

  7. Skinfold Assessment: Accuracy and Application

    ERIC Educational Resources Information Center

    Ball, Stephen; Swan, Pamela D.; Altena, Thomas S.

    2006-01-01

    Although not perfect, skinfolds (SK), or the measurement of fat under the skin, remains the most popular and practical method available to assess body composition on a large scale (Kuczmarski, Flegal, Campbell, & Johnson, 1994). Even for practitioners who have been using SK for years and are highly proficient at locating the correct anatomical…

  8. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  9. Assessing the Accuracy of Two Enhanced Sampling Methods Using EGFR Kinase Transition Pathways: The Influence of Collective Variable Choice.

    PubMed

    Pan, Albert C; Weinreich, Thomas M; Shan, Yibing; Scarpazza, Daniele P; Shaw, David E

    2014-07-01

    Structurally elucidating transition pathways between protein conformations gives deep mechanistic insight into protein behavior but is typically difficult. Unbiased molecular dynamics (MD) simulations provide one solution, but their computational expense is often prohibitive, motivating the development of enhanced sampling methods that accelerate conformational changes in a given direction, embodied in a collective variable. The accuracy of such methods is unclear for complex protein transitions, because obtaining unbiased MD data for comparison is difficult. Here, we use long-time scale, unbiased MD simulations of epidermal growth factor receptor kinase deactivation as a complex biological test case for two widely used methods-steered molecular dynamics (SMD) and the string method. We found that common collective variable choices, based on the root-mean-square deviation (RMSD) of the entire protein, prevented the methods from producing accurate paths, even in SMD simulations on the time scale of the unbiased transition. Using collective variables based on the RMSD of the region of the protein known to be important for the conformational change, however, enabled both methods to provide a more accurate description of the pathway in a fraction of the simulation time required to observe the unbiased transition. PMID:26586510
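The abstract's key point is that restricting the RMSD-based collective variable to the mechanistically relevant region, rather than the whole protein, improves the sampled pathway. A minimal sketch of such a region-restricted RMSD (assuming pre-aligned structures; this is illustrative, not the authors' simulation code):

```python
import numpy as np

def rmsd(coords, ref, atom_idx=None):
    """RMSD between a conformation and a reference, optionally restricted
    to a subregion (e.g. the kinase elements known to move during
    deactivation). Structures are assumed pre-aligned on a rigid core."""
    a, b = np.asarray(coords, float), np.asarray(ref, float)
    if atom_idx is not None:          # restrict the collective variable
        a, b = a[atom_idx], b[atom_idx]
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))
```

A biasing method such as SMD would then pull this value toward the target conformation's reference, with `atom_idx` selecting the mobile region rather than all atoms.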

  10. Positional Accuracy Assessment of the Openstreetmap Buildings Layer Through Automatic Homologous Pairs Detection: the Method and a Case Study

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.; Zamboni, G.

    2016-06-01

OpenStreetMap (OSM) is currently the largest openly licensed collection of geospatial data. As OSM is increasingly exploited in a variety of applications, research has paid great attention to assessing its quality. This work focuses on assessing the quality of OSM buildings. While most of the studies available in the literature are limited to the evaluation of OSM building completeness, this work proposes an original approach to assess the positional accuracy of OSM buildings based on comparison with a reference dataset. The comparison relies on a quasi-automated detection of homologous pairs on the two datasets. Based on the homologous pairs found, warping algorithms such as affine transformations and multi-resolution splines can be applied to the OSM buildings to generate a new version having an optimal local match to the reference layer. A quality assessment of the OSM buildings of Milan Municipality (Northern Italy), covering an area of about 180 km², is then presented. After computing some measures of completeness, the algorithm based on homologous points is run using the building layer of the official vector cartography of Milan Municipality as the reference dataset. Approximately 100,000 homologous points are found, which show a systematic translation of about 0.4 m in both the X and Y directions and a mean distance of about 0.8 m between the datasets. Besides its efficiency and high degree of automation, the algorithm generates a warped version of OSM buildings which, having by definition a closer match to the reference buildings, can eventually be integrated in the OSM database.
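The systematic-translation and mean-distance figures quoted above follow directly from the detected homologous pairs. A minimal illustrative sketch, not the paper's algorithm (names and pair format are assumptions):

```python
import math

def displacement_stats(pairs):
    """Given homologous point pairs ((x_osm, y_osm), (x_ref, y_ref)),
    return the mean X and Y offsets (the systematic translation) and the
    mean point-to-point distance, all in map units."""
    dx = [r[0] - o[0] for o, r in pairs]
    dy = [r[1] - o[1] for o, r in pairs]
    dist = [math.hypot(a, b) for a, b in zip(dx, dy)]
    n = len(pairs)
    return sum(dx) / n, sum(dy) / n, sum(dist) / n
```

The mean offsets would feed the translation component of a warping transform, while the mean distance serves as a simple positional-accuracy summary.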

  11. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
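An error estimate with its standard error is the basic building block of assessments like this one. The sketch below uses the simple binomial formula for illustration only; the AVRI study's actual standard errors also reflect its stratified two-phase cluster design and the phase-II regression adjustment:

```python
import math

def error_rate_with_se(n_errors, n_sampled):
    """Estimated classification error rate for a cover type with its
    standard error under simple binomial sampling (an illustrative
    simplification of the study's two-phase design)."""
    p = n_errors / n_sampled
    se = math.sqrt(p * (1 - p) / n_sampled)
    return p, se
```

With 36 misclassified units out of 100, this gives an estimated error of 36 percent with a standard error of about 4.8 percent; the study's smaller 2 percent standard error illustrates the precision gained from its larger, stratified sample.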

  12. A Novel Method for Assessing the Accuracies of In Situ Measurements of Water Vapor in the UT/LS

    NASA Astrophysics Data System (ADS)

    Toohey, D.; Avallone, L.; Ross, M.

    2008-12-01

We report on results from a series of flights of the NASA WB57F Aircraft into the exhaust plumes of rockets as part of the Plume Ultrafast Measurements Acquisition (PUMA) campaign. It is found that the emission ratio of water vapor to CO2, along with highly accurate measurements of CO2, can be used to constrain the abundances of H2O in the plume, such that the highly linear correlation between these two species can be used to determine the accuracies of total H2O measurements in the very dry upper troposphere and lowermost stratosphere. In addition, as the plume disperses, evaporation of ice provides a fundamental thermodynamic constraint on water vapor abundances that is an independent test of instrument response. These plume observations provide a unique in situ, flight-based test of instruments to a level of accuracy that is very likely not possible in other types of measurement programs, such as in-flight intercomparisons or comprehensive laboratory calibrations. We propose a low-cost program focusing on flights through plumes of rockets and aircraft in the UT/LS that could resolve the longstanding disagreements between different in situ water vapor instruments.

  13. Validation of selected analytical methods using accuracy profiles to assess the impact of a Tobacco Heating System on indoor air quality.

    PubMed

    Mottier, Nicolas; Tharin, Manuel; Cluse, Camille; Crudo, Jean-René; Lueso, María Gómez; Goujon-Ginglinger, Catherine G; Jaquier, Anne; Mitova, Maya I; Rouget, Emmanuel G R; Schaller, Mathieu; Solioz, Jennifer

    2016-09-01

    Studies in environmentally controlled rooms have been used over the years to assess the impact of environmental tobacco smoke on indoor air quality. As new tobacco products are developed, it is important to determine their impact on air quality when used indoors. Before such an assessment can take place it is essential that the analytical methods used to assess indoor air quality are validated and shown to be fit for their intended purpose. Consequently, for this assessment, an environmentally controlled room was built and seven analytical methods, representing eighteen analytes, were validated. The validations were carried out with smoking machines using a matrix-based approach applying the accuracy profile procedure. The performances of the methods were compared for all three matrices under investigation: background air samples, the environmental aerosol of Tobacco Heating System THS 2.2, a heat-not-burn tobacco product developed by Philip Morris International, and the environmental tobacco smoke of a cigarette. The environmental aerosol generated by the THS 2.2 device did not have any appreciable impact on the performances of the methods. The comparison between the background and THS 2.2 environmental aerosol samples generated by smoking machines showed that only five compounds were higher when THS 2.2 was used in the environmentally controlled room. Regarding environmental tobacco smoke from cigarettes, the yields of all analytes were clearly above those obtained with the other two air sample types. PMID:27343591

  14. Accuracy of quantitative visual soil assessment

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Maricke; Heuvelink, Gerard; Stoorvogel, Jetse; Wallinga, Jakob; de Boer, Imke; van Dam, Jos; van Essen, Everhard; Moolenaar, Simon; Verhoeven, Frank; Stoof, Cathelijne

    2016-04-01

Visual soil assessment (VSA) is a method to assess soil quality visually, when standing in the field. VSA is increasingly used by farmers, farm organisations and companies, because it is rapid and cost-effective, and because looking at soil provides understanding about soil functioning. Often VSA is regarded as subjective, so there is a need to verify VSA. Also, many VSAs have not been fine-tuned for contrasting soil types. This could lead to wrong interpretation of soil quality and soil functioning when contrasting sites are compared to each other. We wanted to assess the accuracy of VSA, while taking into account soil type. The first objective was to test whether quantitative visual field observations, which form the basis of many VSAs, could be validated with standardized field or laboratory measurements. The second objective was to assess whether quantitative visual field observations are reproducible when used by observers with contrasting backgrounds. For the validation study, we made quantitative visual observations at 26 cattle farms. Farms were located on sand, clay and peat soils in the North Friesian Woodlands, the Netherlands. Quantitative visual observations evaluated were grass cover, number of biopores, number of roots, soil colour, soil structure, number of earthworms, number of gley mottles and soil compaction. Linear regression analysis showed that four out of eight quantitative visual observations could be well validated with standardized field or laboratory measurements. The following quantitative visual observations correlated well with standardized field or laboratory measurements: grass cover with classified images of surface cover; number of roots with root dry weight; amount of large structure elements with mean weight diameter; and soil colour with soil organic matter content. Correlation coefficients were greater than 0.3, and half of the correlations were significant. 
For the reproducibility study, a group of 9 soil scientists and 7

  15. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes (`building', `hedge and bush', `grass', `road and parking lot', `tree', `wall and car port') had to be derived. Two classification methods were applied (`Decision Tree' and `Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index) which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experiences. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
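The accuracy measures named in this abstract (user's and producer's accuracy, overall accuracy, kappa) all derive from a class confusion matrix. A minimal sketch, not the paper's code (the matrix orientation, rows = mapped class and columns = reference class, is an assumption):

```python
import numpy as np

def thematic_accuracy(cm):
    """User's and producer's accuracy per class, plus overall accuracy
    and Cohen's kappa, from a confusion matrix with rows = mapped class
    and columns = reference class."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    users = np.diag(cm) / cm.sum(axis=1)       # commission side
    producers = np.diag(cm) / cm.sum(axis=0)   # omission side
    overall = np.diag(cm).sum() / total
    # chance agreement expected from the marginals
    expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2
    kappa = (overall - expected) / (1 - expected)
    return users, producers, overall, kappa
```

For a two-class matrix [[40, 10], [10, 40]], both user's accuracies are 0.8, overall accuracy is 0.8, and kappa is 0.6. Confidence intervals such as those reported in the paper would then be attached to these point estimates.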

  16. Accuracy of the Kato-Katz method and formalin-ether concentration technique for the diagnosis of Clonorchis sinensis, and implication for assessing drug efficacy

    PubMed Central

    2013-01-01

    Background Clonorchiasis is a chronic neglected disease caused by a liver fluke, Clonorchis sinensis. Chemotherapy is the mainstay of control and treatment efficacy is usually determined by microscopic examination of fecal samples. We assessed the diagnostic accuracy of the Kato-Katz method and the formalin-ether concentration technique (FECT) for C. sinensis diagnosis, and studied the effect of diagnostic approach on drug efficacy evaluation. Methods Overall, 74 individuals aged ≥18 years with a parasitological confirmed C. sinensis infection at baseline were re-examined 3 weeks after treatment. Before and after treatment, two stool samples were obtained from each participant and each sample was subjected to triplicate Kato-Katz thick smears and a single FECT examination. Results Thirty-eight individuals were still positive for C. sinensis according to our diagnostic ‘gold’ standard (six Kato-Katz thick smears plus two FECT). Two FECT had a significantly lower sensitivity than six Kato-Katz thick smears (44.7% versus 92.1%; p <0.001). Examination of single Kato-Katz and single FECT considerably overestimated cure rates. Conclusions In settings where molecular diagnostic assays are absent, multiple Kato-Katz thick smears should be examined for an accurate diagnosis of C. sinensis infection and for assessing drug efficacy against this liver fluke infection. PMID:24499644
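Sensitivity comparisons like the one above (44.7% versus 92.1% against the composite 'gold' standard) reduce to counting true/false positives and negatives per subject. An illustrative sketch, not the study's analysis code:

```python
def sensitivity_specificity(test_pos, gold_pos):
    """Sensitivity and specificity of a diagnostic reading against a
    reference ('gold') standard, given parallel per-subject booleans."""
    tp = sum(t and g for t, g in zip(test_pos, gold_pos))
    tn = sum((not t) and (not g) for t, g in zip(test_pos, gold_pos))
    fn = sum((not t) and g for t, g in zip(test_pos, gold_pos))
    fp = sum(t and (not g) for t, g in zip(test_pos, gold_pos))
    return tp / (tp + fn), tn / (tn + fp)
```

Note that when the test under evaluation is itself part of the composite gold standard, as here, specificity is 100% by construction and the informative quantity is sensitivity.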

  17. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

Based on the spatial relation between a primary and secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence was performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value of the precision by means of a confidence interval of the specific measurement. PMID:27044032
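The ellipse method mentioned above rests on a simple geometric relation: an oblique shot leaves an elliptical defect, and the sine of the angle of incidence (measured from the target surface) equals the ratio of the defect's minor to major axis. A minimal sketch of that relation (the function name is an assumption; real casework applies material-specific corrections):

```python
import math

def incidence_angle_deg(minor_axis, major_axis):
    """Ellipse method: estimate the angle of incidence (degrees, measured
    from the target surface) from the axes of an elliptical bullet defect,
    using sin(angle) = minor / major."""
    return math.degrees(math.asin(minor_axis / major_axis))
```

For example, a defect twice as long as it is wide implies an incidence angle of 30°; as the defect approaches a circle the estimate approaches 90° (perpendicular impact).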

  18. Positional Accuracy Assessment of Googleearth in Riyadh

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Algarni, Dafer

    2014-06-01

Google Earth is a virtual globe, map, and geographical information program operated by Google. It maps the Earth by superimposing images obtained from satellite imagery and aerial photography onto a 3D globe. With millions of users all around the globe, GoogleEarth® has become the ultimate source of spatial data and information for private and public decision-support systems, besides many types and forms of social interactions. Many users, mostly in developing countries, are also using it for surveying applications, a practice that raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment study of the positional accuracy of GoogleEarth® imagery in Riyadh, capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the GoogleEarth imagery is 2.18 m and 1.51 m for the horizontal and height coordinates, respectively.

  19. The accuracy of breast volume measurement methods: A systematic review.

    PubMed

    Choppin, S B; Wheat, J S; Gee, M; Goyal, A

    2016-08-01

Breast volume is a key metric in breast surgery and there are a number of different methods which measure it. However, a lack of knowledge regarding a method's accuracy and comparability has made it difficult to establish a clinical standard. We have performed a systematic review of the literature to examine the various techniques for measurement of breast volume and to assess their accuracy and usefulness in clinical practice. Each of the fifteen studies we identified had more than ten live participants and assessed volume measurement accuracy using a gold standard based on the volume, or mass, of a mastectomy specimen. Many of the studies from this review report large (>200 ml) uncertainty in breast volume and many fail to assess measurement accuracy using appropriate statistical tools. Of the methods assessed, MRI scanning consistently demonstrated the highest accuracy, with three studies reporting errors lower than 10% for small (250 ml), medium (500 ml) and large (1000 ml) breasts. However, as MRI is a high-cost, non-routine assessment, other methods may be more appropriate. PMID:27288864

  20. Accuracy assessment of a marker-free method for registration of CT and stereo images applied in image-guided implantology: a phantom study.

    PubMed

    Mohagheghi, Saeed; Ahmadian, Alireza; Yaghoobee, Siamak

    2014-12-01

To assess the accuracy of a proposed marker-free registration method against the conventional marker-based method in an image-guided dental system, and to investigate the best configurations of anatomical landmarks for various surgical fields, a phantom study was performed using a CT-compatible dental phantom containing implanted targets. Two marker-free registration methods were evaluated, the first using dental anatomical landmarks and the second using a reference marker tool. Six implanted markers, distributed in the inner space of the phantom, were used as the targets; the values of target registration error (TRE) for each target were measured and compared with the marker-based method. Then, the effects of different landmark configurations on TRE values, measured using the Parsiss IV Guided Navigation system (Parsiss, Tehran, Iran), were investigated to find the best landmark arrangement for reaching the minimum registration error in each target region. It was shown that marker-free registration can be as precise as the marker-based method. This has a great impact on image-guided implantology systems, whereby the drawbacks of fiducial markers for patient and surgeon are removed. It was also shown that smaller values of TRE could be achieved by using appropriate landmark configurations and moving the center of the landmark set closer to the surgery target. Other common factors would not necessarily decrease the TRE value, so the conventional rules accepted in the clinical community about the ways to reduce TRE should be adapted to the selected field of dental surgery. PMID:25441868
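Target registration error, the metric used throughout this study, is the distance between a target's known physical position and its position after applying the estimated registration transform. A minimal illustrative sketch (the rigid transform representation is an assumption, not the navigation system's internal format):

```python
import numpy as np

def target_registration_error(transform, sources, targets):
    """Mean target registration error: apply an estimated rigid transform
    (3x3 rotation R, 3-vector t) to target positions localised in image
    space and measure their distance to the known physical positions."""
    R, t = transform
    mapped = np.asarray(sources) @ np.asarray(R).T + np.asarray(t)
    d = np.linalg.norm(mapped - np.asarray(targets), axis=1)
    return float(d.mean())
```

Averaging this over the six implanted targets, for each landmark configuration, reproduces the kind of comparison the study reports between marker-free and marker-based registration.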

  1. Accuracy of telepsychiatric assessment of new routine outpatient referrals

    PubMed Central

    Singh, Surendra P; Arya, Dinesh; Peters, Trish

    2007-01-01

    Background Studies on the feasibility of telepsychiatry tend to concentrate only on a subset of clinical parameters. In contrast, this study utilises data from a comprehensive assessment. The main objective of this study is to compare the accuracy of findings from telepsychiatry with those from face to face interviews. Method This is a primary, cross-sectional, single-cluster, balanced crossover, blind study involving new routine psychiatric referrals. Thirty-seven out of forty cases fulfilling the selection criteria went through a complete set of independent face to face and video assessments by the researchers who were blind to each other's findings. Results The accuracy ratio of the pooled results for DSM-IV diagnoses, risk assessment, non-drug and drug interventions were all above 0.76, and the combined overall accuracy ratio was 0.81. There were substantial intermethod agreements for Cohen's kappa on all the major components of evaluation except on the Risk Assessment Scale where there was only weak agreement. Conclusion Telepsychiatric assessment is a dependable method of assessment with a high degree of accuracy and substantial overall intermethod agreement when compared with standard face to face interview for new routine outpatient psychiatric referrals. PMID:17919329

  2. Accuracy Assessment of Altimeter Derived Geostrophic Velocities

    NASA Astrophysics Data System (ADS)

    Leben, R. R.; Powell, B. S.; Born, G. H.; Guinasso, N. L.

    2002-12-01

Along track sea surface height anomaly gradients are proportional to cross track geostrophic velocity anomalies, allowing satellite altimetry to provide much needed satellite observations of changes in the geostrophic component of surface ocean currents. Often, surface height gradients are computed from altimeter data archives that have been corrected to give the most accurate absolute sea level, a practice that may unnecessarily increase the error in the cross track velocity anomalies and thereby require excessive smoothing to mitigate noise. Because differentiation along track acts as a high-pass filter, many of the path length corrections applied to altimeter data for absolute height accuracy are unnecessary for the corresponding gradient calculations. We report on a study to investigate appropriate altimetric corrections and processing techniques for improving geostrophic velocity accuracy. Accuracy is assessed by comparing cross track current measurements from two moorings placed along the descending TOPEX/POSEIDON ground track number 52 in the Gulf of Mexico to the corresponding altimeter velocity estimates. The buoys are deployed and maintained by the Texas Automated Buoy System (TABS) under Interagency Contracts with Texas A&M University. The buoys telemeter observations in near real-time via satellite to the TABS station located at the Geochemical and Environmental Research Group (GERG) at Texas A&M. Buoy M is located in shelf waters of 57 m depth, with a second, Buoy N, 38 km away on the shelf break at 105 m depth. Buoy N has been operational since the beginning of 2002 and has a current meter at 2 m depth providing in situ measurements of surface velocities coincident with Jason and TOPEX/POSEIDON altimeter overflights. This allows one of the first detailed comparisons of shallow-water near-surface current meter time series to coincident altimetry.
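The proportionality the abstract relies on is the geostrophic balance: cross-track velocity anomaly v = (g/f) dη/ds, with η the along-track sea surface height anomaly and f the Coriolis parameter. A minimal sketch of that calculation (illustrative only; the study's actual processing applies corrections and smoothing not shown here):

```python
import numpy as np

def cross_track_velocity(ssha_m, spacing_m, lat_deg):
    """Cross-track geostrophic velocity anomaly (m/s) from along-track
    sea surface height anomalies, via v = (g / f) * d(eta)/ds."""
    g = 9.81                   # gravitational acceleration, m s^-2
    omega = 7.2921e-5          # Earth rotation rate, rad s^-1
    f = 2.0 * omega * np.sin(np.radians(lat_deg))  # Coriolis parameter
    dhds = np.gradient(np.asarray(ssha_m, float), spacing_m)
    return g * dhds / f
```

A 1 mm height change per 10 km of track at 30°N, for example, corresponds to a velocity anomaly of roughly 1.3 cm/s, which illustrates why small residual errors in height corrections matter for velocity estimates.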

  3. Laser focus positioning method with submicrometer accuracy.

    PubMed

    Alexeev, Ilya; Strauss, Johannes; Gröschl, Andreas; Cvecek, Kristian; Schmidt, Michael

    2013-01-20

Accurate positioning of a sample is one of the primary challenges in laser micromanufacturing. A number of methods allow detection of the surface position; however, only a few of them use the beam of the processing laser itself as the basis for the measurement. Such methods have the advantage that any changes in the processing laser beam are inherently accommodated. This work describes a direct, contact-free method, based on nonlinear harmonic generation, to accurately determine the workpiece position with respect to the focal plane of the structuring laser beam. The method makes workpiece alignment precise and time efficient because it is easy to automate, and it provides surface detection repeatability and accuracy of better than 1 μm. PMID:23338188

  4. Accuracy Assessment in Structure from Motion 3D Reconstruction from UAV-Borne Images: The Influence of the Data Processing Methods

    NASA Astrophysics Data System (ADS)

    Caroti, G.; Martínez-Espejo Zaragoza, I.; Piemonte, A.

    2015-08-01

    The evolution of Structure from Motion (SfM) techniques and their integration with the established procedures of classic stereoscopic photogrammetric survey have provided a very effective tool for the production of three-dimensional textured models. Such models are not only aesthetically pleasing but can also contain metric information, the quality of which depends on both the survey type and the applied processing methodologies. An open research topic in this area is checking the attainable accuracy levels. Knowledge of this accuracy is essential, especially when integrating models obtained through SfM with models derived from other sensors or methods (laser scanning, classic photogrammetry, etc.). Accuracy checks may be conducted either by comparing SfM models against a reference model or by measuring the deviation of control points identified on the models and measured with classic topographic instrumentation and methodologies. This paper presents an analysis of attainable accuracy levels according to different approaches to survey and data processing. For this purpose, a survey of the Church of San Miniato in Marcianella (Pisa, Italy) has been used. The dataset integrates laser scanning with terrestrial and UAV-borne photogrammetric surveys; in addition, a high-precision topographic network was established for the specific purpose. In particular, laser scanning has been used for the interior and exterior of the church, excluding the roof, while UAVs have been used for the photogrammetric survey of both the roof, with horizontal strips, and the façade, with vertical strips.

  5. Tracking accuracy assessment for concentrator photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Norton, Matthew S. H.; Anstey, Ben; Bentley, Roger W.; Georghiou, George E.

    2010-10-01

    The accuracy to which a concentrator photovoltaic (CPV) system can track the sun is an important parameter that influences a number of measurements that indicate the performance efficiency of the system. This paper presents work carried out into determining the tracking accuracy of a CPV system, and illustrates the steps involved in gaining an understanding of the tracking accuracy. A Trac-Stat SL1 accuracy monitor has been used in the determination of pointing accuracy and has been integrated into the outdoor CPV module test facility at the Photovoltaic Technology Laboratories in Nicosia, Cyprus. Results from this work are provided to demonstrate how important performance indicators may be presented, and how the reliability of results is improved through the deployment of such accuracy monitors. Finally, recommendations on the use of such sensors are provided as a means to improve the interpretation of real outdoor performance.

  6. [Navigation in implantology: Accuracy assessment regarding the literature].

    PubMed

    Barrak, Ibrahim Ádám; Varga, Endre; Piffko, József

    2016-06-01

    Our objective was to assess the literature regarding the accuracy of the different static guided systems. An electronic literature search yielded 661 articles. After reviewing 139 articles, the authors chose 52 for full-text evaluation; 24 studies involved accuracy measurements. Fourteen of the selected references were clinical and ten were in vitro (model or cadaver). Variance analysis (Tukey's post hoc test; p < 0.05) was conducted to summarize the selected publications. Across 2819 results, the average mean error at the entry point was 0.98 mm. At the level of the apex the average deviation was 1.29 mm, while the mean angular deviation was 3.96 degrees. A significant difference was observed between the two methods of implant placement (partially and fully guided sequence) in terms of deviation at the entry point, at the apex, and in angulation. Different levels of quality and quantity of evidence were available for assessing the accuracy of the different computer-assisted implant placement systems. The rapidly evolving field of digital dentistry and new developments will further improve the accuracy of guided implant placement. To draw dependable conclusions, and for further evaluation of the parameters used for accuracy measurements, randomized, controlled single- or multi-centered clinical trials are necessary. PMID:27544966

  7. Accuracy assessment of GPS satellite orbits

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

    GPS orbit accuracy is examined using several evaluation procedures. The existence is shown of unmodeled effects which correlate with the eclipsing of the sun. The ability to obtain geodetic results that show an accuracy of 1-2 parts in 10 to the 8th or better has not diminished.

  8. Rigorous A-Posteriori Assessment of Accuracy in EMG Decomposition

    PubMed Central

    McGill, Kevin C.; Marateb, Hamid R.

    2010-01-01

    If EMG decomposition is to be a useful tool for scientific investigation, it is essential to know that the results are accurate. Because of background noise, waveform variability, motor-unit action potential (MUAP) indistinguishability, and perplexing superpositions, accuracy assessment is not straightforward. This paper presents a rigorous statistical method for assessing decomposition accuracy based only on evidence from the signal itself. The method uses statistical decision theory in a Bayesian framework to integrate all the shape- and firing-time-related information in the signal to compute an objective a-posteriori measure of confidence in the accuracy of each discharge in the decomposition. The assessment is based on the estimated statistical properties of the MUAPs and noise and takes into account the relative likelihood of every other possible decomposition. The method was tested on 3 pairs of real EMG signals containing 4–7 active MUAP trains per signal that had been decomposed by a human expert. It rated 97% of the identified MUAP discharges as accurate to within ±0.5 ms with a confidence level of 99%, and detected 6 decomposition errors. Cross-checking between signal pairs verified all but 2 of these assertions. These results demonstrate that the approach is reliable and practical for real EMG signals. PMID:20639182

  9. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL

  10. Accuracy Assessment for AG500, Electromagnetic Articulograph

    ERIC Educational Resources Information Center

    Yunusova, Yana; Green, Jordan R.; Mefferd, Antje

    2009-01-01

    Purpose: The goal of this article was to evaluate the accuracy and reliability of the AG500 (Carstens Medizinelectronik, Lenglern, Germany), an electromagnetic device developed recently to register articulatory movements in three dimensions. This technology seems to have unprecedented capabilities to provide rich information about time-varying…

  11. DESIGN AND ANALYSIS FOR THEMATIC MAP ACCURACY ASSESSMENT: FUNDAMENTAL PRINCIPLES

    EPA Science Inventory

    Before being used in scientific investigations and policy decisions, thematic maps constructed from remotely sensed data should be subjected to a statistically rigorous accuracy assessment. The three basic components of an accuracy assessment are: 1) the sampling design used to s...

  12. Assessing accuracy of an electronic provincial medication repository

    PubMed Central

    2012-01-01

    Background Jurisdictional drug information systems are being implemented in many regions around the world. British Columbia, Canada has had a provincial medication dispensing record system, PharmaNet, since 1995. Little is known about how accurately PharmaNet reflects actual medication usage. Methods This prospective, multi-centre study compared pharmacist-collected Best Possible Medication Histories (BPMH) to PharmaNet profiles to assess the accuracy of the PharmaNet profiles for patients receiving a BPMH as part of clinical care. A review panel examined the anonymized BPMHs and discrepancies to estimate the clinical significance of discrepancies. Results 1215 sequential BPMHs were collected and reviewed for this study. 16% of medication profiles were accurate, and 48% of the discrepant profiles were considered potentially clinically significant by the clinical review panel. Cardiac medications tended to be more accurate (e.g. ramipril was accurate >90% of the time), while insulin, warfarin, salbutamol and pain relief medications were often inaccurate (80–85% of the time). Conclusions The PharmaNet medication repository has low accuracy and should be used in conjunction with other sources for medication histories for clinical or research purposes. This finding is consistent with other, smaller medication repository accuracy studies in other jurisdictions. Our study highlights specific medications that tend to be lower in accuracy. PMID:22621690

  13. Accuracy assessment of fluoroscopy-transesophageal echocardiography registration

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.

    2011-03-01

    This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5 mm, which is within the clinical accuracy requirement of 5 mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.

  14. Accuracy assessment of landslide prediction models

    NASA Astrophysics Data System (ADS)

    Othman, A. N.; Mohd, W. M. N. W.; Noraini, S.

    2014-02-01

    The increasing population and the expansion of settlements into hilly areas have greatly increased the impact of natural disasters such as landslides. It is therefore important to develop models that can accurately predict landslide hazard zones, and over the years various techniques and models have been developed for this purpose. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The models are based on nine landslide-inducing parameters: slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters. Four different models, each considering a different parameter combination, were developed by the authors. Results are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9% respectively. From these results, rank sum, rating and pairwise comparison can be useful techniques for predicting landslide hazard zones.
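As a hedged illustration of one of the weighting techniques mentioned above (pairwise comparison in the AHP sense), the sketch below derives parameter weights from a pairwise comparison matrix using the common geometric-mean approximation of the principal eigenvector. The 3x3 judgment matrix is invented for illustration, not the authors' data.

```python
import numpy as np

# Pairwise comparison matrix over three illustrative parameters
# (slope, land use, lithology). Entry A[i, j] is how much more
# important parameter i is judged than parameter j; A[j, i] = 1/A[i, j].
A = np.array([
    [1.0, 3.0, 5.0],   # slope judged 3x land use, 5x lithology
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

gm = A.prod(axis=1) ** (1.0 / A.shape[0])  # row geometric means
weights = gm / gm.sum()                    # normalise so weights sum to 1
```

Each parameter's weight would then multiply its hazard rating before the per-cell scores are summed into a hazard map.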

  15. Assessing the accuracy of prediction algorithms for classification: an overview.

    PubMed

    Baldi, P; Brunak, S; Chauvin, Y; Andersen, C A; Nielsen, H

    2000-05-01

    We provide a unified overview of methods that are currently widely used to assess the accuracy of prediction algorithms, from raw percentages, through quadratic error measures, other distances, and correlation coefficients, to information-theoretic measures such as relative entropy and mutual information. We briefly discuss the advantages and disadvantages of each approach. For classification tasks, we derive new learning algorithms for the design of prediction systems by directly optimising the correlation coefficient. We observe and prove several results relating the sensitivity and specificity of optimal systems. While the principles are general, we illustrate their applicability on specific problems such as protein secondary structure and signal peptide prediction. PMID:10871264
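One of the correlation-based measures this overview covers, the Matthews correlation coefficient, can be computed directly from a 2x2 confusion table; this is a generic sketch, not code from the paper.

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from true/false positive and
    negative counts: +1 for perfect prediction, ~0 for random guessing,
    -1 for total disagreement."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # convention: 0 when any margin is empty

perfect = mcc(50, 50, 0, 0)    # 1.0
inverted = mcc(0, 0, 50, 50)   # -1.0
chance = mcc(25, 25, 25, 25)   # 0.0
```

Unlike raw percentage accuracy, the MCC stays informative on imbalanced classes, which is one reason correlation measures appear alongside percentages in such overviews.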

  16. Assessing the accuracy of different simplified frictional rolling contact algorithms

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Iwnicki, S. D.; Xie, G.; Shackleton, P.

    2012-01-01

    This paper presents an approach for assessing the accuracy of different frictional rolling contact theories. The main characteristic of the approach is that it takes a statistically oriented view. This yields a better insight into the behaviour of the methods in diverse circumstances (varying contact patch ellipticities, mixed longitudinal, lateral and spin creepages) than is obtained when only a small number of (basic) circumstances are used in the comparison. The range of contact parameters that occur for realistic vehicles and tracks are assessed using simulations with the Vampire vehicle system dynamics (VSD) package. This shows that larger values for the spin creepage occur rather frequently. Based on this, our approach is applied to typical cases for which railway VSD packages are used. The results show that particularly the USETAB approach but also FASTSIM give considerably better results than the linear theory, Vermeulen-Johnson, Shen-Hedrick-Elkins and Polach methods, when compared with the 'complete theory' of the CONTACT program.

  17. Assessing the performance of the MM/PBSA and MM/GBSA methods: I. The accuracy of binding free energy calculations based on molecular dynamics simulations

    PubMed Central

    Hou, Tingjun; Wang, Junmei; Li, Youyong; Wang, Wei

    2011-01-01

    The Molecular Mechanics/Poisson-Boltzmann Surface Area (MM/PBSA) and Molecular Mechanics/Generalized Born Surface Area (MM/GBSA) methods calculate binding free energies for macromolecules by combining molecular mechanics calculations with continuum solvation models. To systematically evaluate the performance of these methods, we report here an extensive study of 59 ligands interacting with six different proteins. First, we explored the effects of the length of the molecular dynamics (MD) simulation, ranging from 400 to 4800 ps, and of the solute dielectric constant (1, 2 or 4) on the binding free energies predicted by MM/PBSA. Three important observations emerged: (1) the MD simulation length has an obvious impact on the predictions, and longer MD simulations are not always necessary to achieve better predictions; (2) the predictions are quite sensitive to the solute dielectric constant, and this parameter should be carefully determined according to the characteristics of the protein/ligand binding interface; (3) the conformational entropy showed large fluctuations in MD trajectories, and a large number of snapshots are necessary to achieve stable predictions. Next, we evaluated the accuracy of the binding free energies calculated by three Generalized Born (GB) models. We found that the GB model developed by Onufriev and Case was the most successful in ranking the binding affinities of the studied inhibitors. Finally, we evaluated the performance of MM/GBSA and MM/PBSA in predicting binding free energies. Our results showed that MM/PBSA performed better in calculating absolute, but not necessarily relative, binding free energies than MM/GBSA. Considering its computational efficiency, MM/GBSA can serve as a powerful tool in drug design, where correct ranking of inhibitors is often emphasized. PMID:21117705
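A minimal sketch of the free-energy bookkeeping behind MM/PBSA-style estimates, assuming the usual decomposition G = E_MM + G_polar + G_nonpolar - TS with ΔG_bind = G_complex - G_receptor - G_ligand. The component values below are invented placeholders (in kcal/mol), not data from the study.

```python
def mmpbsa_binding_energy(complex_terms, receptor_terms, ligand_terms):
    """Binding free energy from per-species component dicts, each holding
    the molecular-mechanics energy (E_MM), polar solvation from PB or GB
    (G_polar), nonpolar surface-area solvation (G_SA), and the entropy
    contribution already signed as -T*S (minus_TS), all in kcal/mol."""
    def g(t):
        return t["E_MM"] + t["G_polar"] + t["G_SA"] + t["minus_TS"]
    return g(complex_terms) - g(receptor_terms) - g(ligand_terms)

# Placeholder snapshot-averaged components (illustrative numbers only):
dg = mmpbsa_binding_energy(
    {"E_MM": -120.0, "G_polar": 60.0, "G_SA": -8.0, "minus_TS": 20.0},
    {"E_MM":  -70.0, "G_polar": 40.0, "G_SA": -5.0, "minus_TS": 12.0},
    {"E_MM":  -30.0, "G_polar": 25.0, "G_SA": -2.0, "minus_TS":  6.0},
)  # negative dg indicates favourable binding
```

In practice each term is averaged over many MD snapshots, which is why the abstract stresses snapshot count and entropy fluctuations.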

  18. Accuracy of commercial geocoding: assessment and implications

    PubMed Central

    Whitsel, Eric A; Quibrera, P Miguel; Smith, Richard L; Catellier, Diane J; Liao, Duanping; Henley, Amanda C; Heiss, Gerardo

    2006-01-01

    Background Published studies of geocoding accuracy often focus on a single geographic area, address source or vendor, do not adjust accuracy measures for address characteristics, and do not examine effects of inaccuracy on exposure measures. We addressed these issues in a Women's Health Initiative ancillary study, the Environmental Epidemiology of Arrhythmogenesis in WHI. Results Addresses in 49 U.S. states (n = 3,615) with established coordinates were geocoded by four vendors (A-D). There were important differences among vendors in address match rate (98%; 82%; 81%; 30%), concordance between established and vendor-assigned census tracts (85%; 88%; 87%; 98%) and distance between established and vendor-assigned coordinates (mean ρ [meters]: 1809; 748; 704; 228). Mean ρ was lowest among street-matched, complete, zip-coded, unedited and urban addresses, and addresses with North American Datum of 1983 or World Geodetic System of 1984 coordinates. In mixed models restricted to vendors with minimally acceptable match rates (A-C) and adjusted for address characteristics, within-address correlation, and among-vendor heteroscedasticity of ρ, differences in mean ρ were small for street-type matches (280; 268; 275), i.e. likely to bias results relying on them about equally for most applications. In contrast, differences between centroid-type matches were substantial in some vendor contrasts, but not others (5497; 4303; 4210; p for interaction < 10^-4), i.e. more likely to bias results differently in many applications. The adjusted odds of an address match was higher for vendor A versus C (odds ratio = 66, 95% confidence interval: 47, 93), but not B versus C (OR = 1.1, 95% CI: 0.9, 1.3). That of census tract concordance was no higher for vendor A versus C (OR = 1.0, 95% CI: 0.9, 1.2) or B versus C (OR = 1.1, 95% CI: 0.9, 1.3). Misclassification of a related exposure measure – distance to the nearest highway – increased with mean ρ and in the absence of confounding, non

  19. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.
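The overall and user's accuracies reported for NLCD are standard summaries of an error (confusion) matrix; the sketch below uses a toy 3-class matrix of invented counts, not NLCD data.

```python
import numpy as np

# Error matrix of sample counts: rows are the map (NLCD) class,
# columns are the reference class from interpreted high-resolution imagery.
m = np.array([
    [80,  5,  2],
    [ 6, 70,  9],
    [ 3,  8, 67],
])

overall = np.trace(m) / m.sum()      # fraction of samples mapped correctly
users = np.diag(m) / m.sum(axis=1)   # per-class reliability of the map:
                                     # P(reference agrees | map says class)
```

In a design-based assessment like NLCD's, these cell counts would additionally be weighted by the stratified sampling inclusion probabilities before computing the estimates; the unweighted version above shows only the arithmetic.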

  20. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources, but no statistical analysis had been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive, since it requires collecting independent, well-defined test points; however, quantitative analysis of relative positional error is feasible.

  1. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions – Changes in Accuracy over Time

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2015-01-01

    Background Interest in 3D inertial motion tracking devices (AHRS) has been growing rapidly in the biomechanical community. Although the convenience of such tracking devices seems to open a whole new world of possibilities for evaluation in clinical biomechanics, their limitations have not been extensively documented. The objectives of this study are: 1) to assess the change in absolute and relative accuracy of multiple units of 3 commercially available AHRS over time; and 2) to identify different sources of errors affecting AHRS accuracy and to document how they may affect the measurements over time. Methods This study used an instrumented gimbal table on which AHRS modules were carefully attached and put through a series of velocity-controlled sustained motions, including 2-minute motion trials (2MT) and 12-minute multiple dynamic phases motion trials (12MDP). Absolute accuracy was assessed by comparing the AHRS orientation measurements to those of an optical gold standard. Relative accuracy was evaluated using the variation in relative orientation between modules during the trials. Findings Both absolute and relative accuracy decreased over time during 2MT. 12MDP trials showed a significant decrease in accuracy over multiple phases, but accuracy could be enhanced significantly by resetting the reference point and/or compensating for the initial inertial frame estimation reference for each phase. Interpretation The variation in AHRS accuracy observed between the different systems and over time can be attributed in part to the dynamic estimation error, but also, and foremost, to the ability of AHRS units to locate the same inertial frame. Conclusions Mean accuracies obtained under the gimbal table's sustained conditions of motion suggest that AHRS are promising tools for clinical mobility assessment under constrained conditions of use. However, improvement in magnetic compensation and alignment between AHRS modules are desirable in order for AHRS to reach their
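Absolute orientation accuracy against an optical gold standard, as assessed in this study, is typically summarized as the geodesic angle of the residual rotation between the estimated and reference orientations; the sketch below is a generic formulation, not the study's code.

```python
import numpy as np

def rotation_error_deg(R_est, R_ref):
    """Angle (degrees) of the residual rotation R_est @ R_ref.T, i.e. how
    far the estimated orientation is from the reference, via the
    trace identity cos(theta) = (trace(R) - 1) / 2."""
    c = (np.trace(R_est @ R_ref.T) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))  # clip guards rounding

# A pure 5-degree yaw error about the z axis against an identity reference:
th = np.radians(5.0)
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
err = rotation_error_deg(Rz, np.eye(3))   # recovers 5 degrees
```

Relative accuracy between two AHRS modules can be computed the same way, with the second module's rotation matrix in place of the optical reference.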

  2. Assessing the Accuracy of Various Ab Initio Methods for Geometries and Excitation Energies of Retinal Chromophore Minimal Model by Comparison with CASPT3 Results.

    PubMed

    Grabarek, Dawid; Walczak, Elżbieta; Andruniów, Tadeusz

    2016-05-10

    The effect of the quality of the ground-state geometry on excitation energies in the retinal chromophore minimal model (PSB3) was systematically investigated using various single- (within Møller-Plesset and coupled-cluster frameworks) and multiconfigurational [within complete active space self-consistent field (CASSCF) and CASSCF-based perturbative approaches: second-order CASPT2 and third-order CASPT3] methods. Among investigated methods, only CASPT3 provides geometry in nearly perfect agreement with the CCSD(T)-based equilibrium structure. The second goal of the present study was to assess the performance of the CASPT2 methodology, which is popular in computational spectroscopy of retinals, in describing the excitation energies of low-lying excited states of PSB3 relative to CASPT3 results. The resulting CASPT2 excitation energy error is up to 0.16 eV for the S0 → S1 transition but only up to 0.06 eV for the S0 → S2 transition. Furthermore, CASPT3 excitation energies practically do not depend on modification of the zeroth-order Hamiltonian (so-called IPEA shift parameter), which does dramatically and nonsystematically affect CASPT2 excitation energies. PMID:27049438

  3. Analyses and comparison of accuracy of different genotype imputation methods.

    PubMed

    Pei, Yu-Fang; Li, Jian; Zhang, Lei; Papasian, Christopher J; Deng, Hong-Wen

    2008-01-01

    The power of genetic association analyses is often compromised by missing genotypic data, which contributes to a lack of significant findings, e.g. in in silico replication studies. One solution is to impute untyped SNPs from typed flanking markers, based on known linkage disequilibrium (LD) relationships. Several imputation methods are available and their usefulness in association studies has been demonstrated, but the factors affecting their relative accuracy have not been systematically investigated. We therefore investigated and compared the performance of five popular genotype imputation methods, MACH, IMPUTE, fastPHASE, PLINK and Beagle, to assess and compare the effects of factors that affect imputation accuracy rates (ARs). Our results showed that a stronger LD and a lower MAF for an untyped marker produced better ARs for all five methods. We also observed that a greater number of haplotypes in the reference sample resulted in higher ARs for MACH, IMPUTE, PLINK and Beagle, but had little influence on the ARs for fastPHASE. In general, MACH and IMPUTE produced similar results, and these two methods consistently outperformed fastPHASE, PLINK and Beagle. Our study is helpful in guiding the application of imputation methods in association analyses when genotype data are missing. PMID:18958166
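The accuracy rate (AR) compared across imputation methods here is, in essence, the fraction of masked genotypes that are recovered correctly; a minimal sketch with an assumed 0/1/2 minor-allele-count coding of genotypes (the coding and helper name are illustrative, not from the paper):

```python
def accuracy_rate(true_genotypes, imputed_genotypes):
    """Fraction of masked genotype calls the imputation got right.
    Genotypes are assumed coded as minor-allele counts (0, 1 or 2)."""
    pairs = list(zip(true_genotypes, imputed_genotypes))
    correct = sum(t == i for t, i in pairs)
    return correct / len(pairs)

# 4 of 5 masked genotypes recovered correctly:
ar = accuracy_rate([0, 1, 2, 1, 0], [0, 1, 2, 2, 0])
```

In a benchmark like this one, genotypes at known markers are hidden from the imputation method and the AR is computed over the hidden set, so that LD strength, MAF and reference-panel size can be varied factor by factor.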

  4. Analyses and Comparison of Accuracy of Different Genotype Imputation Methods

    PubMed Central

    Pei, Yu-Fang; Li, Jian; Zhang, Lei; Papasian, Christopher J.; Deng, Hong-Wen

    2008-01-01

    The power of genetic association analyses is often compromised by missing genotypic data which contributes to lack of significant findings, e.g., in in silico replication studies. One solution is to impute untyped SNPs from typed flanking markers, based on known linkage disequilibrium (LD) relationships. Several imputation methods are available and their usefulness in association studies has been demonstrated, but factors affecting their relative performance in accuracy have not been systematically investigated. Therefore, we investigated and compared the performance of five popular genotype imputation methods, MACH, IMPUTE, fastPHASE, PLINK and Beagle, to assess and compare the effects of factors that affect imputation accuracy rates (ARs). Our results showed that a stronger LD and a lower MAF for an untyped marker produced better ARs for all the five methods. We also observed that a greater number of haplotypes in the reference sample resulted in higher ARs for MACH, IMPUTE, PLINK and Beagle, but had little influence on the ARs for fastPHASE. In general, MACH and IMPUTE produced similar results and these two methods consistently outperformed fastPHASE, PLINK and Beagle. Our study is helpful in guiding application of imputation methods in association analyses when genotype data are missing. PMID:18958166

  5. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration In vitro accuracy and precision comparable to that of radiostereometric analysis (RSA).

    PubMed

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-04-01

    Background and purpose - We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Material and methods - Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Results - Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SDSE): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SDSE: 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. Interpretation - CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice. PMID:26634843

  6. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration: in vitro accuracy and precision comparable to that of radiostereometric analysis (RSA)

    PubMed Central

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-01-01

    Background and purpose — We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Material and methods — Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Results — Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SDSE): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SDSE: 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. Interpretation — CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice. PMID:26634843

  7. A method which can enhance the optical-centering accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-min; Zhang, Xue-jun; Dai, Yi-dan; Yu, Tao; Duan, Jia-you; Li, Hua

    2014-09-01

    Optical alignment machining is an effective method of ensuring the co-axiality of an optical system. The co-axiality accuracy is determined by the optical-centering accuracy of each single optical unit, which in turn depends on the rotating accuracy of the lathe and on the optical-centering judgment accuracy. When a rotating accuracy of 0.2 μm can be achieved, the leading error can be ignored. An axis-determination tool based on the principle of auto-collimation is designed to determine the unique position of the centerscope, i.e., the position where the optical axis of the centerscope coincides with the rotating axis of the lathe. A new optical-centering judgment method is also presented. A system combining the axis-determination tool with the new optical-centering judgment method can enhance the optical-centering accuracy to 0.003 mm.

  8. Accuracy Analysis of the PIC Method

    NASA Astrophysics Data System (ADS)

    Verboncoeur, J. P.; Cartwright, K. L.

    2000-10-01

    The discretization errors for many steps of the classical Particle-in-Cell (PIC) model have been well-studied (C. K. Birdsall and A. B. Langdon, Plasma Physics via Computer Simulation, McGraw-Hill, New York, NY (1985).) (R. W. Hockney and J. W. Eastwood, Computer Simulation Using Particles, McGraw-Hill, New York, NY (1981).). In this work, the errors in the interpolation algorithms, which provide the connection between continuum particles and discrete fields, are described in greater detail. In addition, the coupling of errors between steps in the method is derived. The analysis is carried out for both electrostatic and electromagnetic PIC models, and the results are demonstrated using a bounded one-dimensional electrostatic PIC code (J. P. Verboncoeur et al., J. Comput. Phys. 104, 321-328 (1993).), as well as a bounded two-dimensional electromagnetic PIC code (J. P. Verboncoeur et al., Comp. Phys. Comm. 87, 199-211 (1995).).

  9. Assessing the Accuracy of the Precise Point Positioning Technique

    NASA Astrophysics Data System (ADS)

    Bisnath, S. B.; Collins, P.; Seepersad, G.

    2012-12-01

    The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvements. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in rms positioning errors of a few millimetres in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity resolution processing is also carried out, producing slight improvements over the float solution results. These results are categorised into quality classes in order to analyse the root causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. Defining a PPP error budget is a complex task even with the resulting numerical assessment because, unlike the epoch-by-epoch processing in the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
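    The first two steps of the error-budget approach sketched in this abstract (combine ranging error sources, then scale by DOP) can be illustrated as follows. The budget values and the HDOP figure below are invented for illustration, not taken from the study:

    ```python
    import math

    def combined_range_error(sources_m):
        """Root-sum-square of independent ranging error sources (metres)."""
        return math.sqrt(sum(e * e for e in sources_m))

    def position_error(range_error_m, dop):
        """First-order mapping of ranging error to position error via DOP."""
        return dop * range_error_m

    # Hypothetical residual PPP range-error budget (metres) after applying
    # precise orbit and clock products; values are illustrative only.
    sources = {"orbit/clock": 0.02, "troposphere": 0.03, "multipath": 0.05, "noise": 0.01}
    uere = combined_range_error(sources.values())  # user equivalent range error
    horizontal = position_error(uere, dop=1.5)     # assumed HDOP of 1.5
    ```

    The third step, scaling through the filter, depends on the filter design and is not captured by this epoch-wise sketch.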

  10. Accuracy of referrals for visual assessment in a stroke population

    PubMed Central

    Rowe, F J

    2011-01-01

    Purpose To evaluate accuracy of referrals from multidisciplinary stroke teams requesting visual assessments. Patients and methods Multicentre prospective study undertaken in 20 acute Trust hospitals. Stroke survivors referred with suspected visual difficulty were recruited. Standardised screening/referral and investigation forms were used to document data on referral signs and symptoms, plus type and extent of visual impairment. Results Referrals for 799 patients were reviewed: 60% men, 40% women. Mean age at onset of stroke was 69 years (SD 14: range 1–94 years). Signs recorded by referring staff were nil in 58% and positive in the remainder. Symptoms were recorded in 87%. Diagnosis of visual impairment was nil in 8% and positive in the remainder. Sensitivity of referrals (on the basis of signs detected) was calculated as 0.42 with specificity of 0.52. Kappa statistical evaluation of agreement between referral and diagnosis of visual impairment was 0.428 (SE 0.017: 95% confidence interval of −0.048, 0.019). Conclusion More than half of patient referrals were made despite no signs of visual difficulty being recorded by the referring staff. Visual impairment of varying severity was diagnosed in 92% of stroke survivors referred for visual assessment. Referrals were made based predominantly on visual symptoms and because of formal orthoptic liaison in Trusts involved. PMID:21127506
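    The referral-accuracy statistics reported here (sensitivity 0.42, specificity 0.52, kappa) derive from a 2×2 table of referral signs versus diagnosed visual impairment. A minimal sketch of those computations, using invented cell counts rather than the study's data:

    ```python
    def sensitivity(tp, fn):
        """Proportion of impaired patients whose referral recorded signs."""
        return tp / (tp + fn)

    def specificity(tn, fp):
        """Proportion of unimpaired patients whose referral recorded no signs."""
        return tn / (tn + fp)

    def cohen_kappa(tp, fp, fn, tn):
        """Cohen's kappa: agreement between referral signs and diagnosis,
        corrected for agreement expected by chance."""
        n = tp + fp + fn + tn
        observed = (tp + tn) / n
        expected = (((tp + fp) * (tp + fn)) + ((fn + tn) * (fp + tn))) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical 2x2 table: signs recorded (rows) vs impairment diagnosed
    tp, fp, fn, tn = 300, 35, 420, 40
    sens = sensitivity(tp, fn)
    spec = specificity(tn, fp)
    kappa = cohen_kappa(tp, fp, fn, tn)
    ```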

  11. Estimating Classification Consistency and Accuracy for Cognitive Diagnostic Assessment

    ERIC Educational Resources Information Center

    Cui, Ying; Gierl, Mark J.; Chang, Hua-Hua

    2012-01-01

    This article introduces procedures for the computation and asymptotic statistical inference for classification consistency and accuracy indices specifically designed for cognitive diagnostic assessments. The new classification indices can be used as important indicators of the reliability and validity of classification results produced by…

  12. ASSESSING ACCURACY OF NET CHANGE DERIVED FROM LAND COVER MAPS

    EPA Science Inventory

    Net change derived from land-cover maps provides important descriptive information for environmental monitoring and is often used as an input or explanatory variable in environmental models. The sampling design and analysis for assessing net change accuracy differ from traditio...

  13. The Attribute Accuracy Assessment of Land Cover Data in the National Geographic Conditions Survey

    NASA Astrophysics Data System (ADS)

    Ji, X.; Niu, X.

    2014-04-01

    With the widespread national survey of geographic conditions, object-based data have become the most common form of data organization in land cover research. Assessing the accuracy of object-based land cover data bears on many stages of data production, such as the efficiency of in-house production and the quality of the final land cover data. There is therefore considerable demand for accuracy assessment of object-based classification maps. Traditional approaches to accuracy assessment in surveying and mapping were not designed for land cover data, so accuracy assessment methods from imagery classification must be employed. However, traditional pixel-based accuracy assessment methods are inadequate for this purpose, because pixel sample units are not suitable for assessing the accuracy of object-based classification results. Our improved measures are based on the error matrix, using objects as sample units. Compared with pixel samples, the uniformity of object samples changes. To keep the indices derived from the error matrix reliable, we use the areas of the object samples as weights when establishing the error matrix of the object-based image classification map. We compare two error matrices, one built from the number of object samples and one from the sum of object sample areas. The error matrix using the sum of object sample areas proves to be an intuitive, useful technique for reflecting the actual accuracy of object-based imagery classification results.
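    The contrast between a count-based and an area-weighted error matrix described in this abstract can be sketched as follows. The classes, areas, and helper names (`error_matrix`, `overall_accuracy`) are invented for illustration:

    ```python
    from collections import defaultdict

    def error_matrix(samples, weight_by_area=False):
        """Accumulate an error matrix from object samples.

        Each sample is a (mapped_class, reference_class, area) triple.
        With weight_by_area=False each object counts once; with True, each
        cell accumulates object area, as proposed for object-based maps.
        """
        matrix = defaultdict(float)
        for mapped, ref, area in samples:
            matrix[(mapped, ref)] += area if weight_by_area else 1.0
        return dict(matrix)

    def overall_accuracy(matrix):
        """Diagonal total divided by the grand total of the error matrix."""
        total = sum(matrix.values())
        correct = sum(v for (m, r), v in matrix.items() if m == r)
        return correct / total

    # Hypothetical object samples: (mapped class, reference class, area in m^2)
    samples = [("forest", "forest", 900.0), ("forest", "crop", 100.0),
               ("crop", "crop", 400.0), ("water", "water", 600.0)]
    oa_by_count = overall_accuracy(error_matrix(samples))                      # 3/4
    oa_by_area = overall_accuracy(error_matrix(samples, weight_by_area=True))  # 1900/2000
    ```

    Because large, correctly mapped objects dominate the area-weighted matrix, the two overall accuracies can differ substantially for the same sample.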

  14. Accuracy assessment in the Large Area Crop Inventory Experiment

    NASA Technical Reports Server (NTRS)

    Houston, A. G.; Pitts, D. E.; Feiveson, A. H.; Badhwar, G.; Ferguson, M.; Hsu, E.; Potter, J.; Chhikara, R.; Rader, M.; Ahlers, C.

    1979-01-01

    The Accuracy Assessment System (AAS) of the Large Area Crop Inventory Experiment (LACIE) was responsible for determining the accuracy and reliability of LACIE estimates of wheat production, area, and yield, made at regular intervals throughout the crop season, and for investigating the various LACIE error sources, quantifying these errors, and relating them to their causes. Some results of using the AAS during the three years of LACIE are reviewed. As the program culminated, AAS was able not only to meet the goal of obtaining accurate statistical estimates of sampling and classification accuracy, but also the goal of evaluating component labeling errors. Furthermore, the ground-truth data processing matured from collecting data for one crop (small grains) to collecting, quality-checking, and archiving data for all crops in a LACIE small segment.

  15. Teacher Compliance and Accuracy in State Assessment of Student Motor Skill Performance

    ERIC Educational Resources Information Center

    Hall, Tina J.; Hicklin, Lori K.; French, Karen E.

    2015-01-01

    Purpose: The purpose of this study was to investigate teacher compliance with state mandated assessment protocols and teacher accuracy in assessing student motor skill performance. Method: Middle school teachers (N = 116) submitted eighth grade student motor skill performance data from 318 physical education classes to a trained monitoring…

  16. Accuracy of age estimation of radiographic methods using developing teeth.

    PubMed

    Maber, M; Liversidge, H M; Hector, M P

    2006-05-15

    Developing teeth are used to assess maturity and estimate age in a number of disciplines; however, the accuracy of different methods has not been systematically investigated. The aim of this study was to determine the accuracy of several methods. Tooth formation was assessed from radiographs of healthy children attending a dental teaching hospital. The sample comprised 946 children (491 boys, 455 girls, aged 3-16.99 years), with similar numbers of children of Bangladeshi and British Caucasian ethnic origin. Panoramic radiographs were examined and seven mandibular teeth staged according to Demirjian's dental maturity scale [A. Demirjian, Dental development, CD-ROM, Silver Platter Education, University of Montreal, Montreal, 1993-1994; A. Demirjian, H. Goldstein, J.M. Tanner, A new system of dental age assessment, Hum. Biol. 45 (1973) 211-227; A. Demirjian, H. Goldstein, New systems for dental maturity based on seven and four teeth, Ann. Hum. Biol. 3 (1976) 411-421], Nolla [C.M. Nolla, The development of the permanent teeth, J. Dent. Child. 27 (1960) 254-266] and Haavikko [K. Haavikko, The formation and the alveolar and clinical eruption of the permanent teeth. An orthopantomographic study. Proc. Finn. Dent. Soc. 66 (1970) 103-170]. Dental age was calculated for each method, including an adaptation of Demirjian's method with updated scoring [G. Willems, A. Van Olmen, B. Spiessens, C. Carels, Dental age estimation in Belgian children: Demirjian's technique revisited, J. Forensic Sci. 46 (2001) 893-895]. The mean difference (±S.D. in years) between dental and real age was calculated for each method and, in the case of Haavikko, each tooth type, and tested using a t-test. The mean difference was also calculated for the age group 3-13.99 years for Haavikko (mean and individual teeth). Results show that the most accurate method was that of Willems [G. Willems, A. Van Olmen, B. Spiessens, C. Carels, Dental age estimation in Belgian children: Demirjian's technique revisited, J. Forensic Sci

  17. An assessment of the accuracy of orthotropic photoelasticity - Abbreviated report

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Liu, D.

    1984-01-01

    A brief overview is presented of a comprehensive study whose aim was to assess the accuracy of orthotropic photoelasticity. Particular attention is given to calibration of the material, forward testing for global and local behavior, and backward testing for stress determination. The experimentally determined stresses were found to agree with the elasticity solution. It is concluded that orthotropic photoelasticity does not appear to have the resolution of its isotropic counterpart, this being a consequence of the inherent inhomogeneity of the material.

  18. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...

  19. Bilingual Language Assessment: A Meta-Analysis of Diagnostic Accuracy

    ERIC Educational Resources Information Center

    Dollaghan, Christine A.; Horner, Elizabeth A.

    2011-01-01

    Purpose: To describe quality indicators for appraising studies of diagnostic accuracy and to report a meta-analysis of measures for diagnosing language impairment (LI) in bilingual Spanish-English U.S. children. Method: The authors searched electronically and by hand to locate peer-reviewed English-language publications meeting inclusion criteria;…

  20. An improved method for determining force balance calibration accuracy

    NASA Astrophysics Data System (ADS)

    Ferris, Alice T.

    The results of an improved statistical method used at Langley Research Center for determining and stating the accuracy of a force balance calibration are presented. The application of the method for initial loads, initial load determination, auxiliary loads, primary loads, and proof loads is described. The data analysis is briefly addressed.

  1. Standardized accuracy assessment of the calypso wireless transponder tracking system

    NASA Astrophysics Data System (ADS)

    Franz, A. M.; Schmitt, D.; Seitel, A.; Chatrasingh, M.; Echner, G.; Oelfke, U.; Nill, S.; Birkfellner, W.; Maier-Hein, L.

    2014-11-01

    Electromagnetic (EM) tracking allows localization of small EM sensors in a magnetic field of known geometry without line-of-sight. However, this technique requires a cable connection to the tracked object. A wireless alternative based on magnetic fields, referred to as transponder tracking, has been proposed by several authors. Although most transponder tracking systems are still in an early stage of development and not yet ready for clinical use, Varian Medical Systems Inc. (Palo Alto, California, USA) presented the Calypso system, which includes transponder technology, for tumor tracking in radiation therapy. However, it has not yet been used for computer-assisted interventions (CAI) in general, nor assessed for accuracy in a standardized manner. In this study, we apply a standardized assessment protocol presented by Hummel et al (2005 Med. Phys. 32 2371-9) to the Calypso system for the first time. The results show that transponder tracking with the Calypso system provides a precision and accuracy below 1 mm in ideal clinical environments, which is comparable with other EM tracking systems. Similar to other systems, the tracking accuracy was affected by metallic distortion, which led to errors of up to 3.2 mm. The potential of wireless transponder tracking technology for use in many future CAI applications can be regarded as extremely high.

  2. A Framework for the Objective Assessment of Registration Accuracy

    PubMed Central

    Simonetti, Flavio; Foroni, Roberto Israel

    2014-01-01

    Validation and accuracy assessment are the main bottlenecks preventing the adoption of image processing algorithms in the clinical practice. In the classical approach, a posteriori analysis is performed through objective metrics. In this work, a different approach based on Petri nets is proposed. The basic idea consists in predicting the accuracy of a given pipeline based on the identification and characterization of the sources of inaccuracy. The concept is demonstrated on a case study: intrasubject rigid and affine registration of magnetic resonance images. Both synthetic and real data are considered. While synthetic data allow the benchmarking of the performance with respect to the ground truth, real data enable to assess the robustness of the methodology in real contexts as well as to determine the suitability of the use of synthetic data in the training phase. Results revealed a higher correlation and a lower dispersion among the metrics for simulated data, while the opposite trend was observed for pathologic ones. Results show that the proposed model not only provides a good prediction performance but also leads to the optimization of the end-to-end chain in terms of accuracy and robustness, setting the ground for its generalization to different and more complex scenarios. PMID:24659997

  3. A rotating torus phantom for assessing color Doppler accuracy.

    PubMed

    Stewart, S F

    1999-10-01

    A rotating torus phantom was designed to assess the accuracy of color Doppler ultrasound. A thin rubber tube was filled with blood analog fluid and joined at the ends to form a torus, then mounted on a disk submerged in water and rotated at constant speeds by a motor. Flow visualization experiments and finite element analyses demonstrated that the fluid accelerates quickly to the speed of the torus and spins as a solid body. The actual fluid velocity was found to be dependent only on the motor speed and location of the sample volume. The phantom was used to assess the accuracy of Doppler-derived velocities during two-dimensional (2-D) color imaging using a commercial ultrasound system. The Doppler-derived velocities averaged 0.81 +/- 0.11 of the imposed velocity, with the variations significantly dependent on velocity, pulse-repetition frequency and wall filter frequency (p < 0.001). The torus phantom was found to have certain advantages over currently available Doppler accuracy phantoms: 1. It has a high maximum velocity; 2. it has low velocity gradients, simplifying the calibration of 2-D color Doppler; and 3. it uses a real moving fluid that gives a realistic backscatter signal. PMID:10576268
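    Because the fluid in the torus spins as a solid body, the imposed velocity at any sample volume follows directly from the motor speed and the radial location, which is what makes the phantom a convenient reference. A small sketch of that relation, with illustrative values not taken from the study:

    ```python
    import math

    def fluid_velocity(rpm, radius_m):
        """Solid-body rotation: v = omega * r, with omega = 2*pi*rpm / 60."""
        omega = 2.0 * math.pi * rpm / 60.0
        return omega * radius_m

    def doppler_ratio(doppler_velocity, imposed_velocity):
        """Ratio of Doppler-derived to imposed velocity (the study reports
        an average ratio of 0.81 +/- 0.11)."""
        return doppler_velocity / imposed_velocity

    # Hypothetical setting: torus at 30 rpm, sample volume 50 mm from the axis
    v_true = fluid_velocity(30.0, 0.05)
    ```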

  4. Assessing the accuracy of self-reported self-talk

    PubMed Central

    Brinthaupt, Thomas M.; Benson, Scott A.; Kang, Minsoo; Moore, Zaver D.

    2015-01-01

    As with most kinds of inner experience, it is difficult to assess actual self-talk frequency beyond self-reports, given the often hidden and subjective nature of the phenomenon. The Self-Talk Scale (STS; Brinthaupt et al., 2009) is a self-report measure of self-talk frequency that has been shown to possess acceptable reliability and validity. However, no research using the STS has examined the accuracy of respondents’ self-reports. In the present paper, we report a series of studies directly examining the measurement of self-talk frequency and functions using the STS. The studies examine ways to validate self-reported self-talk by (1) comparing STS responses from 6 weeks earlier to recent experiences that might precipitate self-talk, (2) using experience sampling methods to determine whether STS scores are related to recent reports of self-talk over a period of a week, and (3) comparing self-reported STS scores to those provided by a significant other who rated the target on the STS. Results showed that (1) overall self-talk scores, particularly self-critical and self-reinforcing self-talk, were significantly related to reports of context-specific self-talk; (2) high STS scorers reported talking to themselves significantly more often during recent events compared to low STS scorers, and, contrary to expectations, (3) friends reported less agreement than strangers in their self-other self-talk ratings. Implications of the results for the validity of the STS and for measuring self-talk are presented. PMID:25999887

  5. Symmetrizable connection and combined calibration method for accuracy measurement of CMM

    NASA Astrophysics Data System (ADS)

    Fei, Yetai; Xie, Shao-Feng; Chen, Xia-Huai

    1993-09-01

    In this paper, a new method of symmetrizable connection and combined calibration is presented, based on an analysis of the accuracy of the CMM (coordinate measuring machine). The novel measuring principle and a succinct mathematical model are described. The correctness and practicability of the method are proved by experimental comparison. In order to assess CMM accuracy and compensate for errors, all errors should be measured with high accuracy and efficiency, and at the same time a succinct mathematical model should be developed. For this reason, seeking an efficient measuring method for the CMM has long been an important subject in this field. All currently used measuring methods for the CMM have their limitations. To remedy this situation, the symmetrizable connection method is presented; it solves current problems of CMM accuracy verification.

  6. An Assessment of Citizen Contributed Ground Reference Data for Land Cover Map Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Foody, G. M.

    2015-08-01

    It is now widely accepted that an accuracy assessment should be part of a thematic mapping programme. Authoritative good or best practices for accuracy assessment have been defined but are often impractical to implement. Key reasons for this situation are linked to the ground reference data used in the accuracy assessment. Typically, it is a challenge to acquire a large sample of high quality reference cases in accordance to desired sampling designs specified as conforming to good practice and the data collected are normally to some degree imperfect limiting their value to an accuracy assessment which implicitly assumes the use of a gold standard reference. Citizen sensors have great potential to aid aspects of accuracy assessment. In particular, they may be able to act as a source of ground reference data that may, for example, reduce sample size problems but concerns with data quality remain. The relative strengths and limitations of citizen contributed data for accuracy assessment are reviewed in the context of the authoritative good practices defined for studies of land cover by remote sensing. The article will highlight some of the ways that citizen contributed data have been used in accuracy assessment as well as some of the problems that require further attention, and indicate some of the potential ways forward in the future.

  7. Expansion and dissemination of a standardized accuracy and precision assessment technique

    NASA Astrophysics Data System (ADS)

    Kwartowitz, David M.; Riti, Rachel E.; Holmes, David R., III

    2011-03-01

    The advent and development of new imaging techniques and image-guidance have had a major impact on surgical practice. These techniques attempt to allow the clinician to visualize not only what is currently visible, but also what lies beneath the surface, or function. Such systems are often based on tracking systems coupled with registration and visualization technologies. The accuracy and precision of the tracking system is therefore critical to the overall accuracy and precision of the image-guidance system. In this work the accuracy and precision of an Aurora tracking system are assessed, using the technique specified in "A novel technique for analysis of accuracy of magnetic tracking systems used in image guided surgery." This analysis demonstrated that accuracy depends on distance from the tracker's field generator, with an RMS value of 1.48 mm. The error had similar characteristics and values to the previous work, thus validating this method for tracker analysis.

  8. Effect of casting methods on accuracy of peridental restorations.

    PubMed

    Finger, W; Kota, K

    1982-06-01

    The present study has shown that the accuracy of peridental gold alloy castings depends 1) on the type of casting machine used, 2) on the diameter of the casting sprue, and 3) on the strength properties of the investment material. The dependence between the accuracy and the three factors mentioned is based on erosion of the investment mold by the inflow of the liquid casting alloy. The vacuum casting technique proved to be a more gentle casting method than centrifugal and vacuum/pressure techniques. PMID:7051263

  9. The Relationship Between Level of Training and Accuracy of Violence Risk Assessment

    PubMed Central

    Teo, Alan R.; Holley, Sarah R.; Leary, Mark; McNiel, Dale E.

    2016-01-01

    Objective Although clinical training programs aspire to develop competency in violence risk assessment, little research has examined whether level of training is associated with the accuracy of clinicians’ evaluations of violence potential. This is the first study to compare the accuracy of risk assessments by experienced psychiatrists to those of psychiatric residents. It also examined the potential of a structured decision support tool to improve residents’ violence risk assessments. Methods Using a retrospective case control design, medical records were reviewed for 151 patients who assaulted staff at a county hospital and 150 comparison patients. At admission, violence risk assessments had been completed by psychiatric residents (N= 38) for 52 patients, and by attending psychiatrists (N = 41) for 249 patients. Trained, blinded research clinicians coded information available at hospital admission with a structured risk assessment tool, the HCR-20 Clinical (HCR-20-C) scale. Results Receiver operating characteristic analyses showed that clinical estimates of violence risk by attending psychiatrists had significantly higher predictive validity than those of psychiatric residents. Risk assessments by attending psychiatrists were moderately accurate (AUC = .70), whereas risk assessments by residents were no better than chance (AUC = .52). Incremental validity analyses showed that addition of information from the HCR-20-C had the potential to improve the accuracy of risk assessments by residents to a level (AUC = .67) close to that of attending psychiatrists. Conclusions Less training and experience is associated with inaccurate violence risk assessment. Structured methods hold promise for improving training in risk assessment for violence. PMID:22948947
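    The AUC values quoted in this abstract come from receiver operating characteristic analysis. A minimal sketch of the Mann-Whitney formulation of AUC, applied to invented risk ratings (not study data):

    ```python
    def roc_auc(scores_pos, scores_neg):
        """AUC as the probability that a randomly chosen positive case
        (here, a patient who assaulted staff) outscores a randomly chosen
        negative case; ties count one half (Mann-Whitney formulation)."""
        wins = 0.0
        for p in scores_pos:
            for n in scores_neg:
                if p > n:
                    wins += 1.0
                elif p == n:
                    wins += 0.5
        return wins / (len(scores_pos) * len(scores_neg))

    # Hypothetical clinician risk ratings (higher = greater assessed risk)
    assaultive = [4, 3, 3, 2]
    comparison = [1, 2, 2, 1]
    auc = roc_auc(assaultive, comparison)  # 15/16 = 0.9375
    ```

    An AUC of 0.5 corresponds to chance-level prediction, which is how the residents' unaided assessments (AUC = .52) should be read.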

  10. Effects of a rater training on rating accuracy in a physical examination skills assessment

    PubMed Central

    Weitz, Gunther; Vinzentius, Christian; Twesten, Christoph; Lehnert, Hendrik; Bonnemeier, Hendrik; König, Inke R.

    2014-01-01

    Background: The accuracy and reproducibility of medical skills assessment is generally low. Rater training has little or no effect. Our knowledge in this field, however, relies on studies involving video ratings of overall clinical performances. We hypothesised that a rater training focussing on the frame of reference could improve accuracy in grading the curricular assessment of a highly standardised physical head-to-toe examination. Methods: Twenty-one raters assessed the performance of 242 third-year medical students. Eleven raters had been randomly assigned to undergo a brief frame-of-reference training a few days before the assessment. 218 encounters were successfully recorded on video and re-assessed independently by three additional observers. Accuracy was defined as the concordance between the raters' grade and the median of the observers' grade. After the assessment, both students and raters filled in a questionnaire about their views on the assessment. Results: Rater training did not have a measurable influence on accuracy. However, trained raters rated significantly more stringently than untrained raters, and their overall stringency was closer to the stringency of the observers. The questionnaire indicated a higher awareness of the halo effect in the trained raters group. Although the self-assessment of the students mirrored the assessment of the raters in both groups, the students assessed by trained raters felt more discontent with their grade. Conclusions: While training had some marginal effects, it failed to have an impact on the individual accuracy. These results in real-life encounters are consistent with previous studies on rater training using video assessments of clinical performances. The high degree of standardisation in this study was not suitable to harmonize the trained raters’ grading. The data support the notion that the process of appraising medical performance is highly individual. A frame-of-reference training as applied does not

  11. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  12. Comparative assessment of thematic accuracy of GLC maps for specific applications using existing reference data

    NASA Astrophysics Data System (ADS)

    Tsendbazar, N. E.; de Bruin, S.; Mora, B.; Schouten, L.; Herold, M.

    2016-02-01

    Current global land cover (GLC) maps, which serve as inputs to various applications and models, are based on different data sources and methods. Therefore, comparing GLC maps is challenging. Statistical comparison of GLC maps is further complicated by the lack of a reference dataset that is suitable for validating multiple maps. This study utilizes the existing Globcover-2005 reference dataset to compare thematic accuracies of three GLC maps for the year 2005 (Globcover, LC-CCI and MODIS). We translated and reinterpreted the LCCS (land cover classification system) classifier information of the reference dataset into the different map legends. The three maps were evaluated for a variety of applications, i.e., general circulation models, dynamic global vegetation models, agriculture assessments, carbon estimation and biodiversity assessments, using weighted accuracy assessment. Based on the impact of land cover confusions on the overall weighted accuracy of the GLC maps, we identified map improvement priorities. Overall accuracies were 70.8 ± 1.4%, 71.4 ± 1.3%, and 61.3 ± 1.5% for LC-CCI, MODIS, and Globcover, respectively. Weighted accuracy assessments produced increased overall accuracies (80-93%), since not all class confusion errors are important for specific applications. As a common denominator for all applications, the classes mixed trees, shrubs, grasses, and cropland were identified as improvement priorities. The results demonstrate the necessity of accounting for dissimilarities in the importance of map classification errors for different user applications. To determine the fitness for use of GLC maps, the accuracy of GLC maps should be assessed per application; there is no single-figure accuracy estimate expressing map fitness for all purposes.
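
    The weighted accuracy assessment described above can be sketched as follows. The class names and weights below are hypothetical, chosen only to illustrate how down-weighting application-irrelevant confusions raises the overall figure:

```python
def weighted_overall_accuracy(reference, predicted, weights):
    """Overall accuracy where each (reference, predicted) confusion carries an
    application-specific weight in [0, 1]: 1 = full agreement, 0 = an error
    that matters fully for this application."""
    assert len(reference) == len(predicted)
    total = 0.0
    for r, p in zip(reference, predicted):
        # weights[(r, p)] scores how acceptable labelling r as p is
        total += weights.get((r, p), 1.0 if r == p else 0.0)
    return total / len(reference)

# Hypothetical example: for some application, confusing "shrub" with "grass"
# (or "tree" with "shrub") counts as only a partial error (weight 0.5).
ref  = ["tree", "shrub", "grass", "crop", "tree", "grass"]
pred = ["tree", "grass", "grass", "crop", "shrub", "grass"]
weights = {("shrub", "grass"): 0.5, ("tree", "shrub"): 0.5}

plain = sum(r == p for r, p in zip(ref, pred)) / len(ref)
weighted = weighted_overall_accuracy(ref, pred, weights)
print(round(plain, 3), round(weighted, 3))
```

    As in the study, the weighted figure exceeds the plain overall accuracy because some confusions are discounted for the application at hand.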

  13. Accuracy Assessment of a Uav-Based Landslide Monitoring System

    NASA Astrophysics Data System (ADS)

    Peppa, M. V.; Mills, J. P.; Moore, P.; Miller, P. E.; Chambers, J. E.

    2016-06-01

    Landslides are hazardous events with often disastrous consequences. Monitoring landslides with observations of high spatio-temporal resolution can help mitigate such hazards. Mini unmanned aerial vehicles (UAVs) complemented by structure-from-motion (SfM) photogrammetry and modern per-pixel image matching algorithms can deliver a time series of landslide elevation models in an automated and inexpensive way. This research investigates the potential of a mini UAV, equipped with a Panasonic Lumix DMC-LX5 compact camera, to provide surface deformations at acceptable levels of accuracy for landslide assessment. The study adopts a self-calibrating bundle adjustment-SfM pipeline using ground control points (GCPs). It evaluates misalignment biases and unresolved systematic errors that are transferred through the SfM process into the derived elevation models. To cross-validate the research outputs, results are compared to benchmark observations obtained by standard surveying techniques. The data were collected at a 6 cm ground sample distance (GSD) and are shown to achieve planimetric and vertical accuracies of a few centimetres at independent check points (ICPs). The co-registration error of the generated elevation models is also examined in areas of stable terrain. Through this error assessment, the study estimates that the vertical sensitivity to real terrain change of the tested landslide is equal to 9 cm.

  14. Accuracy of Revised and Traditional Parallel Analyses for Assessing Dimensionality with Binary Data

    ERIC Educational Resources Information Center

    Green, Samuel B.; Redell, Nickalus; Thompson, Marilyn S.; Levy, Roy

    2016-01-01

    Parallel analysis (PA) is a useful empirical tool for assessing the number of factors in exploratory factor analysis. On conceptual and empirical grounds, we argue for a revision to PA that makes it more consistent with hypothesis testing. Using Monte Carlo methods, we evaluated the relative accuracy of the revised PA (R-PA) and traditional PA…

  15. A note on the accuracy of spectral method applied to nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    The Fourier spectral method can achieve exponential accuracy both at the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, the Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of the Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However, the numerical solution does contain accurate information which can be extracted by post-processing based on Gegenbauer polynomials.
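
    The contrast the note draws for the linear case (poor pointwise accuracy near a discontinuity, but highly accurate moments against analytic functions) can be checked numerically. The following minimal sketch uses the square wave sign(x) on [-pi, pi], whose moment against sin(x) is exactly 4:

```python
import math

def S_N(x, N):
    # Partial Fourier sum of sign(x) on [-pi, pi]: (4/pi) * sum_{k odd} sin(kx)/k
    return (4 / math.pi) * sum(math.sin(k * x) / k for k in range(1, N + 1, 2))

N = 21

# Pointwise error near the discontinuity at x = 0 stays O(1) (Gibbs phenomenon)
xs = [i * math.pi / 2000 for i in range(1, 200)]
max_err = max(abs(S_N(x, N) - 1.0) for x in xs)

# Moment against the analytic test function sin(x), via the periodic
# trapezoid rule (exact for trigonometric polynomials of this degree)
M = 4000
h = 2 * math.pi / M
moment = h * sum(S_N(-math.pi + i * h, N) * math.sin(-math.pi + i * h)
                 for i in range(M))
exact = 4.0  # integral of sign(x) * sin(x) over [-pi, pi]
print(max_err, moment)
```

    The pointwise error is order one, while the moment matches the exact value essentially to machine precision, illustrating the linear-case behaviour the note contrasts with the nonlinear case.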

  16. Accuracy assessment of gridded precipitation datasets in the Himalayas

    NASA Astrophysics Data System (ADS)

    Khan, A.

    2015-12-01

    Accurate precipitation data are vital for hydro-climatic modelling and water resources assessments. Based on mass balance calculations and Turc-Budyko analysis, this study investigates the accuracy of twelve widely used gridded precipitation datasets for sub-basins in the Upper Indus Basin (UIB) in the Himalayas-Karakoram-Hindukush (HKH) region. These datasets are: 1) Global Precipitation Climatology Project (GPCP), 2) Climate Prediction Centre (CPC) Merged Analysis of Precipitation (CMAP), 3) NCEP/NCAR, 4) Global Precipitation Climatology Centre (GPCC), 5) Climatic Research Unit (CRU), 6) Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), 7) Tropical Rainfall Measuring Mission (TRMM), 8) European Reanalysis (ERA) interim data, 9) PRINCETON, 10) European Reanalysis-40 (ERA-40), 11) Willmott and Matsuura, and 12) WATCH Forcing Data based on ERA interim (WFDEI). Precipitation accuracy and consistency were assessed by physical mass balance involving the sum of annual measured flow, estimated actual evapotranspiration (average of 4 datasets), estimated glacier mass balance melt contribution (average of 4 datasets), and ground water recharge (average of 3 datasets), during 1999-2010. The mass balance assessment was complemented by Turc-Budyko non-dimensional analysis, where annual precipitation, measured flow and potential evapotranspiration (average of 5 datasets) data were used for the same period. Both analyses suggest that all tested precipitation datasets significantly underestimate precipitation in the Karakoram sub-basins. For the Hindukush and Himalayan sub-basins most datasets underestimate precipitation, except ERA-interim and ERA-40.
The analysis indicates that for this large region with complicated terrain features and stark spatial precipitation gradients the reanalysis datasets have better consistency with flow measurements than datasets derived from records of only sparsely distributed climatic

  17. Methods for the computation of detailed geoids and their accuracy

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.; Rummel, R.

    1975-01-01

    Two methods for the computation of geoid undulations using potential coefficients and 1 deg x 1 deg terrestrial anomaly data are examined. It was found that both methods give the same final result but that one method allows a more simplified error analysis. Specific equations were considered for the effect of the mass of the atmosphere and a cap dependent zero-order undulation term was derived. Although a correction to a gravity anomaly for the effect of the atmosphere is only about -0.87 mgal, this correction causes a fairly large undulation correction that was not considered previously. The accuracy of a geoid undulation computed by these techniques was estimated considering anomaly data errors, potential coefficient errors, and truncation (only a finite set of potential coefficients being used) errors. It was found that an optimum cap size of 20 deg should be used. The geoid and its accuracy were computed in the Geos 3 calibration area using the GEM 6 potential coefficients and 1 deg x 1 deg terrestrial anomaly data. The accuracy of the computed geoid is on the order of plus or minus 2 m with respect to an unknown set of best earth parameter constants.

  18. Accuracy assessment of contextual classification results for vegetation mapping

    NASA Astrophysics Data System (ADS)

    Thoonen, Guy; Hufkens, Koen; Borre, Jeroen Vanden; Spanhove, Toon; Scheunders, Paul

    2012-04-01

    A new procedure for quantitatively assessing the geometric accuracy of thematic maps, obtained from classifying hyperspectral remote sensing data, is presented. More specifically, the methodology is aimed at the comparison between results from any of the currently popular contextual classification strategies. The proposed procedure characterises the shapes of all objects in a classified image by defining an appropriate reference and a new quality measure. The results from the proposed procedure are represented in an intuitive way, by means of an error matrix, analogous to the confusion matrix used in traditional thematic accuracy representation. A suitable application for the methodology is vegetation mapping, where many closely related and spatially connected land cover types are to be distinguished. Consequently, the procedure is tested on a heathland vegetation mapping problem, related to Natura 2000 habitat monitoring. Object-based mapping and Markov Random Field classification results are compared, showing that the selected Markov Random Field approach is more suitable for the fine-scale problem at hand, which is confirmed by the proposed procedure.

  19. High accuracy operon prediction method based on STRING database scores.

    PubMed

    Taboada, Blanca; Verde, Cristina; Merino, Enrique

    2010-07-01

    We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when using one organism's data set for the training procedure and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even for these cases, the accuracies reached with our method were outstandingly high, 91.5 and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully-sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/. PMID:20385580
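
    The paper's actual network architecture and training details are not given in this abstract; the following is only a toy sketch of the underlying idea, a single logistic unit trained on the two features named above (intergenic distance and a STRING-like functional score), with entirely made-up data:

```python
import math

# Hypothetical training data: (intergenic distance in bp, STRING-like
# functional score in [0, 1], label) with label 1 = same operon.
pairs = [(10, 0.90, 1), (25, 0.80, 1), (5, 0.95, 1), (40, 0.70, 1),
         (300, 0.10, 0), (250, 0.20, 0), (500, 0.05, 0), (150, 0.30, 0)]

sigmoid = lambda z: 1 / (1 + math.exp(-z))
w_d, w_s, b = 0.0, 0.0, 0.0  # weights for distance, score, and bias
lr = 0.5

for _ in range(2000):                     # stochastic gradient descent on log-loss
    for d, s, y in pairs:
        d_norm = d / 500.0                # crude feature scaling
        err = sigmoid(w_d * d_norm + w_s * s + b) - y
        w_d -= lr * err * d_norm
        w_s -= lr * err * s
        b   -= lr * err

predict = lambda d, s: sigmoid(w_d * d / 500.0 + w_s * s + b) > 0.5
print(predict(15, 0.85), predict(400, 0.1))
```

    A close gene pair with a high functional score is predicted to be in the same operon; a distant, functionally unrelated pair is not.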

  20. Free Mesh Method: fundamental conception, algorithms and accuracy study

    PubMed Central

    YAGAWA, Genki

    2011-01-01

    The finite element method (FEM) has been commonly employed in a variety of fields as a computer simulation method to solve such problems as solid, fluid, electro-magnetic phenomena and so on. However, creation of a quality mesh for the problem domain is a prerequisite when using FEM, which becomes a major part of the cost of a simulation. It is natural that the concept of meshless method has evolved. The free mesh method (FMM) is among the typical meshless methods intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, especially on parallel processors. FMM is an efficient node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm for the finite element calculations. In this paper, FMM and its variation are reviewed focusing on their fundamental conception, algorithms and accuracy. PMID:21558752

  1. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely the concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real-world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank-order correlation coefficient between the spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs considered here showed similar, close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for the decay time of 240 h
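
    Spearman's rank-order correlation, used here as the accuracy measure, can be computed without library support. The gridded source strengths below are hypothetical:

```python
def ranks(values):
    """1-based average ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1             # average rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rank-order correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical per-cell source strengths: known virtual vs reconstructed
known         = [0.0, 1.0, 3.0, 2.0, 5.0, 4.0]
reconstructed = [0.1, 0.8, 2.5, 2.6, 4.0, 4.2]
print(round(spearman(known, reconstructed), 3))
```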

  2. Increasing accuracy in the assessment of motion sickness: A construct methodology

    NASA Technical Reports Server (NTRS)

    Stout, Cynthia S.; Cowings, Patricia S.

    1993-01-01

    The purpose is to introduce a new methodology that should improve the accuracy of the assessment of motion sickness. This construct methodology utilizes both subjective reports of motion sickness and objective measures of physiological correlates to assess motion sickness. Current techniques and methods used in the framework of a construct methodology are inadequate. Current assessment techniques for diagnosing motion sickness and space motion sickness are reviewed, and attention is called to the problems with the current methods. Further, principles of psychophysiology that when applied will probably resolve some of these problems are described in detail.

  3. Accuracy of scanography using storage phosphor plate systems and film for assessment of mandibular third molars

    PubMed Central

    Matzen, LH; Christensen, J; Wenzel, A

    2011-01-01

    Objectives The aim of this study was to compare the diagnostic accuracy of two digital photostimulable storage phosphor (PSP) systems and film for assessment of mandibular third molars before surgery. Methods 110 patients were referred to have both their mandibular third molars removed. Each patient underwent a radiographic examination with scanography using either Digora (Soredex, Helsinki, Finland) and film or VistaScan (Dürr Dental, Beitigheim-Bissingen, Germany) and film in a randomized paired design. Two observers examined the following variables on the scanograms: bone coverage, angulation of the tooth in the bone, number of roots, root morphology and the relationship to the mandibular canal. In 75 of the pairs (Digora/film pair = 38 and Vista/film pair = 37) both third molars were eventually removed. During and after surgery the same variables were assessed, which served as reference standard for the radiographic assessments. The Wilcoxon signed-rank test tested differences in accuracy (radiographic compared with surgical findings) between Digora/film and between Vista/film. Results There was no statistically significant difference between the diagnostic accuracy of film and either of the two digital receptors for assessment of mandibular third molars before surgery (P > 0.05), although Digora obtained a higher accuracy than film. Conclusions Scanography is a valuable method for examination of mandibular third molars before removal and the PSP digital receptors in this study were equal to film for this purpose. PMID:21697156

  4. Techniques for accuracy assessment of tree locations extracted from remotely sensed imagery.

    PubMed

    Nelson, Trisalyn; Boots, Barry; Wulder, Michael A

    2005-02-01

    Remotely sensed imagery is becoming a common source of environmental data. Consequently, there is an increasing need for tools to assess the accuracy and information content of such data. Particularly when the spatial resolution of imagery is fine, the accuracy of image processing is determined by comparisons with field data. However, the nature of error is more difficult to assess. In this paper we describe a set of tools intended for such an assessment when tree objects are extracted and field data are available for comparison. These techniques are demonstrated on individual tree locations extracted from an IKONOS image via local maximum filtering. The locations of the extracted trees are compared with field data to determine the number of found and missed trees. Aspatial and spatial (Voronoi) analysis methods are used to examine the nature of errors by searching for trends in characteristics of found and missed trees. As well, analysis is conducted to assess the information content of found trees. PMID:15644266
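
    Local maximum filtering of the kind used above to extract tree locations can be sketched as a strict-maximum test over a moving window. The grid values below are hypothetical:

```python
def local_maxima(grid, window=1):
    """Return (row, col) of cells that are strict maxima of their
    (2*window+1)^2 neighbourhood: candidate tree tops in a brightness image."""
    rows, cols = len(grid), len(grid[0])
    peaks = []
    for r in range(rows):
        for c in range(cols):
            v = grid[r][c]
            neighbours = [grid[rr][cc]
                          for rr in range(max(0, r - window), min(rows, r + window + 1))
                          for cc in range(max(0, c - window), min(cols, c + window + 1))
                          if (rr, cc) != (r, c)]
            if all(v > n for n in neighbours):
                peaks.append((r, c))
    return peaks

# Hypothetical 5x5 brightness grid with two bright tree crowns
grid = [[1, 1, 1, 1, 1],
        [1, 9, 1, 1, 1],
        [1, 1, 1, 1, 1],
        [1, 1, 1, 8, 1],
        [1, 1, 1, 1, 1]]
print(local_maxima(grid))  # → [(1, 1), (3, 3)]
```

    The extracted locations would then be matched against field data to count found and missed trees, as in the paper.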

  5. Accuracy assessment of novel two-axes rotating and single-axis translating calibration equipment

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Ye, Dong; Che, Rensheng

    2009-11-01

    There is a new method in which rocket nozzle 3D motion is measured by a motion tracking system based on passive optical markers. However, an important issue remains to be resolved: how to assess the accuracy of the rocket nozzle motion test. Therefore, calibration equipment was designed and manufactured to generate ground-truth nozzle model motion such as translation, angle, velocity, and angular velocity. It consists of a base, a lifting platform, a rotary table and a rocket nozzle model with precise geometry. The nozzle model, with the markers attached, is installed on the rotary table, which can translate or rotate at a known velocity. The overall accuracy of the rocket nozzle motion test is evaluated by comparing the truth values with the static and dynamic test data. This paper puts emphasis on the accuracy assessment of the novel two-axis rotating and single-axis translating calibration equipment. By substituting measured values of the error sources into the error model, the pointing error is shown to be less than 0.005 deg, the rotation center position error 0.08 mm, and the rate stability better than 10^-3. The calibration equipment accuracy is thus much higher than that of the nozzle motion test system, so the former can be used to assess and calibrate the latter.

  6. Accuracy assessment of a surface electromyogram decomposition system in human first dorsal interosseus muscle

    NASA Astrophysics Data System (ADS)

    Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.

    2014-04-01

    Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
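
    The two-source agreement between decomposed spike trains can be sketched as a tolerance-based greedy matching of sorted discharge times. The tolerance and spike times below are hypothetical, not the values used in the study:

```python
def agreement(ref_spikes, test_spikes, tol=0.005):
    """Fraction of reference spike times (in seconds) matched one-to-one by a
    test discharge within +/- tol seconds (greedy matching on sorted trains)."""
    matched, j = 0, 0
    test = sorted(test_spikes)
    for t in sorted(ref_spikes):
        while j < len(test) and test[j] < t - tol:
            j += 1                        # skip test spikes too early to match
        if j < len(test) and abs(test[j] - t) <= tol:
            matched += 1
            j += 1                        # each test spike matches at most once
    return matched / len(ref_spikes)

# Hypothetical intramuscular (reference) vs surface-decomposed spike trains
intra   = [0.100, 0.210, 0.330, 0.445, 0.560]
surface = [0.101, 0.208, 0.331, 0.500, 0.561]
print(agreement(intra, surface))  # → 0.8
```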

  7. Researches on High Accuracy Prediction Methods of Earth Orientation Parameters

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.

    2015-09-01

    The Earth's rotation reflects the coupling processes among the solid Earth, atmosphere, oceans, mantle, and core on multiple spatial and temporal scales. The Earth's rotation can be described by the Earth orientation parameters, abbreviated EOP (mainly including the two polar motion components PM_X and PM_Y, and the variation in the length of day ΔLOD). The EOP are crucial in the transformation between the terrestrial and celestial reference systems, and have important applications in many areas such as deep space exploration, satellite precise orbit determination, and astrogeodynamics. However, the EOP products obtained by space geodetic technologies are generally delayed by several days to two weeks. The growing demands of modern space navigation make high-accuracy EOP prediction a worthy topic. This thesis is composed of the following three aspects, for the purpose of improving EOP forecast accuracy. (1) We analyze the relation between the length of the basic data series and the EOP forecast accuracy, and compare the EOP prediction accuracy of the linear autoregressive (AR) model and the nonlinear artificial neural network (ANN) method by performing least squares (LS) extrapolations. The results show that high-precision EOP forecasts can be realized by appropriate selection of the basic data series length according to the required time span of the EOP prediction: for short-term prediction, the basic data series should be shorter, while for long-term prediction, the series should be longer. The analysis also showed that the LS+AR model is more suitable for short-term forecasts, while the LS+ANN model shows advantages in medium- and long-term forecasts. (2) We develop for the first time a new method which combines the autoregressive model and the Kalman filter (AR+Kalman) in short-term EOP prediction. The equations of observation and state are established using the EOP series and the autoregressive coefficients
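
    A minimal stand-in for the LS+AR idea in point (1): fit a deterministic part by least squares, model the residuals autoregressively, and extrapolate both. Real EOP prediction also fits seasonal terms and higher-order AR models; this sketch uses only a linear trend and AR(1), on toy data:

```python
def ls_ar1_forecast(series, steps):
    """Least-squares linear trend plus AR(1) on the residuals (a minimal
    sketch of the LS+AR scheme, not the thesis's actual model)."""
    n = len(series)
    t = list(range(n))
    tm, ym = sum(t) / n, sum(series) / n
    slope = (sum((a - tm) * (b - ym) for a, b in zip(t, series))
             / sum((a - tm) ** 2 for a in t))
    intercept = ym - slope * tm
    resid = [y - (intercept + slope * i) for i, y in enumerate(series)]
    # AR(1) coefficient via lag-1 least squares on the residuals
    num = sum(resid[i] * resid[i - 1] for i in range(1, n))
    den = sum(r * r for r in resid[:-1])
    phi = num / den if den else 0.0
    out, r = [], resid[-1]
    for k in range(1, steps + 1):
        r *= phi                          # propagate the residual forecast
        out.append(intercept + slope * (n - 1 + k) + r)
    return out

# Toy series: linear trend with small residual structure
series = [0.0, 1.1, 1.9, 3.2, 3.9, 5.1, 5.9, 7.2, 7.9, 9.1]
print(ls_ar1_forecast(series, 3))
```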

  8. Laboratory assessment of impression accuracy by clinical simulation.

    PubMed

    Wassell, R W; Abuasi, H A

    1992-04-01

    Some laboratory tests of impression material accuracy mimic the clinical situation (simulatory), while others attempt to quantify a material's individual properties. This review concentrates on simulatory testing and aims to give a classification of the numerous tests available. Measurements can be made of the impression itself or of the resulting cast. Cast measurements are divided into those made of individual dies and those made of interdie relations. Contact measurement techniques have the advantage of simplicity but are potentially inaccurate because of die abrasion. Non-contact techniques can overcome the abrasion problem, but the measurements, especially those made in three dimensions, may be difficult to interpret. Nevertheless, provided that care is taken to avoid parallax error, non-contact methods are preferable, as experimental variables are easier to control. Where measurements are made of individual dies, these should include the die width across the finishing line, as occlusal width measurements provide only limited information. A new concept of 'differential die distortion' (the dimensional difference from the master model in one plane minus the dimensional difference in the perpendicular plane) provides a clinically relevant method of interpreting dimensional changes. Where measurements are made between dies, movement of the individual dies within the master model must be prevented. Many of the test methods can be criticized as providing clinically unrealistic master models/dies or impression trays. Phantom head typodonts form a useful basis for the morphology of master models, provided that undercuts are standardized and the master model temperature is adequately controlled. PMID:1564180
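
    The 'differential die distortion' defined above is simple arithmetic; with hypothetical measurements:

```python
def differential_die_distortion(master_width, die_width, master_height, die_height):
    """Differential die distortion: dimensional difference from the master
    model in one plane minus the difference in the perpendicular plane (mm)."""
    return (die_width - master_width) - (die_height - master_height)

# Hypothetical master-model and die dimensions in mm
print(differential_die_distortion(8.00, 8.03, 6.50, 6.49))
```

    A die slightly oversized in one plane and undersized in the perpendicular plane yields a positive differential distortion, the clinically relevant combined figure.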

  9. Accuracy of Four Tooth Size Prediction Methods on Malay Population

    PubMed Central

    Mahmoud, Belal Khaled; Abu Asab, Saifeddin Hamed I.; Taib, Haslina

    2012-01-01

    Objective. To examine the accuracy of the Moyers 50%, Tanaka and Johnston, Ling and Wong, and Jaroontham and Godfrey methods in predicting the mesio-distal crown width of the permanent canines and premolars (C + P1 + P2) in a Malay population. Materials and Methods. The study models of 240 Malay children (120 males and 120 females) aged 14 to 18 years, all free of any signs of dental pathology or anomalies, were measured using a digital caliper accurate to 0.01 mm. The predicted widths (C + P1 + P2) in both arches derived from the tested prediction equations were compared with the actual measured widths. Results. The Moyers and the Tanaka and Johnston methods showed significant differences between the actual and predicted widths of (C + P1 + P2) for both sexes. The Ling and Wong method also showed a statistically significant difference for males; however, there was no significant difference for females. The Jaroontham and Godfrey method showed a statistically significant difference for females, but the male values did not show any significant difference. Conclusion. For Malay males, the method proposed by Jaroontham and Godfrey for Thai males proved to be highly accurate. For Malay females, the method proposed by Ling and Wong for southern Chinese females proved to be highly accurate. PMID:23209918
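
    As a concrete example of one of the tested methods: the Tanaka and Johnston prediction is commonly cited as half the sum of the widths of the four mandibular incisors plus a constant (11.0 mm for a maxillary quadrant, 10.5 mm for a mandibular quadrant). The patient value below is hypothetical:

```python
def tanaka_johnston(lower_incisor_sum_mm, arch):
    """Commonly cited Tanaka-Johnston estimate of the combined mesio-distal
    width of the canine and two premolars (C + P1 + P2) in one quadrant."""
    half = lower_incisor_sum_mm / 2
    return half + (11.0 if arch == "maxillary" else 10.5)

# Hypothetical patient: sum of the four mandibular incisor widths = 23.0 mm
print(tanaka_johnston(23.0, "maxillary"), tanaka_johnston(23.0, "mandibular"))  # → 22.5 22.0
```

    Studies like this one compare such predicted widths against calipered widths of the erupted teeth to test the equation's transferability across populations.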

  10. Assessing the Accuracy of Landscape-Scale Phenology Products

    NASA Astrophysics Data System (ADS)

    Morisette, Jeffrey T.; Nightingale, Joanne; Nickeson, Jaime

    2010-11-01

    An International Workshop on the Validation of Satellite-Based Phenology Products; Dublin, Ireland, 18 June 2010; A 1-day international workshop on the accuracy assessment of phenology products derived from satellite observations of the land surface was held at Trinity College Dublin. This was in conjunction with the larger 4-day Phenology 2010 conference. Phenology is the study of recurring plant and animal life cycle stages (such as leafing and flowering, maturation of agricultural plants, emergence of insects, and migration of birds). The workshop brought together producers of continental- to global-scale phenology products based on satellite data, as well as providers of field observations and tower-mounted near-surface imaging sensors whose data are useful for evaluating the satellite products. The meeting was held under the auspices of the Committee on Earth Observing Satellites (CEOS) Land Product Validation (LPV) subgroup. The mission of LPV is to foster quantitative validation of high-level global land products derived from remotely sensed data and relay results that are relevant to users.

  11. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.

  12. Accuracy assessment of high-rate GPS measurements for seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Davis, J. L.; Ekström, G.

    2007-12-01

    Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.

  13. Method for high-accuracy multiplicity-correlation measurements

    NASA Astrophysics Data System (ADS)

    Gulbrandsen, K.; Søgaard, C.

    2016-04-01

    Multiplicity-correlation measurements provide insight into the dynamics of high-energy collisions. Models describing these collisions need these correlation measurements to tune the strengths of the underlying QCD processes which influence all observables. Detectors, however, often possess limited coverage or reduced efficiency that influence correlation measurements in obscure ways. In this paper, the effects of nonuniform detection acceptance and efficiency on the measurement of multiplicity correlations between two distinct detector regions (termed forward-backward correlations) are derived. An analysis method with such effects built in is developed and subsequently verified using different event generators. The resulting method accounts for acceptance and efficiency in a model-independent manner with high accuracy, thereby shedding light on the relative contributions of the underlying processes to particle production.
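
    A forward-backward multiplicity correlation of the kind discussed is commonly quantified by the regression slope b = Cov(nF, nB) / Var(nF). The toy event generator below is hypothetical, using a shared per-event activity to induce the correlation; the acceptance and efficiency effects that are the paper's actual subject are not modelled:

```python
import random

def fb_correlation(events):
    """Forward-backward correlation strength b = Cov(nF, nB) / Var(nF),
    computed from per-event multiplicities (nF, nB) in two detector regions."""
    n = len(events)
    mf = sum(f for f, _ in events) / n
    mb = sum(b for _, b in events) / n
    cov = sum((f - mf) * (b - mb) for f, b in events) / n
    var = sum((f - mf) ** 2 for f, _ in events) / n
    return cov / var

# Toy generator: a common per-event 'activity' drives both regions, so the
# forward and backward multiplicities are positively correlated.
random.seed(1)
events = []
for _ in range(20000):
    activity = random.randint(5, 50)
    nf = sum(random.random() < 0.5 for _ in range(activity))
    nb = sum(random.random() < 0.5 for _ in range(activity))
    events.append((nf, nb))
print(round(fb_correlation(events), 3))
```

    Nonuniform acceptance or efficiency in either region would bias nF and nB, and hence b, which is exactly the distortion the paper's method is built to remove.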

  14. Radiative accuracy assessment of CrIS upper level channels using COSMIC RO data

    NASA Astrophysics Data System (ADS)

    Qi, C.; Weng, F.; Han, Y.; Lin, L.; Chen, Y.; Wang, L.

    2012-12-01

    The Cross-track Infrared Sounder (CrIS) onboard the Suomi National Polar-orbiting Partnership (NPP) satellite is designed to provide high-vertical-resolution information on the atmosphere's three-dimensional structure of temperature and water vapor. Much work has been done to verify the observation accuracy of CrIS since its launch on Oct. 28, 2011, such as SNO cross-comparisons with other hyperspectral infrared instruments and forward-simulation comparisons using a radiative transfer model with numerical prediction background profiles. The radio occultation (RO) technique can provide profiles of the Earth's ionosphere and neutral atmosphere with high accuracy, high vertical resolution, and global coverage, and it has the advantages of all-weather capability, low expense, and long-term stability. CrIS radiative calibration accuracy was assessed by comparing observations with line-by-line simulations based on COSMIC RO data. The main processing steps were: (a) downloading COSMIC RO data and collocating them with CrIS measurements through a weighting-function (WF) peak-altitude-dependent collocation method; (b) high-spectral-resolution line-by-line radiance simulation using the collocated COSMIC RO profiles; (c) generation of CrIS channel radiances by the FFT transform method; and (d) bias analysis. This absolute calibration accuracy assessment verified a bias error of around 0.3 K in the CrIS measurements.
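    Step (d), the bias analysis, reduces to statistics of observed-minus-simulated differences over the collocated sample. A minimal sketch with made-up brightness temperatures (not COSMIC/CrIS data):

```python
import numpy as np

# Collocated observed and line-by-line simulated brightness temperatures (K).
# Values are illustrative, not from the study.
obs_k = np.array([250.1, 251.3, 249.8, 250.6, 250.9])   # CrIS channel BT
sim_k = np.array([249.8, 251.0, 249.6, 250.2, 250.7])   # LBL simulation

diff = obs_k - sim_k
bias_k = float(diff.mean())        # mean calibration bias (K)
std_k = float(diff.std(ddof=1))    # scatter about the bias (K)
```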

  15. Prediction accuracy of a sample-size estimation method for ROC studies

    PubMed Central

    Chakraborty, Dev P.

    2010-01-01

    Rationale and Objectives Sample-size estimation is an important consideration when planning a receiver operating characteristic (ROC) study. The aim of this work was to assess the prediction accuracy of a sample-size estimation method using the Monte Carlo simulation method. Materials and Methods Two ROC ratings simulators characterized by low reader and high case variabilities (LH) and high reader and low case variabilities (HL) were used to generate pilot data sets in 2 modalities. Dorfman-Berbaum-Metz multiple-reader multiple-case (DBM-MRMC) analysis of the ratings yielded estimates of the modality-reader, modality-case and error variances. These were input to the Hillis-Berbaum (HB) sample-size estimation method, which predicted the number of cases needed to achieve 80% power for 10 readers and an effect size of 0.06 in the pivotal study. Predictions that generalized to readers and cases (random-all), to cases only (random-cases) and to readers only (random-readers) were generated. A prediction-accuracy index defined as the probability that any single prediction yields true power in the range 75% to 90% was used to assess the HB method. Results For random-case generalization the HB-method prediction-accuracy was reasonable, ~ 50% for 5 readers in the pilot study. Prediction-accuracy was generally higher under low reader variability conditions (LH) than under high reader variability conditions (HL). Under ideal conditions (many readers in the pilot study) the DBM-MRMC based HB method overestimated the number of cases. The overestimates could be explained by the observed large variability of the DBM-MRMC modality-reader variance estimates, particularly when reader variability was large (HL). The largest benefit of increasing the number of readers in the pilot study was realized for LH, where 15 readers were enough to yield prediction accuracy > 50% under all generalization conditions, but the benefit was smaller for HL, where prediction accuracy was ~ 36% for 15 readers.

  16. An assessment of reservoir storage change accuracy from SWOT

    NASA Astrophysics Data System (ADS)

    Clark, Elizabeth; Moller, Delwyn; Lettenmaier, Dennis

    2013-04-01

    The anticipated Surface Water and Ocean Topography (SWOT) satellite mission will provide water surface height and areal extent measurements for terrestrial water bodies at an unprecedented accuracy, with essentially global coverage and a 22-day repeat cycle. These measurements will provide a unique opportunity to observe storage changes in naturally occurring lakes, as well as manmade reservoirs. Given political constraints on the sharing of water information, international databases of reservoir characteristics, such as the Global Reservoir and Dam Database, are limited to the largest reservoirs for which countries have voluntarily provided information. Impressive efforts have been made to combine currently available altimetry data with satellite-based imagery of water surface extent; however, these data sets are limited to large reservoirs located on an altimeter's flight track. SWOT's global coverage and simultaneous measurement of height and water surface extent remove, in large part, the constraint of location relative to flight path. Previous studies based on Arctic lakes suggest that SWOT will be able to provide a noisy, but meaningful, storage change signal for lakes as small as 250 m x 250 m. Here, we assess the accuracy of monthly storage change estimates over 10 reservoirs in the U.S. and consider the plausibility of estimating total storage change. Published maps of reservoir bathymetry were combined with a historical time series of daily storage to produce daily time series of maps of water surface elevation. These time series were then sampled based on realistic SWOT orbital parameters and noise characteristics to create a time series of synthetic SWOT observations of water surface elevation and extent for each reservoir. We then plotted area versus elevation for the true values and for the synthetic SWOT observations. For each reservoir, a curve was fit to the synthetic SWOT observations, and its integral was used to estimate total storage.
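    The final step, integrating a fitted area-elevation curve to get storage, can be sketched as follows. The elevation-area pairs and the quadratic fit below are illustrative assumptions, not the study's reservoir data:

```python
import numpy as np

# Hypothetical (elevation, area) observations for one reservoir.
h = np.array([100.0, 102.0, 104.0, 106.0, 108.0])   # water surface elevation (m)
a = np.array([1.0e6, 1.4e6, 1.9e6, 2.5e6, 3.2e6])   # water surface area (m^2)

coef = np.polyfit(h, a, 2)            # fit area as a quadratic in elevation
hh = np.linspace(h[0], h[-1], 201)    # dense elevation grid for integration
ah = np.polyval(coef, hh)             # fitted area on the grid

# Trapezoidal integration of area over elevation gives the storage (volume)
# between the lowest and highest observed water levels.
storage_m3 = float(np.sum(0.5 * (ah[1:] + ah[:-1]) * np.diff(hh)))
```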

  17. Assessment Of Accuracies Of Remote-Sensing Maps

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1992-01-01

    Report describes study of accuracies of classifications of picture elements in map derived by digital processing of Landsat-multispectral-scanner imagery of coastal plain of Arctic National Wildlife Refuge. Accuracies of portions of map analyzed with help of statistical sampling procedure called "stratified plurality sampling", in which all picture elements in given cluster classified in stratum to which plurality of them belong.

  18. Forecasting space weather: Can new econometric methods improve accuracy?

    NASA Astrophysics Data System (ADS)

    Reikard, Gordon

    2011-06-01

    Space weather forecasts are currently used in areas ranging from navigation and communication to electric power system operations. The relevant forecast horizons can range from as little as 24 h to several days. This paper analyzes the predictability of two major space weather measures using new time series methods, many of them derived from econometrics. The data sets are the Ap geomagnetic index and the solar radio flux at 10.7 cm. The methods tested include nonlinear regressions, neural networks, frequency domain algorithms, GARCH models (which utilize the residual variance), state transition models, and models that combine elements of several techniques. While combined models are complex, they can be programmed using modern statistical software. The data frequency is daily, and forecasting experiments are run over horizons ranging from 1 to 7 days. Two major conclusions stand out. First, the frequency domain method forecasts the Ap index more accurately than any time domain model, including both regressions and neural networks. This finding is very robust, and holds for all forecast horizons. Combining the frequency domain method with other techniques yields a further small improvement in accuracy. Second, the neural network forecasts the solar flux more accurately than any other method, although at short horizons (2 days or less) the regression and net yield similar results. The neural net does best when it includes measures of the long-term component in the data.

  19. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
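    The Monte Carlo approach described above can be sketched in a few lines. The error-budget numbers below (zero-offset and scale-factor sigmas) are assumptions for illustration, not the GOES-R budget; only the 1.7 nT requirement and the 100 nT quiet-time field come from the abstract:

```python
import numpy as np

# Assumed per-axis error sources (illustrative values).
rng = np.random.default_rng(2)
n = 100000
field_nt = 100.0                                  # quiet-time field (nT)
offset = rng.normal(0.0, 0.3, n)                  # zero-offset error (nT)
scale = rng.normal(0.0, 0.002, n)                 # fractional scale-factor error

err = offset + scale * field_nt                   # total measurement error (nT)
# Quiet-time accuracy metric: absolute mean plus 3 sigma.
accuracy_nt = float(abs(err.mean()) + 3.0 * err.std())
meets_req = accuracy_nt < 1.7                     # 1.7 nT requirement
```

A covariance analysis would propagate the same variances analytically; comparing the two is the 30% agreement check mentioned above.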

  20. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.

  1. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS--1988

    EPA Science Inventory

    Precision and accuracy data obtained from state and local agencies (SLAMS) during 1988 are analyzed. Pooled site variances and average biases, which are relevant quantities to both precision and accuracy determinations, are statistically compared within and between states to assess ...

  2. A High-accuracy Micro-deformation Measurement Method

    NASA Astrophysics Data System (ADS)

    Jiang, Li

    2016-07-01

    The requirement for ever-increasing-resolution space cameras drives the focal length and diameter of optical lenses to increase. High-frequency vibration during launch and the complex environmental conditions of outer space generate micro deformations in components of space cameras. As a result, images from the space cameras are blurred. Therefore, it is necessary to measure the micro deformations in components of space cameras under various experimental conditions. This paper presents a high-accuracy micro deformation measurement method. The method is implemented as follows: (1) fix tungsten-steel balls onto the space camera being measured and measure the coordinate of each ball under the standard condition; (2) simulate high-frequency vibrations and outer-space-like environmental conditions and measure the coordinates of each ball under each combination of test conditions; and (3) compute the deviation of each ball's coordinate under a test condition combination from its coordinate under the standard condition; this deviation is the micro deformation of the space camera component associated with the ball. This method was applied to micro deformation measurement for space cameras of different models. Measurement data for these space cameras validated the proposed method.
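    Step (3) is a per-ball coordinate differencing. A minimal sketch with illustrative coordinates (millimetres, three balls):

```python
import numpy as np

# Ball coordinates measured under the standard condition and under one
# test condition combination. All values are made up for illustration.
standard = np.array([[0.0, 0.0, 0.0],
                     [100.0, 0.0, 0.0],
                     [0.0, 100.0, 0.0]])
test_cond = np.array([[0.001, -0.002, 0.000],
                      [100.003, 0.001, -0.001],
                      [0.002, 99.998, 0.001]])

deviation = test_cond - standard                    # per-axis deviation (mm)
deformation_mm = np.linalg.norm(deviation, axis=1)  # per-ball deformation magnitude
```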

  3. Accuracy Assessment of Coastal Topography Derived from UAV Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetric process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy was estimated by comparison with GNSS surveys. Two parameters were tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). Georeferencing the DSM requires a homogeneous distribution and a large number of GCPs; the accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm), and the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank does not allow the accuracy to improve when the spatial resolution of the images is decreased.

  4. Standardizing the Protocol for Hemispherical Photographs: Accuracy Assessment of Binarization Algorithms

    PubMed Central

    Glatthorn, Jonas; Beckschäfer, Philip

    2014-01-01

    Hemispherical photography is a well-established method to optically assess ecological parameters related to plant canopies, e.g. ground-level light regimes and the distribution of foliage within the crown space. Interpreting hemispherical photographs involves classifying pixels as either sky or vegetation. A wide range of automatic thresholding or binarization algorithms exists to classify the photographs. The variety in methodology hampers the ability to compare results across studies. To identify an optimal threshold selection method, this study assessed the accuracy of seven binarization methods implemented in software currently available for the processing of hemispherical photographs. Binarizations obtained by the algorithms were compared to reference data generated through a manual binarization of a stratified random selection of pixels. This approach was adopted from the accuracy assessment of map classifications known from remote sensing studies. Percentage correct (PC) and kappa statistics (κ) were calculated. The accuracy of the algorithms was assessed for photographs taken with automatic exposure settings (auto-exposure) and photographs taken with settings which avoid overexposure (histogram-exposure). In addition, gap fraction values derived from hemispherical photographs were compared with estimates derived from the manually classified reference pixels. All tested algorithms were shown to be sensitive to overexposure. Three of the algorithms showed an accuracy high enough to be recommended for the processing of histogram-exposed hemispherical photographs: “Minimum” (PC 98.8%; κ 0.952), “Edge Detection” (PC 98.1%; κ 0.950), and “Minimum Histogram” (PC 98.1%; κ 0.947). The Minimum algorithm overestimated gap fraction least of all (11%). The overestimations by the algorithms Edge Detection (63%) and Minimum Histogram (67%) were considerably larger. For the remaining four evaluated algorithms (IsoData, Maximum Entropy, MinError, and Otsu) an
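    The accuracy metrics used above can be sketched directly: compare an algorithm's sky/vegetation labels against manually classified reference pixels and compute percentage correct and Cohen's kappa. The ten labels below are illustrative, not the study's pixel sample:

```python
import numpy as np

# Toy reference and algorithm classifications: 1 = sky, 0 = vegetation.
reference = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
algorithm = np.array([1, 1, 0, 0, 0, 0, 1, 0, 1, 1])

pc = float(np.mean(reference == algorithm))         # percentage correct

# Cohen's kappa corrects PC for chance agreement given the marginals.
p1, q1 = float(reference.mean()), float(algorithm.mean())
p_chance = p1 * q1 + (1 - p1) * (1 - q1)            # expected chance agreement
kappa = (pc - p_chance) / (1 - p_chance)
```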

  5. The effect of different Global Navigation Satellite System methods on positioning accuracy in elite alpine skiing.

    PubMed

    Gilgien, Matthias; Spörri, Jörg; Limpach, Philippe; Geiger, Alain; Müller, Erich

    2014-01-01

    In sport science, Global Navigation Satellite Systems (GNSS) are frequently applied to capture athletes' position, velocity and acceleration. Application of GNSS includes a large range of different GNSS technologies and methods. To date no study has comprehensively compared the different GNSS methods applied. Therefore, the aim of the current study was to investigate the effect of differential and non-differential solutions, different satellite systems and different GNSS signal frequencies on position accuracy. Twelve alpine ski racers were equipped with high-end GNSS devices while performing runs on a giant slalom course. The skiers' GNSS antenna positions were calculated in three satellite signal obstruction conditions using five different GNSS methods. The GNSS antenna positions were compared to a video-based photogrammetric reference system over one turn and against the most valid GNSS method over the entire run. Furthermore, the time to acquire differential GNSS solutions was assessed for four differential methods. The only GNSS method that consistently yielded sub-decimetre position accuracy in typical alpine skiing conditions was a differential method using American (GPS) and Russian (GLONASS) satellite systems and the satellite signal frequencies L1 and L2. Under conditions of minimal satellite signal obstruction, valid results were also achieved when either the satellite system GLONASS or the frequency L2 was dropped from the best configuration. All other methods failed to fulfill the accuracy requirements needed to detect relevant differences in the kinematics of alpine skiers, even in conditions favorable for GNSS measurements. The methods with good positioning accuracy also had the shortest times to compute differential solutions. This paper highlights the importance of choosing appropriate methods to meet the accuracy requirements for sport applications. PMID:25285461

  7. Assessing and ensuring GOES-R magnetometer accuracy

    NASA Astrophysics Data System (ADS)

    Carter, Delano; Todirita, Monica; Kronenwetter, Jeffrey; Dahya, Melissa; Chu, Donald

    2016-05-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma error per axis. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma error per axis. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. With the proposed calibration regimen, both suggest that the magnetometer subsystem will meet its accuracy requirements.

  8. III. Sleep assessment methods.

    PubMed

    Sadeh, Avi

    2015-03-01

    Sleep is a complex phenomenon that could be understood and assessed at many levels. Sleep could be described at the behavioral level (relative lack of movements and awareness and responsiveness) and at the brain level (based on EEG activity). Sleep could be characterized by its duration, by its distribution during the 24-hr day period, and by its quality (e.g., consolidated versus fragmented). Different methods have been developed to assess various aspects of sleep. This chapter covers the most established and common methods used to assess sleep in infants and children. These methods include polysomnography, videosomnography, actigraphy, direct observations, sleep diaries, and questionnaires. The advantages and disadvantages of each method are highlighted. PMID:25704734

  9. Pixels, Blocks of Pixels, and Polygons: Choosing a Spatial Unit for Thematic Accuracy Assessment

    EPA Science Inventory

    Pixels, polygons, and blocks of pixels are all potentially viable spatial assessment units for conducting an accuracy assessment. We develop a statistical population-based framework to examine how the spatial unit chosen affects the outcome of an accuracy assessment. The populati...

  10. Does it Make a Difference? Investigating the Assessment Accuracy of Teacher Tutors and Student Tutors

    ERIC Educational Resources Information Center

    Herppich, Stephanie; Wittwer, Jörg; Nückles, Matthias; Renkl, Alexander

    2013-01-01

    Tutors often have difficulty with accurately assessing a tutee's understanding. However, little is known about whether the professional expertise of tutors influences their assessment accuracy. In this study, the authors examined the accuracy with which 21 teacher tutors and 25 student tutors assessed a tutee's understanding of the human…

  11. New High-Accuracy Methods for Automatically Detecting & Tracking CMEs

    NASA Astrophysics Data System (ADS)

    Byrne, Jason; Morgan, H.; Habbal, S. R.

    2012-05-01

    With the large amounts of CME image data available from the SOHO and STEREO coronagraphs, manual cataloguing of events can be tedious and subject to user bias. Therefore automated catalogues, such as CACTus and SEEDS, have been developed in an effort to produce a robust method of detection and analysis of events. Here we present the development of a new CORIMP (coronal image processing) CME detection and tracking technique that overcomes many of the drawbacks of previous methods. It works by first employing a dynamic CME separation technique to remove the static background, and then characterizing CMEs via a multiscale edge-detection algorithm. This allows the inherent structure of the CMEs to be revealed in each image, which is usually prone to spatiotemporal crosstalk as a result of traditional image-differencing techniques. Thus the kinematic and morphological information on each event is resolved with higher accuracy than previous catalogues, revealing CME acceleration and expansion profiles otherwise undetected, and enabling a determination of the varying speeds attained across the span of the CME. The potential for a 3D characterization of the internal structure of CMEs is also demonstrated.

  12. Comparative Accuracy Assessment of Global Land Cover Datasets Using Existing Reference Data

    NASA Astrophysics Data System (ADS)

    Tsendbazar, N. E.; de Bruin, S.; Mora, B.; Herold, M.

    2014-12-01

    Land cover is a key variable to monitor the impact of human and natural processes on the biosphere. As one of the Essential Climate Variables, land cover observations are used for climate models and several other applications. Remote sensing technologies have enabled the generation of several global land cover (GLC) products that are based on different data sources and methods (e.g. legends). Moreover, the reported map accuracies result from varying validation strategies. Such differences make the comparison of the GLC products challenging and create confusion when selecting suitable datasets for different applications. This study aims to conduct a comparative accuracy assessment of GLC datasets (LC-CCI 2005, MODIS 2005, and Globcover 2005) using the Globcover 2005 reference data, which can represent the thematic differences of these GLC maps. This GLC reference dataset provides LCCS classifier information for the 3 main land cover types of each sample plot. The LCCS classifier information was translated according to the legends of the GLC maps analysed. The preliminary analysis showed some challenges in LCCS classifier translation arising from missing important classifier information, differences in class definitions between the legends, and the absence of class proportions for the main land cover types. To overcome these issues, we consolidated the entire reference dataset (i.e. 3857 samples distributed at global scale). The GLC maps and the reference dataset were then harmonized into 13 general classes to perform the comparative accuracy assessments. To help users select suitable GLC dataset(s) for their applications, we conducted the map accuracy assessments considering different users' perspectives: climate modelling, biodiversity assessments, agriculture monitoring, and map production. This communication will present the method and the results of this study and provide a set of recommendations to GLC map producers and users with the aim of facilitating the use of GLC maps.
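    The harmonize-then-score step can be sketched as a legend mapping followed by an agreement count. The class names and mappings below are illustrative placeholders, not the study's 13-class legend:

```python
# Hypothetical mapping from a product's native LCCS-style classes into
# shared general classes (illustrative, not the study's legend).
lccs_to_general = {
    "broadleaf_evergreen_forest": "forest",
    "needleleaf_forest": "forest",
    "rainfed_cropland": "cropland",
    "irrigated_cropland": "cropland",
    "urban": "built-up",
}

# Reference sample plots in native classes, harmonized via the mapping.
reference = ["broadleaf_evergreen_forest", "rainfed_cropland",
             "urban", "needleleaf_forest"]
mapped_ref = [lccs_to_general[c] for c in reference]

# One GLC product's (already harmonized) labels at the same plots.
map_classes = ["forest", "cropland", "built-up", "cropland"]

overall_accuracy = sum(m == r for m, r in zip(map_classes, mapped_ref)) / len(mapped_ref)
```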

  13. Accuracy of virtual models in the assessment of maxillary defects

    PubMed Central

    Kurşun, Şebnem; Kılıç, Cenk; Özen, Tuncer

    2015-01-01

    Purpose This study aimed to assess the reliability of measurements performed on three-dimensional (3D) virtual models of maxillary defects obtained using cone-beam computed tomography (CBCT) and 3D optical scanning. Materials and Methods Mechanical cavities simulating maxillary defects were prepared on the hard palate of nine cadavers. Images were obtained using a CBCT unit at three different fields of view (FOVs) and voxel sizes: 1) 60×60 mm FOV, 0.125 mm³ (FOV60); 2) 80×80 mm FOV, 0.160 mm³ (FOV80); and 3) 100×100 mm FOV, 0.250 mm³ (FOV100). Superimposition of the images was performed using software called VRMesh Design. Automated volume measurements were conducted, and differences between surfaces were demonstrated. Silicon impressions obtained from the defects were also scanned with a 3D optical scanner. Virtual models obtained using VRMesh Design were compared with impressions obtained by scanning silicon models. Gold standard volumes of the impression models were then compared with CBCT and 3D scanner measurements. Further, the general linear model was used, and the significance level was set at p=0.05. Results A comparison of the results obtained by the observers and methods revealed p values smaller than 0.05, suggesting that the measurement variations were caused by both methods and observers along with the different cadaver specimens used. Further, the 3D scanner measurements were closer to the gold standard measurements when compared to the CBCT measurements. Conclusion In the assessment of artificially created maxillary defects, the 3D scanner measurements were more accurate than the CBCT measurements. PMID:25793180

  14. Accuracy of Wind Prediction Methods in the California Sea Breeze

    NASA Astrophysics Data System (ADS)

    Sumers, B. D.; Dvorak, M. J.; Ten Hoeve, J. E.; Jacobson, M. Z.

    2010-12-01

    In this study, we investigate the accuracy of measure-correlate-predict (MCP) algorithms and log law/power law scaling using data from two tall towers in coastal environments. We find that MCP algorithms accurately predict sea breeze winds and that log law/power law scaling methods struggle to predict 50-meter wind speeds. MCP methods have received significant attention as the wind industry has grown and the ability to accurately characterize the wind resource has become valuable. These methods are used to produce longer-term wind speed records from short-term measurement campaigns. A correlation is developed between the “target site,” where the developer is interested in building wind turbines, and a “reference site,” where long-term wind data is available. Up to twenty years of prior wind speeds are then predicted. In this study, two existing MCP methods - linear regression and Mortimer’s method - are applied to predict 50-meter wind speeds at sites in the Salinas Valley and Redwood City, CA. The predictions are then verified with tall tower data. It is found that linear regression is poorly suited to MCP applications, as the process produces inaccurate estimates of the cube of the wind speed at 50 meters. Meanwhile, Mortimer’s method, which bins data by direction and speed, is found to accurately predict the cube of the wind speed in both sea breeze and non-sea breeze conditions. We also find that the log law and power law are unstable predictors of wind speeds. While these methods produced accurate estimates of the average 50-meter wind speed at both sites, they predicted an average cube of the wind speed that was between 1.18 and 1.3 times the observed value. Inspection of time-series error reveals increased error in the mid-afternoon during summer. This suggests that the cold sea breeze may disrupt the vertical temperature profile, create a stable atmosphere and violate the assumptions that allow log law scaling to work.
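    A hedged sketch of a binned MCP approach follows. The abstract says only that Mortimer's method bins data by direction and speed; the specific bin widths, lookup structure, and synthetic data below are assumptions for illustration:

```python
import numpy as np

# Synthetic concurrent period: reference-site wind, plus target-site speeds
# that scale the reference speed with some noise.
rng = np.random.default_rng(3)
n = 5000
ref_dir = rng.uniform(0, 360, n)                  # reference direction (deg)
ref_spd = rng.uniform(0, 20, n)                   # reference speed (m/s)
tgt_spd = 1.2 * ref_spd + rng.normal(0, 0.5, n)   # concurrent target speed

# Bin by 30-degree direction sectors and 2 m/s speed classes.
dir_bin = (ref_dir // 30).astype(int)
spd_bin = (ref_spd // 2).astype(int)

# Mean target speed per (direction, speed) bin, built from the concurrent data.
bins = {}
for d, s, t in zip(dir_bin, spd_bin, tgt_spd):
    bins.setdefault((int(d), int(s)), []).append(float(t))
lookup = {k: float(np.mean(v)) for k, v in bins.items()}

# Predict the target speed for a new reference observation via its bin mean.
pred = lookup[(int(200 // 30), int(10.3 // 2))]
```

Long-term reference records can then be run through the same lookup to reconstruct a long-term target-site series.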

  15. Constraint on Absolute Accuracy of Metacomprehension Assessments: The Anchoring and Adjustment Model vs. the Standards Model

    ERIC Educational Resources Information Center

    Kwon, Heekyung

    2011-01-01

    The objective of this study is to provide a systematic account of three typical phenomena surrounding absolute accuracy of metacomprehension assessments: (1) the absolute accuracy of predictions is typically quite low; (2) there exist individual differences in absolute accuracy of predictions as a function of reading skill; and (3) postdictions…

  16. ASSESSING THE ACCURACY OF NATIONAL LAND COVER DATASET AREA ESTIMATES AT MULTIPLE SPATIAL EXTENTS

    EPA Science Inventory

    Site specific accuracy assessments provide fine-scale evaluation of the thematic accuracy of land use/land cover (LULC) datasets; however, they provide little insight into LULC accuracy across varying spatial extents. Additionally, LULC data are typically used to describe lands...

  17. Accuracy Verification of GPS-INS Method in Indonesia

    NASA Astrophysics Data System (ADS)

    Mulyana, A. K.; Rizaldy, A.; Uesugi, K.

    2012-07-01

    Pasco Corporation (Japan) has been implementing a project for Sumatra Island, Indonesia, named Data Acquisition and Production on the National Geo-Spatial Data Infrastructure (NSDI) Development. Digital aerial images at 25 cm GSD for 1:10,000-scale mapping have been taken as part of the project. The owner of the project, The National Coordinating Agency for Surveys and Mapping (Bakosurtanal), planned to apply the conventional aerial triangulation method as the initial stage. Pasco recommended a direct geo-referencing methodology using GPS-IMU measurements and carried out verification work in a city area. Measurements of tie points were implemented manually using KLT/ATLAS software and adjusted with BINGO software. Aerial triangulation accuracy verifications were done using one height control point in the block center; one GCP in the center; and four GCPs at the corners plus one in the center. The results are, respectively: rms X,Y = 0.410 cm, rms Z = 0.394 cm (one height control point); rms X,Y = 0.430 cm, rms Z = 0.392 cm (one GCP); and rms X,Y = 0.356 cm, rms Z = 0.395 cm (5 GCPs). For official applications, 5 GCPs for each block have been preferred for safety reasons. Comparisons of direct geo-referencing results with geodetic check points and aerial triangulation block adjustments have been done. The details of the work are given in this study.

  18. Assessing accuracy in citizen science-based plant phenology monitoring

    NASA Astrophysics Data System (ADS)

    Fuccillo, Kerissa K.; Crimmins, Theresa M.; de Rivera, Catherine E.; Elder, Timothy S.

    2015-07-01

    In the USA, thousands of volunteers are engaged in tracking plant and animal phenology through a variety of citizen science programs for the purpose of amassing spatially and temporally comprehensive datasets useful to scientists and resource managers. The quality of these observations and their suitability for scientific analysis, however, remains largely unevaluated. We aimed to evaluate the accuracy of plant phenology observations collected by citizen scientist volunteers following protocols designed by the USA National Phenology Network (USA-NPN). Phenology observations made by volunteers receiving several hours of formal training were compared to those collected independently by a professional ecologist. Approximately 11,000 observations were recorded by 28 volunteers over the course of one field season. Volunteers consistently identified phenophases correctly (91% overall) for the 19 species observed. Volunteers demonstrated greatest overall accuracy identifying unfolded leaves, ripe fruits, and open flowers. Transitional accuracy decreased for some species/phenophase combinations (70% average), and accuracy varied significantly by phenophase and species (p < 0.0001). Volunteers who submitted fewer observations over the period of study did not exhibit a higher error rate than those who submitted more total observations. Overall, these results suggest that volunteers with limited training can provide reliable observations when following explicit, standardized protocols. Future studies should investigate different observation models (i.e., group/individual, online/in-person training) over subsequent seasons with multiple expert comparisons to further substantiate the ability of these monitoring programs to supply accurate broadscale datasets capable of answering pressing ecological questions about global change.

  19. Assessment of Relative Accuracy of AHN-2 Laser Scanning Data Using Planar Features

    PubMed Central

    van der Sande, Corné; Soudarissanane, Sylvie; Khoshelham, Kourosh

    2010-01-01

    AHN-2 is the second part of the Actueel Hoogtebestand Nederland project, which concerns the acquisition of high-resolution altimetry data over the entire Netherlands using airborne laser scanning. The accuracy assessment of laser altimetry data usually relies on comparing corresponding tie elements, often points or lines, in the overlapping strips. This paper proposes a new approach to strip adjustment and accuracy assessment of AHN-2 data by using planar features. In the proposed approach a transformation is estimated between two overlapping strips by minimizing the distances between points in one strip and their corresponding planes in the other. The planes and the corresponding points are extracted in an automated segmentation process. The point-to-plane distances are used as observables in an estimation model, whereby the parameters of a transformation between the two strips and their associated quality measures are estimated. We demonstrate the performance of the method for the accuracy assessment of the AHN-2 dataset over Zeeland province of The Netherlands. The results show vertical offsets of up to 4 cm between the overlapping strips, and horizontal offsets ranging from 2 cm to 34 cm. PMID:22163650
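
    The estimation step described above can be framed as a linear least-squares problem: with segmented planes (unit normal n_i, reference point p_i) in one strip and corresponding points q_i in the other, a translation t minimizing the point-to-plane residuals n_i · (q_i + t − p_i) solves a small linear system. A simplified, translation-only sketch (the paper estimates a fuller transformation with associated quality measures; names are illustrative):

```python
import numpy as np

def estimate_strip_offset(normals, plane_points, strip_points):
    """Estimate a 3-D translation between two overlapping strips by
    minimizing point-to-plane distances (linear least squares).

    normals:      (n, 3) unit normals of planes segmented in strip A
    plane_points: (n, 3) a point on each plane (strip A)
    strip_points: (n, 3) corresponding points observed in strip B
    """
    normals = np.asarray(normals, float)
    # Signed point-to-plane distances before adjustment.
    d = np.einsum('ij,ij->i', normals, strip_points - plane_points)
    # After translating strip B by t the residual is n_i . (q_i + t - p_i),
    # so we solve  N t = -d  in the least-squares sense.
    t, *_ = np.linalg.lstsq(normals, -d, rcond=None)
    return t
```

    The normals must span all three axes (e.g., roofs plus differently oriented facades) for the horizontal components to be estimable.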

  1. Assessing the relative accuracies of two screening tests in the presence of verification bias.

    PubMed

    Zhou, X H; Higgs, R E

    Epidemiological studies of dementia often use two-stage designs because of the relatively low prevalence of the disease and the high cost of ascertaining a diagnosis. The first stage of a two-stage design assesses a large sample with a screening instrument. Then, the subjects are grouped according to their performance on the screening instrument, such as poor, intermediate and good performers. The second stage involves a more extensive diagnostic procedure, such as a clinical assessment, for a particular subset of the study sample selected from each of these groups. However, not all selected subjects have the clinical diagnosis because some subjects may refuse and others are unable to be clinically assessed. Thus, some subjects screened do not have a clinical diagnosis. Furthermore, whether a subject has a clinical diagnosis depends not only on the screening test result but also on other factors, and the sampling fractions for the diagnosis are unknown and have to be estimated. One of the goals in these studies is to assess the relative accuracies of two screening tests. Any analysis using only verified cases may result in verification bias. In this paper, we propose the use of two bootstrap methods to construct confidence intervals for the difference in the accuracies of two screening tests in the presence of verification bias. We illustrate the application of the proposed methods to a simulated data set from a real two-stage study of dementia that has motivated this research. PMID:10844728
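
    The proposed inference resamples subjects to obtain a confidence interval for the difference in accuracies. The sketch below shows only a plain paired percentile bootstrap; it deliberately omits the verification-bias correction (estimated, covariate-dependent sampling fractions) that is the paper's actual contribution, and all names are illustrative:

```python
import numpy as np

def bootstrap_diff_ci(test1_correct, test2_correct, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the difference in accuracy of two
    screening tests applied to the same subjects (paired design)."""
    rng = np.random.default_rng(seed)
    t1 = np.asarray(test1_correct, float)
    t2 = np.asarray(test2_correct, float)
    n = len(t1)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample subjects with replacement
        diffs[b] = t1[idx].mean() - t2[idx].mean()
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return t1.mean() - t2.mean(), (lo, hi)
```

    Resampling the same index vector for both tests preserves the within-subject pairing, which is essential when both tests are applied to the same sample.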

  2. Combining accuracy assessment of land-cover maps with environmental monitoring programs

    USGS Publications Warehouse

    Stehman, S.V.; Czaplewski, R.L.; Nusser, S.M.; Yang, L.; Zhu, Z.

    2000-01-01

    A scientifically valid accuracy assessment of a large-area, land-cover map is expensive. Environmental monitoring programs offer a potential source of data to partially defray the cost of accuracy assessment while still maintaining the statistical validity. In this article, three general strategies for combining accuracy assessment and environmental monitoring protocols are described. These strategies range from a fully integrated accuracy assessment and environmental monitoring protocol, to one in which the protocols operate nearly independently. For all three strategies, features critical to using monitoring data for accuracy assessment include compatibility of the land-cover classification schemes, precisely co-registered sample data, and spatial and temporal compatibility of the map and reference data. Two monitoring programs, the National Resources Inventory (NRI) and the Forest Inventory and Monitoring (FIM), are used to illustrate important features for implementing a combined protocol.

  3. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    PubMed Central

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    Introduction An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis and therapeutic action; therefore, the aim of this study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to central nasopharyngeal measurement. Methods In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients’ body temperatures were measured by four peripheral methods (oral, axillary, tympanic, and forehead) along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, and receiver operating characteristic curve, using SPSS version 19. Results There was a statistically significant correlation between all the peripheral methods and the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of the right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-tests demonstrated acceptable precision with the forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membrane, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic membranes were higher than those of the other methods. Conclusion The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. The tympanic method (right and left) is recommended for assessing body temperature in intensive care units because of its high accuracy and acceptable precision. PMID:27621673

  4. Accuracy assessment of CKC high-density surface EMG decomposition in biceps femoris muscle

    NASA Astrophysics Data System (ADS)

    Marateb, H. R.; McGill, K. C.; Holobar, A.; Lateva, Z. C.; Mansourian, M.; Merletti, R.

    2011-10-01

    The aim of this study was to assess the accuracy of the convolution kernel compensation (CKC) method in decomposing high-density surface EMG (HDsEMG) signals from the pennate biceps femoris long-head muscle. Although the CKC method has already been thoroughly assessed in parallel-fibered muscles, there are several factors that could hinder its performance in pennate muscles. Namely, HDsEMG signals from pennate and parallel-fibered muscles differ considerably in terms of the number of detectable motor units (MUs) and the spatial distribution of the motor-unit action potentials (MUAPs). In this study, monopolar surface EMG signals were recorded from five normal subjects during low-force voluntary isometric contractions using a 92-channel electrode grid with an 8 mm inter-electrode distance. Intramuscular EMG (iEMG) signals were recorded concurrently using monopolar needles. The HDsEMG and iEMG signals were independently decomposed into MUAP trains, and the iEMG results were verified using a rigorous a posteriori statistical analysis. HDsEMG decomposition identified from 2 to 30 MUAP trains per contraction. Of these, 3 ± 2 trains were also reliably detected by iEMG decomposition. The measured CKC decomposition accuracy of these common trains over a selected 10 s interval was 91.5 ± 5.8%. The other trains were not assessed. The significant factors that affected CKC decomposition accuracy were the number of HDsEMG channels free of technical artifact and the distinguishability of the MUAPs in the HDsEMG signal (P < 0.05). These results show that the CKC method reliably identifies at least a subset of MUAP trains in HDsEMG signals from low-force contractions in pennate muscles.

  5. Accuracy of audio computer-assisted self-interviewing (ACASI) and self-administered questionnaires for the assessment of sexual behavior.

    PubMed

    Morrison-Beedy, Dianne; Carey, Michael P; Tu, Xin

    2006-09-01

    This study examined the accuracy of two retrospective methods and assessment intervals for recall of sexual behavior and assessed predictors of recall accuracy. Using a 2 [mode: audio computer-assisted self-interview (ACASI) vs. self-administered questionnaire (SAQ)] by 2 (frequency: monthly vs. quarterly) design, young women (N = 102) were randomly assigned to one of four conditions. Participants completed baseline measures, monitored their behavior with a daily diary, and returned monthly (or quarterly) for assessments. A mixed pattern of accuracy between the four assessment methods was identified. Monthly assessments yielded more accurate recall for protected and unprotected vaginal sex, but quarterly assessments yielded more accurate recall for unprotected oral sex. Mode differences were not strong, and hypothesized predictors of accuracy tended not to be associated with recall accuracy. Choice of assessment mode and frequency should be based upon the research question(s), population, resources, and context in which data collection will occur. PMID:16721506

  6. Assessing expected accuracy of probe vehicle travel time reports

    SciTech Connect

    Hellinga, B.; Fu, L.

    1999-12-01

    The use of probe vehicles to provide estimates of link travel times has been suggested as a means of obtaining travel times within signalized networks for use in advanced traveler information systems. Past research in the literature has produced contradictory conclusions regarding the expected accuracy of these probe-based estimates, and consequently has estimated different levels of market penetration of probe vehicles required to sustain accurate data within an advanced traveler information system. This paper examines the effect of sampling bias on the accuracy of the probe estimates. An analytical expression is derived on the basis of queuing theory to prove that bias in arrival time distributions and/or in the proportion of probes associated with each link departure turning movement will lead to a systematic bias in the sample estimate of the mean delay. Subsequently, the potential for and impact of sampling bias on a signalized link is examined by simulating an arterial corridor. The analytical derivation and the simulation analysis show that the reliability of probe-based average link travel times is strongly affected by sampling bias. Furthermore, this analysis shows that the contradictory conclusions of previous research are directly related to the presence or absence of sample bias.
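
    The core of the analytical argument can be illustrated numerically: when the proportion of probes on each turning movement differs from that movement's true share of departures, the expected probe average is a biased estimate of the mean link delay. A toy example with made-up numbers (not from the paper):

```python
import numpy as np

# Hypothetical mean delays (s) and traffic shares for three turning movements.
delays = np.array([45.0, 25.0, 10.0])   # left, through, right
shares = np.array([0.2, 0.6, 0.2])      # true departure proportions
probes = np.array([0.1, 0.8, 0.1])      # probe proportions (over-represent through traffic)

true_mean = float(delays @ shares)      # population mean link delay
probe_mean = float(delays @ probes)     # expected probe-based estimate
bias = probe_mean - true_mean           # systematic error from sampling bias
```

    Here the probe fleet over-represents through traffic, so the expected probe average understates the true mean delay; no amount of additional probes removes this bias, only a representative sampling proportion does.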

  7. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  8. Genomic selection in forage breeding: accuracy and methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The main benefits expected from genomic selection in forage grasses and legumes are to increase selection accuracy, reduce evaluation costs per genotype, and reduce cycle time. Aiming at designing a training population and first generations of selection, deterministic equations were used to compare ...

  9. Accuracy testing of steel and electric groundwater-level measuring tapes: Test method and in-service tape accuracy

    USGS Publications Warehouse

    Fulford, Janice M.; Clayton, Christopher S.

    2015-01-01

    The calibration device and proposed method were used to calibrate a sample of in-service USGS steel and electric groundwater tapes. The sample of in-service groundwater steel tapes was in relatively good condition. All steel tapes, except one, were accurate to ±0.01 ft per 100 ft over their entire length. One steel tape, which had obvious damage in the first hundred feet, exceeded the ±0.01 ft per 100 ft tolerance by 0.001 ft. The sample of in-service groundwater-level electric tapes ranged in condition from like new, through cosmetic damage, to nonfunctional. The in-service electric tapes did not meet the USGS accuracy recommendation of ±0.01 ft. In-service electric tapes, except for the nonfunctional tape, were accurate to about ±0.03 ft per 100 ft. A comparison of new with in-service electric tapes found that steel-core electric tapes maintained their length and accuracy better than electric tapes without a steel core. The in-service steel tapes could be used as is and achieve USGS accuracy recommendations for groundwater-level measurements; the in-service electric tapes require tape corrections to achieve those recommendations.

  10. Bayesian reclassification statistics for assessing improvements in diagnostic accuracy.

    PubMed

    Huang, Zhipeng; Li, Jialiang; Cheng, Ching-Yu; Cheung, Carol; Wong, Tien-Yin

    2016-07-10

    We propose a Bayesian approach to the estimation of the net reclassification improvement (NRI) and three versions of the integrated discrimination improvement (IDI) under the logistic regression model. Both NRI and IDI were proposed as numerical characterizations of accuracy improvement for diagnostic tests and were shown to retain certain practical advantages over analysis based on ROC curves and to offer complementary information to changes in the area under the curve. Our development is a new contribution toward a Bayesian solution for the estimation of NRI and IDI, which eases the computational burden and increases flexibility. Our simulation results indicate that Bayesian estimation enjoys satisfactory performance comparable with frequentist estimation and achieves point estimation and credible interval construction simultaneously. We adopt the methodology to analyze real data from the Singapore Malay Eye Study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26875442
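
    For reference, the frequentist point estimate of the categorical NRI that the Bayesian machinery targets can be computed directly from old-model and new-model predicted risks. A sketch with hypothetical risk cutoffs (the paper's logistic-regression and Bayesian layers are omitted; names are illustrative):

```python
import numpy as np

def categorical_nri(p_old, p_new, y, cutoffs=(0.1, 0.3)):
    """Net reclassification improvement for risk categories defined by
    `cutoffs` (hypothetical thresholds). NRI sums the net proportion of
    events moved up a category and non-events moved down a category."""
    p_old, p_new, y = map(np.asarray, (p_old, p_new, y))
    cat_old = np.digitize(p_old, cutoffs)
    cat_new = np.digitize(p_new, cutoffs)
    up = cat_new > cat_old
    down = cat_new < cat_old
    ev = y == 1
    ne = y == 0
    nri_events = up[ev].mean() - down[ev].mean()
    nri_nonevents = down[ne].mean() - up[ne].mean()
    return float(nri_events + nri_nonevents)
```

    A Bayesian treatment would place a posterior over the model coefficients and propagate it through this statistic to obtain a credible interval rather than a single number.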

  11. LANDSAT Scene-to-scene Registration Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Anderson, J. E.

    1984-01-01

    Initial results obtained from the registration of LANDSAT-4 data to LANDSAT-2 MSS data are documented and compared with results obtained from a LANDSAT-2 MSS-to-LANDSAT-2 scene-to-scene registration (using the same LANDSAT-2 MSS data as the base data set in both procedures). RMS errors calculated from the control points used to establish the scene-to-scene mapping equations are compared with errors computed from independently chosen verification points. Models developed to estimate actual scene-to-scene registration accuracy based on the use of electrostatic plots are also presented. Analysis of results indicates a statistically significant difference in the RMS errors for the element contribution. Scan-line errors were not significantly different. It appears that a modification to the LANDSAT-4 MSS scan mirror coefficients is required to correct the situation.

  12. Accuracy evaluation of a new stereophotogrammetry-based functional method for joint kinematic analysis in biomechanics.

    PubMed

    Galetto, Maurizio; Gastaldi, Laura; Lisco, Giulia; Mastrogiacomo, Luca; Pastorelli, Stefano

    2014-11-01

    Human joint kinematics is an interesting topic in biomechanics and proves useful for the analysis of human movement in several fields. A crucial issue regards the assessment of joint parameters, like axes and centers of rotation, owing to their direct influence on human motion patterns. A proper accuracy in the estimation of these parameters is hence required. To this end, stereophotogrammetry-based predictive methods and, as an alternative, functional ones can be used. This article presents a new functional algorithm for the assessment of knee joint parameters, based on a polycentric hinge model for knee flexion-extension. The proposed algorithm is discussed, identifying its fields of application and its limits. The techniques for estimating the joint parameters are analyzed from the metrological point of view, so as to lay the groundwork for enhancing and eventually replacing the predictive methods currently used in laboratories of human movement analysis. This article also presents an assessment of the accuracy associated with the whole process of measurement and joint parameter estimation. To this end, the presented functional method is tested through both computer simulations and a series of experimental laboratory tests in which swing motions were imposed on a polycentric mechanical analogue and recorded with a stereophotogrammetric system. PMID:25500863

  13. Accuracy of Four Dental Age Estimation Methods in Southern Indian Children

    PubMed Central

    Sanghvi, Praveen; Perumalla, Kiran Kumar; Srinivasaraju, D.; Srinivas, Jami; Kalyan, U. Siva; Rasool, SK. Md. Iftekhar

    2015-01-01

    Introduction: For various forensic investigations of both living and dead individuals, knowledge of the actual age or date of birth of the subject is of utmost importance. In recent years, age estimation has gained importance for a variety of reasons, including identifying criminal and legal responsibility, and for many other social events such as birth certificates, marriage, beginning a job, joining the army, and retirement. Developing teeth are used to assess maturity and estimate age in a number of disciplines; however, the accuracy of different methods has not been assessed systematically. The aim of this study was to determine the accuracy of four dental age estimation methods. Materials and Methods: Digital orthopantomographs (OPGs) of South Indian children of similar ethnic origin, between the ages of 6 and 16 years, who visited the Department of Oral Medicine and Radiology of GITAM Dental College, Visakhapatnam, Andhra Pradesh, India, were assessed. Dental age was calculated using the Demirjian, Willems, Nolla, and adopted Haavikko methods, and the differences between estimated dental age and chronological age were compared with the paired t-test and Wilcoxon signed rank test. Results: An overestimation of dental age was observed using the Demirjian and Nolla methods (0.1±1.63 and 0.47±0.83 years in the total sample, respectively), and an underestimation was observed using the Willems and Haavikko methods (-0.4±1.53 and -2.9±1.41 years, respectively). Conclusion: Nolla's method was the most accurate in estimating dental age. Moreover, all four methods were found to be reliable for estimating the age of South Indian children of unknown chronological age. PMID:25738008
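
    The per-method comparison reported above amounts to computing the signed error (estimated minus chronological age) and testing whether its mean differs from zero. A minimal sketch with made-up ages (not the study data; positive mean error indicates overestimation, negative underestimation):

```python
import numpy as np

def age_method_bias(chronological, estimated):
    """Mean signed error (estimated - chronological) and paired t statistic
    for one dental age estimation method."""
    d = np.asarray(estimated, float) - np.asarray(chronological, float)
    n = d.size
    # One-sample t statistic on the paired differences.
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return float(d.mean()), float(t)
```

    The study additionally applies the Wilcoxon signed rank test as a nonparametric check, which this sketch omits.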

  14. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, Timothy K.; Chrostowski, Jon D.

    1991-01-01

    Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and for both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, in both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were, for the most part, within the ± one-sigma intervals of predicted accuracy, demonstrating the validity of the methodology and computer code.

  15. Inertial Measures of Motion for Clinical Biomechanics: Comparative Assessment of Accuracy under Controlled Conditions - Effect of Velocity

    PubMed Central

    Lebel, Karina; Boissy, Patrick; Hamel, Mathieu; Duval, Christian

    2013-01-01

    Background Inertial measurement of motion with Attitude and Heading Reference Systems (AHRS) is emerging as an alternative to 3D motion capture systems in biomechanics. The objectives of this study are: 1) to describe the absolute and relative accuracy of multiple units of commercially available AHRS under various types of motion; and 2) to evaluate the effect of motion velocity on the accuracy of these measurements. Methods The criterion validity of accuracy was established under controlled conditions using an instrumented Gimbal table. AHRS modules were carefully attached to the center plate of the Gimbal table and put through experimental static and dynamic conditions. Static and absolute accuracy was assessed by comparing the AHRS orientation measurements to those obtained using an optical gold standard. Relative accuracy was assessed by measuring the variation in relative orientation between modules during trials. Findings The evaluated AHRS systems demonstrated good absolute static accuracy (mean error < 0.5°) and clinically acceptable absolute accuracy under conditions of slow motion (mean error between 0.5° and 3.1°). In slow motions, relative accuracy varied from 2° to 7° depending on the type of AHRS and the type of rotation. Absolute and relative accuracy were significantly affected (p<0.05) by velocity during sustained motions. The extent of that effect varied across AHRS. Interpretation Absolute and relative accuracy of AHRS are affected by environmental magnetic perturbations and conditions of motion. Relative accuracy of AHRS is mostly affected by the ability of all modules to locate the same global reference coordinate system at all times. Conclusions Existing AHRS systems can be considered for use in clinical biomechanics under constrained conditions of use. 
While their individual capacity to track absolute motion is relatively consistent, the use of multiple AHRS modules to compute relative motion between rigid bodies needs to be optimized according to

  16. Vestibular and Oculomotor Assessments May Increase Accuracy of Subacute Concussion Assessment.

    PubMed

    McDevitt, J; Appiah-Kubi, K O; Tierney, R; Wright, W G

    2016-08-01

    In this study, we collected and analyzed preliminary data for the internal consistency of a new condensed model to assess vestibular and oculomotor impairments following a concussion. We also examined this model's ability to discriminate concussed athletes from healthy controls. Each participant was tested in a concussion assessment protocol that consisted of the Neurocom Sensory Organization Test (SOT), the Balance Error Scoring System exam, and a series of 8 vestibular and oculomotor assessments. Of these 10 assessments, only the SOT, near point convergence, and the signs and symptoms (S/S) scores collected following optokinetic stimulation, the horizontal eye saccades test, and the gaze stabilization test were significantly correlated with health status, and these were used in further analyses. Multivariate logistic regression for binary outcomes was employed, and the resulting beta weights were used to calculate the area under the receiver operating characteristic curve (AUC). The best model supported by our findings suggests that an exam consisting of the 4 SOT sensory ratios, near point convergence, and the optokinetic stimulation S/S score is sensitive in discriminating concussed athletes from healthy controls (accuracy=98.6%, AUC=0.983). However, an even more parsimonious model consisting of only the optokinetic stimulation and gaze stabilization test S/S scores and near point convergence was found to be sensitive for discriminating concussed athletes from healthy controls (accuracy=94.4%, AUC=0.951) without the need for expensive equipment. Although more investigation is needed, these findings will be helpful to health professionals, potentially providing them with a sensitive and specific battery of simple vestibular and oculomotor assessments for concussion management. PMID:27176886

  17. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variation was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 μL L⁻¹ SO₂ or 0.3 μL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.
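
    The reader-bias adjustment described in the last sentence is an ordinary least-squares fit of a reader's visual scores against the unbiased grid scores, which can then be inverted to correct that reader's future readings. A sketch under that interpretation (function names and numbers are illustrative):

```python
import numpy as np

def reader_adjustment(visual, grid):
    """Fit visual = a + b * grid for one reader and return a function that
    maps that reader's visual scores back onto the unbiased (grid) scale."""
    b, a = np.polyfit(grid, visual, 1)          # slope, intercept
    return lambda v: (np.asarray(v, float) - a) / b
```

    Each reader would get their own calibration, mirroring the per-reader regressions (R² = 0.89-0.91) reported in the abstract.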

  18. Evaluation of TDRSS-user orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Hodjatzadeh, M.; Samii, M. V.; Doll, C. E.; Hart, R. C.; Mistretta, G. D.

    1991-01-01

    The development of the Real-Time Orbit Determination/Enhanced (RTOD/E) system as a prototype for sequential orbit determination on a Disk Operating System (DOS) based personal computer (PC) is addressed. Also presented are the results of a study comparing the orbit determination accuracy of a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS). Independent assessments were made to examine the consistency of results obtained by the batch and sequential methods. Comparisons were made between the forward-filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for the Earth Radiation Budget Satellite (ERBS); the maximum solution differences were less than 25 m after the filter had reached steady state.

  19. An assessment of the accuracy of orthotropic photoelasticity

    NASA Technical Reports Server (NTRS)

    Hyer, M. W.; Liu, D. H.

    1984-01-01

    The accuracy of orthotropic photoelasticity was studied. The study consisted of both theoretical and experimental phases. In the theoretical phase a stress-optic law was developed. The stress-optic law included the effects of residual birefringence in the relation between applied stress and the material's optical response. The experimental phase had several portions. First, it was shown that four-point bending tests and the concept of an optical neutral axis could be conveniently used to calibrate the stress-optic behavior of the material. Second, the actual optical response of an orthotropic disk in diametral compression was compared with theoretical predictions. Third, the stresses in the disk were determined from the observed optical response, the stress-optic law, and a finite-difference form of the plane stress equilibrium equations. It was concluded that orthotropic photoelasticity is not as accurate as isotropic photoelasticity. This is believed to be due to the lack of good fringe resolution and the low sensitivity of most orthotropic photoelastic materials.

  20. Rectal cancer staging: Multidetector-row computed tomography diagnostic accuracy in assessment of mesorectal fascia invasion

    PubMed Central

    Ippolito, Davide; Drago, Silvia Girolama; Franzesi, Cammillo Talei; Fior, Davide; Sironi, Sandro

    2016-01-01

    AIM: To assess the diagnostic accuracy of multidetector-row computed tomography (MDCT) as compared with conventional magnetic resonance imaging (MRI), in identifying mesorectal fascia (MRF) invasion in rectal cancer patients. METHODS: Ninety-one patients with biopsy-proven rectal adenocarcinoma referred for thoracic and abdominal CT staging were enrolled in this study. The contrast-enhanced MDCT scans were performed on a 256-row scanner (ICT, Philips) with the following acquisition parameters: tube voltage 120 kV, tube current 150-300 mAs. Imaging data were reviewed as axial and as multiplanar reconstruction (MPR) images along the rectal tumor axis. The MRI study, performed at 1.5 T with a dedicated phased-array multicoil, included multiplanar T2 and axial T1 sequences and diffusion-weighted images (DWI). Axial and MPR CT images were independently compared with MRI, and MRF involvement was determined. The diagnostic accuracy of both modalities was compared and statistically analyzed. RESULTS: According to MRI, the MRF was involved in 51 patients and not involved in 40 patients. DWI allowed the tumor to be recognized as a focal mass with high signal intensity on high b-value images, compared with the signal of the normal adjacent rectal wall or with the lower tissue signal intensity background. The number of patients correctly staged by the native axial CT images was 71 out of 91 (41 with involved MRF; 30 with uninvolved MRF), while using the MPR images 80 patients were correctly staged (45 with involved MRF; 35 with uninvolved MRF). Local tumor staging suggested by MDCT agreed with that of MRI, yielding for axial CT images a sensitivity and specificity of 80.4% and 75%, positive predictive value (PPV) of 80.4%, negative predictive value (NPV) of 75% and accuracy of 78%; with MPR, sensitivity and specificity increased to 88% and 87.5%, PPV was 90%, NPV 85.36% and accuracy 88%. MPR images showed higher diagnostic accuracy, in terms of MRF involvement, than native axial images.

  1. The influence of sampling interval on the accuracy of trail impact assessment

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.
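
    A minimal sketch, with made-up census data, of the resampling-simulation idea: sample a complete census at increasing point intervals and compare the frequency-of-occurrence estimate against the census value.

```python
# Hypothetical illustration of resampling a complete trail census at
# increasing systematic point-sampling intervals.

def frequency_of_occurrence(census, interval):
    """Fraction of sampled points showing the impact, taking every
    `interval`-th census point (census spacing = 1 unit)."""
    sample = census[::interval]
    return sum(sample) / len(sample)

# Hypothetical census: 1 = impact present at that point, 0 = absent.
census = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0] * 50   # 500 census points

true_freq = sum(census) / len(census)           # 0.4 from the full census
estimates = {k: frequency_of_occurrence(census, k) for k in (1, 2, 5, 10)}
```

With this periodic impact pattern, the 10-point interval lands only on impacted points and badly overestimates frequency, echoing the paper's finding that frequency-of-occurrence estimates degrade quickly as the sampling interval grows.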

  2. Phase error compensation methods for high-accuracy profile measurement

    NASA Astrophysics Data System (ADS)

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Zhang, Zonghua; Jiang, Hao; Yin, Yongkai; Huang, Shujun

    2016-04-01

    In a phase-shifting algorithm-based fringe projection profilometry, the nonlinear intensity response, called the gamma effect, of the projector-camera setup is a major source of error in phase retrieval. This paper proposes two novel, accurate approaches to realize both active and passive phase error compensation based on a universal phase error model which is suitable for an arbitrary phase-shifting step. The experimental results on phase error compensation and profile measurement of standard components verified the validity and accuracy of the two proposed approaches, which are robust when faced with changeable measurement conditions.
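
    For context, the standard N-step phase-shifting retrieval (the textbook formula, not the paper's compensation method) can be sketched as follows; an ideal sinusoidal fringe is simulated, so no gamma distortion is modeled.

```python
import math

# Generic N-step phase-shifting retrieval: with intensities
# I_n = A + B*cos(phi + delta_n), delta_n = 2*pi*n/N, the wrapped phase is
# phi = atan2(-sum I_n sin(delta_n), sum I_n cos(delta_n)).

def retrieve_phase(intensities):
    n_steps = len(intensities)
    s = sum(I * math.sin(2 * math.pi * k / n_steps) for k, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * k / n_steps) for k, I in enumerate(intensities))
    return math.atan2(-s, c)

# Simulate an ideal 4-step measurement of a known phase.
true_phi = 0.7
I = [100 + 50 * math.cos(true_phi + 2 * math.pi * k / 4) for k in range(4)]
phi = retrieve_phase(I)   # recovers 0.7 (wrapped to (-pi, pi])
```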

  3. Mapping with Small UAS: A Point Cloud Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Toth, Charles; Jozkow, Grzegorz; Grejner-Brzezinska, Dorota

    2015-12-01

    Interest in using inexpensive Unmanned Aerial System (UAS) technology for topographic mapping has recently significantly increased. Small UAS platforms equipped with consumer grade cameras can easily acquire high-resolution aerial imagery allowing for dense point cloud generation, followed by surface model creation and orthophoto production. In contrast to conventional airborne mapping systems, UAS has limited ground coverage due to low flying height and limited flying time, yet it offers an attractive alternative to high performance airborne systems, as the cost of the sensors and platform, and the flight logistics, is relatively low. In addition, UAS is better suited for small area data acquisitions and to acquire data in difficult to access areas, such as urban canyons or densely built-up environments. The main question with respect to the use of UAS is whether the inexpensive consumer sensors installed in UAS platforms can provide the geospatial data quality comparable to that provided by conventional systems. This study aims at the performance evaluation of the current practice of UAS-based topographic mapping by reviewing the practical aspects of sensor configuration, georeferencing and point cloud generation, including comparisons between sensor types and processing tools. The main objective is to provide accuracy characterization and practical information for selecting and using UAS solutions in general mapping applications. The analysis is based on statistical evaluation as well as visual examination of experimental data acquired by a Bergen octocopter with three different image sensor configurations, including a GoPro HERO3+ Black Edition, a Nikon D800 DSLR and a Velodyne HDL-32. In addition, georeferencing data of varying quality were acquired and evaluated. The optical imagery was processed by using three commercial point cloud generation tools. 
Comparing point clouds created by active and passive sensors by using different quality sensors, and finally

  4. Classification Consistency and Accuracy for Complex Assessments under the Compound Multinomial Model

    ERIC Educational Resources Information Center

    Lee, Won-Chan; Brennan, Robert L.; Wan, Lei

    2009-01-01

    For a test that consists of dichotomously scored items, several approaches have been reported in the literature for estimating classification consistency and accuracy indices based on a single administration of a test. Classification consistency and accuracy have not been studied much, however, for "complex" assessments--for example, those that…

  5. Attribute-Level and Pattern-Level Classification Consistency and Accuracy Indices for Cognitive Diagnostic Assessment

    ERIC Educational Resources Information Center

    Wang, Wenyi; Song, Lihong; Chen, Ping; Meng, Yaru; Ding, Shuliang

    2015-01-01

    Classification consistency and accuracy are viewed as important indicators for evaluating the reliability and validity of classification results in cognitive diagnostic assessment (CDA). Pattern-level classification consistency and accuracy indices were introduced by Cui, Gierl, and Chang. However, the indices at the attribute level have not yet…

  6. The objective assessment of cough frequency: accuracy of the LR102 device

    PubMed Central

    2011-01-01

    Background The measurement of cough frequency is problematic and most often based on subjective assessment. The aim of the study was to assess the accuracy of the automatic identification of cough episodes by LR102, a cough frequency meter based on electromyography and audio sensors. Methods Ten adult patients complaining of cough were recruited in primary care and hospital settings. Participants were asked to wear LR102 for 4 consecutive hours during which they were also filmed. Results Measures of cough frequency by LR102 and manual counting were closely correlated (r = 0.87 for number of cough episodes per hour; r = 0.89 for number of single coughs per hour) but LR102 overestimated cough frequency. Bland-Altman plots indicate that differences between the two measurements were not influenced by cough frequency. Conclusions LR102 offers a useful estimate of cough frequency in adults in their own environment, while significantly reducing the time required for analysis. PMID:22132691
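
    The Bland-Altman comparison mentioned above can be sketched as follows, with hypothetical device and manual counts (not the study's data): mean bias and 95% limits of agreement between the two measurements.

```python
import statistics

# Minimal Bland-Altman sketch on hypothetical paired cough counts.

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

device = [12, 20, 8, 15, 30, 22]   # hypothetical LR102-style counts per hour
manual = [10, 18, 7, 13, 26, 20]   # hypothetical manual counts per hour
bias, lo, hi = bland_altman(device, manual)   # bias > 0: device overestimates
```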

  7. Method of questioning and the accuracy of eyewitness testimony.

    PubMed

    Venter, A; Louw, D A

    2005-03-01

    System variables are an integral part of the factors that can be controlled by the legal system to enhance the accuracy of testimony by eyewitnesses. Apart from examining the relationship between questioning as a system variable and the accuracy of testimony, the present study also explores the relationship between type of questioning and certain biographical variables (occupation, age, gender and race). To achieve the aim of the study, 412 respondents consisting of 11- to 14-year-olds, university students, members of the public and Police College students participated and were exposed to open-ended or closed-ended questions. It was found that the participants who responded to the closed-ended questions were significantly more accurate than those who answered the open-ended questions. All the biographical groups, except the public, were more accurate in responding to the closed-ended questions. The scholars obtained the lowest scores (although not always significantly) for both the open-ended and closed-ended questions. With respect to age, the 18- to 25-year-olds obtained significantly higher scores than the other groups for the closed-ended questions. Whites performed significantly better than blacks in response to the open-ended and closed-ended questions. PMID:15887614

  8. Does DFT-SAPT method provide spectroscopic accuracy?

    SciTech Connect

    Shirkov, Leonid; Makarewicz, Jan

    2015-02-14

    Ground state potential energy curves for homonuclear and heteronuclear dimers consisting of noble gas atoms from He to Kr were calculated within the symmetry adapted perturbation theory based on the density functional theory (DFT-SAPT). These potentials together with spectroscopic data derived from them were compared to previous high-precision coupled cluster with singles and doubles including the connected triples theory calculations (or better if available) as well as to experimental data used as the benchmark. The impact of midbond functions on DFT-SAPT results was tested to study the convergence of the interaction energies. It was shown that, for most of the complexes, DFT-SAPT potential calculated at the complete basis set (CBS) limit is lower than the corresponding benchmark potential in the region near its minimum and hence, spectroscopic accuracy cannot be achieved. The influence of the residual term δ(HF) on the interaction energy was also studied. As a result, we have found that this term improves the agreement with the benchmark in the repulsive region for the dimers considered, but leads to even larger overestimation of potential depth D_e. Although the standard hybrid exchange-correlation (xc) functionals with asymptotic correction within the second order DFT-SAPT do not provide the spectroscopic accuracy at the CBS limit, it is possible to adjust empirically basis sets yielding highly accurate results.

  9. Assessing the Accuracy of Quantitative Molecular Microbial Profiling

    PubMed Central

    O’Sullivan, Denise M.; Laver, Thomas; Temisak, Sasithon; Redshaw, Nicholas; Harris, Kathryn A.; Foy, Carole A.; Studholme, David J.; Huggett, Jim F.

    2014-01-01

    The application of high-throughput sequencing in profiling microbial communities is providing an unprecedented ability to investigate microbiomes. Such studies typically apply one of two methods: amplicon sequencing using PCR to target a conserved orthologous sequence (typically the 16S ribosomal RNA gene) or whole (meta)genome sequencing (WGS). Both methods have been used to catalog the microbial taxa present in a sample and quantify their respective abundances. However, a comparison of the inherent precision or bias of the different sequencing approaches has not been performed. We previously developed a metagenomic control material (MCM) to investigate error when performing different sequencing strategies. Amplicon sequencing using four different primer strategies and two 16S rRNA regions was examined (Roche 454 Junior) and compared to WGS (Illumina HiSeq). All sequencing methods generally performed comparably and in good agreement with organism specific digital PCR (dPCR); WGS notably demonstrated very high precision. Where discrepancies between relative abundances occurred they tended to differ by less than twofold. Our findings suggest that when alternative sequencing approaches are used for microbial molecular profiling they can perform with good reproducibility, but care should be taken when comparing small differences between distinct methods. This work provides a foundation for future work comparing relative differences between samples and the impact of extraction methods. We also highlight the value of control materials when conducting microbial profiling studies to benchmark methods and set appropriate thresholds. PMID:25421243

  10. Accuracy assessment, using stratified plurality sampling, of portions of a LANDSAT classification of the Arctic National Wildlife Refuge Coastal Plain

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1989-01-01

    An application of a classification accuracy assessment procedure is described for a vegetation and land cover map prepared by digital image processing of LANDSAT multispectral scanner data. A statistical sampling procedure called Stratified Plurality Sampling was used to assess the accuracy of portions of a map of the Arctic National Wildlife Refuge coastal plain. Results are tabulated as percent correct classification overall as well as per category with associated confidence intervals. Although values of percent correct were disappointingly low for most categories, the study was useful in highlighting sources of classification error and demonstrating shortcomings of the plurality sampling method.
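
    A hedged sketch of the tabulated quantity: per-category percent correct with a normal-approximation confidence interval. The counts are hypothetical, and the study's actual interval construction under stratified plurality sampling may differ.

```python
import math

def percent_correct_ci(correct, n, z=1.96):
    """Proportion correct with a simple normal-approximation CI,
    clipped to [0, 1]. Counts here are hypothetical."""
    p = correct / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical category: 45 of 60 reference points correctly classified.
p, lo, hi = percent_correct_ci(45, 60)
```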

  11. Efficiency and Accuracy Verification of the Explicit Numerical Manifold Method for Dynamic Problems

    NASA Astrophysics Data System (ADS)

    Qu, X. L.; Wang, Y.; Fu, G. Y.; Ma, G. W.

    2015-05-01

    The original numerical manifold method (NMM) employs an implicit time integration scheme to achieve higher computational accuracy, but its efficiency is relatively low, especially when the open-close iterations of contact are involved. To improve its computational efficiency, a modified version of the NMM based on an explicit time integration algorithm is proposed in this study. The lumped mass matrix, internal force and damping vectors are derived for the proposed explicit scheme. A calibration study on P-wave propagation along a rock bar is conducted to investigate the efficiency and accuracy of the developed explicit numerical manifold method (ENMM) for wave propagation problems. Various considerations in the numerical simulations are discussed, and parametric studies are carried out to obtain an insight into the influencing factors on the efficiency and accuracy of wave propagation. To further verify the capability of the proposed ENMM, dynamic stability assessment for a fractured rock slope under seismic effect is analysed. It is shown that, compared to the original NMM, the computational efficiency of the proposed ENMM can be significantly improved.

  12. Evaluating the Effect of Learning Style and Student Background on Self-Assessment Accuracy

    ERIC Educational Resources Information Center

    Alaoutinen, Satu

    2012-01-01

    This study evaluates a new taxonomy-based self-assessment scale and examines factors that affect assessment accuracy and course performance. The scale is based on Bloom's Revised Taxonomy and is evaluated by comparing students' self-assessment results with course performance in a programming course. Correlation has been used to reveal possible…

  13. Update and review of accuracy assessment techniques for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Heinen, J. T.; Oderwald, R. G.

    1983-01-01

    Research performed in the accuracy assessment of remotely sensed data is updated and reviewed. The use of discrete multivariate analysis techniques for the assessment of error matrices, the use of computer simulation for assessing various sampling strategies, and an investigation of spatial autocorrelation techniques are examined.
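
    The basic error-matrix statistics underlying such assessments, overall accuracy and the Kappa coefficient of agreement, can be sketched for a hypothetical 3-class confusion matrix:

```python
# Overall accuracy and Cohen's Kappa from an error (confusion) matrix.
# The matrix below is hypothetical; rows = reference class, columns = mapped class.

def overall_accuracy(matrix):
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    return correct / total

def kappa(matrix):
    n = sum(sum(row) for row in matrix)
    po = overall_accuracy(matrix)   # observed agreement
    # Expected chance agreement from row and column marginals.
    pe = sum(sum(matrix[i]) * sum(row[i] for row in matrix)
             for i in range(len(matrix))) / n ** 2
    return (po - pe) / (1 - pe)

m = [[50, 5, 5],
     [10, 40, 10],
     [5, 5, 70]]

oa = overall_accuracy(m)   # 0.8
k = kappa(m)               # agreement corrected for chance
```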

  14. 12 CFR 630.5 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CREDIT SYSTEM General § 630.5 Accuracy of reports and assessment of internal control over financial... assessment of internal control over financial reporting. (1) Annual reports must include a report by the Funding Corporation's management assessing the effectiveness of the internal control over...

  15. Assessment of RFID Read Accuracy for ISS Water Kit

    NASA Technical Reports Server (NTRS)

    Chu, Andrew

    2011-01-01

    The Space Life Sciences Directorate/Medical Informatics and Health Care Systems Branch (SD4) is assessing the benefits of Radio Frequency Identification (RFID) technology for tracking items flown onboard the International Space Station (ISS). As an initial study, the Avionic Systems Division Electromagnetic Systems Branch (EV4) is collaborating with SD4 to affix RFID tags to a water kit supplied by SD4 and studying the read success rate of the tagged items. The tagged water kit inside a Cargo Transfer Bag (CTB) was inventoried using three different RFID technologies, including the Johnson Space Center Building 14 Wireless Habitat Test Bed RFID portal, an RFID hand-held reader being targeted for use on board the ISS, and an RFID enclosure designed and prototyped by EV4.

  16. Classification accuracy across multiple tests following item method directed forgetting.

    PubMed

    Goernert, Phillip N; Widner, Robert L; Otani, Hajime

    2007-09-01

    We investigated recall of line-drawing pictures paired at study with an instruction either to remember (TBR items) or to forget (TBF items). Across three 7-minute tests, net recall (items reported independent of accuracy in instructional designation) and correctly classified recall (recall conditional on correct instructional designation) showed directed forgetting. That is, for both measures, recall of TBR items always exceeded recall of TBF items. Net recall for both item types increased across tests at comparable levels showing hypermnesia. However, across tests, correct classification of both item types decreased at comparable levels. Collectively, hypermnesia as measured by net recall is possible for items from multiple sets, but at the cost of accurate source information. PMID:17676551

  17. Assessing Sensor Accuracy for Non-Adjunct Use of Continuous Glucose Monitoring

    PubMed Central

    Patek, Stephen D.; Ortiz, Edward Andrew; Breton, Marc D.

    2015-01-01

    Background: The level of continuous glucose monitoring (CGM) accuracy needed for insulin dosing using sensor values (i.e., the level of accuracy permitting non-adjunct CGM use) is a topic of ongoing debate. Assessment of this level in clinical experiments is virtually impossible because the magnitude of CGM errors cannot be manipulated and related prospectively to clinical outcomes. Materials and Methods: A combination of archival data (parallel CGM, insulin pump, self-monitoring of blood glucose [SMBG] records, and meals for 56 pump users with type 1 diabetes) and in silico experiments was used to “replay” real-life treatment scenarios and relate sensor error to glycemic outcomes. Nominal blood glucose (BG) traces were extracted using a mathematical model, yielding 2,082 BG segments each initiated by insulin bolus and confirmed by SMBG. These segments were replayed at seven sensor accuracy levels (mean absolute relative differences [MARDs] of 3–22%) testing six scenarios: insulin dosing using sensor values, threshold, and predictive alarms, each without or with considering CGM trend arrows. Results: In all six scenarios, the occurrence of hypoglycemia (frequency of BG levels ≤50 mg/dL and BG levels ≤39 mg/dL) increased with sensor error, displaying an abrupt slope change at MARD = 10%. Similarly, hyperglycemia (frequency of BG levels ≥250 mg/dL and BG levels ≥400 mg/dL) increased and displayed an abrupt slope change at MARD = 10%. When added to insulin dosing decisions, information from CGM trend arrows, threshold, and predictive alarms resulted in improvement in average glycemia by 1.86, 8.17, and 8.88 mg/dL, respectively. Conclusions: Using CGM for insulin dosing decisions is feasible below a certain level of sensor error, estimated in silico at MARD = 10%. In our experiments, further accuracy improvement did not contribute substantively to better glycemic outcomes. PMID:25436913
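
    The MARD metric varied in the in silico experiments is straightforward to compute; the paired readings below are hypothetical:

```python
# Mean absolute relative difference (MARD) between CGM readings and
# reference blood glucose, in percent. Values below are hypothetical.

def mard(cgm, reference):
    rel = [abs(c - r) / r for c, r in zip(cgm, reference)]
    return 100 * sum(rel) / len(rel)

reference = [100, 150, 200, 80]   # reference BG, mg/dL
cgm       = [110, 140, 210, 76]   # paired sensor readings, mg/dL
error = mard(cgm, reference)      # about 6.7%
```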

  18. Accuracy of subjective assessment of fever by Nigerian mothers in under-5 children

    PubMed Central

    Odinaka, Kelechi Kenneth; Edelu, Benedict O.; Nwolisa, Emeka Charles; Amamilo, Ifeyinwa B.; Okolo, Seline N.

    2014-01-01

    Background: Many mothers still rely on palpation to determine if their children have fever at home before deciding to seek medical attention or administer self-medications. This study was carried out to determine the accuracy of subjective assessment of fever by Nigerian mothers in under-5 children. Patients and Methods: Each eligible child had a tactile assessment of fever by the mother after which the axillary temperature was measured. Statistical analysis was done using SPSS version 19 (IBM Inc. Chicago Illinois, USA, 2010). Result: A total of 113 mother/child pairs participated in the study. Palpation overestimated fever by 24.6%. Irrespective of the surface of the hand used for palpation, the sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of tactile assessment were 82.4%, 37.1%, 51.9% and 71.9%, respectively. The use of the palmar surface of the hand had a better sensitivity (95.2%) than the dorsum of the hand (69.2%). The use of multiple sites had better sensitivity (86.7%) than the use of a single site (76.2%). Conclusion: Tactile assessment of childhood fevers by mothers is still a relevant screening tool for the presence or absence of fever. Palpation with the palmar surface of the hand using multiple sites improves the reliability of tactile assessment of fever. PMID:25114371
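
    The four reported statistics come from a 2x2 table. The counts below are hypothetical (the abstract does not give the table itself) but are chosen so the resulting values reproduce those reported:

```python
# Screening-test statistics from a 2x2 table.
# tp/fp/fn/tn counts are hypothetical, chosen to match the abstract's values.

def screening_stats(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

stats = screening_stats(tp=42, fp=39, fn=9, tn=23)   # 113 mother/child pairs
```

With these counts, sensitivity = 42/51 ≈ 82.4%, specificity = 23/62 ≈ 37.1%, PPV = 42/81 ≈ 51.9% and NPV = 23/32 ≈ 71.9%, matching the abstract.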

  19. Geometric calibration and accuracy assessment of a multispectral imager on UAVs

    NASA Astrophysics Data System (ADS)

    Zheng, Fengjie; Yu, Tao; Chen, Xingfeng; Chen, Jiping; Yuan, Guoti

    2012-11-01

    The increasing development of Unmanned Aerial Vehicle (UAV) platforms and associated sensing technologies has widely promoted UAV remote sensing applications. UAVs, especially low-cost UAVs, limit the sensor payload in weight and dimension. Cameras on UAVs are mostly panoramic or fisheye-lens, small-format CCD planar array cameras; unknown intrinsic parameters and lens optical distortion will cause serious image aberrations, leading to errors of a few meters or even tens of meters on the ground per pixel. However, the high spatial resolution makes accurate geolocation all the more critical to UAV quantitative remote sensing research. A geometric calibration method for the MCC4-12F Multispectral Imager, designed to be carried on UAVs, has been developed and implemented. A multi-image space resection algorithm, suitable for multispectral cameras, was used to estimate geometric calibration parameters at random positions and different photogrammetric altitudes in a 3D test field. Both theoretical and practical accuracy assessments were performed. The theoretical strategy, resolving object-space and image-point coordinate differences by space intersection, showed that object-space RMSE was 0.2 and 0.14 pixels in the X and Y directions, and image-space RMSE was better than 0.5 pixels. To verify the accuracy and reliability of the calibration parameters, a practical study was carried out in Tianjin UAV flight experiments; the corrected accuracy validated by ground checkpoints was better than 0.3 m. Typical surface reflectance retrieved on the basis of geo-rectified data was compared with ground ASD measurements, with a 4% discrepancy. Hence, the approach presented here is suitable for UAV multispectral imagers.

  20. Estimating the point accuracy of population registers using capture-recapture methods in Scotland.

    PubMed Central

    Garton, M J; Abdalla, M I; Reid, D M; Russell, I T

    1996-01-01

    STUDY OBJECTIVE: To estimate the point accuracy of adult registration on the community health index (CHI) by comparing it with the electoral register (ER) and the community charge register (CCR). DESIGN: Survey of overlapping samples from three registers to ascertain whether respondents were living at the addresses given on the registers, analysed by capture-recapture methods. SETTING: Aberdeen North and South parliamentary constituencies. PARTICIPANTS: Random samples of adult registrants aged at least 18 years from the CHI (n = 1000), ER (n = 998), and CCR (n = 956). MAIN RESULTS: Estimated sensitivities (the proportions of the target population registered at the address where they live) were: CHI--84.6% (95% confidence limits 82.4%, 86.7%); ER--90.0% (87.5%, 92.5%), and CCR--87.7% (85.3%, 90.3%). Positive predictive values (the proportions of registrants who were living at their stated addresses) were: CHI--84.6% (82.2%, 87.0%); ER--94.0% (90.9%, 97.1%), and CCR--93.7% (91.7%, 95.7%). CONCLUSIONS: The CHI assessed in this study was significantly less sensitive and predictive than the corresponding ER and CCR. Capture-recapture methods are effective in assessing the accuracy of population registers. PMID:8762363
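
    A two-source sketch of the capture-recapture idea behind the study (the Lincoln-Petersen estimator; the counts are hypothetical, and the study's three-register analysis is more involved):

```python
# Two-source capture-recapture (Lincoln-Petersen) estimate of the true
# population size from two overlapping registers. Counts are hypothetical.

def lincoln_petersen(n1, n2, m):
    """n1, n2: counts on each register; m: count appearing on both."""
    return n1 * n2 / m

# Hypothetical: 900 correctly registered on A, 880 on B, 810 on both.
n_hat = lincoln_petersen(900, 880, 810)   # estimated true population size
sensitivity_a = 900 / n_hat               # estimated coverage of register A
```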

  1. Online Medical Device Use Prediction: Assessment of Accuracy.

    PubMed

    Maktabi, Marianne; Neumuth, Thomas

    2016-01-01

    Cost-intensive units in the hospital, such as the operating room, require effective resource management to improve surgical workflow and patient care. To maximize efficiency, online management systems should accurately forecast the use of technical resources (medical instruments and devices). We compared forecasts for several surgical activities, such as use of the coagulator, based on spectral analysis and application of a linear time-variant system to obtain future technical resource usage. In our study we examine the influence of the duration of usage and the total usage rate of the technical equipment on prediction performance over several time intervals. A cross-validation was conducted with sixty-two neck dissections to evaluate prediction performance. The performance of a use-state forecast does not change whether duration is considered or not, but decreases with lower total usage rates of the observed instruments. A minimum number of surgical workflow recordings (here: 62) and >5-minute time intervals for use-state forecasts are required for applying the described method in surgical practice. The work presented here might support the reduction of resource conflicts when resources are shared among different operating rooms. PMID:27577445

  2. Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies

    PubMed Central

    Gülhan, Orekıcı Temel

    2016-01-01

    Background/Aim. It is necessary to decide whether newly improved methods are better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test/imperfect standard test, the differences of estimated sensitivity/specificity are calculated with the help of information obtained from samples. However, to generalize this value to the population, it should be given with confidence intervals. The aim of this study is to evaluate the confidence interval methods developed for the differences between two dependent sensitivity/specificity values on a clinical application. Materials and Methods. In this study, confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Intervals, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As the clinical application, data used in the diagnostic study by Dickel et al. (2010) have been taken as a sample. Results. The results for the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are given as a table. Conclusion. When choosing among the confidence interval methods, researchers have to consider whether the case to be compared is a single ratio or differences of dependent binary ratios, the correlation coefficient between the rates in two dependent ratios, and the sample sizes. PMID:27478491
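
    As one concrete member of this family of methods, a simple asymptotic (Wald) interval for the difference of two paired proportions can be sketched as follows. This is a generic textbook interval, not necessarily any of the paper's exact variants, and the counts are hypothetical.

```python
import math

def paired_diff_ci(b, c, n, z=1.96):
    """Asymptotic (Wald) CI for the difference p1 - p2 of two paired
    proportions, where b and c are the two discordant counts out of n pairs."""
    d = (b - c) / n
    se = math.sqrt(b + c - (b - c) ** 2 / n) / n
    return d - z * se, d + z * se

# Hypothetical: 10 pairs positive only on test 1, 4 only on test 2, n = 100 pairs.
lo, hi = paired_diff_ci(10, 4, 100)   # interval around d = 0.06
```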

  3. Assessing the accuracy of Landsat Thematic Mapper classification using double sampling

    USGS Publications Warehouse

    Kalkhan, M.A.; Reich, R.M.; Stohlgren, T.J.

    1998-01-01

    Double sampling was used to provide a cost-efficient estimate of the accuracy of a Landsat Thematic Mapper (TM) classification map of a scene located in Rocky Mountain National Park, Colorado. In the first phase, 200 sample points were randomly selected to assess the accuracy between Landsat TM data and aerial photography. The overall accuracy and Kappa statistic were 49.5% and 32.5%, respectively. In the second phase, 25 sample points identified in the first phase were selected using stratified random sampling and located in the field. This information was used to correct for misclassification errors associated with the first phase samples. The overall accuracy and Kappa statistic increased to 59.6% and 45.6%, respectively.

  4. Study of accuracy of precipitation measurements using simulation method

    NASA Astrophysics Data System (ADS)

    Nagy, Zoltán; Lajos, Tamás; Morvai, Krisztián

    2013-04-01

    · Does the use of a wind shield improve the accuracy of precipitation measurements? · What is the source of the error that can be detected at tipping-bucket raingauges in winter because of the use of heating power? On our poster we would like to present the answers to the questions listed above.

  5. Multipolar Ewald Methods, 1: Theory, Accuracy, and Performance

    PubMed Central

    2015-01-01

    The Ewald, Particle Mesh Ewald (PME), and Fast Fourier–Poisson (FFP) methods are developed for systems composed of spherical multipole moment expansions. A unified set of equations is derived that takes advantage of a spherical tensor gradient operator formalism in both real space and reciprocal space to allow extension to arbitrary multipole order. The implementation of these methods into a novel linear-scaling modified “divide-and-conquer” (mDC) quantum mechanical force field is discussed. The evaluation times and relative force errors are compared between the three methods, as a function of multipole expansion order. Timings and errors are also compared within the context of the quantum mechanical force field, which encounters primary errors related to the quality of reproducing electrostatic forces for a given density matrix and secondary errors resulting from the propagation of the approximate electrostatics into the self-consistent field procedure, which yields a converged, variational, but nonetheless approximate density matrix. Condensed-phase simulations of an mDC water model are performed with the multipolar PME method and compared to an electrostatic cutoff method, which is shown to artificially increase the density of water and heat of vaporization relative to full electrostatic treatment. PMID:25691829

  6. Dynamic Assessment of School-Age Children's Narrative Ability: An Experimental Investigation of Classification Accuracy

    ERIC Educational Resources Information Center

    Pena, Elizabeth D.; Gillam, Ronald B.; Malek, Melynn; Ruiz-Felter, Roxanna; Resendiz, Maria; Fiestas, Christine; Sabel, Tracy

    2006-01-01

    Two experiments examined reliability and classification accuracy of a narration-based dynamic assessment task. Purpose: The first experiment evaluated whether parallel results were obtained from stories created in response to 2 different wordless picture books. If so, the tasks and measures would be appropriate for assessing pretest and posttest…

  7. Enhancing the accuracy of knowledge discovery: a supervised learning method

    PubMed Central

    2014-01-01

    Background The amount of biomedical literature available is growing at an explosive speed, but a large amount of useful information remains undiscovered in it. Researchers can form informed biomedical hypotheses through mining this literature. Unfortunately, popular mining methods based on co-occurrence produce too many target concepts, pushing down the relevance ranking of the potentially useful ones. Methods This paper presents a new method for selecting linking concepts that exploits statistical and textual features to represent each linking concept, and then classifies them as relevant or irrelevant to the starting concepts. Relevant linking concepts are then used to discover target concepts. Results An evaluation shows that textual features improve the results obtained with statistical features alone. We successfully replicate Swanson's two classic discoveries and find that the rankings of potentially relevant target concepts are relatively high. Conclusions The number of target concepts is greatly reduced, and potentially relevant target concepts gain higher rankings, by adopting only relevant linking concepts. Thus, the proposed method has the potential to help biomedical experts find the most useful and valuable target concepts effectively. PMID:25474584

  8. Examining rating quality in writing assessment: rater agreement, error, and accuracy.

    PubMed

    Wind, Stefanie A; Engelhard, George

    2012-01-01

    The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments. PMID:23270978

  9. Diagnostic accuracy of the vegetative and minimally conscious state: Clinical consensus versus standardized neurobehavioral assessment

    PubMed Central

    Schnakers, Caroline; Vanhaudenhuyse, Audrey; Giacino, Joseph; Ventura, Manfredi; Boly, Melanie; Majerus, Steve; Moonen, Gustave; Laureys, Steven

    2009-01-01

    Background Previously published studies have reported that up to 43% of patients with disorders of consciousness are erroneously assigned a diagnosis of vegetative state (VS). However, no recent studies have investigated the accuracy of this grave clinical diagnosis. In this study, we compared consensus-based diagnoses of VS and the minimally conscious state (MCS) to those based on a well-established standardized neurobehavioral rating scale, the JFK Coma Recovery Scale-Revised (CRS-R). Methods We prospectively followed 103 patients (55 ± 19 years) with mixed etiologies and compared the clinical consensus diagnosis provided by the physician on the basis of the medical staff's daily observations to diagnoses derived from CRS-R assessments performed by research staff. All patients were assigned a diagnosis of 'VS', 'MCS' or 'uncertain diagnosis.' Results Of the 44 patients diagnosed with VS based on the clinical consensus of the medical team, 18 (41%) were found to be in MCS following standardized assessment with the CRS-R. Of the 41 patients with a consensus diagnosis of MCS, 4 (10%) had emerged from MCS, according to the CRS-R. We also found that the majority of patients assigned an uncertain diagnosis by clinical consensus (89%) were in MCS based on CRS-R findings. Conclusion Despite the importance of diagnostic accuracy, the rate of misdiagnosis of VS has not substantially changed in the past 15 years. Standardized neurobehavioral assessment is a more sensitive means of establishing differential diagnosis in patients with disorders of consciousness than diagnosis by clinical consensus. PMID:19622138

  10. Accuracy assessment of Kinect for Xbox One in point-based tracking applications

    NASA Astrophysics Data System (ADS)

    Goral, Adrian; Skalski, Andrzej

    2015-12-01

    We present the accuracy assessment of a point-based tracking system built on Kinect v2. In our approach, color, IR and depth data were used to determine the positions of spherical markers. To accomplish this task, we calibrated the depth/infrared and color cameras using a custom method. As a reference tool we used Polaris Spectra optical tracking system. The mean error obtained within the range from 0.9 to 2.9 m was 61.6 mm. Although the depth component of the error turned out to be the largest, the random error of depth estimation was only 1.24 mm on average. Our Kinect-based system also allowed for reliable angular measurements within the range of ±20° from the sensor's optical axis.
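
    Accuracy against a reference tracker of the kind used above reduces to statistics over paired 3D positions. A minimal sketch (illustrative, not the authors' pipeline) computing the mean Euclidean error and its per-axis RMS components:

```python
import math
import statistics

def tracking_errors(measured, reference):
    """Mean Euclidean error and per-axis RMS between paired 3D positions
    (lists of (x, y, z) tuples in the same units, e.g. millimeters)."""
    deltas = [(mx - rx, my - ry, mz - rz)
              for (mx, my, mz), (rx, ry, rz) in zip(measured, reference)]
    mean_err = statistics.mean(math.sqrt(x * x + y * y + z * z)
                               for x, y, z in deltas)
    rms_axis = tuple(math.sqrt(statistics.mean(c * c for c in axis))
                     for axis in zip(*deltas))
    return mean_err, rms_axis

mean_err, rms_axis = tracking_errors([(1, 0, 0), (0, 2, 0)],
                                     [(0, 0, 0), (0, 0, 0)])
```

Splitting the error by axis, as the per-axis RMS does, is what lets a study attribute most of the error to the depth component, as reported above.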

  11. A MODEL TO ASSESS THE ACCURACY OF DETECTING ARBOVIRUSES IN MOSQUITO POOLS

    PubMed Central

    VITEK, CHRISTOPHER J.; RICHARDS, STEPHANIE L.; ROBINSON, HEATHER L.; SMARTT, CHELSEA T.

    2009-01-01

    Vigilant surveillance of virus prevalence in mosquitoes is essential for risk assessment and outbreak prediction. Accurate virus detection methods are essential for arbovirus surveillance. We have developed a model to estimate the probability of accurately detecting a virus-positive mosquito from pooled field collections using standard molecular techniques. We discuss several factors influencing the probability of virus detection, including the number of virions in the sample, the total sample volume, and the portion of the sample volume that is being tested. Our model determines the probability of obtaining at least 1 virion in the sample that is tested. The model also determines the optimal sample volume that is required in any test to ensure a desired probability of virus detection is achieved, and can be used to support the accuracy of current tests or to optimize existing techniques. PMID:19852231
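
    The detection probability described above follows directly if virions are assumed to distribute independently and uniformly in the pooled sample: the chance that at least one of n virions lands in the tested aliquot is 1 − (1 − v/V)^n. A minimal sketch of that calculation (not the authors' code):

```python
def p_detect(n_virions, tested_volume, total_volume):
    """Probability that at least one of n virions lands in the tested
    aliquot, assuming virions distribute independently and uniformly."""
    fraction = tested_volume / total_volume
    return 1.0 - (1.0 - fraction) ** n_virions

def required_volume(n_virions, total_volume, target_p):
    """Smallest tested volume giving at least target_p detection probability,
    obtained by inverting the formula above."""
    return total_volume * (1.0 - (1.0 - target_p) ** (1.0 / n_virions))
```

For example, testing half the sample volume with a single virion present gives a 50% detection probability, which is why low-titer pools demand a larger tested fraction.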

  12. Accuracy Assessment of the Integration of GNSS and a MEMS IMU in a Terrestrial Platform

    PubMed Central

    Madeira, Sergio; Yan, Wenlin; Bastos, Luísa; Gonçalves, José A.

    2014-01-01

    MEMS Inertial Measurement Units are available at low cost and can replace expensive units in mobile mapping platforms which need direct georeferencing. This is done through integration with GNSS measurements in order to achieve a continuous positioning solution and to obtain orientation angles. This paper presents an assessment of the accuracy of a system that integrates GNSS and a MEMS IMU on a terrestrial platform. We describe the methodology used and the tests performed, in which the accuracy of the position and orientation parameters was assessed using an independent photogrammetric technique employing cameras that are part of the mobile mapping system developed by the authors. Results for the accuracy of attitude angles and coordinates show that accuracies better than a decimeter in position, and under a degree in angle, can be achieved even considering that the terrestrial platform operates in less than favorable environments. PMID:25375757

  13. Monte-Carlo Simulation for Accuracy Assessment of a Single Camera Navigation System

    NASA Astrophysics Data System (ADS)

    Bethmann, F.; Luhmann, T.

    2012-07-01

    The paper describes a simulation-based optimization of an optical tracking system that is used as a 6DOF navigation system for neurosurgery. Compared to classical systems used in clinical navigation, the presented system has two unique properties: firstly, the system will be miniaturized and integrated into an operating microscope for neurosurgery; secondly, due to miniaturization, a single-camera approach has been designed. Single-camera techniques for 6DOF measurements show a special sensitivity to weak geometric configurations between camera and object. In addition, the achievable accuracy depends significantly on the geometric properties of the tracked objects (locators). Besides the quality and stability of the targets used on the locator, their geometric configuration is of major importance. In the following, the development and investigation of a simulation program is presented which allows for the assessment and optimization of the system with respect to accuracy. Different system parameters can be altered, as well as different scenarios representing the operational use of the system. Measurement deviations are estimated with the Monte-Carlo method. Practical measurements validate the correctness of the numerical simulation results.
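
    Estimating measurement deviations by the Monte-Carlo method can be sketched generically: perturb the nominal inputs with zero-mean Gaussian noise many times and take the spread of the derived quantity. The measurement function and noise level below are illustrative placeholders, not the paper's camera model:

```python
import random
import statistics

def monte_carlo_deviation(measure, nominal_inputs, sigma, trials=10000, seed=1):
    """Estimate the standard deviation of a derived measurement by
    perturbing each input with zero-mean Gaussian noise (Monte Carlo)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        noisy = [x + rng.gauss(0.0, sigma) for x in nominal_inputs]
        samples.append(measure(noisy))
    return statistics.stdev(samples)

# Example: deviation of the 2D distance between two noisy points
# (x1, y1, x2, y2), each coordinate perturbed with sigma = 0.5.
dist = lambda p: ((p[0] - p[2]) ** 2 + (p[1] - p[3]) ** 2) ** 0.5
sd = monte_carlo_deviation(dist, [0.0, 0.0, 100.0, 0.0], sigma=0.5)
```

Because two independent noisy coordinates contribute, the estimated deviation comes out near sqrt(2) times the per-coordinate sigma; the same machinery applies to any measurement function, which is what makes the approach useful for weak-geometry studies.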

  14. Assessing the accuracy of satellite derived global and national urban maps in Kenya.

    PubMed

    Tatem, A J; Noor, A M; Hay, S I

    2005-05-15

    Ninety percent of projected global urbanization will be concentrated in low-income countries (United Nations, 2004). This will have considerable environmental, economic and public health implications for those populations. Objective and efficient methods of delineating urban extent are a cross-sectoral need, complicated by the diversity of urban definition rubrics world-wide. Large-area maps of urban extent are becoming increasingly available in the public domain, as is a wide range of medium spatial resolution satellite imagery. Here we describe the extension of a methodology based on Landsat ETM and Radarsat imagery to the production of a human settlement map of Kenya. This map and five global satellite imagery-derived maps of urban extent were then compared at the Kenya national level against an expert-opinion coverage for accuracy assessment. The results showed that the map produced using medium spatial resolution satellite imagery was of comparable accuracy to the expert-opinion coverage. The five global urban maps exhibited a range of inaccuracies, emphasising that care should be taken when using these maps at national and sub-national scales. PMID:22581985

  15. Quantitative Assessment of Shockwave Lithotripsy Accuracy and the Effect of Respiratory Motion*

    PubMed Central

    Bailey, Michael R.; Shah, Anup R.; Hsi, Ryan S.; Paun, Marla; Harper, Jonathan D.

    2012-01-01

    Abstract Background and Purpose Effective stone comminution during shockwave lithotripsy (SWL) is dependent on precise three-dimensional targeting of the shockwave. Respiratory motion, imprecise targeting or shockwave alignment, and stone movement may compromise treatment efficacy. The purpose of this study was to evaluate the accuracy of shockwave targeting during SWL treatment and the effect of motion from respiration. Patients and Methods Ten patients underwent SWL for the treatment of 13 renal stones. Stones were targeted fluoroscopically using a Healthtronics Lithotron (five cases) or Dornier Compact Delta II (five cases) shockwave lithotripter. Shocks were delivered at a rate of 1 to 2 Hz with ramping shockwave energy settings of 14 to 26 kV or level 1 to 5. After the low-energy pretreatment and protective pause, a commercial diagnostic ultrasound (US) imaging system was used to record images of the stone during active SWL treatment. Shockwave accuracy, defined as the proportion of shockwaves that resulted in stone motion with shockwave delivery, and respiratory stone motion were determined by two independent observers who reviewed the ultrasonographic videos. Results Mean age was 51±15 years with 60% men, and mean stone size was 10.5±3.7 mm (range 5–18 mm). A mean of 2675±303 shocks was delivered. Shockwave-induced stone motion was observed with every stone. Accurate targeting of the stone occurred in 60%±15% of shockwaves. Conclusions US imaging during SWL revealed that 40% of shockwaves miss the stone and contribute solely to tissue injury, primarily from movement with respiration. These data support the need for a device to deliver shockwaves only when the stone is on target. US imaging provides real-time assessment of stone targeting and accuracy of shockwave delivery. PMID:22471349

  16. Accuracy of velocity and power determination by the Doppler method

    NASA Technical Reports Server (NTRS)

    Rottger, J.

    1984-01-01

    When designing a Mesosphere-Stratosphere-Troposphere (MST) radar antenna, one has to trade off between optimizing the effective aperture and optimizing the sidelobe suppression. Optimizing the aperture increases the sensitivity. Suppressing sidelobes by tapering attenuates undesirable signals which spoil the estimates of reflectivity and velocity. Generally, any sidelobe effects are equivalent to a broadening of the antenna beam. The return signal is due to a product of the antenna pattern with the varying atmospheric reflectivity structures. Thus, knowing the antenna pattern, it is in principle possible to recover the signal spectra, which, however, may be a tedious and ambiguous computational procedure. For vertically pointing main beams, the sidelobe effects are efficiently suppressed because of the aspect sensitivity. It follows that sidelobes are a minor problem for spaced antenna methods. However, they can be crucial for Doppler methods, which need off-vertical beams. If a sidelobe is pointing towards the zenith, a larger power may be received from the vertical than from off-vertical directions, but quantitative estimates of this effect are not yet known. To get an error estimate of sidelobe effects with an off-vertical main beam, a one-dimensional example is considered.

  17. Methods for evaluating the predictive accuracy of structural dynamic models

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, Jon D.

    1990-01-01

    Uncertainty of frequency response using the fuzzy set method and on-orbit response prediction using laboratory test data to refine an analytical model are emphasized with respect to large space structures. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to help facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.

  18. Accuracy of sequence alignment and fold assessment using reduced amino acid alphabets.

    PubMed

    Melo, Francisco; Marti-Renom, Marc A

    2006-06-01

    Reduced or simplified amino acid alphabets group the 20 naturally occurring amino acids into a smaller number of representative protein residues. To date, several reduced amino acid alphabets have been proposed, derived and optimized by a variety of methods. The resulting reduced amino acid alphabets have been applied to pattern recognition, generation of consensus sequences from multiple alignments, protein folding, and protein structure prediction. In this work, amino acid substitution matrices and statistical potentials were derived based on several reduced amino acid alphabets, and their performance was assessed in a large benchmark for the tasks of sequence alignment and fold assessment of protein structure models, using the standard alphabet of 20 amino acids as a reference frame. The results showed that a large reduction in the total number of residue types does not necessarily translate into a significant loss of discriminative power for sequence alignment and fold assessment. Therefore, some definitions of a few residue types are able to encode most of the relevant sequence/structure information that is present in the 20 standard amino acids. Based on these results, we suggest that the use of reduced amino acid alphabets may make it possible to increase the accuracy of current substitution matrices and statistical potentials for predicting the structure of remote homologs. PMID:16506243
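
    A reduced amino acid alphabet is simply a many-to-one recoding of the 20 residue types. A minimal sketch using one common physicochemical 5-group split; the grouping below is illustrative, not one of the specific alphabets benchmarked in the paper:

```python
# Illustrative 5-group reduction (one of many published schemes; the
# grouping below is a generic physicochemical split, not the paper's).
GROUPS = {
    "AVLIMC": "h",   # hydrophobic
    "FWYH":   "r",   # aromatic
    "STNQ":   "p",   # polar, uncharged
    "KR":     "+",   # basic
    "DE":     "-",   # acidic
}
REDUCE = {aa: sym for aas, sym in GROUPS.items() for aa in aas}
REDUCE.update({"G": "h", "P": "p"})  # glycine/proline placed arbitrarily here

def reduce_sequence(seq):
    """Re-encode a protein sequence in the reduced alphabet."""
    return "".join(REDUCE[aa] for aa in seq.upper())
```

Substitution matrices or statistical potentials over the reduced alphabet are then built by pooling counts for all residues that share a symbol, which is how a few residue types can retain most of the discriminative signal.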

  19. An accuracy assessment of realtime GNSS time series toward semi- real time seafloor geodetic observation

    NASA Astrophysics Data System (ADS)

    Osada, Y.; Ohta, Y.; Demachi, T.; Kido, M.; Fujimoto, H.; Azuma, R.; Hino, R.

    2013-12-01

    Large interplate earthquakes have repeatedly occurred in the Japan Trench. Recently, detailed crustal deformation has been revealed by the nation-wide inland GPS network (GEONET) operated by GSI. However, the region of maximum displacement for an interplate earthquake is mainly located offshore. GPS/Acoustic seafloor geodetic observation (hereafter GPS/A) is therefore important and useful for understanding the shallower part of interplate coupling between the subducting and overriding plates. We typically conduct GPS/A in a specific ocean area in repeated campaigns using a research vessel or buoy, and therefore cannot monitor the temporal variation of seafloor crustal deformation in real time. One technical issue for real-time observation is kinematic GPS analysis, because it relies on both reference and rover data. If precise kinematic GPS analysis becomes possible in the offshore region, it would be a promising method for real-time GPS/A with a USV (Unmanned Surface Vehicle) or a moored buoy. We assessed the stability, precision, and accuracy of the StarFire™ global satellite-based augmentation system. We first tested StarFire under static conditions. To assess coordinate precision and accuracy, we compared 1 Hz StarFire time series with post-processed precise point positioning (PPP) 1 Hz time series computed with the GIPSY-OASIS II processing software Ver. 6.1.2 using three different product types (ultra-rapid, rapid, and final orbits). We also used different clock-interval information (30 and 300 seconds) for the post-processed PPP processing. The standard deviation of the real-time StarFire time series is less than 30 mm (horizontal components) and 60 mm (vertical component) based on one month of continuous processing. We also assessed the noise spectra of the time series estimated by StarFire and by post-processed GIPSY PPP. We found that the noise spectrum of the StarFire time series is similar in pattern to the GIPSY-OASIS II processing result based on the JPL rapid orbit

  20. An accuracy measurement method for star trackers based on direct astronomic observation

    PubMed Central

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy remains a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker ultimately determines satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method uses real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are carried out, accounting for the precise motions of the Earth, and the error curves of directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed that can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. The measurement environment is close to in-orbit conditions and can satisfy the stringent requirements of high-accuracy star trackers. PMID:26948412

  3. Accuracy assessment of the GPS-TEC calibration constants by means of a simulation technique

    NASA Astrophysics Data System (ADS)

    Conte, Juan Federico; Azpilicueta, Francisco; Brunini, Claudio

    2011-10-01

    During the last two decades, Global Positioning System (GPS) measurements have become a very important data source for ionospheric studies. However, obtaining accurate ionospheric information from these measurements is not a direct and easy task, because it requires a careful estimation of the calibration constants affecting the GPS observations, the so-called differential code biases (DCBs). In this paper, the most common approximations used in several GPS calibration methods, e.g. the La Plata Ionospheric Model (LPIM), are applied to a set of specially computed synthetic slant Total Electron Content datasets to assess the accuracy of the DCB estimation in a global-scale scenario. These synthetic datasets were generated using a modified version of the NeQuick model and have two important features: they show realistic temporal and spatial behavior, and all a-priori DCBs are set to zero by construction. After the application of the calibration method, the deviations of the estimated DCBs from zero are therefore direct indicators of the accuracy of the method. To evaluate the effect of the solar radiation level, the analysis was performed for the years 2001 (high solar activity) and 2006 (low solar activity). To take into account seasonal changes in ionospheric behavior, the analysis was repeated for three consecutive days close to each equinox and solstice of every year. A data package comprising 24 days from approximately 200 IGS permanent stations was then processed. To avoid unwanted geomagnetic storm effects, the selected days correspond to periods of quiet geomagnetic conditions. The most important results of this work are: i) the estimated DCBs can be affected by errors of around ±8 TECu for high solar activity and ±3 TECu for low solar activity; and ii) the DCB errors present a systematic behavior depending on the modip coordinate that is more evident in the positive modip region.

  4. Assessment of the Accuracy of Pharmacy Students’ Compounded Solutions Using Vapor Pressure Osmometry

    PubMed Central

    McPherson, Timothy B.

    2013-01-01

    Objective. To assess the effectiveness of using a vapor pressure osmometer to measure the accuracy of pharmacy students’ compounding skills. Design. Students calculated the theoretical osmotic pressure (mmol/kg) of a solution as a pre-laboratory exercise, compared their calculations with actual values, and then attempted to determine the cause of any errors found. Assessment. After the introduction of the vapor pressure osmometer, the first-time pass rate for solution compounding has varied from 85% to 100%. Approximately 85% of students surveyed reported that the instrument was valuable as a teaching tool because it objectively assessed their work and provided immediate formative assessment. Conclusions. This simple technique of measuring compounding accuracy using a vapor pressure osmometer allowed students to see the importance of quality control and assessment in practice for both pharmacists and technicians. PMID:23610476
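
    The pre-laboratory calculation described above amounts to summing, over all solutes, the millimolal concentration times the number of particles each formula unit contributes. A minimal sketch assuming ideal, complete dissociation (the NaCl example is illustrative, not the course's exercise):

```python
def theoretical_osmolality(solutes):
    """Ideal osmolality (mmol/kg) of a solution: sum over solutes of
    millimolal concentration times particles per formula unit,
    assuming complete dissociation."""
    return sum(concentration * particles for concentration, particles in solutes)

# 0.9% w/v NaCl is roughly 154 mmol/kg; NaCl yields 2 particles (Na+, Cl-):
osm = theoretical_osmolality([(154, 2)])
```

A large gap between this theoretical value and the osmometer reading would point to a weighing or dilution error in the compounded solution, which is the diagnostic use described above.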

  5. Spatial distribution of soil heavy metal pollution estimated by different interpolation methods: accuracy and uncertainty analysis.

    PubMed

    Xie, Yunfeng; Chen, Tong-bin; Lei, Mei; Yang, Jun; Guo, Qing-jun; Song, Bo; Zhou, Xiao-yong

    2011-01-01

    Mapping the spatial distribution of contaminants in soils is the basis of pollution evaluation and risk control. Interpolation methods are extensively applied in the mapping process to estimate heavy metal concentrations at unsampled sites. The performances of four interpolation methods (inverse distance weighting, local polynomial, ordinary kriging and radial basis functions) were assessed and compared using the root mean square error for cross-validation. The results indicated that all interpolation methods provided a high prediction accuracy for the mean concentration of soil heavy metals. However, the classic method, based on percentages of polluted samples, gave a pollution area 23.54-41.92% larger than that estimated by the interpolation methods. The difference in contaminated-area estimation among the four methods reached 6.14%. According to the interpolation results, the spatial uncertainty of the polluted areas was mainly located in three types of region: (a) local maxima of concentration surrounded by low-concentration (clean) sites; (b) local minima of concentration surrounded by highly polluted samples; and (c) the boundaries of the contaminated areas. PMID:20970158
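
    The cross-validation comparison described above can be sketched for one of the methods, inverse distance weighting, using leave-one-out RMSE. An illustrative implementation, not the study's GIS workflow:

```python
import math

def idw(x, y, points, power=2):
    """Inverse distance weighted estimate at (x, y) from (xi, yi, value) points."""
    num = den = 0.0
    for xi, yi, value in points:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return value          # exact hit on a sample point
        w = d ** -power
        num += w * value
        den += w
    return num / den

def loo_rmse(points, power=2):
    """Leave-one-out cross-validation RMSE for the IDW interpolator."""
    errors = []
    for i, (x, y, value) in enumerate(points):
        rest = points[:i] + points[i + 1:]
        errors.append(idw(x, y, rest, power) - value)
    return math.sqrt(sum(e * e for e in errors) / len(errors))

samples = [(0, 0, 5.0), (1, 0, 5.0), (0, 1, 5.0), (1, 1, 5.0)]
```

Running the same leave-one-out loop with each interpolator and comparing the resulting RMSE values is the essence of the cross-validation ranking used in the study.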

  6. A method for improved accuracy in three dimensions for determining wheel/rail contact points

    NASA Astrophysics Data System (ADS)

    Yang, Xinwen; Gu, Shaojie; Zhou, Shunhua; Zhou, Yu; Lian, Songliang

    2015-11-01

    Searching for the contact points between wheels and rails is important because these points are where the contact forces are exerted. In order to obtain accurate contact points and an in-depth description of wheel/rail contact behaviour on a curved track or in a turnout, a method with improved accuracy in three dimensions is proposed to determine the contact points and contact patches between wheel and rail, taking into account the effect of the yaw angle and the roll angle on the motion of the wheel set. The proposed method, with no need for curve fitting of the wheel and rail profiles, can accurately, directly, and comprehensively determine the contact interface distances between the wheel and the rail. A range iteration algorithm is used to improve computational efficiency and reduce the calculation required. The method is applied to the analysis of contact between CHINA (CHN) 75 kg/m rails and wheel sets with the wear-type tread used on China's freight cars. The results of the proposed method are consistent with those of Kalker's program CONTACT, with a maximum deviation in the wheel/rail contact patch area of approximately 5% between the two methods. The proposed method can also be used to investigate static wheel/rail contact. Some wheel/rail contact points and contact patch distributions are discussed and assessed, including both non-worn and worn wheel and rail profiles.

  7. Assessing the accuracy of the van der Waals density functionals for rare-gas and small molecular systems

    NASA Astrophysics Data System (ADS)

    Callsen, Martin; Hamada, Ikutaro

    2015-05-01

    The precise description of chemical bonds of different natures is a prerequisite for an accurate electronic structure method. The van der Waals density functional is a promising approach that meets this requirement. Nevertheless, its accuracy should be assessed for a variety of materials to test the robustness of the method. We present benchmark calculations for weakly interacting molecular complexes and rare-gas systems, as well as covalently bound molecular systems, in order to assess the accuracy and applicability of rev-vdW-DF2, a recently proposed variant [I. Hamada, Phys. Rev. B 89, 121103 (2014), 10.1103/PhysRevB.89.121103] of the van der Waals density functional. It is shown that, although the calculated atomization energies for small molecules are less accurate, rev-vdW-DF2 describes the interaction energy curves for the weakly interacting molecules and rare-gas complexes, as well as the bond lengths of diatomic molecules, reasonably well.

  8. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

    We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media

  9. Monte Carlo evaluation of accuracy and noise properties of two scatter correction methods

    SciTech Connect

    Narita, Y.; Eberl, S.; Nakamura, T.

    1996-12-31

    Two independent scatter correction techniques, transmission dependent convolution subtraction (TDCS) and the triple-energy window (TEW) method, were evaluated in terms of quantitative accuracy and noise properties using Monte Carlo simulation (EGS4). Emission projections (primary, scatter and scatter plus primary) were simulated for 99mTc and 201Tl for numerical chest phantoms. Data were reconstructed with an ordered-subset ML-EM algorithm including attenuation correction using the transmission data. In the chest phantom simulation, TDCS provided better S/N than TEW, and better accuracy, i.e., 1.0% vs. -7.2% in the myocardium and -3.7% vs. -30.1% in the ventricular chamber for 99mTc with TDCS and TEW, respectively. For 201Tl, TDCS provided good visual and quantitative agreement with the simulated true primary image without noticeably increasing the noise after scatter correction. Overall, TDCS proved to be more accurate and less noisy than TEW, facilitating quantitative assessment of physiological functions with SPECT.
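
Of the two techniques compared above, TEW is compact enough to sketch: scatter in the main photopeak window is estimated by trapezoidal interpolation between two narrow flanking windows. The version below assumes equal sub-window widths on both sides, and all counts and window widths are illustrative:

```python
def tew_scatter(c_lower, c_upper, w_sub, w_main):
    """Triple-energy-window (TEW) scatter estimate for the main photopeak
    window: trapezoid spanned by the two narrow flanking-window count rates."""
    return (c_lower / w_sub + c_upper / w_sub) * w_main / 2.0

def tew_primary(c_main, c_lower, c_upper, w_sub, w_main):
    """Scatter-corrected (primary) counts; clipped at zero for noisy pixels."""
    return max(c_main - tew_scatter(c_lower, c_upper, w_sub, w_main), 0.0)

# Illustrative 99mTc pixel: 20% main window (28 keV wide), 3 keV sub-windows
primary = tew_primary(c_main=1000.0, c_lower=30.0, c_upper=6.0,
                      w_sub=3.0, w_main=28.0)
```

The clipping in `tew_primary` hints at the noise problem the abstract reports: the two narrow windows collect few counts, so the subtracted scatter estimate is itself noisy.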

  10. Evaluation of Landsat-4 orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Feiertag, R.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1993-01-01

    The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite (TDRS) System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the May 18-24, 1992, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. During this period, there were two separate orbit-adjust maneuvers on one of the TDRSS spacecraft (TDRS-East) and one small orbit-adjust maneuver for Landsat-4. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 30 meters after the filter had reached steady state.
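
The batch-versus-sequential contrast studied above can be illustrated on the simplest possible state, a constant estimated from noisy measurements; this toy scalar filter is in no way RTOD/E or GTDS, merely the underlying idea that, once the filter reaches steady state, the two estimates agree:

```python
def batch_lsq(zs):
    """Batch least-squares estimate of a constant state: the sample mean."""
    return sum(zs) / len(zs)

def sequential_filter(zs, x0=0.0, p0=1e6, r=1.0):
    """Sequential (Kalman-style) estimate of the same constant state:
    scalar filter with measurement variance r and no process noise."""
    x, p = x0, p0
    for z in zs:
        k = p / (p + r)        # gain
        x += k * (z - x)       # state update
        p *= (1.0 - k)         # variance update
    return x

zs = [10.2, 9.8, 10.1, 9.9, 10.0]   # illustrative noisy measurements
```

With a diffuse prior (`p0` large) the sequential estimate converges to the batch solution, mirroring the sub-30-meter agreement between RTOD/E and GTDS reported once the filter reached steady state.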

  11. Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.

    1991-01-01

    The Flight Dynamics Division (FDD) at NASA-Goddard commissioned a study to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS-based personal computer (PC). An overview of RTOD/E capabilities is presented, along with the results of a study comparing the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and GTDS was used to perform the batch least-squares orbit determination. The estimated ERBS ephemerides were obtained for the Aug. 16 to 22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistency of results obtained by the batch and sequential methods. Comparisons were made between the forward-filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.

  12. Comparing the accuracy of several field methods for measuring gully erosion

    NASA Astrophysics Data System (ADS)

    Castillo, C.; Pérez, R.; James, M. R.; Quinton, J. N.; Taguas, E. V.; Gómez, J. A.

    2012-04-01

    Most field erosion studies in agricultural areas provide little information on the associated errors. The aim of this paper is to evaluate the accuracy of different methods (LiDAR, photo-reconstruction, total station, laser profilometer and pole) for estimating gully erosion at a reach scale, and the errors to be expected when 2D methods are used at the gully scale. Field measurements of a 7.1 m long reach and of nine gullies (hundreds of metres long) were carried out near Cordoba, Spain. At the reach scale, the cross-sectional area error EA and reach volume error EV were calculated. The influence of sinuosity and measurement distance (D) on the gully length error (EL) was investigated. Multiple configurations of gully cross-sectional area were simulated to assess volume error variability (σEv) as a function of the measurement distance factor (MDF) and to obtain an EV confidence interval for a given probability. 3D photo-reconstruction and the total station produced EA values lower than 4%, whereas the remaining 2D methods produced values greater than 10%. For volume estimation, the 3D methods delivered similar values, but the 2D methods generated large negative EV values (

  13. Assessing Uncertainties in Accuracy of Landuse Classification Using Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Hsiao, L.-H.; Cheng, K.-S.

    2013-05-01

    Multispectral remote sensing images are widely used for landuse/landcover (LULC) classification. The performance of such classification practices is normally evaluated through the confusion matrix, which summarizes the producer's and user's accuracies and the overall accuracy. However, the confusion matrix is based on the classification results of a set of multi-class training data. As a result, the classification accuracies are heavily dependent on the representativeness of the training data set, and it is imperative for practitioners to assess the uncertainties of LULC classification in order to fully understand the classification results. In addition, the Gaussian-based maximum likelihood classifier (GMLC) is widely applied in LULC classification practice. The GMLC assumes that the classification features jointly form a multivariate normal distribution, whereas, in reality, many features of individual landcover classes have been found to be non-Gaussian. Direct application of the GMLC will certainly affect the classification results. In a pilot study conducted in Taipei and its vicinity, we tackled these two problems by first transforming the original training data set to a corresponding data set that forms a multivariate normal distribution before conducting LULC classification using the GMLC. We then applied the bootstrap resampling technique to generate a large set of multi-class resampled training data from the multivariate normal training data set. LULC classification was then implemented for each resampled training data set using the GMLC. Finally, the uncertainties of the LULC classification accuracies were assessed by evaluating the means and standard deviations of the producer's and user's accuracies of individual LULC classes, derived from the resulting set of confusion matrices. Results of this study demonstrate that Gaussian transformation of the original training data achieved better classification accuracies and the bootstrap resampling technique is
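
A minimal sketch of the bootstrap step follows. For brevity it resamples the reference pairs themselves to get the spread of overall accuracy, a simplification of the paper's scheme (which resamples the training data and reclassifies each replicate); the class labels and counts are invented:

```python
import random

def overall_accuracy(truth, pred):
    """Fraction of correctly labelled samples (diagonal of the confusion matrix)."""
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

def bootstrap_accuracy(truth, pred, n_boot=1000, seed=42):
    """Bootstrap the validation pairs to get the mean and standard deviation
    of overall accuracy across resampled replicates."""
    rng = random.Random(seed)
    n = len(truth)
    accs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]      # resample with replacement
        accs.append(overall_accuracy([truth[i] for i in idx],
                                     [pred[i] for i in idx]))
    m = sum(accs) / n_boot
    s = (sum((a - m) ** 2 for a in accs) / (n_boot - 1)) ** 0.5
    return m, s

# Invented reference sample: 100 pixels, 7 of them misclassified
truth = ["urban"] * 40 + ["water"] * 30 + ["forest"] * 30
pred = (["urban"] * 35 + ["water"] * 5 +
        ["water"] * 28 + ["forest"] * 2 + ["forest"] * 30)
m, s = bootstrap_accuracy(truth, pred, n_boot=2000)
```

The same loop applied per class to producer's and user's accuracies yields the means and standard deviations the abstract describes.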

  14. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

    Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Owing to the low peak intensities, the accuracy and precision of such analyses rely critically on background measurements and on the accuracy of any relevant peak-interference corrections. A linear regression between two points selected at appropriate off-peak positions is the classical approach to background characterization in microprobe analysis. However, this approach does not allow an accurate assessment of background curvature (usually exponential). Moreover, background interferences, if present, can dramatically affect the results if underestimated or ignored. Acquiring a quantitative WDS scan over the spectral region of interest remains a valuable option for determining the background intensity and curvature from a regression fitted to background portions of the scan, but this technique retains an element of subjectivity, as the analyst has to select the areas of the scan that appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that allows acquiring up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. Overall efficiency is improved because no systematic WDS scan needs to be acquired to check for possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. The idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite
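
A toy version of the idea, fitting an exponential background through many off-peak measurements and statistically excluding one that sits on an interference, might look as follows; the spectrometer positions, background shape, and the 2-sigma cut are all illustrative, not the MPB implementation itself:

```python
import math

def linfit(pts):
    """Ordinary least squares fit y = a + b*x."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def fit_exp_background(xs, ys, n_sigma=2.0):
    """Fit an exponential background y = A*exp(B*x) by least squares in log
    space, then drop points whose residual exceeds n_sigma standard
    deviations (e.g. an off-peak position hit by an interference) and refit."""
    pts = [(x, math.log(y)) for x, y in zip(xs, ys)]
    a, b = linfit(pts)
    res = [y - (a + b * x) for x, y in pts]
    s = (sum(r * r for r in res) / len(res)) ** 0.5
    kept = [p for p, r in zip(pts, res) if abs(r) <= n_sigma * s] or pts
    a, b = linfit(kept)
    return math.exp(a), b

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]          # off-peak positions (arbitrary units)
ys = [math.exp(2.0 - 0.3 * x) for x in xs]    # smooth exponential background
ys[3] *= 3.0                                  # one position hit by an interference
A, B = fit_exp_background(xs, ys)
```

After the contaminated point is excluded, the refit recovers the underlying background exactly, which is the objectivity gain the abstract claims for statistical exclusion over manual selection.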

  15. Accuracy Evaluation of a Mobile Mapping System with Advanced Statistical Methods

    NASA Astrophysics Data System (ADS)

    Toschi, I.; Rodríguez-Gonzálvez, P.; Remondino, F.; Minto, S.; Orlandini, S.; Fuller, A.

    2015-02-01

    This paper discusses a methodology for evaluating the precision and accuracy of a commercial Mobile Mapping System (MMS) with advanced statistical methods. So far, the metric potential of this emerging mapping technology has been studied in only a few papers, which generally assume that errors follow a normal distribution. In fact, this hypothesis should be carefully verified in advance, in order to test how well classical Gaussian statistics can adapt to datasets that are usually affected by asymmetrical gross errors. The workflow adopted in this study relies on a Gaussian assessment, followed by an outlier-filtering process. Finally, non-parametric statistical models are applied in order to achieve a robust estimation of the error dispersion. Among the different MMSs available on the market, the latest solution provided by RIEGL is tested here, i.e. the VMX-450 Mobile Laser Scanning System. The test area is the historic city centre of Trento (Italy), selected in order to assess the system's performance in a challenging historic urban scenario. Reference measures are derived from photogrammetric and Terrestrial Laser Scanning (TLS) surveys. All datasets show a large lack of symmetry, leading to the conclusion that the standard normal parameters are not adequate for assessing this type of data. The use of non-normal statistics thus gives a more appropriate description of the data and yields results that meet the quoted a priori errors.
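
The robust, non-parametric dispersion estimate favoured above can be sketched with the median and the normalized median absolute deviation (NMAD), which the accuracy-assessment literature commonly uses in place of mean and standard deviation; the residuals below are invented to mimic asymmetrical gross errors:

```python
import statistics as st

def nmad(errors):
    """Normalized median absolute deviation: a robust, non-parametric
    counterpart of the standard deviation (the two coincide for Gaussian
    data, hence the 1.4826 scaling factor)."""
    med = st.median(errors)
    return 1.4826 * st.median([abs(e - med) for e in errors])

# Invented check-point residuals (m): mostly small, plus two one-sided gross errors
errors = [0.01, -0.02, 0.00, 0.03, -0.01, 2.5, 3.1]
```

On this sample the classical standard deviation is inflated by the two outliers to well over a metre, while the NMAD stays at the few-centimetre level of the bulk of the data, which is exactly why non-normal statistics describe such asymmetric error sets better.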

  16. An interpolation method for stream habitat assessments

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2015-01-01

    Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs, as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographic information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5.0% interpolations were more accurate than the 2.5% interpolations for both depth and substrate, achieving accuracies of up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5.0% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers, as well as functional maps to aid the habitat-based management of aquatic species.

  17. Accuracy and Variability in Response Methods Used to Determine Object Location Knowledge in the Blind.

    ERIC Educational Resources Information Center

    Haber, Lyn; Haber, Ralph N.

    1992-01-01

    This study evaluated the accuracy of 9 pointing methods used by 20 blind adults. Substantial differences were found, with the most accurate methods involving a body part or extension. The verbal "clockface" was the least accurate and most variable method. The long cane is recommended as a pointing method for adults in applied and research…

  18. Gender Differences in Structured Risk Assessment: Comparing the Accuracy of Five Instruments

    ERIC Educational Resources Information Center

    Coid, Jeremy; Yang, Min; Ullrich, Simone; Zhang, Tianqiang; Sizmur, Steve; Roberts, Colin; Farrington, David P.; Rogers, Robert D.

    2009-01-01

    Structured risk assessment should guide clinical risk management, but it is uncertain which instrument has the highest predictive accuracy among men and women. In the present study, the authors compared the Psychopathy Checklist-Revised (PCL-R; R. D. Hare, 1991, 2003); the Historical, Clinical, Risk Management-20 (HCR-20; C. D. Webster, K. S.…

  19. Gender Differences in the Self-Assessment of Accuracy on Cognitive Tasks.

    ERIC Educational Resources Information Center

    Pallier, Gerry

    2003-01-01

    Examined the effects of gender on the self-assessment of accuracy of visual perceptual judgments. College students completed a test of general knowledge and a visual perceptual task. When results were analyzed by sex, men were more confident than women. Next, people age 17-80 completed tests of cognitive ability. The tendency for men to express…

  20. Assessing the Accuracy of MODIS-NDVI Derived Land-Cover Across the Great Lakes Basin

    EPA Science Inventory

    This research describes the accuracy assessment process for a land-cover dataset developed for the Great Lakes Basin (GLB). This land-cover dataset was developed from the 2007 MODIS Normalized Difference Vegetation Index (NDVI) 16-day composite (MOD13Q) 250 m time-series data. Tr...

  1. Classification Consistency and Accuracy for Complex Assessments Using Item Response Theory

    ERIC Educational Resources Information Center

    Lee, Won-Chan

    2010-01-01

    In this article, procedures are described for estimating single-administration classification consistency and accuracy indices for complex assessments using item response theory (IRT). This IRT approach was applied to real test data comprising dichotomous and polytomous items. Several different IRT model combinations were considered. Comparisons…

  2. A PIXEL COMPOSITION-BASED REFERENCE DATA SET FOR THEMATIC ACCURACY ASSESSMENT

    EPA Science Inventory

    Developing reference data sets for accuracy assessment of land-cover classifications derived from coarse spatial resolution sensors such as MODIS can be difficult due to the large resolution differences between the image data and available reference data sources. Ideally, the spa...

  3. APPLICATION OF A "VIRTUAL FIELD REFERENCE DATABASE" TO ASSESS LAND-COVER MAP ACCURACIES

    EPA Science Inventory

    An accuracy assessment was performed for the Neuse River Basin, NC land-cover/use (LCLU) mapping results using a "Virtual Field Reference Database (VFRDB)". The VFRDB was developed using field measurement and digital imagery (camera) data collected at 1,409 sites over a perio...

  4. Using Attribute Sampling to Assess the Accuracy of a Library Circulation System.

    ERIC Educational Resources Information Center

    Kiger, Jack E.; Wise, Kenneth

    1995-01-01

    Discusses how to use attribute sampling to assess the accuracy of a library circulation system. Describes the nature of sampling, sampling risk, and nonsampling error. Presents nine steps for using attribute sampling to determine the maximum percentage of incorrect records in a circulation system. (AEF)
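
Attribute sampling of the kind described above can be sketched as an exact one-sided binomial bound on the population error rate; the sample size, error count, and confidence level below are illustrative, not figures from the article:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def upper_error_limit(n, k, confidence=0.95, step=1e-4):
    """One-sided exact (Clopper-Pearson style) upper bound on the population
    error rate after observing k incorrect records in a sample of n: the
    smallest rate p at which seeing k or fewer errors becomes implausible."""
    alpha = 1.0 - confidence
    p = 0.0
    while p < 1.0 and binom_cdf(k, n, p) > alpha:
        p += step
    return p

# Say 200 circulation records are sampled and none is found incorrect:
limit = upper_error_limit(200, 0)
```

With zero errors in 200 records, the auditor can assert at 95% confidence that at most roughly 1.5% of the circulation records are incorrect; larger samples or fewer errors tighten the bound.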

  5. The Word Writing CAFE: Assessing Student Writing for Complexity, Accuracy, and Fluency

    ERIC Educational Resources Information Center

    Leal, Dorothy J.

    2005-01-01

    The Word Writing CAFE is a new assessment tool designed for teachers to evaluate objectively students' word-writing ability for fluency, accuracy, and complexity. It is designed to be given to the whole class at one time. This article describes the development of the CAFE and provides directions for administering and scoring it. The author also…

  6. Spatial accuracy of a simplified disaggregation method for traffic emissions applied in seven mid-sized Chilean cities

    NASA Astrophysics Data System (ADS)

    Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans

    The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map of the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found among the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, good accuracy of the spatial distribution of emissions was achieved, with correlation values > 0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, where it results in correlation values < 0.5. Although top-down disaggregation of traffic emissions generally exhibits low accuracy, the accuracy is significantly higher in compact cities and might be further improved by applying a correction factor for the city center. The method can therefore be used by local environmental authorities in cities with limited resources and little knowledge of the pollution situation to obtain an overview of the spatial distribution of the emissions generated by traffic activity.

  7. The accuracy of histological assessments of dental development and age at death

    PubMed Central

    Smith, T M; Reid, D J; Sirianni, J E

    2006-01-01

    Histological analyses of dental development have been conducted for several decades, despite few studies assessing the accuracy of such methods. Using known-period incremental features, the crown formation time and age at death of five pig-tailed macaques (Macaca nemestrina) were estimated with standard histological techniques and compared with known ages. Estimates of age at death ranged from 8.6% underestimations to 15.0% overestimations, with an average 3.5% overestimate and a 7.2% average absolute difference. Several sources of error were identified relating to preparation quality and section obliquity. These results demonstrate that histological analyses of dental development involving counts and measurements of short- and long-period incremental features may yield accurate estimates, particularly in well-prepared material. Values from oblique sections (or most naturally fractured teeth) should be regarded with caution, as obliquity leads to inflated cuspal enamel formation times and underestimated imbricational formation times. Additionally, Shellis's formula for estimating extension rate and crown formation time was tested; it significantly overestimated crown formation time owing to an underestimated extension rate. It is suggested that Shellis's method should not be applied to teeth with short, rapid periods of development, and further study is necessary to validate this application in other material. PMID:16420385

  8. Accuracy assessment of satellite altimetry over central East Antarctica by kinematic GNSS and crossover analysis

    NASA Astrophysics Data System (ADS)

    Schröder, Ludwig; Richter, Andreas; Fedorov, Denis; Knöfel, Christoph; Ewert, Heiko; Dietrich, Reinhard; Matveev, Aleksey Yu.; Scheinert, Mirko; Lukin, Valery

    2014-05-01

    Satellite altimetry is a unique technique for observing the contribution of the Antarctic ice sheet to global sea-level change. To fulfill the high quality requirements for its application, the respective products need to be validated against independent data such as ground-based measurements. Kinematic GNSS provides a powerful method of acquiring precise height information along the track of a vehicle. Within a collaboration of TU Dresden and Russian partners during the Russian Antarctic Expeditions in the seasons from 2001 to 2013, we recorded several such profiles in the region of subglacial Lake Vostok, East Antarctica. After 2006 these datasets also include observations along seven continental traverses, each about 1600 km long, between the Antarctic coast and the Russian research station Vostok (78° 28' S, 106° 50' E). After discussing some special issues concerning the processing of kinematic GNSS profiles under the very particular conditions of the interior of the Antarctic ice sheet, we will show their application to the validation of NASA's laser altimeter satellite mission ICESat and of ESA's ice mission CryoSat-2. Analysing the height differences at crossover points, we gain clear insights into the height regime at subglacial Lake Vostok. Thus, these profiles, as well as the remarkably flat lake surface itself, can be used to investigate the accuracy and possible error influences of these missions. We will show how the transmit-pulse reference selection correction (Gaussian vs. centroid, G-C) released in January 2013 helped to further improve the release R633 ICESat data, and we discuss the height offsets and other effects of the CryoSat-2 radar data. In conclusion, we show that only a combination of laser and radar altimetry can provide both high precision and good spatial coverage. Independent validation with ground-based observations is crucial for a thorough accuracy assessment.

  9. Potential of accuracy profile for method validation in inductively coupled plasma spectrochemistry

    NASA Astrophysics Data System (ADS)

    Mermet, J. M.; Granier, G.

    2012-10-01

    Method validation is usually performed over a range of concentrations for which analytical criteria must be verified. One important criterion in quantitative analysis is accuracy, i.e. the contribution of both trueness and precision. The study of accuracy over this range is called an accuracy profile and provides experimental tolerance intervals. Comparison with acceptability limits fixed by the end user defines a validity domain. This work describes the computation involved in the building of the tolerance intervals, particularly for the intermediate precision with within-laboratory experiments and for the reproducibility with interlaboratory studies. Computation is based on ISO 5725-4 and on previously published work. Moreover, the bias uncertainty is also computed to verify the bias contribution to accuracy. The various types of accuracy profile behavior are exemplified with results obtained by using ICP-MS and ICP-AES. This procedure allows the analyst to define unambiguously a validity domain for a given accuracy. However, because the experiments are time-consuming, the accuracy profile method is mainly dedicated to method validation.
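
A simplified accuracy-profile computation for a single concentration level might look as follows; the replicate recoveries and the ±5 % acceptability limits are invented, and a normal quantile stands in for the Student-t quantile that a full ISO 5725-style computation would use:

```python
from statistics import NormalDist, mean, stdev

def accuracy_profile_level(measured, true_value, beta=0.90):
    """Relative bias (%) and a simple beta-expectation tolerance interval at
    one concentration level. A Student-t quantile would normally replace the
    normal quantile used here for brevity."""
    rel = [100.0 * (m - true_value) / true_value for m in measured]
    bias, s, n = mean(rel), stdev(rel), len(rel)
    k = NormalDist().inv_cdf((1.0 + beta) / 2.0) * (1.0 + 1.0 / n) ** 0.5
    return bias, (bias - k * s, bias + k * s)

# Replicate recoveries at one level (true concentration = 100, illustrative)
runs = [98.9, 101.2, 100.4, 99.5, 100.8, 99.0]
bias, (lo, hi) = accuracy_profile_level(runs, 100.0)
accepted = -5.0 < lo and hi < 5.0   # compare against ±5 % acceptability limits
```

Repeating this at each concentration level and checking whether every tolerance interval stays inside the acceptability limits traces out the validity domain the abstract describes.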

  10. Statistical downscaling of precipitation using local regression and high accuracy surface modeling method

    NASA Astrophysics Data System (ADS)

    Zhao, Na; Yue, Tianxiang; Zhou, Xun; Zhao, Mingwei; Liu, Yu; Du, Zhengping; Zhang, Lili

    2016-03-01

    Downscaling precipitation is required in local-scale climate impact studies. In this paper, a statistical downscaling scheme is presented that combines a geographically weighted regression (GWR) model with a recently developed method, the high accuracy surface modeling method (HASM). The proposed method was compared with another downscaling method using the Coupled Model Intercomparison Project Phase 5 (CMIP5) database and ground-based data from 732 stations across China for the period 1976-2005. The residual produced by GWR was modified by comparing different interpolators, including HASM, kriging, the inverse distance weighted method (IDW), and splines. Spatial downscaling from 1° to 1-km grids for the period 1976-2005 and for future scenarios was achieved using the proposed downscaling method. The prediction accuracy was assessed for two separate validation areas, China as a whole and Jiangxi Province, on both annual and seasonal scales, using the root mean square error (RMSE), mean relative error (MRE), and mean absolute error (MAE). The results indicate that the model developed in this study outperforms the method that builds a transfer function using the gauge values. There is a large improvement in the results when residual correction with meteorological station observations is used. In comparison with the three other classical interpolators, HASM performs better in modifying the residual produced by the local regression method. The success of the developed technique lies in the effective use of the datasets and in the modification of the residual using HASM. The results for the future climate scenarios show that precipitation exhibits an overall increasing trend from T1 (2011-2040) to T2 (2041-2070) and from T2 to T3 (2071-2100) in the RCP2.6, RCP4.5, and RCP8.5 emission scenarios. The most significant increase occurs in RCP8.5 from T2 to T3, while the smallest increase is found in RCP2.6 from T2 to T3; precipitation increases by 47.11 and 2.12 mm, respectively.

  11. Subglacial bedform orientation, one-dimensional size, and directional shape measurement method accuracy

    NASA Astrophysics Data System (ADS)

    Jorge, Marco G.; Brennand, Tracy A.

    2016-04-01

    This study is an assessment of previously reported automated methods, and of a new method, for measuring longitudinal subglacial bedform (LSB) morphometry. It evaluates the adequacy (accuracy and precision) of orientation, length, and longitudinal asymmetry data derived from the longest straight line (LSL) enclosed by the LSB's footprint, from the longitudinal axis of the footprint's minimum bounding rectangle (RLA), and from the longitudinal axis (LA) of the footprint's standard deviational ellipse (SDE) (new method), as well as the adequacy of length based on an ellipse fitted to the area and perimeter of the footprint (elliptical length). Tests are based on 100 manually mapped drumlins and mega-scale glacial lineations representing the size and shape range of LSBs in the Puget Lowland drumlin field, WA, USA. Data from manually drawn LAs are used as the reference for method evaluation. With the exception of elliptical length, errors decrease rapidly with increasing footprint elongation (decreasing potential angular divergence between LAs). For LSBs with elongation <5 and excluding the 5% largest errors (outliers): 1) the LSL, RLA, and SDE methods had very small mean absolute errors (MAE) in all measures (e.g., MAE <5° in orientation and <5 m in length) and can be confidently used to characterize the central tendency of LSB samples; 2) when analyzing data spatially at large cartographic scales, the LSL method should be avoided for orientation (36% of the errors were larger than 5°); 3) elliptical length was the least accurate of all the methods (MAE of 56.1 m, with 15% of the errors larger than 5%) and its use should be discontinued; 4) the relative adequacy of the LSL and RLA depends on footprint shape, whereas the SDE computed from the footprint's structural vertices is relatively shape-independent and is the preferred method. This study is also significant for negative-relief bedforms and for fluvial and aeolian bedforms.
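
The SDE longitudinal-axis orientation preferred above reduces to the principal axis of the footprint's vertex coordinates, which has a closed form for the 2x2 covariance matrix; the sketch below uses a synthetic, drumlin-like set of vertices, not data from the study:

```python
import math

def sde_orientation(points):
    """Orientation (degrees in [0, 180)) of the standard deviational
    ellipse's long axis: the principal axis of the centred coordinates,
    from tan(2*theta) = 2*sxy / (sxx - syy)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    return math.degrees(0.5 * math.atan2(2.0 * sxy, sxx - syy)) % 180.0

# Synthetic elongated footprint aligned at 30 degrees, with small jitter
a = math.radians(30.0)
pts = [(t * math.cos(a) + 0.1 * math.sin(7.0 * t),
        t * math.sin(a) + 0.1 * math.cos(5.0 * t)) for t in range(50)]
```

Because every vertex contributes to the covariance, the recovered axis barely moves when the outline is jittered, which is the shape-independence that makes the SDE method preferable to the longest straight line.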

  12. An automated method for the evaluation of the pointing accuracy of sun-tracking devices

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dietmar J.; Rieder, Harald E.; Pötzi, Werner; Freislich, Heinrich; Strutzmann, Heinz

    2016-04-01

The accuracy of measurements of solar radiation (direct and diffuse radiation) depends significantly on the accuracy of the operational sun-tracking device. Thus rigid targets for instrument performance and operation are specified for international monitoring networks, such as the Baseline Surface Radiation Network (BSRN) operating under the auspices of the World Climate Research Program (WCRP). Sun-tracking devices fulfilling these accuracy targets are available from various instrument manufacturers; however, none of the commercially available systems comprises a secondary accuracy control system that would allow platform operators to independently validate the pointing accuracy of sun-tracking sensors during operation. Here we present KSO-STREAMS (KSO-SunTRackEr Accuracy Monitoring System), a fully automated, system-independent and cost-effective method for evaluating the pointing accuracy of sun-tracking devices. We detail the monitoring system setup, its design and specifications, and results from its application to the sun-tracking system operated at the Austrian RADiation network (ARAD) site Kanzelhöhe Observatory (KSO). Results from KSO-STREAMS (for mid-March to mid-June 2015) show that the tracking accuracy of the device operated at KSO lies well within BSRN specifications (i.e., 0.1° accuracy). We contrast results during clear-sky and partly cloudy conditions, documenting sun-tracking performance at manufacturer-specified accuracies for active tracking (0.02°), and highlight accuracies achieved during passive tracking, i.e., periods with less than 300 W m⁻² direct radiation. Furthermore, we detail limitations to tracking surveillance during overcast conditions and periods of partial solar limb coverage by clouds.

  13. Accuracy Assessment and Correction of Vaisala RS92 Radiosonde Water Vapor Measurements

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Miloshevich, Larry M.; Vomel, Holger; Leblanc, Thierry

    2008-01-01

    Relative humidity (RH) measurements from Vaisala RS92 radiosondes are widely used in both research and operational applications, although the measurement accuracy is not well characterized as a function of its known dependences on height, RH, and time of day (or solar altitude angle). This study characterizes RS92 mean bias error as a function of its dependences by comparing simultaneous measurements from RS92 radiosondes and from three reference instruments of known accuracy. The cryogenic frostpoint hygrometer (CFH) gives the RS92 accuracy above the 700 mb level; the ARM microwave radiometer gives the RS92 accuracy in the lower troposphere; and the ARM SurTHref system gives the RS92 accuracy at the surface using 6 RH probes with NIST-traceable calibrations. These RS92 assessments are combined using the principle of Consensus Referencing to yield a detailed estimate of RS92 accuracy from the surface to the lowermost stratosphere. An empirical bias correction is derived to remove the mean bias error, yielding corrected RS92 measurements whose mean accuracy is estimated to be +/-3% of the measured RH value for nighttime soundings and +/-4% for daytime soundings, plus an RH offset uncertainty of +/-0.5%RH that is significant for dry conditions. The accuracy of individual RS92 soundings is further characterized by the 1-sigma "production variability," estimated to be +/-1.5% of the measured RH value. The daytime bias correction should not be applied to cloudy daytime soundings, because clouds affect the solar radiation error in a complicated and uncharacterized way.

  14. Accuracy assessment of the axial images obtained from cone beam computed tomography

    PubMed Central

    Panzarella, FK; Junqueira, JLC; Oliveira, LB; de Araújo, NS; Costa, C

    2011-01-01

Objective The aim of this study was to evaluate the accuracy of linear measurements assessed from axial tomograms and the influence of different protocols in two cone beam CT (CBCT) units. Methods A cylindrical Nylon® object (Day Brazil, Sao Paulo, Brazil) with radiopaque markers was radiographically examined applying different protocols from NewTom 3GTM (Quantitative Radiology s.r.l, Verona, Veneto, Italy) and i-CATTM (Imaging Sciences International, Hatfield, PA) units. Horizontal (A–B) and vertical (C–D) distances were assessed from axial tomograms and measured using a digital calliper that provided the gold standard for actual values. Results There were differences among the acquisition protocols within each CBCT unit. For all analysed protocols from i-CATTM and NewTom 3GTM, both A–B and C–D distances presented underestimated values. Measurements of the axial images obtained from NewTom 3GTM (6 inch 0.16 mm and 9 inch 0.25 mm) were similar to the ones obtained from i-CATTM (13 cm 20 s 0.3 mm, 13 cm 20 s 0.4 mm and 13 cm 40 s 0.25 mm). Conclusion The use of different protocols in CBCT machines influences linear measurements assessed from axial images. Linear distances were underestimated with both units. Our findings suggest that the best protocol for the i-CATTM is 13 cm 20 s 0.3 mm and, for the NewTom 3GTM, the use of 6 inch or 9 inch is recommended. PMID:21831977

  15. Limb volume measurements: comparison of accuracy and decisive parameters of the most used present methods.

    PubMed

    Chromy, Adam; Zalud, Ludek; Dobsak, Petr; Suskevic, Igor; Mrkvicova, Veronika

    2015-01-01

Limb volume measurements are used for evaluating growth of muscle mass and the effectiveness of strength training. Beside sport sciences, they are used e.g. for detection of oedemas, lymphedemas or carcinomas, or for examinations of muscle atrophy. There are several commonly used methods, but a clear comparison showing their advantages and limits is lacking, and the accuracy of each method has only been roughly estimated. The aim of this paper is to determine and experimentally verify their accuracies and compare them with each other. The Water Displacement method (WD), three methods based on circumferential measures (Frustum Sign Model (FSM), Disc Model (DM) and Partial Frustum Model (PFM)) and two 3D-scan-based methods, Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), were compared. Precise reference cylinders and limbs of two human subjects were measured 10 times by each method. Person dependency of the methods was also tested by having 3 different people measure the same object 10 times each. Accuracies: WD 0.3 %; FSM 2-8 % depending on the person; DM and PFM 1-8 %; MRI 2 % (hand) or 8 % (finger); CT 0.5 % (hand) or 2 % (finger). Times: FSM 1 min; CT 7 min; WD, DM and PFM 15 min; MRI 19 min. WD was found to be the best method for most uses, with the best accuracy. CT offers almost the same accuracy and, like MRI, allows measurements of specific regions (e.g. particular muscles); MRI's accuracy is worse, but it is not harmful. The Frustum Sign Model is usable for very fast estimation of limb volume, but with lower accuracy; the Disc Model and Partial Frustum Model are useful in cases where Water Displacement cannot be used. PMID:26618096
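The circumference-based models compared above reduce to simple solids of revolution. Below is a sketch of one reading of the Disc and Partial Frustum models (segment height h, circumferences taken at regular intervals); the formulas are standard cylinder and truncated-cone geometry, not the authors' code:

```python
import math

def disc_model_volume(circumferences, h):
    """Disc model: each segment is a cylinder of height h whose measured
    circumference C implies radius C/(2*pi), so V = C^2 * h / (4*pi)."""
    return sum(c * c * h / (4.0 * math.pi) for c in circumferences)

def frustum_model_volume(circumferences, h):
    """Partial frustum model: each pair of adjacent circumferences bounds
    a truncated cone; V = h * (C1^2 + C1*C2 + C2^2) / (12*pi) per segment."""
    return sum(h * (c1 * c1 + c1 * c2 + c2 * c2) / (12.0 * math.pi)
               for c1, c2 in zip(circumferences, circumferences[1:]))
```

For a perfect cylinder both models agree with the analytic volume, which is a quick sanity check before applying them to a tapering limb.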

  16. Accuracy of Assessment of Eligibility for Early Medical Abortion by Community Health Workers in Ethiopia, India and South Africa

    PubMed Central

    Nguyen, My Huong; Habib, Ndema; Afework, Mesganaw Fantahun; Harries, Jane; Iyengar, Kirti; Moodley, Jennifer; Constant, Deborah; Sen, Swapnaleen

    2016-01-01

Objective To assess the accuracy of assessment of eligibility for early medical abortion by community health workers using a simple checklist toolkit. Design Diagnostic accuracy study. Setting Ethiopia, India and South Africa. Methods Two hundred seventeen women in Ethiopia, 258 in India and 236 in South Africa were enrolled into the study. A checklist toolkit to determine eligibility for early medical abortion was validated by comparing results of clinician and community health worker assessment of eligibility using the checklist toolkit with the reference standard exam. Results Accuracy was over 90% and the negative likelihood ratio <0.1 at all three sites when used by clinician assessors. Positive likelihood ratios were 4.3 in Ethiopia, 5.8 in India and 6.3 in South Africa. When used by community health workers, the overall accuracy of the toolkit was 92% in Ethiopia, 80% in India and 77% in South Africa; negative likelihood ratios were 0.08 in Ethiopia, 0.25 in India and 0.22 in South Africa; and positive likelihood ratios were 5.9 in Ethiopia and 2.0 in India and South Africa. Conclusion The checklist toolkit, as used by clinicians, was excellent at ruling out participants who were not eligible, and moderately effective at ruling in participants who were eligible for medical abortion. Results were promising when used by community health workers, particularly in Ethiopia where they had more prior experience with the use of diagnostic aids and longer professional training. The checklist toolkit assessments resulted in some participants being wrongly assessed as eligible for medical abortion, which is an area of concern. Further research is needed to streamline the components of the tool, explore the optimal duration and content of training for community health workers, and test feasibility and acceptability. PMID:26731176
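The accuracy and likelihood ratios reported above follow from a standard 2x2 table of checklist results against the reference-standard exam. A generic sketch (the counts in the test are hypothetical, not the study's data):

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Accuracy and likelihood ratios from a 2x2 diagnostic table:
    tp/fn = eligible correctly/incorrectly classified,
    tn/fp = ineligible correctly/incorrectly classified."""
    sens = tp / (tp + fn)                     # sensitivity
    spec = tn / (tn + fp)                     # specificity
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    lr_pos = sens / (1.0 - spec)              # positive likelihood ratio
    lr_neg = (1.0 - sens) / spec              # negative likelihood ratio
    return accuracy, lr_pos, lr_neg
```

A negative likelihood ratio below 0.1, as achieved by the clinician assessors, means a negative checklist result strongly rules out eligibility.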

  17. A Comparison of the Accuracy of Four Age Estimation Methods Based on Panoramic Radiography of Developing Teeth

    PubMed Central

    Javadinejad, Shahrzad; Sekhavati, Hajar; Ghafari, Roshanak

    2015-01-01

Background and aims. Tooth development is widely used in determining age and state of maturity. Dental age is of high importance in forensic and pediatric dentistry and also in orthodontic treatment planning. The aim of this study was to compare the accuracy of four radiographic age estimation methods. Materials and methods. Orthopantomographic images of 537 healthy children (age: 3.9-14.5 years old) were evaluated. Dental age of the subjects was determined through Demirjian's, Willem's, Cameriere's, and Smith's methods. Differences and correlations between chronological and dental ages were assessed by paired t-tests and Pearson's correlation analysis, respectively. Results. The mean chronological age of the subjects was 8.93 ± 2.04 years. Overestimations of age were observed following the use of Demirjian's method (0.87 ± 1.00 years), Willem's method (0.36 ± 0.87 years), and Smith's method (0.06 ± 0.63 years). However, Cameriere's method underestimated age by 0.19 ± 0.86 years. While paired t-tests revealed significant differences between the mean chronological age and ages determined by Demirjian's, Willem's, and Cameriere's methods (P < 0.001), such a significant difference was absent between chronological age and dental age based on Smith's method (P = 0.079). Pearson's correlation analysis suggested linear correlations between chronological age and dental age determined by all four methods. Conclusion. Our findings indicated Smith's method to have the highest accuracy among the four assessed methods. However, all four methods can be used with acceptable accuracy. PMID:26236431
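The comparison above rests on paired differences between dental and chronological age. A minimal sketch of the paired t statistic used for such comparisons (pure Python, purely illustrative; a real analysis would use a statistics package for the p-value):

```python
import math

def paired_t(differences):
    """Paired t statistic for per-subject differences
    d_i = dental age - chronological age: t = mean(d) / (sd / sqrt(n))."""
    n = len(differences)
    mean = sum(differences) / n
    var = sum((d - mean) ** 2 for d in differences) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

A mean difference near zero (as for Smith's method) drives t toward zero, which is why that method alone showed no significant bias.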

  18. Large Area Crop Inventory Experiment (LACIE). Accuracy assessment report phase 1A, November - December 1974. [Kansas

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The author has identified the following significant results. Results of the accuracy assessment activity for Phase IA of LACIE indicated that (1) The 90/90 criteria could be reached if the degree of accuracy of the LACIE performance in Kansas could be equaled in other areas. (2) The classification of both wheat and nonwheat fields was significantly accurate for the three ITS segments analyzed. The wheat field classification accuracy varied for the segments. However, this was not so with respect to nonwheat fields. (3) Biophase as well as its interaction with segment location turned out to be an important factor for the classification performance. Analyst interpretation of segments for training the classifier was a significant error-contributing factor in the estimation of wheat acreage at both the field and the segment levels.

  19. An Automated Grass-Based Procedure to Assess the Geometrical Accuracy of the Openstreetmap Paris Road Network

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Minghini, M.; Molinari, M. E.

    2016-06-01

OpenStreetMap (OSM) is the largest spatial database of the world. One of the most frequently occurring geospatial elements within this database is the road network, whose quality is crucial for applications such as routing and navigation. Several methods have been proposed for the assessment of OSM road network quality; however, they are often tightly coupled to the characteristics of the authoritative dataset involved in the comparison. This makes it hard to replicate and extend these methods. This study relies on an automated procedure which was recently developed for comparing OSM with any road network dataset. It is based on three Python modules for the open source GRASS GIS software and provides measures of OSM road network spatial accuracy and completeness. Provided that users are familiar with the authoritative dataset used, they can adjust the values of the parameters involved thanks to the flexibility of the procedure. The method is applied to assess the quality of the Paris OSM road network dataset through a comparison against the French official dataset provided by the French National Institute of Geographic and Forest Information (IGN). The results show that the Paris OSM road network has both high completeness and high spatial accuracy. It has a greater length than the IGN road network, and is found to be suitable for applications requiring spatial accuracies up to 5-6 m. Also, the results confirm the flexibility of the procedure for supporting users in carrying out their own comparisons between OSM and reference road datasets.

  20. On the convergence and accuracy of the cardiovascular intrinsic frequency method

    PubMed Central

    Tavallali, Peyman; Hou, Thomas Y.; Rinderknecht, Derek G.; Pahlevan, Niema M.

    2015-01-01

    In this paper, we analyse the convergence, accuracy and stability of the intrinsic frequency (IF) method. The IF method is a descendant of the sparse time frequency representation methods. These methods are designed for analysing nonlinear and non-stationary signals. Specifically, the IF method is created to address the cardiovascular system that by nature is a nonlinear and non-stationary dynamical system. The IF method is capable of handling specific nonlinear and non-stationary signals with less mathematical regularity. In previous works, we showed the clinical importance of the IF method. There, we showed that the IF method can be used to evaluate cardiovascular performance. In this article, we will present further details of the mathematical background of the IF method by discussing the convergence and the accuracy of the method with and without noise. It will be shown that the waveform fit extracted from the signal is accurate even in the presence of noise. PMID:27019733

  1. Electronic and Courier Methods of Information Dissemination: A Test of Accuracy.

    ERIC Educational Resources Information Center

    DeWine, Sue; And Others

    As part of a larger endeavor to evaluate the impact of communication technology on organizations, this study assesses the accuracy of information diffusion via electronic-mail and courier-mail systems in two large organizations which have implemented electronic-mail systems in the last three years. Data were obtained through the use of…

  2. Assessment of accuracy of suicide mortality surveillance data in South Africa: investigation in an urban setting.

    PubMed

    Burrows, Stephanie; Laflamme, Lucie

    2007-01-01

    Although it is not a legal requirement in South Africa, medical practitioners determine the manner of injury death for a surveillance system that is currently the only source of epidemiological data on suicide. This study assessed the accuracy of suicide data as recorded in the system using the docket produced from standard medico-legal investigation procedures as the gold standard. It was conducted in one of three cities where the surveillance system had full coverage for the year 2000. In the medico-legal system, one-third of cases could not be tracked, had not been finalized, or had unclear outcomes. For the remaining cases, the sensitivity, specificity, and positive and negative predictive values were generally high, varying somewhat across sex and race groups. Poisoning, jumping, and railway suicides were more likely than other methods to be misclassified, and were more common among females and Whites. The study provides encouraging results regarding the use of medical practitioner expertise for the accurate determination of suicide deaths. However, suicides may still be underestimated in this process given the challenge of tracing disguised suicides and without the careful examination of potential misclassifications of true suicides as unintentional deaths. PMID:17722688

  3. Mapping soil texture classes and optimization of the result by accuracy assessment

    NASA Astrophysics Data System (ADS)

    Laborczi, Annamária; Takács, Katalin; Bakacsi, Zsófia; Szabó, József; Pásztor, László

    2014-05-01

There are increasing demands nowadays on spatial soil information in order to support environment-related and land use management decisions. The GlobalSoilMap.net (GSM) project aims to make a new digital soil map of the world using state-of-the-art and emerging technologies for soil mapping and predicting soil properties at fine resolution. Sand, silt and clay are among the mandatory GSM soil properties. Furthermore, soil texture class information is input data for significant agro-meteorological and hydrological models. Our present work aims to compare and evaluate different digital soil mapping methods and variables for producing the most accurate spatial prediction of texture classes in Hungary. In addition to the Hungarian Soil Information and Monitoring System as our basic data, a digital elevation model and its derived components, a geological database, and physical property maps of the Digital Kreybig Soil Information System have been applied as auxiliary elements. Two approaches have been applied for the mapping process. First, the sand, silt and clay rasters have been computed independently using regression kriging (RK). From these rasters, according to the USDA categories, we have compiled the texture class map. Different combinations of reference and training soil data and auxiliary covariables have resulted in several different maps. However, these results necessarily carry over the uncertainty of the three kriged rasters. Therefore we have applied data mining methods as the other approach to digital soil mapping. By building classification trees and random forests, we have obtained the texture class maps directly. In this way the various results can be compared to the RK maps. The performance of the different methods and data has been examined by testing the accuracy of the geostatistically computed and the directly classified results. We have used the GSM methodology to assess the most predictive and accurate way for getting the best among the
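Compiling a texture class map from kriged sand/silt/clay rasters amounts to applying the USDA class boundaries pixel by pixel. A simplified sketch covering only three of the twelve USDA classes (the three thresholds follow the standard USDA definitions; the fallback label and function name are illustrative, and this is not the study's code):

```python
def usda_texture_class(sand, silt, clay):
    """Classify one pixel's sand/silt/clay percentages (summing to 100)
    into a USDA texture class. Only the sand, silt and clay corners of
    the texture triangle are implemented here."""
    assert abs(sand + silt + clay - 100.0) < 1e-6, "fractions must sum to 100"
    if clay >= 40 and sand <= 45 and silt < 40:
        return "clay"
    if silt >= 80 and clay < 12:
        return "silt"
    if sand >= 85 and (silt + 1.5 * clay) < 15:
        return "sand"
    return "other"  # remaining 9 USDA classes omitted in this sketch
```

In the RK workflow this function would be mapped over the three co-registered rasters; the tree/forest approach instead predicts the class label directly.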

  4. Method for estimating dynamic EM tracking accuracy of surgical navigation tools

    NASA Astrophysics Data System (ADS)

    Nafis, Christopher; Jensen, Vern; Beauregard, Lee; Anderson, Peter

    2006-03-01

    Optical tracking systems have been used for several years in image guided medical procedures. Vendors often state static accuracies of a single retro-reflective sphere or LED. Expensive coordinate measurement machines (CMM) are used to validate the positional accuracy over the specified working volume. Users are interested in the dynamic accuracy of their tools. The configuration of individual sensors into a unique tool, the calibration of the tool tip, and the motion of the tool contribute additional errors. Electromagnetic (EM) tracking systems are considered an enabling technology for many image guided procedures because they are not limited by line-of-sight restrictions, take minimum space in the operating room, and the sensors can be very small. It is often difficult to quantify the accuracy of EM trackers because they can be affected by field distortion from certain metal objects. Many high-accuracy measurement devices can affect the EM measurements being validated. EM Tracker accuracy tends to vary over the working volume and orientation of the sensors. We present several simple methods for estimating the dynamic accuracy of EM tracked tools. We discuss the characteristics of the EM Tracker used in the GE Healthcare family of surgical navigation systems. Results for other tracking systems are included.

  5. Evaluating the effect of learning style and student background on self-assessment accuracy

    NASA Astrophysics Data System (ADS)

    Alaoutinen, Satu

    2012-06-01

This study evaluates a new taxonomy-based self-assessment scale and examines factors that affect assessment accuracy and course performance. The scale is based on Bloom's Revised Taxonomy and is evaluated by comparing students' self-assessment results with course performance in a programming course. Correlation has been used to reveal possible connections between student information and both self-assessment and course performance. The results show that students can place their knowledge along the taxonomy-based scale quite well, and the scale seems to fit engineering students' learning style. Advanced students assess themselves more accurately than novices. The results also show that reflective students were better at programming than active students. The scale used in this study gives a more objective picture of students' knowledge than general scales, and with modifications it can be used in classes other than programming.

  6. On accuracy of holographic shape measurement method with spherical wave illumination

    NASA Astrophysics Data System (ADS)

Mikuła, Marta; Kozacki, Tomasz; Kostencka, Julianna; Liżewski, Kamil; Józwik, Michał

    2014-11-01

This paper presents a study on the accuracy of topography measurement of high numerical aperture (NA) focusing microobjects in a digital holographic microscope setup. The system works in a reflective configuration with spherical wave illumination. For numerical reconstruction of the topography of high-NA focusing microobjects we use two algorithms: Thin Element Approximation (TEA) and Spherical Local Ray Approximation (SLRA). We compare the accuracy of topography reconstruction using these algorithms and show the superiority of the SLRA method. However, to obtain accurate results two experimental conditions have to be determined: the position of the point source (PS) and of the imaging reference plane (IRP). Therefore we simulate the effect of PS and IRP position on the accuracy of shape calculation. Moreover, we evaluate the accuracy of determining the PS and IRP locations, and finally present a measurement result for a microlens object.

  7. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  8. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  9. Accuracy of the domain method for the material derivative approach to shape design sensitivities

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Botkin, M. E.

    1987-01-01

    Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.

  10. Accuracy assessment of topographic mapping using UAV image integrated with satellite images

    NASA Astrophysics Data System (ADS)

    Azmi, S. M.; Ahmad, Baharin; Ahmad, Anuar

    2014-02-01

Unmanned Aerial Vehicles (UAVs) are extensively applied in various fields such as military applications, archaeology, agriculture and scientific research. This study focuses on topographic mapping and map updating. The UAV is one of the alternative ways to ease the process of acquiring data, with low manufacturing and operating costs and ease of operation. Furthermore, UAV images are integrated with QuickBird images that are used as base maps. The objective of this study is to make an accuracy assessment and comparison of topographic mapping using UAV images integrated with an aerial photograph and a satellite image. The main purpose of using UAV images is as a replacement for cloud-covered areas, which commonly exist in aerial photographs and satellite images, and for updating topographic maps. Meanwhile, spatial resolution, pixel size, scale, geometric accuracy and correction, image quality and information content are important requirements for the generation of topographic maps from these kinds of data. In this study, ground control points (GCPs) and check points (CPs) were established using the real-time kinematic Global Positioning System (RTK-GPS) technique. Two types of analysis were carried out in this study: quantitative and qualitative assessment. Quantitative assessment was carried out by calculating the root mean square error (RMSE). The outputs of this study include a topographic map and an orthophoto. From this study, the accuracy of the UAV image is ±0.460 m. In conclusion, the UAV image has the potential to be used for updating topographic maps.
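The quantitative RMSE assessment against RTK-GPS check points can be sketched as follows (paired easting/northing coordinates assumed; a generic illustration, not the study's code):

```python
import math

def horizontal_rmse(mapped, reference):
    """Horizontal RMSE between mapped coordinates and RTK-GPS check-point
    coordinates, each given as paired (easting, northing) tuples."""
    n = len(mapped)
    sq_sum = sum((me - re) ** 2 + (mn - rn) ** 2
                 for (me, mn), (re, rn) in zip(mapped, reference))
    return math.sqrt(sq_sum / n)
```

An RMSE of about 0.46 m, as reported here, means mapped check points deviate from their surveyed positions by roughly half a metre on average.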

  11. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method

    PubMed Central

    HosseiniAliabadi, S. J.; Hosseini Pooya, S. M.; Afarideh, H.; Mianji, F.

    2015-01-01

Introduction The angular dependency of response of TLD cards may cause deviations from the true value in environmental dosimetry, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. Objective A 3D arrangement of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Method Three personal TLD cards were placed in a rectangular arrangement inside a cylindrical holder and calibrated using 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose within a time interval. Result The results show that the accuracy of measurement improved by 6.5% using the 3D calibration factor in comparison with the normal 1D calibration method. Conclusion This system can be utilized in large-scale environmental monitoring with higher accuracy. PMID:26157729
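The idea of an isotropic (3D) calibration factor can be sketched generically: a single factor relates the mean reading of the card set, averaged over irradiation angles, to the known delivered dose. The function names and procedure below are assumptions for illustration, not the authors' exact method:

```python
def isotropic_calibration_factor(angle_readings, delivered_dose):
    """3D calibration sketch: average the TLD readings obtained at
    multiple irradiation angles, then derive one dose-per-reading factor."""
    mean_reading = sum(angle_readings) / len(angle_readings)
    return delivered_dose / mean_reading

def measured_dose(field_reading, factor):
    """Apply the calibration factor to a reading from the field."""
    return field_reading * factor
```

Averaging over angles folds the angular response variation into the factor itself, which is what reduces the error relative to a single-angle (1D) calibration.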

  12. Accuracy of the orthopantomogram in assessment of tooth length in orthodontic patients.

    PubMed

    Lien, L C; Soh, G

    2000-12-01

The orthopantomogram (OPG) provides an assessment of root length and characteristics before orthodontic tooth movement. This study determined the accuracy of the OPG in assessing tooth length. Investigators compared the radiographic and actual tooth lengths in permanent first premolars indicated for orthodontic extraction. Results showed that the mean lengths measured from the OPG were consistently higher than the actual lengths, by 22% (p < 0.001) for maxillary teeth and by 1% for mandibular teeth. This study found that there is elongation of root images in the OPG. PMID:11699368

  13. Positioning accuracy assessment for the 4GEO/5IGSO/2MEO constellation of COMPASS

    NASA Astrophysics Data System (ADS)

    Zhou, ShanShi; Cao, YueLing; Zhou, JianHua; Hu, XiaoGong; Tang, ChengPan; Liu, Li; Guo, Rui; He, Feng; Chen, JunPing; Wu, Bin

    2012-12-01

    Determined to become a new member of the well-established GNSS family, COMPASS (or BeiDou-2) is developing its capabilities to provide high accuracy positioning services. Two positioning modes are investigated in this study to assess the positioning accuracy of COMPASS' 4GEO/5IGSO/2MEO constellation. Precise Point Positioning (PPP) for geodetic users and real-time positioning for common navigation users are utilized. To evaluate PPP accuracy, coordinate time series repeatability and discrepancies with GPS' precise positioning are computed. Experiments show that COMPASS PPP repeatability for the east, north and up components of a receiver within mainland China is better than 2 cm, 2 cm and 5 cm, respectively. Apparent systematic offsets of several centimeters exist between COMPASS precise positioning and GPS precise positioning, indicating errors remaining in the treatments of COMPASS measurement and dynamic models and reference frame differences existing between two systems. For common positioning users, COMPASS provides both open and authorized services with rapid differential corrections and integrity information available to authorized users. Our assessment shows that in open service positioning accuracy of dual-frequency and single-frequency users is about 5 m and 6 m (RMS), respectively, which may be improved to about 3 m and 4 m (RMS) with the addition of differential corrections. Less accurate Signal In Space User Ranging Error (SIS URE) and Geometric Dilution of Precision (GDOP) contribute to the relatively inferior accuracy of COMPASS as compared to GPS. Since the deployment of the remaining 1 GEO and 2 MEO is not able to significantly improve GDOP, the performance gap could only be overcome either by the use of differential corrections or improvement of the SIS URE, or both.

  14. The analysis accuracy assessment of CORINE land cover in the Iberian coast

    NASA Astrophysics Data System (ADS)

    Grullón, Yraida R.; Alhaddad, Bahaaeddin; Cladera, Josep R.

    2009-09-01

Corine land cover 2000 (CLC2000) is a project jointly managed by the Joint Research Centre (JRC) and the European Environment Agency (EEA). Its aim is to update the CORINE land cover database in Europe for the year 2000. Landsat-7 Enhanced Thematic Mapper (ETM) satellite images were used for the update and were acquired within the framework of the Image2000 project. Knowledge of land status through CORINE Land Cover mapping is of great importance for studying the interaction of land cover and land use categories at the European scale. This paper presents the accuracy assessment methodology designed and implemented to validate the Iberian coast CORINE Land Cover 2000 cartography. It presents an implementation of a new methodological concept for land cover data production, object-based classification with automatic generalization, to assess the thematic accuracy of CLC2000 by means of an independent data source, based on the comparison of the land cover database with reference data derived from visual interpretation of high-resolution satellite imagery for sample areas. In our case study, the existing object-based classifications are supported by digital maps and attribute databases. From the quality tests performed, we computed the overall accuracy and the Kappa coefficient. We focus on the development of a methodology based on classification and generalization analysis for built-up areas that may improve the investigation. The study can be divided into these fundamental steps: extract artificial areas from land use classifications based on Landsat and SPOT images; manually interpret high-resolution multispectral images; determine the homogeneity of artificial areas by a generalization process; and apply the overall accuracy, Kappa coefficient and spatial grid (fishnet) quality tests. Finally, the paper illustrates the accuracy of the CORINE dataset based on the above steps.
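The overall accuracy and Kappa coefficient used in these quality tests follow directly from a confusion matrix of map classes against reference classes. A minimal sketch with a hypothetical 3-class matrix (class names and counts are illustrative, not from the study):

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = map classes, columns = reference classes)."""
    size = len(cm)
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(size)) / n            # observed agreement
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm)        # chance agreement
             for i in range(size)) / n ** 2
    return po, (po - pe) / (1 - pe)

# Hypothetical counts: built-up / agriculture / forest
cm = [[50, 3, 2],
      [4, 60, 6],
      [1, 5, 69]]
po, kappa = accuracy_and_kappa(cm)   # overall accuracy 0.895, kappa ~0.84
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy in thematic map validation.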

  15. Improving the accuracy of convexity splitting methods for gradient flow equations

    NASA Astrophysics Data System (ADS)

    Glasner, Karl; Orizaga, Saulo

    2016-06-01

    This paper introduces numerical time discretization methods which significantly improve the accuracy of the convexity-splitting approach of Eyre (1998) [7], while retaining the same numerical cost and stability properties. A first order method is constructed by iteration of a semi-implicit method based upon decomposing the energy into convex and concave parts. A second order method is also presented based on backwards differentiation formulas. Several extrapolation procedures for iteration initialization are proposed. We show that, under broad circumstances, these methods have an energy decreasing property, leading to good numerical stability. The new schemes are tested using two evolution equations commonly used in materials science: the Cahn-Hilliard equation and the phase field crystal equation. We find that our methods can increase accuracy by many orders of magnitude in comparison to the original convexity-splitting algorithm. In addition, the optimal methods require little or no iteration, making their computation cost similar to the original algorithm.
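For context, the baseline these methods improve on can be sketched as a first-order, linearly stabilized (Eyre-type) convexity-splitting step: the stiff linear terms are treated implicitly and the concave remainder explicitly. The following 1-D periodic Cahn-Hilliard example is a generic Fourier-space implementation with illustrative parameters, not the authors' code:

```python
import numpy as np

# u_t = (u^3 - u)_xx - eps^2 * u_xxxx on a periodic domain, advanced with a
# linearly stabilized convex-splitting step (stabilization parameter s).
N, L, eps, dt, s = 128, 2 * np.pi, 0.18, 0.01, 2.0
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi        # angular wavenumbers
k2, k4 = k ** 2, k ** 4

u = 0.05 * np.cos(3 * x)                          # small initial perturbation
for _ in range(200):
    nonlin = np.fft.fft(u ** 3 - (1 + s) * u)     # explicit (concave) part
    u_hat = (np.fft.fft(u) - dt * k2 * nonlin) / (1 + dt * (s * k2 + eps ** 2 * k4))
    u = np.real(np.fft.ifft(u_hat))
```

The implicit denominator never changes, so each step costs two FFTs; the k = 0 mode is untouched, so mass is conserved exactly, and the stabilization keeps the iteration bounded at this (illustrative) step size.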

  16. How Can We Evaluate the Accuracy of Small Stream Maps? -Focusing on Sampling Method and Statistical Analysis -

    NASA Astrophysics Data System (ADS)

    Park, J.

    2010-12-01

The Washington State Department of Natural Resources’ (DNR) Forest Practices Habitat Conservation Plan (FPHCP) requires establishment of riparian management zones (RMZs) or equipment limitation zones (ELZs). In order to establish RMZs and ELZs, the DNR is required to update GIS-based stream maps showing the locations of type Ns (Non-fish seasonal) streams as well as type S (Shorelines of the state), type F (Fish habitat), and type Np (Non-fish perennial) streams. While there are few disputes over the positional accuracy of large streams, the representation of small streams such as Ns and small type S or F streams (less than 10’ width) has been considered to need further improvement in positional accuracy. Numerous remotely sensed stream-mapping methods have been developed in the last several decades that use an array of remote sensing data such as aerial photography, satellite optical imagery, and Digital Elevation Model (DEM) topographic data. While the positional accuracy of the final stream map products has been considered essential in determining map quality, the estimation or comparison of the positional accuracy of small stream map products has not been well studied, and is rarely attempted by remotely sensed stream map developers. Assessments of the positional accuracy of stream maps have not been properly addressed because it is not easy to acquire field reference data, especially for small streams under the canopy in remote forest areas. More importantly, as of this writing, we are not aware of any prominent method to estimate or compare the positional accuracy of stream maps. Since general positional accuracy assessment methods for remotely sensed map products are designed for at least two-dimensional features, they are not suitable for linear features such as streams. Due to the difficulties inherent in stream features, estimation methods for stream maps' accuracy have not dealt with the positional accuracy itself but the hydrological
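One published option for linear features, widely cited though not named in this abstract, is the buffer-overlay measure of Goodchild and Hunter (1997): report the fraction of the mapped line that lies within a tolerance ε of the reference line. A minimal sketch with hypothetical coordinates (metres):

```python
import math

def point_seg_dist(p, a, b):
    """Distance from point p to segment ab (2-D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def within_buffer_fraction(mapped, reference, eps, step=1.0):
    """Fraction of points sampled every `step` along the mapped polyline
    that lie within `eps` of the reference polyline."""
    samples = []
    for a, b in zip(mapped, mapped[1:]):
        n = max(1, int(math.hypot(b[0] - a[0], b[1] - a[1]) / step))
        samples += [(a[0] + (b[0] - a[0]) * i / n,
                     a[1] + (b[1] - a[1]) * i / n) for i in range(n)]
    samples.append(mapped[-1])
    hits = sum(1 for p in samples
               if min(point_seg_dist(p, a, b)
                      for a, b in zip(reference, reference[1:])) <= eps)
    return hits / len(samples)

# Hypothetical mapped stream vs. a field-surveyed reference line
mapped = [(0, 0), (50, 2), (100, -1)]
reference = [(0, 1), (100, 0)]
frac = within_buffer_fraction(mapped, reference, eps=5.0)   # 1.0 here
```

Sweeping ε and plotting the resulting fraction gives a displacement profile for the mapped line, which sidesteps the point-based accuracy framework the abstract finds unsuitable.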

  17. Diagnostic accuracy of Magnetic Resonance Imaging in assessment of Meniscal and ACL tear: Correlation with arthroscopy

    PubMed Central

    Yaqoob, Jamal; Alam, Muhammad Shahbaz; Khalid, Nadeem

    2015-01-01

Objective: To determine the diagnostic accuracy of magnetic resonance imaging (MRI) in injuries related to the anterior cruciate ligament and menisci and compare its effectiveness with that of arthroscopy. Methods: This retrospective cross-sectional study was conducted in the department of Radiology & Medical Imaging of Dallah Hospital, Riyadh, Kingdom of Saudi Arabia from September 2012 to March 2014. Fifty-four patients (30 men and 24 women) with internal derangement of the knee referred from the orthopedic consulting clinics underwent MR imaging followed by arthroscopic evaluation. The presence of meniscal and ligamentous abnormality on the imaging was documented by two trained radiologists. Findings were later compared with arthroscopic findings. Results: The sensitivity, specificity and accuracy of MR imaging for meniscal and ACL injury were calculated: 100% sensitivity, 88.4% specificity, 90% positive predictive value, 100% negative predictive value, and 94.4% accuracy were noted for medial meniscal injury. Similarly, MR had sensitivity of 85.7%, specificity of 95%, positive predictive value of 85.7%, negative predictive value of 95%, and accuracy of 92.5% for lateral meniscal injuries. Likewise, the anterior cruciate ligament had 91.6% sensitivity, 95.2% specificity, 84.6% positive predictive value, 97.5% negative predictive value, and 94.4% accuracy. Conclusion: MRI is extremely helpful in identifying meniscal and anterior cruciate ligament tears. MR imaging has a high negative predictive value, making it a better choice as a screening tool than diagnostic arthroscopic evaluation in most patients with soft tissue trauma to the knee. PMID:26101472
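All five statistics reported here are functions of one 2x2 table with arthroscopy as the reference standard. A minimal sketch with hypothetical counts (not the study's raw data, which the abstract does not give):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic-test statistics against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical: 27 true positives, 3 false positives, 0 false negatives,
# 24 true negatives out of 54 knees
m = diagnostic_metrics(27, 3, 0, 24)
```

Note how zero false negatives forces both sensitivity and NPV to 100%, the pattern that underlies the screening-tool argument in the conclusion.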

  18. A Method to Improve Mineral Identification Accuracy Based on Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Wang, Y. J.; Lin, Q. Z.; Wang, Q. J.; Chen, Y.

    2014-03-01

To improve the mineral identification accuracy of a rapid quantitative identification model, noise was filtered in segments based on the wavelengths of altered-mineral absorption peaks, and a regional spectral library suited to the study area was established. The filtered spectra were then analyzed with the regional spectral library. Compared with the original mineral identification results, the average efficiency rate improved by 5.1% and the average accuracy rate by 17.7%. The results were further optimized by a method based on the position of the altered-mineral absorption peak. The average efficiency rate could be improved further in future work to identify minerals more accurately.

  19. Accuracies and conservation errors of various ghost fluid methods for multi-medium Riemann problem

    NASA Astrophysics Data System (ADS)

    Xu, Liang; Liu, Tiegang

    2011-06-01

Since the (original) ghost fluid method (OGFM) was proposed by Fedkiw et al. in 1999 [5], a series of other GFM-based methods such as the gas-water version GFM (GWGFM), the modified GFM (MGFM) and the real GFM (RGFM) have been developed subsequently. Systematic analysis, however, has yet to be carried out for the various GFMs on their accuracies and conservation errors. In this paper, we develop a technique to rigorously analyze the accuracies and conservation errors of these different GFMs when applied to the multi-medium Riemann problem with a general equation of state (EOS). By analyzing and comparing the interfacial state provided by each GFM to the exact one of the original multi-medium Riemann problem, we show that the accuracy of the interfacial treatment can achieve "third-order accuracy", in the sense of comparison to the exact solution of the original multi-medium Riemann problem, for the MGFM and the RGFM, while it is of at most "first-order accuracy" for the OGFM and the GWGFM when the interface is actually near balance. Similar conclusions are also obtained in association with the local conservation errors. A special test method is exploited to validate these theoretical conclusions from the numerical viewpoint.

  20. Assessing the Accuracy of Alaska National Hydrography Data for Mapping and Science

    NASA Astrophysics Data System (ADS)

    Arundel, S. T.; Yamamoto, K. H.; Mantey, K.; Vinyard-Houx, J.; Miller-Corbett, C. D.

    2012-12-01

    In July, 2011, the National Geospatial Program embarked on a large-scale Alaska Topographic Mapping Initiative. Maps will be published through the USGS US Topo program. Mapping of the state requires an understanding of the spatial quality of the National Hydrography Dataset (NHD), which is the hydrographic source for the US Topo. The NHD in Alaska was originally produced from topographic maps at 1:63,360 scale. It is critical to determine whether the NHD is accurate enough to be represented at the targeted map scale of the US Topo (1:25,000). Concerns are the spatial accuracy of data and the density of the stream network. Unsuitably low accuracy can be a result of the lower positional accuracy standards required for the original 1:63,360 scale mapping, temporal changes in water features, or any combination of these factors. Insufficient positional accuracy results in poor vertical integration with data layers of higher positional accuracy. Poor integration is readily apparent on the US Topo, particularly relative to current imagery and elevation data. In Alaska, current IFSAR-derived digital terrain models meet positional accuracy requirements for 1:24,000-scale mapping. Initial visual assessments indicate a wide range in the quality of fit between features in NHD and the IFSAR. However, no statistical analysis had been performed to quantify NHD feature accuracy. Determining the absolute accuracy is cost prohibitive, because of the need to collect independent, well-defined test points for such analysis; however, quantitative analysis of relative positional error is a feasible alternative. The purpose of this study is to determine the baseline accuracy of Alaska NHD pertinent to US Topo production, and to recommend reasonable guidelines and costs for NHD improvement and updates. A second goal is to detect error trends that might help identify areas or features where data improvements are most needed. There are four primary objectives of the study: 1. Choose study

  1. Integrating Landsat and California pesticide exposure estimation at aggregated analysis scales: Accuracy assessment of rurality

    NASA Astrophysics Data System (ADS)

    Vopham, Trang Minh

    Pesticide exposure estimation in epidemiologic studies can be constrained to analysis scales commonly available for cancer data - census tracts and ZIP codes. Research goals included (1) demonstrating the feasibility of modifying an existing geographic information system (GIS) pesticide exposure method using California Pesticide Use Reports (PURs) and land use surveys to incorporate Landsat remote sensing and to accommodate aggregated analysis scales, and (2) assessing the accuracy of two rurality metrics (quality of geographic area being rural), Rural-Urban Commuting Area (RUCA) codes and the U.S. Census Bureau urban-rural system, as surrogates for pesticide exposure when compared to the GIS gold standard. Segments, derived from 1985 Landsat NDVI images, were classified using a crop signature library (CSL) created from 1990 Landsat NDVI images via a sum of squared differences (SSD) measure. Organochlorine, organophosphate, and carbamate Kern County PUR applications (1974-1990) were matched to crop fields using a modified three-tier approach. Annual pesticide application rates (lb/ac), and sensitivity and specificity of each rurality metric were calculated. The CSL (75 land use classes) classified 19,752 segments [median SSD 0.06 NDVI]. Of the 148,671 PUR records included in the analysis, Landsat contributed 3,750 (2.5%) additional tier matches. ZIP Code Tabulation Area (ZCTA) rates ranged between 0 and 1.36 lb/ac and census tract rates between 0 and 1.57 lb/ac. Rurality was a mediocre pesticide exposure surrogate; higher rates were observed among urban areal units. ZCTA-level RUCA codes offered greater specificity (39.1-60%) and sensitivity (25-42.9%). The U.S. Census Bureau metric offered greater specificity (92.9-97.5%) at the census tract level; sensitivity was low (≤6%). The feasibility of incorporating Landsat into a modified three-tier GIS approach was demonstrated. Rurality accuracy is affected by rurality metric, areal aggregation, pesticide chemical
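The segment classification step described above matches each segment's multi-date NDVI signature to the crop signature library by a sum of squared differences (SSD). A minimal sketch with hypothetical signatures (class names and NDVI values are illustrative only):

```python
def classify_ssd(segment, library):
    """Assign a segment's NDVI signature to the library class with the
    smallest sum of squared differences (SSD)."""
    best, best_ssd = None, float("inf")
    for name, signature in library.items():
        ssd = sum((a - b) ** 2 for a, b in zip(segment, signature))
        if ssd < best_ssd:
            best, best_ssd = name, ssd
    return best, best_ssd

# Hypothetical multi-date NDVI signatures (one value per image date)
library = {"cotton": [0.25, 0.55, 0.70],
           "grapes": [0.30, 0.45, 0.50],
           "fallow": [0.15, 0.18, 0.20]}
segment = [0.27, 0.52, 0.66]
label, ssd = classify_ssd(segment, library)   # nearest class by SSD
```

The median SSD of 0.06 NDVI quoted in the abstract is this same per-segment distance, summarized across the 19,752 classified segments.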

  2. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tend to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although the concepts for assessing precision and accuracy of ecorestoration field data are conceptually the same as laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  3. Accuracy assessment of minimum control points for UAV photography and georeferencing

    NASA Astrophysics Data System (ADS)

    Skarlatos, D.; Procopiou, E.; Stavrou, G.; Gregoriou, M.

    2013-08-01

In recent years, Autonomous Unmanned Aerial Vehicles (AUAV) became popular among researchers across disciplines because they combine many advantages. One major application is monitoring and mapping. Their ability to fly autonomously beyond line of sight, collecting data over large areas whenever and wherever needed, makes them an excellent platform for monitoring hazardous areas or disasters. In both cases rapid mapping is needed, while human access is not always a given. Indeed, current automatic processing of aerial photos using photogrammetry and computer vision algorithms allows for rapid orthophotomap production and Digital Surface Model (DSM) generation, as tools for monitoring and damage assessment. In such cases, control point measurement using GPS is either impossible, time consuming or costly. This work investigates the accuracies that can be attained using few or no control points over areas of one square kilometer, in two test sites: a typical block and a corridor survey. On-board GPS data logged during the AUAV's flight are used for direct georeferencing, while ground check points are used for evaluation. In addition, various control point layouts are tested using bundle adjustment for accuracy evaluation. Results indicate that it is possible to use on-board single-frequency GPS for direct georeferencing in cases of disaster management, in areas without easy access, or even over featureless areas. Due to the large number of tie points in the bundle adjustment, horizontal accuracy requirements can be fulfilled with a rather small number of control points, but vertical accuracy requirements may not.

  4. Thermal radiation view factor: Methods, accuracy and computer-aided procedures

    NASA Technical Reports Server (NTRS)

    Kadaba, P. V.

    1982-01-01

Computer-aided thermal analysis programs, which predict whether orbiting equipment will remain within a predetermined acceptable temperature range in various attitudes with respect to the Sun and the Earth, were examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of these view factors. Basic definitions and the standard methods that form the basis of the various digital computer and numerical methods are presented. The physical models and the mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations and the time required for computations are evaluated. The situations where accuracies are important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the programs available at several centers and future choices for efficient use of digital computers are included in the recommendations.
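A common numerical scheme of the kind surveyed here is Monte Carlo ray tracing: emit cosine-weighted (Lambertian) rays from one surface and count the fraction that strike the other. A minimal sketch for two equal coaxial parallel disks, checked against the standard closed-form result (geometry and sample count are illustrative):

```python
import math, random

def vf_disks_analytic(r, h):
    """Closed-form view factor between equal coaxial parallel disks of
    radius r separated by h."""
    X = 2.0 + (h / r) ** 2
    return (X - math.sqrt(X * X - 4.0)) / 2.0

def vf_disks_monte_carlo(r, h, n=200_000, seed=1):
    """Monte Carlo estimate: cosine-weighted rays from disk 1, hits on disk 2."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # uniform point on the emitting disk
        rad = r * math.sqrt(rng.random())
        phi = 2 * math.pi * rng.random()
        x0, y0 = rad * math.cos(phi), rad * math.sin(phi)
        # cosine-weighted emission direction
        u = rng.random()
        sin_t, cos_t = math.sqrt(u), math.sqrt(1.0 - u)
        psi = 2 * math.pi * rng.random()
        t = h / cos_t                       # ray length to the receiver plane
        x = x0 + t * sin_t * math.cos(psi)
        y = y0 + t * sin_t * math.sin(psi)
        hits += (x * x + y * y <= r * r)
    return hits / n

exact = vf_disks_analytic(1.0, 1.0)         # (3 - sqrt(5)) / 2 ~ 0.382
estimate = vf_disks_monte_carlo(1.0, 1.0)
```

The 1/sqrt(n) convergence of the estimate is exactly the accuracy-versus-computation-time trade-off the report evaluates across methods.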

  5. Estimating Orientation Using Magnetic and Inertial Sensors and Different Sensor Fusion Approaches: Accuracy Assessment in Manual and Locomotion Tasks

    PubMed Central

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-01-01

    Magnetic and inertial measurement units are an emerging technology to obtain 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of the stochastic (Extended Kalman Filter) and complementary (Non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading and when the movement exhibited stationary phases, evenly distributed 3D rotations, it occurred in a small volume, and its duration was greater than approximately 20 s. These results were independent from the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided. PMID:25302810
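The complementary-filtering idea behind the second method can be illustrated in one dimension: integrate the gyro for short-term accuracy and blend in the accelerometer inclination to bound the drift. A minimal sketch with synthetic data (the blend factor, bias and noise levels are hypothetical, not from the study):

```python
import random

def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """1-DOF complementary filter: high-pass the integrated gyro rate and
    low-pass the accelerometer inclination (angles in radians)."""
    angle = accel_angles[0]
    estimates = []
    for rate, incl in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * incl
        estimates.append(angle)
    return estimates

# Synthetic check: constant true pitch of 0.5 rad, a gyro with a 0.01 rad/s
# bias (pure integration would drift 0.2 rad over 20 s), and a noisy but
# unbiased accelerometer inclination.
rng = random.Random(0)
gyro = [0.01] * 2000
accel = [0.5 + rng.gauss(0.0, 0.02) for _ in range(2000)]
est = complementary_filter(gyro, accel, dt=0.01)
```

The aiding sensor caps the integration drift, which is the same mechanism the full 3-D stochastic and complementary filters in the study rely on.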

  6. Estimating orientation using magnetic and inertial sensors and different sensor fusion approaches: accuracy assessment in manual and locomotion tasks.

    PubMed

    Bergamini, Elena; Ligorio, Gabriele; Summa, Aurora; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria

    2014-01-01

    Magnetic and inertial measurement units are an emerging technology to obtain 3D orientation of body segments in human movement analysis. In this respect, sensor fusion is used to limit the drift errors resulting from the gyroscope data integration by exploiting accelerometer and magnetic aiding sensors. The present study aims at investigating the effectiveness of sensor fusion methods under different experimental conditions. Manual and locomotion tasks, differing in time duration, measurement volume, presence/absence of static phases, and out-of-plane movements, were performed by six subjects, and recorded by one unit located on the forearm or the lower trunk, respectively. Two sensor fusion methods, representative of the stochastic (Extended Kalman Filter) and complementary (Non-linear observer) filtering, were selected, and their accuracy was assessed in terms of attitude (pitch and roll angles) and heading (yaw angle) errors using stereophotogrammetric data as a reference. The sensor fusion approaches provided significantly more accurate results than gyroscope data integration. Accuracy improved mostly for heading and when the movement exhibited stationary phases, evenly distributed 3D rotations, it occurred in a small volume, and its duration was greater than approximately 20 s. These results were independent from the specific sensor fusion method used. Practice guidelines for improving the outcome accuracy are provided. PMID:25302810

  7. Effect of Flexural Rigidity of Tool on Machining Accuracy during Microgrooving by Ultrasonic Vibration Cutting Method

    NASA Astrophysics Data System (ADS)

    Furusawa, Toshiaki

    2010-12-01

It is necessary to form fine holes and grooves by machining in the manufacture of equipment in the medical and information fields, and the establishment of such a machining technology is required. In micromachining, the use of the ultrasonic vibration cutting method is expected and examined. In this study, I experimentally form microgrooves in stainless steel SUS304 by the ultrasonic vibration cutting method and examine the effects of the shape and material of the tool on the machining accuracy. As a result, the following are clarified. Evaluation of the machining accuracy in terms of the straightness of the finished surface revealed that there is an optimal rake angle of the tool, related to the increase in cutting resistance that results from increased work hardening and cutting area. The straightness is improved by using a tool with low flexural rigidity. In particular, Young's modulus affects the cutting accuracy more significantly than the shape of the tool.

  8. Height Accuracy Based on Different Rtk GPS Method for Ultralight Aircraft Images

    NASA Astrophysics Data System (ADS)

    Tahar, K. N.

    2015-08-01

Height accuracy is one of the important elements in surveying work, especially for control point establishment, which requires accurate measurement. Many methods can be used to acquire height values, such as tacheometry, levelling and the Global Positioning System (GPS). This study investigated the height accuracy obtained from two different observation modes: single-based and network-based GPS methods. The GPS network data were acquired from a local network, namely the Iskandar network. This network was set up to provide real-time correction data to the rover GPS station, while the single-based mode relies on a known GPS station. Nine ground control points were established evenly across the study area, and each was observed for about two minutes and for about ten minutes. It was found that height accuracy differs for each observation mode and duration.
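Comparisons like this typically reduce to the RMSE of observed heights at the control points against their known values, computed per observation mode. A minimal sketch with hypothetical heights (not the study's measurements):

```python
import math

def rmse(observed, reference):
    """Root-mean-square error of observed heights against known values (m)."""
    return math.sqrt(sum((o - r) ** 2 for o, r in zip(observed, reference))
                     / len(observed))

# Hypothetical ellipsoidal heights (m) at nine control points
known        = [52.10, 48.33, 50.02, 47.85, 51.44, 49.90, 53.21, 46.77, 50.65]
single_based = [52.16, 48.27, 50.10, 47.96, 51.36, 49.80, 53.30, 46.88, 50.73]
network      = [52.13, 48.30, 50.05, 47.89, 51.41, 49.86, 53.25, 46.80, 50.68]

rmse_single  = rmse(single_based, known)   # ~0.087 m for these values
rmse_network = rmse(network, known)        # ~0.034 m for these values
```

Reporting one RMSE per mode and per occupation time makes the single-based versus network-based contrast explicit.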

  9. Accuracy assessment of modeling architectural structures and details using terrestrial laser scanning

    NASA Astrophysics Data System (ADS)

    Kedzierski, M.; Walczykowski, P.; Orych, A.; Czarnecka, P.

    2015-08-01

One of the most important aspects when performing architectural documentation of cultural heritage structures is the accuracy of both the data and the products generated from these data: documentation in the form of 3D models or vector drawings. The paper describes an assessment of the accuracy of modelling data acquired using a terrestrial phase scanner in relation to the density of a point cloud representing the surface of different types of construction materials typical for cultural heritage structures. This analysis includes the impact of the scanning geometry: the incidence angle of the laser beam and the scanning distance. For the purposes of this research, a test field consisting of samples of different types of construction materials (brick, wood, plastic, plaster, a ceramic tile, sheet metal) was built. The study involved conducting measurements at different angles and from a range of distances for chosen scanning densities. Data, acquired in the form of point clouds, were then filtered and modelled. An accuracy assessment of the 3D model was conducted by fitting it to the point cloud. The reflection intensity of each type of material was also analyzed, to determine which construction materials have the highest and which the lowest reflectance coefficients, and in turn how this variable changes for different scanning parameters. Additionally, measurements were taken of a fragment of a building in order to compare the results obtained in laboratory conditions with those taken in field conditions.

  10. Strategies used in post-operative pain assessment and their clinical accuracy.

    PubMed

    Sjöström, B; Dahlgren, L O; Haljamäe, H

    2000-01-01

Our knowledge about the content of strategies used by staff members in a surgical recovery unit for the assessment of post-operative pain is fairly limited. The aim of the present study was to describe variations in the content of strategies used by nurses and physicians in practical clinical pain assessments and to evaluate the clinical accuracy of the strategies used. Critical care nurses (n = 30), physicians (n = 30) and postsurgical patients (n = 180) comprised the respondents. Applying a phenomenographical approach, interview data were tape-recorded during 180 clinical pain assessments. The pain assessments were related to comparative bedside pain ratings (Visual Analogue Scale, VAS) by both staff members and post-operative patients. The recorded interviews were analysed to describe variations in ways of assessing pain. Pain assessment strategies were established by combining categories describing the impact of experience and categories of assessment criteria. The present observations, if included in the education of clinical staff members, could increase the understanding and thereby the quality of the pain assessment process. PMID:11022499

  11. Analysis and improvement of accuracy, sensitivity, and resolution of the coherent gradient sensing method.

    PubMed

    Dong, Xuelin; Zhang, Changxing; Feng, Xue; Duan, Zhiyin

    2016-06-10

    The coherent gradient sensing (CGS) method, one kind of shear interferometry sensitive to surface slope, has been applied to full-field curvature measuring for decades. However, its accuracy, sensitivity, and resolution have not been studied clearly. In this paper, we analyze the accuracy, sensitivity, and resolution for the CGS method based on the derivation of its working principle. The results show that the sensitivity is related to the grating pitch and distance, and the accuracy and resolution are determined by the wavelength of the laser beam and the diameter of the reflected beam. The sensitivity is proportional to the ratio of grating distance to its pitch, while the accuracy will decline as this ratio increases. In addition, we demonstrate that using phase gratings as the shearing element can improve the interferogram and enhance accuracy, sensitivity, and resolution. The curvature of a spherical reflector is measured by CGS with Ronchi gratings and phase gratings under different experimental parameters to illustrate this analysis. All of the results are quite helpful for CGS applications. PMID:27409035

  12. Calibration of ground-based microwave radiometers - Accuracy assessment and recommendations for network users

    NASA Astrophysics Data System (ADS)

    Pospichal, Bernhard; Küchler, Nils; Löhnert, Ulrich; Crewell, Susanne; Czekala, Harald; Güldner, Jürgen

    2016-04-01

Ground-based microwave radiometers (MWR) are becoming widely used in atmospheric remote sensing and are starting to be routinely operated by national weather services and other institutions. However, common standards for the calibration of these radiometers and detailed knowledge of their error characteristics are needed in order to assimilate the data into models. Intercomparisons of calibrations by different MWRs have rarely been done. Therefore, two calibration experiments, in Lindenberg (2014) and Meckenheim (2015), were performed in the frame of TOPROF (COST Action ES1303) in order to assess uncertainties and differences between various instruments. In addition, a series of experiments was conducted in Oklahoma in autumn 2014. The focus lay on the performance of the two main instrument types currently used operationally: the MP-Profiler series by Radiometrics Corporation and the HATPRO series by Radiometer Physics GmbH (RPG). Both instrument types operate in two frequency bands, one along the 22 GHz water vapour line, the other on the lower wing of the 60 GHz oxygen absorption complex. The goal was to establish protocols for providing quality-controlled (QC) MWR data and their uncertainties. To this end, standardized calibration procedures for MWR were developed and recommendations for radiometer users were compiled. We focus here mainly on data types, integration times and optimal settings for calibration intervals, both for absolute (liquid nitrogen, tipping curve) and relative (hot load, noise diode) calibrations. Besides the recommendations for ground-based MWR operators, we will present methods to determine the accuracy of the calibration as well as means for automatic data quality control. In addition, some results from the intercomparison of different radiometers will be discussed.

  13. Accuracy of Panoramic Radiograph in Assessment of the Relationship Between Mandibular Canal and Impacted Third Molars

    PubMed Central

    Tantanapornkul, Weeraya; Mavin, Darika; Prapaiphittayakun, Jaruthai; Phipatboonyarat, Natnicha; Julphantong, Wanchanok

    2016-01-01

    Background: The relationship between an impacted mandibular third molar and the mandibular canal is important for removal of this tooth. Panoramic radiography is one of the commonly used diagnostic tools for evaluating the relationship of these two structures. Objectives: To evaluate the accuracy of panoramic radiographic findings in predicting direct contact between mandibular canal and impacted third molars on 3D digital images, and to define panoramic criteria for predicting direct contact between the two structures. Methods: Two observers examined panoramic radiographs of 178 patients (256 impacted mandibular third molars). Panoramic findings of interruption of mandibular canal wall, isolated or with darkening of third molar root, diversion of mandibular canal and narrowing of third molar root were evaluated and compared with 3D digital radiography. Direct contact between mandibular canal and impacted third molars on 3D digital images was then correlated with panoramic findings. Panoramic criteria were also defined for predicting direct contact between the two structures. Results: Panoramic findings of interruption of mandibular canal wall, isolated or with darkening of third molar root were statistically significantly correlated with direct contact between mandibular canal and impacted third molars on 3D digital images (p < 0.005), and were defined as panoramic criteria in predicting direct contact between the two structures. Conclusion: Interruption of mandibular canal wall, isolated or with darkening of third molar root observed on panoramic radiographs were effective in predicting direct contact between mandibular canal and impacted third molars on 3D digital images. Panoramic radiography is one of the efficient diagnostic tools for pre-operative assessment of impacted mandibular third molars. PMID:27398105

  14. An accuracy assessment of Cartesian-mesh approaches for the Euler equations

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1995-01-01

    A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.
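With an exact solution such as Ringleb's flow available, the order of the truncation error can be inferred from discrete errors on successively refined meshes. A minimal sketch (error values are illustrative, not from the paper):

```python
import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    """Observed order of accuracy p from discrete errors against an exact
    solution on two meshes related by a uniform refinement ratio r:
    e_coarse / e_fine = r**p, so p = log(e_coarse/e_fine) / log(r)."""
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# e.g. L2 errors against the exact Ringleb solution on h and h/2 meshes
p = observed_order(4.0e-3, 1.0e-3)
```

An error ratio of 4 under 2:1 refinement indicates second-order accuracy (p = 2).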

  15. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations. PMID:15832575
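Precision and accuracy of anthropometric measurements of the kind described above are commonly quantified with the technical error of measurement (TEM) for repeated measurements and the mean signed difference against direct measurement. A sketch with illustrative numbers (not study data):

```python
def tem(first, second):
    """Technical error of measurement for duplicate measurements:
    TEM = sqrt(sum(d_i**2) / (2*n)), d_i = first minus second measurement."""
    diffs = [a - b for a, b in zip(first, second)]
    return (sum(d * d for d in diffs) / (2 * len(diffs))) ** 0.5

def mean_bias(reference, method):
    """Accuracy as the mean signed difference of an indirect method
    against the direct (reference) measurement."""
    return sum(m - r for r, m in zip(reference, method)) / len(reference)

# illustrative repeated measurements (mm), not the DSP 400 study data
m1 = [31.2, 28.4, 30.0, 29.1]
m2 = [31.0, 28.8, 29.8, 29.5]
precision = tem(m1, m2)
```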

  16. Accuracy assessment of a mobile terrestrial lidar survey at Padre Island National Seashore

    USGS Publications Warehouse

    Lim, Samsung; Thatcher, Cindy A.; Brock, John C.; Kimbrow, Dustin R.; Danielson, Jeffrey J.; Reynolds, B.J.

    2013-01-01

    The higher point density and mobility of terrestrial laser scanning (light detection and ranging (lidar)) is desired when extremely detailed elevation data are needed for mapping vertically orientated complex features such as levees, dunes, and cliffs, or when highly accurate data are needed for monitoring geomorphic changes. Mobile terrestrial lidar scanners have the capability for rapid data collection on a larger spatial scale compared with tripod-based terrestrial lidar, but few studies have examined the accuracy of this relatively new mapping technology. For this reason, we conducted a field test at Padre Island National Seashore of a mobile lidar scanner mounted on a sport utility vehicle and integrated with a position and orientation system. The purpose of the study was to assess the vertical and horizontal accuracy of data collected by the mobile terrestrial lidar system, which is georeferenced to the Universal Transverse Mercator coordinate system and the North American Vertical Datum of 1988. To accomplish the study objectives, independent elevation data were collected by conducting a high-accuracy global positioning system survey to establish the coordinates and elevations of 12 targets spaced throughout the 12 km transect. These independent ground control data were compared to the lidar scanner-derived elevations to quantify the accuracy of the mobile lidar system. The performance of the mobile lidar system was also tested at various vehicle speeds and scan density settings (e.g. field of view and linear point spacing) to estimate the optimal parameters for desired point density. After adjustment of the lever arm parameters, the final point cloud accuracy was 0.060 m (east), 0.095 m (north), and 0.053 m (height). The very high density of the resulting point cloud was sufficient to map fine-scale topographic features, such as the complex shape of the sand dunes.
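The comparison of lidar-derived coordinates against surveyed control points reduces to a per-component RMSE. A minimal sketch with made-up coordinates (not the Padre Island survey data):

```python
def rmse_by_component(control_points, lidar_points):
    """Per-component RMSE between surveyed control coordinates and
    lidar-derived coordinates; each point is an (east, north, height) tuple."""
    n = len(control_points)
    sq = [0.0, 0.0, 0.0]
    for c, l in zip(control_points, lidar_points):
        for i in range(3):
            sq[i] += (l[i] - c[i]) ** 2
    return tuple((s / n) ** 0.5 for s in sq)

# illustrative coordinates (m)
gps = [(500.00, 3000.00, 2.00), (510.00, 3010.00, 2.50)]
lidar = [(500.06, 3000.09, 2.05), (510.06, 3010.09, 2.55)]
e_rmse, n_rmse, h_rmse = rmse_by_component(gps, lidar)
```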

  17. Nonuniform covering method as applied to multicriteria optimization problems with guaranteed accuracy

    NASA Astrophysics Data System (ADS)

    Evtushenko, Yu. G.; Posypkin, M. A.

    2013-02-01

    The nonuniform covering method is applied to multicriteria optimization problems. The ɛ-Pareto set is defined, and its properties are examined. An algorithm for constructing an ɛ-Pareto set with guaranteed accuracy ɛ is described. The efficiency of implementing this approach is discussed, and numerical results are presented.
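One common additive definition of an ε-Pareto set (the paper's exact definition may differ) keeps every point that no other candidate improves in all objectives by at least ε. A minimal filtering sketch for minimization:

```python
def eps_pareto_set(points, eps):
    """Return the eps-Pareto set (minimization): points for which no other
    candidate is better by at least eps in every objective."""
    def eps_dominates(a, b):
        return all(ai + eps <= bi for ai, bi in zip(a, b))
    return [p for p in points
            if not any(eps_dominates(q, p) for q in points if q is not p)]

pts = [(1.0, 5.0), (2.0, 3.0), (4.0, 4.0), (5.0, 1.0)]
front = eps_pareto_set(pts, eps=0.5)
```

Here (4, 4) is dropped because (2, 3) improves both objectives by more than ε = 0.5.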

  18. An analysis of the accuracy of magnetic resonance flip angle measurement methods

    NASA Astrophysics Data System (ADS)

    Morrell, Glen R.; Schabel, Matthias C.

    2010-10-01

    Several methods of flip angle mapping for magnetic resonance imaging have been proposed. We evaluated the accuracy of five methods of flip angle measurement in the presence of measurement noise. Our analysis was performed in a closed form by propagation of probability density functions (PDFs). The flip angle mapping methods compared were (1) the phase-sensitive method, (2) the dual-angle method using gradient recalled echoes (GRE), (3) an extended version of the GRE dual-angle method incorporating phase information, (4) the AFI method and (5) an extended version of the AFI method incorporating phase information. Our analysis took into account differences in required imaging time for these methods in the comparison of noise efficiency. PDFs of the flip angle estimate for each method for each value of true flip angle were calculated. These PDFs completely characterize the performance of each method. Mean bias and standard deviation were computed from these PDFs to more simply quantify the relative accuracy of each method over its range of measurable flip angles. We demonstrate that the phase-sensitive method provides the lowest mean bias and standard deviation of flip angle estimate of the five methods evaluated over a wide range of flip angles.
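For the dual-angle GRE method mentioned above, the noiseless estimator is simple: with TR >> T1 the GRE signal is proportional to sin(α), so acquisitions at nominal angles α and 2α give S2/S1 = 2 cos(α). A sketch of this textbook relation (the noise-propagation analysis in the paper is not reproduced here):

```python
import math

def dual_angle_flip(s1, s2):
    """Dual-angle GRE flip angle estimate in degrees:
    S1 ~ sin(alpha), S2 ~ sin(2*alpha) = 2*sin(alpha)*cos(alpha),
    hence alpha = arccos(S2 / (2*S1))."""
    return math.degrees(math.acos(s2 / (2.0 * s1)))

# noiseless check: true flip angle of 50 degrees is recovered exactly
alpha = math.radians(50.0)
s1, s2 = math.sin(alpha), math.sin(2 * alpha)
est = dual_angle_flip(s1, s2)
```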

  19. Estimation of diagnostic test accuracy without full verification: a review of latent class methods

    PubMed Central

    Collins, John; Huynh, Minh

    2014-01-01

    The performance of a diagnostic test is best evaluated against a reference test that is without error. For many diseases, this is not possible, and an imperfect reference test must be used. However, diagnostic accuracy estimates may be biased if inaccurately verified status is used as the truth. Statistical models have been developed to handle this situation by treating disease as a latent variable. In this paper, we conduct a systematized review of statistical methods using latent class models for estimating test accuracy and disease prevalence in the absence of complete verification. PMID:24910172
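The bias described above, treating an imperfect reference as truth, can be computed in closed form under conditional independence of the two tests given disease status. A sketch with illustrative parameter values:

```python
def apparent_accuracy(prev, se_index, sp_index, se_ref, sp_ref):
    """Apparent sensitivity/specificity of an index test when an imperfect
    reference test is treated as truth, assuming the two tests are
    conditionally independent given disease status."""
    # joint probabilities of (index, reference) = (+,+) and (-,-)
    pp = prev * se_index * se_ref + (1 - prev) * (1 - sp_index) * (1 - sp_ref)
    nn = prev * (1 - se_index) * (1 - se_ref) + (1 - prev) * sp_index * sp_ref
    p_ref_pos = prev * se_ref + (1 - prev) * (1 - sp_ref)
    return pp / p_ref_pos, nn / (1 - p_ref_pos)

# a truly 90%-sensitive index test scored against an 80%-sensitive reference
app_se, app_sp = apparent_accuracy(0.3, 0.9, 0.9, 0.8, 0.95)
```

The apparent sensitivity (about 0.80) understates the true 0.90, which is exactly the bias that latent class methods aim to avoid.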

  20. Enhancing the accuracy of the Fowler method for monitoring non-constant work functions

    NASA Astrophysics Data System (ADS)

    Friedl, R.

    2016-04-01

    The Fowler method is a prominent non-invasive technique to determine the absolute work function of a surface based on the photoelectric effect. The evaluation procedure relies on the correlation of the photocurrent with the incident photon energy hν which is mainly dependent on the surface work function χ. Applying Fowler's theory of the photocurrent, the measurements can be fitted by the theoretical curve near the threshold hν⪆χ yielding the work function χ and a parameter A. The straightforward experimental implementation of the Fowler method is to use several particular photon energies, e.g. via interference filters. However, with such a realization the restriction hν ≈ χ can easily be violated, especially when the work function of the material is decreasing during the measurements as, for instance, with coating or adsorption processes. This can lead to an overestimation of the evaluated work function value of typically some 0.1 eV, reaching up to more than 0.5 eV in an unfavorable case. A detailed analysis of the Fowler theory now reveals the background of that effect and shows that the fit-parameter A can be used to assess the accuracy of the determined value of χ conveniently during the measurements. Moreover, a scheme is introduced to quantify a potential overestimation and to perform a correction to χ to a certain extent. These issues are demonstrated using the example of monitoring the work function reduction of a stainless steel sample surface due to caesiation.
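In the zero-temperature simplification of Fowler's law (a deliberate simplification of the full Fowler function used in the paper), the photocurrent just above threshold follows I ≈ A·(hν − χ)², so sqrt(I) is linear in photon energy and a straight-line fit yields χ. A sketch with synthetic data:

```python
def fowler_fit(hnu, photocurrent):
    """Zero-temperature Fowler fit: I ~ A*(h*nu - chi)**2 for h*nu just above
    chi, so a least-squares line through sqrt(I) vs photon energy gives the
    work function chi = -intercept/slope and A = slope**2."""
    y = [i ** 0.5 for i in photocurrent]
    n = len(hnu)
    xbar, ybar = sum(hnu) / n, sum(y) / n
    slope = (sum((x - xbar) * (yi - ybar) for x, yi in zip(hnu, y))
             / sum((x - xbar) ** 2 for x in hnu))
    intercept = ybar - slope * xbar
    return -intercept / slope, slope ** 2

# synthetic photocurrents for chi = 2.1 eV, A = 4.0 (illustrative units)
energies = [2.3, 2.5, 2.7, 2.9]
currents = [4.0 * (e - 2.1) ** 2 for e in energies]
chi_est, a_est = fowler_fit(energies, currents)
```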

  2. Qualitative methods for assessing risk

    SciTech Connect

    Mahn, J.A.; Hannaman, G.W.; Kryska, P.

    1995-03-01

    The purpose of this document is to describe a qualitative risk assessment process that supplements the requirements of DOE/AL 5481.1B. Although facility managers have a choice of assessing risk either quantitatively or qualitatively, trade-offs are involved in making the most appropriate choice for a given application. The results that can be obtained from a quantitative risk assessment are significantly more robust than those derived from a qualitative approach. However, the advantages of quantitative risk assessment are achieved at a greater expenditure of money, time, and convenience. This document provides the elements of a framework for performing a much less costly qualitative risk assessment while retaining the best attributes of quantitative methods. The approach discussed herein will (1) provide facility managers with the tools to prepare consistent, site-wide assessments, and (2) aid the reviewers who may be tasked to evaluate the assessments. Added cost/benefit measures of the qualitative methodology include the identification of mechanisms for optimally allocating resources to minimize risk in an expeditious and fiscally responsible manner.

  3. Evaluating IRT- and CTT-Based Methods of Estimating Classification Consistency and Accuracy Indices from Single Administrations

    ERIC Educational Resources Information Center

    Deng, Nina

    2011-01-01

    Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, LEE method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…

  4. Diagnostic test accuracy: methods for systematic review and meta-analysis.

    PubMed

    Campbell, Jared M; Klugar, Miloslav; Ding, Sandrine; Carmody, Dennis P; Hakonsen, Sasja J; Jadotte, Yuri T; White, Sarahlouise; Munn, Zachary

    2015-09-01

    Systematic reviews are carried out to provide an answer to a clinical question based on all available evidence (published and unpublished), to critically appraise the quality of studies, and account for and explain variations between the results of studies. The Joanna Briggs Institute specializes in providing methodological guidance for the conduct of systematic reviews and has developed methods and guidance for reviewers conducting systematic reviews of studies of diagnostic test accuracy. Diagnostic tests are used to identify the presence or absence of a condition for the purpose of developing an appropriate treatment plan. Owing to demands for improvements in speed, cost, ease of performance, patient safety, and accuracy, new diagnostic tests are continuously developed, and there are often several tests available for the diagnosis of a particular condition. In order to provide the evidence necessary for clinicians and other healthcare professionals to make informed decisions regarding the optimum test to use, primary studies need to be carried out on the accuracy of diagnostic tests and the results of these studies synthesized through systematic review. The Joanna Briggs Institute and its international collaboration have updated, revised, and developed new guidance for systematic reviews, including systematic reviews of diagnostic test accuracy. This methodological article summarizes that guidance and provides detailed advice on the effective conduct of systematic reviews of diagnostic test accuracy. PMID:26355602
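The accuracy measures pooled in such reviews derive from the standard 2×2 table of index test against reference standard. A minimal sketch (illustrative counts, not from any study):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 table of
    index-test result (rows) against reference standard (columns)."""
    se = tp / (tp + fn)          # sensitivity
    sp = tn / (tn + fp)          # specificity
    return {"sensitivity": se, "specificity": sp,
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn),
            "lr_pos": se / (1 - sp), "lr_neg": (1 - se) / sp}

m = diagnostic_accuracy(tp=90, fp=10, fn=10, tn=90)
```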

  5. Assessing the accuracy and repeatability of automated photogrammetrically generated digital surface models from unmanned aerial system imagery

    NASA Astrophysics Data System (ADS)

    Chavis, Christopher

    Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed in addition to the consistency of the method to model one location over a period of time. Finally, this study determined if the DSMs automatically generated using lightweight UAS and commercial digital cameras could be used for detecting changes in elevation and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observance of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
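Change detection by DSM differencing typically applies a minimum level of detection derived from the vertical accuracy of the two surfaces. A sketch assuming independent errors in the two DSMs (LoD = k·σ·√2; the grids and threshold choice are illustrative):

```python
def detect_change(dsm_before, dsm_after, vertical_accuracy=0.25, k=1.96):
    """Per-cell DSM differencing with a minimum level of detection:
    LoD = k * vertical_accuracy * sqrt(2), propagating two independent
    surface errors. Returns a boolean grid of significant change."""
    lod = k * vertical_accuracy * 2 ** 0.5
    return [[abs(a - b) > lod for a, b in zip(row_after, row_before)]
            for row_after, row_before in zip(dsm_after, dsm_before)]

# illustrative 2x2 elevation grids (m); with 0.25 m accuracy, LoD ~ 0.69 m
before = [[10.0, 10.0], [10.0, 10.0]]
after = [[10.1, 11.0], [10.0, 9.0]]
changed = detect_change(before, after)
```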

  6. Psychometric characteristics of simulation-based assessment in anaesthesia and accuracy of self-assessed scores.

    PubMed

    Weller, J M; Robinson, B J; Jolly, B; Watterson, L M; Joseph, M; Bajenov, S; Haughton, A J; Larsen, P D

    2005-03-01

    The purpose of this study was to define the psychometric properties of a simulation-based assessment of anaesthetists. Twenty-one anaesthetic trainees took part in three highly standardised simulations of anaesthetic emergencies. Scenarios were videotaped and rated independently by four judges. Trainees also assessed their own performance in the simulations. Results were analysed using generalisability theory to determine the influence of subject, case and judge on the variance in judges' scores and to determine the number of cases and judges required to produce a reliable result. Self-assessed scores were compared to the mean score of the judges. The results suggest that 12-15 cases are required to rank trainees reliably on their ability to manage simulated crises. Greater reliability is gained by increasing the number of cases than by increasing the number of judges. There was modest but significant correlation between self-assessed scores and external assessors' scores (rho = 0.321; p = 0.01). At the lower levels of performance, trainees consistently overrated their performance compared to those performing at higher levels (p = 0.0001). PMID:15710009

  7. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    SciTech Connect

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher-fidelity data for calibration of material models used in Glass-To-Metal (GTM) seal analyses. Specifically, a Thermo-Multi-Linear Elastic Plastic (thermo-MLEP) material model has been defined for SS304L, and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  8. Qualitative methods for assessing risk

    SciTech Connect

    Mahn, J.A.; Hannaman, G.W.; Kryska, P.

    1995-04-01

    The Department of Energy's (DOE) non-nuclear facilities generally require only a qualitative accident analysis to assess facility risks in accordance with DOE Order 5481.1B, Safety Analysis and Review System. Achieving a meaningful qualitative assessment of risk necessarily requires the use of suitable non-numerical assessment criteria. Typically, the methods and criteria for assigning facility-specific accident scenarios to the qualitative severity and likelihood classification system in the DOE order require significant judgment in many applications. Systematic methods for more consistently assigning the total accident scenario frequency and associated consequences are required to substantiate and enhance future risk ranking between various activities at Sandia National Laboratories (SNL). SNL's Risk Management and National Environmental Policy Act (NEPA) Department has developed an improved methodology for performing qualitative risk assessments in accordance with the DOE order requirements. Products of this effort are an improved set of qualitative descriptions that permit (1) definition of the severity for both technical and programmatic consequences that may result from a variety of accident scenarios, and (2) qualitative representation of the likelihood of occurrence. These sets of descriptions are intended to facilitate proper application of DOE criteria for assessing facility risks.

  9. Accuracy and Reliability of Haptic Spasticity Assessment Using HESS (Haptic Elbow Spasticity Simulator)

    PubMed Central

    Kim, Jonghyun; Park, Hyung-Soon; Damiano, Diane L.

    2013-01-01

    Clinical assessment of spasticity tends to be subjective because of the nature of the in-person assessment; severity of spasticity is judged based on the muscle tone felt by a clinician during manual manipulation of a patient’s limb. As an attempt to standardize the clinical assessment of spasticity, we developed HESS (Haptic Elbow Spasticity Simulator), a programmable robotic system that can provide accurate and consistent haptic responses of spasticity and thus can be used as a training tool for clinicians. The aim of this study is to evaluate the accuracy and reliability of the recreated haptic responses. Based on clinical data collected from children with cerebral palsy, four levels of elbow spasticity (1, 1+, 2, and 3 in the Modified Ashworth Scale [MAS]) were recreated by HESS. Seven experienced clinicians manipulated HESS to score the recreated haptic responses. The accuracy of the recreation was assessed by the percent agreement between intended and determined MAS scores. The inter-rater reliability among the clinicians was analyzed by using Fleiss’s kappa. In addition, the level of realism with the recreation was evaluated by a questionnaire on “how realistic” this felt in a qualitative way. The percent agreement was high (85.7±11.7%), and for inter-rater reliability, there was substantial agreement (κ=0.646) among the seven clinicians. The level of realism was 7.71±0.95 out of 10. These results show that the haptic recreation of spasticity by HESS has the potential to be used as a training tool for standardizing and enhancing reliability of clinical assessment. PMID:22256328
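Fleiss' kappa, used above for the seven raters, can be computed directly from a subjects-by-categories count table. A sketch with illustrative ratings (not the study data):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for inter-rater agreement.
    counts[i][j] = number of raters assigning subject i to category j;
    every subject must be rated by the same number of raters."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])
    total = n_subjects * n_raters
    p_cat = [sum(row[j] for row in counts) / total
             for j in range(len(counts[0]))]
    # mean per-subject agreement
    p_bar = sum((sum(c * c for c in row) - n_raters)
                / (n_raters * (n_raters - 1)) for row in counts) / n_subjects
    p_e = sum(p * p for p in p_cat)  # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# illustrative: 7 raters scoring 4 recreated spasticity levels (MAS categories)
ratings = [[7, 0, 0, 0], [1, 6, 0, 0], [0, 0, 7, 0], [0, 0, 1, 6]]
kappa = fleiss_kappa(ratings)
```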

  10. Application of a Monte Carlo accuracy assessment tool to TDRS and GPS

    NASA Technical Reports Server (NTRS)

    Pavloff, Michael S.

    1994-01-01

    In support of a NASA study on the application of radio interferometry to satellite orbit determination, MITRE developed a simulation tool for assessing interferometric tracking accuracy. Initially, the tool was applied to the problem of determining optimal interferometric station siting for orbit determination of the Tracking and Data Relay Satellite (TDRS). Subsequently, the Orbit Determination Accuracy Estimator (ODAE) was expanded to model the general batch maximum likelihood orbit determination algorithms of the Goddard Trajectory Determination System (GTDS) with measurement types including not only group and phase delay from radio interferometry, but also range, range rate, angular measurements, and satellite-to-satellite measurements. The user of ODAE specifies the statistical properties of error sources, including inherent observable imprecision, atmospheric delays, station location uncertainty, and measurement biases. Upon Monte Carlo simulation of the orbit determination process, ODAE calculates the statistical properties of the error in the satellite state vector and any other parameters for which a solution was obtained in the orbit determination. This paper presents results from ODAE application to two different problems: (1) determination of optimal geometry for interferometric tracking of TDRS, and (2) expected orbit determination accuracy for Global Positioning System (GPS) tracking of low-earth orbit (LEO) satellites. Conclusions about optimal ground station locations for TDRS orbit determination by radio interferometry are presented, and the feasibility of GPS-based tracking for IRIDIUM, a LEO mobile satellite communications (MOBILSATCOM) system, is demonstrated.
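The Monte Carlo idea behind a tool like ODAE can be sketched on a toy problem: repeatedly perturb measurements of a linear model y = Hx + e, solve the least-squares estimate each trial, and collect the statistics of the estimation error. Everything below (geometry, noise level, two-parameter model) is an illustrative stand-in for the real batch orbit determination:

```python
import random

def monte_carlo_accuracy(h_rows, x_true, sigma, n_trials=2000, seed=1):
    """Monte Carlo assessment of least-squares accuracy for a 2-parameter
    linear model: simulate noisy measurements, solve the normal equations
    per trial, and return the RMS error of each estimated parameter."""
    rng = random.Random(seed)
    # normal-equation matrix (geometry is fixed across trials)
    a11 = sum(r[0] * r[0] for r in h_rows)
    a12 = sum(r[0] * r[1] for r in h_rows)
    a22 = sum(r[1] * r[1] for r in h_rows)
    det = a11 * a22 - a12 * a12
    sq_err = [0.0, 0.0]
    for _ in range(n_trials):
        y = [r[0] * x_true[0] + r[1] * x_true[1] + rng.gauss(0.0, sigma)
             for r in h_rows]
        b1 = sum(r[0] * yi for r, yi in zip(h_rows, y))
        b2 = sum(r[1] * yi for r, yi in zip(h_rows, y))
        x1 = (a22 * b1 - a12 * b2) / det
        x2 = (a11 * b2 - a12 * b1) / det
        sq_err[0] += (x1 - x_true[0]) ** 2
        sq_err[1] += (x2 - x_true[1]) ** 2
    return [(s / n_trials) ** 0.5 for s in sq_err]

# illustrative geometry: 10 measurements of an offset and a drift rate
rows = [(1.0, float(i)) for i in range(10)]
rms = monte_carlo_accuracy(rows, x_true=(2.0, -1.0), sigma=0.1)
```

The Monte Carlo RMS values should approach the analytic covariance σ²(HᵀH)⁻¹ as the trial count grows.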

  11. Application of bias correction methods to improve the accuracy of quantitative radar rainfall in Korea

    NASA Astrophysics Data System (ADS)

    Lee, J.-K.; Kim, J.-H.; Suk, M.-K.

    2015-11-01

    There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases into reflectivity measurement bias and rainfall estimation bias from the Quantitative Precipitation Estimation (QPE) model, and applied bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias correction, addressing reflectivity biases incurred in measuring rainfall, this study utilized a bias correction algorithm in which the reflectivity of the target single-pol radars is corrected against a reference dual-pol radar that has itself been corrected for hardware and software biases. The study then applied two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall estimation bias from the QPE model. The Z bias and rainfall estimation bias correction methods were applied to the RAR system, and the accuracy of the RAR system improved after the Z bias correction. By rainfall type, although the accuracy for the Changma front and local torrential cases improved only slightly, without the Z bias correction the accuracy for the typhoon cases in particular was worse than the existing results. As a result of the rainfall estimation bias correction, Z bias_LGC was clearly superior to the MFBC method because the LGC method applies a different rainfall bias to each grid rainfall amount. By rainfall type, the Z bias_LGC results showed that the rainfall estimates for all types were more accurate than with the Z bias alone, and the outcome for the typhoon cases in particular was vastly superior to the others.
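The mean field bias correction referenced above is, in its simplest form, a single multiplicative factor, the ratio of summed gauge rainfall to summed radar rainfall at gauge locations, applied uniformly to the radar field. A sketch with illustrative numbers:

```python
def mean_field_bias(gauge, radar):
    """Mean field bias factor: summed gauge rainfall divided by summed
    radar rainfall over all gauge locations."""
    return sum(gauge) / sum(radar)

def apply_mfb(radar_field, beta):
    """Apply the single bias factor uniformly to every radar grid value
    (in contrast with Local Gauge Correction, which varies per grid cell)."""
    return [[beta * v for v in row] for row in radar_field]

gauges = [12.0, 8.0, 20.0]            # gauge accumulations (mm)
radar_at_gauges = [10.0, 6.0, 16.0]   # collocated radar estimates (mm)
beta = mean_field_bias(gauges, radar_at_gauges)
```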

  12. Accuracy and repeatability of two methods of gait analysis - GaitRite™ and Mobility Lab™ - in subjects with cerebellar ataxia.

    PubMed

    Schmitz-Hübsch, Tanja; Brandt, Alexander U; Pfueller, Caspar; Zange, Leonora; Seidel, Adrian; Kühn, Andrea A; Paul, Friedemann; Minnerop, Martina; Doss, Sarah

    2016-07-01

    Instrumental gait analysis is increasingly recognized as a useful tool for the evaluation of movement disorders. The various assessment devices available to date have mostly been evaluated in healthy populations only. We aimed to explore whether reliability and validity seen in healthy subjects can also be assumed in subjects with cerebellar ataxic gait. Gait was recorded simultaneously with two devices - a sensor-embedded walkway and an inertial sensor based system - to explore test accuracy in two groups of subjects: one with mild to moderate cerebellar ataxia due to a subtype of autosomal-dominantly inherited neurodegenerative disorder (SCA14), the other were healthy subjects matched for age and height (CTR). Test precision was assessed by retest within session for each device. In conclusion, accuracy and repeatability of gait measurements were not compromised by ataxic gait disorder. The accuracy of spatial measures was speed-dependent and a direct comparison of stride length from both devices will be most reliably made at comfortable speed. Measures of stride variability had low agreement between methods in CTR and at retest in both groups. However, the marked increase of stride variability in ataxia outweighs the observed amount of imprecision. PMID:27289221

  13. Symmetrically associated combination method for accuracy verification of Coordinate Measuring Machines

    NASA Astrophysics Data System (ADS)

    Wang, Hongtao; Chen, Xiaohuai; Fei, Yetai

    2008-10-01

    Inspired by the traditional combination verification method, the paper presents a new accuracy verification approach for CMMs based on combinations of one-dimensional ball arrays in space. One-dimensional ball arrays replace the standard ruler of the traditional combination verification method, and purpose-built fixturing extends the one-dimensional arrays into three dimensions. The CMM measures a random spatial line using a symmetrically associated comparison measurement method, and the least-squares method is then applied to the measured data to obtain the measurement error of the spatial line, thereby realizing accuracy verification of the CMM.
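The least-squares step can be illustrated by fitting a line to measured ball centres and reporting the residuals as an error measure. This sketch simplifies to regressing y and z on x (assuming the array runs mostly along x), which is not necessarily the paper's exact formulation; the coordinates are invented:

```python
def fit_line_errors(points):
    """Simplified least-squares line fit to measured ball centres:
    regress y and z on x and report each point's residual distance
    from the fitted line as a proxy for the CMM's measurement error."""
    xs = [p[0] for p in points]
    n = len(xs)
    xbar = sum(xs) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    def fit(vals):
        vbar = sum(vals) / n
        slope = sum((x - xbar) * (v - vbar) for x, v in zip(xs, vals)) / sxx
        return vbar - slope * xbar, slope
    ay, by = fit([p[1] for p in points])
    az, bz = fit([p[2] for p in points])
    return [((p[1] - (ay + by * p[0])) ** 2
             + (p[2] - (az + bz * p[0])) ** 2) ** 0.5 for p in points]

# illustrative, exactly collinear ball-centre coordinates (mm)
pts = [(0.0, 0.0, 0.0), (10.0, 5.0, 2.0), (20.0, 10.0, 4.0)]
residuals = fit_line_errors(pts)
```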

  14. Testing the accuracy of the retrospective recall method used in expertise research.

    PubMed

    Howard, Robert W

    2011-12-01

    Expertise typically develops slowly over years, and controlled experiments to study its development may be impractical. Researchers often use a correlational, retrospective recall method in which participants recall career data, sometimes over many years before. However, recall accuracy is uncertain. The present study investigated the accuracy of recalled career data for up to 38 years, in over 600 international chess players. Participants' estimates of their entry year into international chess, total career games played, and number of games in a typical year were compared with the known true values. Entry year typically was recalled fairly accurately, and accuracy did not diminish systematically with time since list entry from 10 years earlier to 25 or more years earlier. On average, games-count estimates were reasonably accurate. However, some participants were very inaccurate, and some were more inaccurate in their total-games counts and entry-year estimates. The retrospective recall method yields usable data but may have some accuracy problems. Possible remedies are outlined. PMID:21671138

  15. Accuracy and stability in incompressible SPH (ISPH) based on the projection method and a new approach

    SciTech Connect

    Xu, Rui; Stansby, Peter; Laurence, Dominique

    2009-10-01

    The stability and accuracy of three methods which enforce either a divergence-free velocity field, density invariance, or their combination are tested here through the standard Taylor-Green and spin-down vortex problems. While various approaches to incompressible SPH (ISPH) have been proposed in the past decade, the present paper is restricted to the projection method for the pressure and velocity coupling. It is shown that the divergence-free ISPH method cannot maintain stability in certain situations although it is accurate before instability sets in. The density-invariant ISPH method is stable but inaccurate with random-noise like disturbances. The combined ISPH, combining advantages in divergence-free ISPH and density-invariant ISPH, can maintain accuracy and stability although at a higher computational cost. Redistribution of particles on a fixed uniform mesh is also shown to be effective but the attraction of a mesh-free method is lost. A new divergence-free ISPH approach is proposed here which maintains accuracy and stability while remaining mesh free without increasing computational cost by slightly shifting particles away from streamlines, although the necessary interpolation of hydrodynamic characteristics means the formulation ceases to be strictly conservative. This avoids the highly anisotropic particle spacing which eventually triggers instability. Importantly pressure fields are free from spurious oscillations, up to the highest Reynolds numbers tested.

  16. Accuracy Assessment Study of UNB3m Neutral Atmosphere Model for Global Tropospheric Delay Mitigation

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf

    2015-12-01

    Tropospheric delay is the second major source of error, after ionospheric delay, for satellite navigation systems. The transmitted signal can be delayed by the troposphere by over 2 m at zenith and 20 m at satellite elevation angles of 10 degrees and below. Positioning errors of 10 m or greater can result from inaccurate mitigation of the tropospheric delay. Many techniques are available for tropospheric delay mitigation, comprising surface meteorological models and global empirical models. Surface meteorological models require surface meteorological data to achieve high-accuracy mitigation, whereas global empirical models do not. Several hybrid neutral atmosphere delay models have been developed by researchers at the University of New Brunswick (UNB), Canada, over the past decade or so. The most widely applicable current version is UNB3m, which uses the Saastamoinen zenith delays, Niell mapping functions, and a look-up table of annual mean and amplitude values for temperature, pressure, and water vapour pressure, varying with latitude and height. This paper presents an assessment of the behaviour of the UNB3m model compared with highly accurate IGS tropospheric estimates for three IGS stations at different latitudes and heights. The study was performed over four nonconsecutive weeks in different seasons of one year (October 2014 to July 2015). It can be concluded that the UNB3m model gives a tropospheric delay correction accuracy of about 0.050 m on average for low-latitude regions in all seasons. The model's accuracy is about 0.075 m for mid-latitude regions, while its highest accuracy, about 0.014 m, is achieved for high-latitude regions.
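    The look-up-table scheme described above can be sketched as follows. The cosine seasonal model and the day-of-year phase constant (28, the northern-hemisphere parameter minimum) follow the published UNB3m design, but the table values below are illustrative placeholders, not the real UNB3m grid, which tabulates pressure, temperature, and water vapour pressure at 15-degree latitude bands.

```python
from math import cos, pi

def seasonal_param(mean, amplitude, day_of_year, phase_doy=28.0):
    """UNB3m-style seasonal model: the parameter oscillates about its
    annual mean with a yearly cosine, reaching its minimum at phase_doy."""
    return mean - amplitude * cos(2.0 * pi * (day_of_year - phase_doy) / 365.25)

# Illustrative values only (NOT the real UNB3m table): annual mean and
# amplitude of surface temperature (K) at two latitude bands.
TABLE = {
    15.0: {"T_mean": 299.65, "T_amp": 0.0},
    45.0: {"T_mean": 283.15, "T_amp": 11.0},
}

def lookup(lat, day_of_year):
    """Linearly interpolate the table in latitude, then apply the
    seasonal cosine for the requested day of year."""
    lats = sorted(TABLE)
    lo, hi = lats[0], lats[-1]
    lat = min(max(lat, lo), hi)
    w = (lat - lo) / (hi - lo)
    mean = (1 - w) * TABLE[lo]["T_mean"] + w * TABLE[hi]["T_mean"]
    amp = (1 - w) * TABLE[lo]["T_amp"] + w * TABLE[hi]["T_amp"]
    return seasonal_param(mean, amp, day_of_year)
```

    The interpolated temperature, pressure, and water vapour pressure then feed the Saastamoinen zenith delays, which the Niell functions map to the satellite elevation angle.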

  17. Assessment of the Geodetic and Color Accuracy of Multi-Pass Airborne/Mobile Lidar Data

    NASA Astrophysics Data System (ADS)

    Pack, R. T.; Petersen, B.; Sunderland, D.; Blonquist, K.; Israelsen, P.; Crum, G.; Fowles, A.; Neale, C.

    2008-12-01

    The ability to merge lidar and color image data acquired by multiple passes of an aircraft or van is largely dependent on the accuracy of the navigation system that estimates the dynamic position and orientation of the sensor. We report an assessment of the performance of a Riegl Q560 lidar transceiver combined with a NovAtel SPAN GPS/IMU system based on a Litton LN-200 inertial measurement unit (IMU), and a Panasonic HD video camera. Several techniques are reported that were used to maximize the performance of the GPS/IMU system in generating precisely merged point clouds. The airborne data used included eight flight lines, all overflying the same building on the campus of Utah State University. These lines were flown at the FAA minimum altitude of 1000 feet for fixed-wing aircraft. The mobile data were then acquired with the same system mounted to look sideways out of a van several months later. The van was driven around the same building at variable speed in order to avoid pedestrians. An absolute accuracy of about 6 cm and a relative accuracy of less than 2.5 cm one-sigma are documented for the merged data. Several techniques are also reported for merging the color video data stream with the lidar point cloud. A technique for back-projecting and burning lidar points into the video stream enables verification of co-boresighting accuracy. The resulting pixel-level alignment is accurate to within the size of a lidar footprint. The techniques described in this paper enable the display of high-resolution colored points of high detail and color clarity.

  18. Shuttle radar topography mission accuracy assessment and evaluation for hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Mercuri, Pablo Alberto

    Digital Elevation Models (DEMs) are increasingly used even in low relief landscapes for multiple mapping applications and modeling approaches such as surface hydrology, flood risk mapping, agricultural suitability, and generation of topographic attributes. The National Aeronautics and Space Administration (NASA) has produced a nearly global database of highly accurate elevation data, the Shuttle Radar Topography Mission (SRTM) DEM. The main goals of this thesis were to investigate quality issues of SRTM, provide measures of vertical accuracy with emphasis on low relief areas, and to analyze the performance for the generation of physical boundaries and streams for watershed modeling and characterization. The absolute and relative accuracy of the two SRTM resolutions, at 1 and 3 arc-seconds, were investigated to generate information that can be used as a reference in areas with similar characteristics in other regions of the world. The absolute accuracy was obtained from accurate point estimates using the best available federal geodetic network in Indiana. The SRTM root mean square error for this area of the Midwest US surpassed data specifications. It was on the order of 2 meters for the 1 arc-second resolution in flat areas of the Midwest US. Estimates of error were smaller for the global coverage 3 arc-second data with very similar results obtained in the flat plains in Argentina. In addition to calculating the vertical accuracy, the impacts of physiography and terrain attributes, like slope, on the error magnitude were studied. The assessment also included analysis of the effects of land cover on vertical accuracy. Measures of local variability were described to identify the adjacency effects produced by surface features in the SRTM DEM, like forests and manmade features near the geodetic point. 
Spatial relationships among the bare-earth National Elevation Data and SRTM were also analyzed to assess the relative accuracy that was 2.33 meters in terms of the total
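    The vertical-accuracy figures quoted above are root-mean-square errors of DEM elevations against surveyed geodetic control points. A minimal sketch of that computation, with made-up sample values rather than the thesis data:

```python
from math import sqrt

def vertical_rmse(dem_heights, control_heights):
    """Root-mean-square vertical error of DEM elevations against
    surveyed geodetic control heights (paired points, same units)."""
    errors = [d - c for d, c in zip(dem_heights, control_heights)]
    return sqrt(sum(e * e for e in errors) / len(errors))

# Illustrative values only (metres): DEM vs control at five benchmarks.
dem = [251.8, 198.2, 305.1, 120.7, 88.4]
ctrl = [250.0, 200.0, 303.0, 122.0, 89.0]
print(round(vertical_rmse(dem, ctrl), 2))
```

    The same statistic computed between two DEMs (e.g. SRTM against the bare-earth National Elevation Data) gives the relative accuracy figure reported above.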

  19. Accuracy of the Generalized Self-Consistent Method in Modelling the Elastic Behaviour of Periodic Composites

    NASA Technical Reports Server (NTRS)

    Walker, Kevin P.; Freed, Alan D.; Jordan, Eric H.

    1993-01-01

    Local stress and strain fields in the unit cell of an infinite, two-dimensional, periodic fibrous lattice have been determined by an integral equation approach. The effect of the fibres is assimilated to an infinite two-dimensional array of fictitious body forces in the matrix constituent phase of the unit cell. By subtracting a volume averaged strain polarization term from the integral equation we effectively embed a finite number of unit cells in a homogenized medium in which the overall stress and strain correspond to the volume averaged stress and strain of the constrained unit cell. This paper demonstrates that the zeroth term in the governing integral equation expansion, which embeds one unit cell in the homogenized medium, corresponds to the generalized self-consistent approximation. By comparing the zeroth term approximation with higher order approximations to the integral equation summation, both the accuracy of the generalized self-consistent composite model and the rate of convergence of the integral summation can be assessed. Two example composites are studied. For a tungsten/copper elastic fibrous composite the generalized self-consistent model is shown to provide accurate, effective, elastic moduli and local field representations. The local elastic transverse stress field within the representative volume element of the generalized self-consistent method is shown to be in error by much larger amounts for a composite with periodically distributed voids, but homogenization leads to a cancelling of errors, and the effective transverse Young's modulus of the voided composite is shown to be in error by only 23% at a void volume fraction of 75%.

  20. Accuracy of teacher assessments of second-language students at risk for reading disability.

    PubMed

    Limbos, M M; Geva, E

    2001-01-01

    This study examined the accuracy of teacher assessments in screening for reading disabilities among students of English as a second language (ESL) and as a first language (L1). Academic and oral language tests were administered to 369 children (249 ESL, 120 L1) at the beginning of Grade 1 and at the end of Grade 2. Concurrently, 51 teachers nominated children at risk for reading failure and completed rating scales assessing academic and oral language skills. Scholastic records were reviewed for notation of concern or referral. The criterion measure was a standardized reading score based on phonological awareness, rapid naming, and word recognition. Results indicated that teacher rating scales and nominations had low sensitivity in identifying ESL and L1 students at risk for reading disability at the 1-year mark. Relative to other forms of screening, teacher-expressed concern had lower sensitivity. Finally, oral language proficiency contributed to misclassifications in the ESL group. PMID:15497265
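    Screening accuracy of the kind reported here is usually summarized as sensitivity (the proportion of truly at-risk children the teacher flagged) and specificity (the proportion of not-at-risk children correctly left unflagged). A minimal sketch with hypothetical counts, not the study's data:

```python
def screening_accuracy(flags, at_risk):
    """Sensitivity and specificity of a binary screen (e.g. teacher
    nomination) against a binary criterion (e.g. standardized reading
    score below cutoff). Inputs are parallel lists of 0/1."""
    tp = sum(1 for f, r in zip(flags, at_risk) if f and r)
    fn = sum(1 for f, r in zip(flags, at_risk) if not f and r)
    fp = sum(1 for f, r in zip(flags, at_risk) if f and not r)
    tn = sum(1 for f, r in zip(flags, at_risk) if not f and not r)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical data: 10 children, 4 truly at risk, teacher flags 3.
flags   = [1, 0, 1, 0, 0, 0, 1, 0, 0, 0]
at_risk = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
sens, spec = screening_accuracy(flags, at_risk)  # sens = 0.5, spec ≈ 0.83
```

    "Low sensitivity" in the abstract means the first value was small: many at-risk children were never nominated.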

  1. Five-year accuracy of assessments of high risk for sexual recidivism of adolescents.

    PubMed

    Hagan, Michael P; Anderson, Debra L; Caldwell, Melissa S; Kemper, Therese S

    2010-02-01

    This study looked at 12 juveniles in Wisconsin who were recommended by experts for commitment under Chapter 980, known as the Sexually Violent Person Commitments Act, but who ultimately were not committed. The purpose was to determine the accuracy of these assessments of risk for sexual reoffending for juvenile sexual offenders. The results found a rate of 42% sexual recidivism among these individuals over a 5-year at-risk period. This figure is in contrast to the low rates of sexual recidivism reported in the general juvenile sexual offending research, and provides evidence that the ability to assess risk of juvenile sexual re-offending may at times be greater than previously estimated. Implications of these unusual results are discussed. PMID:18957553

  2. 3D Modelling and Accuracy Assessment of Granite Quarry Using Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    González-Aguilera, D.; Fernández-Hernández, J.; Mancera-Taboada, J.; Rodríguez-Gonzálvez, P.; Hernández-López, D.; Felipe-García, B.; Gozalo-Sanz, I.; Arias-Perez, B.

    2012-07-01

    Unmanned aerial vehicles (UAVs) are automated systems whose main characteristic is that they can be remotely piloted. This property is especially interesting in those civil engineering works in which the accuracy of the model is not reachable by common aerial or satellite systems, access to the infrastructure is difficult due to its location and geometry, and the economic resources are limited. This paper aims to show the research, development and application of a UAV that will generate georeferenced spatial information at low cost, high quality, and high availability. In particular, 3D modelling and accuracy assessment of a granite quarry using a UAV is performed. With regard to the image-based modelling pipeline, an automatic approach supported by open source tools is applied. The process encloses the well-known image-based modelling steps: calibration; extraction and matching of features; relative and absolute orientation of images; and point cloud and surface generation. Besides this, an assessment of the final model accuracy is carried out by means of terrestrial laser scanner (TLS), imaging total station (ITS) and global navigation satellite system (GNSS) in order to ensure its validity. This step follows a twofold approach: (i) firstly, using singular check points to provide a dimensional control of the model and (ii) secondly, analyzing the level of agreement between the reality-based 3D model obtained from the UAV and that generated with TLS. The main goal is to establish and validate an image-based modelling workflow using UAV technology which can be applied in the surveying and monitoring of different quarries.

  3. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    PubMed Central

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation. PMID:24936420
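    The contrast between the two tests can be sketched as follows: a one-sided binomial p-value for an observed cross-validated accuracy, versus an empirical permutation p-value built by re-running the same cross-validation pipeline on shuffled labels. The toy nearest-centroid classifier and the data are hypothetical; the point is only that the permutation null reflects the CV scheme, while the binomial null does not.

```python
import random
from math import comb, dist

def binomial_p(successes, n, chance=0.5):
    """One-sided P(X >= successes) under Binomial(n, chance)."""
    return sum(comb(n, k) * chance**k * (1 - chance)**(n - k)
               for k in range(successes, n + 1))

def loo_accuracy(X, y):
    """Leave-one-out CV hits for a toy nearest-centroid classifier."""
    hits = 0
    for i in range(len(X)):
        Xt, yt = X[:i] + X[i + 1:], y[:i] + y[i + 1:]
        cents = {c: [sum(p[d] for p, l in zip(Xt, yt) if l == c) /
                     sum(1 for l in yt if l == c)
                     for d in range(len(X[0]))]
                 for c in set(yt)}
        pred = min(cents, key=lambda c: dist(X[i], cents[c]))
        hits += (pred == y[i])
    return hits, len(X)

def permutation_p(X, y, n_perm=500, seed=0):
    """Empirical p-value: rerun the identical CV loop on shuffled labels."""
    rng = random.Random(seed)
    obs, _ = loo_accuracy(X, y)
    null_ge = sum(1 for _ in range(n_perm)
                  if loo_accuracy(X, rng.sample(y, len(y)))[0] >= obs)
    return (null_ge + 1) / (n_perm + 1)
```

    For example, binomial_p(15, 20) ≈ 0.0207, so 15/20 correct looks significant under the binomial test regardless of the CV scheme; permutation_p instead derives the null distribution through the same leave-one-out loop, which is what the study recommends.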

  4. Improvement Accuracy of Assessment of Total Equivalent Dose Rate during Air Travel

    NASA Astrophysics Data System (ADS)

    Dorenskiy, Sergey; Minligareev, Vladimir

    To ensure radiation safety at typical flight altitudes of 8-11 km, it is necessary to develop a methodology for calculating the total equivalent dose rate (EDR) to prevent excess exposure of airline passengers and crews. During development it became necessary to assess all components affecting the calculation of EDR. A comprehensive analysis of the solution to this problem is introduced, based on a developed software framework that automates the calculations, as well as on an assessment of the statistical data. The results have shown that: 1) The limiting error of the geomagnetic cutoff rigidity (GCR) in the period from 2005 to 2010 was 5%. This error is not significant for the problem considered. 2) It is necessary to take into account seasonal variations of atmospheric parameters in the calculation of the EDR; the difference in the determined dose rate can reach 31%. Diurnal variations of atmospheric parameters should also be considered to improve the reliability of EDR estimates. 3) Introducing additional parameters (such as the Kp index of geomagnetic activity) into the GCR calculations is necessary to improve the reliability and accuracy of EDR estimation on flight routes.

  5. Accuracy assessment of 3D bone reconstructions using CT: an in vitro comparison.

    PubMed

    Lalone, Emily A; Willing, Ryan T; Shannon, Hannah L; King, Graham J W; Johnson, James A

    2015-08-01

    Computed tomography provides high-contrast imaging of joint anatomy and is used routinely to reconstruct 3D models of the osseous and cartilage geometry (CT arthrography) for use in the design of orthopedic implants, in computer-assisted surgery, and in computational dynamic and structural analysis. The objective of this study was to assess the accuracy of bone and cartilage surface model reconstructions by comparing reconstructed geometries with bone digitizations obtained using an optical tracking system. Bone surface digitizations obtained in this study served as the ground-truth measure of the underlying geometry. We evaluated a commercially available reconstruction technique using clinical CT scanning protocols, with the elbow joint as an example of a surface with complex geometry. To assess the accuracy of the reconstructed models (8 fresh-frozen cadaveric specimens) against the ground-truth bony digitization, as defined by this study, proximity mapping was used to calculate residual error. The overall mean error was less than 0.4 mm in the cortical region and 0.3 mm in the subchondral region of the bone. Similarly, creating 3D cartilage surface models from CT scans using air contrast had a mean error of less than 0.3 mm. Results from this study indicate that clinical CT scanning protocols and commonly used, commercially available reconstruction algorithms can create models which accurately represent the true geometry. PMID:26037323
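    The residual-error measure used here (proximity mapping) is, in essence, the distance from each digitized ground-truth point to the closest point of the reconstructed surface. A brute-force sketch with hypothetical point sets, not the study's data:

```python
from math import dist

def mean_residual_error(digitized, model_points):
    """Mean unsigned distance from each digitized ground-truth point to
    its nearest neighbour on the reconstructed surface (point cloud)."""
    return sum(min(dist(p, q) for q in model_points)
               for p in digitized) / len(digitized)

# Hypothetical 3D points (mm): a reconstructed patch and three probe points.
model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
probes = [(0.1, 0.0, 0.0), (1.0, 1.0, 0.2), (0.0, 0.5, 0.0)]
err = mean_residual_error(probes, model)
```

    Production tools compute point-to-surface rather than point-to-point distances over dense meshes, but the summary statistic is the same.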


  7. About accuracy of the discrimination parameter estimation for the dual high-energy method

    NASA Astrophysics Data System (ADS)

    Osipov, S. P.; Chakhlov, S. V.; Osipov, O. S.; Shtein, A. M.; Strugovtsev, D. V.

    2015-04-01

    A set of mathematical formulas for estimating the accuracy of discrimination parameters for two implementations of the dual high-energy method, by effective atomic number and by level lines, is given. The hardware parameters which influence the accuracy of the discrimination parameters are stated. Recommendations for forming the structure of the high-energy X-ray radiation pulses are formulated. To prove the applicability of the proposed procedure, the statistical errors of the discrimination parameters were calculated for the cargo inspection system of Tomsk Polytechnic University based on the portable betatron MIB-9. Experimental and theoretical estimates of the discrimination parameter errors were compared, confirming the practical applicability of the algorithm for estimating discrimination parameter errors for the dual high-energy method.

  8. An objective spinal motion imaging assessment (OSMIA): reliability, accuracy and exposure data

    PubMed Central

    Breen, Alan C; Muggleton, Jennifer M; Mellor, Fiona E

    2006-01-01

    Background Minimally-invasive measurement of continuous inter-vertebral motion in clinical settings is difficult to achieve. This paper describes the reliability, validity and radiation exposure levels in a new Objective Spinal Motion Imaging Assessment system (OSMIA) based on low-dose fluoroscopy and image processing. Methods Fluoroscopic sequences in coronal and sagittal planes were obtained from 2 calibration models using dry lumbar vertebrae, plus the lumbar spines of 30 asymptomatic volunteers. Calibration model 1 (mobile) was screened upright, in 7 inter-vertebral positions. The volunteers and calibration model 2 (fixed) were screened on a motorised table comprising 2 horizontal sections, one of which moved through 80 degrees. Model 2 was screened during motion 5 times and the L2-S1 levels of the volunteers twice. Images were digitised at 5 fps. Inter-vertebral motion from model 1 was compared to its pre-settings to investigate accuracy. For volunteers and model 2, the first digitised image in each sequence was marked with templates. Vertebrae were tracked throughout the motion using automated frame-to-frame registration. For each frame, vertebral angles were subtracted giving inter-vertebral motion graphs. Volunteer data were acquired twice on the same day and analysed by two blinded observers. The root-mean-square (RMS) differences between paired data were used as the measure of reliability. Results RMS difference between reference and computed inter-vertebral angles in model 1 was 0.32 degrees for side-bending and 0.52 degrees for flexion-extension. For model 2, X-ray positioning contributed more to the variance of range measurement than did automated registration. For volunteer image sequences, RMS inter-observer variation in intervertebral motion range in the coronal plane was 1.86 degrees and intra-subject biological variation was between 2.75 degrees and 2.91 degrees. RMS inter-observer variation in the sagittal plane was 1.94 degrees. 
Radiation dosages

  9. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which are calculated as in the previous method. Generally, a small number of arithmetic operations, resulting in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  10. A PRIOR EVALUATION OF TWO-STAGE CLUSTER SAMPLING FOR ACCURACY ASSESSMENT OF LARGE-AREA LAND-COVER MAPS

    EPA Science Inventory

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, withi...

  11. Measurement methods and accuracy analysis of Chang'E-5 Panoramic Camera installation parameters

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Ren, Xin; Liu, Jianjun; Tan, Xu; Wang, Wenrui; Chen, Wangli; Zhang, Xiaoxia; Li, Chunlai

    2016-04-01

    Chang'E-5 (CE-5) is a lunar probe for the third phase of the China Lunar Exploration Project (CLEP), whose main scientific objectives are to sample the lunar surface and return the samples to Earth. To achieve these goals, investigation of the lunar surface topography and geological structure within the sampling area is extremely important. The Panoramic Camera (PCAM) is one of the payloads mounted on the CE-5 lander. It consists of two optical systems installed on a camera rotating platform. Optical images of the sampling area can be obtained by the PCAM as two-dimensional images, and a stereo image pair can be formed from the left and right PCAM images. Lunar terrain can then be reconstructed based on photogrammetry. Installation parameters of the PCAM with respect to the CE-5 lander are critical for the calculation of the exterior orientation elements (EO) of PCAM images, which are used for lunar terrain reconstruction. In this paper, the types of PCAM installation parameters and the coordinate systems involved are defined. Measurement methods combining camera images and optical coordinate observations are studied for this work. Then the observation program and specific solution methods for the installation parameters are introduced. Parametric solution accuracy is analyzed using observations obtained from the PCAM scientific validation experiment, which is used to test the authenticity of the PCAM detection process, ground data processing methods, product quality and so on. Analysis results show that the accuracy of the installation parameters affects the positional accuracy of corresponding image points of PCAM stereo images to within 1 pixel. So the measurement methods and parameter accuracy studied in this paper meet the needs of engineering and scientific applications.

    Keywords: Chang'E-5 Mission; Panoramic Camera; Installation Parameters; Total Station; Coordinate Conversion

  12. Assessment of the accuracy of ABC/2 variations in traumatic epidural hematoma volume estimation: a retrospective study

    PubMed Central

    Hu, Tingting; Zhang, Zhen

    2016-01-01

    Background. The traumatic epidural hematoma (tEDH) volume is often used to assist in tEDH treatment planning and outcome prediction. ABC/2 is a well-accepted volume estimation method that can be used for tEDH volume estimation. Previous studies have proposed different variations of ABC/2; however, it is unclear which variation will provide a higher accuracy. Given the promising clinical contribution of accurate tEDH volume estimations, we sought to assess the accuracy of several ABC/2 variations in tEDH volume estimation. Methods. The study group comprised 53 patients with tEDH who had undergone non-contrast head computed tomography scans. For each patient, the tEDH volume was automatically estimated by eight ABC/2 variations (four traditional and four newly derived) with an in-house program, and results were compared to those from manual planimetry. Linear regression, the closest value, percentage deviation, and Bland-Altman plot were adopted to comprehensively assess accuracy. Results. Among all ABC/2 variations assessed, the traditional variations y = 0.5 × A1B1C1 (or A2B2C1) and the newly derived variations y = 0.65 × A1B1C1 (or A2B2C1) achieved higher accuracy than the other variations. No significant differences were observed between the estimated volume values generated by these variations and those of planimetry (p > 0.05). Comparatively, the former performed better than the latter in general, with smaller mean percentage deviations (7.28 ± 5.90% and 6.42 ± 5.74% versus 19.12 ± 6.33% and 21.28 ± 6.80%, respectively) and more values closest to planimetry (18/53 and 18/53 versus 2/53 and 0/53, respectively). In addition, deviations of most cases in the former fell within the range of <10% (71.70% and 84.91%, respectively), whereas deviations of most cases in the latter were in the range of 10–20% and >20% (90.57% and 96.23%, respectively). Discussion. In the current study, we adopted an automatic approach to assess the accuracy of several ABC/2 variations
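    The ABC/2 family of estimators reduces to one line: multiply the three orthogonal hematoma diameters and scale by a coefficient (0.5 in the traditional variations, 0.65 in the newly derived ones above). A minimal sketch:

```python
def abc_volume(a, b, c, coefficient=0.5):
    """Estimate hematoma volume (mL) from three orthogonal diameters (cm):
    A = largest diameter, B = largest diameter perpendicular to A on the
    same slice, C = craniocaudal extent. Classic ABC/2 uses 0.5."""
    return coefficient * a * b * c

# Example: a 4 x 3 x 2 cm epidural hematoma.
traditional = abc_volume(4.0, 3.0, 2.0)        # 0.5  * 24 = 12.0 mL
derived = abc_volume(4.0, 3.0, 2.0, 0.65)      # 0.65 * 24 ≈ 15.6 mL
```

    The variations in the study differ in which slice the B diameter is taken from (the A1B1 vs A2B2 notation) and in the coefficient; the formula itself is unchanged.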

  13. On the spectral accuracy of a fictitious domain method for elliptic operators in multi-dimensions

    NASA Astrophysics Data System (ADS)

    Le Penven, Lionel; Buffat, Marc

    2012-10-01

    This work is a continuation of the authors' efforts to develop high-order numerical methods for solving elliptic problems with complex boundaries using a fictitious domain approach. In a previous paper, a new method was proposed, based on the use of smooth forcing functions with identical shapes, mutually disjoint supports inside the fictitious domain and whose amplitudes play the role of Lagrange multipliers in relation to a discrete set of boundary constraints. For one-dimensional elliptic problems, this method shows spectral accuracy but its implementation in two dimensions seems to be limited to a fourth-order algebraic convergence rate. In this paper, a spectrally accurate formulation is presented for multi-dimensional applications. Instead of being specified locally, the forcing function is defined as a convolution of a mollifier (smooth bump function) and a Lagrange multiplier function (the amplitude of the bump). The multiplier function is then approximated by Fourier series. Using a Fourier Galerkin approximation, the spectral accuracy is demonstrated on a two-dimensional Laplacian problem and on a Stokes flow around a periodic array of cylinders. In the latter, the numerical solution achieves the same high-order accuracy as a Stokes eigenfunction expansion and is much more accurate than the solution obtained with a classical third order finite element approximation using the same number of degrees of freedom.

  14. Error compensation method for improving the accuracy of biomodels obtained from CBCT data.

    PubMed

    Santolaria, J; Jiménez, R; Rada, M; Loscos, F

    2014-03-01

    This paper presents a method of improving the accuracy of the tridimensional reconstruction of human bone biomodels by means of tomography, with a view to finite element modelling or surgical planning, and the subsequent manufacturing using rapid prototyping technologies. It is focused on the analysis and correction of the results obtained by means of cone beam computed tomography (CBCT), which is used to digitalize non-superficial biological parts along with a gauge part with calibrated dimensions. A correction of both the threshold and the voxel size in the tomographic images and the final reconstruction is proposed. Finally, a comparison between a reconstruction of a gauge part using the proposed method and the reconstruction of that same gauge part using a standard method is shown. The increase in accuracy in the biomodel allows an improvement in medical applications based on image diagnosis, more accurate results in computational modelling, and improvements in surgical planning in situations in which the required accuracy directly affects the procedure's results. Thus, the subsequent constructed biomodel will be affected mainly by dimensional errors due to the additive manufacturing technology utilized, not because of the 3D reconstruction or the image acquisition technology. PMID:24080232

  15. The Accuracy of Diagnostic Methods for Diabetic Retinopathy: A Systematic Review and Meta-Analysis

    PubMed Central

    Martínez-Vizcaíno, Vicente; Cavero-Redondo, Iván; Álvarez-Bueno, Celia; Rodríguez-Artalejo, Fernando

    2016-01-01

    Objective The objective of this study was to evaluate the accuracy of the recommended glycemic measures for diagnosing diabetic retinopathy. Methods We systematically searched MEDLINE, EMBASE, the Cochrane Library, and the Web of Science databases from inception to July 2015 for observational studies comparing the diagnostic accuracy of glycated hemoglobin (HbA1c), fasting plasma glucose (FPG), and 2-hour plasma glucose (2h-PG). Random effects models for the diagnostic odds ratio (dOR) value computed by Moses’ constant for a linear model and 95% CIs were used to calculate the accuracy of the test. Hierarchical summary receiver operating characteristic curves (HSROC) were used to summarize the overall test performance. Results Eleven published studies were included in the meta-analysis. The pooled dOR values for the diagnosis of retinopathy were 16.32 (95% CI 13.86–19.22) for HbA1c and 4.87 (95% CI 4.39–5.40) for FPG. The area under the HSROC was 0.837 (95% CI 0.781–0.892) for HbA1c and 0.735 (95% CI 0.657–0.813) for FPG. The 95% confidence region for the point that summarizes the overall test performance of the included studies occurs where the cut-offs ranged from 6.1% (43.2 mmol/mol) to 7.8% (61.7 mmol/mol) for HbA1c and from 7.8 to 9.3 mmol/L for FPG. In the four studies that provided information regarding 2h-PG, the pooled accuracy estimates for HbA1c were similar to those of 2h-PG; the overall performance for HbA1c was superior to that for FPG. Conclusions The three recommended tests for the diagnosis of type 2 diabetes in nonpregnant adults showed sufficient accuracy for their use in clinical settings; the overall accuracy for the diagnosis of retinopathy was similar for HbA1c and 2h-PG, which were both more accurate than FPG. Due to the variability and inconveniences of the glucose level-based methods, HbA1c appears to be the most appropriate method for the diagnosis of diabetic retinopathy. PMID:27123641
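    The diagnostic odds ratio pooled above is computed per study from the 2x2 classification table. A minimal sketch with hypothetical counts, not figures from the meta-analysis:

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """dOR = (TP/FN) / (FP/TN): odds of a positive test among the
    diseased divided by odds of a positive test among the non-diseased."""
    return (tp * tn) / (fp * fn)

# Hypothetical study: 90/100 patients with retinopathy test positive,
# versus 20/100 patients without retinopathy.
dor = diagnostic_odds_ratio(tp=90, fp=20, fn=10, tn=80)  # 36.0
```

    A dOR of 1 means the test carries no diagnostic information; larger values (e.g. the pooled 16.32 for HbA1c versus 4.87 for FPG) indicate better discrimination.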

  16. Accuracy of least-squares methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bochev, Pavel B.; Gunzburger, Max D.

    1993-01-01

    Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. Although standard techniques for deriving error estimates fail for these formulations, the computational evidence suggests that the methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.
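
    The symmetric positive definite structure is the generic property of least-squares formulations: minimizing ||Ax - b||^2 yields the normal equations (A^T A)x = A^T b. A toy illustration (not the actual velocity-vorticity-pressure discretization):

```python
import numpy as np

# Minimizing ||Ax - b||^2 leads to the normal equations
# (A^T A) x = A^T b, whose matrix is symmetric positive definite
# whenever A has full column rank.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))    # full column rank (generic case)
b = rng.standard_normal(20)

N = A.T @ A                          # normal-equations matrix
assert np.allclose(N, N.T)           # symmetric
assert np.linalg.eigvalsh(N).min() > 0   # positive definite

x = np.linalg.solve(N, A.T @ b)      # least-squares solution
```

    Such systems can be solved with conjugate gradients or Cholesky factorization, which is the practical appeal the abstract refers to.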

  17. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    PubMed

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, the appropriateness of test methods for these Ebola-relevant concentrations had not previously been evaluated. We identified fourteen commercially available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training-burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration.
Given the
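
    The accuracy and precision metrics used in such evaluations can be sketched as percent error against the reference concentration and the coefficient of variation of replicates; the quintuplicate readings below are hypothetical:

```python
import statistics

def accuracy_and_precision(readings, reference):
    """Mean percent error (accuracy) and coefficient of variation
    (precision) for replicate chlorine measurements, e.g. the
    quintuplicate readings used in the study design."""
    errors = [abs(r - reference) / reference * 100 for r in readings]
    mean_err = statistics.mean(errors)
    cv = statistics.stdev(readings) / statistics.mean(readings) * 100
    return mean_err, cv

# Hypothetical quintuplicate readings of a 0.5% chlorine solution:
err, cv = accuracy_and_precision([0.48, 0.51, 0.49, 0.52, 0.50], 0.50)
```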

  18. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions

    PubMed Central

    Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, the appropriateness of test methods for these Ebola-relevant concentrations had not previously been evaluated. We identified fourteen commercially available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training-burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5–11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration.

  19. Radiative Transfer Methods: new exact results for testing the accuracy of the ALI numerical method for a stellar atmosphere

    NASA Astrophysics Data System (ADS)

    Chevallier, L.

    2010-11-01

    Tests are presented of the 1D Accelerated Lambda Iteration method, which is widely used for solving the radiative transfer equation for a stellar atmosphere. We use our ARTY code as a reference solution, and tables for these tests are provided. We model a static idealized stellar atmosphere, which is illuminated on its inner face and where internal sources are distributed with weak or strong gradients. This is an extension of published tests for a slab without incident radiation and gradients. Typical physical conditions for the continuum radiation and spectral lines are used, as well as typical values for the numerical parameters in order to reach a 1% accuracy. It is shown that the method is able to reach such an accuracy for most cases, but the spatial discretization has to be refined for strong gradients and spectral lines, beyond the scope of realistic stellar atmosphere models. Discussion is provided on faster methods.

  20. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    PubMed Central

    Hwang, Andrew B; Franc, Benjamin L; Gullberg, Grant T; Hasegawa, Bruce H

    2009-01-01

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the use of resolution
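
    The headline attenuation numbers can be reproduced from the narrow-beam law I/I0 = exp(-μd). A rough sketch with assumed round-number attenuation coefficients for water (illustrative values, not taken from the paper):

```python
import math

def attenuation_factor(mu, depth_cm):
    """Narrow-beam photon attenuation: surviving fraction exp(-mu * d)."""
    return math.exp(-mu * depth_cm)

# Assumed approximate linear attenuation coefficients of water:
# ~0.39 /cm near the ~28 keV emissions of I-125,
# ~0.15 /cm at the 140 keV of Tc-99m.
loss_i125 = 1 - attenuation_factor(0.39, 2.0)   # ~2 cm depth in a rat-sized cylinder
loss_tc99m = 1 - attenuation_factor(0.15, 2.0)
```

    With these assumed coefficients the fraction lost is roughly 50% for iodine-125 and roughly 25% for technetium-99m, consistent with the magnitudes reported in the abstract.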

  1. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    SciTech Connect

    Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-02-15

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. 
We also found that an improvement of the spatial resolution through the

  2. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    SciTech Connect

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
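
    The core replacement step can be sketched as a simple intensity-to-HU regression learned on an artifact-free slice and applied to the corrupted region. Everything below (the linear model, the synthetic intensities) is an illustrative assumption, not the authors' exact analysis:

```python
import numpy as np

# Learn a mapping from MRI intensity to CT HU on an artifact-free
# slice, then predict HU for corrupted pixels on the artifact slice
# from their coregistered MRI intensities.
rng = np.random.default_rng(1)
mri_clean = rng.uniform(100, 900, 500)                 # artifact-free slice
hu_clean = 0.8 * mri_clean - 400 + rng.normal(0, 5, 500)  # synthetic paired HU

coeffs = np.polyfit(mri_clean, hu_clean, deg=1)        # linear MRI->HU model

mri_corrupt = np.array([250.0, 500.0, 750.0])          # pixels under artifact
hu_pred = np.polyval(coeffs, mri_corrupt)              # replacement HU values
```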

  3. A mesh-free method with arbitrary-order accuracy for acoustic wave propagation

    NASA Astrophysics Data System (ADS)

    Takekawa, Junichi; Mikada, Hitoshi; Imamura, Naoto

    2015-05-01

    In the present study, we applied a novel mesh-free method to solve the acoustic wave equation. Whereas conventional finite difference methods determine the coefficients of their operators based on a regular grid alignment, the mesh-free method is not restricted to regular arrangements of calculation points. We derive the mesh-free approach using the multivariable Taylor expansion. The methodology can use an arbitrary-order accuracy scheme in space by expanding the influence domain, which controls the number of neighboring calculation points. The unique point of the method is that it calculates the approximation of derivatives using the differences of spatial variables, without parameters such as weighting functions or basis functions. Dispersion analysis using a plane wave reveals that the choice of a higher-order scheme improves the dispersion property of the method, although the scheme for an irregular distribution of calculation points is more dispersive than that for a regular alignment. In numerical experiments, a model with an irregular distribution of calculation points reproduces acoustic wave propagation in a homogeneous medium as accurately as a regular lattice. In an inhomogeneous model that includes low-velocity anomalies, a partially refined arrangement improves computational efficiency without sacrificing accuracy. Our results indicate that the method can provide accurate and efficient solutions for acoustic wave propagation using an adaptive distribution of calculation points.
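
    In 1D the Taylor-expansion construction reduces to solving a small moment system for derivative weights on irregular points; a sketch (the node positions are arbitrary illustrative choices):

```python
import math
import numpy as np

# 1D sketch of the Taylor-expansion idea (the paper works in higher
# dimensions): choose weights w_i so that sum_i w_i * f(x_i)
# approximates f''(x0) on irregularly spaced calculation points.
def second_derivative_weights(x, x0):
    d = np.asarray(x, dtype=float) - x0    # offsets of calculation points
    n = len(d)
    # Row k of the system enforces sum_i w_i * d_i^k / k! = [k == 2]
    A = np.array([d**k / math.factorial(k) for k in range(n)])
    rhs = np.zeros(n)
    rhs[2] = 1.0
    return np.linalg.solve(A, rhs)

pts = [-1.3, -0.4, 0.0, 0.7, 1.1]          # irregular neighbours
w = second_derivative_weights(pts, 0.0)

# With 5 points the scheme is exact for polynomials up to degree 4:
f = lambda x: x**3 + 2 * x**2               # f''(0) = 4
approx = sum(wi * f(xi) for wi, xi in zip(w, pts))
```

    Adding more neighbours (a larger influence domain) raises the matched Taylor order, which is the arbitrary-order accuracy the abstract describes.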

  4. Complex shape product tolerance and accuracy control method for virtual assembly

    NASA Astrophysics Data System (ADS)

    Ma, Huiping; Jin, Yuanqiang; Zhang, Xiaoguang; Zhou, Hai

    2015-02-01

    The simulation of the virtual assembly process for engineering design currently lacks accuracy in three-dimensional CAD software. Product modeling technology with tolerance, assembly precision pre-analysis techniques, and precision control methods are developed. To solve the lack of precision information transmission in CAD, a tolerance mathematical model based on the Small Displacement Torsor (SDT) is presented, which enables technology transfer and the establishment of a digital control function for geometric elements from definition, description, and specification through to the actual inspection and evaluation process. Tolerance optimization design methods for complex shape products are proposed for the optimization of machining technology, effective cost control, and assembly quality of the products.

  5. LNG Safety Assessment Evaluation Methods

    SciTech Connect

    Muna, Alice Baca; LaFleur, Angela Christine

    2015-05-01

    Sandia National Laboratories evaluated published safety assessment methods across a variety of industries including Liquefied Natural Gas (LNG), hydrogen, land and marine transportation, as well as the US Department of Defense (DOD). All the methods were evaluated for their potential applicability for use in the LNG railroad application. After reviewing the documents included in this report, as well as others not included because of repetition, the Department of Energy (DOE) Hydrogen Safety Plan Checklist is most suitable to be adapted to the LNG railroad application. This report was developed to survey industries related to rail transportation for methodologies and tools that can be used by the FRA to review and evaluate safety assessments submitted by the railroad industry as a part of their implementation plans for liquefied or compressed natural gas storage (on-board or tender) and engine fueling delivery systems. The main sections of this report provide an overview of various methods found during this survey. In most cases, the reference document is quoted directly. The final section provides discussion and a recommendation for the most appropriate methodology that will allow efficient and consistent evaluations to be made. The DOE Hydrogen Safety Plan Checklist was then revised to adapt it as a methodology for the Federal Railroad Administration’s use in evaluating safety plans submitted by the railroad industry.

  6. Reconstruction Accuracy Assessment of Surface and Underwater 3D Motion Analysis: A New Approach

    PubMed Central

    de Jesus, Kelly; de Jesus, Karla; Figueiredo, Pedro; Vilas-Boas, João Paulo; Fernandes, Ricardo Jorge; Machado, Leandro José

    2015-01-01

    This study assessed the accuracy of surface and underwater 3D reconstruction of a calibration volume with and without homography. A calibration volume (6000 × 2000 × 2500 mm) with 236 markers (64 above-water and 88 underwater control points—with 8 common points at the water surface—and 92 validation points) was positioned in a 25 m swimming pool and recorded with two surface and four underwater cameras. Planar homography estimation for each calibration plane was computed to perform image rectification. The direct linear transformation algorithm for 3D reconstruction was applied, using 1,600,000 different combinations of 32 and 44 points out of the 64 and 88 control points for surface and underwater markers, respectively. Root Mean Square (RMS) error with homography of control and validation points was lower than without it for surface and underwater cameras (P ≤ 0.03). With homography, RMS errors of control and validation points were similar between surface and underwater cameras (P ≥ 0.47). Without homography, RMS error of control points was greater for underwater than surface cameras (P ≤ 0.04) and the opposite was observed for validation points (P ≤ 0.04). It is recommended that future studies using 3D reconstruction include homography to improve the accuracy of swimming movement analysis. PMID:26175796
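
    The homography estimation step can be sketched with the standard DLT formulation; the point pairs below are hypothetical, not the calibration volume's markers:

```python
import numpy as np

def homography_dlt(src, dst):
    """Planar homography H (3x3) from >= 4 point pairs via the direct
    linear transformation: each pair contributes two rows of A h = 0,
    and h is the singular vector for the smallest singular value."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]            # back from homogeneous coordinates

# Hypothetical example: recover a known projective warp of a unit square
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0.1), (2.2, 1.9), (0.1, 2)]
H = homography_dlt(src, dst)
```

    Image rectification then amounts to applying H (or its inverse) to every pixel of the calibration plane.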

  7. SU-E-J-117: Verification Method for the Detection Accuracy of Automatic Winston Lutz Test

    SciTech Connect

    Tang, A; Chan, K; Fee, F; Chau, R

    2014-06-01

    Purpose: The Winston-Lutz test (WLT) is a standard QA procedure performed prior to SRS treatment to verify the mechanical iso-center setup accuracy under different gantry/couch movements. Several detection algorithms exist for analyzing the ball-radiation field alignment automatically; however, the accuracy of these algorithms has not been fully addressed. Here, we identify the possible errors arising from each step in the WLT, and verify the software detection accuracy with the Rectilinear Phantom Pointer (RLPP), a tool commonly used for aligning the treatment plan coordinate with the mechanical iso-center. Methods: WLT was performed with the radio-opaque ball mounted on a MIS and irradiated onto EDR2 films. The films were scanned and processed with an in-house Matlab program for automatic iso-center detection. Tests were also performed to identify the errors arising from setup, film development, and the scanning process. The radio-opaque ball was then mounted onto the RLPP and manually offset laterally and longitudinally to 7 known positions (0, ±0.2, ±0.5, ±0.8 mm) for irradiations. The gantry and couch were set to zero degrees for all irradiations. The same scanned images were processed repeatedly to check the repeatability of the software. Results: Minimal discrepancies (mean = 0.05 mm) were detected with 2 films overlapped and irradiated together but developed separately, revealing the error arising from the film processor and scanner alone. Maximum setup errors were found to be around 0.2 mm, by analyzing data collected from 10 irradiations over 2 months. For the known shifts introduced using the RLPP, the results agree with the manual offsets and fit linearly (R^2 > 0.99) when plotted relative to the first ball with zero shift. Conclusion: We systematically identify the possible errors arising from each step in the WLT, and introduce a simple method to verify the detection accuracy of our in-house software using a clinically available tool.

  8. Accuracy of actuarial procedures for assessment of sexual offender recidivism risk may vary across ethnicity.

    PubMed

    Långström, Niklas

    2004-04-01

    Little is known about whether the accuracy of tools for assessment of sexual offender recidivism risk holds across ethnic minority offenders. I investigated the predictive validity across ethnicity for the RRASOR and the Static-99 actuarial risk assessment procedures in a national cohort of all adult male sex offenders released from prison in Sweden during 1993-1997. Subjects ordered out of Sweden upon release from prison were excluded, and the remaining subjects (N = 1303) were divided into three subgroups based on citizenship. Eighty-three percent of the subjects were of Nordic ethnicity, and non-Nordic citizens were either of non-Nordic European (n = 49, hereafter called European) or African-Asian descent (n = 128). The two tools were equally accurate among Nordic and European sexual offenders for the prediction of any sexual and any violent nonsexual recidivism. In contrast, neither measure could differentiate African-Asian sexual or violent recidivists from nonrecidivists. Compared to European offenders, African-Asian offenders had more often sexually victimized a nonrelative or stranger, had higher Static-99 scores, were younger, more often single, and more often homeless. The results require replication, but suggest that the promising predictive validity seen with some risk assessment tools may not generalize across offender ethnicity or migration status. More speculatively, different risk factors or causal chains might be involved in the development or persistence of offending among minority or immigrant sexual abusers. PMID:15208896

  9. Accuracy assessment of the GPS-based slant total electron content

    NASA Astrophysics Data System (ADS)

    Brunini, Claudio; Azpilicueta, Francisco Javier

    2009-08-01

    The main scope of this research is to assess the ultimate accuracy that can be achieved for the slant total electron content (sTEC) estimated from dual-frequency global positioning system (GPS) observations which depends, primarily, on the calibration of the inter-frequency biases (IFB). Two different calibration approaches are analyzed: the so-called satellite-by-satellite one, which involves levelling the carrier-phase to the code-delay GPS observations and then the IFB estimation; and the so-called arc-by-arc one, which avoids the use of code-delay observations but requires the estimation of arc-dependent biases. Two strategies are used for the analysis: the first one compares calibrated sTEC from two co-located GPS receivers that serve to assess the levelling errors; and the second one, assesses the model error using synthetic data free of calibration error, produced with a specially developed technique. The results show that the arc-by-arc calibration technique performs better than the satellite-by-satellite one for mid-latitudes, while the opposite happens for low-latitudes.
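
    The uncalibrated sTEC itself comes from the standard geometry-free combination of the dual-frequency code delays; a minimal sketch (the pseudoranges are hypothetical, and the IFB calibration the abstract focuses on is omitted):

```python
# Geometry-free (P4) combination: the two code delays differ by the
# dispersive ionospheric term, so
#   sTEC = (P2 - P1) / (40.3 * (1/f2^2 - 1/f1^2)).
F1, F2 = 1575.42e6, 1227.60e6        # GPS L1/L2 carrier frequencies, Hz

def slant_tec(p1, p2):
    """Uncalibrated slant TEC in TEC units (1 TECU = 1e16 el/m^2)."""
    tec_el_m2 = (p2 - p1) / (40.3 * (1 / F2**2 - 1 / F1**2))
    return tec_el_m2 / 1e16

# Hypothetical pseudoranges differing by ~5 m of ionospheric delay:
stec = slant_tec(2.2e7, 2.2e7 + 5.0)
```

    The levelling and IFB estimation steps discussed in the abstract are what turn this raw combination into a calibrated sTEC.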

  10. Angular spectrum method with compact space-bandwidth: generalization and full-field accuracy.

    PubMed

    Kozacki, Tomasz; Falaggis, Konstantinos

    2016-07-01

    A recent Letter [Opt. Lett. 40, 3420 (2015)] reported a modified angular spectrum method that uses a sampling scheme based on a compact space-bandwidth product representation. That technique is useful for focusing and defocusing propagation cases and is generalized here for the case of propagation between two defocus planes. The proposed method employs paraxial spherical phase factors and modified propagation kernels to reduce the size of the numerical space-bandwidth product needed for wave field calculations. A Wigner distribution analysis is carried out in order to ensure high accuracy of the calculations in the entire computational domain. This is achieved by analyzing the evolution of the generalized space-bandwidth product when passing through the propagation algorithm for various space-frequency constraints. The results allow the derivation of sampling criteria, and also show that a small amount of space/frequency zero padding significantly extends the capability of the recently reported modified angular spectrum method. Simulations validate the high accuracy of the method and verify a computational and memory gain of more than two orders of magnitude when comparing this technique with the conventional angular spectrum method. PMID:27409185
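
    For reference, the conventional angular spectrum method that serves as the baseline here can be sketched in a few lines (grid size, wavelength, and beam are illustrative choices):

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Conventional angular spectrum propagation: FFT, multiply by the
    transfer function exp(i*kz*z) with
    kz = (2*pi/lam) * sqrt(1 - (lam*fx)^2 - (lam*fy)^2),
    then inverse FFT. Evanescent components are dropped."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Gaussian beam on a 256x256 grid, 0.5 um light, 2 um sampling:
n, dx, lam = 256, 2e-6, 0.5e-6
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
u0 = np.exp(-(X**2 + Y**2) / (50e-6) ** 2)
u1 = angular_spectrum(u0, lam, dx, 1e-3)   # propagate 1 mm
```

    The modified method in the abstract reduces the sampling this transfer-function approach requires by absorbing paraxial spherical phases into the kernel.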

  11. Accuracy assessment of land cover/land use classifiers in dry and humid areas of Iran.

    PubMed

    Yousefi, Saleh; Khatami, Reza; Mountrakis, Giorgos; Mirzaee, Somayeh; Pourghasemi, Hamid Reza; Tazeh, Mehdi

    2015-10-01

    Land cover/land use (LCLU) maps are essential inputs for environmental analysis. Remote sensing provides an opportunity to construct LCLU maps of large geographic areas in a timely fashion. Knowing the most accurate classification method for producing LCLU maps based on site characteristics is necessary for environmental managers. The aim of this research is to examine the performance of various classification algorithms for LCLU mapping in dry and humid climates (from June to August). Testing is performed in three case studies from each of the two climates in Iran. The reference dataset for each image was randomly selected from the entire image and randomly divided into training and validation sets. Training sets included 400 pixels and validation sets included 200 pixels for each LCLU class. Results indicate that the support vector machine (SVM) and neural network methods achieve higher overall accuracy (86.7% and 86.6%) than the other examined algorithms, with a slight advantage for the SVM. Dry areas exhibit higher classification difficulty, as man-made features often have spectral responses that overlap with soil. A further observation is that spatial segregation and a lower mixture of LCLU classes can increase overall classification accuracy. PMID:26403704
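
    Overall accuracy as used here is the trace of the error (confusion) matrix divided by its total; a small sketch with a hypothetical 3-class matrix:

```python
import numpy as np

def overall_accuracy(confusion):
    """Overall accuracy = trace / total of the error (confusion) matrix,
    i.e. the fraction of validation pixels classified correctly."""
    confusion = np.asarray(confusion, dtype=float)
    return np.trace(confusion) / confusion.sum()

# Hypothetical 3-class error matrix (rows: reference, cols: mapped)
cm = [[180, 12, 8],
      [10, 170, 20],
      [5, 15, 180]]
acc = overall_accuracy(cm)
```

    Per-class producer's and user's accuracies come from the same matrix by normalizing its rows and columns.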

  12. Assessment of Classification Accuracies of SENTINEL-2 and LANDSAT-8 Data for Land Cover / Use Mapping

    NASA Astrophysics Data System (ADS)

    Hale Topaloğlu, Raziye; Sertel, Elif; Musaoğlu, Nebiye

    2016-06-01

    This study aims to compare the classification accuracies of land cover/use maps created from Sentinel-2 and Landsat-8 data. The Istanbul metropolitan city of Turkey, with a population of around 14 million and varied landscape characteristics, was selected as the study area. Water, forest, agricultural areas, grasslands, transport network, urban, airport-industrial units, and barren land-mine land cover/use classes adapted from the CORINE nomenclature were used as the main land cover/use classes to identify. To fulfil the aims of this research, recently acquired Sentinel-2 (dated 08/02/2016) and Landsat-8 (dated 22/02/2016) images of Istanbul were obtained, and image pre-processing steps such as atmospheric and geometric correction were employed. Both Sentinel-2 and Landsat-8 images were resampled to 30 m pixel size after geometric correction, and similar spectral bands for both satellites were selected to create a common base for these multi-sensor data. Maximum Likelihood (MLC) and Support Vector Machine (SVM) supervised classification methods were applied to both data sets to accurately identify eight different land cover/use classes. Error matrices were created using the same reference points for the Sentinel-2 and Landsat-8 classifications. After classification, the accuracy results of the MLC and SVM methods were compared for both images to find the best approach for creating a current land cover/use map of the region.

  13. Assessing the accuracy of the Second Military Survey for the Doren Landslide (Vorarlberg, Austria)

    NASA Astrophysics Data System (ADS)

    Zámolyi, András.; Székely, Balázs; Biszak, Sándor

    2010-05-01

    Reconstruction of the early and long-term evolution of landslide areas is especially important for determining the proportion of anthropogenic influence on the evolution of the region affected by mass movements. The recent geologic and geomorphological setting of the prominent Doren landslide in Vorarlberg (Western Austria) has been studied extensively by various research groups and civil engineering companies. Civil aerial imaging of the area dates back to the 1950s. Modern monitoring techniques include aerial imaging as well as airborne and terrestrial laser scanning (LiDAR), providing an almost yearly assessment of the changing geomorphology of the area. However, initiation of the landslide most probably occurred earlier than the application of these methods, since there is evidence that the landslide was already active in the 1930s. For studying the initial phase of landslide formation, one possibility is to draw on information recorded in historic photographs or historic maps. In this case study we integrated topographic information from the map sheets of the Second Military Survey of the Habsburg Empire, conducted in Vorarlberg during the years 1816-1821 (Kretschmer et al., 2004), into a comprehensive GIS. The region of interest around the Doren landslide was georeferenced using the method of Timár et al. (2006), refined by Molnár (2009), thus providing geodetically correct positioning and the possibility of matching the topographic features from the historic map with features recognized in the LiDAR DTM. The landslide of Doren is clearly visible in the historic map. Additionally, prominent geomorphologic features such as morphological scarps, rills and gullies, mass movement lobes and the course of the Weißach rivulet can be matched. Not only can the shape and character of these elements be recognized and matched, but the positional accuracy is also adequate for geomorphological studies. Since the settlement structure is very stable in the

  14. Application of Digital Image Correlation Method to Improve the Accuracy of Aerial Photo Stitching

    NASA Astrophysics Data System (ADS)

    Tung, Shih-Heng; Jhou, You-Liang; Shih, Ming-Hsiang; Hsiao, Han-Wei; Sung, Wen-Pei

    2016-04-01

    Satellite images and traditional aerial photos have been used in remote sensing for a long time. However, there are some problems with these images. For example, the resolution of satellite images is insufficient, the cost to obtain traditional images is relatively high, and there is also a human safety risk in traditional flight. These problems limit the application of such images. In recent years, the control technology of unmanned aerial vehicles (UAVs) has developed rapidly. This has made unmanned aerial vehicles widely used for obtaining aerial photos. Compared to satellite images and traditional aerial photos, aerial photos obtained using a UAV have the advantages of higher resolution and lower cost. Because there is no crew in a UAV, it is still possible to take aerial photos under unstable weather conditions. Images first have to be orthorectified and their distortion corrected. Then, with the help of image matching techniques and control points, these images can be stitched or used to establish a DEM of the ground surface. These images or DEM data can be used to monitor landslides or estimate landslide volume. For image matching, methods such as the Harris corner detector, SIFT, or SURF can be used to extract and match feature points. However, the matching accuracy of these methods is about pixel or sub-pixel level, whereas the accuracy of the digital image correlation (DIC) method during image matching can reach about 0.01 pixel. Therefore, this study applies the digital image correlation method to match extracted feature points. The stitched images are then inspected to judge the improvement. This study takes aerial photos of a reservoir area. These images are stitched with and without the help of DIC. The results show that the misplacement in the stitched image using DIC to match feature points is significantly improved. 
This shows that the use of DIC to match feature points can actually improve the accuracy of
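The sub-pixel matching step described in this abstract can be illustrated with a minimal 1-D sketch: the integer peak of a normalized cross-correlation is refined by a parabola fitted through the peak and its two neighbours. This is an illustrative simplification only (real DIC matches 2-D image subsets and optimizes a correlation criterion over deformation parameters); the signal and shift value below are hypothetical.

```python
import numpy as np

def ncc_subpixel_shift(a, b):
    """Estimate the shift of b relative to a with sub-pixel accuracy:
    integer peak of the normalized cross-correlation, refined by a parabola
    fitted through the peak and its two neighbours."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.correlate(b, a, mode="full")
    k = int(np.argmax(corr))
    c0, c1, c2 = corr[k - 1], corr[k], corr[k + 1]
    delta = 0.5 * (c0 - c2) / (c0 - 2.0 * c1 + c2)  # parabola vertex offset
    return (k - (len(a) - 1)) + delta

# synthetic test signal: smoothed noise, with b shifted by 2.3 samples
rng = np.random.default_rng(42)
base = np.convolve(rng.normal(size=300), np.ones(5) / 5.0, mode="same")
idx = np.arange(200)
a = base[50:250]
b = np.interp(idx + 50 - 2.3, np.arange(300), base)  # a delayed by 2.3 samples
shift = ncc_subpixel_shift(a, b)
```

The recovered shift lands close to the true 2.3 samples, i.e. well below one pixel of error, which is the qualitative point the abstract makes about DIC versus plain feature matching.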

  15. A method for the probabilistic design assessment of composite structures

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1994-01-01

    A formal procedure for the probabilistic design assessment of a composite structure is described. The uncertainties in all aspects of a composite structure (constituent material properties, fabrication variables, structural geometry, service environments, etc.), which result in uncertain behavior of the composite structural responses, are included in the assessment. The probabilistic assessment consists of design criteria, modeling of composite structures and uncertainties, simulation methods, and the decision making process. A sample case is presented to illustrate the formal procedure and to demonstrate that composite structural designs can be probabilistically assessed with accuracy and efficiency.
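The simulation step of such a probabilistic assessment can be sketched with a toy Monte Carlo calculation: propagate assumed input scatter through a limit state and estimate a failure probability. This is not the authors' actual procedure; the distributions and the stress/strength limit state below are entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# hypothetical uncertain inputs: ply strength scatter (constituent properties,
# fabrication variables) and applied stress scatter (service environment)
strength = rng.normal(600.0, 40.0, n)   # MPa
stress = rng.normal(450.0, 30.0, n)     # MPa

# probability that the response violates the design criterion
p_fail = float(np.mean(stress >= strength))
```

With these assumed normals the margin is N(150, 50) MPa, so the analytic failure probability is about 0.0013; the Monte Carlo estimate converges to that value as the sample count grows.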

  16. Accuracy comparison of spatial interpolation methods for estimation of air temperatures in South Korea

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Shim, K.; Jung, M.; Kim, S.

    2013-12-01

    Because of its complex terrain, both micro- and meso-climate variability in Korea is extreme from one location to another. In particular, the air temperature of agricultural fields is influenced by the topographic features of the surroundings, making accurate interpolation of regional meteorological data from point-measured data difficult. This study was conducted to compare the accuracy of spatial interpolation methods for estimating air temperature over the rugged terrain of South Korea. Four spatial interpolation methods, Inverse Distance Weighting (IDW), Spline, Kriging and Cokriging, were tested for estimating the monthly air temperature at unobserved stations. Monthly measured data sets (minimum and maximum air temperature) from 456 automatic weather station (AWS) locations in South Korea were used to generate the gridded air temperature surface. Cross validation showed that, for Kriging and Cokriging, the Exponential theoretical model produced a lower root mean square error (RMSE) than the Gaussian model, and that Spline produced the lowest RMSE of the spatial interpolation methods for both maximum and minimum air temperature estimation. In conclusion, Spline showed the best accuracy among the methods, but further experiments that reflect topographic effects such as the temperature lapse rate are necessary to improve the predictions.
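The comparison procedure described here, leave-one-out cross-validation of interpolation methods at station locations, can be sketched as follows. The station coordinates and temperature field are synthetic stand-ins, IDW is implemented directly, and SciPy's `RBFInterpolator` (thin-plate spline by default) stands in for one variant of the "Spline" method; none of this is the authors' actual dataset or software.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# hypothetical AWS station coordinates (km) and monthly mean temperatures (deg C)
xy = rng.uniform(0.0, 100.0, size=(60, 2))
temp = 15.0 - 0.05 * xy[:, 0] + 0.02 * xy[:, 1] + rng.normal(0.0, 0.3, 60)

def idw(train_xy, train_z, q, power=2.0):
    """Inverse Distance Weighting prediction at query point q."""
    d = np.linalg.norm(train_xy - q, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return float(np.sum(w * train_z) / np.sum(w))

def tps(train_xy, train_z, q):
    """Thin-plate-spline prediction at q (one variant of the Spline method)."""
    return float(RBFInterpolator(train_xy, train_z)(q[None, :])[0])

def loo_rmse(predict):
    """Leave-one-out cross-validation RMSE over all stations."""
    errs = [predict(np.delete(xy, i, 0), np.delete(temp, i), xy[i]) - temp[i]
            for i in range(len(xy))]
    return float(np.sqrt(np.mean(np.square(errs))))

rmse_idw, rmse_spline = loo_rmse(idw), loo_rmse(tps)
```

On this smooth synthetic field the spline's polynomial trend term captures the lapse-like gradient, so its cross-validation RMSE comes out lower than IDW's, mirroring the ranking the abstract reports.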

  17. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    PubMed

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) given by doctors across the dynasties, and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is made definite, and it is the same as the traditional way. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. Also, it is proposed that Fengshi (GB 31) should be located through the integration of the simple method and body-surface anatomical marks. PMID:26964185

  18. Concatenation and Species Tree Methods Exhibit Statistically Indistinguishable Accuracy under a Range of Simulated Conditions

    PubMed Central

    Tonini, João; Moore, Andrew; Stern, David; Shcheglovitova, Maryia; Ortí, Guillermo

    2015-01-01

    Phylogeneticists have long understood that several biological processes can cause a gene tree to disagree with its species tree. In recent years, molecular phylogeneticists have increasingly foregone traditional supermatrix approaches in favor of species tree methods that account for one such source of error, incomplete lineage sorting (ILS). While gene tree-species tree discordance no doubt poses a significant challenge to phylogenetic inference with molecular data, researchers have only recently begun to systematically evaluate the relative accuracy of traditional and ILS-sensitive methods. Here, we report on simulations demonstrating that concatenation can perform as well or better than methods that attempt to account for sources of error introduced by ILS. Based on these and similar results from other researchers, we argue that concatenation remains a useful component of the phylogeneticist’s toolbox and highlight that phylogeneticists should continue to make explicit comparisons of results produced by contemporaneous and classical methods. PMID:25901289

  19. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters; therefore, analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes a detailed approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. The approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment was designed and conducted, and excellent calibration results were achieved with the calibration model. In summary, the error analysis approach and the calibration method prove adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  20. Comparative adaptation accuracy of acrylic denture bases evaluated by two different methods.

    PubMed

    Lee, Chung-Jae; Bok, Sung-Bem; Bae, Ji-Young; Lee, Hae-Hyoung

    2010-08-01

    This study examined the adaptation accuracy of acrylic denture bases processed using a fluid-resin technique (PERform), three injection-molding techniques (SR-Ivocap, Success, Mak Press), and two compression-molding techniques. Adaptation accuracy was measured primarily from the posterior border gaps at the mid-palatal area using a microscope, and subsequently by weighing the impression material placed between the denture base and the master cast, using both hand-mixed and automixed silicone. The correlation between the data measured by these two test methods was examined. PERform and Mak Press produced significantly smaller maximum palatal gap dimensions than the other groups (p<0.05). Mak Press also showed a significantly smaller weight of automixed silicone material than the other groups (p<0.05), while SR-Ivocap and Success showed adaptation accuracy similar to the compression-molding dentures. The correlation between the magnitude of the posterior border gap and the weight of the silicone impression material was affected by either the material or the mixing variables. PMID:20675954

  1. Optical system error analysis and calibration method of high-accuracy star trackers.

    PubMed

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters; therefore, analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes a detailed approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. The approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment was designed and conducted, and excellent calibration results were achieved with the calibration model. In summary, the error analysis approach and the calibration method prove adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  2. Accuracy of Colposcopically Guided Diagnostic Methods for the Detection of Cervical Intraepithelial Neoplasia

    PubMed Central

    Müller, K.; Soergel, P.; Hillemanns, P.; Jentschke, M.

    2016-01-01

    Introduction: Many factors can affect the accuracy of colposcopically guided biopsy, endocervical curettage (ECC) and differential cytology, all of which are standard, minimally invasive procedures used to detect cervical intraepithelial neoplasia. Method: All conizations carried out between 2007 and 2013 in the gynecological department of Hannover Medical School were retrospectively reviewed. The agreement between colposcopic diagnosis and histology was evaluated retrospectively. The analysis included 593 complete datasets out of a total of 717 cases treated. Results: The overall agreement was 85.5 %; the accuracy was significantly higher (p = 0.029) when three biopsy specimens were taken rather than just one. The agreement between diagnosis and histological findings from conization was highest for women < 30 years (90.7 %) and lowest for women > 50 years (72.1 %; p = 0.008). The agreement between preoperative differential cytology and histology results after conization was 86.7 % and improved as patient age increased (p = 0.035). The agreement between ECC findings and the results of conization was only 49.1 % irrespective of patient age, transformation zone or the patientʼs menopausal status. Conclusion: The accuracy of colposcopically guided biopsy appears to increase when three biopsy specimens are taken and is particularly high for younger patients. Differential cytology was also found to be highly accurate and is particularly useful for patients aged more than 50 years. The accuracy of ECC was significantly lower; however ECC can provide important additional information in selected cases. PMID:26941452

  3. Screening Accuracy for Risk of Autism Spectrum Disorder Using the Brief Infant-Toddler Social and Emotional Assessment (BITSEA)

    ERIC Educational Resources Information Center

    Gardner, Lauren M.; Murphy, Laura; Campbell, Jonathan M.; Tylavsky, Frances; Palmer, Frederick B.; Graff, J. Carolyn

    2013-01-01

    Early identification of autism spectrum disorders (ASDs) is facilitated by the use of standardized screening scales that assess the social emotional behaviors associated with ASD. Authors examined accuracy of Brief Infant-Toddler Social and Emotional Assessment (BITSEA) subscales in detecting Modified Checklist for Autism in Toddlers (M-CHAT) risk…

  4. Accuracy of GIPSY PPP from version 6.2: a robust method to remove outliers

    NASA Astrophysics Data System (ADS)

    Hayal, Adem G.; Ugur Sanli, D.

    2014-05-01

    In this paper, we assess the accuracy of GIPSY PPP from the latest version, version 6.2. As the research community prepares for real-time PPP, it is worth revising the accuracy of static GPS from the latest version of this well-established research software, the first of its kind. Although the results do not differ significantly from the previous version, version 6.1.1, we still observe a slight improvement in the vertical component due to the enhanced second-order ionospheric modeling introduced with the latest version. In this study, however, we turned our attention to outlier detection. Outliers usually occur among the solutions from shorter observation sessions and degrade the quality of the accuracy modeling. In our previous analysis from version 6.1.1, we argued that eliminating outliers was cumbersome with the traditional method, since repeated trials were needed and subjectivity that could affect the statistical significance of the solutions might have existed among the results (Hayal and Sanli, 2013). Here we overcome this problem using a robust outlier elimination method. The median is perhaps the simplest of the robust outlier detection methods in terms of applicability, and at the same time it may be considered the most efficient, having the highest breakdown point. In our analysis we used a slightly modified version of the median method, as introduced in Tut et al. (2013). Hence we were able to remove suspected outliers in a single run; outliers which, with the traditional methods, were more problematic to remove from the solutions produced using the latest version of the software. References Hayal, AG, Sanli DU, Accuracy of GIPSY PPP from version 6, GNSS Precise Point Positioning Workshop: Reaching Full Potential, Vol. 1, pp. 41-42, (2013) Tut, İ., Sanli D.U., Erdogan B., Hekimoglu S., Efficiency of BERNESE single baseline rapid static positioning solutions with SEARCH strategy, Survey Review, Vol. 45, Issue 331
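A median-based screen of the kind described above can be sketched in a few lines: score each solution by its robust z-score built from the median and the median absolute deviation (MAD), and drop everything beyond a threshold in one pass. This is a generic MAD filter, not the specific variant of Tut et al. (2013); the height values and the 3.5 threshold are illustrative assumptions.

```python
import numpy as np

def mad_outliers(x, k=3.5):
    """Flag outliers via the median and the median absolute deviation (MAD):
    a point is suspect if its robust z-score exceeds k (3.5 is a common choice)."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    z = 0.6745 * (x - med) / mad  # 0.6745 makes MAD consistent with sigma for normal data
    return np.abs(z) > k

# hypothetical height solutions (m) from short observation sessions; one gross outlier
heights = np.array([12.30, 12.28, 12.31, 12.27, 12.95, 12.29, 12.32])
mask = mad_outliers(heights)
clean = heights[~mask]  # suspected outliers removed in one run, no repeated trials
```

Because the median and MAD have a 50% breakdown point, the single gross error does not drag the threshold toward itself, which is what makes the one-pass removal possible.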

  5. Assessing the GPS-based sTEC accuracy by using experimental and synthetic dataset

    NASA Astrophysics Data System (ADS)

    Brunini, Claudio

    The main scope of this contribution is to assess the accuracy that can be achieved in the slant total electron content (sTEC) estimated from dual-frequency GPS observations, which depends primarily on the calibration of the inter-frequency biases (IFB). Two different calibration approaches are analysed: the so-called satellite-by-satellite approach, which reduces the carrier-phase ambiguity effects by levelling the carrier-phase to the code-delay GPS observations and then estimates satellite-dependent IFB; and the so-called arc-by-arc approach, which avoids the use of code-delay observations but requires the estimation of arc-dependent IFB. In principle, the first approach should produce more reliable results because it requires the estimation of fewer parameters than the second, but the second approach has the benefit of not being affected by the levelling errors caused by the presence of code-delay multi-path. This contribution discusses two experiments specifically designed to assess the GPS-based sTEC accuracy: the so-called co-location and synthetic-data experiments. The first is based on comparing the calibrated sTEC estimated from data collected by two nearby GPS receivers, while the second is based on a synthetic dataset, free of calibration errors, generated with an empirical ionospheric model. While the co-location experiment is sensitive to the levelling but not to the model error effects, the synthetic-data experiment provides a way to assess the calibration bias errors caused by inconsistencies of the ionospheric model involved in the estimation process. 
Both experiments used in a complementary way allowed the estimation of calibration errors of several TECu (total electron content units) depending on the station location (low, mid or high latitude); the ionospheric conditions (solar and geomagnetic activity, season); characteristics of the GPS instruments (receivers

  6. Computational Performance and Statistical Accuracy of *BEAST and Comparisons with Other Methods.

    PubMed

    Ogilvie, Huw A; Heled, Joseph; Xie, Dong; Drummond, Alexei J

    2016-05-01

    Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behavior of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow-up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent. PMID:26821913

  7. Computational Performance and Statistical Accuracy of *BEAST and Comparisons with Other Methods

    PubMed Central

    Ogilvie, Huw A.; Heled, Joseph; Xie, Dong; Drummond, Alexei J.

    2016-01-01

    Under the multispecies coalescent model of molecular evolution, gene trees have independent evolutionary histories within a shared species tree. In comparison, supermatrix concatenation methods assume that gene trees share a single common genealogical history, thereby equating gene coalescence with species divergence. The multispecies coalescent is supported by previous studies which found that its predicted distributions fit empirical data, and that concatenation is not a consistent estimator of the species tree. *BEAST, a fully Bayesian implementation of the multispecies coalescent, is popular but computationally intensive, so the increasing size of phylogenetic data sets is both a computational challenge and an opportunity for better systematics. Using simulation studies, we characterize the scaling behavior of *BEAST, and enable quantitative prediction of the impact increasing the number of loci has on both computational performance and statistical accuracy. Follow-up simulations over a wide range of parameters show that the statistical performance of *BEAST relative to concatenation improves both as branch length is reduced and as the number of loci is increased. Finally, using simulations based on estimated parameters from two phylogenomic data sets, we compare the performance of a range of species tree and concatenation methods to show that using *BEAST with tens of loci can be preferable to using concatenation with thousands of loci. Our results provide insight into the practicalities of Bayesian species tree estimation, the number of loci required to obtain a given level of accuracy and the situations in which supermatrix or summary methods will be outperformed by the fully Bayesian multispecies coalescent. PMID:26821913

  8. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements allow for robust simultaneous estimation of static or mobile user states, including additional parameters such as real-time tropospheric biases, and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS) as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement both in the positioning accuracy achieved and in the convergence time needed to reach geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented, and useful conclusions and recommendations for further research are drawn. 
As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in increased position accuracy (mostly in the less favorable East direction) and a large reduction of convergence time.

  9. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Anderson, Ryan B.; Bell, James F., III; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.

    2012-04-01

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ~ 3 wt.%. The statistical significance of these improvements was ~ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. 
In particular, additional sulfate standards and specifically fabricated

  10. Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment

    NASA Astrophysics Data System (ADS)

    Helmholz, P.; Long, J.; Munsie, T.; Belton, D.

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than 500. A possible application of such action cameras is in the field of underwater photogrammetry, especially since the change of medium under water can in turn counteract the lens distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates. For this paper a series of image sequences was captured in a water tank. A calibration frame placed in the water tank allowed the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The test sets included handling the camera in a controlled manner, where the camera was only dunked into the water tank, at 7 MP and 12 MP resolution, and rough handling, where the camera was shaken as well as removed from the waterproof case, at 12 MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm at 7 MP (for an average c of 2.720 mm) and 0.0072 mm at 12 MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7 MP test series, a largest RMS value of only 0.450 mm and a largest maximum residual of only 2.5 mm. For the 12 MP test series the maximum RMS value is 0.653 mm.

  11. Diagnostic accuracy of refractometry for assessing bovine colostrum quality: A systematic review and meta-analysis.

    PubMed

    Buczinski, S; Vandeweerd, J M

    2016-09-01

    Provision of good quality colostrum [i.e., immunoglobulin G (IgG) concentration ≥50 g/L] is the first step toward ensuring proper passive transfer of immunity for young calves. Precise quantification of colostrum IgG levels cannot be easily performed on the farm. Assessment of the refractive index on a Brix scale with a refractometer has been described as being highly correlated with IgG concentration in colostrum. The aim of this study was to perform a systematic review of the diagnostic accuracy of Brix refractometry for diagnosing good quality colostrum. From 101 references initially obtained, 11 were included in the systematic review and meta-analysis, representing 4,251 colostrum samples. The prevalence of good colostrum samples with IgG ≥50 g/L varied from 67.3 to 92.3% (median 77.9%). Specific estimates of accuracy [sensitivity (Se) and specificity (Sp)] were obtained for different reported cut-points using a hierarchical summary receiver operating characteristic curve model. For the cut-point of 22% (n=8 studies), Se=80.2% (95% CI: 71.1-87.0%) and Sp=82.6% (71.4-90.0%). Decreasing the cut-point to 18% increased Se [96.1% (91.8-98.2%)] and decreased Sp [54.5% (26.9-79.6%)]. Modeling the effect of these Brix accuracy estimates using a stochastic simulation and Bayes' theorem showed that a positive result at the 22% Brix cut-point can be used to diagnose good quality colostrum [posttest probability of good colostrum: 94.3% (90.7-96.9%)]. The posttest probability of good colostrum with a Brix value <18% was only 22.7% (12.3-39.2%). Based on this study, the 2 cut-points could be used alternatively to select good quality colostrum (samples with Brix ≥22%) or to discard poor quality colostrum (samples with Brix <18%). When sample results fall between these 2 values, colostrum supplementation should be considered. PMID:27423958
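The post-test probabilities quoted in this abstract follow from Bayes' theorem; a point-estimate recomputation using the sensitivities, specificities and median prevalence reported above reproduces them closely (the paper itself used a stochastic simulation over the uncertainty in these inputs, so the published intervals differ slightly):

```python
def posttest_positive(se, sp, p):
    """P(good colostrum | test positive) by Bayes' theorem."""
    return se * p / (se * p + (1.0 - sp) * (1.0 - p))

def posttest_negative(se, sp, p):
    """P(good colostrum | test negative)."""
    return (1.0 - se) * p / ((1.0 - se) * p + sp * (1.0 - p))

p = 0.779                                     # median prevalence of good samples
ppv_22 = posttest_positive(0.802, 0.826, p)   # Brix >= 22% cut-point
prob_good_below_18 = posttest_negative(0.961, 0.545, p)  # Brix < 18% cut-point
```

The point estimates land near 0.94 for a Brix reading at or above 22% and near 0.20 for a reading below 18%, matching the decision rule the authors propose: accept above 22%, discard below 18%, supplement in between.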

  12. A simple method for improving the time-stepping accuracy in atmosphere and ocean models

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2012-12-01

    In contemporary numerical simulations of the atmosphere and ocean, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. A common time-stepping method in atmosphere and ocean models is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter, which has become known as the RAW filter (Williams 2009, 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other atmosphere and ocean models. References PD Williams (2009) A
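The RA-to-RAW modification described above can be demonstrated on a toy oscillation equation. This is an illustrative reconstruction of the filter from the cited literature, not code from the presentation: the displacement form, the ν/2 factor (conventions for the filter parameter vary between papers) and the α = 0.53 value are assumptions, with α = 1 recovering the classical RA filter.

```python
import numpy as np

def integrate(alpha, nu=0.2, omega=1.0, dt=0.2, nsteps=500):
    """Leapfrog integration of u'' = -omega^2 u with a Robert-Asselin-Williams
    (RAW) filter; alpha = 1 recovers the classical RA filter."""
    def f(x):
        u, v = x
        return np.array([v, -omega**2 * u])
    x_prev = np.array([1.0, 0.0])        # filtered state at time level n-1
    x_curr = x_prev + dt * f(x_prev)     # Euler start for level n
    for _ in range(nsteps):
        x_next = x_prev + 2.0 * dt * f(x_curr)
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)  # filter displacement
        x_prev = x_curr + alpha * d              # becomes the filtered level n
        x_curr = x_next + (alpha - 1.0) * d      # RAW: level n+1 gets the remainder
    u, v = x_curr
    return float(np.hypot(u, v / omega))  # oscillation amplitude (ideally 1)

amp_ra = integrate(alpha=1.0)    # classical RA filter: non-physically damped
amp_raw = integrate(alpha=0.53)  # RAW filter: damping largely eliminated
```

Run side by side, the RA-filtered amplitude decays noticeably over the integration while the RAW-filtered amplitude stays much closer to 1, which is the "better simulations at no extra computational expense" claim in miniature.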

  13. Assessment of Required Accuracy of Digital Elevation Data for Hydrologic Modeling

    NASA Technical Reports Server (NTRS)

    Kenward, T.; Lettenmaier, D. P.

    1997-01-01

    The effect of the vertical accuracy of Digital Elevation Models (DEMs) on hydrologic models is evaluated by comparing three DEMs and the resulting hydrologic model predictions applied to a 7.2 sq km USDA-ARS watershed at Mahantango Creek, PA. The high-resolution (5 m) DEM was resampled to a 30 m resolution using a method that constrained the spatial structure of the elevations to be comparable with the USGS and SIR-C DEMs. The resulting 30 m DEM was used as the reference product for subsequent comparisons. Spatial fields of directly derived quantities, such as elevation differences, slope, and contributing area, were compared to the reference product, as were hydrologic model output fields derived using each of the three DEMs at the common 30 m spatial resolution.

  14. Accuracy Assessment of Three-dimensional Surface Reconstructions of In vivo Teeth from Cone-beam Computed Tomography

    PubMed Central

    Sang, Yan-Hui; Hu, Hong-Cheng; Lu, Song-He; Wu, Yu-Wei; Li, Wei-Ran; Tang, Zhi-Hui

    2016-01-01

    Background: The accuracy of three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) has been particularly important in dentistry, which will affect the effectiveness of diagnosis, treatment plan, and outcome in clinical practice. The aims of this study were to assess the linear, volumetric, and geometric accuracy of 3D reconstructions from CBCT and to investigate the influence of voxel size and CBCT system on the reconstructions results. Methods: Fifty teeth from 18 orthodontic patients were assigned to three groups as NewTom VG 0.15 mm group (NewTom VG; voxel size: 0.15 mm; n = 17), NewTom VG 0.30 mm group (NewTom VG; voxel size: 0.30 mm; n = 16), and VATECH DCTPRO 0.30 mm group (VATECH DCTPRO; voxel size: 0.30 mm; n = 17). The 3D reconstruction models of the teeth were segmented from CBCT data manually using Mimics 18.0 (Materialise Dental, Leuven, Belgium), and the extracted teeth were scanned by 3Shape optical scanner (3Shape A/S, Denmark). Linear and volumetric deviations were separately assessed by comparing the length and volume of the 3D reconstruction model with physical measurement by paired t-test. Geometric deviations were assessed by the root mean square value of the imposed 3D reconstruction and optical models by one-sample t-test. To assess the influence of voxel size and CBCT system on 3D reconstruction, analysis of variance (ANOVA) was used (α = 0.05). Results: The linear, volumetric, and geometric deviations were −0.03 ± 0.48 mm, −5.4 ± 2.8%, and 0.117 ± 0.018 mm for NewTom VG 0.15 mm group; −0.45 ± 0.42 mm, −4.5 ± 3.4%, and 0.116 ± 0.014 mm for NewTom VG 0.30 mm group; and −0.93 ± 0.40 mm, −4.8 ± 5.1%, and 0.194 ± 0.117 mm for VATECH DCTPRO 0.30 mm group, respectively. There were statistically significant differences between groups in terms of linear measurement (P < 0.001), but no significant difference in terms of volumetric measurement (P = 0.774). No statistically significant difference were

  15. Brief inhalation method to measure cerebral oxygen extraction fraction with PET: Accuracy determination under pathologic conditions

    SciTech Connect

    Altman, D.I.; Lich, L.L.; Powers, W.J.

    1991-09-01

    The initial validation of the brief inhalation method to measure cerebral oxygen extraction fraction (OEF) with positron emission tomography (PET) was performed in non-human primates with predominantly normal cerebral oxygen metabolism (CMRO2). Sensitivity analysis by computer simulation, however, indicated that this method may be subject to increasing error as CMRO2 decreases. The accuracy of the method under pathologic conditions of reduced CMRO2 had not been determined. Since reduced CMRO2 values are observed frequently in newborn infants and in regions of ischemia and infarction in adults, we determined the accuracy of the brief inhalation method in non-human primates by comparing OEF measured with PET to OEF measured by arteriovenous oxygen difference (A-VO2) under pathologic conditions of reduced CMRO2 (0.27-2.68 ml 100 g-1 min-1). A regression equation of OEF (PET) = 1.07 × OEF (A-VO2) + 0.017 (r = 0.99, n = 12) was obtained. The absolute error in oxygen extraction measured with PET was small (mean 0.03 ± 0.04, range -0.03 to 0.12) and was independent of cerebral blood flow, cerebral blood volume, CMRO2, and OEF. The percent error was higher (19 ± 37%), particularly when OEF was below 0.15. These data indicate that the brief inhalation method can be used for measurement of cerebral oxygen extraction and cerebral oxygen metabolism under pathologic conditions of reduced cerebral oxygen metabolism, with these limitations borne in mind.
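
    As a sketch of the comparison reported above, the regression of PET-derived OEF against the A-VO2 reference can be reproduced with an ordinary least-squares fit. The paired values below are invented to lie exactly on the published line OEF(PET) = 1.07 × OEF(A-VO2) + 0.017 and serve only to illustrate the computation:

```python
import numpy as np

# Hypothetical paired OEF measurements (reference A-VO2 vs. PET).
# The study reports OEF(PET) = 1.07 * OEF(A-VO2) + 0.017, r = 0.99, n = 12;
# the six points below are illustrative only.
oef_avo2 = np.array([0.10, 0.15, 0.20, 0.30, 0.40, 0.50])
oef_pet = 1.07 * oef_avo2 + 0.017  # constructed to sit on the reported line

slope, intercept = np.polyfit(oef_avo2, oef_pet, 1)  # least-squares line
r = np.corrcoef(oef_avo2, oef_pet)[0, 1]             # Pearson correlation
print(f"slope={slope:.2f}, intercept={intercept:.3f}, r={r:.2f}")
```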

  16. Methods of geodiversity assessment and their application

    NASA Astrophysics Data System (ADS)

    Zwoliński, Zbigniew; Najwer, Alicja; Giardino, Marco

    2016-04-01

    analysis is not as simple as in the case of direct methods. Indirect methods offer the possibility of assessing and mapping the geodiversity of large and not easily accessible research areas. The foregoing examples mainly refer to areas at the local or regional scale. Such analyses are, however, also possible for large spatial units, such as the territory of a country or state (Zwoliński 2007, Benito-Calvo et al. 2009, Pereira et al. 2013, 2015). A fundamental difference lies in the selection of assessment criteria appropriate to the spatial scale and the specification of the study areas, and above all in the input geodata. In geodiversity assessments, access to data of adequate resolution and accuracy is especially important. Acquisition and integration of the geodata often require considerable financial and temporal outlay, and not infrequently this can seriously limit some analyses. The proposed geomorphometry-based indirect method for assessing landform geodiversity, together with a single, easy-to-obtain source dataset (a digital elevation model), might create new opportunities for its broad implementation across numerous disciplines. Research on the assessment of geodiversity must at present be regarded as being at an initial stage. While the conception of geodiversity itself has a reliable theoretical foundation, no universal method of its assessment has been developed yet. Only the adoption of a generally accepted and clear methodology of geodiversity evaluation will make it possible to implement it widely in many fields of science, administration and the management of geospace. Then geodiversity can become as important an indicator as biodiversity is today.

  17. Increasing the range accuracy of three-dimensional ghost imaging ladar using optimum slicing number method

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Xu, Lu; Yang, Cheng-Hua; Wang, Qiang; Liu, Yue-Hao; Zhao, Yuan

    2015-12-01

    The range accuracy of three-dimensional (3D) ghost imaging is derived. Based on the derived range accuracy equation, the relationship between the slicing number and the range accuracy is analyzed and an optimum slicing number (OSN) is determined. According to the OSN, an improved 3D ghost imaging algorithm is proposed to increase the range accuracy. Experimental results indicate that the slicing number can affect the range accuracy significantly and the highest range accuracy can be achieved if the 3D ghost imaging system works with OSN. Project supported by the Young Scientist Fund of the National Natural Science Foundation of China (Grant No. 61108072).

  18. Accuracy of DXA in estimating body composition changes in elite athletes using a four compartment model as the reference method

    PubMed Central

    2010-01-01

    Background Dual-energy x-ray absorptiometry (DXA) provides an affordable and practical assessment of whole-body and regional body composition. However, little information is available on the assessment of changes in body composition in top-level athletes using DXA. The present study aimed to assess the accuracy of DXA in tracking body composition changes (relative fat mass [%FM], absolute fat mass [FM], and fat-free mass [FFM]) of elite male judo athletes from a period of weight stability to prior to a competition, compared to a four-compartment model (4C model) as the criterion method. Methods A total of 27 elite male judo athletes (age 22.2 ± 2.8 yrs) were evaluated. Measures of body volume by air displacement plethysmography, bone mineral content assessed by DXA, and total-body water assessed by deuterium dilution were used in a 4C model. Statistical analyses included examination of the coefficient of determination (r2), standard error of estimation (SEE), slope, intercept, and agreement between models. Results At a group level, changes in %FM, FM, and FFM estimated by DXA were not significantly different from those by the 4C model. Though the regression between DXA and the 4C model did not differ from the line of identity, DXA %FM, FM, and FFM changes explained only 29%, 36%, and 38% of the 4C reference values, respectively. Individual results showed that the 95% limits of agreement were -3.7 to 5.3 for %FM, -2.6 to 3.7 for FM, and -3.7 to 2.7 for FFM. The relation between the difference and the mean of the methods indicated a significant trend for %FM and FM changes, with DXA overestimating at the lower end and underestimating at the upper end of FM changes. Conclusions Our data indicate that at both group and individual levels DXA did not present the expected accuracy in tracking changes in adiposity in elite male judo athletes. PMID:20307312
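
    The individual-level agreement reported above rests on Bland-Altman limits of agreement (bias ± 1.96 SD of the paired differences). A minimal sketch with invented paired %FM changes, not the study's raw data:

```python
import numpy as np

# Hypothetical paired changes in %FM measured by DXA and by the 4C
# reference model (invented values for illustration only).
dxa = np.array([1.2, -0.5, 2.1, 0.3, -1.4, 0.8])
ref = np.array([0.9, -0.2, 1.5, 0.6, -1.0, 0.4])

diff = dxa - ref
bias = diff.mean()                       # mean difference (systematic bias)
half_width = 1.96 * diff.std(ddof=1)     # 95% limits-of-agreement half-width
lower, upper = bias - half_width, bias + half_width
print(f"bias={bias:.2f}, LoA=({lower:.2f}, {upper:.2f})")
```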

  19. Assessing the impact of measurement frequency on accuracy and uncertainty of water quality data

    NASA Astrophysics Data System (ADS)

    Helm, Björn; Schiffner, Stefanie; Krebs, Peter

    2014-05-01

    Physico-chemical water quality is a major objective for the evaluation of the ecological state of a river water body. Physical and chemical water properties are measured to assess the river state, identify prevalent pressures and develop mitigating measures. Regularly, water quality is assessed based on weekly to quarterly grab samples. The increasing availability of online-sensor data measured at high frequency allows for an enhanced understanding of emission and transport dynamics, as well as the identification of typical and critical states. In this study we present a systematic approach to assess the impact of measurement frequency on the accuracy and uncertainty of derived aggregate indicators of environmental quality. Data measured at high frequency (one measurement per 10 or 15 min) on water temperature, pH, turbidity, electric conductivity and concentrations of dissolved oxygen, nitrate, ammonia and phosphate are assessed in resampling experiments. The data were collected at 14 sites in eastern and northern Germany representing catchments between 40 km2 and 140 000 km2 of varying properties. Resampling is performed to create series of hourly to quarterly frequency, including special restrictions such as sampling at working hours or discharge compensation. Statistical properties and their confidence intervals are determined in a bootstrapping procedure and evaluated along a gradient of sampling frequency. For all variables the range of the aggregate indicators in the bootstrapping realizations increases greatly with decreasing sampling frequency. Mean values of electric conductivity, pH and water temperature obtained at monthly frequency differ on average by less than five percent from the original data. Mean dissolved oxygen, nitrate and phosphate showed less than 15% bias at most stations. Ammonia and turbidity are most sensitive to the reduction of sampling frequency, with up to 30% average and 250% maximum bias at monthly sampling frequency.
A systematic bias is recognized
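
    The resampling idea described above can be sketched as a small bootstrap experiment: subsample a synthetic high-frequency series at a monthly grab-sample interval and compare the aggregate mean against the full-series value. All numbers below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "high-frequency" water-quality series (e.g. hourly nitrate):
# an annual seasonal signal plus noise, purely illustrative.
n_hours = 24 * 365
t = np.arange(n_hours)
series = 3.0 + 0.5 * np.sin(2 * np.pi * t / n_hours) + rng.normal(0, 0.3, n_hours)

true_mean = series.mean()

# Subsample at roughly monthly frequency (one grab sample every ~30 days),
# bootstrapping the sampling phase to characterise the aggregate indicator.
step = 24 * 30
means = []
for _ in range(1000):
    start = rng.integers(0, step)            # random sampling phase
    means.append(series[start::step].mean())
means = np.array(means)

bias_pct = 100 * abs(means.mean() - true_mean) / true_mean
print(f"true mean={true_mean:.2f}, monthly-sampling bias={bias_pct:.2f}%")
```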

  20. Prostate Localization on Daily Cone-Beam Computed Tomography Images: Accuracy Assessment of Similarity Metrics

    SciTech Connect

    Kim, Jinkoo; Hammoud, Rabih; Pradhan, Deepak; Zhong Hualiang; Jin, Ryan Y.; Movsas, Benjamin; Chetty, Indrin J.

    2010-07-15

    Purpose: To evaluate different similarity metrics (SM) using natural calcifications and observation-based measures to determine the most accurate prostate and seminal vesicle localization on daily cone-beam CT (CBCT) images. Methods and Materials: CBCT images of 29 patients were retrospectively analyzed; 14 patients with prostate calcifications (calcification data set) and 15 patients without calcifications (no-calcification data set). Three groups of test registrations were performed. Test 1: 70 CT/CBCT pairs from the calcification data set were registered using 17 SMs (6,580 registrations) and compared using the calcification mismatch error as an endpoint. Test 2: Using the four best SMs from Test 1, 75 CT/CBCT pairs in the no-calcification data set were registered (300 registrations). Accuracy of contour overlays was ranked visually. Test 3: For the best SM from Tests 1 and 2, accuracy was estimated using 356 CT/CBCT registrations. Additionally, target expansion margins were investigated for generating registration regions of interest. Results: Test 1: Incremental sign correlation (ISC), gradient correlation (GC), gradient difference (GD), and normalized cross correlation (NCC) showed the smallest errors (μ ± σ: 1.6 ± 0.9 to 2.9 ± 2.1 mm). Test 2: Two of the three reviewers ranked GC higher. Test 3: Using GC, 96% of registrations showed <3-mm error when calcifications were filtered. Errors were left/right: 0.1 ± 0.5 mm, anterior/posterior: 0.8 ± 1.0 mm, and superior/inferior: 0.5 ± 1.1 mm. The existence of calcifications increased the success rate to 97%. Expansion margins of 4-10 mm were equally successful. Conclusion: Gradient-based SMs were most accurate. Estimated error was found to be <3 mm (1.1 mm SD) in 96% of the registrations. Results suggest that the contour expansion margin should be no less than 4 mm.
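
    Of the similarity metrics compared above, normalized cross correlation (NCC) is the simplest to state: subtract each patch's mean, then divide the inner product by the product of the norms. A minimal sketch on toy arrays, not CT data:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equally-sized image patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Identical patches correlate perfectly; an inverted patch anti-correlates.
patch = np.arange(16, dtype=float).reshape(4, 4)
print(ncc(patch, patch))   # ≈ 1.0
print(ncc(patch, -patch))  # ≈ -1.0
```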

  1. Accuracy of task recall for epidemiological exposure assessment to construction noise

    PubMed Central

    Reeb-Whitaker, C; Seixas, N; Sheppard, L; Neitzel, R

    2004-01-01

    Aims: To validate the accuracy of construction worker recall of task and environment based information; and to evaluate the effect of task recall on estimates of noise exposure. Methods: A cohort of 25 construction workers recorded tasks daily and had dosimetry measurements weekly for six weeks. Worker recall of tasks reported on the daily activity cards was validated with research observations and compared directly to task recall at a six month interview. Results: The mean LEQ noise exposure level (dBA) from dosimeter measurements was 89.9 (n = 61) and 83.3 (n = 47) for carpenters and electricians, respectively. The percentage time at tasks reported during the interview was compared to that calculated from daily activity cards; only 2/22 tasks were different at the nominal 5% significance level. The accuracy, based on bias and precision, of percentage time reported for tasks from the interview was 53–100% (median 91%). For carpenters, the difference in noise estimates derived from activity cards (mean 91.9 dBA) was not different from those derived from the questionnaire (mean 91.7 dBA). This trend held for electricians as well. For all subjects, noise estimates derived from the activity card and the questionnaire were strongly correlated with dosimetry measurements. The average difference between the noise estimate derived from the questionnaire and dosimetry measurements was 2.0 dBA, and was independent of the actual exposure level. Conclusions: Six months after tasks were performed, construction workers were able to accurately recall the percentage time they spent at various tasks. Estimates of noise exposure based on long term recall (questionnaire) were no different from estimates derived from daily activity cards and were strongly correlated with dosimetry measurements, overestimating the level on average by 2.0 dBA. PMID:14739379
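
    Task-based noise estimates like those above are combined on an energy (LEQ) basis rather than arithmetically, since dBA is a logarithmic scale. A sketch with invented task levels and durations:

```python
import math

def leq(levels_dba, durations_h):
    """Energy-average (LEQ) of sound levels over the given durations."""
    total = sum(durations_h)
    energy = sum(d * 10 ** (l / 10) for l, d in zip(levels_dba, durations_h))
    return 10 * math.log10(energy / total)

# Hypothetical task-based exposure: 4 h sawing at 95 dBA, 4 h layout at 80 dBA.
# Note the result sits much closer to the louder task than a plain average.
print(round(leq([95, 80], [4, 4]), 1))  # → 92.1
```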

  2. A Comparative Accuracy Analysis of Classification Methods in Determination of Cultivated Lands with Spot 5 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Kaya, S.; Alganci, U.; Sertel, E.; Ustundag, B.

    2013-12-01

    Cultivated land determination and area estimation are important tasks for agricultural management. The derived information is mostly used in agricultural policy and precision agriculture, specifically in yield estimation, irrigation and fertilization management, and verification of farmers' declarations. The use of satellite images in crop type identification and area estimation has been common for two decades due to their capability of monitoring large areas, rapid data acquisition, and spectral response to crop properties. With the launch of high and very high spatial resolution optical satellites in the last decade, such analyses have gained importance as they provide information at a large scale. With the increasing spatial resolution of satellite images, the classification methods used to derive information from them have become important as the spectral heterogeneity within land objects increases. In this research, pixel-based classification with the maximum likelihood algorithm and object-based classification with the nearest neighbor algorithm were applied to 2.5 m resolution SPOT 5 satellite images from 2012 in order to investigate the accuracy of these methods in determining cotton- and corn-planted lands and estimating their areas. The study area was selected in Sanliurfa Province in Southeastern Turkey, a major contributor to Turkey's agricultural production. Classification results were compared in terms of crop type identification using
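
    Accuracy comparisons of the kind described are usually reported from a confusion matrix via overall accuracy and Cohen's kappa. A sketch with an invented two-class (cotton/corn) matrix, not the study's results:

```python
import numpy as np

# Hypothetical 2-class confusion matrix (rows: reference, cols: classified);
# the counts are illustrative only.
cm = np.array([[45, 5],
               [8, 42]], dtype=float)

n = cm.sum()
overall_accuracy = np.trace(cm) / n                           # correct / total
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
kappa = (overall_accuracy - expected) / (1 - expected)        # Cohen's kappa
print(f"OA={overall_accuracy:.2f}, kappa={kappa:.2f}")
```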

  3. Accuracy of Cameriere's cut-off value for third molar in assessing 18 years of age.

    PubMed

    De Luca, S; Biagi, R; Begnoni, G; Farronato, G; Cingolani, M; Merelli, V; Ferrante, L; Cameriere, R

    2014-02-01

    Due to increasingly numerous international migrations, estimating the age of unaccompanied minors is becoming of enormous significance for forensic professionals who are required to deliver expert opinions. The third molar tooth is one of the few anatomical sites available for estimating the age of individuals in late adolescence. This study verifies the accuracy of Cameriere's cut-off value of the third molar index (I3M) in assessing 18 years of age. For this purpose, a sample of orthopantomographs (OPTs) of 397 living subjects aged between 13 and 22 years (192 female and 205 male) was analyzed. Age distribution gradually decreases as I3M increases in both males and females. The results show that the sensitivity of the test was 86.6%, with a 95% confidence interval of (80.8%, 91.1%), and its specificity was 95.7%, with a 95% confidence interval of (92.1%, 98%). The proportion of correctly classified individuals was 91.4%. The estimated post-test probability was 95.6%, with a 95% confidence interval of (92%, 98%). Hence, the probability that a subject testing positive (i.e., I3M < 0.08) was 18 years of age or older was 95.6%. PMID:24365729
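
    The reported sensitivity, specificity, and post-test probability all follow from a 2 × 2 classification table. The counts below are invented so that the derived rates approximately match the published figures (sensitivity 86.6%, specificity 95.7%, 397 subjects); they are not the paper's actual table:

```python
# Hypothetical 2x2 table for the I3M < 0.08 cut-off (counts are assumptions).
tp, fn = 181, 28   # adults (>= 18) classified correctly / incorrectly
tn, fp = 180, 8    # minors (< 18) classified correctly / incorrectly

sensitivity = tp / (tp + fn)   # P(test positive | adult)
specificity = tn / (tn + fp)   # P(test negative | minor)
ppv = tp / (tp + fp)           # post-test probability of being an adult
print(f"sens={sensitivity:.3f}, spec={specificity:.3f}, PPV={ppv:.3f}")
```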

  4. Accuracies of Southwell and force/stiffness methods in the prediction of buckling strength of hypersonic aircraft wing tubular panels

    NASA Technical Reports Server (NTRS)

    Ko, William L.

    1987-01-01

    The accuracies of the Southwell method and the force/stiffness (F/S) method are examined when the methods are used to predict buckling loads of hypersonic aircraft wing tubular panels from nondestructive buckling test data. Various factors affecting the accuracies of the two methods are discussed, with particular attention to the effect of the load cutoff point in the nondestructive buckling tests. For tubular panels under pure compression, the F/S method was found to give more accurate buckling load predictions than the Southwell method, which excessively overpredicts the buckling load. The Southwell method was also found to require a higher load cutoff point than the F/S method. In using the F/S method to predict the buckling load of tubular panels under pure compression, a load cutoff point of approximately 50 percent of the critical load could give reasonably accurate predictions.
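
    The Southwell method itself is easy to sketch: below the buckling load, plotting deflection/load against deflection is (ideally) linear, and the inverse slope estimates the critical load. The synthetic load-deflection data below assume a critical load of 100 in arbitrary units; they are not test data from the report:

```python
import numpy as np

# Synthetic data from the idealized hyperbola delta = d0 * P / (P_cr - P),
# with an assumed P_cr = 100 (arbitrary units) and initial imperfection d0.
P_cr, d0 = 100.0, 0.01
P = np.linspace(20, 60, 9)      # loads up to a 60%-of-critical cutoff
delta = d0 * P / (P_cr - P)     # corresponding deflections

# Southwell plot: delta/P vs delta is linear with slope 1/P_cr.
slope, _ = np.polyfit(delta, delta / P, 1)
print(f"predicted buckling load: {1 / slope:.1f}")
```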

  5. Influence of calibration method and material on the accuracy of stress distribution measurement systems.

    PubMed

    Engel, Karsten; Hartmann, Ulrich; Potthast, Wolfgang; Brüggemann, Gert-Peter

    2016-06-01

    Biomechanical analyses of the stress distribution and the force transfer in the human knee are essential to better understand the aetiology of joint diseases. Previous accuracy studies of commonly used capacitive or resistive stress distribution measurement systems have suffered from severe problems caused by inaccurate experimental setups. For instance, in one study, overestimations of the measured forces in the sensor's centre were reported. Therefore, the primary aim of this study was to investigate the ability of capacitive and resistive sensors to measure forces in a homogenous pressure environment, and the secondary goal was to analyse the influence of different calibration materials on measurement accuracy. A Novel pressure vessel and metal indenters covered with different rubber materials were used in combination with a material testing machine to load the sensors. Four different linearly increasing nominal forces (925-3670 N) were applied and the deviations between the nominal and the measured forces were calculated. The capacitive measurement system showed errors between 1% and 7% in the homogenous pressure environment, whereas the errors of the resistive system were found to vary between 4% and 17%. The influence of the calibration material was observed to be greater for the resistive sensors (1-179%) than for the capacitive sensors (0.5-25%). In conclusion, it can be stated that, for the pressure measurement systems compared in this article, the capacitive system is less sensitive to the calibration method and the calibration material than the resistive system. PMID:26146092

  6. Accuracy considerations for Chebyshev rational approximation method (CRAM) in Burnup calculations

    SciTech Connect

    Pusa, M.

    2013-07-01

    The burnup equations can in principle be solved by computing the exponential of the burnup matrix. However, due to the difficult numerical characteristics of burnup matrices, the problem is extremely stiff and the matrix exponential solution has previously been considered infeasible for an entire burnup system containing over a thousand nuclides. It was recently discovered by the author that the eigenvalues of burnup matrices are generally located near the negative real axis, which prompted introducing the Chebyshev rational approximation method (CRAM) for solving the burnup equations. CRAM can be characterized as the best rational approximation on the negative real axis and it has been shown to be capable of simultaneously solving an entire burnup system both accurately and efficiently. In this paper, the accuracy of CRAM is further studied in the context of burnup equations. The approximation error is analyzed based on the eigenvalue decomposition of the burnup matrix. It is deduced that the relative accuracy of CRAM may be compromised if a nuclide concentration diminishes significantly during the considered time step. Numerical results are presented for two test cases, the first one representing a small burnup system with 36 nuclides and the second one a full decay system with 1531 nuclides. (authors)
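
    The matrix exponential solution that CRAM approximates can be illustrated on a toy two-nuclide decay chain. The sketch below solves dN/dt = AN by eigendecomposition (valid here because the small matrix is diagonalizable; CRAM instead replaces the exponential with a rational approximation suited to eigenvalues near the negative real axis). The decay constants are illustrative assumptions:

```python
import numpy as np

# Toy "burnup" system: decay chain N1 -> N2 -> (stable), dN/dt = A N.
lam1, lam2 = 0.5, 0.1            # decay constants (1/s), illustrative
A = np.array([[-lam1, 0.0],
              [lam1, -lam2]])

def expm(M):
    """Matrix exponential via eigendecomposition (diagonalizable M only)."""
    w, V = np.linalg.eig(M)
    return (V * np.exp(w)) @ np.linalg.inv(V)

t = 2.0
N0 = np.array([1.0, 0.0])        # start with pure N1
N = expm(A * t) @ N0

# Analytic Bateman solution for comparison.
N1 = np.exp(-lam1 * t)
N2 = lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
print(N, [N1, N2])
```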

  7. On the convergence and accuracy of the FDTD method for nanoplasmonics.

    PubMed

    Lesina, Antonino Calà; Vaccari, Alessandro; Berini, Pierre; Ramunno, Lora

    2015-04-20

    Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise - more than 2700 papers were published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths along with three "standard" nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna; for the first structure, comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size to very small dimensions, we compare the simple Drude model with the Drude model augmented with a two-critical-points correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with the two-critical-points correction (at least) must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ∼ 1%. We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near-field does not necessarily produce large
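
    The bare Drude model discussed above (without the critical-points correction the paper recommends) has a one-line permittivity, ε(ω) = ε∞ − ωp²/(ω² + iγω). A sketch with rough, assumed parameters for gold:

```python
import numpy as np

# Bare Drude permittivity; the paper's model adds two critical-point terms,
# which are omitted here.  Parameters are rough literature-style values for
# gold, used purely as assumptions.
eps_inf = 9.0
omega_p = 1.37e16   # plasma frequency (rad/s)
gamma = 1.0e14      # collision rate (rad/s)

def eps_drude(omega):
    return eps_inf - omega_p ** 2 / (omega ** 2 + 1j * gamma * omega)

# At 800 nm the real part is strongly negative (metallic response).
omega_800nm = 2 * np.pi * 3e8 / 800e-9
print(eps_drude(omega_800nm))
```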

  8. QuickBird and OrbView-3 Geopositional Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Helder, Dennis; Ross, Kenton

    2006-01-01

    Objective: Compare vendor-provided image coordinates with known references visible in the imagery. Approach: Use multiple, well-characterized sites with >40 ground control points (GCPs) that are a) well distributed, b) accurately surveyed, and c) easily found in imagery. Perform independent assessments with independent teams at NASA Stennis Space Center and South Dakota State University, each with slightly different measurement techniques and data processing methods.

  9. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

    Lamb wave techniques have been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to their multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting the position and shape accuracies respectively, are first identified. Consequently, two quantitative indices, the GVE (group velocity error) and MACCC (maximum absolute value of cross correlation coefficient), derived from cross correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. In order to apply the proposed method to the selection of an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method was developed. The proper element size considering different element types and the proper time step considering different time integration schemes were then selected. These results show that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
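
    A shape index along the lines of the MACCC described above can be sketched directly: standardize both signals, cross-correlate, and take the maximum absolute value, which stays near 1 for a pure time shift. The toneburst and delay below are illustrative, not taken from the paper:

```python
import numpy as np

def maccc(sim, ref):
    """Max absolute normalized cross correlation between two signals."""
    sim = (sim - sim.mean()) / sim.std()
    ref = (ref - ref.mean()) / ref.std()
    cc = np.correlate(sim, ref, mode="full") / len(ref)
    return float(np.abs(cc).max())

# Reference: a Gaussian-windowed toneburst; simulated signal: same shape
# delayed by 25 samples (shape preserved, so the index stays near 1).
t = np.linspace(0, 1, 500)
ref = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)
sim = np.roll(ref, 25)
print(maccc(sim, ref))  # close to 1 for a pure shift
```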

  10. Accuracy assessment of noninvasive hematocrit measurement based on partial least squares and NIR reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Songbiao; Soller, Babs R.; Perras, Kristen; Khan, Tania; Favreau, Janice

    1999-07-01

    Hematocrit (Hct) is one of the most important parameters to monitor when a patient has large blood loss or blood dilution. The current standard method for measuring hematocrit is off-line and invasive. An accurate, continuous, and noninvasive method of measuring hematocrit is highly desired so that physicians can respond rapidly in life-threatening situations. A set of instrument characterization experiments was performed to assess the effects of spectrometer drift and probe placement on the patient's forearm. Several factors were investigated in order to minimize the patient-dependent offset encountered in a previous study.

  11. Assessing the Accuracy of Sentinel-3 SLSTR Sea-Surface Temperature Retrievals Using High Accuracy Infrared Radiometers on Ships of Opportunity

    NASA Astrophysics Data System (ADS)

    Minnett, P. J.; Izaguirre, M. A.; Szcszodrak, M.; Williams, E.; Reynolds, R. M.

    2015-12-01

    The assessment of errors and uncertainties in satellite-derived SSTs can be achieved by comparisons with independent measurements of skin SST of high accuracy. Such validation measurements are provided by well-calibrated infrared radiometers mounted on ships. The second generation of Marine-Atmospheric Emitted Radiance Interferometers (M-AERIs) have recently been developed and two are now deployed on cruise ships of Royal Caribbean Cruise Lines that operate in the Caribbean Sea, North Atlantic and Mediterranean Sea. In addition, two Infrared SST Autonomous Radiometers (ISARs) are mounted alternately on a vehicle transporter of NYK Lines that crosses the Pacific Ocean between Japan and the USA. Both M-AERIs and ISARs are self-calibrating radiometers having two internal blackbody cavities to provide at-sea calibration of the measured radiances, and the accuracy of the internal calibration is periodically determined by measurements of a NIST-traceable blackbody cavity in the laboratory. This provides SI-traceability for the at-sea measurements. It is anticipated that these sensors will be deployed during the next several years and will be available for the validation of the SLSTRs on Sentinel-3a and -3b.

  12. In vitro assessment of the accuracy of extraoral periapical radiography in root length determination

    PubMed Central

    Nazeer, Muhammad Rizwan; Khan, Farhan Raza; Rahman, Munawwar

    2016-01-01

    Objective: To determine the accuracy of extraoral periapical radiography in obtaining root length by comparing it with radiographs obtained from the standard intraoral approach and an extended-distance intraoral approach. Materials and Methods: This was an in vitro, comparative study conducted at the dental clinics of Aga Khan University Hospital. ERC exemption was obtained for this work (ref number 3407Sur-ERC-14). We included premolars and molars of a standard phantom head mounted with metal and radiopaque teeth. Radiographs were exposed using three approaches: standard intraoral, extended-length intraoral, and extraoral. Since the unit of analysis was the individual root, we had a total of 24 images. The images were stored in VixWin software. The length of the roots was determined using the scale function of the measuring tool built into the software. Data were analyzed using SPSS version 19.0 and GraphPad software. The Pearson correlation coefficient and the Bland-Altman test were applied to determine whether the tooth length readings obtained from the three approaches were correlated. A P value of 0.05 was taken as the threshold for statistical significance. Results: The correlation between standard intraoral and extended intraoral was 0.97; the correlation between standard intraoral and extraoral was 0.82; and the correlation between extended intraoral and extraoral was 0.76. The results of the Bland-Altman test showed that the average discrepancy between these methods was not large enough to be considered significant. Conclusions: It appears that the extraoral radiographic method can be used for root length determination in subjects where intraoral radiography is not possible. PMID:27011737

  13. [Accuracy of the oscillometric method to measure blood pressure in children]

    PubMed

    Rego Filho, E A; Mello, S F; Silva, C R; Vituri, D W; Bazoni, E; Gordan, L N

    1999-01-01

    OBJECTIVE: The aim of this study is to analyze the substitution of the standard auscultatory method by the oscillometric blood pressure monitor, independently of the validity of the intraarterial blood pressure measurement. The accuracy of the automatic oscillometric monitor was compared to auscultatory mercury manometer blood pressure measurement in apparently healthy school-age children. METHODS: A device able to perform three simultaneous readings was used: one reading by the monitor and the others by two "blind" observers. We studied 72 school-age children with the following characteristics: mean age 9.5 years (range 6.1-16.1) and 39 males (54.2%). RESULTS: The difference for the systolic and diastolic blood pressure obtained by the monitor was on average +6.2 mmHg and +10.0 mmHg, respectively, when compared to the observers' readings. There was neither a good correlation nor a good agreement between the two observers and the monitor in the blood pressure determination. CONCLUSIONS: We conclude that the substitution of the standard auscultatory method by the non-invasive oscillometric method to measure blood pressure in school-age children cannot be generally recommended. PMID:14685547

  14. An experimental study of the accuracy in measurement of modulation transfer function using an edge method

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Hoon; Kim, Ye-seul; Park, Hye-Suk; Lee, Young-Jin; Kim, Hee-Joung

    2015-03-01

    Image evaluation is necessary in digital radiography (DR), which is widely used in medical imaging. Among image evaluation parameters, the modulation transfer function (MTF) is an important factor in medical imaging and is necessary to obtain the detective quantum efficiency (DQE), which represents the overall signal-to-noise performance of the detector. However, accurate measurement of the MTF is still not easy because of geometric effects, electronic noise, quantum noise, and truncation error. Therefore, in order to improve the accuracy of the MTF, four experimental approaches were tested in this study: changing the tube current, applying a smoothing method to the edge spread function (ESF), adjusting the line spread function (LSF) range, and changing the tube angle. Our results showed that fluctuation in the MTF was decreased by high tube current and by smoothing. However, the tube current should not exceed detector saturation, and smoothing the ESF causes distortion in the ESF and MTF. In addition, decreasing the LSF range diminished both the fluctuation and the number of sampling points in the MTF, and a high tube angle degrades the MTF. Based on these results, excessively low tube current and the smoothing method should be avoided. An optimal LSF range, balancing reduced fluctuation against the number of sampling points in the MTF, is necessary, and a precise tube angle is essential to obtain an accurate MTF. In conclusion, our results demonstrated that an accurate MTF can be acquired.
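
    The edge-method pipeline the study refines (ESF, differentiate to LSF, Fourier transform to MTF) can be sketched as follows, with a synthetic tanh-shaped edge standing in for measured detector data:

    ```python
    import numpy as np

    # Synthetic edge spread function (ESF): a blurred edge sampled at 0.1 mm pitch
    pitch = 0.1  # sampling interval in mm (assumed)
    x = np.arange(-5, 5, pitch)
    esf = 0.5 * (1 + np.tanh(x / 0.3))  # idealized smooth edge profile

    # LSF is the spatial derivative of the ESF
    lsf = np.gradient(esf, pitch)

    # MTF is the magnitude of the Fourier transform of the LSF, normalized at f = 0
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(lsf), d=pitch)  # spatial frequency in cycles/mm
    ```

    On real data, noise in the tails of the LSF and the chosen LSF window dominate the error, which is exactly what the study's four approaches try to control.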

  15. [Research on Accuracy and Stability of Inversing Vegetation Chlorophyll Content by Spectral Index Method].

    PubMed

    Jiang, Hai-ling; Yang, Hang; Chen, Xiao-ping; Wang, Shu-dong; Li, Xue-ke; Liu, Kai; Cen, Yi

    2015-04-01

    The spectral index method is widely applied to the inversion of crop chlorophyll content. In the present study, a PSR3500 spectrometer and a SPAD-502 chlorophyll meter were used to acquire the spectra and relative chlorophyll content (SPAD value) of winter wheat leaves on May 2, 2013, at the jointing stage. The measured spectra were then resampled to simulate TM multispectral data and Hyperion hyperspectral data, respectively, using Gaussian spectral response functions. We chose four typical spectral indices: the normalized difference vegetation index (NDVI), the triangle vegetation index (TVI), the ratio of the modified transformed chlorophyll absorption ratio index (MCARI) to the optimized soil-adjusted vegetation index (OSAVI) (MCARI/OSAVI), and the vegetation index based on universal pattern decomposition (VIUPD), which are constructed from feature bands sensitive to vegetation chlorophyll. After calculating these spectral indices from the resampled TM and Hyperion data, regression equations between the spectral indices and chlorophyll content were established. For TM, the results indicate that VIUPD has the best correlation with chlorophyll (R2 = 0.8197), followed by NDVI (R2 = 0.7918), while MCARI/OSAVI and TVI also show a good correlation with R2 higher than 0.5. For the simulated Hyperion data, VIUPD again ranks first with R2 = 0.8171, followed by MCARI/OSAVI (R2 = 0.6586), while NDVI and TVI show very low values with R2 less than 0.2. It was demonstrated that VIUPD has the best accuracy and stability for estimating chlorophyll of winter wheat whether using simulated TM data or Hyperion data, which reaffirms that VIUPD is comparatively sensor independent. The chlorophyll estimation accuracy and stability of MCARI/OSAVI are also good, partly because OSAVI reduces the influence of backgrounds. The two broadband spectral indices NDVI and TVI are weak for chlorophyll estimation from simulated Hyperion data mainly because of
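
    As a rough sketch of the broadband workflow, the following computes NDVI from red/NIR reflectances and regresses SPAD readings on it. All values are invented for the example; the study's indices, bands, and fits are richer than this.

    ```python
    import numpy as np

    # Illustrative reflectances for simulated broadband red / NIR channels
    red = np.array([0.08, 0.10, 0.06, 0.12, 0.09, 0.07])
    nir = np.array([0.45, 0.40, 0.50, 0.35, 0.42, 0.48])
    spad = np.array([42.0, 38.5, 46.1, 33.2, 40.3, 44.8])  # relative chlorophyll

    # NDVI = (NIR - red) / (NIR + red)
    ndvi = (nir - red) / (nir + red)

    # Linear regression of SPAD on NDVI, and its coefficient of determination R^2
    slope, intercept = np.polyfit(ndvi, spad, 1)
    pred = slope * ndvi + intercept
    ss_res = ((spad - pred) ** 2).sum()
    ss_tot = ((spad - spad.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    ```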

  16. Accuracy Assessment of Mobile Mapping Point Clouds Using the Existing Environment as Terrestrial Reference

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Brenner, C.

    2016-06-01

    Mobile mapping data are widely used in various applications, which makes it especially important for data users to obtain a statistically verified quality statement on the geometric accuracy of the acquired point clouds or their processed products. The accuracy of point clouds can be divided into an absolute and a relative quality, where the absolute quality describes the position of the point cloud in a world coordinate system such as WGS84 or UTM, whereas the relative accuracy describes the accuracy within the point cloud itself. Furthermore, the quality of processed products such as segmented features depends on the global accuracy of the point cloud but mainly on the quality of the processing steps. Several data sources with different characteristics and quality can be considered as potential reference data, such as cadastral maps, orthophotos, artificial control objects or terrestrial surveys using a total station. In this work, a test field in a selected residential area was acquired as reference data in a terrestrial survey using a total station. In order to reach high accuracy, the stationing of the total station was based on a newly established geodetic network with a local accuracy of less than 3 mm. The global position of the network was determined using a long-duration GNSS survey reaching an accuracy of 8 mm. Based on this geodetic network, a 3D test field with facades and street profiles was measured with a total station, each point with a two-dimensional position and altitude. In addition, the surfaces of poles of street lights, traffic signs and trees were acquired using the scanning mode of the total station. By comparing this reference data to the mobile mapping point clouds acquired in several measurement campaigns, a detailed quality statement on the accuracy of the point cloud data is made. Additionally, the advantages and disadvantages of the described reference data source concerning availability, cost, accuracy and applicability are discussed.

  17. Designing a Multi-Objective Multi-Support Accuracy Assessment of the 2001 National Land Cover Data (NLCD 2001) of the Conterminous United States

    EPA Science Inventory

    The database design and diverse application of NLCD 2001 pose significant challenges for accuracy assessment because numerous objectives are of interest, including accuracy of land cover, percent urban imperviousness, percent tree canopy, land-cover composition, and net change. ...

  18. Assessing posttraumatic stress in military service members: improving efficiency and accuracy.

    PubMed

    Fissette, Caitlin L; Snyder, Douglas K; Balderrama-Durbin, Christina; Balsis, Steve; Cigrang, Jeffrey; Talcott, G Wayne; Tatum, JoLyn; Baker, Monty; Cassidy, Daniel; Sonnek, Scott; Heyman, Richard E; Smith Slep, Amy M

    2014-03-01

    Posttraumatic stress disorder (PTSD) is assessed across many different populations and assessment contexts. However, measures of PTSD symptomatology often are not tailored to meet the needs and demands of these different populations and settings. In order to develop population- and context-specific measures of PTSD it is useful first to examine the item-level functioning of existing assessment methods. One such assessment measure is the 17-item PTSD Checklist-Military version (PCL-M; Weathers, Litz, Herman, Huska, & Keane, 1993). Although the PCL-M is widely used in both military and veteran health-care settings, it is limited by interpretations based on aggregate scores that ignore variability in item endorsement rates and relatedness to PTSD. Based on item response theory, this study conducted 2-parameter logistic analyses of the PCL-M in a sample of 196 service members returning from a yearlong, high-risk deployment to Iraq. Results confirmed substantial variability across items both in terms of their relatedness to PTSD and their likelihood of endorsement at any given level of PTSD. The test information curve for the full 17-item PCL-M peaked sharply at a value of θ = 0.71, reflecting greatest information at approximately the 76th percentile level of underlying PTSD symptom levels in this sample. Implications of findings are discussed as they relate to identifying more efficient, accurate subsets of items tailored to military service members as well as other specific populations and evaluation contexts. PMID:24015857
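
    The 2-parameter logistic (2PL) model underlying this analysis can be illustrated with a short sketch. The item parameters below are hypothetical, not the fitted PCL-M values; the point is how item and test information curves arise from discrimination (a) and difficulty (b).

    ```python
    import numpy as np

    def item_info_2pl(theta, a, b):
        """Fisher information of a 2PL item at latent trait value theta:
        I(theta) = a^2 * p * (1 - p), with p the 2PL response probability."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a ** 2 * p * (1 - p)

    # Hypothetical discrimination (a) and difficulty (b) parameters for four items
    a = np.array([1.2, 0.8, 2.0, 1.5])
    b = np.array([0.5, -0.2, 0.9, 0.7])

    theta_grid = np.linspace(-3, 3, 121)
    # Test information is the sum of the item information curves
    test_info = sum(item_info_2pl(theta_grid, ai, bi) for ai, bi in zip(a, b))
    theta_peak = theta_grid[np.argmax(test_info)]  # where the test is most informative
    ```

    A peaked test information curve, as the study found for the PCL-M at θ = 0.71, means the scale discriminates best in a narrow band of symptom severity.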

  19. Accuracy of methods for calculating volumetric wear from coordinate measuring machine data of retrieved metal-on-metal hip joint implants.

    PubMed

    Lu, Zhen; McKellop, Harry A

    2014-03-01

    This study compared the accuracy and sensitivity of several numerical methods employing spherical or plane triangles for calculating the volumetric wear of retrieved metal-on-metal hip joint implants from coordinate measuring machine measurements. Five methods, one using spherical triangles and four using plane triangles to represent the bearing and the best-fit surfaces, were assessed and compared on a perfect hemisphere model and a hemi-ellipsoid model (i.e. unworn models), computer-generated wear models and wear-tested femoral balls, with point spacings of 0.5, 1, 2 and 3 mm. The results showed that the algorithm (Method 1) employing spherical triangles to represent the bearing surface and to scale the mesh to the best-fit surfaces produced adequate accuracy for the wear volume with point spacings of 0.5, 1, 2 and 3 mm. The algorithms (Methods 2-4) using plane triangles to represent the bearing surface and to scale the mesh to the best-fit surface also produced accuracies that were comparable to that with spherical triangles. In contrast, if the bearing surface was represented with a mesh of plane triangles and the best-fit surface was taken as a smooth surface without discretization (Method 5), the algorithm produced much lower accuracy with a point spacing of 0.5 mm than Methods 1-4 with a point spacing of 3 mm. PMID:24531891
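
    A minimal sketch in the spirit of the plane-triangle methods (not the authors' exact algorithms): each triangle of the measured bearing surface contributes its area times the mean radial deviation of its vertices from the best-fit sphere, with deviations toward the center counted as removed material.

    ```python
    import numpy as np

    def triangle_area(p0, p1, p2):
        """Area of a plane triangle from its three vertices."""
        return 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))

    def wear_volume(points, triangles, best_fit_radius, center):
        """Approximate wear volume of a triangulated bearing surface relative
        to a best-fit sphere (illustrative scheme, assumed for this sketch)."""
        vol = 0.0
        for i, j, k in triangles:
            p0, p1, p2 = points[i], points[j], points[k]
            # Radial deviation of each vertex; positive = inside the sphere (worn)
            dev = [best_fit_radius - np.linalg.norm(p - center) for p in (p0, p1, p2)]
            vol += triangle_area(p0, p1, p2) * np.mean(dev)
        return vol
    ```

    The accuracy of such schemes depends on point spacing, which is exactly what the study varied (0.5-3 mm) when comparing spherical- and plane-triangle algorithms.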

  20. Accuracy of Cameriere's third molar maturity index in assessing legal adulthood on Serbian population.

    PubMed

    Zelic, Ksenija; Galic, Ivan; Nedeljkovic, Nenad; Jakovljevic, Aleksandar; Milosevic, Olga; Djuric, Marija; Cameriere, Roberto

    2016-02-01

    At the moment, a large number of asylum seekers from the Middle East are passing through Serbia. Most of them do not have identification documents. Also, the past wars in the Balkan region have left many unidentified victims and missing persons. From a legal point of view, it is crucial to determine whether a person is a minor or an adult (≥18 years of age). In recent years, methods based on the third molar development have been used for this purpose. The present article aims to verify the third molar maturity index (I3M) based on the correlation between the chronological age and normalized measures of the open apices and height of the third mandibular molar. The sample consisted of 598 panoramic radiographs (290 males and 299 females) from 13 to 24 years of age. The cut-off value of I3M=0.08 was used to discriminate adults and minors. The results demonstrated high sensitivity (0.96, 0.86) and specificity (0.94, 0.98) in males and females, respectively. The proportion of correctly classified individuals was 0.95 in males and 0.91 in females. In conclusion, the suggested value of I3M=0.08 can be used on Serbian population with high accuracy. PMID:26773223
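
    The cut-off classification and its accuracy measures can be reproduced schematically. The I3M values below are invented for illustration; only the decision rule (adult when I3M < 0.08) follows the abstract.

    ```python
    import numpy as np

    # Hypothetical (I3M value, true adult status) pairs
    i3m   = np.array([0.02, 0.05, 0.07, 0.10, 0.30, 0.04, 0.60, 0.06, 0.09, 0.01])
    adult = np.array([1,    1,    1,    0,    0,    1,    0,    0,    0,    1], dtype=bool)

    cutoff = 0.08
    pred_adult = i3m < cutoff  # classified as adult when I3M is below the cut-off

    tp = np.sum(pred_adult & adult)    # adults correctly classified
    tn = np.sum(~pred_adult & ~adult)  # minors correctly classified
    fp = np.sum(pred_adult & ~adult)   # minors misclassified as adults
    fn = np.sum(~pred_adult & adult)   # adults misclassified as minors

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(adult)
    ```

    In an age-estimation setting, specificity (not calling a minor an adult) is usually the legally critical quantity.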

  1. Accuracy assessment of building point clouds automatically generated from iphone images

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R.

    2014-06-01

    Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of iPhone-generated point clouds. For the chosen example showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) and (μ2 = 0.025 m, σ2 = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancing, and for quick and real-time change detection purposes. However, further insight should first be obtained into the circumstances needed to guarantee successful point cloud generation from smartphone images.
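
    The point-to-point comparison can be sketched with a k-d tree for nearest-neighbor search; random synthetic clouds stand in here for the iPhone and TLS data, and the 3-sigma outlier rule is an assumption of the sketch.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)

    # Stand-ins for a TLS reference cloud and a lower-quality smartphone cloud
    tls = rng.uniform(0, 10, size=(5000, 3))
    phone = tls[:2000] + rng.normal(0, 0.05, size=(2000, 3))  # noisy partial copy

    # Point-to-point distances: nearest TLS neighbor for every smartphone point
    tree = cKDTree(tls)
    dist, _ = tree.query(phone)

    mean_dist = dist.mean()
    # Flag outliers, e.g. points farther than 3 sigma above the mean distance
    outlier_frac = np.mean(dist > mean_dist + 3 * dist.std())
    ```

    In practice the two clouds must first be registered (e.g. by ICP) before such distances are meaningful.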

  2. Assessment of Completeness and Positional Accuracy of Linear Features in Volunteered Geographic Information (vgi)

    NASA Astrophysics Data System (ADS)

    Eshghi, M.; Alesheikh, A. A.

    2015-12-01

    Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, and its dissemination on the internet, has led to a huge source of spatial data termed Volunteered Geographic Information (VGI) by Mike Goodchild. Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and challengeable. This presents several challenges to potential end users who are concerned about the validation and quality assurance of the collected data. Almost all existing research assesses the quality of VGI data either by (a) comparing it with accurate official data, or (b) in cases where there is no access to correct data, looking for an alternative way to determine its quality. In this paper, we attempt to develop a useful method toward this goal. In this process, the positional accuracy of linear features in the OSM data of Tehran, Iran has been analyzed.
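
    One simple positional-accuracy indicator for linear features, sketched here with hypothetical planar coordinates, is the symmetric Hausdorff distance between a volunteered trace and a reference centerline. This is an illustrative metric, not necessarily the paper's; buffer-overlap methods are another common choice.

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    # Vertices of a reference road centerline and a volunteered (OSM-style) trace;
    # coordinates are illustrative planar metres.
    reference = np.array([[0, 0], [50, 2], [100, 5], [150, 3]], dtype=float)
    volunteered = np.array([[1, 1], [52, 4], [98, 7], [149, 2]], dtype=float)

    # Symmetric Hausdorff distance between the two vertex sets: the worst-case
    # distance from a vertex of one line to the nearest vertex of the other
    d_ab = directed_hausdorff(volunteered, reference)[0]
    d_ba = directed_hausdorff(reference, volunteered)[0]
    hausdorff = max(d_ab, d_ba)
    ```

    For dense vertex sampling this approximates the distance between the lines themselves; with sparse vertices, the lines should be densified first.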

  3. How Nonrecidivism Affects Predictive Accuracy: Evidence from a Cross-Validation of the Ontario Domestic Assault Risk Assessment (ODARA)

    ERIC Educational Resources Information Center

    Hilton, N. Zoe; Harris, Grant T.

    2009-01-01

    Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…

  4. A TECHNIQUE FOR ASSESSING THE ACCURACY OF SUB-PIXEL IMPERVIOUS SURFACE ESTIMATES DERIVED FROM LANDSAT TM IMAGERY

    EPA Science Inventory

    We developed a technique for assessing the accuracy of sub-pixel derived estimates of impervious surface extracted from LANDSAT TM imagery. We utilized spatially coincident sub-pixel derived impervious surface estimates, high-resolution planimetric GIS data, vector-to-r...

  5. Classification Accuracy of Oral Reading Fluency and Maze in Predicting Performance on Large-Scale Reading Assessments

    ERIC Educational Resources Information Center

    Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria

    2014-01-01

    The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…

  6. Diagnostic Accuracy of Computer-Aided Assessment of Intranodal Vascularity in Distinguishing Different Causes of Cervical Lymphadenopathy.

    PubMed

    Ying, Michael; Cheng, Sammy C H; Ahuja, Anil T

    2016-08-01

    Ultrasound is useful in assessing cervical lymphadenopathy. Advancement of computer science technology allows accurate and reliable assessment of medical images. The aim of the study described here was to evaluate the diagnostic accuracy of computer-aided assessment of the intranodal vascularity index (VI) in differentiating the various common causes of cervical lymphadenopathy. Power Doppler sonograms of 347 patients (155 with metastasis, 23 with lymphoma, 44 with tuberculous lymphadenitis, 125 reactive) with palpable cervical lymph nodes were reviewed. Ultrasound images of cervical nodes were evaluated, and the intranodal VI was quantified using a customized computer program. The diagnostic accuracy of using the intranodal VI to distinguish different disease groups was evaluated and compared. Metastatic and lymphomatous lymph nodes tend to be more vascular than tuberculous and reactive lymph nodes. The intranodal VI had the highest diagnostic accuracy in distinguishing metastatic and tuberculous nodes with a sensitivity of 80%, specificity of 73%, positive predictive value of 91%, negative predictive value of 51% and overall accuracy of 68% when a cutoff VI of 22% was used. Computer-aided assessment provides an objective and quantitative way to evaluate intranodal vascularity. The intranodal VI is a useful parameter in distinguishing certain causes of cervical lymphadenopathy and is particularly useful in differentiating metastatic and tuberculous lymph nodes. However, it has limited value in distinguishing lymphomatous nodes from metastatic and reactive nodes. PMID:27131839

  7. Structural health monitoring ultrasonic thickness measurement accuracy and reliability of various time-of-flight calculation methods

    NASA Astrophysics Data System (ADS)

    Eason, Thomas J.; Bond, Leonard J.; Lozev, Mark G.

    2016-02-01

    The accuracy, precision, and reliability of ultrasonic thickness structural health monitoring systems are discussed, including the influence of systematic and environmental factors. To quantify some of these factors, a compression wave ultrasonic thickness structural health monitoring experiment was conducted on a flat calibration block at ambient temperature with forty-four thin-film sol-gel transducers and various time-of-flight thickness calculation methods. As an initial calibration, the voltage response signals from each sensor are used to determine the common material velocity as well as the signal offset unique to each calculation method. Next, the measurement precision of the thickness error of each method is determined with a proposed weighted censored relative maximum likelihood analysis technique incorporating the propagation of asymmetric measurement uncertainty. The results are presented as upper and lower confidence limits analogous to the a90/95 terminology used in industry-recognized Probability-of-Detection assessments. Future work is proposed to apply the statistical analysis technique to quantify the measurement precision of various thickness calculation methods under different environmental conditions such as high temperature, rough back-wall surfaces, and system degradation, with an intended application to monitoring naphthenic acid corrosion in oil refineries.

  8. Geostatistical radar-raingauge merging: A novel method for the quantification of rain estimation accuracy

    NASA Astrophysics Data System (ADS)

    Delrieu, Guy; Wijbrans, Annette; Boudevillain, Brice; Faure, Dominique; Bonnifait, Laurent; Kirstetter, Pierre-Emmanuel

    2014-09-01

    Compared to other estimation techniques, one advantage of geostatistical techniques is that they provide an index of the estimation accuracy of the variable of interest through the kriging estimation standard deviation (ESD). In the context of radar-raingauge quantitative precipitation estimation (QPE), we address in this article the question of how the kriging ESD can be transformed into a local spread of error by using the dependency of radar errors on the rain amount analyzed in previous work. The proposed approach is implemented for the most significant rain events observed in 2008 in the Cévennes-Vivarais region, France, considering both the kriging with external drift (KED) and the ordinary kriging (OK) methods. A two-step procedure is implemented for estimating the rain estimation accuracy: (i) first, normalized kriging ESDs are computed using normalized variograms (sill equal to 1) to account for the observation system configuration and the spatial structure of the variable of interest (rainfall amount, residuals to the drift); (ii) based on the assumption of a linear relationship between the standard deviation and the mean of the variable of interest, a denormalization of the kriging ESDs is performed globally for a given rain event using a cross-validation procedure. Despite the fact that the KED normalized ESDs are usually greater than the OK ones (due to an additional constraint in the kriging system and a weaker spatial structure of the residuals to the drift), the KED denormalized ESDs are generally smaller than the OK ones, a result consistent with the better performance observed for the KED technique. The evolution of the mean and the standard deviation of the rainfall-scaled ESDs over a range of spatial (5-300 km2) and temporal (1-6 h) scales demonstrates the clear added value of the radar with respect to the raingauge network at the shortest scales, which are those of interest for flash-flood prediction in the considered region.
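
    A toy ordinary-kriging sketch showing how the estimate and the kriging ESD fall out of the same linear system. The exponential covariance model, its parameters, and the gauge values are all assumed for illustration; the paper's KED variant additionally carries a drift term.

    ```python
    import numpy as np

    def ok_estimate(obs_xy, obs_z, target_xy, sill=1.0, rng_par=50.0):
        """Ordinary kriging at one target with an exponential covariance model
        C(h) = sill * exp(-h / rng_par). Returns (estimate, kriging ESD)."""
        n = len(obs_z)
        d = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
        C = sill * np.exp(-d / rng_par)
        # Bordered OK system: covariances plus a Lagrange multiplier row/column
        # enforcing that the weights sum to one (unbiasedness)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = C
        A[n, n] = 0.0
        d0 = np.linalg.norm(obs_xy - target_xy, axis=-1)
        b = np.ones(n + 1)
        b[:n] = sill * np.exp(-d0 / rng_par)
        sol = np.linalg.solve(A, b)
        w, mu = sol[:n], sol[n]
        est = w @ obs_z
        # Kriging variance: sigma^2 = sill - sum_i w_i C(x_i, x0) - mu
        esd = np.sqrt(max(sill - w @ b[:n] - mu, 0.0))
        return est, esd

    # Toy raingauge network (coordinates in km, rainfall in mm)
    xy = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 25.0], [40.0, 40.0]])
    z = np.array([12.0, 20.0, 16.0, 25.0])
    est, esd = ok_estimate(xy, z, np.array([15.0, 15.0]))
    ```

    At a gauge location the ESD collapses to zero (exact interpolation); far from all gauges it tends to the sill, which is what the normalization/denormalization step in the paper exploits.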

  9. Determining CME parameters by fitting heliospheric observations: Numerical investigation of the accuracy of the methods

    NASA Astrophysics Data System (ADS)

    Lugaz, Noé; Roussev, Ilia I.; Gombosi, Tamas I.

    2011-07-01

    Transients in the heliosphere, including coronal mass ejections (CMEs) and corotating interaction regions, can be imaged to large heliocentric distances by heliospheric imagers (HIs), such as the HIs onboard STEREO and SMEI onboard Coriolis. These observations can be analyzed using different techniques to derive the CME speed and direction. In this paper, we use a three-dimensional (3-D) magneto-hydrodynamic (MHD) numerical simulation to investigate one of these methods, the fitting method of Sheeley et al. (1999) and Rouillard et al. (2008). Because we use a 3-D simulation, we can determine with great accuracy the CME initial speed, its speed at 1 AU and its average transit speed, as well as its size and direction of propagation. We are able to compare the results of the fitting method with the values from the simulation for different viewing angles between the CME direction of propagation and the Sun-spacecraft line. We focus on one simulation of a wide (120-140°) CME whose initial speed is about 800 km/s. For this case, we find that the best-fit speed is in good agreement with the speed of the CME at 1 AU, independently of the viewing angle. The fitted direction of propagation is not in good agreement with the viewing angle in the simulation, although smaller viewing angles result in smaller fitted directions; this is due to the extremely wide nature of the ejection. A new fitting method, proposed to take the CME width into account, results in better agreement between fitted and actual directions for directions close to the Sun-Earth line. For other directions, it gives results comparable to the fitting method of Sheeley et al. (1999). The CME deceleration has only a small effect on the fitted direction, resulting in fitted values about 1-4° higher than the actual values.
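
    The geometry behind such fitting methods can be sketched as follows: a point ejecta moving radially at constant speed produces a characteristic time-elongation curve, and the speed and propagation angle are recovered by least squares. The 1 AU observer distance, cadence, and initial guesses are assumptions of this sketch, which uses noiseless synthetic data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    AU = 1.496e8          # km
    d_obs = 1.0 * AU      # observer heliocentric distance (assumed 1 AU)

    def elongation(t, v, phi):
        """Elongation angle (radians) of a point ejecta moving radially at
        constant speed v (km/s), at angle phi (radians) from the observer-Sun
        line, seen from a stationary observer at distance d_obs."""
        r = v * t
        return np.arctan2(r * np.sin(phi), d_obs - r * np.cos(phi))

    # Synthetic "observations": v = 800 km/s, phi = 60 degrees, hourly cadence
    t = np.arange(1, 60) * 3600.0  # seconds
    true_v, true_phi = 800.0, np.radians(60.0)
    eps_obs = elongation(t, true_v, true_phi)

    # Least-squares fit of speed and direction from the elongation track
    (v_fit, phi_fit), _ = curve_fit(elongation, t, eps_obs,
                                    p0=[500.0, np.radians(45.0)])
    ```

    The paper's point is that for a very wide CME this point-source assumption biases the fitted direction, motivating their width-aware variant.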

  10. Multinomial tree models for assessing the status of the reference in studies of the accuracy of tools for binary classification

    PubMed Central

    Botella, Juan; Huang, Huiling; Suero, Manuel

    2013-01-01

    Studies that evaluate the accuracy of binary classification tools are needed. Such studies provide 2 × 2 cross-classifications of test outcomes and the categories according to an unquestionable reference (or gold standard). However, sometimes a reference of suboptimal reliability is employed. Several methods have been proposed to deal with studies where the observations are cross-classified with an imperfect reference. These methods require that the status of the reference, as a gold standard or as an imperfect reference, be known. In this paper, a procedure is proposed for determining whether it is appropriate to maintain the assumption that the reference is a gold standard or to treat it as an imperfect reference. This procedure fits two nested multinomial tree models and assesses and compares their absolute and incremental fit. Its implementation requires the availability of the results of several independent studies, carried out using similar designs, that provide frequencies of cross-classification between a test and the reference under investigation. The procedure is applied in two examples with real data. PMID:24106484

  11. A study of the parameters affecting the accuracy of the total pore blocking method.

    PubMed

    Liekens, Anuschka; Cabooter, Deirdre; Denayer, Joeri; Desmet, Gert

    2010-10-22

    We report on a study wherein we investigate the different factors affecting the accuracy of the total pore blocking method to determine the interstitial volume of reversed-phase packed bed columns. Octane, nonane, decane and dodecane were all found to be suitable blocking agents, whereas heptane already dissolves too well in the applied fully aqueous buffers. The method of moments must be used to accurately determine the elution times, and a proper correction for the frit volume is needed; failing to do so can lead to errors in the observed interstitial volume of the order of 2% or more. It has also been shown that applying a high flow rate or a high pressure does not force the blocking agent out of the mesopores of the particles. The only potential source of loss of blocking agent is dissolution into the mobile phase (even though this is a buffered, fully aqueous solution). This effect, however, only becomes significant after the elution of 400 geometrical column volumes, i.e., orders of magnitude more than needed for a regular total pore blocking experiment. PMID:20580009
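
    The method of moments mentioned above amounts to taking the first statistical moment of the elution peak (its area-weighted mean time) rather than the apex; a minimal sketch on a synthetic tailing peak:

    ```python
    import numpy as np

    # Synthetic detector trace: a peak with a tailing shoulder, sampled every 0.01 min
    t = np.arange(0, 10, 0.01)
    signal = (np.exp(-0.5 * ((t - 4.0) / 0.3) ** 2)
              + 0.4 * np.exp(-0.5 * ((t - 4.3) / 0.5) ** 2))

    # First moment = area-weighted mean elution time; unlike the peak apex,
    # it is not biased by peak asymmetry
    first_moment = np.sum(t * signal) / np.sum(signal)
    ```

    For an asymmetric peak the first moment lands later than the apex, which is why apex-based retention times underestimate the true mean residence time.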

  12. High-accuracy measurement of low-water-content in liquid using NIR spectral absorption method

    NASA Astrophysics Data System (ADS)

    Peng, Bao-Jin; Wan, Xu; Jin, Hong-Zhen; Zhao, Yong; Mao, He-Fa

    2005-01-01

    Water content measurement technologies are very important for quality inspection of food, medicine, chemical products and many other industrial fields. In recent years, demands for accurate low-water-content measurement in liquids have become more and more exigent, and great interest has been shown in research and experimental work. With the development and advancement of modern production and control technologies, more accurate water content measurement is needed. In this paper, a novel experimental setup based on near-infrared (NIR) spectral technology and a fiber-optic sensor (OFS) is presented. It has a measurement accuracy of about ±0.01%, which is, to our knowledge, better than most other methods published to date. It has a high measurement resolution of 0.001% in the measurement range from zero to 0.05% for water-in-alcohol measurement, and water-in-oil measurement was carried out as well. In addition, the advantages of this method include no contamination of the measured liquid, fast measurement and so on.

  13. A Method to Improve the Accuracy of Particle Diameter Measurements from Shadowgraph Images

    NASA Astrophysics Data System (ADS)

    Erinin, Martin A.; Wang, Dan; Liu, Xinan; Duncan, James H.

    2015-11-01

    A method to improve the accuracy of the measurement of the diameter of particles using shadowgraph images is discussed. To obtain data for analysis, a transparent glass calibration reticle, marked with black circular dots of known diameters, is imaged with a high-resolution digital camera using backlighting separately from both a collimated laser beam and diffuse white light. The diameter and intensity of each dot is measured by fitting an inverse hyperbolic tangent function to the particle image intensity map. Using these calibration measurements, a relationship between the apparent diameter and intensity of the dot and its actual diameter and position relative to the focal plane of the lens is determined. It is found that the intensity decreases and apparent diameter increases/decreases (for collimated/diffuse light) with increasing distance from the focal plane. Using the relationships between the measured properties of each dot and its actual size and position, an experimental calibration method has been developed to increase the particle-diameter-dependent range of distances from the focal plane for which accurate particle diameter measurements can be made. The support of the National Science Foundation under grant OCE0751853 from the Division of Ocean Sciences is gratefully acknowledged.
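
    Fitting a smooth tanh-type transition to an intensity profile, as the calibration procedure does, can be sketched like this; a 1-D radial profile with invented parameters stands in for the real shadowgraph image data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def dot_profile(r, radius, width, i_in, i_out):
        """Radial intensity of a dark dot on a bright background: a smooth
        tanh transition of the given width centred at the dot edge."""
        return i_in + (i_out - i_in) * 0.5 * (1 + np.tanh((r - radius) / width))

    # Synthetic radial profile of a dot of radius 1.5 (arbitrary length units)
    r = np.linspace(0, 4, 200)
    intensity = dot_profile(r, 1.5, 0.15, 20.0, 200.0)

    # Fit edge radius, transition width, and the two intensity levels
    popt, _ = curve_fit(dot_profile, r, intensity, p0=[1.0, 0.3, 50.0, 150.0])
    fitted_radius = popt[0]
    diameter = 2 * fitted_radius
    ```

    The study's calibration then maps the fitted width and intensity to distance from the focal plane, correcting the apparent diameter for defocus.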

  14. Novel micro-machining method based on AFM and a high-accuracy stage

    NASA Astrophysics Data System (ADS)

    Yan, Yongda; Sun, Tao; Dong, Shen; Cheng, Kai

    2003-01-01

    This paper presents an easy and novel mechanical micro-machining method. Combining a commercial AFM with a high-accuracy stage, and using a diamond tip, which acts as a single asperity, as the cutting tool, a mechanical micro-machining system was developed. Experiments were carried out based on this system. The influence of the diamond tip's shape on micro-machining is considered, and different machining techniques are compared. Using this method, intricate patterns (a circle, a flat, a polygon flat and a gear geometry) were successfully fabricated. The strengths of this novel approach are as follows: it can machine micro-parts of several tens of microns more easily and cheaply than conventional technology, and it can image the micro-structure immediately after it is machined. Generally, this technique can be used to machine masks for other micro-machining processes, molds for micro-parts, or directly on micro-parts fabricated by other means.

  15. Improved Accuracy of the Inherent Shrinkage Method for Fast and More Reliable Welding Distortion Calculations

    NASA Astrophysics Data System (ADS)

    Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.

    2016-05-01

    This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.

  17. Accuracy of genomic selection methods in a standard data set of loblolly pine (Pinus taeda L.).

    PubMed

    Resende, M F R; Muñoz, P; Resende, M D V; Garrick, D J; Fernando, R L; Davis, J M; Jokela, E J; Martin, T A; Peter, G F; Kirst, M

    2012-04-01

    Genomic selection can increase genetic gain per generation through early selection. Genomic selection is expected to be particularly valuable for traits that are costly to phenotype and expressed late in the life cycle of long-lived species. Alternative approaches to genomic selection prediction models may perform differently for traits with distinct genetic properties. Here the performance of four original methods of genomic selection that differ with respect to assumptions regarding the distribution of marker effects is presented, including (i) ridge regression-best linear unbiased prediction (RR-BLUP), (ii) Bayes A, (iii) Bayes Cπ, and (iv) Bayesian LASSO. In addition, a modified RR-BLUP (RR-BLUP B) that utilizes a selected subset of markers was evaluated. The accuracy of these methods was compared across 17 traits with distinct heritabilities and genetic architectures, including growth, development, and disease-resistance properties, measured in a Pinus taeda (loblolly pine) training population of 951 individuals genotyped with 4853 SNPs. The predictive ability of the methods was evaluated using a 10-fold cross-validation approach, and differed only marginally for most method/trait combinations. Interestingly, for fusiform rust disease-resistance traits, Bayes Cπ, Bayes A, and RR-BLUP B had higher predictive ability than RR-BLUP and Bayesian LASSO. Fusiform rust is controlled by few genes of large effect. A limitation of RR-BLUP is the assumption of equal contribution of all markers to the observed variation. However, RR-BLUP B performed equally well as the Bayesian approaches. The genotypic and phenotypic data used in this study are publicly available for comparative analysis of genomic selection prediction models. PMID:22271763
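    The RR-BLUP approach evaluated above can be illustrated with a minimal sketch: ridge-regression BLUP with 10-fold cross-validation on synthetic genotypes. All data, the population sizes, and the heritability-based shrinkage heuristic below are hypothetical illustrations, not the study's loblolly pine data or exact model.

    ```python
    import numpy as np

    def rr_blup(Z_train, y_train, Z_test, lam):
        """Ridge-regression BLUP: solve (Z'Z + lam*I) u = Z'(y - mean)."""
        n_markers = Z_train.shape[1]
        y_c = y_train - y_train.mean()
        A = Z_train.T @ Z_train + lam * np.eye(n_markers)
        u = np.linalg.solve(A, Z_train.T @ y_c)
        return y_train.mean() + Z_test @ u

    rng = np.random.default_rng(0)
    n, p, h2 = 300, 500, 0.4                      # individuals, markers, heritability
    Z = rng.choice([0.0, 1.0, 2.0], size=(n, p))  # synthetic genotype codes
    beta = rng.normal(0, 1, p) * (rng.random(p) < 0.05)  # few causal markers
    g = Z @ beta
    y = g + rng.normal(0, np.sqrt(g.var() * (1 - h2) / h2), n)

    lam = p * (1 - h2) / h2   # common heuristic shrinkage for RR-BLUP
    folds = np.array_split(rng.permutation(n), 10)
    preds = np.empty(n)
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        preds[test_idx] = rr_blup(Z[train_idx], y[train_idx], Z[test_idx], lam)

    # predictive ability as in the paper: correlation of predicted and observed
    predictive_ability = np.corrcoef(preds, y)[0, 1]
    ```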

  18. A Method of Determining Accuracy and Precision for Dosimeter Systems Using Accreditation Data

    SciTech Connect

    Rick Cummings and John Flood

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively.
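    The Model II one-way ANOVA technique described above can be sketched as follows. The session data are hypothetical dosimeter response ratios (reported/delivered dose), not DOELAP results, and a coverage factor of k = 2 is assumed for the ~95% expanded uncertainty.

    ```python
    import numpy as np

    def anova_uncertainty(groups):
        """Model II one-way ANOVA on dosimeter response ratios.
        Returns (bias, expanded uncertainty) with coverage factor k = 2."""
        k = len(groups)
        n = len(groups[0])  # assumes a balanced design
        grand = np.mean([x for g in groups for x in g])
        ms_within = np.mean([np.var(g, ddof=1) for g in groups])
        ms_between = n * np.var([np.mean(g) for g in groups], ddof=1)
        var_between = max((ms_between - ms_within) / n, 0.0)  # variance component
        s_total = np.sqrt(var_between + ms_within)
        return grand - 1.0, 2.0 * s_total

    # hypothetical response ratios from three accreditation test sessions
    sessions = [
        [1.02, 0.98, 1.05, 1.01, 0.99],
        [1.08, 1.04, 1.06, 1.03, 1.07],
        [0.97, 0.95, 1.00, 0.96, 0.98],
    ]
    bias, U95 = anova_uncertainty(sessions)
    ```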

  20. Effects of tangential-type boundary condition discontinuities on the accuracy of the lattice Boltzmann method for heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Li, Like; AuYeung, Nick; Mei, Renwei; Klausner, James F.

    2016-08-01

    We present a systematic study on the effects of tangential-type boundary condition discontinuities on the accuracy of the lattice Boltzmann equation (LBE) method for Dirichlet and Neumann problems in heat and mass transfer modeling. The second-order accurate boundary condition treatments for continuous Dirichlet and Neumann problems are directly implemented for the corresponding discontinuous boundary conditions. Results from three numerical tests, including both straight and curved boundaries, are presented to show the accuracy and order of convergence of the LBE computations. Detailed error assessments are conducted for the interior temperature or concentration (denoted as a scalar ϕ) and the interior derivatives of ϕ for both types of boundary conditions, for the boundary flux in the Dirichlet problem and for the boundary ϕ values in the Neumann problem. When the discontinuity point on the straight boundary is placed at the center of the unit lattice in the Dirichlet problem, it yields only first-order accuracy for the interior distribution of ϕ, first-order accuracy for the boundary flux, and zeroth-order accuracy for the interior derivatives compared with the second-order accuracy of all quantities of interest for continuous boundary conditions. On the lattice scale, the LBE solution for the interior derivatives near the singularity is largely independent of the resolution and correspondingly the local distribution of the absolute errors is almost invariant with the changing resolution. For Neumann problems, when the discontinuity is placed at the lattice center, second-order accuracy is preserved for the interior distribution of ϕ; and a "superlinear" convergence order of 1.5 for the boundary ϕ values and first-order accuracy for the interior derivatives are obtained. For straight boundaries with the discontinuity point arbitrarily placed within the lattice and curved boundaries, the boundary flux becomes zeroth-order accurate for Dirichlet problems

  1. Accuracy of Subjective Performance Appraisal is Not Modulated by the Method Used by the Learner During Motor Skill Acquisition.

    PubMed

    Patterson, Jae T; McRae, Matthew; Lai, Sharon

    2016-04-01

    The present experiment examined whether the method of subjectively appraising motor performance during skill acquisition would differentially strengthen performance appraisal capabilities and subsequent motor learning. Thirty-six participants (18 men and 18 women; M age = 20.8 years, SD = 1.0) learned to execute a serial key-pressing task at a particular overall movement time (2550 ms). Participants were randomly separated into three groups: the Generate group estimated their overall movement time then received knowledge of results of their actual movement time; the Choice group selected their perceived movement time from a list of three alternatives; the third group, the Control group, did not self-report their perceived movement time and received knowledge of results of their actual movement time on every trial. All groups practiced 90 acquisition trials and 30 no knowledge of results trials in a delayed retention test. Results from the delayed retention test showed that both methods of performance appraisal (Generate and Choice) facilitated superior motor performance and greater accuracy in assessing their actual motor performance compared with the control condition. Therefore, the processing required for accurate appraisal of performance was strengthened, independent of performance appraisal method. PMID:27166340

  2. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
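    One simple a priori check on a clustered design is the classical design effect, deff = 1 + (m − 1)ρ, which inflates the simple-random-sampling variance of an accuracy estimate. A rough sketch follows; the pixel counts, cluster size, and intracluster correlation are hypothetical, and this is a textbook approximation rather than the authors' full protocol.

    ```python
    import math

    def design_effect(m, rho):
        """Design effect for cluster size m and intracluster correlation rho."""
        return 1.0 + (m - 1) * rho

    def accuracy_se(p_hat, n_pixels, m, rho):
        """Approximate standard error of an overall accuracy estimate under
        two-stage cluster sampling: SRS variance inflated by the design effect."""
        srs_var = p_hat * (1.0 - p_hat) / n_pixels
        return math.sqrt(srs_var * design_effect(m, rho))

    # hypothetical: 900 reference pixels in 9-pixel clusters, spatially
    # correlated classification error (rho = 0.3)
    se = accuracy_se(p_hat=0.80, n_pixels=900, m=9, rho=0.3)
    ```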

  3. Accuracy assessment of single and double difference models for the single epoch GPS compass

    NASA Astrophysics Data System (ADS)

    Chen, Wantong; Qin, Honglei; Zhang, Yanzhong; Jin, Tian

    2012-02-01

    The single epoch GPS compass is an important field of study, since it is a valuable technique for the orientation estimation of vehicles and it can guarantee a total independence from carrier phase slips in practical applications. To achieve highly accurate angular estimates, the unknown integer ambiguities of the carrier phase observables need to be resolved. Past research has focused on ambiguity resolution for a single epoch; however, accuracy is another significant problem for many challenging applications. In this contribution, the accuracy is evaluated for the non-common clock scheme of the receivers and the common clock scheme of the receivers, respectively. We focus on three scenarios for either scheme: single difference model vs. double difference model, single frequency model vs. multiple frequency model and optimal linear combinations vs. traditional triple-frequency least squares. We deduce the short baseline precision for a number of different available models and analyze the difference in accuracy for those models. Compared with the single or double difference model of the non-common clock scheme, the single difference model of the common clock scheme can greatly reduce the vertical component error of the baseline vector, which results in higher elevation accuracy. The least squares estimator can also reduce the error of the fixed baseline vector with the aid of multi-frequency observations, thereby improving the attitude accuracy. In essence, the "accuracy improvement" is attributed to the difference in accuracy for different models, not a real improvement for any specific model. If all noise levels of GPS triple frequency carrier phase are assumed the same in units of cycles, it can be proved that the optimal linear combination approach is equivalent to the traditional triple-frequency least squares, no matter which scheme is utilized. Both simulations and actual experiments have been performed to verify the correctness of the theoretical analysis.

  4. Development of a Haptic Elbow Spasticity Simulator (HESS) for Improving Accuracy and Reliability of Clinical Assessment of Spasticity

    PubMed Central

    Park, Hyung-Soon; Kim, Jonghyun; Damiano, Diane L.

    2013-01-01

    This paper presents the framework for developing a robotic system to improve accuracy and reliability of clinical assessment. Clinical assessment of spasticity tends to have poor reliability because of the nature of the in-person assessment. To improve accuracy and reliability of spasticity assessment, a haptic device, named the HESS (Haptic Elbow Spasticity Simulator) has been designed and constructed to recreate the clinical “feel” of elbow spasticity based on quantitative measurements. A mathematical model representing the spastic elbow joint was proposed based on clinical assessment using the Modified Ashworth Scale (MAS) and quantitative data (position, velocity, and torque) collected on subjects with elbow spasticity. Four haptic models (HMs) were created to represent the haptic feel of MAS 1, 1+, 2, and 3. The four HMs were assessed by experienced clinicians; three clinicians performed both in-person and haptic assessments, and had 100% agreement in MAS scores; and eight clinicians who were experienced with MAS assessed the four HMs without receiving any training prior to the test. Inter-rater reliability among the eight clinicians had substantial agreement (κ = 0.626). The eight clinicians also rated the level of realism (7.63 ± 0.92 out of 10) as compared to their experience with real patients. PMID:22562769

  5. The accuracy of molecular bond lengths computed by multireference electronic structure methods

    NASA Astrophysics Data System (ADS)

    Shepard, Ron; Kedziora, Gary S.; Lischka, Hans; Shavitt, Isaiah; Müller, Thomas; Szalay, Péter G.; Kállay, Mihály; Seth, Michael

    2008-06-01

    We compare experimental Re values with computed Re values for 20 molecules using three multireference electronic structure methods, MCSCF, MR-SDCI, and MR-AQCC. Three correlation-consistent orbital basis sets are used, along with complete basis set extrapolations, for all of the molecules. These data complement those computed previously with single-reference methods. Several trends are observed. The SCF Re values tend to be shorter than the experimental values, and the MCSCF values tend to be longer than the experimental values. We attribute these trends to the ionic contamination of the SCF wave function and to the corresponding systematic distortion of the potential energy curve. For the individual bonds, the MR-SDCI Re values tend to be shorter than the MR-AQCC values, which in turn tend to be shorter than the MCSCF values. Compared to the previous single-reference results, the MCSCF values are roughly comparable to the MP4 and CCSD methods, which are more accurate than might be expected due to the fact that these MCSCF wave functions include no extra-valence electron correlation effects. This suggests that static valence correlation effects, such as near-degeneracies and the ability to dissociate correctly to neutral fragments, play an important role in determining the shape of the potential energy surface, even near equilibrium structures. The MR-SDCI and MR-AQCC methods predict Re values with an accuracy comparable to, or better than, the best single-reference methods (MP4, CCSD, and CCSD(T)), despite the fact that triple and higher excitations into the extra-valence orbital space are included in the single-reference methods but are absent in the multireference wave functions. The computed Re values using the multireference methods tend to be smooth and monotonic with basis set improvement. The molecular structures are optimized using analytic energy gradients, and the timings for these calculations show the practical advantage of using variational wave functions.

  6. Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs

    NASA Astrophysics Data System (ADS)

    Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.

    2016-06-01

    Recent advances in automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single media cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated by using images acquired from a system camera mounted in an underwater housing and the popular GoPro cameras respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in the air and underwater and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.

  7. Geometric Accuracy Assessment of LANDSAT-4 Multispectral Scanner (MSS). [Washington, D.C.

    NASA Technical Reports Server (NTRS)

    Imhoff, M. L.; Alford, W. L.

    1984-01-01

    Standard LANDSAT-4 MSS digital image data were analyzed for geometric accuracy using two P-format (UTM projection) images of the Washington, D.C. area, scene day 109 (ID number 4010915140) and scene day 125 (ID number 4012515144). Both scenes were tested for geodetic registration accuracy (scene-to-map), temporal registration accuracy (scene-to-scene), and band-to-band registration accuracy (within a scene). The combined RMS error for geodetic registration accuracy was 0.43 pixel (25.51 meters), well within specifications. The comparison between the 2 scenes was made on a band-by-band basis. The 90 percent error figure for temporal registration was 0.68 pixel (38.8 meters, for 57 × 57 m pixels). Although this figure is larger than the specification, it can be considered excellent with respect to user application. The best case registration errors between bands 1 and 2, and 3 and 4 were 14.2 m and 13.7 m, respectively, both within specifications. The worst case registration error was 38.0 m between bands 2 and 3.
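    A combined RMS registration error such as the one reported above is, in essence, the root-mean-square of the control-point residuals. A minimal sketch follows; the residuals are illustrative, and the 57 m pixel size is an assumption consistent with the temporal-registration figure quoted in the abstract.

    ```python
    import math

    def rms_error(residuals):
        """Root-mean-square of (dx, dy) control-point residuals, in pixels."""
        return math.sqrt(sum(dx * dx + dy * dy for dx, dy in residuals)
                         / len(residuals))

    # hypothetical scene-to-map residuals (pixels) at four ground control points
    residuals = [(0.3, -0.2), (-0.4, 0.1), (0.2, 0.3), (-0.1, -0.3)]
    rms_pixels = rms_error(residuals)
    rms_meters = rms_pixels * 57.0  # assumed 57 m ground sample distance
    ```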

  8. The Accuracy of a Method for Printing Three-Dimensional Spinal Models

    PubMed Central

    Wang, Jian-Shun; Yang, Xin-Dong; Weng, Wan-Qing; Wang, Xiang-Yang; Xu, Hua-Zi; Chi, Yong-Long; Lin, Zhong-Ke

    2015-01-01

    Background To study the morphology of the human spine and new spinal fixation methods, scientists require cadaveric specimens, which are dependent on donation. However, in most countries, the number of people willing to donate their body is low. A 3D printed model could be an alternative method for morphology research, but the accuracy of the morphology of a 3D printed model has not been determined. Methods Forty-five computed tomography (CT) scans of cervical, thoracic and lumbar spines were obtained, and 44 parameters of the cervical spine, 120 parameters of the thoracic spine, and 50 parameters of the lumbar spine were measured. The CT scan data in DICOM format were imported into Mimics software v10.01 for 3D reconstruction, and the data were saved in .STL format and imported to Cura software. After a 3D digital model was formed, it was saved in Gcode format and exported to a 3D printer for printing. After the 3D printed models were obtained, the above-referenced parameters were measured again. Results Paired t-tests (significance level P<0.05) were used to compare all parameter data from the radiographic images and 3D printed models. Furthermore, 88.6% of all parameters of the cervical spine, 90% of all parameters of the thoracic spine, and 94% of all parameters of the lumbar spine had Intraclass Correlation Coefficient (ICC) values >0.800. The other ICC values were <0.800 and >0.600; none were <0.600. Conclusion In this study, we provide a protocol for printing accurate 3D spinal models for surgeons and researchers. The resulting 3D printed model is inexpensive and easily obtained for spinal fixation research. PMID:25915641
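    ICC values like those reported can be computed from a one-way random-effects ANOVA. The sketch below uses hypothetical paired CT/printed-model measurements and the ICC(1,1) form, which is one of several ICC variants and may differ from the exact form the authors used.

    ```python
    import numpy as np

    def icc_oneway(ratings):
        """One-way random-effects ICC(1,1) for an (n_subjects x k_raters) array.
        Here each row is one anatomical parameter measured twice:
        once on the CT image, once on the printed model."""
        n, k = ratings.shape
        subj_means = ratings.mean(axis=1)
        grand = ratings.mean()
        ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
        ms_within = np.sum((ratings - subj_means[:, None]) ** 2) / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # hypothetical paired measurements (mm): CT scan vs. 3D printed model
    pairs = np.array([
        [23.1, 23.0], [18.4, 18.6], [31.2, 31.1],
        [12.7, 12.9], [27.5, 27.4], [15.8, 15.7],
    ])
    icc = icc_oneway(pairs)
    ```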

  9. A new method for the accuracy evaluation of a manufactured piece

    NASA Astrophysics Data System (ADS)

    Oniga, E. V.; Cardei, M.

    2015-11-01

    To evaluate the accuracy of a manufactured piece, it must be measured and compared with a reference model, namely the designed 3D model, based on geometrical elements. In this paper a new method for the precision evaluation of a manufactured piece is proposed, which implies the creation of the piece's digital 3D model based on digital images and its transformation into a 3D mesh surface. The differences between the two models, the designed model and the newly created one, are calculated using the Hausdorff distance. The aim of this research is to determine the differences between two 3D models, especially CAD models, with high precision, in a completely automated way. To obtain the results, a small piece was photographed with a digital camera that was calibrated using a 3D calibration object: a target consisting of 42 points, 36 placed in the corners of 9 wood cubes with different heights and 6 placed at the middle of the distance between the cubes, on a board. This target was previously tested, the tests showing that using this calibration target instead of a 2D calibration grid improves the precision of the final 3D model by approximately 50%. The 3D model of the manufactured piece was created using two methods. First, based on digital images, a point cloud was automatically generated and, after the filtering process, the remaining points were interpolated, yielding the piece's 3D model as a mesh surface. Second, the piece's 3D model was created from the same digital images based on its characteristic points, resulting in a CAD model that was transformed into a mesh surface. Finally, the two 3D models were compared with the designed model using the CloudCompare software, thus revealing the imperfections of the manufactured piece. The proposed method highlights the differences between the two models using a color palette, offering at the same time a global comparison.
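    The Hausdorff distance used for the model comparison can be sketched directly. This is a brute-force version on tiny hypothetical point clouds; tools such as CloudCompare use more scalable nearest-neighbor structures for dense meshes.

    ```python
    import numpy as np

    def directed_hausdorff(A, B):
        """max over points a in A of the distance to the nearest point in B."""
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
        return d.min(axis=1).max()

    def hausdorff(A, B):
        """Symmetric Hausdorff distance between two point clouds."""
        return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

    # hypothetical sampled points: designed model vs. reconstructed mesh
    designed = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    measured = np.array([[0.05, 0.0, 0.0], [1.0, 0.1, 0.0], [0.0, 1.0, 0.02]])
    deviation = hausdorff(designed, measured)
    ```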

  10. Accuracy and reproducibility of a novel dynamic volume flow measurement method.

    PubMed

    Ricci, Stefano; Cinthio, Magnus; Ahlgren, Asa Rydén; Tortoli, Piero

    2013-10-01

    In clinical practice, blood volume flow (BVF) is typically calculated assuming a perfect parabolic and axisymmetric velocity distribution. This simple approach cannot account for the complex flow configurations that are produced by vessel curvatures, pulsatility and diameter changes and, therefore, results in a poor estimation. Application of the Womersley model allows compensation for the flow distortion caused by pulsatility and, with some adjustment, the effects of slight curvatures, but several problems remain unanswered. Two- and three-dimensional approaches can acquire the actual velocity field over the whole vessel section, but are typically affected by a limited temporal resolution. The multigate technique allows acquisition of the actual velocity profile over a line intersecting the vessel lumen and, when coupled with a suitable wall-tracking method, can offer the ideal trade-off among attainable accuracy, temporal resolution and required calculation power. In this article, we describe a BVF measurement method based on the multigate spectral Doppler and a B-mode edge detector algorithm for wall-position tracking. The method has been extensively tested on the research platform ULA-OP, with more than 1700 phantom measurements at flow rates between 60 and 750 mL/min, steering angles between 10 ° and 22 ° and constant, sinusoidal or pulsed flow trends. In the averaged BVF measurement, we found an underestimation of about -5% and a coefficient of variability (CV) less than 6%. In instantaneous measurements (e.g., systolic peak) the CV was in the range 2%-8.5%. These results were confirmed by a preliminary test on the common carotid artery of 10 volunteers (CV = 2%-11%). PMID:23849385
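    The core of a multigate BVF estimate is integrating the measured velocity profile over the lumen cross-section under an axisymmetry assumption. The sketch below uses a synthetic parabolic profile; the gate count, vessel radius, and peak velocity are hypothetical, and the real method also tracks the wall position over time.

    ```python
    import math

    def volume_flow(radii_m, velocities_m_s):
        """Integrate an axisymmetric velocity profile v(r) over the lumen:
        Q = integral of v(r) * 2*pi*r dr, via the trapezoidal rule."""
        q = 0.0
        for (r0, v0), (r1, v1) in zip(zip(radii_m, velocities_m_s),
                                      zip(radii_m[1:], velocities_m_s[1:])):
            q += 0.5 * (v0 * 2 * math.pi * r0 + v1 * 2 * math.pi * r1) * (r1 - r0)
        return q

    # hypothetical parabolic profile: 3 mm radius vessel, 0.5 m/s peak, 32 gates
    R, v_max, gates = 3e-3, 0.5, 32
    radii = [R * i / (gates - 1) for i in range(gates)]
    velocities = [v_max * (1 - (r / R) ** 2) for r in radii]
    q_m3_s = volume_flow(radii, velocities)
    q_ml_min = q_m3_s * 1e6 * 60  # convert m^3/s to mL/min
    ```

    For a parabolic profile the analytic value is Q = v_max·πR²/2 ≈ 424 mL/min here, so the discretized estimate can be checked against it.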

  11. Integration of classification methods for improvement of land-cover map accuracy

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Hua; Skidmore, A. K.; Van Oosten, H.

    Classifiers, which are used to recognize patterns in remotely sensing images, have complementary capabilities. This study tested whether integrating the results from individual classifiers improves classification accuracy. Two integrated approaches were undertaken. One approach used a consensus builder (CSB) to adjust classification output in the case of disagreement in classification between maximum likelihood classifier (MLC), expert system classifier (ESC) and neural network classifier (NNC). If the output classes for each individual pixel differed, the producer accuracies for each class were compared and the class with the highest producer accuracy was assigned to the pixel. The consensus builder approach resulted in a classification with a slightly lower accuracy (72%) when compared with the neural network classifier (74%), but it did significantly better than the maximum likelihood (62%) and expert system (59%) classifiers. The second approach integrated a rule-based expert system classifier and a neural network classifier. The output of the expert system classifier was used as one additional new input layer of the neural network classifier. A postprocessing using the producer accuracies and some additional expert rules was applied to improve the output of the integrated classifier. This is a relatively new approach in the field of image processing. This second approach produced the highest overall accuracy (80%). Thus, incorporating correct, complete and relevant expert knowledge in a neural network classifier leads to higher classification accuracy.
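    The consensus-builder rule described above (keep the agreed class; on full disagreement pick the output whose class has the highest producer accuracy) can be sketched per pixel. The class names and producer accuracies below are hypothetical.

    ```python
    def consensus_builder(mlc, esc, nnc, producer_accuracy):
        """Per-pixel consensus of three classifier outputs: majority vote,
        falling back to the class with the highest producer accuracy
        when all three disagree."""
        votes = [mlc, esc, nnc]
        for cls in votes:
            if votes.count(cls) >= 2:
                return cls
        return max(votes, key=lambda cls: producer_accuracy[cls])

    # hypothetical per-class producer accuracies from the error matrix
    pa = {"forest": 0.91, "water": 0.95, "urban": 0.78, "grass": 0.70}

    label = consensus_builder("forest", "forest", "urban", pa)  # majority wins
    tie = consensus_builder("forest", "water", "grass", pa)     # accuracy decides
    ```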

  12. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique.

    PubMed

    Velazquez, L; Castro-Palacio, J C

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010); J. Stat. Mech. (2010)] have proposed a methodology to extend Monte Carlo algorithms that are based on canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review about ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistograms method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989)]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L×L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site qL during the occurrence of temperature-driven phase transition of this model, whose size dependence seems to follow a power law qL(L)∝(1/L)z with exponent z≃0.26±0.02. Discussed is the compatibility of these results with the continuous character of temperature-driven phase transition when L→+∞. PMID:25871247
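    The reweighting idea underlying the Ferrenberg–Swendsen approach can be illustrated in its simplest single-histogram form (the paper discusses the multihistogram generalization): samples drawn at one inverse temperature are reweighted to estimate an observable at a nearby one. The energy samples below are synthetic Gaussian draws, not Potts-model data.

    ```python
    import numpy as np

    def reweight_mean_energy(energies, beta0, beta):
        """Single-histogram reweighting: estimate <E> at inverse temperature
        beta from canonical samples drawn at beta0, weighting each sample
        by exp(-(beta - beta0) * E)."""
        e = np.asarray(energies, dtype=float)
        logw = -(beta - beta0) * e
        logw -= logw.max()  # stabilize the exponentials
        w = np.exp(logw)
        return float(np.sum(w * e) / np.sum(w))

    # hypothetical energy-per-site samples from a simulation at beta0 = 1.0
    rng = np.random.default_rng(1)
    samples = rng.normal(loc=-1.4, scale=0.05, size=20000)
    e_nearby = reweight_mean_energy(samples, beta0=1.0, beta=1.02)
    ```

    Reweighting is only reliable for small shifts in beta, where the sampled histogram still overlaps the target distribution.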

  13. A method of increasing test range and accuracy of bioindicators: Geobacillus stearothermophilus spores.

    PubMed

    Lundahl, Gunnel

    2003-01-01

    Spores of Geobacillus stearothermophilus are very sensitive to changes in temperature. When validating sterilizing processes, the most common bioindicator (BI) is spores of Geobacillus stearothermophilus ATCC12980 and ATCC7953 with about 10(6) spores /BI and a D121-value of about 2 minutes in water. Because these spores of Geobacillus stearothermophilus do not survive at an F0-value above 12 minutes, it has not been possible to evaluate the agreement between the biological F-value (F(BIO)) and physical measurements (time and temperature) when the physical F0-value exceeds that limit. However, it has been proven that glycerin substantially increases the heat resistance of the spores, and it is possible to utilize that property when manufacturing BIs suitable for use in processes with longer sterilization times or high temperatures (above 121 degrees C). By the method described, it is possible to make use of the sensitivity and durability of Geobacillus stearothermophilus' spores, since glycerin increases both the test range and the accuracy. Experience from years of development and validation work with the use of the highly sensitive glycerin-water-spore-suspension sensor (GWS-sensor) is reported. Validation of the steam sterilization process at high temperature has been possible with the use of GWS-sensors. It has also been shown that the spores in suspension keep their characteristics for a period of 19 months when stored cold (8 degrees C). PMID:14558699
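    The physical F0-value against which the biological F(BIO) is compared accumulates equivalent sterilization minutes at 121.1 °C with z = 10 °C. A minimal sketch over a hypothetical time-temperature profile (1-minute samples; the cycle itself is invented for illustration):

    ```python
    def f0_value(temps_c, dt_min, t_ref=121.1, z=10.0):
        """Physical F0: equivalent minutes at 121.1 C accumulated over a
        time-temperature profile sampled every dt_min minutes."""
        return sum(dt_min * 10 ** ((t - t_ref) / z) for t in temps_c)

    # hypothetical cycle: ramp up, 12 min hold at 122 C, ramp down
    profile = [100, 110, 118, 121] + [122] * 12 + [118, 110, 100]
    f0 = f0_value(profile, dt_min=1.0)
    ```

    Only temperatures near or above the reference contribute appreciably, which is why the hold phase dominates the accumulated F0.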

  14. A Method for Evaluating Timeliness and Accuracy of Volitional Motor Responses to Vibrotactile Stimuli.

    PubMed

    Leineweber, Matthew J; Shi, Sam; Andrysek, Jan

    2016-01-01

    Artificial sensory feedback (ASF) systems can be used to compensate for lost proprioception in individuals with lower-limb impairments. Effective design of these ASF systems requires an in-depth understanding of how the parameters of a specific feedback mechanism affect user perception and reaction to stimuli. This article presents a method for applying vibrotactile stimuli to human participants and measuring their response. Rotating-mass vibratory motors are placed at pre-defined locations on the participant's thigh and controlled through custom hardware and software. The speed and accuracy of participants' volitional responses to vibrotactile stimuli are measured for researcher-specified combinations of motor placement and vibration frequency. While the protocol described here uses push-buttons to collect a simple binary response to the vibrotactile stimuli, the technique can be extended to other response mechanisms using inertial measurement units or pressure sensors to measure joint angle and weight-bearing ratios, respectively. Similarly, the application of vibrotactile stimuli can be explored for body segments other than the thigh. PMID:27585366

  15. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique

    NASA Astrophysics Data System (ADS)

    Velazquez, L.; Castro-Palacio, J. C.

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L×L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site qL during the occurrence of the temperature-driven phase transition of this model, whose size dependence seems to follow a power law qL(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. Discussed is the compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞.
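
    The reported size dependence qL(L) ∝ (1/L)^z is a power law, so the exponent can be recovered by a straight-line fit in log-log space. A minimal sketch on synthetic data generated with an assumed amplitude and the quoted exponent (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic latent-heat values from q_L = a * (1/L)^z with assumed
# a = 0.5 and z = 0.26, plus 1% multiplicative noise
L = np.array([16.0, 32.0, 64.0, 128.0, 256.0])
q = 0.5 * (1.0 / L) ** 0.26 * np.exp(rng.normal(0.0, 0.01, L.size))

# The power law is linear in log space: ln q = ln a - z * ln L
slope, intercept = np.polyfit(np.log(L), np.log(q), 1)
z_fit = -slope
print(f"fitted exponent z ≈ {z_fit:.3f}")
```

    The fitted slope recovers the input exponent to within the noise level, which is the usual check that a finite-size scaling law of this form is consistent with the data.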

  16. Comparison of reconstruction methods and quantitative accuracy in Siemens Inveon PET scanner

    NASA Astrophysics Data System (ADS)

    Ram Yu, A.; Kim, Jin Su; Kang, Joo Hyun; Moo Lim, Sang

    2015-04-01

    PET reconstruction is key to the quantification of PET data. To our knowledge, no comparative study of reconstruction methods has been performed to date. In this study, we compared reconstruction methods with various filters in terms of their spatial resolution, non-uniformities (NU), recovery coefficients (RCs), and spillover ratios (SORs). In addition, the linearity between measured and true radioactivity concentrations was assessed. A Siemens Inveon PET scanner was used in this study. Spatial resolution was measured according to the NEMA standard using a 1 mm3 18F point source. Image quality was assessed in terms of NU, RC and SOR. To measure the effect of reconstruction algorithms and filters, data were reconstructed using FBP, the 3D reprojection algorithm (3DRP), ordered-subset expectation maximization 2D (OSEM 2D), and maximum a posteriori (MAP) with various filters or smoothing factors (β). To assess the linearity of reconstructed radioactivity, an image quality phantom filled with 18F was reconstructed using FBP, OSEM and MAP (β = 1.5 & 5 × 10^-5). The highest achievable volumetric resolution was 2.31 mm3, and the highest RCs were obtained when OSEM 2D was used. SOR was 4.87% for air and 3.97% for water when OSEM 2D reconstruction was used. The measured radioactivity of the reconstructed image was proportional to the injected radioactivity below 16 MBq/ml when the FBP or OSEM 2D reconstruction methods were used. By contrast, when the MAP reconstruction method was used, the activity of the reconstructed image increased proportionally regardless of the amount of injected radioactivity. When OSEM 2D or FBP was used, the measured radioactivity concentration was reduced by 53% compared with the true injected radioactivity above 16 MBq/ml. The OSEM 2D reconstruction method provides the highest achievable volumetric resolution and highest RC among all the tested methods and yields a linear relation between the measured and true radioactivity.
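
    The image-quality metrics named above reduce to simple ratios; a sketch using definitions in the style of NEMA NU 4 (RC as measured over true concentration in a hot region, SOR as apparent activity in a cold insert relative to the hot background), with illustrative numbers rather than the study's measurements:

```python
def recovery_coefficient(measured, true):
    """RC: measured activity concentration in a hot region / true value."""
    return measured / true

def spillover_ratio(cold_mean, hot_background_mean):
    """SOR: apparent counts in a cold (air or water) insert relative to
    the surrounding hot background."""
    return cold_mean / hot_background_mean

# Illustrative concentrations (kBq/ml), not the paper's measurements
print(f"RC  = {recovery_coefficient(81.0, 100.0):.2f}")   # → RC  = 0.81
print(f"SOR = {spillover_ratio(3.97, 100.0):.4f}")        # → SOR = 0.0397
```

    An RC below 1 reflects partial-volume losses in small structures; an SOR above 0 reflects counts spilling into regions that contain no activity.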

  17. Method of mucociliary clearance assessment

    NASA Astrophysics Data System (ADS)

    Danilova, Tatiana V.; Manturov, Alexey O.; Ermakov, Igor Y.; Mareev, Gleb O.; Mareev, Oleg V.

    2016-04-01

    The article is devoted to the study of mucociliary clearance in the nasal cavity and paranasal sinuses using modern techniques of digital video recording and processing. We describe the setup and software for this method and the results of our research. Using a microscope and a digital camera we provide a good method to study mucociliary clearance, and with dedicated software we are able to measure characteristics of the nasal mucosa and its main function.

  18. Pharmacokinetic digital phantoms for accuracy assessment of image-based dosimetry in 177Lu-DOTATATE peptide receptor radionuclide therapy

    NASA Astrophysics Data System (ADS)

    Brolin, Gustav; Gustafsson, Johan; Ljungberg, Michael; Sjögreen Gleisner, Katarina

    2015-08-01

    Patient-specific image-based dosimetry is considered to be a useful tool to limit toxicity associated with peptide receptor radionuclide therapy (PRRT). To facilitate the establishment and reliability of absorbed-dose response relationships, it is essential to assess the accuracy of dosimetry in clinically realistic scenarios. To this end, we developed pharmacokinetic digital phantoms corresponding to patients treated with 177Lu-DOTATATE. Three individual voxel phantoms from the XCAT population were generated and assigned a dynamic activity distribution based on a compartment model for 177Lu-DOTATATE, designed specifically for this purpose. The compartment model was fitted to time-activity data from 10 patients, primarily acquired using quantitative scintillation camera imaging. S values for all phantom source-target combinations were calculated based on Monte-Carlo simulations. Combining the S values and time-activity curves, reference values of the absorbed dose to the phantom kidneys, liver, spleen, tumours and whole-body were calculated. The phantoms were used in a virtual dosimetry study, using Monte-Carlo simulated gamma-camera images and conventional methods for absorbed-dose calculations. The characteristics of the SPECT and WB planar images were found to well represent those of real patient images, capturing the difficulties present in image-based dosimetry. The phantoms are expected to be useful for further studies and optimisation of clinical dosimetry in 177Lu PRRT.
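
    The dosimetry chain described (time-activity curve → cumulated activity → absorbed dose via S values) can be sketched in a few lines. The sample times, activities, decay-only tail assumption and the S value below are illustrative stand-ins, not the phantoms' data:

```python
import numpy as np

# Illustrative time-activity samples for one source organ (not patient data)
t_h = np.array([1.0, 4.0, 24.0, 96.0, 168.0])     # hours post injection
a_mbq = np.array([120.0, 95.0, 60.0, 15.0, 4.0])  # activity, MBq

# Cumulated activity: trapezoidal rule over the samples, plus a tail that
# assumes physical decay only (177Lu half-life ~159.5 h) after the last point
lam = np.log(2.0) / 159.5
tia = float(np.sum(0.5 * (a_mbq[1:] + a_mbq[:-1]) * np.diff(t_h))
            + a_mbq[-1] / lam)                    # MBq·h

# Hypothetical S value for this source-to-kidney pair (mGy per MBq·h)
S_kidney = 0.05
dose_mgy = tia * S_kidney
print(f"cumulated activity ≈ {tia:.0f} MBq·h, kidney dose ≈ {dose_mgy:.0f} mGy")
```

    In the paper's setup this sum runs over all source-target combinations, with S values obtained from Monte-Carlo simulation rather than assumed.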

  19. Accuracy Assessment of a Canal-Tunnel 3d Model by Comparing Photogrammetry and Laserscanning Recording Techniques

    NASA Astrophysics Data System (ADS)

    Charbonnier, P.; Chavant, P.; Foucher, P.; Muzet, V.; Prybyla, D.; Perrin, T.; Grussenmeyer, P.; Guillemin, S.

    2013-07-01

    With recent developments in the field of technology and computer science, conventional surveying methods are being supplanted by laser scanning and digital photogrammetry. These two different surveying techniques generate 3-D models of real-world objects or structures. In this paper, we consider the application of terrestrial laser scanning (TLS) and photogrammetry to the surveying of canal tunnels. The inspection of such structures requires time, safe access, specific processing and professional operators. Therefore, a French partnership proposes to develop dedicated equipment based on image processing for the visual inspection of canal tunnels. A 3D model of the vault and side walls of the tunnel is constructed from images recorded onboard a boat moving inside the tunnel. To assess the accuracy of this photogrammetric model (PM), a reference model is built using static TLS. We address here the problem of comparing the resulting point clouds. Difficulties arise because of the highly differentiated acquisition processes, which result in very different point densities. We propose a new tool designed to compare differences between pairs of point clouds or surfaces (triangulated meshes). Moreover, dealing with huge datasets requires the implementation of appropriate structures and algorithms. Several techniques are presented: point-to-point, cloud-to-cloud and cloud-to-mesh. In addition, farthest-point resampling, an octree structure and the Hausdorff distance are adopted and described. Experimental results are shown for a 475 m long canal tunnel located in France.
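
    The cloud-to-cloud comparison via the Hausdorff distance can be sketched as follows. The synthetic "vault" clouds and the noise level are assumptions, and the brute-force nearest-neighbour search stands in for the octree-accelerated implementation a real tool would use:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins for the TLS reference cloud and the photogrammetric model:
# points on a half-cylinder "tunnel vault", the PM copy perturbed by ~5 mm noise
theta = rng.uniform(0.0, np.pi, 500)
along = rng.uniform(0.0, 10.0, 500)
tls = np.column_stack([along, 3.0 * np.cos(theta), 3.0 * np.sin(theta)])
pm = tls + rng.normal(0.0, 0.005, tls.shape)

def directed_hd(A, B):
    """Directed Hausdorff distance: the worst nearest-neighbour gap from A to B
    (brute force; large clouds need spatial indexing such as an octree)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return float(np.sqrt(d2.min(axis=1).max()))

# Symmetric Hausdorff distance is the max of the two directed distances
hd = max(directed_hd(tls, pm), directed_hd(pm, tls))
print(f"symmetric Hausdorff distance ≈ {hd * 1000:.1f} mm")
```

    The quadratic memory cost of the brute-force distance matrix is exactly why the paper resorts to resampling and octree structures for real tunnel-scale datasets.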

  20. Accuracy of a Low-Cost Novel Computer-Vision Dynamic Movement Assessment: Potential Limitations and Future Directions

    NASA Astrophysics Data System (ADS)

    McGroarty, M.; Giblin, S.; Meldrum, D.; Wetterling, F.

    2016-04-01

    The aim of the study was to perform a preliminary validation of a low-cost markerless motion capture system (CAPTURE) against an industry gold standard (Vicon). Measurements of knee valgus and flexion during the performance of a countermovement jump (CMJ) were compared between CAPTURE and Vicon. After correction algorithms were applied to the raw CAPTURE data, acceptable levels of accuracy and precision were achieved. The knee flexion angle measured for three trials using CAPTURE deviated by -3.8° ± 3° (left) and 1.7° ± 2.8° (right) compared to Vicon. The findings suggest that low-cost markerless motion capture has the potential to provide an objective method for assessing lower-limb jump and landing mechanics in an applied sports setting. Furthermore, the outcome of the study warrants future research to examine more fully the potential implications of the use of low-cost markerless motion capture in the evaluation of dynamic movement for injury prevention.

  1. Pharmacokinetic digital phantoms for accuracy assessment of image-based dosimetry in (177)Lu-DOTATATE peptide receptor radionuclide therapy.

    PubMed

    Brolin, Gustav; Gustafsson, Johan; Ljungberg, Michael; Gleisner, Katarina Sjögreen

    2015-08-01

    Patient-specific image-based dosimetry is considered to be a useful tool to limit toxicity associated with peptide receptor radionuclide therapy (PRRT). To facilitate the establishment and reliability of absorbed-dose response relationships, it is essential to assess the accuracy of dosimetry in clinically realistic scenarios. To this end, we developed pharmacokinetic digital phantoms corresponding to patients treated with (177)Lu-DOTATATE. Three individual voxel phantoms from the XCAT population were generated and assigned a dynamic activity distribution based on a compartment model for (177)Lu-DOTATATE, designed specifically for this purpose. The compartment model was fitted to time-activity data from 10 patients, primarily acquired using quantitative scintillation camera imaging. S values for all phantom source-target combinations were calculated based on Monte-Carlo simulations. Combining the S values and time-activity curves, reference values of the absorbed dose to the phantom kidneys, liver, spleen, tumours and whole-body were calculated. The phantoms were used in a virtual dosimetry study, using Monte-Carlo simulated gamma-camera images and conventional methods for absorbed-dose calculations. The characteristics of the SPECT and WB planar images were found to well represent those of real patient images, capturing the difficulties present in image-based dosimetry. The phantoms are expected to be useful for further studies and optimisation of clinical dosimetry in (177)Lu PRRT. PMID:26215085

  2. Spatial and Temporal Analysis on the Distribution of Active Radio-Frequency Identification (RFID) Tracking Accuracy with the Kriging Method

    PubMed Central

    Liu, Xin; Shannon, Jeremy; Voun, Howard; Truijens, Martijn; Chi, Hung-Lin; Wang, Xiangyu

    2014-01-01

    Radio frequency identification (RFID) technology has already been applied in a number of areas to facilitate the tracking process. However, the insufficient tracking accuracy of RFID is one of the problems that impedes its wider application. Previous studies focus on examining the accuracy of RFID at discrete points, thereby leaving the tracking accuracy of the areas between the observed points unpredictable. In this study, spatial and temporal analysis is applied to interpolate the continuous distribution of RFID tracking accuracy based on the Kriging method. An implementation trial was conducted in the loading and docking area in front of a warehouse to validate this approach. The results show that weak signal areas can be easily identified by the approach developed in the study. The optimum distance between two RFID readers and the effect of the sudden removal of readers are also presented by analysing the spatial and temporal variation of RFID tracking accuracy. This study reveals the correlation between the testing time and the stability of RFID tracking accuracy. Experimental results show that the proposed approach can be used to assist the RFID system setup process to increase tracking accuracy. PMID:25356648
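
    As a sketch of the interpolation idea, here is a minimal ordinary-kriging implementation with an exponential variogram. The test points, accuracy values and variogram parameters are assumptions for illustration, not the paper's measurements:

```python
import numpy as np

def ordinary_kriging(xy, z, grid_xy, sill=25.0, vrange=10.0, nugget=0.1):
    """Minimal ordinary kriging with an exponential variogram model
    gamma(h) = nugget + sill * (1 - exp(-h / vrange)). Illustrative only."""
    def gamma(h):
        return nugget + sill * (1.0 - np.exp(-h / vrange))

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # Ordinary-kriging system: sample semivariances plus the unbiasedness
    # constraint (weights sum to 1) via a Lagrange multiplier row/column
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = gamma(d)
    K[n, n] = 0.0

    preds = np.empty(len(grid_xy))
    for i, g in enumerate(grid_xy):
        rhs = np.append(gamma(np.linalg.norm(xy - g, axis=1)), 1.0)
        w = np.linalg.solve(K, rhs)
        preds[i] = w[:n] @ z
    return preds

# Hypothetical tracking-accuracy readings (%) at discrete test points (m)
pts = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], float)
acc = np.array([92.0, 88.0, 90.0, 85.0, 96.0])
est = ordinary_kriging(pts, acc, np.array([[5.0, 5.0], [2.5, 2.5]]))
print(est)
```

    Because the variogram is evaluated consistently in the system matrix and the right-hand side, the predictor honours the data exactly at sampled locations, while locations between readers get a weighted estimate, which is what makes the weak-signal areas between observed points visible.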

  4. Three-dimensional accuracy of different correction methods for cast implant bars

    PubMed Central

    Kwon, Ji-Yung; Kim, Chang-Whe; Lim, Young-Jun; Kwon, Ho-Beom

    2014-01-01

    PURPOSE The aim of the present study was to evaluate the accuracy of three techniques for the correction of cast implant bars. MATERIALS AND METHODS Thirty cast implant bars were fabricated on a metal master model. All cast implant bars were sectioned at 5 mm from the left gold cylinder using a disk of 0.3 mm thickness, and each group of ten specimens was then corrected by gas-air torch soldering, laser welding, or an additional casting technique. Three-dimensional evaluation including horizontal, vertical, and twisting measurements was based on measurement and comparison of (1) gap distances of the right abutment replica-gold cylinder interface at the buccal, distal, and lingual sides, (2) changes in bar length, and (3) axis angle changes of the right gold cylinders at the post-correction measurement step for the three groups, using contact and non-contact coordinate measuring machines. One-way analysis of variance (ANOVA) and paired t-tests were performed at the 5% significance level. RESULTS Gap distances of the cast implant bars after the correction procedure showed no statistically significant differences among groups. Changes in bar length between the pre-casting and post-correction measurements were statistically significant among groups. Axis angle changes of the right gold cylinders were not statistically significant among groups. CONCLUSION There was no statistically significant difference among the three techniques in horizontal, vertical and axial errors. However, the gas-air torch soldering technique showed the most consistent and accurate trend in the correction of implant bar error, whereas the laser welding technique showed a large mean and standard deviation in the vertical and twisting measurements and might be a technique-sensitive method. PMID:24605205

  5. Accuracy of forced oscillation technique to assess lung function in geriatric COPD population

    PubMed Central

    Tse, Hoi Nam; Tseng, Cee Zhung Steven; Wong, King Ying; Yee, Kwok Sang; Ng, Lai Yun

    2016-01-01

    Introduction Performing lung function tests in geriatric patients has never been an easy task. With well-established evidence indicating impaired small airway function and air trapping in geriatric COPD patients, utilizing the forced oscillation technique (FOT) as a supplementary tool may aid in the assessment of lung function in this population. Aims To study the use of FOT in the assessment of airflow limitation and air trapping in geriatric COPD patients. Study design A cross-sectional study in a public hospital in Hong Kong. ClinicalTrials.gov ID: NCT01553812. Methods Geriatric patients who had spirometry-diagnosed COPD were recruited, and both FOT and plethysmography were performed. "Resistance" and "reactance" FOT parameters were compared to plethysmography for the assessment of air trapping and airflow limitation. Results In total, 158 COPD subjects with a mean age of 71.9±0.7 years and a percentage of forced expiratory volume in 1 second of 53.4±1.7% were recruited. FOT values had a good correlation (r=0.4–0.7) with spirometric data. In general, X values (reactance) were better than R values (resistance), showing a higher correlation with spirometric data for airflow limitation (r=0.07–0.49 vs 0.61–0.67), small airway function (r=0.05–0.48 vs 0.56–0.65), and lung volume (r=0.12–0.29 vs 0.43–0.49). In addition, resonance frequency (Fres) and frequency dependence (FDep) could well identify the severe type (percentage of forced expiratory volume in 1 second <50%) of COPD with high sensitivity (0.76, 0.71) and specificity (0.72, 0.64) (area under the curve: 0.8 and 0.77, respectively). Moreover, X values could stratify different severities of air trapping, while R values could not. Conclusion FOT may act as a simple and accurate tool in the assessment of the severity of airflow limitation, small and central airway function, and air trapping in geriatric COPD patients who have difficulties performing conventional lung function tests. Moreover, reactance

  6. Methods of airway resistance assessment.

    PubMed

    Urbankowski, Tomasz; Przybyłowski, Tadeusz

    2016-01-01

    Airway resistance is the ratio of driving pressure to the rate of airflow in the airways. The most frequently used methods to measure airway resistance are whole-body plethysmography, the interrupter technique and the forced oscillation technique. All these methods allow resistance to be measured during respiration at a level close to tidal volume; they do not require forced breathing manoeuvres or deep breathing during measurement. The most popular method for measuring airway resistance is whole-body plethysmography. The results of plethysmography include, among others, the following parameters: airway resistance (Raw), airway conductance (Gaw), specific airway resistance (sRaw) and specific airway conductance (sGaw). The interrupter technique is based on the assumption that at the moment of airway occlusion, air pressure in the mouth is equal to the alveolar pressure. In the forced oscillation technique (FOT), airway resistance is calculated based on the changes in pressure and flow caused by air vibration. The methods for the measurement of airway resistance described in the present paper seem to be a useful alternative to the most common lung function test, spirometry. The target group in which these methods may be widely used comprises particularly those patients who are unable to perform spirometry. PMID:27238174
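
    The parameters listed above are simple ratios of pressure, flow and lung volume: Raw is driving pressure over flow, Gaw its reciprocal, and the "specific" variants scale by thoracic gas volume. A minimal sketch with assumed illustrative values, not reference data:

```python
def airway_resistance(delta_p_kpa, flow_l_per_s):
    """Raw: driving pressure divided by airflow (kPa·s/L)."""
    return delta_p_kpa / flow_l_per_s

raw = airway_resistance(0.15, 0.5)   # Raw = 0.3 kPa·s/L
gaw = 1.0 / raw                      # airway conductance Gaw
tgv = 3.2                            # thoracic gas volume (L), assumed
sraw = raw * tgv                     # specific airway resistance sRaw
sgaw = gaw / tgv                     # specific airway conductance sGaw
print(round(raw, 3), round(sraw, 3), round(sgaw, 3))
```

    Reporting the specific quantities removes the dependence of resistance on the lung volume at which it was measured, which is why plethysmography reports both forms.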

  7. Long-term deflections of reinforced concrete elements: accuracy analysis of predictions by different methods

    NASA Astrophysics Data System (ADS)

    Gribniak, Viktor; Bacinskas, Darius; Kacianauskas, Rimantas; Kaklauskas, Gintaris; Torres, Lluis

    2013-08-01

    Long-term deflection response of reinforced concrete flexural members is influenced by the interaction of complex physical phenomena, such as concrete creep, shrinkage and cracking, which makes their prediction difficult. A number of approaches are proposed by design codes with different degrees of simplification and accuracy. This paper statistically investigates the accuracy of long-term deflection predictions made by some of the most widely used design codes (Eurocode 2, ACI 318, ACI 435, and the new Russian code SP 52-101) and a numerical technique proposed by the authors. The accuracy is analyzed using test data of 322 reinforced concrete members from 27 test programs reported in the literature. The predictions of each technique are discussed, and a comparative analysis is made showing the influence of different parameters, such as sustained loading duration, compressive strength of concrete, loading intensity and reinforcement ratio, on the prediction accuracy.

  8. Assessing the Accuracy of a Child's Account of Sexual Abuse: A Case Study.

    ERIC Educational Resources Information Center

    Orbach, Yael; Lamb, Michael E.

    1999-01-01

    This study examined the accuracy of a 13-year-old girl's account of a sexually abusive incident. Information given by the victim was compared with an audiotaped record. Over 50% of information reported by the victim was corroborated by the audio record and 64% was confirmed by more than one source. (Author/CR)

  9. An Accuracy--Response Time Capacity Assessment Function that Measures Performance against Standard Parallel Predictions

    ERIC Educational Resources Information Center

    Townsend, James T.; Altieri, Nicholas

    2012-01-01

    Measures of human efficiency under increases in mental workload or attentional limitations are vital in studying human perception, cognition, and action. Assays of efficiency as workload changes have typically been confined to either reaction times (RTs) or accuracy alone. Within the realm of RTs, a nonparametric measure called the "workload…

  10. Comparative analysis of Worldview-2 and Landsat 8 for coastal saltmarsh mapping accuracy assessment

    NASA Astrophysics Data System (ADS)

    Rasel, Sikdar M. M.; Chang, Hsing-Chung; Diti, Israt Jahan; Ralph, Tim; Saintilan, Neil

    2016-05-01

    Coastal saltmarshes and their constituent components and processes are of scientific interest due to their ecological functions and services. However, the heterogeneity and seasonal dynamics of the coastal wetland system make it challenging to map saltmarshes with remotely sensed data. This study selected four important saltmarsh species, Phragmites australis, Sporobolus virginicus, Ficinia nodosa and Schoenoplectus sp., as well as mangrove and pine tree species, Avicennia and Casuarina sp., respectively. High spatial resolution Worldview-2 data and coarse spatial resolution Landsat 8 imagery were selected for this study. Among the selected vegetation types, some patches were fragmented and close to the spatial resolution of the Worldview-2 data, while some patches were larger than the 30 m resolution of the Landsat 8 data. This study aims to test the effectiveness of different classifiers for imagery with various spatial and spectral resolutions. Three different classification algorithms, the Maximum Likelihood Classifier (MLC), Support Vector Machine (SVM) and Artificial Neural Network (ANN), were tested and compared in terms of the mapping accuracy of the results derived from both satellite images. For the Worldview-2 data, SVM gave the highest overall accuracy (92.12%, kappa = 0.90), followed by ANN (90.82%, kappa = 0.89) and MLC (90.55%, kappa = 0.88). For the Landsat 8 data, MLC (82.04%) showed the highest classification accuracy compared to SVM (77.31%) and ANN (75.23%). The producer accuracies of the classification results are also presented in the paper.
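
    Overall accuracy and kappa figures like those quoted above are computed from a confusion matrix: overall accuracy is the trace over the total, and Cohen's kappa corrects that agreement for chance. The matrix below is hypothetical, not the study's error matrix:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2  # chance agreement
    return po, (po - pe) / (1.0 - pe)

# Hypothetical 3-class confusion matrix (e.g. three saltmarsh classes)
cm = [[50, 2, 3],
      [4, 45, 1],
      [2, 3, 40]]
oa, kappa = accuracy_and_kappa(cm)
print(f"overall accuracy = {oa:.2%}, kappa = {kappa:.3f}")
```

    Producer's accuracy for each class is the corresponding diagonal entry divided by its row sum, which is the per-class view the paper also reports.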

  11. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1984

    EPA Science Inventory

    Precision and accuracy data obtained from state and local agencies during 1984 are summarized and compared to data reported earlier for the period 1981-1983. A continual improvement in the completeness of the data is evident. Improvement is also evident in the size of the precisi...

  12. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1983

    EPA Science Inventory

    Precision and accuracy data obtained from State and local agencies during 1983 are summarized and evaluated. Some comparisons are made with the results previously reported for 1981 and 1982 to determine the indication of any trends. Some trends indicated improvement in the comple...

  13. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1985

    EPA Science Inventory

    Precision and accuracy data obtained from State and local agencies during 1985 are summarized and evaluated. Some comparisons are made with the results reported for prior years to determine any trends. Some trends indicated continued improvement in the completeness of reporting o...

  14. Accuracy, Confidence, and Calibration: How Young Children and Adults Assess Credibility

    ERIC Educational Resources Information Center

    Tenney, Elizabeth R.; Small, Jenna E.; Kondrad, Robyn L.; Jaswal, Vikram K.; Spellman, Barbara A.

    2011-01-01

    Do children and adults use the same cues to judge whether someone is a reliable source of information? In 4 experiments, we investigated whether children (ages 5 and 6) and adults used information regarding accuracy, confidence, and calibration (i.e., how well an informant's confidence predicts the likelihood of being correct) to judge informants'…

  15. Interrater Reliability Estimators Commonly Used in Scoring Language Assessments: A Monte Carlo Investigation of Estimator Accuracy

    ERIC Educational Resources Information Center

    Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J.

    2014-01-01

    Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…
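
    A toy version of such a Monte Carlo check for the Pearson estimator, under a classical true-score model with an assumed true interrater reliability of 0.8 (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
true_rel = 0.8                             # assumed true interrater reliability
n_examinees, n_reps = 200, 500
err_sd = np.sqrt(1.0 / true_rel - 1.0)     # error SD giving correlation 0.8

estimates = []
for _ in range(n_reps):
    # Classical model: each rater's score = shared true score + own error
    t = rng.normal(0.0, 1.0, n_examinees)
    rater1 = t + rng.normal(0.0, err_sd, n_examinees)
    rater2 = t + rng.normal(0.0, err_sd, n_examinees)
    estimates.append(np.corrcoef(rater1, rater2)[0, 1])

print(f"mean Pearson estimate: {np.mean(estimates):.3f} (true value 0.8)")
```

    Varying the number of scale categories (by discretizing the scores) or the true reliability in such a simulation is the basic design the abstract describes for comparing estimator accuracy.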

  16. Portable device to assess dynamic accuracy of global positioning systems (GPS) receivers used in agricultural aircraft

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A device was designed to test the dynamic accuracy of Global Positioning System (GPS) receivers used in aerial vehicles. The system works by directing a sun-reflected light beam from the ground to the aircraft using mirrors. A photodetector is placed pointing downward from the aircraft and circuitry...

  17. Assessing the Accuracy and Consistency of Language Proficiency Classification under Competing Measurement Models

    ERIC Educational Resources Information Center

    Zhang, Bo

    2010-01-01

    This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…

  18. ASSESSMENT OF THE PRECISION AND ACCURACY OF SAM AND MFC MICROCOSMS EXPOSED TO TOXICANTS

    EPA Science Inventory

    The results of 30 mixed flank culture (MFC) and four standardized aquatic microcosm (SAM) microcosm experiments were used to describe the precision and accuracy of these two protocols. oefficients of variation (CV) for chemicals measurements (DO,pH) were generally less than 7%, f...

  19. Applying Signal-Detection Theory to the Study of Observer Accuracy and Bias in Behavioral Assessment

    ERIC Educational Resources Information Center

    Lerman, Dorothea C.; Tetreault, Allison; Hovanetz, Alyson; Bellaci, Emily; Miller, Jonathan; Karp, Hilary; Mahmood, Angela; Strobel, Maggie; Mullen, Shelley; Keyl, Alice; Toupard, Alexis

    2010-01-01

    We evaluated the feasibility and utility of a laboratory model for examining observer accuracy within the framework of signal-detection theory (SDT). Sixty-one individuals collected data on aggression while viewing videotaped segments of simulated teacher-child interactions. The purpose of Experiment 1 was to determine if brief feedback and…
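
    Within SDT, observer performance separates into sensitivity (d') and response bias (criterion c), both derived from hit and false-alarm rates via the inverse normal CDF. A sketch with hypothetical rates, not the experiment's data:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance SDT: sensitivity d' = z(H) - z(FA) and
    response criterion c = -(z(H) + z(FA)) / 2."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2.0

# Hypothetical observer: 80% hits on intervals containing aggression,
# 10% false alarms on intervals without aggression
d_prime, criterion = sdt_measures(0.80, 0.10)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```

    A positive c indicates a conservative observer who under-reports the behavior; d' captures accuracy independently of that bias, which is what makes SDT useful for studying observer feedback effects.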

  20. Comparative accuracy of different risk scores in assessing cardiovascular risk in Indians: A study in patients with first myocardial infarction

    PubMed Central

    Bansal, Manish; Kasliwal, Ravi R.; Trehan, Naresh

    2014-01-01

    Background Although a number of risk assessment models are available for estimating the 10-year risk of cardiovascular (CV) events in patients requiring primary prevention of CV disease, the predictive accuracy of the contemporary risk models has not been adequately evaluated in Indians. Methods 149 patients [mean age 59.4 ± 10.6 years; 123 (82.6%) males] without prior CV disease and presenting with acute myocardial infarction (MI) were included. The four clinically most relevant risk assessment models [Framingham Risk Score (RiskFRS), World Health Organization risk prediction charts (RiskWHO), American College of Cardiology/American Heart Association pooled cohort equations (RiskACC/AHA) and the 3rd Joint British Societies' risk calculator (RiskJBS)] were applied to estimate what would have been their predicted 10-year risk of CV events if they had presented just prior to suffering the acute MI. Results RiskWHO provided the lowest risk estimates, with 86.6% of patients estimated to have <20% 10-year risk. In comparison, RiskFRS and RiskACC/AHA returned higher risk estimates (61.7% and 69.8% with risk <20%, respectively; p values <0.001 for comparison with RiskWHO). However, RiskJBS identified the highest proportion of the patients as being at high risk (only 44.1% at <20% risk, p values <0.01 for comparison with all the other 3 risk scores). Conclusions This is the first study to show that in Indian patients presenting with acute MI, RiskJBS is likely to identify the largest proportion of the patients as ‘high-risk’ as compared to RiskWHO, RiskFRS and RiskACC/AHA. However, large-scale prospective studies are needed to confirm these findings. PMID:25634388

  1. The Effects of Various Item Selection Methods on the Classification Accuracy and Classification Consistency of Criterion-Referenced Instruments.

    ERIC Educational Resources Information Center

    Smith, Douglas U.

    This study examined the effects of certain item selection methods on the classification accuracy and classification consistency of criterion-referenced instruments. Three item response data sets, representing varying situations of instructional effectiveness, were simulated. Five methods of item selection were then applied to each data set for the…

  2. Accuracy Assessment of Direct Georeferencing for Photogrammetric Applications on Small Unmanned Aerial Platforms

    NASA Astrophysics Data System (ADS)

    Mian, O.; Lutes, J.; Lipa, G.; Hutton, J. J.; Gavelle, E.; Borghini, S.

    2016-03-01

    Microdrones md4-1000 quad-rotor VTOL UAV. The Sony A7R and each lens combination were focused and calibrated terrestrially using the Applanix camera calibration facility, and then integrated with the APX-15 GNSS-Inertial system using a custom mount specifically designed for UAV applications. The mount is constructed in such a way as to maintain the stability of both the interior orientation and the IMU boresight calibration over shock and vibration, thus turning the Sony A7R into a metric imaging solution. In July and August 2015, Applanix and Avyon carried out a series of test flights of this system. The goal of these test flights was to assess the performance of the DMS APX-15 direct georeferencing system under various scenarios. Furthermore, an examination of how the DMS APX-15 can be used to produce accurate map products without the use of ground control points and with reduced sidelap was also carried out. Reducing the sidelap for survey missions performed by small UAVs can significantly increase the mapping productivity of these platforms. The area mapped during the first flight campaign was a 250m x 300m block and a 775m long railway corridor in a rural setting in Ontario, Canada. The second area mapped was a 450m long corridor over a dam known as Fryer Dam (over the Richelieu River in Quebec, Canada). Several ground control points were distributed within both test areas. The flight over the block area included 8 North-South lines and 1 cross strip flown at 80m AGL, resulting in a ~1cm GSD. The flight over the railway corridor included 2 North-South lines also flown at 80m AGL. Similarly, the flight over the dam corridor included 2 North-South lines flown at 50m AGL. The focus of this paper was to analyse the results obtained from the two corridors. Test results from both areas were processed using Direct Georeferencing techniques, and then compared for accuracy against the known positions of ground control points in each test area.
The GNSS-Inertial data collected by the APX-15 was

  4. Accuracy of a Mitral Valve Segmentation Method Using J-Splines for Real-Time 3D Echocardiography Data

    PubMed Central

    Siefert, Andrew W.; Icenogle, David A.; Rabbah, Jean-Pierre; Saikrishnan, Neelakantan; Rossignac, Jarek; Lerakis, Stamatios; Yoganathan, Ajit P.

    2013-01-01

    Patient-specific models of the heart’s mitral valve (MV) exhibit potential for surgical planning. While advances in 3D echocardiography (3DE) have provided adequate resolution to extract MV leaflet geometry, no study has quantitatively assessed the accuracy of their modeled leaflets versus a ground-truth standard for temporal frames beyond systolic closure or for differing valvular dysfunctions. The accuracy of a 3DE-based segmentation methodology based on J-splines was assessed for porcine MVs with known 4D leaflet coordinates within a pulsatile simulator during closure, peak closure, and opening for a control, prolapsed, and billowing MV model. For all time points, the mean distance errors between the segmented models and ground-truth data were 0.40±0.32 mm, 0.52±0.51 mm, and 0.74±0.69 mm for the control, flail, and billowing models, respectively. For all models and temporal frames, 95% of the distance errors were below 1.64 mm. When applied to a patient data set, segmentation was able to confirm a regurgitant orifice and post-operative improvements in coaptation. This study provides an experimental platform for assessing the accuracy of an MV segmentation methodology at phases beyond systolic closure and for differing MV dysfunctions. Results demonstrate the accuracy of an MV segmentation methodology for the development of future surgical planning tools. PMID:23460042
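
A minimal sketch of the mean ± SD distance-error metric quoted above. This is an illustrative point-to-nearest-point approximation, not the paper's actual surface-distance algorithm; the 2D coordinates are hypothetical.

```python
import math

def nearest_distance(p, points):
    """Distance from point p to the closest ground-truth point."""
    return min(math.dist(p, q) for q in points)

def distance_error_stats(model_pts, truth_pts):
    """Mean and SD (population) of point-to-nearest-point distances."""
    d = [nearest_distance(p, truth_pts) for p in model_pts]
    mean = sum(d) / len(d)
    sd = math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))
    return mean, sd

# Hypothetical 2D example (the paper works with 3D/4D leaflet coordinates)
model = [(0.0, 0.0), (1.0, 0.0)]
truth = [(0.0, 0.0), (1.0, 1.0)]
print(distance_error_stats(model, truth))   # -> (0.5, 0.5)
```

In practice a symmetric variant (model-to-truth and truth-to-model) is often reported to avoid under-counting missed regions.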

  5. Accuracy assessment of airborne photogrammetrically derived high-resolution digital elevation models in a high mountain environment

    NASA Astrophysics Data System (ADS)

    Müller, Johann; Gärtner-Roer, Isabelle; Thee, Patrick; Ginzler, Christian

    2014-12-01

    High-resolution digital elevation models (DEMs) generated by airborne remote sensing are frequently used to analyze landform structures (monotemporal) and geomorphological processes (multitemporal) in remote areas or areas of extreme terrain. In order to assess and quantify such structures and processes it is necessary to know the absolute accuracy of the available DEMs. This study assesses the absolute vertical accuracy of DEMs generated by the High Resolution Stereo Camera-Airborne (HRSC-A), the Leica Airborne Digital Sensors 40/80 (ADS40 and ADS80) and the analogue camera system RC30. The study area is located in the Turtmann valley, Valais, Switzerland, a glacially and periglacially formed hanging valley stretching from 2400 m to 3300 m a.s.l. The photogrammetrically derived DEMs are evaluated against geodetic field measurements and an airborne laser scan (ALS). Traditional and robust global and local accuracy measures are used to describe the vertical quality of the DEMs, which show a non-Gaussian distribution of errors. The results show that all four sensor systems produce DEMs of similar accuracy despite their different setups and generations. The ADS40 and ADS80 (both with a ground sampling distance of 0.50 m) generate the most accurate DEMs in complex high mountain areas, with an RMSE of 0.8 m and an NMAD of 0.6 m. They also show the highest accuracy relative to flying height (0.14‰). The pushbroom scanning system HRSC-A produces an RMSE of 1.03 m and an NMAD of 0.83 m (0.21‰ of the flying height and 10 times the ground sampling distance). The analogue camera system RC30 produces DEMs with a vertical accuracy of 1.30 m RMSE and 0.83 m NMAD (0.17‰ of the flying height and two times the ground sampling distance). It is also shown that the performance of the DEMs strongly depends on the inclination of the terrain. The RMSE of areas with an inclination below 40° is better than 1 m. In more inclined areas the error and outlier occurrence
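
The RMSE and NMAD measures used above can be sketched in a few lines. This is a minimal pure-Python version under the usual NMAD definition (1.4826 × median absolute deviation, which matches the standard deviation for normally distributed errors); the residual values are invented for illustration.

```python
import math

def rmse(errors):
    """Root mean square error of DEM-minus-reference height differences."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def _median(sorted_vals):
    n = len(sorted_vals)
    mid = n // 2
    return sorted_vals[mid] if n % 2 else 0.5 * (sorted_vals[mid - 1] + sorted_vals[mid])

def nmad(errors):
    """Normalized median absolute deviation: robust against outliers."""
    med = _median(sorted(errors))
    mad = _median(sorted(abs(e - med) for e in errors))
    return 1.4826 * mad

residuals = [0.1, -0.2, 0.3, -0.1, 2.5]   # hypothetical height residuals (m)
print(rmse(residuals), nmad(residuals))    # RMSE inflated by the 2.5 m outlier
```

The example shows why both are reported: a single blunder dominates the RMSE while barely moving the NMAD.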

  6. Assessing the accuracy of software predictions of mammalian and microbial metabolites

    EPA Science Inventory

    New chemical development and hazard assessments benefit from accurate predictions of mammalian and microbial metabolites. Fourteen biotransformation libraries encoded in eight software packages that predict metabolite structures were assessed for their sensitivity (proportion of ...

  7. Measurement Precision and Accuracy of the Centre Location of AN Ellipse by Weighted Centroid Method

    NASA Astrophysics Data System (ADS)

    Matsuoka, R.

    2015-03-01

    Circular targets are often utilized in photogrammetry, and a circle on a plane is projected as an ellipse onto an oblique image. This paper reports an analysis conducted in order to investigate the measurement precision and accuracy of the centre location of an ellipse on a digital image by an intensity-weighted centroid method. An ellipse with a semi-major axis a, a semi-minor axis b, and a rotation angle θ of the major axis is investigated. In the study an equivalent radius r = (a²cos²θ + b²sin²θ)^(1/2) is adopted as a measure of the dimension of an ellipse. First an analytical expression representing a measurement error (ϵx, ϵy) is obtained. Then variances Vx of ϵx are obtained at 1/256 pixel intervals from 0.5 to 100 pixels in r by numerical integration, because a formula representing Vx cannot be obtained analytically when r > 0.5. The results of the numerical integration indicate that Vx would oscillate in a 0.5 pixel cycle in r, and that Vx excluding the oscillation component would be inversely proportional to the cube of r. Finally an effective approximate formula for Vx from 0.5 to 100 pixels in r is obtained by least squares adjustment. The obtained formula is a fractional expression whose numerator is a fifth-degree polynomial of {r − 0.5×int(2r)}, expressing the oscillation component, and whose denominator is the cube of r. Here int(x) is the function that returns the integer part of the value x. The coefficients of the fifth-degree polynomial of the numerator can be expressed by a quadratic polynomial of {0.5×int(2r) + 0.25}.
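
A toy version of the two quantities in this abstract. The paper's analysis is analytical; this sketch only shows how an intensity-weighted centroid and the equivalent radius r are computed, on a made-up 3×3 intensity grid.

```python
import math

def equivalent_radius(a, b, theta):
    """r = sqrt(a^2 cos^2(theta) + b^2 sin^2(theta)), per the abstract."""
    return math.sqrt((a * math.cos(theta)) ** 2 + (b * math.sin(theta)) ** 2)

def weighted_centroid(image):
    """image: 2D list of intensities; returns (x, y) centroid in pixels."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    return sx / total, sy / total

img = [[0, 1, 0],
       [1, 4, 1],
       [0, 1, 0]]                        # hypothetical symmetric target
print(weighted_centroid(img))            # -> (1.0, 1.0)
print(equivalent_radius(3.0, 2.0, 0.0))  # -> 3.0 (major axis along x)
```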

  8. Accuracy Assessment of Geometrical Elements for Setting-Out in Horizontal Plane of Conveying Chambers at the Bauxite Mine "KOSTURI" Srebrenica

    NASA Astrophysics Data System (ADS)

    Milutinović, Aleksandar; Ganić, Aleksandar; Tokalić, Rade

    2014-03-01

    Setting-out of objects on the exploitation field of a mine, both in surface and in underground mining, is governed by the specified setting-out accuracy of the reference points that best define the spatial position of the projected object. In order to achieve the specified accuracy, it is necessary to perform an a priori accuracy assessment of the parameters to be used in setting-out. The a priori assessment serves to verify the quality of the geometrical setting-out elements specified in the layout, to define the accuracy for setting-out of geometrical elements, and to select the setting-out method as well as the type and class of instruments and tools to be applied in order to achieve the predefined accuracy. The paper presents the accuracy assessment of geometrical elements for setting-out of the main haul gallery, the haul downcast and the helical conveying downcasts, shaped as an inclined helix, in the horizontal plane, using the example of the underground bauxite mine »Kosturi«, Srebrenica.

  9. Accuracy in Student Self-Assessment: Directions and Cautions for Research

    ERIC Educational Resources Information Center

    Brown, Gavin T. L.; Andrade, Heidi L.; Chen, Fei

    2015-01-01

    Student self-assessment is a central component of current conceptions of formative and classroom assessment. The research on self-assessment has focused on its efficacy in promoting both academic achievement and self-regulated learning, with little concern for issues of validity. Because reliability of testing is considered a sine qua non for the…

  10. 12 CFR 620.3 - Accuracy of reports and assessment of internal control over financial reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... assessment of internal control over financial reporting. Annual reports of those institutions with over $1... assessing the effectiveness of the institution's internal control over financial reporting. The assessment... the prior fiscal year) must disclose any material change(s) in the internal control over...

  11. Accuracy assessment of the global ionospheric model over the Southern Ocean based on dynamic observation

    NASA Astrophysics Data System (ADS)

    Luo, xiaowen

    2016-04-01

    The global ionospheric model based on the reference stations of the Global Navigation Satellite System (GNSS) of the International GNSS Service is presently the most commonly used product of the global ionosphere. It is very important to comprehensively analyze and evaluate the accuracy and reliability of the model for the reasonable use of this kind of ionospheric product. Unlike the traditional performance evaluation of the global ionospheric model, which is based on observation data from ground-based static reference stations, this work conducted a preliminary evaluation and analysis of the model using dynamic observation data collected across different latitudes over the Southern Ocean. The validation results showed that the accuracy of the global ionospheric model over the Southern Ocean is about 5 TECu, and that the model deviates from the measured ionospheric TEC by about -0.6 TECu.

  12. An accuracy-response time capacity assessment function that measures performance against standard parallel predictions.

    PubMed

    Townsend, James T; Altieri, Nicholas

    2012-07-01

    Measures of human efficiency under increases in mental workload or attentional limitations are vital in studying human perception, cognition, and action. Assays of efficiency as workload changes have typically been confined to either reaction times (RTs) or accuracy alone. Within the realm of RTs, a nonparametric measure called the workload capacity coefficient has been employed in many studies (Townsend & Nozawa, 1995). However, the contribution of correct versus incorrect responses has been unavailable in that context. A nonparametric statistic that is capable of simultaneously taking into account accuracy as well as RTs would be highly useful. This theoretical study develops such a tool for two important decisional stopping rules. Preliminary data from a simple visual identification study illustrate one potential application. PMID:22775497

  13. An assessment of accuracy, error, and conflict with support values from genome-scale phylogenetic data.

    PubMed

    Taylor, Derek J; Piel, William H

    2004-08-01

    Despite the importance of molecular phylogenetics, few of its assumptions have been tested with real data. It is commonly assumed that nonparametric bootstrap values are an underestimate of the actual support, Bayesian posterior probabilities are an overestimate of the actual support, and among-gene phylogenetic conflict is low. We directly tested these assumptions by using a well-supported yeast reference tree. We found that bootstrap values were not significantly different from accuracy. Bayesian support values were, however, significant overestimates of accuracy but still had low false-positive error rates (0% to 2.8%) at the highest values (>99%). Although we found evidence for a branch-length bias contributing to conflict, there was little evidence for widespread, strongly supported among-gene conflict from bootstraps. The results demonstrate that caution is warranted concerning conclusions of conflict based on the assumption of underestimation for support values in real data. PMID:15140947

  14. Experimental assessment of the accuracy of predicting attenuation-function moduli in the LF and MF ranges

    NASA Astrophysics Data System (ADS)

    Pertel, M. I.; Pylaev, A. A.; Shteinberg, A. A.

    The present study examines the feasibility and accuracy of predicting attenuation-function moduli in the LF and MF ranges of the radio spectrum, using as an example a portion of the European region of the USSR that is flat but geoelectrically complex and heavily populated. The proposed method for calculating the wave-propagation parameters and for compiling maps of geoelectric sections of the underlying surface has been verified experimentally; prediction accuracies of 1-1.5 dB and 1.5-4 dB were achieved in the LF and MF ranges, respectively.

  15. Reproducibility and accuracy of body composition assessments in mice by dual energy x-ray absorptiometry and time domain nuclear magnetic resonance

    PubMed Central

    Halldorsdottir, Solveig; Carmody, Jill; Boozer, Carol N.; Leduc, Charles A.; Leibel, Rudolph L.

    2011-01-01

    Objective To assess the accuracy and reproducibility of dual-energy X-ray absorptiometry (DXA; PIXImus™) and time domain nuclear magnetic resonance (TD-NMR; Bruker Optics) for the measurement of body composition of lean and obese mice. Subjects and measurements Thirty lean and obese mice (body weight range 19–67 g) were studied. Coefficients of variation for repeated (× 4) DXA and NMR scans of mice were calculated to assess reproducibility. Accuracy was assessed by comparing DXA and NMR results of ten mice to chemical carcass analyses. Accuracy of the respective techniques was also assessed by comparing DXA and NMR results obtained with ground meat samples to chemical analyses. Repeated scans of 10–25 gram samples were performed to test the sensitivity of the DXA and NMR methods to variation in sample mass. Results In mice, DXA and NMR reproducibility measures were similar for fat tissue mass (FTM) (DXA coefficient of variation [CV] = 2.3%; NMR CV = 2.8%) (P = 0.47), while reproducibility of lean tissue mass (LTM) estimates was better for DXA (1.0%) than NMR (2.2%). Regarding accuracy, in mice, DXA overestimated (vs chemical composition) LTM (+1.7 ± 1.3 g [SD], ~8%, P < 0.001) as well as FTM (+2.0 ± 1.2 g, ~46%, P < 0.001). NMR estimated LTM and FTM virtually identically to chemical composition analysis (LTM: −0.05 ± 0.5 g, ~0.2%, P = 0.79; FTM: +0.02 ± 0.7 g, ~15%, P = 0.93). DXA- and NMR-determined LTM and FTM measurements were highly correlated with the corresponding chemical analyses (r² = 0.92 and r² = 0.99 for DXA LTM and FTM, respectively; r² = 0.99 and r² = 0.99 for NMR LTM and FTM, respectively). Sample mass did not affect accuracy in assessing chemical composition of small ground meat samples by either DXA or NMR. Conclusion DXA and NMR provide comparable levels of reproducibility in measurements of body composition of lean and obese mice. While DXA and NMR measures are highly correlated with chemical analysis measures, DXA consistently overestimates LTM
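
The reproducibility figures in this record are coefficients of variation over repeated scans; a minimal sketch using the sample standard deviation, with hypothetical repeated fat-mass readings:

```python
import math

def cv_percent(values):
    """Coefficient of variation = sample SD / mean * 100%."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return 100.0 * math.sqrt(var) / mean

fat_mass_scans = [10.2, 10.4, 10.1, 10.3]  # hypothetical repeated FTM (g)
print(round(cv_percent(fat_mass_scans), 2))
```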

  16. Accuracy assessment of high frequency 3D ultrasound for digital impression-taking of prepared teeth

    NASA Astrophysics Data System (ADS)

    Heger, Stefan; Vollborn, Thorsten; Tinschert, Joachim; Wolfart, Stefan; Radermacher, Klaus

    2013-03-01

    Silicone-based impression-taking of prepared teeth followed by plaster casting is well-established but potentially less reliable, error-prone and inefficient, particularly in combination with emerging techniques like computer-aided design and manufacturing (CAD/CAM) of dental prostheses. Intra-oral optical scanners for digital impression-taking have been introduced, but until now some drawbacks still exist. Because optical waves can hardly penetrate liquids or soft tissues, sub-gingival preparations still need to be uncovered invasively prior to scanning. High frequency ultrasound (HFUS) based micro-scanning has recently been investigated as an alternative to optical intra-oral scanning. Ultrasound is less sensitive to oral fluids and in principle able to penetrate gingiva without invasive exposure of sub-gingival preparations. Nevertheless, spatial resolution as well as digitization accuracy of an ultrasound-based micro-scanning system remains a critical parameter, because the ultrasound wavelength in water-like media such as gingiva is typically larger than that of optical waves. In this contribution, the in-vitro accuracy of ultrasound-based micro-scanning for tooth geometry reconstruction is investigated and compared to its extra-oral optical counterpart. In order to increase the spatial resolution of the system, 2nd harmonic frequencies from a mechanically driven focused single-element transducer were separated, and corresponding 3D surface models were calculated for both the fundamental and the 2nd harmonics. Measurements on phantoms, model teeth and human teeth were carried out to evaluate spatial resolution and surface detection accuracy. Comparison of optical and ultrasound digital impression-taking indicates that, in terms of accuracy, ultrasound-based tooth digitization can be an alternative to optical impression-taking.

  17. Future dedicated Venus-SGG flight mission: Accuracy assessment and performance analysis

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Hsu, Houtse; Zhong, Min; Yun, Meijuan

    2016-01-01

    This study concentrates principally on the systematic requirements analysis for the future dedicated Venus-SGG (spacecraft gravity gradiometry) flight mission in China, in respect of the matching measurement accuracies of the spacecraft-based scientific instruments and the orbital parameters of the spacecraft. Firstly, we created and proved the single and combined analytical error models of the cumulative Venusian geoid height as influenced by the gravity gradient error of the spacecraft-borne atom-interferometer gravity gradiometer (AIGG) and by the orbital position and velocity errors tracked by the deep space network (DSN) on the Earth station. Secondly, weighing the advantages and disadvantages of the electrostatically suspended gravity gradiometer, the superconducting gravity gradiometer and the AIGG, the ultra-high-precision spacecraft-borne AIGG is well placed to make a significant contribution to globally mapping the Venusian gravitational field and modeling the geoid with unprecedented accuracy and spatial resolution. Finally, the future dedicated Venus-SGG spacecraft should adopt the optimal matching accuracy indices consisting of 3 × 10⁻¹³/s² in gravity gradient, 10 m in orbital position and 8 × 10⁻⁴ m/s in orbital velocity, and the preferred orbital parameters comprising an orbital altitude of 300 ± 50 km, an observation time of 60 months and a sampling interval of 1 s.

  18. A Comparative Analysis of Diagnostic Accuracy of Focused Assessment With Sonography for Trauma Performed by Emergency Medicine and Radiology Residents

    PubMed Central

    Zamani, Majid; Masoumi, Babak; Esmailian, Mehrdad; Habibi, Amin; Khazaei, Mehdi; Mohammadi Esfahani, Mohammad

    2015-01-01

    Background: Focused assessment with sonography in trauma (FAST) is a method for prompt detection of abdominal free fluid in patients with abdominal trauma. Objectives: This study was conducted to compare the diagnostic accuracy of FAST performed by emergency medicine residents (EMRs) and radiology residents (RRs) in detecting peritoneal free fluids. Patients and Methods: Patients triaged in the emergency department with blunt abdominal trauma, high-energy trauma, and multiple traumas underwent a FAST examination by EMRs and RRs with the same techniques to obtain the standard views. Ultrasound findings for free fluid in the peritoneal cavity for each patient (positive/negative) were compared with the results of computed tomography, operative exploration, or observation as the final outcome. Results: A total of 138 patients were included in the final analysis. Good diagnostic agreement was noted between the results of FAST scans performed by EMRs and RRs (κ = 0.701, P < 0.001), between the results of EMR-performed FAST and the final outcome (κ = 0.830, P < 0.001), and between the results of RR-performed FAST and the final outcome (κ = 0.795, P < 0.001). No significant differences were noted between EMR- and RR-performed FASTs regarding sensitivity (84.6% vs 84.6%), specificity (98.4% vs 97.6%), positive predictive value (84.6% vs 84.6%), and negative predictive value (98.4% vs 98.4%). Conclusions: Trained EMRs, like their fellow RRs, are able to perform FAST scans with high diagnostic value in patients with blunt abdominal trauma. PMID:26756009
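
All of the agreement and diagnostic-accuracy statistics in this record derive from a 2×2 table of test result versus final outcome. A sketch with hypothetical counts chosen to roughly reproduce the reported sensitivity and specificity (the study's actual table is not given here):

```python
def diagnostics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and Cohen's kappa from a 2x2 table."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    po = (tp + tn) / n                                   # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, npv, kappa

# Hypothetical counts, roughly consistent with 84.6% sensitivity / 98.4% specificity
sens, spec, ppv, npv, kappa = diagnostics(tp=11, fp=2, fn=2, tn=123)
print(round(sens, 3), round(spec, 3), round(kappa, 3))
```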

  19. Accuracy of methods for detecting an irregular pulse and suspected atrial fibrillation: A systematic review and meta-analysis

    PubMed Central

    Coleman, Tim; Lewis, Sarah; Heneghan, Carl; Jones, Matthew

    2015-01-01

    Background Pulse palpation has been recommended as the first step of screening to detect atrial fibrillation. We aimed to determine and compare the accuracy of different methods for detecting pulse irregularities caused by atrial fibrillation. Methods We systematically searched MEDLINE, EMBASE, CINAHL and LILACS until 16 March 2015. Two reviewers identified eligible studies, extracted data and appraised quality using the QUADAS-2 instrument. Meta-analysis, using the bivariate hierarchical random effects method, determined average operating points for sensitivities, specificities, positive and negative likelihood ratios (PLR, NLR); we constructed summary receiver operating characteristic plots. Results Twenty-one studies investigated 39 interventions (n = 15,129 pulse assessments) for detecting atrial fibrillation. Compared to 12-lead electrocardiography (ECG) diagnosed atrial fibrillation, blood pressure monitors (BPMs; seven interventions) and non-12-lead ECGs (20 interventions) had the greatest accuracy for detecting pulse irregularities attributable to atrial fibrillation (BPM: sensitivity 0.98 (95% confidence interval (CI) 0.92–1.00), specificity 0.92 (95% CI 0.88–0.95), PLR 12.1 (95% CI 8.2–17.8) and NLR 0.02 (95% CI 0.00–0.09); non-12-lead ECG: sensitivity 0.91 (95% CI 0.86–0.94), specificity 0.95 (95% CI 0.92–0.97), PLR 20.1 (95% CI 12–33.7), NLR 0.09 (95% CI 0.06–0.14)). There were similar findings for smartphone applications (six interventions), although these studies were small in size. The sensitivity and specificity of pulse palpation (six interventions) were 0.92 (95% CI 0.85–0.96) and 0.82 (95% CI 0.76–0.88), respectively (PLR 5.2 (95% CI 3.8–7.2), NLR 0.1 (95% CI 0.05–0.18)). Conclusions BPMs and non-12-lead ECGs were most accurate for detecting pulse irregularities caused by atrial fibrillation; other technologies may therefore be pragmatic alternatives to pulse palpation for the first step of atrial fibrillation screening.
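
The PLR and NLR values above follow directly from sensitivity and specificity. A minimal sketch using the pulse-palpation figures from the abstract; the small difference from the reported PLR of 5.2 is a rounding effect, since the meta-analysis works with unrounded pooled estimates.

```python
def likelihood_ratios(sensitivity, specificity):
    """PLR = sens / (1 - spec); NLR = (1 - sens) / spec."""
    plr = sensitivity / (1 - specificity)
    nlr = (1 - sensitivity) / specificity
    return plr, nlr

# Pulse palpation figures from the abstract: sensitivity 0.92, specificity 0.82
plr, nlr = likelihood_ratios(0.92, 0.82)
print(round(plr, 1), round(nlr, 2))   # -> 5.1 0.1
```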

  20. Assessment of User Home Location Geoinference Methods

    SciTech Connect

    Harrison, Joshua J.; Bell, Eric B.; Corley, Courtney D.; Dowling, Chase P.; Cowell, Andrew J.

    2015-05-29

    This study presents an assessment of multiple approaches to determining the home and/or other important locations of a Twitter user. In this study, we present a unique approach to the problem of geotagged data sparsity in social media when performing geoinferencing tasks. Given the sparsity of explicitly geotagged Twitter data, the ability to perform accurate and reliable user geolocation from a limited number of geotagged posts has proven to be quite useful. In our survey, we achieved accuracy rates of over 86% in matching Twitter user profile locations with their inferred home locations derived from geotagged posts.

  1. Constraining OCT with Knowledge of Device Design Enables High Accuracy Hemodynamic Assessment of Endovascular Implants

    PubMed Central

    Brown, Jonathan; Lopes, Augusto C.; Kunio, Mie; Kolachalama, Vijaya B.; Edelman, Elazer R.

    2016-01-01

    Background Stacking cross-sectional intravascular images permits three-dimensional rendering of endovascular implants, yet introduces between-frame uncertainties that limit characterization of device placement and the hemodynamic microenvironment. In a porcine coronary stent model, we demonstrate enhanced OCT reconstruction with preservation of between-frame features through fusion with angiography and a priori knowledge of stent design. Methods and Results Strut positions were extracted from sequential OCT frames. Reconstruction with standard interpolation generated discontinuous stent structures. By computationally constraining interpolation to known stent skeletons fitted to 3D ‘clouds’ of OCT-Angio-derived struts, implant anatomy was resolved, accurately rendering features from implant diameter and curvature (n = 1 vessel; r² = 0.91 and 0.90, respectively) to individual strut-wall configurations (average displacement error ~15 μm). This framework facilitated hemodynamic simulation (n = 1 vessel), showing the critical importance of accurate anatomic rendering in characterizing both quantitative and basic qualitative flow patterns. Discontinuities with standard approaches systematically introduced noise and bias, poorly capturing regional flow effects. In contrast, the enhanced method preserved multi-scale (local strut to regional stent) flow interactions, demonstrating the impact of regional context in defining the hemodynamic consequence of local deployment errors. Conclusion Fusion of planar angiography and knowledge of device design permits enhanced OCT image analysis of in situ tissue-device interactions. Given emerging interest in simulation-derived hemodynamic assessment as a surrogate measure of biological risk, such fused modalities offer a new window into patient-specific implant environments. PMID:26906566

  2. On the Spatial and Temporal Accuracy of Overset Grid Methods for Moving Body Problems

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1996-01-01

    A study of numerical attributes peculiar to an overset grid approach to unsteady aerodynamics prediction is presented. Attention is focused on the effect of spatial error associated with interpolation of intergrid boundary conditions, and of temporal error associated with explicit update of intergrid boundary points, on overall solution accuracy. A set of numerical experiments is used to verify whether or not the use of simple interpolation for intergrid boundary conditions degrades the formal accuracy of a conventional second-order flow solver, and to quantify the error associated with explicit updating of intergrid boundary points. Test conditions correspond to the transonic regime. The validity of the numerical results presented here is established by comparison with existing numerical results of documented accuracy, and by direct comparison with experimental results.
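
Formal-accuracy checks of the kind described above are commonly done by grid refinement: the observed order is p = log(e_coarse / e_fine) / log(r) for refinement ratio r. A sketch with invented error values (not the paper's data):

```python
import math

def observed_order(e_coarse, e_fine, r=2.0):
    """Observed order of accuracy from errors on two grids refined by ratio r."""
    return math.log(e_coarse / e_fine) / math.log(r)

# Hypothetical discretization errors on successively halved grids
errors = [4.0e-2, 1.0e-2, 2.5e-3]
print(observed_order(errors[0], errors[1]))  # observed order ≈ 2: second-order scheme
print(observed_order(errors[1], errors[2]))  # consistent across refinement levels
```

If simple intergrid interpolation degraded the scheme, the observed order would fall below the solver's formal second order as the grids are refined.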

  3. Assessment methods for the evaluation of vitiligo.

    PubMed

    Alghamdi, K M; Kumar, A; Taïeb, A; Ezzedine, K

    2012-12-01

    There is no standardized method for assessing vitiligo. In this article, we review the literature from 1981 to 2011 on different vitiligo assessment methods. We aim to classify the techniques available for vitiligo assessment as subjective, semi-objective or objective; microscopic or macroscopic; and as based on morphometry or colorimetry. Macroscopic morphological measurements include visual assessment, photography in natural or ultraviolet light, photography with computerized image analysis and tristimulus colorimetry or spectrophotometry. Non-invasive micromorphological methods include confocal laser microscopy (CLM). Subjective methods include clinical evaluation by a dermatologist and a vitiligo disease activity score. Semi-objective methods include the Vitiligo Area Scoring Index (VASI) and point-counting methods. Objective methods include software-based image analysis, tristimulus colorimetry, spectrophotometry and CLM. Morphometry is the measurement of the vitiliginous surface area, whereas colorimetry quantitatively analyses skin colour changes caused by erythema or pigment. Most methods involve morphometry, except for the chromameter method, which assesses colorimetry. Some image analysis software programs can assess both morphometry and colorimetry. The details of these programs (Corel Draw, Image Pro Plus, AutoCad and Photoshop) are discussed in the review. Reflectance confocal microscopy provides real-time images and has great potential for the non-invasive assessment of pigmentary lesions. In conclusion, there is no single best method for assessing vitiligo. This review revealed that VASI, the rule of nine and Wood's lamp are likely to be the best techniques available for assessing the degree of pigmentary lesions and measuring the extent and progression of vitiligo in the clinic and in clinical trials. PMID:22416879

  4. Assessment of mesoscopic particle-based methods in microfluidic geometries

    NASA Astrophysics Data System (ADS)

    Zhao, Tongyang; Wang, Xiaogong; Jiang, Lei; Larson, Ronald G.

    2013-08-01

    We assess the accuracy and efficiency of two particle-based mesoscopic simulation methods, namely, Dissipative Particle Dynamics (DPD) and Stochastic Rotation Dynamics (SRD) for predicting a complex flow in a microfluidic geometry. Since both DPD and SRD use soft or weakly interacting particles to carry momentum, both methods contain unavoidable inertial effects and unphysically high fluid compressibility. To assess these effects, we compare the predictions of DPD and SRD for both an exact Stokes-flow solution and nearly exact solutions at finite Reynolds numbers from the finite element method for flow in a straight channel with periodic slip boundary conditions. This flow represents a periodic electro-osmotic flow, which is a complex flow with an analytical solution for zero Reynolds number. We find that SRD is roughly ten-fold faster than DPD in predicting the flow field, with better accuracy at low Reynolds numbers. However, SRD has more severe problems with compressibility effects than does DPD, which limits the Reynolds numbers attainable in SRD to around 25-50, while DPD can achieve Re higher than this before compressibility effects become too large. However, since the SRD method runs much faster than DPD does, we can afford to enlarge the number of grid cells in SRD to reduce the fluid compressibility at high Reynolds number. Our simulations provide a method to estimate the range of conditions for which SRD or DPD is preferable for mesoscopic simulations.
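    Comparisons like the one described, between a particle-based flow field and an exact zero-Reynolds-number solution, typically reduce to a relative error norm over the velocity field. A hedged sketch (the plane-Poiseuille profile and noise level are invented stand-ins, not actual DPD/SRD output):

```python
import numpy as np

def relative_l2_error(u_sim, u_exact):
    """Relative L2 norm of the difference between simulated and exact velocity profiles."""
    return np.linalg.norm(u_sim - u_exact) / np.linalg.norm(u_exact)

# Analytic plane-Poiseuille profile as the reference solution
y = np.linspace(0.0, 1.0, 51)
u_exact = 4.0 * y * (1.0 - y)                      # u_max = 1 at mid-channel
rng = np.random.default_rng(0)
u_sim = u_exact + rng.normal(0.0, 0.01, y.size)    # hypothetical noisy mesoscopic result
print(relative_l2_error(u_sim, u_exact))
```

    Thermal noise inherent to DPD and SRD means such error norms are usually reported after time-averaging the particle velocity field.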

  5. An Accuracy Evaluation of Unstructured Node-Centred Finite Volume Methods

    NASA Technical Reports Server (NTRS)

    Svard, Magnus; Gong, Jing; Nordstrom, Jan

    2006-01-01

    Node-centred edge-based finite volume approximations are very common in computational fluid dynamics since they are assumed to run on structured, unstructured and even mixed grids. We analyse the accuracy properties of both first and second derivative approximations and conclude that these schemes cannot be used on arbitrary grids as is often assumed. For the Euler equations, first-order accuracy can be obtained if care is taken when constructing the grid. For the Navier-Stokes equations, the grid restrictions are so severe that these finite volume schemes have little advantage over structured finite difference schemes. Our theoretical results are verified through extensive computations.
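    The accuracy loss on arbitrary grids can be illustrated with the standard 3-point second-derivative formula, which is second-order accurate on a uniform grid but degrades when the node spacing varies irregularly. A small sketch (the grids are illustrative, not the paper's test cases):

```python
import numpy as np

def second_deriv_error(x, f, f2):
    """Max error of the 3-point second-derivative formula on an arbitrary 1D grid."""
    hm = x[1:-1] - x[:-2]   # spacing to the left neighbour
    hp = x[2:] - x[1:-1]    # spacing to the right neighbour
    approx = 2.0 * (hm * f(x[2:]) - (hm + hp) * f(x[1:-1]) + hp * f(x[:-2])) \
        / (hm * hp * (hm + hp))
    return np.max(np.abs(approx - f2(x[1:-1])))

f, f2 = np.sin, lambda x: -np.sin(x)
uniform = np.linspace(0.0, np.pi, 201)
rng = np.random.default_rng(1)
# Perturb each node by up to 20% of the spacing to mimic an irregular grid
perturbed = np.sort(uniform + rng.uniform(-0.2, 0.2, uniform.size) * (uniform[1] - uniform[0]))
print(second_deriv_error(uniform, f, f2) < second_deriv_error(perturbed, f, f2))
```

    On the uniform grid the truncation error scales as h²; on the perturbed grid the leading error term is proportional to the local spacing jump, so the error is orders of magnitude larger at the same resolution.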

  6. Analysis of accuracy of digital elevation models created from captured data by digital photogrammetry method

    NASA Astrophysics Data System (ADS)

    Hudec, P.

    2011-12-01

    A digital elevation model (DEM) is an important part of many geoinformatic applications. For the creation of DEM, spatial data collected by geodetic measurements in the field, photogrammetric processing of aerial survey photographs, laser scanning and secondary sources (analogue maps) are used. It is very important from a user's point of view to know the vertical accuracy of a DEM. The article describes the verification of the vertical accuracy of a DEM for the region of Medzibodrožie, which was created using digital photogrammetry for the purposes of water resources management and modeling and resolving flood cases based on geodetic measurements in the field.
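    Vertical DEM accuracy of this kind is conventionally reported as the RMSE of the elevation differences at surveyed check points. A minimal sketch with hypothetical values (the check-point elevations below are invented for illustration):

```python
import numpy as np

def vertical_rmse(dem_z, check_z):
    """RMSE of DEM elevations against geodetically surveyed check-point elevations."""
    dz = np.asarray(dem_z, dtype=float) - np.asarray(check_z, dtype=float)
    return float(np.sqrt(np.mean(dz ** 2)))

# Hypothetical check-point elevations (m) and the DEM values sampled at the same points
check_z = [101.2, 98.7, 103.4, 99.9, 100.5]
dem_z = [101.4, 98.5, 103.9, 99.6, 100.8]
print(round(vertical_rmse(dem_z, check_z), 3))
```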

  7. PLÉIADES Project: Assessment of Georeferencing Accuracy, Image Quality, Pansharpening Performance and DSM/DTM Quality

    NASA Astrophysics Data System (ADS)

    Topan, Hüseyin; Cam, Ali; Özendi, Mustafa; Oruç, Murat; Jacobsen, Karsten; Taşkanat, Talha

    2016-06-01

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first satellites of Europe with sub-meter resolution. Airbus DS (formerly Astrium Geo) runs the MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conduct three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality are investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y and ±1.82 m in Z have been reached in spite of the very narrow angle of convergence by bias-corrected RPC orientation. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. The blur coefficients were between 0.39 and 0.46 for the triplet panchromatic images, indicating satisfying image quality. SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of the Pléiades images. The pansharpened images were generated by various methods, and are validated by most common statistical

  8. Increased Throwing Accuracy Improves Children's Catching Performance in a Ball-Catching Task from the Movement Assessment Battery (MABC-2)

    PubMed Central

    Dirksen, Tim; De Lussanet, Marc H. E.; Zentgraf, Karen; Slupinski, Lena; Wagner, Heiko

    2016-01-01

    The Movement Assessment Battery for Children (MABC-2) is a functional test for identifying deficits in the motor performance of children. The test contains a ball-catching task that requires the children to catch a self-thrown ball with one hand. As the task can be executed with a variety of different catching strategies, it is assumed that task success can also vary considerably. Even though it is not clear whether the performance merely depends on the catching skills or also to some extent on the throwing skills, the MABC-2 takes into account only the movement outcome. Therefore, the purpose of the current study was to examine (1) to what extent the throwing accuracy has an effect on the children's catching performance and (2) to what extent the throwing accuracy influences their choice of catching strategy. In line with the test manual, the children's catching performance was quantified on the basis of the number of correctly caught balls. The throwing accuracy and the catching strategy were quantified by applying a kinematic analysis to the ball's trajectory and the hand movements. Based on linear regression analyses, we then investigated the relation between throwing accuracy, catching performance and catching strategy. The results show that an increased throwing accuracy is significantly correlated with an increased catching performance. Moreover, a higher throwing accuracy is significantly correlated with a longer duration of the hand on the ball's parabola, which indicates that throwing the ball more accurately could enable the children to effectively reduce the requirements on temporal precision. As the children's catching performance and their choice of catching strategy in the ball-catching task of the MABC-2 are substantially determined by their throwing accuracy, the test evaluation should not be based on the movement outcome alone, but should also take into account the children's throwing performance. Our findings could be of particular value for the
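    The regression analyses described can be sketched with a simple least-squares fit and correlation coefficient; the per-child scores below are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical per-child scores: throwing-accuracy index vs. balls caught (out of 10)
throw_acc = np.array([0.42, 0.55, 0.61, 0.70, 0.78, 0.85, 0.90])
caught = np.array([3, 4, 6, 6, 8, 9, 9], dtype=float)

# Least-squares line and Pearson correlation between the two measures
slope, intercept = np.polyfit(throw_acc, caught, 1)
r = np.corrcoef(throw_acc, caught)[0, 1]
print(slope > 0 and r > 0.9)
```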

  9. Accuracy of a hybrid finite-element method for solving a scattering Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Power, Joseph; Rawitscher, George

    2012-12-01

    This hybrid method [finite-element discrete variable representation (FE-DVR)], introduced by Rescigno and McCurdy [Phys. Rev. A 62, 032706 (2000)], uses Lagrange polynomials in each partition, rather than “hat” functions or Gaussian functions. These polynomials are discrete variable representation functions, and they are orthogonal under the Gauss-Lobatto quadrature discretization approximation. Accuracy analyses of this method are performed for the case of a one-dimensional Schrödinger equation with various types of local and nonlocal potentials for scattering boundary conditions. The accuracy is ascertained by a comparison with a spectral Chebyshev integral equation method, accurate to 1:10⁻¹¹. For an accuracy of the phase shift of 1:10⁻⁸, the FE-DVR method is found to be 100 times faster than a sixth-order finite-difference method (Numerov), it is easy to program, and it can routinely achieve an accuracy of better than 1:10⁻⁸ for the numerical examples studied.
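    The Gauss-Lobatto points underpinning the FE-DVR basis can be computed directly: they are the interval endpoints plus the roots of the derivative of the Legendre polynomial. A small sketch of this standard construction (not the authors' code):

```python
import numpy as np

def gauss_lobatto_nodes(n):
    """n Gauss-Lobatto points on [-1, 1]: the endpoints plus the roots of P'_{n-1}."""
    legendre = np.polynomial.legendre.Legendre.basis(n - 1)
    interior = legendre.deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

# For n = 5 the interior nodes are 0 and +/- sqrt(3/7)
nodes = gauss_lobatto_nodes(5)
print(np.round(nodes, 4))
```

    Lagrange polynomials built on these nodes are discretely orthogonal under the Gauss-Lobatto quadrature weights, which is what makes the resulting DVR matrix elements diagonal in the potential.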

  10. Accuracy of a hybrid finite-element method for solving a scattering Schrödinger equation.

    PubMed

    Power, Joseph; Rawitscher, George

    2012-12-01

    This hybrid method [finite-element discrete variable representation (FE-DVR)], introduced by Rescigno and McCurdy [Phys. Rev. A 62, 032706 (2000)], uses Lagrange polynomials in each partition, rather than "hat" functions or Gaussian functions. These polynomials are discrete variable representation functions, and they are orthogonal under the Gauss-Lobatto quadrature discretization approximation. Accuracy analyses of this method are performed for the case of a one-dimensional Schrödinger equation with various types of local and nonlocal potentials for scattering boundary conditions. The accuracy is ascertained by a comparison with a spectral Chebyshev integral equation method, accurate to 1:10⁻¹¹. For an accuracy of the phase shift of 1:10⁻⁸, the FE-DVR method is found to be 100 times faster than a sixth-order finite-difference method (Numerov), it is easy to program, and it can routinely achieve an accuracy of better than 1:10⁻⁸ for the numerical examples studied. PMID:23368078

  11. Methods of assessment of antiepileptic drugs.

    PubMed Central

    Milligan, N; Richens, A

    1981-01-01

    Epilepsy is a symptom with protean manifestations and as such it is a difficult disease in which to carry out a therapeutic trial. The methods available to research workers for the assessment of new antiepileptic drugs are hampered by the fact that epilepsy is a fluctuant condition. Although it is a chronic disorder open to study using cross-over trials and within-patient comparisons, accurate assessment cannot be easily made at any one point in time. Research workers are therefore automatically placed at a time factor disadvantage and this is especially so for those searching for quick methods of evaluating new compounds. The need for a quick and reliable method of assessing a new antiepileptic drug has long been appreciated. This article will discuss the methods currently available and we will begin by considering the most commonly used method of assessment with particular reference to some of the problems involved in conducting a controlled clinical trial in epilepsy. PMID:7272157

  12. 9. Assessments: structure, concepts, and methods.

    PubMed

    2014-05-01

    Assessments are an essential element of proper disaster management. Assessments help to define the damage and changes in functions at the time of the assessment. Assessments are transectional across the longitudinal phases of the disaster. Any intervention should be preceded by an assessment(s). The assessment process is deconstructed into a series of 10 steps: (1) need to know; (2) define the goal(s) and objectives(s) of an assessment; (3) select the appropriate indicators; (4) define the methods to be used for the assessment; (5) develop and test a plan for data collection; (6) train and brief data collectors; (7) gather (collect) the data; (8) synthesise the data and information collected; (9) output information for decision-making; and (10) compare findings with overarching goal and objectives. Steps 7-9 constitute a production process. Understanding this process is essential for identification of points of success and failure in achieving the desired assessment. Assessments require careful selection of indicators. The selected indicators are used throughout the process. Currently, no standardised set of indicators has been validated. Criteria for the composition of assessment teams are provided and common sources of error are discussed. Prior to, during, and following disasters, assessments are directed by the appropriate coordination and control entity. PMID:24785806

  13. Assessing the impacts of precipitation bias on distributed hydrologic model calibration and prediction accuracy

    NASA Astrophysics Data System (ADS)

    Looper, Jonathan P.; Vieux, Baxter E.; Moreno, Maria A.

    2012-02-01

    Physics-based distributed (PBD) hydrologic models predict runoff throughout a basin using the laws of conservation of mass and momentum, and benefit from more accurate and representative precipitation input. Vflo™ is a gridded distributed hydrologic model that predicts runoff and continuously updates soil moisture. As a participating model in the second Distributed Model Intercomparison Project (DMIP2), Vflo™ is applied to the Illinois and Blue River basins in Oklahoma. Model parameters are derived from geospatial data for initial setup, and then adjusted to reproduce the observed flow under continuous time-series simulations and on an event basis. Simulation results demonstrate that certain runoff events are governed by saturation-excess processes, while in others, infiltration-rate-excess processes dominate. Streamflow prediction accuracy is enhanced when multi-sensor precipitation estimates (MPE) are bias-corrected through re-analysis of the MPE provided in the DMIP2 experiment, resulting in gauge-corrected precipitation estimates (GCPE). Model calibration identified a set of parameters that minimized objective functions for errors in runoff volume and instantaneous discharge. Simulated streamflow for the Blue and Illinois River basins has Nash-Sutcliffe efficiency coefficients of 0.61 and 0.68, respectively, for the 1996-2002 period using GCPE. The streamflow prediction accuracy improves by 74% in terms of Nash-Sutcliffe efficiency when GCPE is used during the calibration period. Without model calibration, excellent agreement between hourly simulated and observed discharge is obtained for the Illinois, whereas in the Blue River, adjustment of parameters affecting both saturation-excess and infiltration-rate-excess processes was necessary. During the 1996-2002 period, GCPE input was more important than model calibration for the Blue River, while model calibration proved more important for the Illinois River. During the verification period (2002
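    The Nash-Sutcliffe efficiency used to score these simulations compares the squared model errors against the variance of the observations. A minimal sketch with hypothetical discharges (not DMIP2 data):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 is a perfect fit."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical hourly discharges (m^3/s) for a single runoff event
obs = [12.0, 35.0, 80.0, 64.0, 40.0, 22.0]
sim = [10.0, 30.0, 85.0, 60.0, 42.0, 25.0]
print(round(nash_sutcliffe(obs, sim), 3))
```

    An NSE of 0 means the model is no better than predicting the mean observed flow, so the 0.61-0.68 values reported above indicate substantial but imperfect skill.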

  14. Assessment of Photogrammetric Mapping Accuracy Based on Variation Flying Altitude Using Unmanned Aerial Vehicle

    NASA Astrophysics Data System (ADS)

    Udin, W. S.; Ahmad, A.

    2014-02-01

    Photogrammetry is the earliest technique used to collect data for topographic mapping. A recent development in aerial photogrammetry is the use of large-format digital aerial cameras for producing topographic maps. The aerial photographs can be metric or non-metric imagery. The cost of mapping using aerial photogrammetry is very high, and in certain applications there is a need to map a small area with a limited budget. Owing to the development of technology, small-format aerial photogrammetry has been introduced and offers many advantages. Currently, digital maps can be extracted from digital aerial imagery of a small-format camera mounted on a lightweight platform such as an unmanned aerial vehicle (UAV). This study utilizes a UAV system for large-scale stream mapping. The first objective of this study is to investigate the use of a lightweight rotary-wing UAV for stream mapping based on different flying heights. Aerial photographs were acquired at 60% forward-lap and 30% side-lap specifications. Ground control points and check points were established using the total station technique. The digital camera attached to the UAV was calibrated, and the recovered camera calibration parameters were then used in the digital image processing. The second objective is to determine the accuracy of the photogrammetric output. In this study, photogrammetric outputs such as a three-dimensional (3D) stereomodel, contour lines, a digital elevation model (DEM) and an orthophoto were produced for a small stream 200 m long and 10 m wide. The research output is evaluated for planimetric and vertical accuracy using root mean square error (RMSE). Based on the findings, sub-meter accuracy is achieved and the RMSE value decreases as the flying height increases. The difference is relatively small. Finally, this study shows that the UAV is a very useful platform for obtaining aerial photographs for photogrammetric mapping and other applications.
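    The link between flying height and achievable map detail runs through the ground sampling distance, GSD = pixel size × flying height / focal length at nadir over flat terrain. A small sketch (the camera parameters are hypothetical, not the study's sensor):

```python
def ground_sampling_distance(flying_height_m, focal_length_mm, pixel_size_um):
    """GSD in m/pixel: pixel size times flying height divided by focal length (nadir)."""
    return (pixel_size_um * 1e-6) * flying_height_m / (focal_length_mm * 1e-3)

# Hypothetical small-format camera: 4.7 um pixels behind a 16 mm lens
for h in (40, 60, 80):
    print(h, round(ground_sampling_distance(h, 16.0, 4.7), 4))
```

    Doubling the flying height doubles the GSD, which is why accuracy assessments of UAV mapping are usually reported per altitude.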

  15. Accuracy of plant specimen disease severity estimates: concepts, history, methods, ramifications and challenges for the future

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Knowledge of the extent of the symptoms of a plant disease, generally referred to as severity, is key to both fundamental and applied aspects of plant pathology. Most commonly, severity is obtained visually and the accuracy of each estimate (closeness to the actual value) by individual raters is par...

  16. Presentation accuracy of the web revisited: animation methods in the HTML5 era.

    PubMed

    Garaizar, Pablo; Vadillo, Miguel A; López-de-Ipiña, Diego

    2014-01-01

    Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of using the Web for these objectives. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about presentation accuracy and precision of the Web and extends the study of the accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision in the presentation of visual content in classic web technologies is acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes their use preferable to that of more obsolete technologies. PMID:25302791

  17. Presentation Accuracy of the Web Revisited: Animation Methods in the HTML5 Era

    PubMed Central

    Garaizar, Pablo; Vadillo, Miguel A.; López-de-Ipiña, Diego

    2014-01-01

    Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of using the Web for these objectives. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about presentation accuracy and precision of the Web and extends the study of the accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision in the presentation of visual content in classic web technologies is acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes their use preferable to that of more obsolete technologies. PMID:25302791

  18. Accuracy of some routine methods used in clinical chemistry as judged by isotope dilution-mass spectrometry

    SciTech Connect

    Bjoerkhem, I.; Bergman, A.; Falk, O.; Kallner, A.; Lantto, O.; Svensson, L.; Akerloef, E.; Blomstrand, R.

    1981-05-01

    Serum from patients was pooled, filtered, dispensed, and frozen. This pooled specimen was used for accuracy control in 64 participating laboratories in Sweden. Mean values (state-of-the-art values) were obtained for creatinine, cholesterol, glucose, urea, uric acid, and cortisol. These values were compared with values obtained with highly accurate reference methods based on isotope dilution-mass spectrometry. Differences were marked in the case of determination of creatinine and cortisol. Concerning the other components, the differences between the state-of-the-art value and the values obtained with the reference methods were negligible. Moreover, the glucose oxidase and the oxime methods for determination of glucose and urea were found to give significantly lower values than the hexokinase and urease methods, respectively. Researchers conclude that methods with a higher degree of accuracy are required for routine determination of creatinine and cortisol.

  19. Accuracy assessment of water vapour measurements from in-situ and remote sensing techniques during the DEMEVAP 2011 campaign

    NASA Astrophysics Data System (ADS)

    BOCK, Olivier; Bosser, Pierre; David, Leslie; Thom, Christian; Pelon, Jacques; Hoareau, Christophe; Keckhut, Philippe; Sarkissian, Alain; Pazmino, Andrea; Goutail, Florence; Legain, Dominique; Tzanos, Diane; Bourcy, Thomas; Poujol, Guillaume; Tournois, Guy

    2014-05-01

    The Development of Methodologies for Water Vapour Measurement (DEMEVAP) project aims at assessing and improving humidity sounding techniques and establishing a reference system based on the combination of Raman lidars, ground-based sensors and GPS. Such a system may be used for climate monitoring, radiosonde bias detection and correction, satellite measurement calibration/validation, and mm-level geodetic positioning with Global Navigation Satellite Systems. A field experiment was conducted in September-October 2011 at Observatoire de Haute Provence (OHP). Two Raman lidars (the IGN mobile lidar and the OHP NDACC lidar), a stellar spectrometer (SOPHIE), a differential absorption spectrometer (SAOZ), a sun photometer (AERONET), 5 GPS receivers and 4 types of radiosondes (Vaisala RS92, MODEM M2K2-DC and M10, and Meteolabor Snow-White) participated in the campaign. A total of 26 balloons with multiple radiosondes were flown during 16 clear nights. This paper presents preliminary findings from the analysis of all these datasets. Several classical Raman lidar calibration methods are evaluated, which use either Vaisala RS92 measurements, point capacitive humidity measurements, or GPS integrated water vapour (IWV) measurements. A novel method proposed by Bosser et al. (2010) is also tested. It consists of calibrating the lidar measurements during the GPS data processing. The methods achieve a repeatability of 4-5 %. Changes in the calibration factor of the IGN Raman lidar are evident, which are attributed to frequent optical re-alignments. When the changes are modelled and corrected as a linear function of time, the precision of the calibration factors improves to 2-3 %. However, the variation in the calibration factor, and hence the absolute accuracy, between methods and types of reference data remains at the level of 7 %. The intercomparison of radiosonde measurements shows good agreement between RS92 and Snow-White measurements up to 12 km. An overall dry bias is found in the
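    Modelling the calibration factor as a linear function of time, as described, amounts to a least-squares fit followed by de-trending. A hedged sketch with invented nightly factors (not DEMEVAP data):

```python
import numpy as np

# Hypothetical nightly calibration factors drifting roughly linearly over a campaign
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])        # nights since campaign start
c = np.array([1.00, 1.02, 1.03, 1.06, 1.07, 1.10])  # raw calibration factor

# Fit the drift and measure the scatter left after removing it
slope, intercept = np.polyfit(t, c, 1)
residual_rms = np.sqrt(np.mean((c - (slope * t + intercept)) ** 2))
print(residual_rms < 0.01)
```

    After de-trending, the residual scatter (rather than the drift itself) is what sets the achievable calibration precision, which is how a 4-5 % repeatability can improve to 2-3 %.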

  20. Accuracy assessment of water vapour measurements from in situ and remote sensing techniques during the DEMEVAP 2011 campaign at OHP

    NASA Astrophysics Data System (ADS)

    Bock, O.; Bosser, P.; Bourcy, T.; David, L.; Goutail, F.; Hoareau, C.; Keckhut, P.; Legain, D.; Pazmino, A.; Pelon, J.; Pipis, K.; Poujol, G.; Sarkissian, A.; Thom, C.; Tournois, G.; Tzanos, D.

    2013-10-01

    The Development of Methodologies for Water Vapour Measurement (DEMEVAP) project aims at assessing and improving humidity sounding techniques and establishing a reference system based on the combination of Raman lidars, ground-based sensors and GPS. Such a system may be used for climate monitoring, radiosonde bias detection and correction, satellite measurement calibration/validation, and mm-level geodetic positioning with Global Navigation Satellite Systems. A field experiment was conducted in September-October 2011 at Observat